Sample records for precision subsampling system

  1. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than those derived from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  2. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    USGS Publications Warehouse

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used is calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
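
    The subsampling idea in this record can be sketched without the SAS/ARC-INFO toolchain. The sketch below uses a hypothetical habitat grid and assumes a bivariate-normal location-error distribution; it draws random points around one estimated location and tallies the habitat type under each point.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 100 x 100 habitat grid with 3 cover types (0, 1, 2):
# one vertical band per type, purely for illustration.
habitat = np.zeros((100, 100), dtype=int)
habitat[:, 40:70] = 1
habitat[:, 70:] = 2

def habitat_use(est_xy, sd_xy, n_sub=500):
    """Subsample the error distribution of one estimated location and
    return the proportion of subsample points falling in each habitat type."""
    pts = rng.normal(loc=est_xy, scale=sd_xy, size=(n_sub, 2))
    ij = np.clip(pts.astype(int), 0, 99)        # map coordinates to grid cells
    types = habitat[ij[:, 0], ij[:, 1]]
    return np.bincount(types, minlength=3) / n_sub

props = habitat_use(est_xy=(50.0, 65.0), sd_xy=(8.0, 8.0))
print(props)   # habitat-use proportions for this location; they sum to 1
```

    Repeating this per telemetry fix and averaging the proportions mirrors what the AML does against a real habitat coverage.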

  3. Tank 241-B-108, cores 172 and 173 analytical results for the final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuzum, J.L., Fluor Daniel Hanford

    1997-03-04

    The Data Summary Table (Table 3) included in this report compiles analytical results in compliance with all applicable DQOs. Liquid subsamples that were prepared for analysis by an acid adjustment of the direct subsample are indicated by a `D` in the A column in Table 3. Solid subsamples that were prepared for analysis by performing a fusion digest are indicated by an `F` in the A column in Table 3. Solid subsamples that were prepared for analysis by performing a water digest are indicated by a `W` in the A column of Table 3. Due to poor precision and accuracy in original analysis of both Lower Half Segment 2 of Core 173 and the core composite of Core 173, fusion and water digests were performed for a second time. Precision and accuracy improved with the repreparation of Core 173 Composite. Analyses with the repreparation of Lower Half Segment 2 of Core 173 did not show improvement and suggest sample heterogeneity. Results from both preparations are included in Table 3.

  4. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  5. Test of a mosquito eggshell isolation method and subsampling procedure.

    PubMed

    Turner, P A; Streever, W J

    1997-03-01

    Production of Aedes vigilax, the common salt-marsh mosquito, can be assessed by determining eggshell densities found in soil. In this study, 14 field-collected eggshell samples were used to test a subsampling technique and compare eggshell counts obtained with a flotation method to those obtained by direct examination of sediment (DES). Relative precision of the subsampling technique was assessed by determining the minimum number of subsamples required to estimate the true mean and confidence interval of a sample at a predetermined confidence level. A regression line was fitted to cube-root transformed eggshell counts obtained from flotation and DES and found to be significant (P < 0.001, r² = 0.97). The flotation method allowed processing of samples in about one-third of the time required by DES, but recovered an average of 44% of the eggshells present. Eggshells obtained with the flotation method can be used to predict those from DES using the following equation: DES count = [1.386 × (flotation count)^0.33 - 0.01]^3.
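
    The back-transformation reported here is simple enough to apply directly; a small sketch (the input count is illustrative):

```python
def des_from_flotation(flotation_count):
    """Predict the direct-examination-of-sediment (DES) eggshell count from
    a flotation count via the study's cube-root regression back-transform:
    DES = (1.386 * flotation^0.33 - 0.01)^3
    """
    return (1.386 * flotation_count ** 0.33 - 0.01) ** 3

# A flotation count of 100 predicts roughly 253 eggshells by DES, consistent
# with flotation recovering only ~44% of the eggshells present.
print(round(des_from_flotation(100)))
```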

  6. Double Sampling with Multiple Imputation to Answer Large Sample Meta-Research Questions: Introduction and Illustration by Evaluating Adherence to Two Simple CONSORT Guidelines

    PubMed Central

    Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.

    2015-01-01

    Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050–1.174 vs. DS + MI 1.082–1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135
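
    The double-sampling logic can be illustrated in a few lines. The sketch below is a deliberate simplification: it calibrates the high-throughput labels with a Rogan-Gladen-style misclassification correction estimated from the rigorously rated subsample, rather than the multiple-imputation machinery the authors actually used, and every number in it is made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated corpus: 'truth' plays the rigorous (RHITLO) label, 'cheap' the
# high-throughput (RLOTHI) label with assumed sensitivity/specificity of 0.8.
N, prevalence = 300_000, 0.6
truth = rng.random(N) < prevalence
cheap = np.where(truth, rng.random(N) < 0.8, rng.random(N) < 0.2)

sub = rng.choice(N, 500, replace=False)           # double-sampled subsample
sens = cheap[sub][truth[sub]].mean()              # estimated from subsample
spec = 1 - cheap[sub][~truth[sub]].mean()

p_raw = cheap.mean()                              # biased large-sample rate
p_cal = (p_raw + spec - 1) / (sens + spec - 1)    # calibrated compliance rate
print(round(p_raw, 3), round(p_cal, 3))
```

    The calibrated estimate uses the whole corpus for precision and the subsample for accuracy, which is the same trade the DS + MI design makes.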

  7. Subsampling effects in neuronal avalanche distributions recorded in vivo

    PubMed Central

    Priesemann, Viola; Munk, Matthias HJ; Wibral, Michael

    2009-01-01

    Background Many systems in nature are characterized by complex behaviour where large cascades of events, or avalanches, unpredictably alternate with periods of little activity. Snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s), and sigma = 1, are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short-term memory task. Results Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both f(s) and sigma depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusion Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system-specific scaling of f(s) and sigma under subsampling conditions may prove useful to select physiologically motivated models of brain function. Models that better reproduce f(s) and sigma calculated from the physiological recordings may be selected over alternatives. PMID:19400967

  8. The PASADO core processing strategy — A proposed new protocol for sediment core treatment in multidisciplinary lake drilling projects

    NASA Astrophysics Data System (ADS)

    Ohlendorf, Christian; Gebhardt, Catalina; Hahn, Annette; Kliem, Pierre; Zolitschka, Bernd

    2011-07-01

    Using the ICDP (International Continental Scientific Drilling Program) deep lake drilling expedition no. 5022 as an example, we describe core processing and sampling procedures as well as new tools developed for subsampling. A manual core splitter is presented that is (1) mobile, (2) able to cut plastic core liners lengthwise without producing swarf of liner material and (3) consists of off-the-shelf components. In order to improve the sampling of sediment cores, a new device, the core sampling assembly (CSA), was developed that meets the following targets: (1) the partitioning of the sediment into discs of equal thickness is fast and precise, (2) disturbed sediment at the inner surface of the liner is discarded during this sampling process, (3) usage of the available sediment is optimised, (4) subsamples are volumetric and oriented, and (5) identical subsamples are taken. The CSA can be applied to D-shaped split sediment cores of any diameter and consists of a divider and a D-shaped scoop. The sampling plan applied for ICDP expedition 5022 is illustrated and may be used as a guideline for planning the efficient partitioning of sediment amongst different lake research groups involved in multidisciplinary projects. For every subsample, the use of quality flags is suggested (1) to document the sample condition, (2) to give a first sediment classification and (3) to guarantee a precise adjustment of logging and scanning data with data determined on individual samples. Based on this, we propose a protocol that might be applied across lake drilling projects in order to facilitate planning and documentation of sampling campaigns and to ensure a better comparability of results.

  9. Inferring collective dynamical states from widely unobserved systems.

    PubMed

    Wilting, Jens; Priesemann, Viola

    2018-06-13

    When assessing spatially extended complex systems, one can rarely sample the states of all components. We show that this spatial subsampling typically leads to severe underestimation of the risk of instability in systems with propagating events. We derive a subsampling-invariant estimator and demonstrate that it correctly infers the infectiousness of various diseases under subsampling, making it particularly useful in countries with unreliable case reports. In neuroscience, recordings are strongly limited by subsampling. Here, the subsampling-invariant estimator allows us to revisit two prominent hypotheses about the brain's collective spiking dynamics: asynchronous-irregular or critical. We identify consistently for rat, cat, and monkey a state that combines features of both and allows input to reverberate in the network for hundreds of milliseconds. Overall, owing to its ready applicability, the estimator paves the way to new insights in the study of spatially extended dynamical systems.
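
    The core idea behind such a subsampling-invariant (multistep-regression) estimator can be sketched on a toy branching process. All process parameters and the subsampling probability below are made up for illustration; only the estimator's logic matters: the one-step regression slope is biased under subsampling, but the k-step slopes still decay as b·m^k, so an exponential fit across k recovers the branching ratio m.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy branching process: A_{t+1} ~ Poisson(m * A_t + h), subcritical m = 0.9.
m_true, h, T = 0.9, 10.0, 50_000
A = np.empty(T)
A[0] = h / (1 - m_true)                      # start near the stationary mean
for t in range(T - 1):
    A[t + 1] = rng.poisson(m_true * A[t] + h)

# Severe spatial subsampling: each unit observed with probability alpha.
alpha = 0.1
a = rng.binomial(A.astype(int), alpha)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Conventional one-step regression is strongly biased under subsampling ...
m_naive = slope(a[:-1], a[1:])

# ... but k-step slopes still decay as b * m^k, so an exponential fit
# across k recovers m (the multistep-regression idea).
ks = np.arange(1, 8)
rk = np.array([slope(a[:-k], a[k:]) for k in ks])
m_mr = np.exp(slope(ks, np.log(rk)))

print(f"naive: {m_naive:.2f}, subsampling-invariant: {m_mr:.2f}")
```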

  10. The Surface Brightness-color Relations Based on Eclipsing Binary Stars: Toward Precision Better than 1% in Angular Diameter Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graczyk, Dariusz; Gieren, Wolfgang; Konorski, Piotr

    In this study we investigate the calibration of surface brightness–color (SBC) relations based solely on eclipsing binary stars. We selected a sample of 35 detached eclipsing binaries with trigonometric parallaxes from Gaia DR1 or Hipparcos whose absolute dimensions are known with an accuracy better than 3% and that lie within 0.3 kpc from the Sun. For the purpose of this study, we used mostly homogeneous optical and near-infrared photometry based on the Tycho-2 and 2MASS catalogs. We derived geometric angular diameters for all stars in our sample with a precision better than 10%, and for 11 of them with a precision better than 2%. The precision of individual angular diameters of the eclipsing binary components is currently limited by the precision of the geometric distances (∼5% on average). However, by using a subsample of systems with the best agreement between their geometric and photometric distances, we derived precise SBC relations based only on eclipsing binary stars. These relations have precisions that are comparable to the best available SBC relations based on interferometric angular diameters, and they are fully consistent with them. With very precise Gaia parallaxes becoming available in the near future, angular diameters with a precision better than 1% will be abundant. At that point, the main uncertainty in the total error budget of the SBC relations will come from transformations between different photometric systems, disentangling of component magnitudes, and, for hot OB stars, the interstellar extinction determination. We argue that all these issues can be overcome with modern high-quality data and conclude that a precision better than 1% is entirely feasible.

  11. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    PubMed Central

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561
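
    The subsampling experiment can be mimicked with a toy example. The sketch below is not a Cormack-Jolly-Seber model: it applies a crude known-detection-probability correction to simulated recapture data, only to show how the precision of a survival estimate scales with subsample size; every parameter is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1,000 tagged lobsters, true annual survival phi = 0.7, recapture of
# survivors with known probability p = 0.5 (a crude stand-in for CJS).
N, phi, p = 1000, 0.7, 0.5
alive = rng.random(N) < phi
seen = alive & (rng.random(N) < p)

results = []
for n in (200, 500, 1000):
    idx = rng.choice(N, size=n, replace=False)   # random subsample, as in study
    q_hat = seen[idx].mean()                     # P(seen) = phi * p
    phi_hat = q_hat / p                          # detection-corrected survival
    se = np.sqrt(q_hat * (1 - q_hat) / n) / p    # rough binomial SE of phi_hat
    results.append((n, phi_hat, se))
    print(n, round(phi_hat, 2), round(se, 3))
```

    The standard error shrinks roughly as 1/sqrt(n), which is why richer (e.g., sex-dependent) models only become supportable at the larger subsample sizes the authors report.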

  12. Rotation-robust math symbol recognition and retrieval using outer contours and image subsampling

    NASA Astrophysics Data System (ADS)

    Zhu, Siyu; Hu, Lei; Zanibbi, Richard

    2013-01-01

    This paper presents, for the first time, a unified recognition and retrieval system for isolated offline printed mathematical symbols. The system is based on a nearest-neighbor scheme and uses a modified Turning Function and Grid Features to compute the distance between two symbols as a sum of squared differences. An unwrap process and an alignment process modify the Turning Function to handle the horizontal and vertical shifts caused by changes in starting point and by rotation, making the system robust to rotation of the symbol image. The system obtains a top-1 recognition rate of 96.90% and 47.27% area under the curve (AUC) of the precision/recall plot on the InftyCDB-3 dataset. Experimental results show that the system with the modified Turning Function performs significantly better than the system with the original Turning Function on the rotated InftyCDB-3 dataset.

  13. Format effects in two teacher rating scales of hyperactivity.

    PubMed

    Sandoval, J

    1981-06-01

    The object of this study was to investigate the effect of differences in format on the precision of teacher ratings and thus on the reliability and validity of two teacher rating scales of children's hyperactive behavior. Teachers (N = 242) rated a sample of children in their classrooms using rating scales assessing similar attributes with different formats. For a sub-sample the rating scales were readministered after 2 weeks. The results indicated that improvement can be made in the precision of teacher ratings that may be reflected in improved reliability and validity.

  14. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
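
    The caution flagged here can be made concrete with a toy attenuation example: OLS on an error-contaminated predictor shrinks the slope by the reliability ratio, and the method-of-moments correction requires knowing the error variances (or their ratio). All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Errors-in-variables setup: the predictor is observed with noise.
n, beta = 5_000, 2.0
x_true = rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, 0.5, n)       # measurement error, variance 0.25
y = beta * x_true + rng.normal(0, 0.5, n)

b_ols = np.polyfit(x_obs, y, 1)[0]           # attenuated: ~ beta / (1 + 0.25)
reliability = 1.0 / (1.0 + 0.25 / 1.0)       # requires the known variance ratio
b_corrected = b_ols / reliability

print(round(b_ols, 2), round(b_corrected, 2))
```

    If the assumed variance ratio is wrong, the "correction" simply rescales the bias, which is the hazard the abstract points to.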

  15. Influence of various water quality sampling strategies on load estimates for small streams

    USGS Publications Warehouse

    Robertson, Dale M.; Roerish, Eric D.

    1999-01-01

    Extensive streamflow and water quality data from eight small streams were systematically subsampled to represent various water-quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression with daily average streamflow was imprecise regardless of the sampling strategy used; even for the most effective strategy, median absolute errors were ∼30% relative to loads estimated with an integration method and all available data. The most effective sampling strategy depends on the length of the study. For 1-year studies, fixed-period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed-period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high-flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high-flow events.
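
    The regression (rating-curve) approach evaluated here has a compact workflow. The synthetic record and simple log-log concentration-flow model below are illustrative stand-ins (the study's models include additional flow, time, and season terms), but the mechanics are the same: fit on a fixed-period subsample, predict concentration for every day, and sum the daily loads.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily record for one year: flow (q) and concentration (c)
# with a log-linear c-q relation plus noise.
days = 365
q = np.exp(rng.normal(2.0, 0.8, days))           # daily mean streamflow
c = np.exp(0.5 + 0.6 * np.log(q) + rng.normal(0, 0.3, days))
true_load = np.sum(c * q)                        # "integration" load, all days

# Fixed-period monthly subsample (every 30th day), one tested strategy.
idx = np.arange(0, days, 30)
b1, b0 = np.polyfit(np.log(q[idx]), np.log(c[idx]), 1)

# Regression estimate: predict concentration for every day from daily flow.
c_hat = np.exp(b0 + b1 * np.log(q))
est_load = np.sum(c_hat * q)

print(f"relative error: {(est_load - true_load) / true_load:+.1%}")
```

    With only a dozen calibration samples the slope is noisy and the log-space fit carries retransformation bias, which is consistent with the imprecision the study reports for small streams.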

  16. An Interdisciplinary Method for the Visualization of Novel High-Resolution Precision Photography and Micro-XCT Data Sets of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Create Combined Research-Grade 3D Virtual Samples for the Benefit of Astromaterials Collections Conservation, Curation, Scientific Research and Education

    NASA Technical Reports Server (NTRS)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2016-01-01

    New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro-XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website.
Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and exterior features, offers scientists a preliminary research tool for targeted sub-sample requests, and additionally is a visually engaging interactive tool for bringing astromaterials science to the public.

  17. Comparison and assessment of aerial and ground estimates of waterbird colonies

    USGS Publications Warehouse

    Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.

    2008-01-01

    Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs estimated from ground and helicopter surveys were very similar for all species we observed. Ground counts by a single observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft. We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.
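
    The marked-subsample correction described here has a compact form; a sketch with made-up counts (a Lincoln-Petersen-style correction, not necessarily the authors' exact estimator):

```python
import math

def marked_subsample_estimate(count, marked_found, marked_total):
    """Correct a ground count for visibility bias using a marked subsample:
    the detection rate observed on the marked nests rescales the raw count."""
    d = marked_found / marked_total          # observer's detection rate
    n_hat = count / d                        # bias-corrected colony size
    # Delta-method SE from binomial uncertainty in the detection rate.
    se = n_hat * math.sqrt((1 - d) / marked_found)
    return n_hat, se

# Hypothetical colony: 50 nests marked beforehand, 40 of them re-found
# during a ground count that tallied 840 nests in total.
n_hat, se = marked_subsample_estimate(count=840, marked_found=40, marked_total=50)
print(round(n_hat), round(se, 1))
```

    The SE term is the "estimate of precision" the abstract highlights as the method's advantage over aerial counts.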

  18. An Interdisciplinary Method for the Visualization of Novel High-Resolution Precision Photography and Micro-XCT Data Sets of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Create Combined Research-Grade 3D Virtual Samples for the Benefit of Astromaterials Collections Conservation, Curation, Scientific Research and Education

    NASA Astrophysics Data System (ADS)

    Blumenfeld, E. H.; Evans, C. A.; Zeigler, R. A.; Righter, K.; Beaulieu, K. R.; Oshel, E. R.; Liddle, D. A.; Hanna, R.; Ketcham, R. A.; Todd, N. S.

    2016-12-01

    New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro-XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website. 
Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and exterior features, offers scientists a preliminary research tool for targeted sub-sample requests, and additionally is a visually engaging interactive tool for bringing astromaterials science to the public.

  19. Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eramo, R.; Bellini, M.; European Laboratory for Non-linear Spectroscopy

    2011-04-15

    Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.

  20. [Factor structure validity of the social capital scale used at baseline in the ELSA-Brasil study].

    PubMed

    Souto, Ester Paiva; Vasconcelos, Ana Glória Godoi; Chor, Dora; Reichenheim, Michael E; Griep, Rosane Härter

    2016-07-21

    This study aims to analyze the factor structure of the Brazilian version of the Resource Generator (RG) scale, using baseline data from the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). Cross-validation was performed in three random subsamples. Exploratory factor analysis using exploratory structural equation models was conducted in the first two subsamples to diagnose the factor structure, and confirmatory factor analysis was used in the third to corroborate the model defined by the exploratory analyses. Based on the 31 initial items, the model with the best fit included 25 items distributed across three dimensions. They all presented satisfactory convergent validity (values greater than 0.50 for the extracted variance) and precision (values greater than 0.70 for composite reliability). All factor correlations were below 0.85, indicating full discriminant factor validity. The RG scale presents acceptable psychometric properties and can be used in populations with similar characteristics.

  1. Assessing the capability of different satellite observing configurations to resolve the distribution of methane emissions at kilometer scales

    NASA Astrophysics Data System (ADS)

    Turner, Alexander J.; Jacob, Daniel J.; Benmergui, Joshua; Brandman, Jeremy; White, Laurent; Randles, Cynthia A.

    2018-06-01

    Anthropogenic methane emissions originate from a large number of fine-scale and often transient point sources. Satellite observations of atmospheric methane columns are an attractive approach for monitoring these emissions but have limitations from instrument precision, pixel resolution, and measurement frequency. Dense observations will soon be available in both low-Earth and geostationary orbits, but the extent to which they can provide fine-scale information on methane sources has yet to be explored. Here we present an observation system simulation experiment (OSSE) to assess the capabilities of different satellite observing system configurations. We conduct a 1-week WRF-STILT simulation to generate methane column footprints at 1.3 × 1.3 km2 spatial resolution and hourly temporal resolution over a 290 × 235 km2 domain in the Barnett Shale, a major oil and gas field in Texas with a large number of point sources. We sub-sample these footprints to match the observing characteristics of the recently launched TROPOMI instrument (7 × 7 km2 pixels, 11 ppb precision, daily frequency), the planned GeoCARB instrument (2.7 × 3.0 km2 pixels, 4 ppb precision, nominal twice-daily frequency), and other proposed observing configurations. The information content of the various observing systems is evaluated using the Fisher information matrix and its eigenvalues. We find that a week of TROPOMI observations should provide information on temporally invariant emissions at ˜ 30 km spatial resolution. GeoCARB should provide information available on temporally invariant emissions ˜ 2-7 km spatial resolution depending on sampling frequency (hourly to daily). Improvements to the instrument precision yield greater increases in information content than improved sampling frequency. A precision better than 6 ppb is critical for GeoCARB to achieve fine resolution of emissions. Transient emissions would be missed with either TROPOMI or GeoCARB. 
An aspirational high-resolution geostationary instrument with 1.3 × 1.3 km2 pixel resolution, hourly return time, and 1 ppb precision would effectively constrain the temporally invariant emissions in the Barnett Shale at the kilometer scale and provide some information on hourly variability of sources.
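
    The Fisher-matrix analysis described above can be illustrated with a toy calculation. The sketch below is not the paper's WRF-STILT setup; `K` is a made-up footprint matrix, and the assumption is a linear Gaussian observation model y = Kx + ε, for which the Fisher information is F = KᵀK/σ². Counting eigenvalues above a threshold serves as a rough proxy for the number of constrained emission modes:

```python
import numpy as np

def fisher_information(K, sigma_obs):
    # Linear Gaussian observation model y = K x + noise: the Fisher
    # information matrix is F = K^T K / sigma_obs**2, so halving the
    # instrument noise quadruples every eigenvalue.
    return K.T @ K / sigma_obs**2

def n_constrained_modes(F, threshold=1.0):
    # Eigenvalues well above the prior scale correspond to directions
    # in emission space that the observations actually constrain.
    return int(np.sum(np.linalg.eigvalsh(F) > threshold))

# Hypothetical footprint matrix: 50 column observations, 10 emission elements.
rng = np.random.default_rng(0)
K = rng.normal(size=(50, 10))

modes_precise = n_constrained_modes(fisher_information(K, sigma_obs=1.0))
modes_coarse = n_constrained_modes(fisher_information(K, sigma_obs=10.0))
```

    Consistent with the abstract's finding, tightening the precision (smaller σ) raises every eigenvalue quadratically, while adding more observations (more rows in K) only grows them roughly linearly, so precision improvements buy more information content than frequency improvements.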

  2. Subsampling program for the estimation of fish impingement

    NASA Astrophysics Data System (ADS)

    Beauchamp, John J.; Kumar, K. D.

    1984-11-01

    Federal regulations require operators of nuclear and coal-fired power-generating stations to estimate the number of fish impinged on intake screens. During winter months, impingement may range into the hundreds of thousands for certain species, making it impossible to count all intake screens completely. We present graphs for determining the appropriate “optimal” subsample that must be obtained to estimate the total number impinged. Since the number of fish impinged tends to change drastically within a short time period, the subsample size is determined based on the most recent data. This allows for the changing nature of the species-age composition of the impinged fish. These graphs can also be used for subsampling fish catches in an aquatic system when the size of the catch is too large to sample completely.
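
    The "optimal" subsample sizes read off such graphs can be approximated with the standard sample-size formula for a target relative error. This is a generic sketch, not the authors' graphs: it assumes the coefficient of variation `cv` of the counts is estimated from the most recent screen data, and a normal-approximation confidence level:

```python
import math

def subsample_size(cv, rel_error=0.10, z=1.96):
    # Number of screens (or baskets) to count so the estimated total
    # is within +/- rel_error of the truth with ~95% confidence,
    # given the coefficient of variation cv of recent counts.
    return math.ceil((z * cv / rel_error) ** 2)
```

    Because impingement composition shifts quickly, `cv` would be re-estimated from the latest counts before each sampling round, mirroring the paper's reliance on the most recent data.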

  3. Evaluation of Bayesian estimation of a hidden continuous-time Markov chain model with application to threshold violation in water-quality indicators

    USGS Publications Warehouse

    Deviney, Frank A.; Rice, Karen; Brown, Donald E.

    2012-01-01

    Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold-crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.

  4. Time's up. Descriptive epidemiology of multi-morbidity and time spent on health related activity by older Australians: a time use survey.

    PubMed

    Jowsey, Tanisha; McRae, Ian S; Valderas, Jose M; Dugdale, Paul; Phillips, Rebecca; Bunton, Robin; Gillespie, James; Banfield, Michelle; Jones, Lesley; Kljakovic, Marjan; Yen, Laurann

    2013-01-01

    Most Western health systems remain single-illness orientated despite the growing prevalence of multi-morbidity. Identifying how much time people with multiple chronic conditions spend managing their health will help policy makers and health service providers make decisions about areas of patient need for support. This article presents findings from an Australian study concerning the time spent on health-related activity by older adults (aged 50 years and over), most of whom had multiple chronic conditions. A recall questionnaire was developed, piloted, and adjusted. Sampling was undertaken through three bodies: the Lung Foundation Australia (COPD sub-sample), the National Diabetes Services Scheme (Diabetes sub-sample), and National Seniors Australia (Seniors sub-sample). Questionnaires were mailed out during 2011 to 10,600 older adults living in Australia, and 2,540 survey responses were received and analysed. Descriptive analyses were completed to obtain median values for the hours spent on each activity per month. The mean number of chronic conditions was 3.7 in the COPD sub-sample, 3.4 in the Diabetes sub-sample, and 2.0 in the Seniors sub-sample. The study identified a clear trend of increased time use associated with an increased number of chronic conditions. Median time use was 5-16 hours per month across our three sub-samples. For respondents in the top decile with five or more chronic conditions, the median time use was equivalent to two to three hours per day; if exercise is included in the calculations, respondents spent between five and eight hours per day: an amount similar to full-time work. Multi-morbidity imposes considerable time burdens on patients. Ageing is associated with increasing rates of multi-morbidity. Many older adults are facing high demands on their time to manage their health in the face of decreasing energy and mobility. Their time use must be considered in health service delivery and health system reform.

  5. Precise method for the measurement of catalase activity in honey.

    PubMed

    Huidobro, José F; Sánchez, M Pilar; Muniategui, Soledad; Sancho, M Teresa

    2005-01-01

    An improved method is reported for the determination of catalase activity in honey. We tested different dialysis membranes, dialysis fluid compositions and amounts, dialysis temperatures, sample amounts, and dialysis times. The best results were obtained by dialysis of a 7.50 g sample in a cellulose dialysis sack, using two 3 L portions of 0.015 M sodium phosphate buffer (pH 7.0) as the dialysis fluid at 4 degrees C for 22 h. As in previous methods, catalase activity was determined on the basis of the rate of disappearance of the substrate, H2O2, with the H2O2 determined spectrophotometrically at 400 nm in an assay system containing o-dianisidine and peroxidase. Trials indicated that the best solvent for the o-dianisidine was 0.2 M sodium phosphate buffer, pH 6.1; the best starting H2O2 concentration was 3 mM; the best HCl concentration for stopping the reaction was 6 N; and the best sample volume for catalase measurement was 7.0 mL. Precision was high, with relative standard deviations (for analyses of 10 subsamples of each of 3 samples) ranging from 0.48% for samples with high catalase activity to 1.98% for samples with low catalase activity.
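
    The precision figures quoted (0.48-1.98%) are relative standard deviations over replicate subsamples. A minimal sketch of that calculation, using hypothetical activity values rather than data from the paper:

```python
import statistics

def relative_standard_deviation(values):
    # RSD (%) = 100 * sample standard deviation / mean, the usual way
    # to report method precision across replicate subsamples.
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical catalase activities for replicate subsamples of one honey.
activities = [10.0, 10.1, 9.9]
rsd = relative_standard_deviation(activities)
```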

  6. Performance of quantitative vegetation sampling methods across gradients of cover in Great Basin plant communities

    USGS Publications Warehouse

    Pilliod, David S.; Arkle, Robert S.

    2013-01-01

    Resource managers and scientists need efficient, reliable methods for quantifying vegetation to conduct basic research, evaluate land management actions, and monitor trends in habitat conditions. We examined three methods for quantifying vegetation in 1-ha plots among different plant communities in the northern Great Basin: photography-based grid-point intercept (GPI), line-point intercept (LPI), and point-quarter (PQ). We also evaluated each method for within-plot subsampling adequacy and effort requirements relative to information gain. We found that, for most functional groups, percent cover measurements collected with the use of LPI, GPI, and PQ methods were strongly correlated. These correlations were even stronger when we used data from the upper canopy only (i.e., top “hit” of pin flags) in LPI to estimate cover. PQ was best at quantifying cover of sparse plants such as shrubs in early successional habitats. As cover of a given functional group decreased within plots, the variance of the cover estimate increased substantially, which required more subsamples per plot (i.e., transect lines, quadrats) to achieve reliable precision. For GPI, we found that six to nine quadrats per hectare were sufficient to characterize the vegetation in most of the plant communities sampled. All three methods reasonably characterized the vegetation in our plots, and each has advantages depending on characteristics of the vegetation, such as cover or heterogeneity, study goals, precision of measurements required, and efficiency needed.

  7. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for the quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine how the trade-off between the number of images selected within transects and the number of random points scored within images affects estimates of percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. Increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare.
The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
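
    The image/point trade-off can be mimicked with a small simulation. Below is a hedged sketch (simulated binary "images", not the authors' simulation code) that scores uniform random points within randomly selected images to estimate percent cover:

```python
import random

def estimate_cover(images, n_images, pts_per_image, rng):
    # images: list of 2-D 0/1 grids (1 = target biota at that pixel).
    # Score pts_per_image uniform random points in each of n_images
    # randomly selected images; return estimated percent cover.
    hits = total = 0
    for img in rng.sample(images, n_images):
        rows, cols = len(img), len(img[0])
        for _ in range(pts_per_image):
            hits += img[rng.randrange(rows)][rng.randrange(cols)]
            total += 1
    return 100.0 * hits / total

# Simulated survey: 10 images, each with exactly 50% cover.
rng = random.Random(0)
images = [[[(r + c) % 2 for c in range(20)] for r in range(20)]
          for _ in range(10)]
estimate = estimate_cover(images, n_images=5, pts_per_image=200, rng=rng)
```

    Repeating this over many draws while varying `n_images` against `pts_per_image` at fixed total effort reproduces the qualitative result above: precision improves faster by adding images than by adding points per image.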

  8. Consequences of slow growth for 230Th/U dating of Quaternary opals, Yucca Mountain, NV, USA

    USGS Publications Warehouse

    Neymark, L.A.; Paces, J.B.

    2000-01-01

    Thermal ionization mass-spectrometry 234U/238U and 230Th/238U data are reported for uranium-rich opals coating fractures and cavities within the silicic tuffs forming Yucca Mountain, NV, the potential site of a high-level radioactive waste repository. High uranium concentrations (up to 207 ppm) and extremely high 230Th/232Th activity ratios (up to about 10^6) make microsamples of these opals suitable for precise 230Th/U dating. Conventional 230Th/U ages range from 40 to greater than 600 ka, with initial 234U/238U activity ratios between 1.03 and 8.2. Isotopic evidence indicates that the opals have not experienced uranium mobility; however, wide variations in apparent ages and initial 234U/238U ratios for separate subsamples of the same outermost mineral surfaces, a positive correlation between ages and sample weights, and a negative correlation between 230Th/U ages and calculated initial 234U/238U are inconsistent with the assumption that all minerals in a given subsample were deposited instantaneously. The data are more consistent with a conceptual model of continuous deposition in which secondary mineral growth has occurred at a constant, slow rate up to the present. This model assumes that individual subsamples represent mixtures of older and younger material, and that calculations using the resulting isotope ratios reflect an average age. Ages calculated using the continuous-deposition model for opals imply average mineral growth rates of less than 5 mm/m.y. The model of continuous deposition also predicts discordance between ages obtained using different radiometric methods for the same subsample. Differences in half-lives will result in younger apparent ages for the shorter-lived isotope due to the greater influence of younger materials continuously added to mineral surfaces. Discordant 14C, 230Th/U and U-Pb ages obtained from outermost mineral surfaces at Yucca Mountain support this model. (C) 2000 Elsevier Science B.V. All rights reserved.

  9. Text mining electronic hospital records to automatically classify admissions against disease: Measuring the impact of linking data sources.

    PubMed

    Kocbek, Simon; Cavedon, Lawrence; Martinez, David; Bain, Christopher; Manus, Chris Mac; Haffari, Gholamreza; Zukerman, Ingrid; Verspoor, Karin

    2016-12-01

    Text and data mining play an important role in obtaining insights from Health and Hospital Information Systems. This paper presents a text mining system for detecting admissions marked as positive for several diseases: Lung Cancer, Breast Cancer, Colon Cancer, Secondary Malignant Neoplasm of Respiratory and Digestive Organs, Multiple Myeloma and Malignant Plasma Cell Neoplasms, Pneumonia, and Pulmonary Embolism. We specifically examine the effect of linking multiple data sources on text classification performance. Support Vector Machine classifiers are built for eight data source combinations, and evaluated using the metrics of Precision, Recall and F-Score. Sub-sampling techniques are used to address unbalanced datasets of medical records. We use radiology reports as an initial data source and add other sources, such as pathology reports and patient and hospital admission data, in order to assess the impact of linking multiple data sources. Statistical significance is measured using the Wilcoxon signed-rank test. A second set of experiments explores aspects of the system in greater depth, focusing on Lung Cancer. We explore the impact of feature selection; analyse the learning curve; examine the effect of restricting admissions to only those containing reports from all data sources; and examine the impact of reducing the sub-sampling. These experiments provide a better understanding of how to best apply text classification in the context of imbalanced data of variable completeness. Radiology questions plus patient and hospital admission data contribute valuable information for detecting most of the diseases, significantly improving performance when added to radiology reports alone or to the combination of radiology and pathology reports. Overall, linking data sources significantly improved classification performance for all the diseases examined.
However, there is no single approach that suits all scenarios; the choice of the most effective combination of data sources depends on the specific disease to be classified. Copyright © 2016 Elsevier Inc. All rights reserved.
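
    The sub-sampling used here to balance the training data is random under-sampling of the majority class. A generic sketch of that step (illustrative records, not the study's data or its SVM pipeline):

```python
import random

def undersample(X, y, rng):
    # Keep every minority-class record and an equally sized random
    # subset of the majority class, yielding a balanced training set.
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == 0]
    if len(pos) <= len(neg):
        minority, majority, m_lab, M_lab = pos, neg, 1, 0
    else:
        minority, majority, m_lab, M_lab = neg, pos, 0, 1
    majority = rng.sample(majority, len(minority))
    return (minority + majority,
            [m_lab] * len(minority) + [M_lab] * len(majority))

# 10 positive admissions vs. 90 negative: a typical class imbalance.
rng = random.Random(0)
X = [f"report-{i}" for i in range(100)]
y = [1] * 10 + [0] * 90
X_bal, y_bal = undersample(X, y, rng)
```

    A classifier trained on `X_bal, y_bal` no longer gains by always predicting the majority class, which is why the study reports Precision, Recall, and F-Score rather than accuracy.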

  10. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
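
    The resonance effect described above, where sampling intervals matching an underlying cycle inflate the error in the mean, can be demonstrated on a synthetic hourly series. This is a sketch under the assumption of a pure 24 h diurnal cycle, not the GEOS-5 Nature Run data:

```python
import math

def subsample_mean_error(series, interval, offset=0):
    # Difference between the mean of every `interval`-th value and the
    # full-series mean: the bias introduced by coarser temporal sampling.
    sub = series[offset::interval]
    return sum(sub) / len(sub) - sum(series) / len(series)

# Synthetic hourly cloud-fraction-like signal with a 24 h cycle.
hourly = [10.0 + 5.0 * math.sin(2.0 * math.pi * t / 24.0)
          for t in range(24 * 100)]

err_24h = subsample_mean_error(hourly, interval=24, offset=6)  # resonant
err_5h = subsample_mean_error(hourly, interval=5)              # non-resonant
```

    Sampling every 24 h always lands on the same phase of the cycle (here the daily maximum), biasing the mean by the full cycle amplitude, while a 5 h interval walks through every phase and leaves the mean essentially unbiased: the same mechanism behind the elevated uncertainties the study finds at 8, 12, and 24 h intervals.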

  11. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.

  12. Measurement of the antineutrino to neutrino charged-current interaction cross section ratio in MINERvA

    NASA Astrophysics Data System (ADS)

    Ren, L.; Aliaga, L.; Altinok, O.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Cai, T.; Carneiro, M. F.; da Motta, H.; Devan, J.; Dytman, S. A.; Díaz, G. A.; Eberly, B.; Endress, E.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Gallagher, H.; Ghosh, A.; Golan, T.; Gran, R.; Han, J. Y.; Harris, D. A.; Hurtado, K.; Kiveni, M.; Kleykamp, J.; Kordosky, M.; Le, T.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; Martinez Caicedo, D. A.; McFarland, K. S.; McGivern, C. L.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Norrick, A.; Nuruzzaman, Paolone, V.; Park, J.; Patrick, C. E.; Perdue, G. N.; Ramírez, M. A.; Ransome, R. D.; Ray, H.; Rimal, D.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Solano Salinas, C. J.; Sultana, M.; Sánchez Falero, S.; Valencia, E.; Walton, T.; Wolcott, J.; Wospakrik, M.; Yaeggy, B.; MinerνA Collaboration

    2017-04-01

    We present measurements of the neutrino and antineutrino total charged-current cross sections on carbon and their ratio using the MINERvA scintillator-tracker. The measurements span the energy range 2-22 GeV and were performed using forward and reversed horn focusing modes of the Fermilab low-energy NuMI beam to obtain large neutrino and antineutrino samples. The flux is obtained using a subsample of charged-current events at low hadronic energy transfer along with precise higher energy external neutrino cross section data overlapping with our energy range between 12-22 GeV. We also report on the antineutrino-neutrino cross section ratio, RCC , which does not rely on external normalization information. Our ratio measurement, obtained within the same experiment using the same technique, benefits from the cancellation of common sample systematic uncertainties and reaches a precision of ˜5 % at low energy. Our results for the antineutrino-nucleus scattering cross section and for RCC are the most precise to date in the energy range Eν<6 GeV .

  13. Measurement of the antineutrino to neutrino charged-current interaction cross section ratio in MINERvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, L.; Aliaga, L.; Altinok, O.

    Here, we present measurements of the neutrino and antineutrino total charged-current cross sections on carbon and their ratio using the MINERvA scintillator-tracker. The measurements span the energy range 2-22 GeV and were performed using forward and reversed horn focusing modes of the Fermilab low-energy NuMI beam to obtain large neutrino and antineutrino samples. The flux is obtained using a sub-sample of charged-current events at low hadronic energy transfer along with precise higher energy external neutrino cross section data overlapping with our energy range between 12-22 GeV. We also report on the antineutrino-neutrino cross section ratio, RCC, which does not rely on external normalization information. Our ratio measurement, obtained within the same experiment using the same technique, benefits from the cancellation of common sample systematic uncertainties and reaches a precision of ˜5% at low energy. Our results for the antineutrino-nucleus scattering cross section and for RCC are the most precise to date in the energy range Eν < 6 GeV.

  14. Measurement of the antineutrino to neutrino charged-current interaction cross section ratio in MINERvA

    DOE PAGES

    Ren, L.; Aliaga, L.; Altinok, O.; ...

    2017-04-14

    Here, we present measurements of the neutrino and antineutrino total charged-current cross sections on carbon and their ratio using the MINERvA scintillator-tracker. The measurements span the energy range 2-22 GeV and were performed using forward and reversed horn focusing modes of the Fermilab low-energy NuMI beam to obtain large neutrino and antineutrino samples. The flux is obtained using a sub-sample of charged-current events at low hadronic energy transfer along with precise higher energy external neutrino cross section data overlapping with our energy range between 12-22 GeV. We also report on the antineutrino-neutrino cross section ratio, RCC, which does not rely on external normalization information. Our ratio measurement, obtained within the same experiment using the same technique, benefits from the cancellation of common sample systematic uncertainties and reaches a precision of ˜5% at low energy. Our results for the antineutrino-nucleus scattering cross section and for RCC are the most precise to date in the energy range Eν < 6 GeV.

  15. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers utilise the subsampling method to down convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down converted using the subsampling receiver, but using the incorrect subsampling frequency could result in signals aliasing one another after down conversion. The existing method for subsampling multiband signals focused on down converting all the signals without any aliasing between the signals. The case considered initially was a dual band signal, and then it was further extended to a more general multiband case. In this thesis, a new method is proposed with the assumption that only one signal is needed to not overlap the other multiband signals that are down converted at the same time. The proposed method will introduce unique formulas using the said assumption to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method will provide lower valid subsampling frequencies for down conversion compared to the existing methods.
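
    For the single-band case, the valid subsampling (bandpass sampling) frequencies follow from a classical textbook condition rather than the thesis's multiband formulas. A sketch of that computation, where fs is alias-free when 2·f_high/n ≤ fs ≤ 2·f_low/(n−1) for some integer n:

```python
def valid_subsampling_ranges(f_low, f_high):
    # Classical bandpass-sampling condition for a single band
    # [f_low, f_high]: fs avoids aliasing when, for some integer n,
    #   2*f_high / n <= fs <= 2*f_low / (n - 1),
    # with n at most floor(f_high / (f_high - f_low)).
    n_max = int(f_high // (f_high - f_low))
    ranges = []
    for n in range(1, n_max + 1):
        lo = 2.0 * f_high / n
        hi = float("inf") if n == 1 else 2.0 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# Hypothetical RF band from 20 to 25 MHz (bandwidth 5 MHz).
ranges_mhz = valid_subsampling_ranges(20.0, 25.0)
```

    The lowest valid sampling rate here is 10 MHz, set by twice the 5 MHz bandwidth rather than twice the 25 MHz upper band edge, which is exactly the attraction of subsampling receivers.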

  16. The JRC Nanomaterials Repository: A unique facility providing representative test materials for nanoEHS research.

    PubMed

    Totaro, Sara; Cotogno, Giulio; Rasmussen, Kirsten; Pianella, Francesca; Roncaglia, Marco; Olsson, Heidi; Riego Sintes, Juan M; Crutzen, Hugues P

    2016-11-01

    The European Commission has established a Nanomaterials Repository that hosts industrially manufactured nanomaterials that are distributed world-wide for safety testing of nanomaterials. In a first instance, these materials were tested in the OECD Testing Programme; they have since also been tested in several EU-funded research projects. The JRC Repository of Nanomaterials has thus developed into serving the global scientific community active in nanoEHS (regulatory) research. The unique Repository facility is a state-of-the-art installation that allows customised sub-sampling under the safest possible conditions, with traceable final sample vials distributed world-wide for research purposes. This paper describes the design of the Repository to perform a semi-automated sub-sampling procedure, offering a high degree of flexibility and precision in the preparation of NM vials for customers, while guaranteeing the safety of the operators and environmental protection. The JRC nanomaterials are representative of part of the world NM market. Their wide use facilitates the generation of comparable and reliable experimental results and datasets in (regulatory) research by the scientific community, ultimately supporting the further development of the OECD regulatory test guidelines. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Freezing-thawing and sub-sampling influence the marination performance of chicken breast meat.

    PubMed

    Bowker, B; Zhuang, H

    2017-09-01

    Vacuum-tumbling marination is often used to improve the yield and quality of whole or portioned broiler breast fillets. The relationship between the marination performance of whole Pectoralis major muscles and breast fillet sub-samples is not well understood. The objective was to determine the effects of sub-sampling and freezing-thawing on the marination performance and cook loss of broiler breast meat. Paired right and left breast fillets were marinated as whole fillets or sub-samples (cranial and mid-caudal portions). Samples were marinated at 48 h postmortem (fresh) or stored at -20°C and then thawed prior to marination (frozen-thawed). Samples were vacuum-tumbled in 20% wt/wt brine (5% NaCl, 3% STP) and weighed pre-marination, during marination (15, 30, and 45 min), and 24 h post-marination. Samples were then cooked to 75°C for determination of cook loss. Marinade uptake was greater in caudal sub-samples than intact fillets and cranial sub-samples after 15 min of marination (P < 0.0001). After 30 min, marinade uptake was greater in caudal sub-samples and intact fillets than cranial sub-samples (P < 0.05). After 45 min, marinade uptake for fresh samples was greatest in intact fillets and lowest in cranial sub-samples. For frozen-thawed samples, marinade uptake at 45 min was greater in caudal sub-samples and intact fillets than cranial sub-samples (P < 0.0001). Marinade uptake in sub-samples at 30 min was greater in frozen-thawed versus fresh fillets (P < 0.05). Differences in marinade retention were not observed. Cook loss was similar between fresh and frozen-thawed samples but was greater in sub-samples compared to intact fillets (P < 0.0001). Correlations between marinade uptake in intact fillets and cranial sub-samples were greater in fresh (r = 0.64 to 0.78) than frozen-thawed samples (r = 0.39 to 0.59). 
Correlations between marinade uptake in intact fillets and caudal sub-samples were greater in frozen-thawed (r = 0.79 to 0.82) than fresh samples (r = 0.46 to 0.63). Data suggest that the relationships between marination performance of whole breast fillets and fillet sub-samples are dependent upon prior sample handling and intra-fillet sampling location. Published by Oxford University Press on behalf of Poultry Science Association 2017.

  18. The Broselow and Handtevy Resuscitation Tapes: A Comparison of the Performance of Pediatric Weight Prediction.

    PubMed

    Lowe, Calvin G; Campwala, Rashida T; Ziv, Nurit; Wang, Vincent J

    2016-08-01

    To assess the performance of two pediatric length-based tapes (Broselow and Handtevy) in predicting actual weights of US children. In this descriptive study, weights and lengths of children (newborn through 13 years of age) were extracted from the 2009-2010 National Health and Nutrition Examination Survey (NHANES). Using the measured length ranges for each tape and the NHANES-extracted length data, every case from the study sample was coded into Broselow and Handtevy zones. Mean weights were calculated for each zone and compared to the predicted Broselow and Handtevy weights using measures of bias, precision, and accuracy. A sub-sample was examined that excluded cases with body mass index (BMI) ≥ 95th percentile. Weights of children longer than each tape also were examined. A total of 3,018 cases from the NHANES database met criteria. Although both tapes underestimated children's weight, the Broselow tape outperformed the Handtevy tape across most length ranges in measures of bias, precision, and accuracy of predicted weights relative to actual weights. Accuracy was higher in the Broselow tape for shorter children and in the Handtevy tape for taller children. Among the sub-sample with cases of BMI ≥ 95th percentile removed, performance of the Handtevy tape improved, yet the Broselow tape still performed better. When assessing the weights of children who were longer than either tape, the actual mean weights did not approximate adult weights, although those exceeding the Handtevy tape were closer. For pediatric weight estimation, the Broselow tape performed better overall than the Handtevy tape and more closely approximated actual weight. Lowe CG, Campwala RT, Ziv N, Wang VJ. The Broselow and Handtevy resuscitation tapes: a comparison of the performance of pediatric weight prediction. Prehosp Disaster Med. 2016;31(4):364-375.

  19. Galaxy Clustering in Early Sloan Digital Sky Survey Redshift Data

    NASA Astrophysics Data System (ADS)

    Zehavi, Idit; Blanton, Michael R.; Frieman, Joshua A.; Weinberg, David H.; Mo, Houjun J.; Strauss, Michael A.; Anderson, Scott F.; Annis, James; Bahcall, Neta A.; Bernardi, Mariangela; Briggs, John W.; Brinkmann, Jon; Burles, Scott; Carey, Larry; Castander, Francisco J.; Connolly, Andrew J.; Csabai, Istvan; Dalcanton, Julianne J.; Dodelson, Scott; Doi, Mamoru; Eisenstein, Daniel; Evans, Michael L.; Finkbeiner, Douglas P.; Friedman, Scott; Fukugita, Masataka; Gunn, James E.; Hennessy, Greg S.; Hindsley, Robert B.; Ivezić, Željko; Kent, Stephen; Knapp, Gillian R.; Kron, Richard; Kunszt, Peter; Lamb, Donald Q.; Leger, R. French; Long, Daniel C.; Loveday, Jon; Lupton, Robert H.; McKay, Timothy; Meiksin, Avery; Merrelli, Aronne; Munn, Jeffrey A.; Narayanan, Vijay; Newcomb, Matt; Nichol, Robert C.; Owen, Russell; Peoples, John; Pope, Adrian; Rockosi, Constance M.; Schlegel, David; Schneider, Donald P.; Scoccimarro, Roman; Sheth, Ravi K.; Siegmund, Walter; Smee, Stephen; Snir, Yehuda; Stebbins, Albert; Stoughton, Christopher; SubbaRao, Mark; Szalay, Alexander S.; Szapudi, Istvan; Tegmark, Max; Tucker, Douglas L.; Uomoto, Alan; Vanden Berk, Dan; Vogeley, Michael S.; Waddell, Patrick; Yanny, Brian; York, Donald G.

    2002-05-01

    We present the first measurements of clustering in the Sloan Digital Sky Survey (SDSS) galaxy redshift survey. Our sample consists of 29,300 galaxies with redshifts 5700 km s^-1 ≤ cz ≤ 39,000 km s^-1, distributed in several long but narrow (2.5°-5°) segments, covering 690 deg^2. For the full, flux-limited sample, the redshift-space correlation length is approximately 8 h^-1 Mpc. The two-dimensional correlation function ξ(r_p,π) shows clear signatures of both the small-scale, "fingers-of-God" distortion caused by velocity dispersions in collapsed objects and the large-scale compression caused by coherent flows, though the latter cannot be measured with high precision in the present sample. The inferred real-space correlation function is well described by a power law, ξ(r) = (r / (6.1 ± 0.2 h^-1 Mpc))^(-1.75 ± 0.03), for 0.1 h^-1 Mpc ≤ r ≤ 16 h^-1 Mpc. The galaxy pairwise velocity dispersion is σ_12 ≈ 600 ± 100 km s^-1 for projected separations 0.15 h^-1 Mpc ≤ r_p ≤ 5 h^-1 Mpc. When we divide the sample by color, the red galaxies exhibit a stronger and steeper real-space correlation function and a higher pairwise velocity dispersion than do the blue galaxies. The relative behavior of subsamples defined by high/low profile concentration or high/low surface brightness is qualitatively similar to that of the red/blue subsamples. Our most striking result is a clear measurement of scale-independent luminosity bias at r ≲ 10 h^-1 Mpc: subsamples with absolute magnitude ranges centered on M*-1.5, M*, and M*+1.5 have real-space correlation functions that are parallel power laws of slope ≈ -1.8 with correlation lengths of approximately 7.4, 6.3, and 4.7 h^-1 Mpc, respectively.

  20. Nyström type subsampling analyzed as a regularized projection

    NASA Astrophysics Data System (ADS)

    Kriukova, Galyna; Pereverzyev, Sergiy, Jr.; Tkachenko, Pavlo

    2017-07-01

    In statistical learning theory, Nyström-type subsampling methods are considered as tools for dealing with big data. In this paper we consider Nyström subsampling as a special form of projected Lavrentiev regularization, and study it using the approaches developed in regularization theory. As a result, we prove that the same capacity-independent learning rates that are guaranteed for standard algorithms running with quadratic computational complexity can be obtained with subquadratic complexity by the Nyström subsampling approach, provided that the subsampling size is chosen properly. We propose an a priori rule for choosing the subsampling size and an a posteriori strategy for dealing with uncertainty in this choice. The theoretical results are illustrated by numerical experiments.
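The quadratic-to-subquadratic saving can be illustrated with a plain Nyström approximation to kernel ridge regression. This is a minimal numpy sketch under assumed settings (RBF kernel, 40 random centers, illustrative regularization constant), not the paper's projected Lavrentiev scheme:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian kernel matrix from pairwise squared distances
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_krr(X, y, m, lam=1e-3, gamma=1.0, seed=0):
    """Kernel ridge regression restricted to the span of m randomly
    subsampled centers: the linear solve shrinks from n x n to m x m."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    Xm = X[idx]                         # subsampled centers
    Knm = rbf_kernel(X, Xm, gamma)      # n x m cross-kernel
    Kmm = rbf_kernel(Xm, Xm, gamma)     # m x m kernel among centers
    # Regularized normal equations of the reduced problem
    A = Knm.T @ Knm + lam * len(X) * Kmm + 1e-10 * np.eye(m)
    alpha = np.linalg.solve(A, Knm.T @ y)
    return lambda Xt: rbf_kernel(Xt, Xm, gamma) @ alpha

# Toy problem: recover sin(2x) from 400 noisy samples with 40 centers
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 3.0, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(400)
predict = nystrom_krr(X, y, m=40)
mse = np.mean((predict(X) - np.sin(2 * X[:, 0])) ** 2)
```

The full kernel solve would invert a 400 x 400 system; the subsampled one inverts 40 x 40, which is the source of the subquadratic complexity when the subsampling size grows slower than n.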

  1. Extraction of features from medical images using a modular neural network approach that relies on learning by sample

    NASA Astrophysics Data System (ADS)

    Brahmi, Djamel; Serruys, Camille; Cassoux, Nathalie; Giron, Alain; Triller, Raoul; Lehoang, Phuc; Fertil, Bernard

    2000-06-01

    Medical images provide experienced physicians with meaningful visual stimuli, but their features are frequently hard to decipher. The development of a computational model to mimic physicians' expertise is a demanding task, especially if significant and sophisticated preprocessing of images is required. Learning from expert-annotated images may be a more convenient approach, inasmuch as a large and representative set of samples is available. A four-stage approach has been designed, which combines image sub-sampling with unsupervised image coding, supervised classification and image reconstruction in order to directly extract medical expertise from raw images. The system has been applied (1) to the detection of some features related to the diagnosis of black tumors of skin (a classification issue) and (2) to the detection of virus-infected and healthy areas in retina angiography in order to locate precisely the border between them and characterize the evolution of infection. For reasonably balanced training sets, we were able to obtain about 90% correct classification of features (black tumors). Boundaries generated by our system mimic the reproducibility of outlines drawn by hand by experts (segmentation of virus-infected areas).

  2. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    NASA Astrophysics Data System (ADS)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-12-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.

  3. Inadequacy of internal covariance estimation for super-sample covariance

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Kunz, Martin

    2017-08-01

    We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been previously presented in the literature because it is effectively a rescaling of the covariance of the subsample area. However, we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts that allows the bias of each covariance estimator to be exactly predicted. We find significant effects for small survey areas or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on data itself or a single simulation, leaving analytical prediction and simulation suites as possible SSC predictors.
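The core mechanism, subregions sharing a common super-sample mode that internal estimators cannot see, can be sketched with a toy simulation (all variances are illustrative assumptions, not the paper's cluster-count formalism):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_surveys = 16, 2000

# A shared super-sample mode shifts every subregion of a survey
# coherently; independent small-scale scatter sits on top of it.
super_mode = 2.0 * rng.standard_normal((n_surveys, 1))   # SSC term
local = rng.standard_normal((n_surveys, n_sub))          # small-scale term
counts = 10.0 + super_mode + local

# "True" variance of the survey mean, from independent realizations
true_var = counts.mean(axis=1).var()

# Internal (subsampling) estimate from a single survey: scatter of
# its own subregions -- blind to the mode they all share.
internal_var = counts[0].var(ddof=1) / n_sub
```

Because the super-sample mode is constant within a survey, it never appears in the subregion-to-subregion scatter, so the internal estimate recovers only the small-scale term and badly underpredicts the true variance.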

  4. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    PubMed

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
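The subsample-enumeration comparison described above can be sketched as follows. The decision rule here is a hypothetical two-outcome pass/fail threshold invented purely for illustration; the GPEI C-LQAS classification rules are more involved:

```python
import numpy as np
from itertools import combinations

# Hypothetical two-outcome rule, for illustration only: a lot
# "passes" if mean sampled coverage reaches 85%.
def classify(coverage, threshold=0.85):
    return coverage.mean() >= threshold

rng = np.random.default_rng(0)
# Per-cluster coverage with the larger observed heterogeneity (~17% SD)
clusters = np.clip(rng.normal(0.85, 0.17, size=16), 0.0, 1.0)

full = classify(clusters)                  # 16-cluster classification
subs = list(combinations(range(16), 6))    # all C(16,6) = 8008 subsamples
match = np.mean([classify(clusters[list(c)]) == full for c in subs])
```

`match` plays the role of the paper's agreement rate between 6-cluster subsample classifications and the expanded 16-cluster classification for one lot.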

  5. Comparison of methods for the concentration of suspended sediment in river water for subsequent chemical analysis

    USGS Publications Warehouse

    Horowitz, A.J.

    1986-01-01

    Centrifugation, settling/centrifugation, and backflush-filtration procedures have been tested for the concentration of suspended sediment from water for subsequent trace-metal analysis. Either of the first two procedures is comparable with in-line filtration and can be carried out precisely, accurately, and with a facility that makes the procedures amenable to large-scale sampling and analysis programs. There is less potential for post-sampling alteration of suspended sediment-associated metal concentrations with the centrifugation procedure because sample stabilization is accomplished more rapidly than with settling/centrifugation. Sample preservation can be achieved by chilling. Suspended sediment associated metal levels can best be determined by direct analysis but can also be estimated from the difference between a set of unfiltered-digested and filtered subsamples. However, when suspended sediment concentrations (<150 mg/L) or trace-metal levels are low, the direct analysis approach makes quantitation more accurate and precise and can be accomplished with simpler analytical procedures.

  6. Critical points of the cosmic velocity field and the uncertainties in the value of the Hubble constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hao; Naselsky, Pavel; Mohayaee, Roya, E-mail: liuhao@nbi.dk, E-mail: roya@iap.fr, E-mail: naselsky@nbi.dk

    2016-06-01

    The existence of critical points for the peculiar velocity field is a natural feature of the correlated vector field. These points appear at the junctions of velocity domains with different orientations of their averaged velocity vectors. Since peculiar velocities are an important cause of the scatter in the Hubble expansion rate, we propose that a more precise determination of the Hubble constant can be made by restricting analysis to a subsample of observational data containing only the zones around the critical points of the peculiar velocity field, associated with voids and saddle points. On large scales, the critical points, where the first derivative of the gravitational potential vanishes, can easily be identified using the density field and classified by the behavior of the Hessian of the gravitational potential. We use high-resolution N-body simulations to show that these regions are stable in time and hence are excellent tracers of the initial conditions. Furthermore, we show that the variance of the Hubble flow can be substantially minimized by restricting observations to the subsample of such regions of vanishing velocity instead of aiming at increasing the statistics by averaging indiscriminately over the full data sets, as is the common approach.

  7. Precise Relative Earthquake Depth Determination Using Array Processing Techniques

    NASA Astrophysics Data System (ADS)

    Florez, M. A.; Prieto, G. A.

    2014-12-01

    The mechanism for intermediate depth and deep earthquakes is still under debate. The temperatures and pressures are above the point where ordinary fractures ought to occur. Key to constraining this mechanism is the precise determination of hypocentral depth. It is well known that using depth phases allows for significant improvement in event depth determination; however, routinely and systematically picking such phases for teleseismic or regional arrivals is problematic due to poor signal-to-noise ratios around the pP and sP phases. To overcome this limitation we have taken advantage of the additional information carried by seismic arrays. We have used beamforming and velocity spectral analysis techniques to precisely measure pP-P and sP-P differential travel times. These techniques are further extended to achieve subsample accuracy and to allow for events where the signal-to-noise ratio is close to or even less than 1.0. The individual estimates obtained at different subarrays for a pair of earthquakes can be combined using a double-difference technique in order to precisely map seismicity in regions where it is tightly clustered. We illustrate these methods using data from the recent M 7.9 Alaska earthquake and its aftershocks, as well as data from the Bucaramanga nest in northern South America, arguably the densest and most active intermediate-depth earthquake nest in the world.
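A minimal delay-and-sum beamforming sketch (synthetic plane-wave arrival; station geometry, slowness, and noise level are all illustrative) shows how stacking array traces suppresses noise and sharpens the arrival-time estimates on which such differential measurements rest:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_t, dt = 25, 400, 0.01
t = np.arange(n_t) * dt
x = rng.uniform(-5.0, 5.0, n_sta)            # station offsets (km)
slowness = 0.06                              # plane-wave slowness (s/km)
wavelet = np.exp(-((t - 2.0) / 0.05) ** 2)   # arrival at 2 s at x = 0

# Synthetic noisy traces: the wavelet arrives later at larger offsets
traces = np.empty((n_sta, n_t))
for i in range(n_sta):
    traces[i] = np.interp(t - slowness * x[i], t, wavelet)
    traces[i] += 0.5 * rng.standard_normal(n_t)

def beam(traces, s):
    """Delay-and-sum: undo each trace's moveout s*x and stack,
    suppressing incoherent noise by roughly sqrt(n_sta)."""
    out = np.zeros(n_t)
    for i in range(n_sta):
        out += np.interp(t + s * x[i], t, traces[i])
    return out / n_sta

stacked = beam(traces, slowness)
t_pick = t[int(np.argmax(stacked))]          # arrival-time estimate
```

On a single trace the arrival is barely above the noise; on the 25-station beam it stands out clearly, which is what makes picking weak depth phases feasible.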

  8. A GPU-Accelerated 3-D Coupled Subsample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-04-01

    The primary objective of this paper was to extend a previously published 2-D coupled subsample tracking algorithm for 3-D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3-D coupled subsample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking phantom and in vivo breast ultrasound data. The performance of this 3-D subsample tracking algorithm was compared with the conventional 3-D quadratic subsample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3-D subsample estimation algorithm can provide high-quality strain data (i.e., high correlation between the predeformation and the motion-compensated postdeformation radio frequency echo data and high contrast-to-noise ratio strain images), as compared with the conventional 3-D quadratic subsample algorithm. Using the GPU implementation of the 3-D speckle tracking algorithm, volumetric strain data can be achieved relatively fast (approximately 20 s per volume [2.5 cm ×2.5 cm ×2.5 cm]).
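In 1-D, the "conventional quadratic subsample estimation" baseline amounts to fitting a parabola through the three correlation values around the integer-lag peak. A self-contained numpy sketch (synthetic signal, smoothing kernel, and shift chosen for illustration):

```python
import numpy as np

def quadratic_subsample_peak(c, k):
    """Parabola through c[k-1], c[k], c[k+1]; returns the vertex
    offset from k, in fractions of a sample."""
    denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
    return 0.5 * (c[k - 1] - c[k + 1]) / denom

# Toy 1-D speckle tracking: shift a smooth random signal by a
# non-integer lag and recover the lag to subsample precision.
rng = np.random.default_rng(3)
n = 256
t = np.arange(n)
sig = np.convolve(rng.standard_normal(n), np.hanning(9), mode="same")
true_shift = 5.3
shifted = np.interp(t - true_shift, t, sig)   # sig delayed by 5.3 samples

lags = np.arange(-16, 17)
core = slice(16, n - 16)
corr = np.array([np.dot(sig[core], np.roll(shifted, -lag)[core])
                 for lag in lags])

k = int(np.argmax(corr))
est = lags[k] + quadratic_subsample_peak(corr, k)
```

The integer-lag peak alone would give 5; the parabolic vertex recovers the fractional part. The coupled estimator in the paper replaces this independent per-axis fit with a jointly regularized 3-D estimate.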

  9. Adaptive Chroma Subsampling-binding and Luma-guided Chroma Reconstruction Method for Screen Content Images.

    PubMed

    Chung, Kuo-Liang; Huang, Chi-Chao; Hsu, Tsu-Chun

    2017-09-04

    In this paper, we propose a novel adaptive chroma subsampling-binding and luma-guided (ASBLG) chroma reconstruction method for screen content images (SCIs). After receiving the decoded luma and subsampled chroma image from the decoder, a fast winner-first voting strategy is proposed to identify the chroma subsampling scheme used prior to compression. Then, the decoded luma image is subsampled with the same scheme that was applied to the chroma image, so that an accurate correlation can be established between the subsampled decoded luma image and the decoded subsampled chroma image. Accordingly, an adaptive sliding window-based and luma-guided chroma reconstruction method is proposed. The related computational complexity analysis is also provided. We adopt two quality metrics to evaluate performance: the color peak signal-to-noise ratio (CPSNR) of the reconstructed chroma images and SCIs, and the gradient-based structure similarity index (CGSS) of the reconstructed SCIs. Based on 26 typical test SCIs and 6 JCT-VC test screen content video sequences (SCVs), several experiments show that on average, the CPSNR gains of all the reconstructed UV images by 4:2:0(A)-ASBLG, SCIs by 4:2:0(MPEG-B)-ASBLG, and SCVs by 4:2:0(A)-ASBLG are 2.1 dB, 1.87 dB, and 1.87 dB, respectively, when compared with that of the other combinations. Specifically, in terms of CPSNR and CGSS, CSBILINEAR-ASBLG for the test SCIs and CSBICUBIC-ASBLG for the test SCVs outperform the existing state-of-the-art comparative combinations, where CSBILINEAR and CSBICUBIC denote the luma-aware based chroma subsampling schemes by Wang et al.
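For readers unfamiliar with the notation, here is a minimal sketch of plain 4:2:0(A)-style subsampling, nearest-neighbour reconstruction, and the CPSNR metric on a synthetic chroma ramp. The ASBLG method itself is luma-guided and considerably more elaborate; this only illustrates the operations being measured:

```python
import numpy as np

def subsample_420_a(chroma):
    """4:2:0(A)-style subsampling: keep the top-left sample
    of each 2x2 block."""
    return chroma[::2, ::2]

def upsample_nn(sub):
    """Nearest-neighbour reconstruction back to full resolution."""
    return np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

def cpsnr(ref, rec):
    """Peak signal-to-noise ratio of a reconstructed chroma plane (dB)."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Synthetic 64x64 horizontal chroma ramp (4 levels per column step)
chroma = np.tile(np.arange(64, dtype=float) * 4.0, (64, 1))
rec = upsample_nn(subsample_420_a(chroma))
quality = cpsnr(chroma, rec)
```

On this ramp every odd column is off by 4 levels after reconstruction, giving a CPSNR of about 39 dB; smarter, luma-guided reconstruction aims to recover exactly such structure.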

  10. A sub-sampled approach to extremely low-dose STEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, A.; Luzi, L.; Yang, H.

    The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e^- Å^-2) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).

  11. 40 CFR 761.350 - Subsampling from composite samples.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Subsampling from composite samples...-Site Disposal, in Accordance With § 761.61 § 761.350 Subsampling from composite samples. (a) Preparing the composite. Composite the samples (eight from a flattened pile; eight or more from a conical pile...

  12. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography

    PubMed Central

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-01

    The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm for 3D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be achieved relatively fast (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]). PMID:28166493

  13. Analysis of Clinical Cohort Data Using Nested Case-control and Case-cohort Sampling Designs. A Powerful and Economical Tool.

    PubMed

    Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M

    2015-01-01

    Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort and the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons, for whom covariate information is required. The formulas for the power of a nested case-control design and of a case-cohort design are directly connected to the power of a cohort study using the well known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients using a sampling design instead of analyzing the full cohort.
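Schoenfeld's formula referenced above can be computed directly. A stdlib-only sketch; the hazard ratio, significance level, and power below are illustrative choices, not values from the study:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, p_exposed=0.5, alpha=0.05, power=0.80):
    """Schoenfeld's formula: number of events required to detect a
    hazard ratio `hr` for a binary covariate in a Cox model."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # power
    return (z_a + z_b) ** 2 / (p_exposed * (1 - p_exposed)
                               * math.log(hr) ** 2)

# Roughly 65-66 events are needed to detect HR = 2 at 80% power
events = schoenfeld_events(hr=2.0)
```

Because the formula counts events rather than subjects, it is the natural bridge to the nested case-control and case-cohort power formulas, which inflate this event count by a design-efficiency factor.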

  14. The Impact of Subsampling on MODIS Level-3 Statistics of Cloud Optical Thickness and Effective Radius

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros

    2004-01-01

    The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of 1-km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second-order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
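The central point, that subsampling errors behave like random noise and shrink under aggregation, can be reproduced in miniature. A toy gamma-distributed field stands in for optical thickness; the granule size and subsampling factor are illustrative, not the MODIS grid:

```python
import numpy as np

rng = np.random.default_rng(7)
n_granules = 400
field = rng.gamma(2.0, 5.0, size=(n_granules, 100, 100))  # tau-like field

full_mean = field.mean(axis=(1, 2))               # "true" granule means
sub_mean = field[:, ::5, ::5].mean(axis=(1, 2))   # every-5th-pixel subsampling
err = sub_mean - full_mean

typical = np.abs(err).mean()     # typical single-granule error
aggregated = abs(err.mean())     # error of the 400-granule average
```

The per-granule errors are noticeable, but their average over many granules is far smaller, mirroring the drop in mean errors the study observes for monthly or latitudinal averages.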

  15. COMPARISON OF LABORATORY SUBSAMPLING METHODS OF BENTHIC SAMPLES FROM BOATABLE RIVERS USING ACTUAL AND SIMULATED COUNT DATA

    EPA Science Inventory

    We examined the effects of using a fixed-count subsample of 300 organisms on metric values using macroinvertebrate samples collected with 3 field sampling methods at 12 boatable river sites. For each sample, we used metrics to compare an initial fixed-count subsample of approxima...

  16. Dependence of the clustering properties of galaxies on stellar velocity dispersion in the Main galaxy sample of SDSS DR10

    NASA Astrophysics Data System (ADS)

    Deng, Xin-Fa; Song, Jun; Chen, Yi-Qing; Jiang, Peng; Ding, Ying-Ping

    2014-08-01

    Using two volume-limited Main galaxy samples of the Sloan Digital Sky Survey Data Release 10 (SDSS DR10), we investigate the dependence of the clustering properties of galaxies on stellar velocity dispersion by cluster analysis. It is found that in the luminous volume-limited Main galaxy sample, except at r=1.2, richer and larger systems can be more easily formed in the large stellar velocity dispersion subsample, while in the faint volume-limited Main galaxy sample, at r≥0.9, an opposite trend is observed. According to statistical analyses of the multiplicity functions, we conclude that in both volume-limited Main galaxy samples, small stellar velocity dispersion galaxies preferentially form isolated galaxies, close pairs and small groups, while large stellar velocity dispersion galaxies preferentially inhabit dense groups and clusters. However, we note a difference between the two volume-limited Main galaxy samples: in the faint volume-limited Main galaxy sample, at r≥0.9, the small stellar velocity dispersion subsample has a higher proportion of galaxies in superclusters (n≥200) than the large stellar velocity dispersion subsample.

  17. Estimating occupancy and abundance of stream amphibians using environmental DNA from filtered water samples

    USGS Publications Warehouse

    Pilliod, David S.; Goldberg, Caren S.; Arkle, Robert S.; Waits, Lisette P.

    2013-01-01

    Environmental DNA (eDNA) methods for detecting aquatic species are advancing rapidly, but with little evaluation of field protocols or precision of resulting estimates. We compared sampling results from traditional field methods with eDNA methods for two amphibians in 13 streams in central Idaho, USA. We also evaluated three water collection protocols and the influence of sampling location, time of day, and distance from animals on eDNA concentration in the water. We found no difference in detection or amount of eDNA among water collection protocols. eDNA methods had slightly higher detection rates than traditional field methods, particularly when species occurred at low densities. eDNA concentration was positively related to field-measured density, biomass, and proportion of transects occupied. Precision of eDNA-based abundance estimates increased with the amount of eDNA in the water and the number of replicate subsamples collected. eDNA concentration did not vary significantly with sample location in the stream, time of day, or distance downstream from animals. Our results further advance the implementation of eDNA methods for monitoring aquatic vertebrates in stream habitats.

  18. A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George

    2016-06-01

    State-of-the-art data acquisition systems for small animal imaging gamma ray detectors often rely on free running Analog to Digital Converters (ADCs) and high density Field Programmable Gate Array (FPGA) devices for digital signal processing. In this work, a sub-sampling acquisition approach is proposed that exploits a priori information regarding the shape of the obtained detector pulses. The output pulse shape depends on the response of the scintillation crystal, the photodetector's properties and the amplifier/shaper operation. Using these known characteristics of the detector pulses prior to digitization, one can model the voltage pulse derived from the shaper (a low-pass filter, last in the front-end electronics chain) in order to reduce the required sampling rate of the ADCs. Pulse shape estimation is then feasible by fitting a small number of measurements. In particular, the proposed sub-sampling acquisition approach relies on a bi-exponential modeling of the pulse shape. We show that the properties of the pulse that are relevant for Single Photon Emission Computed Tomography (SPECT) event detection (i.e., position and energy) can be calculated by collecting just a small fraction of the number of samples usually collected in data acquisition systems used so far. Compared to the standard digitization process, the proposed sub-sampling approach allows the use of free running ADCs with a sampling rate reduced by a factor of 5. Two small detectors consisting of Cerium doped Gadolinium Aluminum Gallium Garnet (Gd3Al2Ga3O12 : Ce or GAGG:Ce) pixelated arrays (array elements: 2 × 2 × 5 mm3 and 1 × 1 × 10 mm3 respectively) coupled to a Position Sensitive Photomultiplier Tube (PSPMT) were used for experimental evaluation. The two detectors were used to obtain raw images and energy histograms under 140 keV and 661.7 keV irradiation respectively. The sub-sampling acquisition technique (10 MHz sampling rate) was compared with a standard acquisition method (52 MHz sampling rate) in terms of energy resolution and image signal-to-noise ratio for both gamma ray energies. The Levenberg-Marquardt (LM) non-linear least-squares algorithm was used in post-processing to fit the acquired data with the proposed model. The results showed that the analog pulses prior to digitization are estimated with high accuracy after fitting with the bi-exponential model.
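A numpy-only sketch of the idea, with a coarse grid search standing in for the Levenberg-Marquardt fit used in the paper; the pulse parameters, sampling rates, and noise level are illustrative assumptions:

```python
import numpy as np

def pulse(t, A, tr, td):
    """Bi-exponential detector pulse: rise time tr, decay time td."""
    return A * (np.exp(-t / td) - np.exp(-t / tr))

# "Analog" pulse and its coarse, sub-sampled measurements
t_full = np.linspace(0.0, 2.0, 2000)             # microseconds
rng = np.random.default_rng(0)
t_sub = t_full[::100]                            # 20 coarse samples
v_sub = pulse(t_sub, 1.0, 0.05, 0.4) + 0.01 * rng.standard_normal(t_sub.size)

# Grid search over (tr, td); amplitude solved linearly at each node
best = None
for tr in np.linspace(0.02, 0.10, 41):
    for td in np.linspace(0.20, 0.60, 41):
        shape = np.exp(-t_sub / td) - np.exp(-t_sub / tr)
        A = (shape @ v_sub) / (shape @ shape)
        resid = np.sum((v_sub - A * shape) ** 2)
        if best is None or resid < best[0]:
            best = (resid, A, tr, td)

_, A_hat, tr_hat, td_hat = best
energy = A_hat * (td_hat - tr_hat)   # pulse integral, proportional to energy
```

Because the model has only three parameters, a handful of sub-Nyquist samples suffices to pin down the pulse integral (the energy estimate), which is the property the paper's SPECT event detection needs.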

  19. A prototype splitter apparatus for dividing large catches of small fish

    USGS Publications Warehouse

    Stapanian, Martin A.; Edwards, William H.

    2012-01-01

    Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.

  20. Effects of subsampling of passive acoustic recordings on acoustic metrics.

    PubMed

    Thomisch, Karolin; Boebel, Olaf; Zitterbart, Daniel P; Samaran, Flore; Van Parijs, Sofie; Van Opzeeland, Ilse

    2015-07-01

    Passive acoustic monitoring is an important tool in marine mammal studies. However, logistics and finances frequently constrain the number and servicing schedules of acoustic recorders, requiring a trade-off between deployment periods and sampling continuity, i.e., the implementation of a subsampling scheme. Optimizing such schemes to each project's specific research questions is desirable. This study investigates the impact of subsampling on the accuracy of two common metrics, acoustic presence and call rate, for different vocalization patterns (regimes) of baleen whales: (1) variable vocal activity, (2) vocalizations organized in song bouts, and (3) vocal activity with diel patterns. To this end, above metrics are compared for continuous and subsampled data subject to different sampling strategies, covering duty cycles between 50% and 2%. The results show that a reduction of the duty cycle impacts negatively on the accuracy of both acoustic presence and call rate estimates. For a given duty cycle, frequent short listening periods improve accuracy of daily acoustic presence estimates over few long listening periods. Overall, subsampling effects are most pronounced for low and/or temporally clustered vocal activity. These findings illustrate the importance of informed decisions when applying subsampling strategies to passive acoustic recordings or analyses for a given target species.
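The finding that, at equal duty cycle, frequent short listening periods beat few long ones can be reproduced with a toy bout simulation (bout length, bout rate, and the duty-cycle settings below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
minutes, n_days = 24 * 60, 500

# Clustered vocal activity: ~2 one-hour song bouts per day
calls = np.zeros((n_days, minutes), dtype=bool)
for d in range(n_days):
    for _ in range(rng.poisson(2)):
        start = rng.integers(0, minutes - 60)
        calls[d, start:start + 60] = True

present = calls.any(axis=1)      # true daily acoustic presence

def detected(calls, period, duty):
    """Duty-cycled recorder: listen for the first duty*period minutes
    of every period-minute cycle."""
    on = (np.arange(calls.shape[1]) % period) < duty * period
    return (calls & on).any(axis=1)

# Same 10% duty cycle, different scheduling
acc_long = (detected(calls, 600, 0.1) == present).mean()   # 60 min / 10 h
acc_short = (detected(calls, 60, 0.1) == present).mean()   # 6 min / 1 h
```

With hourly 6-minute windows, every one-hour bout necessarily overlaps a listening window, so daily presence is recovered almost perfectly; a single 60-minute window per 10 hours misses many bouts outright.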

  1. Low-Power SOI CMOS Transceiver

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene (Technical Monitor); Cheruiyot, K.; Cothern, J.; Huang, D.; Singh, S.; Zencir, E.; Dogan, N.

    2003-01-01

    The work aims at developing a low-power Silicon-on-Insulator Complementary Metal Oxide Semiconductor (SOI CMOS) transceiver for deep-space communications. The RF receiver must accomplish the following tasks: (a) select the desired radio channel and reject other radio signals, (b) amplify the desired radio signal and translate it back to baseband, and (c) detect and decode the information with a low bit error rate (BER). To minimize cost and achieve a high level of integration, the receiver architecture should use the fewest possible external filters and passive components. It should also consume the least power to minimize battery cost, size, and weight. One of the most stringent requirements for deep-space communication is low-power operation. Our study identified two candidate architectures that meet these requirements: (1) the low-IF receiver and (2) the sub-sampling receiver. The low-IF receiver uses a minimum number of external components. Compared to the zero-IF (direct-conversion) architecture, it has less severe offset and flicker noise problems. The sub-sampling receiver amplifies the RF signal and samples it using a track-and-hold subsampling mixer. These architectures provide a low-power solution for short-range communications missions on Mars. Accomplishments to date include: (1) system-level design and simulation of a double-differential PSK receiver, (2) implementation of the Honeywell SOI CMOS process design kit (PDK) in Cadence design tools, (3) design of test circuits to investigate relationships between layout techniques, geometry, and low-frequency noise in SOI CMOS, (4) model development and verification of on-chip spiral inductors in the SOI CMOS process, (5) design/implementation of a low-power low-noise amplifier (LNA) and mixer for the low-IF receiver, and (6) design/implementation of a high-gain LNA for the sub-sampling receiver.
Our initial results show that a substantial improvement in power consumption is achieved using SOI CMOS compared with a standard CMOS process. Potential advantages of SOI CMOS for deep-space communication electronics include: (1) radiation hardness, (2) low-power operation, and (3) System-on-Chip (SoC) solutions.

  2. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent the adoption of these advances in clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high speed and over extended depth ranges, but with a lower acquisition bandwidth than that required by conventional approaches. Optically subsampled laser sources use a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth-limited frequency window. By detecting the complex fringe signals and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight key principles behind optical-domain subsampled imaging, and demonstrate this principle experimentally using a polygon-filter-based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343
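
    The aliasing-and-unwrapping idea at the heart of the abstract can be sketched numerically. The toy below (all frequencies and rates are invented, in arbitrary units) shows that a complex fringe whose frequency lies outside the sampled band appears at its alias, and that knowing the depth-constrained window of width equal to the sampling rate is enough to recover the true frequency uniquely.

```python
import cmath, math

fs = 100.0                  # effective (subsampled) sampling rate, arbitrary units
f_true = 730.0              # fringe frequency outside [0, fs): it will alias
window = (700.0, 800.0)     # assumed depth-constrained band, width == fs

# Complex (I/Q) samples of the fringe taken at the low rate fs
n_samples = 64
samples = [cmath.exp(2j * math.pi * f_true * n / fs) for n in range(n_samples)]

# Alias frequency from the mean phase increment between consecutive samples
incs = [samples[n + 1] / samples[n] for n in range(n_samples - 1)]
mean_inc = sum(incs) / len(incs)
f_alias = (cmath.phase(mean_inc) / (2 * math.pi)) * fs % fs

# Unwrap: the unique frequency in the known window that aliases to f_alias
k = math.ceil((window[0] - f_alias) / fs)
f_rec = f_alias + k * fs
```

    With complex detection the alias is unambiguous modulo fs, so a signal confined to any one window of width fs is recovered exactly; this is the depth-constrained assumption the abstract relies on.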

  3. Three-factor structure for Epistemic Belief Inventory: A cross-validation study

    PubMed Central

    2017-01-01

    Research on epistemic beliefs has been hampered by lack of validated models and measurement instruments. The most widely used instrument is the Epistemological Questionnaire, which has been criticized for validity, and it has been proposed a new instrument based in the Epistemological Questionnaire: the Epistemic Belief Inventory. The Spanish-language version of Epistemic Belief Inventory was applied to 1,785 Chilean high school students. Exploratory and confirmatory factor analyses in independent subsamples were performed. A three factor structure emerged and was confirmed. Reliability was comparable to other studies, and the factor structure was invariant among randomized subsamples. The structure that was found does not replicate the one proposed originally, but results are interpreted in light of embedded systemic model of epistemological beliefs. PMID:28278258

  4. A New Approach of Personality and Psychiatric Disorders: A Short Version of the Affective Neuroscience Personality Scales

    PubMed Central

    Pingault, Jean-Baptiste; Falissard, Bruno; Côté, Sylvana; Berthoz, Sylvie

    2012-01-01

    Background The Affective Neuroscience Personality Scales (ANPS) is an instrument designed to assess endophenotypes related to activity in the core emotional systems that have emerged from affective neuroscience research. It operationalizes six emotional endophenotypes with empirical evidence derived from ethology, neural analyses and pharmacology: PLAYFULNESS/joy, SEEKING/interest, CARING/nurturance, ANGER/rage, FEAR/anxiety, and SADNESS/separation distress. We aimed to provide a short version of this questionnaire (ANPS-S). Methodology/Principal Findings We used a sample of 830 young French adults which was randomly split into two subsamples. The first subsample was used to select the items for the short scales. The second subsample and an additional sample of 431 Canadian adults served to evaluate the psychometric properties of the short instrument. The ANPS-S was similar to the long version regarding intercorrelations between the scales and gender differences. The ANPS-S had satisfactory psychometric properties, including factorial structure, unidimensionality of all scales, and internal consistency. The scores from the short version were highly correlated with the scores from the long version. Conclusions/Significance The short ANPS proves to be a promising instrument to assess endophenotypes for psychiatrically relevant science. PMID:22848510

  5. A sub-sampled approach to extremely low-dose STEM

    DOE PAGES

    Stevens, A.; Luzi, L.; Yang, H.; ...

    2018-01-22

    The inpainting of deliberately and randomly sub-sampled images offers a potential means to image specimens at high resolution and under extremely low-dose conditions (≤1 e⁻/Å²) using a scanning transmission electron microscope. We show that deliberate sub-sampling acquires images at least an order of magnitude faster than conventional low-dose methods for an equivalent electron dose. More importantly, when adaptive sub-sampling is implemented to acquire the images, there is a significant increase in the resolution and sensitivity which accompanies the increase in imaging speed. Lastly, we demonstrate the potential of this method for beam-sensitive materials and in-situ observations by experimentally imaging the node distribution in a metal-organic framework.
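
    A minimal sketch of the inpainting idea, assuming a smooth synthetic "micrograph" and a far simpler reconstruction than the paper's: measure a random 25% of pixels, then fill the rest by diffusion (each missing pixel repeatedly replaced by the mean of its neighbours). All sizes and rates are invented.

```python
import math, random

random.seed(7)
N = 32                                   # toy image size
truth = [[math.sin(i / 5.0) * math.cos(j / 5.0) for j in range(N)] for i in range(N)]

# Randomly sub-sample 25% of pixel positions (the "measured" probe locations)
observed = {(i, j) for i in range(N) for j in range(N) if random.random() < 0.25}

# Measured pixels keep their value; the rest start at 0
img = [[truth[i][j] if (i, j) in observed else 0.0 for j in range(N)] for i in range(N)]
zero_fill = [row[:] for row in img]      # baseline: no inpainting at all

# Diffusion inpainting: unmeasured pixels relax to the mean of their
# in-bounds neighbours while measured pixels stay fixed
for _ in range(200):
    nxt = [row[:] for row in img]
    for i in range(N):
        for j in range(N):
            if (i, j) in observed:
                continue
            nbrs = [img[i + di][j + dj]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < N and 0 <= j + dj < N]
            nxt[i][j] = sum(nbrs) / len(nbrs)
    img = nxt

def mae(a):
    return sum(abs(a[i][j] - truth[i][j]) for i in range(N) for j in range(N)) / N ** 2

err_inpaint, err_zero = mae(img), mae(zero_fill)
```

    The measured dose scales with the fraction of pixels visited, so a 25% sampling mask corresponds to a 4x dose/time reduction here; the paper's dictionary-learning inpainting is substantially more capable than this diffusion fill.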

  7. Efficient determination of the uncertainty for the optimization of SPECT system design: a subsampled fisher information matrix.

    PubMed

    Fuin, Niccolo; Pedemonte, Stefano; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F

    2014-03-01

    System designs in single photon emission computed tomography (SPECT) can be evaluated based on the fundamental trade-off between bias and variance that can be achieved in the reconstruction of emission tomograms. This trade-off can be derived analytically using Cramer-Rao-type bounds, which imply the calculation and the inversion of the Fisher information matrix (FIM). The inverse of the FIM expresses the uncertainty associated with the tomogram, enabling the comparison of system designs. However, computing, storing and inverting the FIM is not practical with 3-D imaging systems. In order to tackle the problem of the computational load in calculating the inverse of the FIM, a method based on the calculation of the local impulse response and the variance, in a single point, from a single row of the FIM, has been previously proposed for system design. However, this approximation (the circulant approximation) does not capture the global interdependence between the variables in shift-variant systems such as SPECT, and cannot account, for example, for data truncation or missing data. Our new formulation relies on subsampling the FIM. The FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. Every element of the FIM at the grid points is calculated exactly, accounting for the acquisition geometry and for the object. This new formulation reduces the computational complexity of estimating the uncertainty, but nevertheless accounts for the global interdependence between the variables, enabling the exploration of design spaces hindered by the circulant approximation. The graphics processing unit accelerated implementation of the algorithm further reduces the computation times, making the algorithm a good candidate for real-time optimization of adaptive imaging systems. This paper describes the subsampled FIM formulation and implementation details.
The advantages and limitations of the new approximation are explored, in comparison with the circulant approximation, in the context of design optimization of a parallel-hole collimator SPECT system and of an adaptive imaging system (similar to the commercially available D-SPECT).
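
    The subsampling idea can be sketched for the simplest case, a linear-Gaussian model where the FIM is AᵀA/σ². The system matrix, voxel counts and grid spacing below are invented; the point is that FIM entries restricted to a coarse voxel grid can be computed directly and exactly, with quadratically less storage than the full FIM.

```python
import numpy as np

rng = np.random.default_rng(0)

n_det, n_vox = 200, 16 * 16            # toy detector bins and voxels
A = rng.random((n_det, n_vox))         # toy system (projection) matrix
sigma2 = 1.0

# Full FIM for a linear-Gaussian model: F = A^T A / sigma^2  (n_vox x n_vox)
F_full = A.T @ A / sigma2

# Sub-sampled FIM: keep only voxels on a coarse grid (every 4th voxel).
# Each retained element is computed directly from the system matrix,
# so no full FIM ever needs to be formed or stored.
grid = np.arange(0, n_vox, 4)
F_sub = A[:, grid].T @ A[:, grid] / sigma2

storage_ratio = F_full.size / F_sub.size   # 16x smaller here
```

    Inverting `F_sub` then gives an uncertainty estimate at the grid voxels that still couples all retained variables globally, which is what the circulant (single-row) approximation gives up.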

  8. The Hadley circulation: assessing NCEP/NCAR reanalysis and sparse in-situ estimates

    NASA Astrophysics Data System (ADS)

    Waliser, D. E.; Shi, Zhixiong; Lanzante, J. R.; Oort, A. H.

    We present a comparison of the zonal mean meridional circulations derived from monthly in situ data (i.e. radiosondes and ship reports) and from the NCEP/NCAR reanalysis product. To facilitate the interpretation of the results, a third estimate of the mean meridional circulation is produced by subsampling the reanalysis at the locations where radiosonde and surface ship data are available for the in situ calculation. This third estimate, known as the subsampled estimate, is compared to the complete reanalysis estimate to assess biases in conventional, in situ estimates of the Hadley circulation associated with the sparseness of the data sources (i.e., radiosonde network). The subsampled estimate is also compared to the in situ estimate to assess the biases introduced into the reanalysis product by the numerical model, initialization process and/or indirect data sources such as satellite retrievals. The comparisons suggest that a number of qualitative differences between the in situ and reanalysis estimates are mainly associated with the sparse sampling and simplified interpolation schemes associated with in situ estimates. These differences include: (1) a southern Hadley cell that consistently extends up to 200 hPa in the reanalysis, whereas the bulk of the circulation for the in situ and subsampled estimates tends to be confined to the lower half of the troposphere, (2) more well-defined and consistent poleward limits of the Hadley cells in the reanalysis compared to the in situ and subsampled estimates, and (3) considerably less variability in magnitude and latitudinal extent of the Ferrel cells and southern polar cell exhibited in the reanalysis estimate compared to the in situ and subsampled estimates. Quantitative comparison shows that the subsampled estimate, relative to the reanalysis estimate, produces a stronger northern Hadley cell (~20%), a weaker southern Hadley cell (~20-60%), and weaker Ferrel cells in both hemispheres.
These differences stem from poorly measured oceanic regions which necessitate significant interpolation over broad regions. Moreover, they help to pinpoint specific shortcomings in the present and previous in situ estimates of the Hadley circulation. Comparisons between the subsampled and in situ estimates suggest that the subsampled estimate produces a slightly stronger Hadley circulation in both hemispheres, with the relative differences in some seasons as large as 20-30%. These differences suggest that the mean meridional circulation associated with the NCEP/NCAR reanalysis is more energetic than observations suggest. Examination of ENSO-related changes to the Hadley circulation suggests that the in situ and subsampled estimates significantly overestimate the effects of ENSO on the Hadley circulation due to the reliance on sparsely distributed data. While all three estimates capture the large-scale region of low-level equatorial convergence near the dateline that occurs during El Nino, the in situ and subsampled estimates fail to effectively reproduce the large-scale areas of equatorial mass divergence to the west and east of this convergence area, leading to an overestimate of the effects of ENSO on the zonal mean circulation.
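
    The core of the subsampling diagnostic is that evaluating a gridded field only at sparse, unevenly placed "station" latitudes biases the estimate. A toy version, with an invented zonal field and an invented station network clustered in the northern hemisphere (as radiosondes historically were):

```python
import math

def field(lat_deg):
    """Toy zonal-mean quantity: cos^2(latitude)."""
    return math.cos(math.radians(lat_deg)) ** 2

def area_weighted_mean(lats):
    """Cosine-of-latitude area weighting of the field at the given latitudes."""
    w = [math.cos(math.radians(l)) for l in lats]
    return sum(wi * field(l) for wi, l in zip(w, lats)) / sum(w)

lats_dense = [-89.0 + 2.0 * i for i in range(90)]      # full 2-degree grid
lats_stations = [30.0 + 3.0 * i for i in range(14)]    # hypothetical 30N-69N network

full_est = area_weighted_mean(lats_dense)              # "reanalysis" estimate
subsampled_est = area_weighted_mean(lats_stations)     # "station" estimate
true_mean = 2.0 / 3.0   # analytic area-weighted global mean of cos^2
```

    The dense estimate lands on the analytic value while the station-only estimate is biased low, which is the kind of sampling artifact the subsampled reanalysis estimate is designed to isolate.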

  9. GPS-based household interview survey for the Cincinnati, Ohio Region.

    DOT National Transportation Integrated Search

    2012-02-01

    Methods for Conducting a Large-Scale GPS-Only Survey of Households: Past Household Travel Surveys (HTS) in the United States have only piloted small subsamples of Global Positioning Systems (GPS) completes compared with 1-2 day self-reported travel i...

  10. Subsampling phase retrieval for rapid thermal measurements of heated microstructures.

    PubMed

    Taylor, Lucas N; Talghader, Joseph J

    2016-07-15

    A subsampling technique for real-time phase retrieval of high-speed thermal signals is demonstrated with heated metal lines such as those found in microelectronic interconnects. The thermal signals were produced by applying a current through aluminum resistors deposited on soda-lime-silica glass, and the resulting refractive index changes were measured using a Mach-Zehnder interferometer with a microscope objective and high-speed camera. The temperatures of the resistors were measured both by the phase-retrieval method and by monitoring the resistance of the aluminum lines. The method used to analyze the phase is at least 60× faster than the state of the art but it maintains a small spatial phase noise of 16 nm, remaining comparable to the state of the art. For slowly varying signals, the system is able to perform absolute phase measurements over time, distinguishing temperature changes as small as 2 K. With angular scanning or structured illumination improvements, the system could also perform fast thermal tomography.

  11. Construct Validity of the Behavior Assessment System for Children (BASC) Self-Report of Personality: Evidence from Adolescents Referred to Residential Treatment

    ERIC Educational Resources Information Center

    Weis, Robert; Smenner, Lindsey

    2007-01-01

    The authors investigate the construct validity of the Behavior Assessment System for Children Self-Report of Personality (BASC-SRP; Reynolds & Kamphaus, 1998). A sample of 970 adolescents (16-18 years) with histories of disruptive behavior problems and truancy completed the SRP; a subsample of 290 adolescents also completed the Minnesota…

  12. Genealogical Properties of Subsamples in Highly Fecund Populations

    NASA Astrophysics Data System (ADS)

    Eldon, Bjarki; Freund, Fabian

    2018-03-01

    We consider some genealogical properties of nested samples. The complete sample is assumed to have been drawn from a natural population characterised by high fecundity and sweepstakes reproduction (abbreviated HFSR). The random gene genealogies of the samples are, due to our assumption of HFSR, modelled by coalescent processes which admit multiple mergers of ancestral lineages looking back in time. Among the genealogical properties we consider are the probability that the most recent common ancestor is shared between the complete sample and the subsample nested within the complete sample; we also compare the lengths of 'internal' branches of nested genealogies between different coalescent processes. The results indicate how 'informative' a subsample is about the properties of the larger complete sample, how much information is gained by increasing the sample size, and how the 'informativeness' of the subsample varies between different coalescent processes.
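
    The shared-MRCA probability is easy to simulate for the baseline case of the Kingman coalescent (binary mergers only; the multiple-merger processes of the abstract are a generalization). The sketch below checks a Monte Carlo estimate against the classical closed form (n+1)(k-1)/((n-1)(k+1)) for a subsample of size k nested in a sample of size n; sample sizes are arbitrary.

```python
import random

random.seed(1)
n, k = 10, 5                       # full sample size, nested subsample size
sub = set(range(k))

def shares_mrca():
    """One Kingman-coalescent realization: does the subsample's MRCA
    coincide with the full sample's MRCA (the final merger)?"""
    lineages = [frozenset([i]) for i in range(n)]
    while len(lineages) > 1:
        i, j = random.sample(range(len(lineages)), 2)
        merged = lineages[i] | lineages[j]
        lineages = [l for idx, l in enumerate(lineages) if idx not in (i, j)]
        lineages.append(merged)
        if sub <= merged:
            # first lineage containing the whole subsample: its MRCA.
            # Shared with the full sample iff that lineage is the root.
            return len(merged) == n
    return True

reps = 3000
p_hat = sum(shares_mrca() for _ in range(reps)) / reps
p_exact = (n + 1) * (k - 1) / ((n - 1) * (k + 1))   # classical Kingman result
```

    For multiple-merger coalescents the merger step would draw larger groups of lineages at once, which is exactly where the nested-sample probabilities start to differ from this baseline.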

  13. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    PubMed

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Matrix and Tensor Completion on a Human Activity Recognition Framework.

    PubMed

    Savvaki, Sofia; Tsagkatakis, Grigorios; Panousopoulou, Athanasia; Tsakalides, Panagiotis

    2017-11-01

    Sensor-based activity recognition is encountered in innumerable applications of pervasive healthcare and plays a crucial role in biomedical research. Nonetheless, the frequent situation of unobserved measurements impairs the ability of machine learning algorithms to efficiently extract context from raw streams of data. In this paper, we study the problem of accurate estimation of missing multimodal inertial data and we propose a classification framework that considers the reconstruction of subsampled data during the test phase. We introduce the concept of forming the available data streams into low-rank two-dimensional (2-D) and 3-D Hankel structures, and we exploit data redundancies using sophisticated imputation techniques, namely matrix and tensor completion. Moreover, we examine the impact of reconstruction on the classification performance by experimenting with several state-of-the-art classifiers. The system is evaluated with respect to different data structuring scenarios, the volume of data available for reconstruction, and various levels of missing values per device. Finally, the tradeoff between subsampling accuracy and energy conservation in wearable platforms is examined. Our analysis relies on two public datasets containing inertial data, which extend to numerous activities, multiple sensing parameters, and body locations. The results highlight that robust classification accuracy can be achieved through recovery, even for extremely subsampled data streams.
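
    The Hankel-plus-low-rank idea can be sketched on a single toy stream. A sinusoidal signal has a rank-2 Hankel matrix, so missing entries can be imputed by alternating a rank-2 SVD truncation with re-imposing the observed values (a basic hard-impute scheme, far simpler than the completion algorithms in the paper; all sizes and missing rates are invented).

```python
import numpy as np

rng = np.random.default_rng(3)

# A sinusoidal "inertial" stream has a rank-2 Hankel structure
t = np.arange(60)
x = np.sin(2 * np.pi * t / 12.0)

# Build the Hankel matrix H[i, j] = x[i + j]
L = 20
H = np.array([[x[i + j] for j in range(len(x) - L + 1)] for i in range(L)])

# Knock out 40% of entries at random (missing sensor readings)
mask = rng.random(H.shape) < 0.6          # True = observed
H_obs = np.where(mask, H, 0.0)

# Matrix completion: iterate rank-2 projection + data consistency
X = H_obs.copy()
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :2] * s[:2]) @ Vt[:2]       # project onto rank-2 matrices
    X[mask] = H[mask]                     # re-impose the observed entries

err_completed = np.abs(X - H)[~mask].mean()
err_zero_fill = np.abs(H_obs - H)[~mask].mean()
```

    Structuring the stream as a Hankel matrix is what creates the redundancy: each sample appears along an anti-diagonal, so even heavily subsampled streams constrain the low-rank factors.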

  15. The use of epifluorescent microscopy and quantitative polymerase chain reaction to determine the presence/absence and identification of microorganisms associated with domestic and foreign wallboard samples

    USGS Publications Warehouse

    Griffin, Dale W.

    2011-01-01

    Epifluorescent microscopy and quantitative polymerase chain reaction (qPCR) were utilized to determine the presence, concentration and identification of bacteria, and more specifically sulfate reducing bacteria (SRB) in subsamples of Chinese and North American wallboard, and wallboard-mine rock. Bacteria were visible in most subsamples, which included wallboard-lining paper from each side of the wallboard, wallboard filler, wallboard tape and fragments of mined wallboard rock via microscopy. Observed bacteria occurred as single or small clusters of cells and no mass aggregates indicating colonization were noted. Universal 16S qPCR was utilized to directly examine samples and detected bacteria at concentrations ranging from 1.4 × 10³ to 6.4 × 10⁴ genomic equivalents per mm² of paper or per gram of wallboard filler or mined rock, in 12 of 41 subsamples. Subsamples were incubated in sulfate reducing broth for ~30 to 60 days (enrichment assay) and then analyzed by universal 16S and SRB qPCR. Enrichment universal 16S qPCR detected bacteria in 32 of 41 subsamples at concentrations ranging from 1.5 × 10⁴ to 4.2 × 10⁷ genomic equivalents per ml of culture broth. Evaluation of enriched subsamples by SRB qPCR demonstrated that SRB were not detectable in most of the samples and if they were detected, detection was not reproducible (an indication of low concentrations, if present). Enrichment universal 16S and SRB qPCR demonstrated that viable bacteria were present in subsamples (as expected given exposure of the samples following manufacture, transport and use) but that SRB were either not present or present at very low numbers. Further, no differences in trends were noted between the various Chinese and North American wallboard samples. In all, the microscopy and qPCR data indicated that the suspected ‘sulfur emissions’ emanating from suspect wallboard samples is not due to microbial activity.

  16. Implications of Satellite Swath Width on Global Aerosol Optical Thickness Statistics

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; Kahn, Ralph; Remer, Lorraine; Levy, Robert; Welton, Ellsworth

    2012-01-01

    We assess the impact of swath width on the statistics of aerosol optical thickness (AOT) retrieved by satellite as inferred from observations made by the Moderate Resolution Imaging Spectroradiometer (MODIS). We sub-sample the year 2009 MODIS data from both the Terra and Aqua spacecraft along several candidate swaths of various widths. We find that due to spatial sampling there is an uncertainty of approximately 0.01 in the global, annual mean AOT. The sub-sampled monthly mean gridded AOT are within +/- 0.01 of the full swath AOT about 20% of the time for the narrow swath sub-samples, about 30% of the time for the moderate width sub-samples, and about 45% of the time for the widest swath considered. These results suggest that future aerosol satellite missions with only a narrow swath view may not sample the true AOT distribution sufficiently to reduce significantly the uncertainty in aerosol direct forcing of climate.
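
    The swath-width effect can be reproduced with a toy experiment: build a month of synthetic daily "AOT" on a longitude grid, let a narrow and a wide swath each see a precessing block of longitudes per day, and count how often the sub-sampled monthly means land within ±0.01 of the full mean. The field statistics, swath fractions and precession rate are all invented.

```python
import numpy as np

rng = np.random.default_rng(11)

n_days, n_lon = 30, 360                 # one month, 1-degree longitudes
# Daily toy AOT per longitude: smooth mean field plus day-to-day noise
aot = (0.15 + 0.05 * np.sin(np.radians(np.arange(n_lon)))
       + rng.normal(0.0, 0.1, (n_days, n_lon)))

def monthly_mean_subsampled(swath_frac):
    """Each day the swath sees a contiguous block of longitudes that
    precesses around the globe; return per-longitude monthly mean + coverage."""
    width = int(n_lon * swath_frac)
    sums, counts = np.zeros(n_lon), np.zeros(n_lon)
    for d in range(n_days):
        start = (d * 97) % n_lon                   # arbitrary precession
        idx = (start + np.arange(width)) % n_lon
        np.add.at(sums, idx, aot[d, idx])
        np.add.at(counts, idx, 1)
    covered = counts > 0
    mean = np.zeros(n_lon)
    mean[covered] = sums[covered] / counts[covered]
    return mean, covered

full_mean = aot.mean(axis=0)
frac_within = {}
for label, frac in (("narrow", 0.05), ("wide", 0.40)):
    sub, covered = monthly_mean_subsampled(frac)
    agree = np.abs(sub - full_mean) <= 0.01
    frac_within[label] = agree[covered].mean()
```

    The wide swath revisits each longitude many more times per month, so its monthly means sit within the ±0.01 band far more often, mirroring the 20%/30%/45% progression reported in the abstract.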

  17. Potential use of telephone surveys for non-communicable disease surveillance in developing countries: evidence from a national household survey in Lebanon.

    PubMed

    Sibai, Abla M; Ghandour, Lilian A; Chaaban, Rawan; Mokdad, Ali H

    2016-05-31

    Given the worldwide proliferation of cellphones, this paper examines their potential use for the surveillance of non-communicable disease (NCD) risk factors in a Middle Eastern country. Data were derived from a national household survey of 2,656 adults (aged 18 years or older) in Lebanon in 2009. Responses to questions on phone ownership yielded two subsamples, the 'cell phone sample' (n = 1,404) and the 'any phone sample' (n = 2,158). Prevalence estimates of various socio-demographics and 11 key NCD risk factors and comorbidities were compared between each subsample and the overall household sample. Adjusting for baseline age and sex distribution, no differences were observed for any NCD indicator when comparing either of the subsamples to the overall household sample, except for binge drinking [(OR = 1.55, 95% CI: 1.33-1.81) and (OR = 1.48, 95% CI: 1.18-1.85) for the 'cell phone' and 'any phone' subsamples, respectively] and self-rated health [(OR = 1.23, 95% CI: 1.10-1.36) and (OR = 1.16, 95% CI: 1.02-1.32), respectively]. A difference in the odds of hyperlipidemia (OR = 1.27, 95% CI: 1.06-1.51) was also found in the subsample of 'any phone' carriers. Multi-mode telephone surveillance techniques provide a viable alternative to face-to-face surveys in developing countries. Cell phones may also be useful for personalized public health and medical care interventions in young populations.
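
    The odds ratios with Wald 95% confidence intervals quoted above come from standard 2x2-table arithmetic. A minimal sketch, using hypothetical counts (not the paper's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table [[a, b], [c, d]]:
    rows = subsample vs rest, columns = indicator present vs absent."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(50, 100, 30, 120)
```

    An interval excluding 1.0 (as for binge drinking above) flags a subsample that differs from the full household sample on that indicator.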

  18. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm × 6.4 cm × 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
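
    The subsample-then-LUT pipeline described above can be sketched in a few lines: estimate the luminance histogram from every 4th pixel only, build an equalizing look-up table from its cumulative sum, and stream the full frame through the LUT. Frame size, value range and subsample stride are invented for illustration.

```python
import random

random.seed(5)
W, H = 160, 120
# Synthetic low-contrast 8-bit frame: values squeezed into 80..170
frame = [[random.randint(80, 170) for _ in range(W)] for _ in range(H)]

# Qualify the scene's luminance from a spatial subsample only
# (every 4th pixel each direction), as the enhancer does to save cycles
hist = [0] * 256
for y in range(0, H, 4):
    for x in range(0, W, 4):
        hist[frame[y][x]] += 1
n_sub = sum(hist)

# Cumulative histogram -> equalizing look-up table (integer-only arithmetic)
lut, cum = [0] * 256, 0
for v in range(256):
    cum += hist[v]
    lut[v] = (255 * cum) // n_sub

# Stream the full-resolution frame through the LUT
enhanced = [[lut[p] for p in row] for row in frame]

spread_before = max(max(r) for r in frame) - min(min(r) for r in frame)
spread_after = max(max(r) for r in enhanced) - min(min(r) for r in enhanced)
```

    The integer divide happens only 256 times per frame (once per LUT entry) rather than once per pixel, which is the cycle saving the abstract attributes to subsampling.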

  19. CERES GEO Ed4 Available Data

    Atmospheric Science Data Center

    2017-10-11

    ... Spatial Resolution Temporal Coverage CER_GEO_Ed4_GOE08 Hourly 2-4km observation at nadir, subsampled every 8-9 km 2000-03-01 to 2003-04-01 CER_GEO_Ed4_GOE09 Hourly 2-4km observation at nadir, subsampled ...

  20. Inorganic, Radioisotopic, and Organic Analysis of 241-AP-101 Tank Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiskum, S.K.; Bredt, P.R.; Campbell, J.A.

    2000-10-17

    Battelle received five samples from Hanford waste tank 241-AP-101, taken at five different depths within the tank. No visible solids or organic layer were observed in the individual samples. Individual sample densities were measured, then the five samples were mixed together to provide a single composite. The composite was homogenized and representative sub-samples were taken for inorganic, radioisotopic, and organic analysis. All analyses were performed on triplicate sub-samples of the composite material. The sample composite did not contain visible solids or an organic layer. A subsample held at 10 °C for seven days formed no visible solids.

  1. Cultural Competence and Children's Mental Health Service Outcomes

    ERIC Educational Resources Information Center

    Mancoske, Ronald J.; Lewis, Marva L.; Bowers-Stephens, Cheryll; Ford, Almarie

    2012-01-01

    This study describes the relationships between clients' perception of cultural competency of mental health providers and service outcomes. A study was conducted of a public children's mental health program that used a community-based, systems of care approach. Data from a subsample (N = 111) of families with youths (average age 12.3) and primarily…

  2. Ichthyoplankton abundance and variance in a large river system concerns for long-term monitoring

    USGS Publications Warehouse

    Holland-Bartels, Leslie E.; Dewey, Michael R.; Zigler, Steven J.

    1995-01-01

    System-wide spatial patterns of ichthyoplankton abundance and variability were assessed in the upper Mississippi and lower Illinois rivers to address the experimental design and statistical confidence in density estimates. Ichthyoplankton was sampled from June to August 1989 in primary milieus (vegetated and non-vegetated backwaters and impounded areas, main channels and main channel borders) in three navigation pools (8, 13 and 26) of the upper Mississippi River and in a downstream reach of the Illinois River. Ichthyoplankton densities varied among stations of similar aquatic landscapes (milieus) more than among subsamples within a station. An analysis of sampling effort indicated that the collection of single samples at many stations in a given milieu type is statistically and economically preferable to the collection of multiple subsamples at fewer stations. Cluster analyses also revealed that stations only generally grouped by their preassigned milieu types. Pilot studies such as this can define station groupings and sources of variation beyond an a priori habitat classification. Thus the minimum intensity of sampling required to achieve a desired statistical confidence can be identified before implementing monitoring efforts.
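
    The allocation conclusion follows from the standard two-stage sampling variance formula, Var(mean) = s²_between/m + s²_within/(m·r) for m stations with r subsamples each. The variance components below are invented, merely chosen so that between-station variance dominates, as the abstract reports:

```python
# Illustrative variance components (between-station >> within-station)
s2_between, s2_within = 4.0, 1.0

def var_of_mean(m, r):
    """Variance of the milieu mean under two-stage sampling:
    m stations, r subsamples per station (m * r total samples)."""
    return s2_between / m + s2_within / (m * r)

# Same total budget of 30 samples, allocated two ways
many_stations = var_of_mean(m=30, r=1)   # single sample at many stations
few_stations = var_of_mean(m=10, r=3)    # triplicate subsamples at few stations
```

    With the budget fixed, extra subsamples only shrink the within-station term, so whenever between-station variance dominates, spreading effort across more stations wins.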

  3. Mining Health App Data to Find More and Less Successful Weight Loss Subgroups

    PubMed Central

    2016-01-01

    Background More than half of all smartphone app downloads involve weight, diet, and exercise. If successful, these lifestyle apps may have far-reaching effects for disease prevention and health cost-savings, but few researchers have analyzed data from these apps. Objective The purposes of this study were to analyze data from a commercial health app (Lose It!) in order to identify successful weight loss subgroups via exploratory analyses and to verify the stability of the results. Methods Cross-sectional, de-identified data from Lose It! were analyzed. This dataset (n=12,427,196) was randomly split into 24 subsamples, and this study used 3 subsamples (combined n=972,687). Classification and regression tree methods were used to explore groupings of weight loss with one subsample, with descriptive analyses to examine other group characteristics. Data mining validation methods were conducted with 2 additional subsamples. Results In subsample 1, 14.96% of users lost 5% or more of their starting body weight. Classification and regression tree analysis identified 3 distinct subgroups: “the occasional users” had the lowest proportion (4.87%) of individuals who successfully lost weight; “the basic users” had 37.61% weight loss success; and “the power users” achieved the highest percentage of weight loss success at 72.70%. Behavioral factors delineated the subgroups, though app-related behavioral characteristics further distinguished them. Results were replicated in further analyses with separate subsamples. Conclusions This study demonstrates that distinct subgroups can be identified in “messy” commercial app data and the identified subgroups can be replicated in independent samples. Behavioral factors and use of custom app features characterized the subgroups. Targeting and tailoring information to particular subgroups could enhance weight loss success. Future studies should replicate data mining analyses to increase methodology rigor. PMID:27301853
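
    The subgroup discovery step rests on CART-style recursive partitioning: exhaustively search for the split that most reduces Gini impurity in the outcome. A one-split toy version on synthetic app users (all usage thresholds and success rates invented, not from the Lose It! data):

```python
import random

random.seed(9)

# Synthetic users: days logged in a month, and binary weight-loss success.
# Heavy users ("power users") succeed far more often by construction.
users = []
for _ in range(400):
    days = random.randint(0, 40)
    p = 0.7 if days >= 20 else 0.1
    users.append((days, 1 if random.random() < p else 0))

def gini(group):
    """Gini impurity of a binary-outcome group (0 for empty/pure groups)."""
    if not group:
        return 0.0
    p = sum(y for _, y in group) / len(group)
    return 2 * p * (1 - p)

# CART-style exhaustive search for the best single binary split on 'days'
best_thr, best_impurity = None, float("inf")
n = len(users)
for thr in range(1, 41):
    left = [u for u in users if u[0] < thr]
    right = [u for u in users if u[0] >= thr]
    imp = len(left) / n * gini(left) + len(right) / n * gini(right)
    if imp < best_impurity:
        best_thr, best_impurity = thr, imp

low = [u for u in users if u[0] < best_thr]
high = [u for u in users if u[0] >= best_thr]
rate_low = sum(y for _, y in low) / len(low)
rate_high = sum(y for _, y in high) / len(high)
```

    Recursing the same search within each side yields the tree; validating the recovered subgroups on held-out subsamples, as the study does, guards against splits that merely fit noise.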

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szabó, R.; Sárneczky, K.; Szabó, Gy. M.

    Unlike NASA’s original Kepler Discovery Mission, the renewed K2 Mission will target the plane of the Ecliptic, observing each field for approximately 75 days. This will bring new opportunities and challenges, in particular the presence of a large number of main-belt asteroids that will contaminate the photometry. The large pixel size makes K2 data susceptible to the effects of apparent minor planet encounters. Here, we investigate the effects of asteroid encounters on photometric precision using a subsample of the K2 engineering data taken in 2014 February. We show examples of asteroid contamination to facilitate their recognition and distinguish these events from other error sources. We conclude that main-belt asteroids will have considerable effects on K2 photometry of a large number of photometric targets during the Mission that will have to be taken into account. These results will be readily applicable for future space photometric missions applying large-format CCDs, such as TESS and PLATO.

  5. On the X-ray spectrum of the volume emissivity arising from Abell clusters

    NASA Technical Reports Server (NTRS)

    Stottlemyer, A. R.; Boldt, E. A.

    1984-01-01

    HEAO 1 A-2 X-ray spectra (2-15 keV) for an optically selected sample of Abell clusters of galaxies with z less than 0.1 have been analyzed to determine the energy dependence of the cosmological X-ray volume emissivity arising from such clusters. This spectrum is well fitted by an isothermal-bremsstrahlung model with kT = 7.4 ± 1.5 keV. This result is a test of the isothermal-volume-emissivity spectrum to be inferred from the conjecture that all contributing clusters may be characterized by kT = 7 keV, as assumed by McKee et al. (1980) in estimating the underlying luminosity function for the same sample. Although satisfied at the statistical level indicated, the analysis of a low-luminosity subsample suggests that this assumption of identical isothermal spectra would lead to a systematic error for a more statistically precise determination of the luminosity function's form.

  6. Opportunities and Challenges of Linking Scientific Core Samples to the Geoscience Data Ecosystem

    NASA Astrophysics Data System (ADS)

    Noren, A. J.

    2016-12-01

    Core samples generated in scientific drilling and coring are critical for the advancement of the Earth Sciences. The scientific themes enabled by analysis of these samples are diverse, and include plate tectonics, ocean circulation, Earth-life system interactions (paleoclimate, paleobiology, paleoanthropology), Critical Zone processes, geothermal systems, deep biosphere, and many others, and substantial resources are invested in their collection and analysis. Linking core samples to researchers, datasets, publications, and funding agencies through registration of globally unique identifiers such as International Geo Sample Numbers (IGSNs) offers great potential for advancing several frontiers. These include maximizing sample discoverability, access, reuse, and return on investment; a means for credit to researchers; and documentation of project outputs to funding agencies. Thousands of kilometers of core samples and billions of derivative subsamples have been generated through thousands of investigators' projects, yet the vast majority of these samples are curated at only a small number of facilities. These numbers, combined with the substantial similarity in sample types, make core samples a compelling target for IGSN implementation. However, differences between core sample communities and other geoscience disciplines continue to create barriers to implementation. Core samples involve parent-child relationships spanning 8 or more generations, an exponential increase in sample numbers between levels in the hierarchy, concepts related to depth/position in the sample, requirements for associating data derived from core scanning and lithologic description with data derived from subsample analysis, and publications based on tens of thousands of co-registered scan data points and thousands of analyses of subsamples. 
These characteristics require specialized resources for accurate and consistent assignment of IGSNs, and a community of practice to establish norms, workflows, and infrastructure to support implementation.
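As a toy illustration of the parent-child hierarchy such identifiers must encode, the sketch below registers a core, a section, and a subsample; the identifier strings are invented placeholders, not registered IGSNs:

```python
# Minimal sketch of a parent-child sample hierarchy with persistent IDs.
# The "IGSN-like" strings here are invented placeholders, not registered IGSNs.
from dataclasses import dataclass, field

@dataclass
class Sample:
    igsn: str                       # globally unique identifier (placeholder)
    kind: str                       # e.g. "core", "section", "subsample"
    top_cm: float                   # position within the parent, in cm
    children: list = field(default_factory=list)

    def register_child(self, igsn, kind, top_cm):
        child = Sample(igsn, kind, top_cm)
        self.children.append(child)
        return child

    def depth(self):
        """Number of generations at and below this sample."""
        return 1 + max((c.depth() for c in self.children), default=0)

core = Sample("XYZ0001", "core", 0.0)
section = core.register_child("XYZ0001-A", "section", 0.0)
section.register_child("XYZ0001-A-1", "subsample", 12.5)
print(core.depth())  # 3 generations: core -> section -> subsample
```

Real core hierarchies extend this pattern through 8 or more generations, with each level fanning out to many children, which is why automated, consistent IGSN assignment matters.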

  7. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.

  8. The utility of satellite observations for constraining fine-scale and transient methane sources

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D.; Benmergui, J. S.; Brandman, J.; White, L.; Randles, C. A.

    2017-12-01

    Resolving differences between top-down and bottom-up emissions of methane from the oil and gas industry is difficult due, in part, to their fine-scale and often transient nature. There is considerable interest in using atmospheric observations to detect these sources. Satellite-based instruments are an attractive tool for this purpose and, more generally, for quantifying methane emissions on fine scales. A number of instruments are planned for launch in the coming years from both low Earth orbit and geostationary orbit, but the extent to which they can provide fine-scale information on sources has yet to be explored. Here we present an observing system simulation experiment (OSSE) exploring the tradeoffs between pixel resolution, measurement frequency, and instrument precision on the fine-scale information content of a space-borne instrument measuring methane. We use the WRF-STILT Lagrangian transport model to generate more than 200,000 column footprints at 1.3×1.3 km2 spatial resolution and hourly temporal resolution over the Barnett Shale in Texas. We sub-sample these footprints to match the observing characteristics of the planned TROPOMI and GeoCARB instruments as well as different hypothetical observing configurations. The information content of the various observing systems is evaluated using the Fisher information matrix and its singular values. We draw conclusions on the capabilities of the planned satellite instruments and how these capabilities could be improved for fine-scale source detection.
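A minimal numeric sketch of the evaluation step: score an observing configuration by the singular values of its Fisher information matrix. The footprints here are random toy matrices, not WRF-STILT output, and the threshold-at-one rule is a common degrees-of-freedom heuristic rather than the study's exact criterion:

```python
# Toy Fisher-information comparison of observing configurations.
import numpy as np

rng = np.random.default_rng(0)
n_state = 20                     # emission grid cells (toy)

def fisher_info(n_obs, precision_ppb):
    """Fisher information F = K^T S_o^{-1} K for a toy linear observing system."""
    K = np.abs(rng.standard_normal((n_obs, n_state)))   # toy column footprints
    so_inv = np.eye(n_obs) / precision_ppb**2           # uncorrelated obs errors
    return K.T @ so_inv @ K

def informative_directions(F, prior_var=1.0):
    """Count singular values of the prior-normalised F above 1 (a DOF heuristic)."""
    s = np.linalg.svd(F * prior_var, compute_uv=False)
    return int((s > 1.0).sum())

# Trade-off: many noisy observations vs. fewer precise ones.
print("frequent, noisy :", informative_directions(fisher_info(n_obs=500, precision_ppb=15.0)))
print("sparse, precise :", informative_directions(fisher_info(n_obs=100, precision_ppb=3.0)))
```

Each configuration can resolve at most `n_state` directions in emission space; comparing the counts across pixel size, revisit rate, and precision is the essence of the OSSE trade-off analysis.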

  9. Quasars Probing Quasars: The Circumgalactic Medium Surrounding z ~ 2 Quasars

    NASA Astrophysics Data System (ADS)

    Lau, Marie Wingyee

    Models of galaxy formation make their most direct predictions on gas-related processes, specifically a picture of how gas flows through dark matter halos and onto galaxies to fuel star formation. A major prediction is that massive halos, including those hosting the progenitors of massive elliptical galaxies, exhibit a higher fraction of hot gas with T ~ 10^7 K. Another prediction is that some mechanism must be invoked to quench the supply of cool gas in massive systems. Under the current galaxy formation paradigm, every massive galaxy has undergone a quasar phase, making high-redshift quasars the progenitors of the inactive supermassive black holes found in the centers of nearly all galaxies. Moreover, quasar clustering implies Mhalo ≈ 10^12.5 Msun, making quasar-host galaxies the progenitors of present-day, massive, red and dead galaxies. The Quasars Probing Quasars survey is well suited to examine gas-related processes in the context of massive galaxy formation, as well as quasar feedback. To date the survey has selected 700 closely projected quasar pairs. To study the circumgalactic medium, a sub-sample of pairs with projected separation within 300 kpc at the foreground quasar's redshift is selected. From the first to the seventh paper in the Quasars Probing Quasars series, the statistical results had been limited to covering fractions and equivalent widths, without precise redshift measurements of the foreground quasars. Signatures of quasar feedback in the cool circumgalactic medium had not been identified. Hence, a sub-sample of 14 pairs with echellette spectra is selected for more detailed analysis. It is found that the low and high ions roughly trace each other in velocity structure. The HI and low-ion surface densities decline with projected distance. HI absorption is strong even beyond the virial radius. Unresolved Lyman-alpha emission in one case and NV detection in another case together imply that a fraction of transverse sightlines are illuminated. The ionization parameter U positively correlates with impact parameter, which implies the foreground quasar does not dominate the radiation field. The circumgalactic medium is significantly enriched even beyond the virial radius, and has median [M/H] = -0.6; O/Fe is supersolar. No evolution in the total H column is found up to a projected distance of 200 kpc, within which the median N_H = 10^20.5 cm^-2. Within the virial radius, the mass of the cool CGM is estimated at M_CGM ≈ 1.5×10^11 Msun. In two cases, detection of CII* implies electron density n_e > 10 cm^-3. Motivated by the preliminary kinematic results from this high-resolution sample, a kinematic analysis of 148 pairs with precise foreground quasar redshifts is performed. The background spectra of this sample are of low and high resolution. The mean absorptions in metals exhibit velocity widths sigma_v ≈ 300 km s^-1; however, the large widths do not require outflows. The mean absorptions have centroids redshifted from the systemic redshift by +200 km s^-1. The asymmetry may be explained if the quasars are anisotropic or intermittent, and the gas is not flowing onto the galaxy. Finally, several observational and theoretical lines of future inquiry using multiwavelength data are presented.

  10. STRUCTURE OF THE UNIVERSITY PERSONALITY INVENTORY FOR CHINESE COLLEGE STUDENTS.

    PubMed

    Zhang, Jieting; Lanza, Stephanie; Zhang, Minqiang; Su, Binyuan

    2015-06-01

    The University Personality Inventory, a mental health instrument for college students, is frequently used for screening in China. However, its unidimensionality has been questioned. This study examined its dimensions to provide more information about the specific mental problems for students at risk. Four subsamples were randomly created from a sample (N = 6,110; M age = 19.1 yr.) of students at a university in China. Principal component analysis with Promax rotation was applied to the first two subsamples to explore the dimensions of the inventory. Confirmatory factor analysis was conducted on the third subsample to verify the exploratory dimensions. Finally, the identified factors were compared to the Symptom Checklist-90 (SCL-90) to support validity, and sex differences were examined, based on the fourth subsample. Five factors were identified: Physical Symptoms, Cognitive Symptoms, Emotional Vulnerability, Social Avoidance, and Interpersonal Sensitivity, accounting for 60.3% of the variance. All the five factors were significantly correlated with the SCL-90. Women scored significantly higher than men on Cognitive Symptoms and Interpersonal Sensitivity.

  11. Subsampled Hessian Newton Methods for Supervised Learning.

    PubMed

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
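A compact sketch of the baseline subsampled Hessian Newton idea for regularized logistic regression: full gradient, Hessian-vector products computed on a random subset of rows, and conjugate gradient for the Newton direction. This illustrates the existing method the paper improves upon, not the authors' two-dimensional subproblem refinement:

```python
# Subsampled Hessian Newton sketch for L2-regularized logistic regression.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 5000, 10, 1e-2
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n))  # labels +-1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def loss_grad(w):
    z = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -z)) + 0.5 * lam * (w @ w)
    grad = -(X.T @ (y * sigmoid(-z))) / n + lam * w
    return loss, grad

def sub_hessian_vec(w, v, idx):
    # Hessian-vector product computed on a subsample of rows only.
    Xs = X[idx]
    s = sigmoid(Xs @ w)
    return Xs.T @ (s * (1 - s) * (Xs @ v)) / len(idx) + lam * v

def cg(hv, b, iters=25):
    # Conjugate gradient for H p = b using Hessian-vector products only.
    p, r = np.zeros_like(b), b.copy()
    dirc, rs = r.copy(), b @ b
    for _ in range(iters):
        Hd = hv(dirc)
        alpha = rs / (dirc @ Hd)
        p += alpha * dirc
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        dirc = r + (rs_new / rs) * dirc
        rs = rs_new
    return p

w = np.zeros(d)
for _ in range(10):
    loss, g = loss_grad(w)
    idx = rng.choice(n, size=n // 10, replace=False)      # 10% Hessian subsample
    w += cg(lambda v: sub_hessian_vec(w, v, idx), -g)
print("final loss:", loss_grad(w)[0])
```

The Hessian work per iteration scales with the subsample size rather than n; the failure mode the paper addresses is that this cheaper direction can be inaccurate enough to slow overall convergence.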

  12. The microbiological safety of duckweed fed chickens: a risk assessment of using duckweed reared on domestic wastewater as a protein source in broiler chickens

    NASA Astrophysics Data System (ADS)

    Moyo, S.; Dalu, J. M.; Ndamba, J.

    The possibility of transmission of pathogens from duckweed supplemented feed to chickens and consequently to the human consumer necessitated the microbiological testing of duckweed fed chickens. This assessment was thus done to determine whether there is transmission of pathogens from the duckweed supplemented feed to the chickens; determine whether such infection would be systemic or be confined to the gastro-intestinal tract of the birds; and to investigate the microbial load and distribution of the microbes with age. The study birds were sacrificed at 3, 6, 8 and 10 weeks of age and examined for the indicator organisms Escherichia coli and Salmonella spp. There was no discernible pattern in the microbial load of both the duckweed fed chickens and control birds with age although the control birds sampled clearly had a lower microbial load than the experimental flock. Some Salmonella and two enteropathogenic E. coli strains were isolated from control and experimental sub-samples at 3 weeks. There were no Salmonellae isolated in the subsequent batches of birds and feed although a number of E. coli were isolated. More isolates were obtained from the three weeks’ sub-samples (collected during wet weather) than from all the other sub-samples. The use of duckweed at this inclusion rate under the processing conditions at Nemanwa was thus concluded to be microbiologically safe as long as due caution is exercised during the processing of the duckweed and handling of the birds. There are indications that the chickens may get contaminated especially during wet weather as evidenced by the isolation of E. coli and Salmonella spp from the first batch sub-samples. This was attributed to poor environmental sanitation at the plant particularly in view of the prevailing wet conditions at the time.

  13. Comparison of homogenization techniques and incidence of aflatoxin contamination in dried figs for export.

    PubMed

    Bircan, Cavit

    2009-01-01

    To determine differences in mean aflatoxin contamination and subsample variance from dry and slurry homogenizations, 10 kg of six different, naturally contaminated dried fig samples were collected from various exporting companies in accordance with the EU Commission Directive. The samples were first dry-mixed for 5 min using a blender and sub-sampled seven times; the remainder was slurry homogenized (1 : 1, v/v) and sub-sampled seven times. Aflatoxin B1 and total aflatoxin levels were recorded and coefficients of variation (CV) computed for all sub-samples. Only a small reduction in sub-sample variations, indicated by the lower CV values, and slight differences in mean aflatoxin B1 and total aflatoxin levels were observed when slurry homogenization was applied. Therefore, 7326 dried figs, destined for export from Turkey to the EU and collected during the 2008 crop year, were dry-homogenized and tested for aflatoxins (B1, B2, G1 and G2) by immunoaffinity column clean-up using RP-HPLC. While 34% of the samples contained detectable levels of total aflatoxins (0.20-208.75 µg kg(-1)), only 9% of them exceeded the EU limit of 4 µg kg(-1), with levels in the range 2.0-208.75 µg kg(-1). A substantial increase in the incidence of aflatoxins was observed in 2008, most likely due to the drought stress experienced in Aydin province as occurred in 2007.
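The dry-versus-slurry comparison reduces to computing a coefficient of variation over the seven sub-samples of each lot. The readings below are invented for illustration (the paper's raw values are not given here):

```python
# CV comparison across homogenization methods; the readings are invented.
import statistics

def coefficient_of_variation(values):
    """CV (%) = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical aflatoxin B1 readings (ug/kg) for seven sub-samples of one lot.
dry    = [3.1, 4.8, 2.2, 5.6, 3.9, 2.8, 4.4]
slurry = [3.6, 4.1, 3.3, 4.3, 3.8, 3.5, 4.0]

print(f"dry CV:    {coefficient_of_variation(dry):.1f}%")
print(f"slurry CV: {coefficient_of_variation(slurry):.1f}%")
```

A lower CV for the slurry sub-samples would indicate that slurry homogenization distributes the contamination more evenly, which is the effect the study quantifies.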

  14. Strategic Use of Random Subsample Replication and a Coefficient of Factor Replicability

    ERIC Educational Resources Information Center

    Katzenmeyer, William G.; Stenner, A. Jackson

    1975-01-01

    The problem of demonstrating replicability of factor structure across random variables is addressed. Procedures are outlined which combine the use of random subsample replication strategies with the correlations between factor score estimates across replicate pairs to generate a coefficient of replicability and confidence intervals associated with…
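A minimal sketch of the strategy on synthetic one-factor data: estimate the factor in each random half-sample, score the same respondents with both sets of loadings, and take the correlation of the factor score estimates as a replicability coefficient. This is an illustration of the general idea, not the authors' exact procedure:

```python
# Split-sample factor replicability sketch on synthetic one-factor data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 8
# Synthetic one-factor data: items share a common latent factor.
f = rng.standard_normal(n)
X = np.outer(f, rng.uniform(0.5, 0.9, p)) + 0.6 * rng.standard_normal((n, p))

perm = rng.permutation(n)
a, b = perm[: n // 2], perm[n // 2:]

def first_loading(data):
    # First principal axis of the correlation matrix as a factor estimate.
    corr = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    v = vecs[:, -1]
    return v if v.sum() > 0 else -v    # fix sign for comparability

# Score the *same* people with loadings from each replicate half.
w_a, w_b = first_loading(X[a]), first_loading(X[b])
scores_a, scores_b = X @ w_a, X @ w_b
replicability = np.corrcoef(scores_a, scores_b)[0, 1]
print(f"replicability coefficient: {replicability:.3f}")
```

A coefficient near 1 says the factor recovered in one random subsample orders people essentially the same way as the factor recovered in the other, i.e., the structure replicates.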

  15. HAND WIPE SUBSAMPLING METHOD FOR USE WITH BIOMARKER MEASUREMENTS IN THE AGRICULTURAL HEALTH STUDY/PESTICIDE EXPOSURE STUDY

    EPA Science Inventory

    Dermal exposure studies incorporating urinary biomarker measurements are complicated because dermal sampling may intercept or remove the target chemical before it is absorbed. A hand wipe subsampling method has been developed using polyurethane foam-tipped (PUF) swabs to minim...

  16. The program structure does not reliably recover the correct population structure when sampling is uneven: subsampling and new estimators alleviate the problem.

    PubMed

    Puechmaille, Sebastien J

    2016-05-01

    Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure implementing a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method performs relatively well under various population models with even sampling between subpopulations, the robustness of the method to uneven sample size between subpopulations and/or hierarchical levels of population structure has not yet been tested despite being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved. © 2016 John Wiley & Sons Ltd.
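The subsampling strategy, equalizing the number of individuals per putative subpopulation before rerunning the clustering, can be sketched as follows; the individual and population labels are arbitrary placeholders:

```python
# Even out per-subpopulation sample sizes by random subsampling.
import random
from collections import Counter

def subsample_even(individuals, pops, seed=0):
    """Randomly keep the same number of individuals from every subpopulation,
    equal to the size of the smallest one."""
    rng = random.Random(seed)
    by_pop = {}
    for ind, pop in zip(individuals, pops):
        by_pop.setdefault(pop, []).append(ind)
    n_min = min(len(members) for members in by_pop.values())
    kept = []
    for members in by_pop.values():
        kept.extend(rng.sample(members, n_min))
    return kept

inds = [f"ind{i}" for i in range(260)]
pops = ["A"] * 200 + ["B"] * 50 + ["C"] * 10     # heavily uneven sampling
kept = subsample_even(inds, pops)
print(Counter(p for i, p in zip(inds, pops) if i in kept))   # 10 per subpopulation
```

This removes the size imbalance that drives the merging/splitting artifacts described above, at the cost of discarding data from the heavily sampled subpopulations.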

  17. Spectral reconstruction of signals from periodic nonuniform subsampling based on a Nyquist folding scheme

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Zhu, Jun; Tang, Bin

    2017-12-01

    Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first derive that the radio frequency (RF) sample clock function of the NYFR is periodic nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposition model to the subsampling case by using the spectral characteristics of the NYFR. The subsampling case is common in broadband spectrum surveillance. Finally, we take a large-bandwidth LFM signal as an example to verify the proposed algorithm and compare it with the orthogonal matching pursuit (OMP) algorithm.

  18. Taking the Next Step: Combining Incrementally Valid Indicators to Improve Recidivism Prediction

    ERIC Educational Resources Information Center

    Walters, Glenn D.

    2011-01-01

    The possibility of combining indicators to improve recidivism prediction was evaluated in a sample of released federal prisoners randomly divided into a derivation subsample (n = 550) and a cross-validation subsample (n = 551). Five incrementally valid indicators were selected from five domains: demographic (age), historical (prior convictions),…
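The derivation/cross-validation design can be sketched with synthetic data standing in for the five domains; the features, coefficients, and outcome construction below are invented, and only the 550/551 split mirrors the study:

```python
# Derivation/cross-validation sketch: combine five indicators vs. using one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1101
# Synthetic stand-ins for five risk domains (not the study's actual measures).
X = rng.standard_normal((n, 5))
logit = X @ np.array([0.5, 0.4, 0.3, 0.3, 0.2]) - 0.2
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # recidivism indicator (toy)

# Derivation / cross-validation split, mirroring the n = 550 / 551 design.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, train_size=550, random_state=0)

model = LogisticRegression().fit(X_dev, y_dev)
combined_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
single_auc = roc_auc_score(y_val, X_val[:, 0])    # one indicator on its own
print(f"single indicator AUC: {single_auc:.2f}, combined AUC: {combined_auc:.2f}")
```

Fitting on the derivation subsample and evaluating on the held-out cross-validation subsample guards against the combined score's apparent accuracy being an artifact of overfitting.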

  20. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2018-02-01

    Retinal motion detection with an accuracy of 0.77 arcmin corresponding to 3.7 µm on the retina is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7 millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts.
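The core step, estimating motion by registering a frame against a confocal reference image, can be sketched with FFT-based phase correlation on a synthetic "retina"; the paper's actual registration algorithm for sparse subsampled frames may differ in detail:

```python
# Phase-correlation shift estimation between a reference image and a frame.
import numpy as np

def estimate_shift(reference, frame):
    """Estimate integer (dy, dx) of `frame` relative to `reference`."""
    R = np.conj(np.fft.fft2(reference)) * np.fft.fft2(frame)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real   # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the far half-plane to negative shifts.
    if dy > reference.shape[0] // 2: dy -= reference.shape[0]
    if dx > reference.shape[1] // 2: dx -= reference.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
retina = rng.random((128, 128))                          # stand-in for the reference
moved = np.roll(np.roll(retina, 3, axis=0), -2, axis=1)  # simulated eye motion
print(estimate_shift(retina, moved))                     # -> (3, -2)
```

Applying the estimated shifts to consecutive subsampled frames before averaging is what yields the motion-corrected confocal image described above.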

  2. Forecasting European cold waves based on subsampling strategies of CMIP5 and Euro-CORDEX ensembles

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, Laura; Braconnot, Pascale; Vautard, Robert; Vrac, Mathieu; Jezequel, Aglae

    2016-04-01

    Forecasting future extreme events under the present changing climate represents a difficult task. Currently there are a large number of ensembles of simulations for climate projections that take into account different models and scenarios. However, there is a need for reducing the size of the ensemble to make the interpretation of these simulations more manageable for impact studies or climate risk assessment. This can be achieved by developing subsampling strategies to identify a limited number of simulations that best represent the ensemble. In this study, cold waves are chosen to test different approaches for subsampling available simulations. The definition of cold waves depends on the criteria used, but they are generally defined using a minimum temperature threshold, the duration of the cold spell, as well as their geographical extent. These climate indicators are not universal, highlighting the difficulty of directly comparing different studies. As part of the CLIPC European project, we use daily surface temperature data obtained from CMIP5 outputs as well as Euro-CORDEX simulations to predict future cold-wave events in Europe. From these simulations a clustering method is applied to minimise the number of ensemble members required. Furthermore, we analyse the different uncertainties that arise from the different model characteristics and definitions of climate indicators. Finally, we will test if the same subsampling strategy can be used for different climate indicators. This will facilitate the use of the subsampling results for a wide number of impact assessment studies.
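One common subsampling approach of this kind clusters ensemble members in a space of summary statistics and keeps the member nearest each cluster centroid. The sketch below uses invented features and ensemble size, not the CMIP5/Euro-CORDEX data:

```python
# Reduce an ensemble by clustering members and keeping one representative each.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_members, n_features = 40, 6
# Toy summary statistics per simulation (e.g. cold-wave frequency, duration, ...).
ensemble = rng.standard_normal((n_members, n_features))

k = 5   # size of the reduced ensemble
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(ensemble)

# Keep, for each cluster, the member closest to its centroid.
representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(ensemble[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dists)]))
print("representative members:", sorted(representatives))
```

The reduced set of simulations spans the spread of the full ensemble while staying small enough for impact studies to run on every member.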

  3. Joint Chroma Subsampling and Distortion-Minimization-Based Luma Modification for RGB Color Images With Application.

    PubMed

    Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao

    2017-10-01

    In this paper, we propose a novel and effective hybrid method, which joins the conventional chroma subsampling and the distortion-minimization-based luma modification together, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image, prior to compression. For each 2×2 UV block, a 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) quality distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block can be minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle the digital time delay integration images and the Bayer mosaic images whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that in high efficiency video coding, the proposed hybrid method has substantial quality improvement, in terms of the CPSNR quality, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, of the reconstructed RGB images when compared with existing chroma subsampling schemes.
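For context, the conventional baseline, a plain 4:2:0 averaging chroma subsampler and the CPSNR metric, can be sketched as follows. This omits the paper's optimal luma modification; the round trip uses the BT.601 matrix and nearest-neighbour chroma upsampling:

```python
# Baseline 4:2:0 chroma subsampling round trip with CPSNR measurement.
import numpy as np

def subsample_420(u, v):
    """Average 4:2:0 chroma subsampling: one (U, V) pair per 2x2 block."""
    h, w = u.shape
    u4 = u.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    v4 = v.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return u4, v4

def cpsnr(a, b):
    """Colour PSNR over all three channels of an 8-bit image pair."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)

# BT.601 RGB->YUV matrix; using its exact numerical inverse makes the
# round trip lossless when chroma is left at full resolution.
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])
Minv = np.linalg.inv(M)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (8, 8, 3)).astype(float)
yuv = rgb @ M.T
u4, v4 = subsample_420(yuv[..., 1], yuv[..., 2])
up = lambda c: np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)  # nearest-neighbour
rec = np.stack([yuv[..., 0], up(u4), up(v4)], axis=-1) @ Minv.T
print(f"CPSNR after 4:2:0 round trip: {cpsnr(rgb, rec):.1f} dB")
```

The paper's contribution is to modify the 2×2 luma block, given the already-subsampled U_s and V_s, so that the CPSNR of exactly this kind of round trip is maximized per block.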

  4. SAM-Like Evolved Gas Analyses of Phyllosilicate Minerals and Applications to SAM Analyses of the Sheepbed Mudstone, Gale Crater, Mars

    NASA Technical Reports Server (NTRS)

    McAdam, A. C.; Franz, H. B.; Mahaffy, P. R.; Eigenbrode, J. L.; Stern, J. C.; Brunner, B.; Sutter, B.; Archer, P. D.; Ming, D. W.; Morris, R. V.; et al.

    2014-01-01

    While in Yellowknife Bay, the Mars Science Laboratory Curiosity rover collected two drilled samples, John Klein (hereafter "JK") and Cumberland ("CB"), from the Sheepbed mudstone, as well as a scooped sample from the Rocknest aeolian bedform ("RN"). These samples were sieved by Curiosity's sample processing system and then several subsamples of these materials were delivered to the Sample Analysis at Mars (SAM) instrument suite and the CheMin X-ray diffraction/X-ray fluorescence instrument. CheMin provided the first in situ X-ray diffraction-based evidence of clay minerals on Mars, which are likely trioctahedral smectites (e.g., Fe-saponite) and comprise 20 wt% of the mudstone samples [1]. SAM's evolved gas analysis (EGA) mass spectrometry analyses of JK and CB subsamples, as well as RN subsamples, detected H2O, CO2, O2, H2, SO2, H2S, HCl, NO, OCS, CS2 and other trace gases evolved during pyrolysis. The identity of evolved gases and temperature(s) of evolution can augment mineral detection by CheMin and place constraints on trace volatile-bearing phases present below the CheMin detection limit or those phases difficult to characterize with XRD (e.g., X-ray amorphous phases). Here we will focus on the SAM H2O data, in the context of CheMin analyses, and comparisons to laboratory SAM-like analyses of several phyllosilicate minerals including smectites.

  5. Guidelines for the processing and quality assurance of benthic invertebrate samples collected as part of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Cuffney, T.F.; Gurtz, M.E.; Meador, M.R.

    1993-01-01

    Benthic invertebrate samples are collected as part of the U.S. Geological Survey's National Water-Quality Assessment Program. This is a perennial, multidisciplinary program that integrates biological, physical, and chemical indicators of water quality to evaluate status and trends and to develop an understanding of the factors controlling observed water quality. The Program examines water quality in 60 study units (coupled ground- and surface-water systems) that encompass most of the conterminous United States and parts of Alaska and Hawaii. Study-unit teams collect and process qualitative and semi-quantitative invertebrate samples according to standardized procedures. These samples are processed (elutriated and subsampled) in the field to produce as many as four sample components: large-rare, main-body, elutriate, and split. Each sample component is preserved in 10-percent formalin, and two components, large-rare and main-body, are sent to contract laboratories for further processing. The large-rare component is composed of large invertebrates that are removed from the sample matrix during field processing and placed in one or more containers. The main-body sample component consists of the remaining sample materials (sediment, detritus, and invertebrates) and is subsampled in the field to achieve a volume of 750 milliliters or less. The remaining two sample components, elutriate and split, are used for quality-assurance and quality-control purposes. Contract laboratories are used to identify and quantify invertebrates from the large-rare and main-body sample components according to the procedures and guidelines specified within this document. These guidelines allow the use of subsampling techniques to reduce the volume of sample material processed and to facilitate identifications. These processing procedures and techniques may be modified if the modifications provide equal or greater levels of accuracy and precision. 
The intent of sample processing is to determine the quantity of each taxon present in the semi-quantitative samples or to list the taxa present in qualitative samples. The processing guidelines provide standardized laboratory forms, sample labels, detailed sample processing flow charts, standardized format for electronic data, quality-assurance procedures and checks, sample tracking standards, and target levels for taxonomic determinations. The contract laboratory (1) is responsible for identifications and quantifications, (2) constructs reference collections, (3) provides data in hard copy and electronic forms, (4) follows specified quality-assurance and quality-control procedures, and (5) returns all processed and unprocessed portions of the samples. The U.S. Geological Survey's Quality Management Group maintains a Biological Quality-Assurance Unit, located at the National Water-Quality Laboratory, Arvada, Colorado, to oversee the use of contract laboratories and ensure the quality of data obtained from these laboratories according to the guidelines established in this document. This unit establishes contract specifications, reviews contractor performance (timeliness, accuracy, and consistency), enters data into the National Water Information System-II data base, maintains in-house reference collections, deposits voucher specimens in outside museums, and interacts with taxonomic experts within and outside the U.S. Geological Survey. This unit also modifies the existing sample processing and quality-assurance guidelines, establishes criteria and testing procedures for qualifying potential contract laboratories, identifies qualified taxonomic experts, and establishes voucher collections.

  6. Effect of investigator disturbance in experimental forensic entomology: succession and community composition.

    PubMed

    De Jong, G D; Hoback, W W

    2006-06-01

    Carrion insect succession studies have historically used repeated sampling of single or a few carcasses to produce data, either weighing the carcasses, removing a qualitative subsample of the fauna present, or both, on every visit over the course of decomposition and succession. This study, conducted in a set of related experimental hypotheses with two trials in a single season, investigated the effect that repeated sampling has on insect succession, determined by the number of taxa collected on each visit and by community composition. Each trial lasted at least 21 days, with daily visits on the first 14 days. Rat carcasses used in this study were all placed in the field on the same day, but then either sampled qualitatively on every visit (similar to most succession studies) or ignored until a given day of succession, when they were sampled qualitatively (a subsample) and then destructively sampled in their entirety. Carcasses sampled on every visit were in two groups: those from which only a sample of the fauna was taken and those from which a sample of fauna was taken and the carcass was weighed for biomass determination. Of the carcasses visited only once, the number of taxa in subsamples was compared to the actual number of taxa present when the carcass was destructively sampled to determine if the subsamples adequately represented the total carcass fauna. Data from the qualitative subsamples of those carcasses visited only once were also compared to data collected from carcasses that were sampled on every visit to investigate the effect of the repeated sampling. A total of 39 taxa were collected from carcasses during the study and the component taxa are discussed individually in relation to their role in succession. 
Number of taxa differed on only one visit between the qualitative subsamples and the actual number of taxa present, primarily because the organisms missed by the qualitative sampling were cryptic (hidden deep within body cavities) or rare (only represented by very few specimens). There were no differences discovered between number of taxa in qualitative subsamples from carcasses sampled repeatedly (with or without biomass determinations) and those sampled only a single time. Community composition differed considerably in later stages of decomposition, with disparate communities due primarily to small numbers of rare taxa. These results indicate that the methods used historically for community composition determination in experimental forensic entomology are generally adequate.

  7. Cross-Informant Agreement of Children's Social-Emotional Skills: An Investigation of Ratings by Teachers, Parents, and Students from a Nationally Representative Sample

    ERIC Educational Resources Information Center

    Gresham, Frank M.; Elliott, Stephen N.; Metallo, Sarah; Byrd, Shelby; Wilson, Elizabeth; Cassidy, Kaitlan

    2018-01-01

    This study examines the agreement across informant pairs of teachers, parents, and students regarding the students' social-emotional learning (SEL) competencies. Two student subsamples representative of the social skills improvement system (SSIS) SEL edition rating forms national standardization sample were examined: first, 168 students (3rd to…

  8. Hearing Loss Is Negatively Related to Episodic and Semantic Long-Term Memory but Not to Short-Term Memory

    ERIC Educational Resources Information Center

    Ronnberg, Jerker; Danielsson, Henrik; Rudner, Mary; Arlinger, Stig; Sternang, Ola; Wahlin, Ake; Nilsson, Lars-Goran

    2011-01-01

    Purpose: To test the relationship between degree of hearing loss and different memory systems in hearing aid users. Method: Structural equation modeling (SEM) was used to study the relationship between auditory and visual acuity and different cognitive and memory functions in an age-hetereogenous subsample of 160 hearing aid users without…

  9. Screening ADHD Problems in the Sports Behavior Checklist: Factor Structure, Convergent and Divergent Validity, and Group Differences

    ERIC Educational Resources Information Center

    Clendenin, Aaron A.; Businelle, Michael S.; Kelley, Mary Lou

    2005-01-01

    The Sports Behavior Checklist (SBC) is subjected to a principal components analysis, and subscales are correlated with subscales of the Conners' Revised Parent Form and the Social Skills Rating System. Both of these analyses are conducted to determine the construct validity of the instrument. A subsample of lower socioeconomic status individuals…

  10. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  11. Evaluating Three Dimensions of Environmental Knowledge and Their Impact on Behaviour

    NASA Astrophysics Data System (ADS)

    Braun, Tina; Dierkes, Paul

    2017-09-01

    This research evaluates the development of three environmental knowledge dimensions of secondary school students after participation in a singular 1-day outdoor education programme. Applying a cross-national approach, the system, action-related and effectiveness knowledge levels of students educated in Germany and Singapore were assessed before and after intervention participation. Correlations between single knowledge dimensions and behaviour changes due to the environmental education intervention were examined. The authors applied a pre-, post- and retention test design and developed a unique multiple-choice instrument. Results indicate significant baseline differences in the prevalence of the different knowledge dimensions between subgroups. Both intervention subsamples showed a low presence of all baseline knowledge dimensions. Action-related knowledge levels were higher than those of system and effectiveness knowledge. Subsample-specific differences in performed pro-environmental behaviour were also significant. Both experimental groups showed significant immediate and sustained knowledge increases in the three dimensions after programme participation. Neither of the two control cohorts showed any significant increase in any knowledge dimension. Effectiveness knowledge improved most. The amount of demonstrated environmental actions increased significantly in both intervention groups. Neither control cohort showed shifts in environmental behaviour. Yet, only weak correlations between any knowledge dimension and behaviour could be found.

  12. The Influence of Family Climate and Family Process on Child Development.

    ERIC Educational Resources Information Center

    Bell, Linda G.; Bell, David C.

    This study investigates the relationship between the degree of individuation in families and the personality development of adolescent family members. A subsample of 30 white, middle-class families were chosen for analysis from a larger sample of 99 families. Fifteen families from the subsample had adolescent girls who scored high, and fifteen had…

  13. Determination of element/Ca ratios in foraminifera and corals using cold- and hot-plasma techniques in inductively coupled plasma sector field mass spectrometry

    NASA Astrophysics Data System (ADS)

    Lo, Li; Shen, Chuan-Chou; Lu, Chia-Jung; Chen, Yi-Chi; Chang, Ching-Chih; Wei, Kuo-Yen; Qu, Dingchuang; Gagan, Michael K.

    2014-02-01

    We have developed a rapid and precise procedure for measuring multiple elements in foraminifera and corals by inductively coupled plasma sector field mass spectrometry (ICP-SF-MS) with both cold- [800 W radio frequency (RF) power] and hot- (1200 W RF power) plasma techniques. Our quality control program includes careful subsampling protocols, contamination-free workbench spaces, and a refined plastic-ware cleaning process. Element/Ca ratios are calculated directly from ion beam intensities of 24Mg, 27Al, 43Ca, 55Mn, 57Fe, 86Sr, and 138Ba, using a standard bracketing method. A routine measurement time is 3-5 min per dissolved sample. The matrix effects of nitric acid, and Ca and Sr levels, are carefully quantified and overcome. There is no significant difference between data determined by cold- and hot-plasma methods, but the techniques have different advantages. The cold-plasma technique offers a more stable plasma condition and better reproducibility for ppm-level elements. Long-term 2-sigma relative standard deviations (2-RSD) for repeat measurements of an in-house coral standard are 0.32% for Mg/Ca and 0.43% for Sr/Ca by cold-plasma ICP-SF-MS, and 0.69% for Mg/Ca and 0.51% for Sr/Ca by hot-plasma ICP-SF-MS. The higher sensitivity and enhanced measurement precision of the hot-plasma procedure yields 2-RSD precision for μmol/mol trace elements of 0.60% (Mg/Ca), 9.9% (Al/Ca), 0.68% (Mn/Ca), 2.7% (Fe/Ca), 0.50% (Sr/Ca), and 0.84% (Ba/Ca) for an in-house foraminiferal standard. Our refined ICP-SF-MS technique, which has the advantages of small sample size (2-4 μg carbonate consumed) and fast sample throughput (5-8 samples/hour), should open the way to the production of high precision and high resolution geochemical records for natural carbonate materials.
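
    The standard-bracketing calculation mentioned above can be illustrated with a small sketch. All ion-beam intensities and the standard's certified Mg/Ca value below are hypothetical numbers chosen for illustration only:

```python
def bracketed_ratio(sample_counts, std_before, std_after, std_true_ratio):
    """Standard-bracketing correction for an element/Ca ratio.

    The raw sample intensity ratio is normalized by the mean measured
    ratio of the bracketing standards (run immediately before and
    after the sample), then scaled by the standard's known ratio.
    All numbers used below are illustrative, not from the study.
    """
    raw = sample_counts["Mg"] / sample_counts["Ca"]
    std_meas = 0.5 * (std_before["Mg"] / std_before["Ca"]
                      + std_after["Mg"] / std_after["Ca"])
    return raw / std_meas * std_true_ratio

# Hypothetical ion-beam intensities (counts/s) for 24Mg and 43Ca.
sample = {"Mg": 1.05e6, "Ca": 2.0e8}
std1 = {"Mg": 1.00e6, "Ca": 2.0e8}
std2 = {"Mg": 1.02e6, "Ca": 2.0e8}
mg_ca = bracketed_ratio(sample, std1, std2, std_true_ratio=5.00)  # mmol/mol
```

    Bracketing in this way cancels slow drift in instrument sensitivity between standard runs.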

  14. Assessment of EchoMRI-AH versus dual-energy X-ray absorptiometry to measure human body composition.

    PubMed

    Galgani, J E; Smith, S R; Ravussin, E

    2011-09-01

    The sensitivity to detect small changes in body composition (fat mass and fat-free mass) largely depends on the precision of the instrument. We compared EchoMRI-AH and dual-energy X-ray absorptiometry (DXA) (Hologic QDR-4500A) for estimating fat mass in 301 volunteers. Body composition was evaluated in 136 males and 165 females with a large range of body mass index (BMI) (19-49 kg m(-2)) and age (19-91 years old) using DXA and EchoMRI-AH. In a subsample of 13 lean (BMI=19-25 kg m(-2)) and 21 overweight/obese (BMI>25 kg m(-2)) individuals, within-subject precision was evaluated from repeated measurements taken within 1 h (n=3) and 1 week apart (mean of three measurements taken on each day). Using Bland-Altman analysis, we compared the mean of the fat mass measurements from the two instruments against the difference between them. We found that EchoMRI-AH quantified a larger amount of fat than DXA in non-obese (BMI<30 kg m(-2) (1.1 kg, 95% confidence interval (CI(95)):-3.7 to 6.0)) and obese (BMI ≥ 30 kg m(-2) (4.2 kg, CI(95):-1.4 to 9.8)) participants. Within-subject precision (coefficient of variation, %) in fat mass measured within 1 h was markedly better for EchoMRI-AH than for DXA (<0.5 versus <1.5%, respectively; P<0.001). However, 1-week within-subject variability showed similar values for both instruments (<2.2%; P=0.15). EchoMRI-AH yielded greater fat mass values than DXA (Hologic QDR-4500A), particularly in fatter subjects. EchoMRI-AH and DXA showed similar 1-week precision when fat mass was measured in both lean and overweight/obese individuals.
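
    The Bland-Altman comparison used above can be sketched as follows. The paired fat-mass readings are made-up illustrative numbers, not data from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two instruments.

    Returns the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the paired differences), the
    standard way to compare two measurement methods.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired fat-mass readings (kg) from two instruments.
echo = [20.1, 25.3, 30.2, 35.8, 40.5]
dxa = [19.0, 24.1, 29.5, 34.2, 38.9]
bias, (lo, hi) = bland_altman(echo, dxa)
```

    A positive bias here would correspond to the first instrument reading systematically higher, as reported for EchoMRI-AH versus DXA.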

  15. Estimating network effect in geocenter motion: Applications

    NASA Astrophysics Data System (ADS)

    Zannat, Umma Jamila; Tregoning, Paul

    2017-10-01

    The network effect is the error associated with the subsampling of the Earth surface by space geodetic networks. It is an obstacle toward the precise measurement of geocenter motion, that is, the relative motion between the center of mass of the Earth system and the center of figure of the Earth surface. In a complementary paper, we proposed a theoretical approach to estimate the magnitude of this effect from the displacement fields predicted by geophysical models. Here we evaluate the effectiveness of our estimate for two illustrative physical processes: coseismic displacements inducing instantaneous changes in the Helmert parameters and elastic deformation due to surface water movements causing secular drifts in those parameters. For the first, we consider simplified models of the 2004 Sumatra-Andaman and the 2011 Tōhoku-Oki earthquakes, and for the second, we use the observations of the Gravity Recovery and Climate Experiment, complemented by an ocean model. In both case studies, it is found that the magnitude of the network effect, even for a large global network, is often as large as the magnitude of the changes in the Helmert parameters themselves. However, we also show that our proposed modification to the definition of the center of network frame to include weights proportional to the area of the Earth surface that the stations represent can significantly reduce the network effect in most cases.

  16. High levels of grass pollen inside European dairy farms: a role for the allergy-protective effects of environment?

    PubMed

    Sudre, B; Vacheyrou, M; Braun-Fahrländer, C; Normand, A-C; Waser, M; Reboux, G; Ruffaldi, P; von Mutius, E; Piarroux, R

    2009-07-01

    There is evidence of an allergy-protective effect in children raised on farms. It has been assumed that microbial exposure may confer this protection. However, little attention has been given to pollen levels in farms and to concomitant microbiological exposure, and indoor pollen concentrations have never been precisely quantified. The kinetics of pollen in dairy farms were studied in a pilot study (n = 9), and exposure in a sub-sample of the ongoing European birth cohort PASTURE (n = 106). Measurements of viable microorganisms and pollen were performed in air samples. Multivariate regression analyses were run to identify factors that modulate the pollen concentration. Indoor pollen counts (95% of Poaceae fragments and grains) were significantly higher in winter than in summer (P = 0.001) and ranged from 858 to 11 265 counts/m(3) during feeding in winter, thus exceeding typical outdoor levels during the pollen season. The geometric mean in French farms was significantly higher than in German and Swiss farms (7 534, 992 and 1 079 counts/m(3), respectively). The presence of a ventilation system and loose housing systems significantly reduced indoor pollen levels. The rise in pollen concentration after feeding was accompanied by an increase in fungal and actinomycetal levels, whereas the concentration of bacteria was not associated with feeding. Farmers and their children who attend cowsheds during feeding sessions are exposed perennially to high pollen concentrations. It might be speculated that the combined permanent exposure to microbes from livestock and grass pollen may initiate tolerance in children living on a farm.

  17. Continuity vs. the Crowd: Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
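
    The bootstrap subsampling idea can be sketched as follows, using a synthetic 15-min flow series. The gauge record, the flood peak, and the one-reading-per-day scheme below are illustrative assumptions, not the study's actual data or exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 15-min flow record (m^3/s) standing in for a gauge:
# a noisy baseflow series with one brief high-flow peak.
n = 7 * 96                      # one week of 15-min values
flow = 5 + rng.gamma(2.0, 1.0, n)
flow[300:310] += 80             # short flood peak

def subsample_stats(flow, step, iters, rng):
    """Mimic lower-frequency citizen observations: repeatedly keep
    one reading per `step`-sample window (random phase each
    iteration, so phases may repeat across iterations) and compute
    the min, max, and mean flow of each subsampled record."""
    stats = []
    for _ in range(iters):
        phase = rng.integers(0, step)
        sub = flow[phase::step]
        stats.append((sub.min(), sub.max(), sub.mean()))
    return np.array(stats)

daily = subsample_stats(flow, 96, 50, rng)   # ~one observation per day
```

    Comparing the spread of `daily` against the full-record statistics shows why maxima degrade fastest: a subsample can only miss the short peak, never exceed it.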

  18. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    ERIC Educational Resources Information Center

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…

  19. Fundamental techniques for resolution enhancement of average subsampled images

    NASA Astrophysics Data System (ADS)

    Shen, Day-Fann; Chiu, Chui-Wen

    2012-07-01

    Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in its LR version through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, used with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
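
    The averaging-and-subsampling acquisition model referred to above can be written down directly. This sketch shows only the HR-to-LR degradation step; the proposed MLF and ICP restoration stages are not reproduced:

```python
import numpy as np

def average_subsample(hr, factor=2):
    """Fundamental averaging-and-subsampling HR -> LR model:
    each LR pixel is the mean of a factor x factor HR block.
    Edge rows/columns that do not fill a block are cropped."""
    hr = np.asarray(hr, dtype=float)
    h = (hr.shape[0] // factor) * factor
    w = (hr.shape[1] // factor) * factor
    hr = hr[:h, :w]
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

hr = np.arange(16, dtype=float).reshape(4, 4)
lr = average_subsample(hr)
```

    Each LR pixel retains the block mean, which is why partial HR information survives (attenuated) in the LR image.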

  20. An approach to unbiased subsample interpolation for motion tracking.

    PubMed

    McCormick, Matthew M; Varghese, Tomy

    2013-04-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced bias approach to subsample displacement estimations that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with other parabolic and cosine interpolation methods; it is found that the strain SNR ratio is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
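
    A one-dimensional sketch of the windowed-sinc-plus-optimizer idea follows. A Lanczos window is used as in the article, but a simple coarse-to-fine grid refinement stands in for the Nelder-Mead or gradient-descent optimizers named above, and the test signal is synthetic:

```python
import numpy as np

def lanczos(x, a=4):
    """Lanczos (sinc-windowed sinc) kernel with radius a samples."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) > a] = 0.0
    return out

def subsample_peak(y, a=4):
    """Locate the peak of a sampled similarity function with
    subsample precision: reconstruct it with a windowed-sinc
    interpolant, then refine around the integer argmax with a
    coarse-to-fine grid search (optimizer choice is a stand-in)."""
    k0 = int(np.argmax(y))
    idx = np.arange(len(y))
    interp = lambda x: float(np.dot(y, lanczos(x - idx, a)))
    lo, hi = k0 - 1.0, k0 + 1.0
    for _ in range(20):
        grid = np.linspace(lo, hi, 9)
        best = grid[int(np.argmax([interp(g) for g in grid]))]
        step = (hi - lo) / 8
        lo, hi = best - step, best + step
    return best

# Samples of a smooth bump whose true peak lies between grid points.
true_peak = 5.3
x = np.arange(11)
y = np.exp(-0.5 * (x - true_peak) ** 2)
est = subsample_peak(y)
```

    Unlike parabolic or cosine peak fitting, the sinc interpolant does not assume a particular peak shape, which is the source of the reduced bias discussed above.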

  1. Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival.

    PubMed

    Ishwaran, Hemant; Lu, Min

    2018-06-04

    Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
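
    The m-out-of-n subsampling variance estimator can be sketched in its generic form. This is the textbook construction for a root-n-consistent statistic, not the authors' VIMP-specific code; the data and statistic below are illustrative:

```python
import numpy as np

def subsample_var(data, stat, m, B, rng):
    """Subsampling variance estimate for a sqrt(n)-consistent
    statistic: compute `stat` on B random m-out-of-n subsamples
    (each drawn without replacement) and rescale the variance of
    the subsample statistics by m/n."""
    n = len(data)
    vals = np.array([stat(rng.choice(data, size=m, replace=False))
                     for _ in range(B)])
    return (m / n) * vals.var(ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 1000)   # true Var(sample mean) = 4/1000
v = subsample_var(x, np.mean, m=100, B=200, rng=rng)
```

    The same machinery applies when `stat` is a VIMP value computed on the subsample, which is what makes the approach general across regression, classification, and survival settings.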

  2. Long memory volatility of gold price returns: How strong is the evidence from distinct economic cycles?

    NASA Astrophysics Data System (ADS)

    Bentes, Sonia R.

    2016-02-01

    This paper examines the long memory behavior in the volatility of gold returns using daily data for the period 1985-2009. We divided the whole sample into eight sub-samples in order to analyze the robustness and consistency of our results during different crisis periods. This constitutes our main contribution. We cover four major world crises, namely, (i) the US stock market crash of 1987; (ii) the Asian financial crisis of 1997; (iii) the World Trade Center terrorist attack of 2001 and finally, (iv) the sub-prime crisis of 2007, in order to investigate how the fractionally integrated parameter of the FIGARCH(1,d,1) model evolves over time. Our findings are twofold: (i) there is evidence of long memory in the conditional variance over the whole sample period; (ii) when we consider the sub-sample analysis, the results show mixed evidence. Thus, for the 1985-2003 period the long memory parameter is positive and statistically significant in the pre-crisis sub-samples, and there is no evidence of long memory in the crisis sub-sample periods; however the reverse pattern occurs for the 2005-2009 period. This highlights the unique characteristics of the 2007 sub-prime crisis.

  3. Temporal variations of potential fecundity of southern blue whiting (Micromesistius australis australis) in the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita

    2017-08-01

    Fecundity is a key aspect of fish reproductive biology because it relates directly to total egg production. Yet, despite such importance, fecundity estimates are lacking or scarce for several fish species. The gravimetric method is the most widely used for estimating fecundity, essentially scaling oocyte density up to the ovary weight. It is a relatively simple and precise technique, but also time consuming because it requires counting all oocytes in an ovary subsample. The auto-diametric method, on the other hand, is a relatively new and rapid alternative, because it requires only an estimate of mean oocyte density derived from mean oocyte diameter. Using the extensive database available from the commercial fishery and designed surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared estimates of fecundity obtained with the gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models with predictors drawn from maternal characteristics such as female size, condition factor, oocyte size, and gonadosomatic index. A global, time-invariant auto-diametric equation was evaluated using a simulation procedure based on a non-parametric bootstrap. Results indicated no significant differences in fecundity estimates between the gravimetric and auto-diametric methods (p > 0.05). Simulation showed that applying a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity for this species. Temporal variations in fecundity were explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. We also highlight the relevance of choosing an appropriate sampling period to conduct maturity studies and ensure precise estimates of fecundity for this species.

  4. Associated Effects of Automated Essay Evaluation Software on Growth in Writing Quality for Students with and without Disabilities

    ERIC Educational Resources Information Center

    Wilson, Joshua

    2017-01-01

    The present study examined growth in writing quality associated with feedback provided by an automated essay evaluation system called PEG Writing. Equal numbers of students with disabilities (SWD) and typically-developing students (TD) matched on prior writing achievement were sampled (n = 1196 total). Data from a subsample of students (n = 655)…

  5. Bootstrap rolling window estimation approach to analysis of the Environment Kuznets Curve hypothesis: evidence from the USA.

    PubMed

    Aslan, Alper; Destek, Mehmet Akif; Okumus, Ilyas

    2018-01-01

    This study aims to examine the validity of the inverted U-shaped Environmental Kuznets Curve by investigating the relationship between economic growth and environmental pollution for the period from 1966 to 2013 in the USA. Previous studies were based on the assumption of parameter stability, that is, that parameters do not change over the full sample. This study uses a bootstrap rolling window estimation method to detect possible changes in causal relations and also to obtain parameters for sub-sample periods. The results show that the parameter of economic growth has an increasing trend in the 1982-1996 sub-sample periods and a decreasing trend in the 1996-2013 sub-sample periods. Therefore, the existence of an inverted U-shaped Environmental Kuznets Curve is confirmed in the USA.
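
    The rolling-window idea of re-estimating parameters on sub-sample periods can be sketched with plain OLS. The paper's bootstrap causality machinery is not reproduced, and the series below are synthetic, constructed so the true slope changes midway:

```python
import numpy as np

def rolling_slope(x, y, window):
    """Rolling-window OLS slope of y on x: re-estimate the
    coefficient on each `window`-length sub-sample to reveal
    whether the parameter drifts over time (the parameter-
    stability question discussed above)."""
    slopes = []
    for start in range(len(x) - window + 1):
        xs, ys = x[start:start + window], y[start:start + window]
        slopes.append(np.polyfit(xs, ys, 1)[0])
    return np.array(slopes)

# Synthetic series whose true slope switches from 2 to -1 midway.
t = np.arange(60, dtype=float)
x = t
y = np.where(t < 30, 2 * t, 60 - (t - 30))
s = rolling_slope(x, y, window=10)
```

    A full-sample fit would average the two regimes together; the rolling estimates recover each regime's slope in the windows that lie entirely inside it.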

  6. Attenuation of species abundance distributions by sampling

    PubMed Central

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

    Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge to answer scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort for investigating large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
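
    The attenuation of an SAD by random sub-sampling can be illustrated with binomial thinning, where each individual is retained independently with probability p. This is a standard model of random sampling assumed here for illustration, and the community abundances are made up:

```python
import numpy as np

def subsample_sad(abundances, p, rng):
    """Attenuate a species abundance distribution by random
    sub-sampling: each individual is retained independently with
    probability p (binomial thinning), so rare species tend to
    drop out of the observed sample entirely."""
    abundances = np.asarray(abundances)
    return rng.binomial(abundances, p)

rng = np.random.default_rng(42)
community = np.array([1000, 100, 10, 3, 1, 1])   # true abundances
sample = subsample_sad(community, p=0.1, rng=rng)
richness_true = int((community > 0).sum())
richness_obs = int((sample > 0).sum())
```

    Observed richness can only stay the same or fall under thinning, which is the sampling bias on the SAD described above.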

  7. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    USGS Publications Warehouse

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

    Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to improve constraint of estimates of the rate of sediment deposition in the basin. Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution.
Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was used for dating. This report documents site characteristics; field and laboratory methods; and results of the analyses of each core section and subsample of these sediment cores, including associated quality-assurance and quality-control data.

  8. Validation of the Weight Concerns Scale Applied to Brazilian University Students.

    PubMed

    Dias, Juliana Chioda Ribeiro; da Silva, Wanderson Roberto; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2015-06-01

    The aim of this study was to evaluate the validity and reliability of the Portuguese version of the Weight Concerns Scale (WCS) when applied to Brazilian university students. The scale was completed by 1084 university students from Brazilian public education institutions. A confirmatory factor analysis was conducted. The stability of the model in independent samples was assessed through multigroup analysis, and the invariance was estimated. Convergent, concurrent, divergent, and criterion validities as well as internal consistency were estimated. Results indicated that the one-factor model presented an adequate fit to the sample and values of convergent validity. The concurrent validity with the Body Shape Questionnaire and divergent validity with the Maslach Burnout Inventory for Students were adequate. Internal consistency was adequate, and the factorial structure was invariant in independent subsamples. The results present a simple and short instrument capable of precisely and accurately assessing concerns with weight among Brazilian university students. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Statistical U-Th dating results of speleothem from south Europe and the orbital-scale implication

    NASA Astrophysics Data System (ADS)

    Hu, H. M.

    2016-12-01

    Reconstructing hydroclimate in the Mediterranean on an orbital time scale helps improve our understanding of the interaction between orbital forcing and Northern Hemisphere climate. We collected 180 speleothem subsamples from Observatoire Cave (Monaco), Prince Cave (southern France), Chateaueuf Cave (southern France), Arago Cave (southern France), and Basura Cave (northern Italy) during 2013 to 2015 C.E. Uranium-thorium dating was conducted in the High-Precision Mass Spectrometry and Environment Change Laboratory (HISPEC), National Taiwan University. The results show that most of the speleothems formed during interglacial periods, particularly in marine isotope stages (MIS) 1, 5, and 11. However, only a few speleothems were dated to between 180 and 250 thousand years ago (ka). This interval is approximately equivalent to MIS 7, a period with contrasting orbital parameters compared to MIS 1, 5, and 11. Our statistical dating result implies that orbital-scale humid/dry conditions in southern Europe could be dominantly controlled by orbital forcing.

  10. How Central and Connected Am I in My Family? Family-Based Social Capital of Individuals with Intellectual Disability

    ERIC Educational Resources Information Center

    Widmer, E. D.; Kempf-Constantin, N.; Robert-Tissot, C.; Lanzi, F.; Carminati, G. Galli

    2008-01-01

    Using social network methods, this article explores the ways in which individuals with intellectual disability (ID) perceive their family contexts and the social capital that they provide. Based on a subsample of 24 individuals with ID, a subsample of 24 individuals with ID and psychiatric disorders, and a control sample of 24 pre-graduate and…

  11. Ergovaline Stability in Tall Fescue Based on Sample Handling and Storage Methods

    NASA Astrophysics Data System (ADS)

    Lea, Krista; Smith, Lori; Gaskill, Cynthia; Coleman, Robert; Smith, S.

    2014-09-01

    Ergovaline is an ergot alkaloid produced by the endophyte Neotyphodium coenophialum (Morgan-Jones and Gams) found in tall fescue (Schedonorus arundinacea (Schreb.) Dumort.) and blamed for a multitude of livestock disorders. Ergovaline is known to be unstable and affected by many variables. The objective of this study was to determine the effect of sample handling and storage on the stability of ergovaline in tall fescue samples. Fresh tall fescue was collected from a horse farm in central Kentucky at three harvest dates and transported on ice to the University of Kentucky Veterinary Diagnostic Laboratory. Plant material was frozen in liquid nitrogen, milled and mixed before being allocated into different sub-samples. Three sub-samples were assigned to each of 14 sample handling or storage treatments. Sample handling included increased heat and UV light to simulate transportation in a vehicle and on ice in a cooler per standard transportation recommendations. Storage conditions included storage at 22 °C, 5 °C and −20 °C for up to 28 days. Each sub-sample was then analyzed for ergovaline concentration using HPLC with fluorescence detection, and this experiment was repeated for each harvest date. Sub-samples exposed to UV light and heat lost a significant fraction of ergovaline in 2 hours, while sub-samples stored on ice in a cooler showed no change in ergovaline in 2 hours. All sub-samples stored at 22 °C, 5 °C and −20 °C lost a significant fraction of ergovaline in the first 24 hours of storage. There was little change in ergovaline in the freezer (−20 °C) after the first 24 hours and up to 28 days of storage, but intermittent losses were observed at 22 °C and 5 °C. To obtain results that most closely represent levels in the field, all samples should be transported on ice to the laboratory immediately after harvest for same-day analysis. If immediate testing is not possible, samples should be stored at −20 °C until analysis.

  12. Detectability of Granger causality for subsampled continuous-time neurophysiological processes.

    PubMed

    Barnett, Lionel; Seth, Anil K

    2017-01-01

    Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. 
On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity from neurophysiological recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
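
    The exponential decay of detectability with sampling interval can be illustrated with a simple discrete-time sketch: subsampling an AR(1) process with coefficient phi at every k-th point yields an AR(1) with coefficient phi**k, so the lagged dependence available to a Granger-style predictor shrinks exponentially as the sample interval grows past the causal time scale. This toy model stands in for the paper's continuous-time distributed-lag analysis, which it does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_series(phi, n, rng):
    """Simulate a zero-mean AR(1) process x_t = phi * x_{t-1} + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def lag1_coefficient(x):
    """OLS estimate of the lag-1 autoregressive coefficient."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

phi = 0.9
x = ar1_series(phi, 200_000, rng)
# Taking every k-th sample turns coefficient phi into phi**k,
# so the recoverable lagged dependence decays exponentially in k.
estimates = [lag1_coefficient(x[::k]) for k in (1, 2, 4)]
```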

  13. Impact of methadone with versus without drug abuse counseling on HIV risk: 4- and 12-month findings from a clinical trial.

    PubMed

    Kelly, Sharon M; Schwartz, Robert P; O'Grady, Kevin E; Gandhi, Devang; Jaffe, Jerome H

    2012-06-01

    Human immunodeficiency virus (HIV)-risk behaviors were examined at 4- and 12-month follow-up for 230 newly admitted methadone patients randomly assigned to receive either methadone only (n = 99) or methadone with drug abuse counseling (n = 131) in the first 4 months of treatment. The AIDS Risk Assessment was administered at baseline (treatment entry) and at 4- and 12-month follow-up. Linear mixed model analysis examined changes in HIV drug- and sex-risk behaviors over the 12 months in the total sample, drug-risk behaviors in the subsample that reported injecting drugs at baseline (n = 110), and sex-risk behaviors in the subsample that reported engaging in unprotected sex at baseline (n = 130). Significant decreases over time were found in the frequencies of injecting, injecting with other injectors, and sharing cooker, cotton, or rinse water in the total sample and the injector subsample (P < 0.05). Decreases were also found in the frequencies of having sex without a condom either with someone who was not a spouse or primary partner or while high (P < 0.05) in the total sample and the frequencies of having sex without a condom and having sex without a condom while high in the unprotected-sex subsample (P < 0.05). No significant treatment group main effects or Treatment Group × Time interaction effects were found in any of the HIV-risk behaviors in the total sample or either subsample (P > 0.05). During the first 12 months of treatment, providing drug abuse counseling with methadone compared with providing methadone alone was not associated with significant changes in HIV-risk behaviors for methadone maintenance patients.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Browning, Nigel D.

    Traditionally, microscopists have worked with the Nyquist-Shannon theory of sampling, which states that to be able to reconstruct the image fully it needs to be sampled at a rate of at least twice the highest spatial frequency in the image. This sampling rate assumes that the image is sampled at regular intervals and that every pixel contains information that is crucial for the image (it even assumes that noise is important). Images in general, and especially low dose S/TEM images, contain significantly less information than can be encoded by a grid of pixels (which is why image compression works). Mathematically speaking, the image data has a low dimensional or sparse representation. Through the application of compressive sensing methods [1,2,3] this representation can be found using pre-designed measurements that are usually random for implementation simplicity. These measurements and the compressive sensing reconstruction algorithms have the added benefit of reducing noise. This reconstruction approach can be extended into higher dimensions, whereby the random sampling in each 2-D image can be extended into: a sequence of tomographic projections (i.e. tilt images); a sequence of video frames (i.e. incorporating temporal resolution and dynamics); spectral resolution (i.e. energy filtering an image to see the distribution of elements); and ptychography (i.e. sampling a full diffraction image at each location in a 2-D grid across the sample). This approach has been employed experimentally for materials science samples requiring low-dose imaging [2], and can be readily applied to biological samples. Figure 1 shows the resolution possible in a complex biological system, mouse pancreatic islet beta cells [4], when tomogram slices are reconstructed using subsampling. Reducing the number of pixels (1/6 pix and 1/3*1/3) shows minimal degradation compared to the reconstructions using all pixels (all data and 1/3 tilt).
Subsampling 1/6 of the tilts (1/6 of the overall dose), however, degrades the reconstruction to the point that the cellular structures cannot be identified. Using 1/3 of both the pixels and the tilts provides a high quality image at 1/9 the overall dose even for this most basic and rapid demonstration of the CS methods. Figure 2 demonstrates the theoretical tomogram reconstruction quality (vertical axis) as undersampling (horizontal axis) is increased; we examined subsampling pixels and tilt-angles individually and a combined approach in which both pixels and tilts are sub-sampled. Note that subsampling pixels maintains high quality reconstructions (solid lines). Using the inpainting algorithm to obtain tomograms can automatically reduce the dose applied to the system by an order of magnitude. Perhaps the best way to understand the impact is to consider that by using inpainting (and with minimal hardware changes), a sample that can normally withstand a dose of ~10 e/Å2 can potentially be imaged with an “equivalent quality” to a dose level of 103 e/Å2. To put this in perspective, this is approaching the dose level used for the most advanced images, in terms of spatial resolution, for inorganic systems. While there are issues for biological specimens beyond dose (structural complexity being the most important one), this sampling approach allows the methods that are traditionally used for materials science to be applied to biological systems [5]. References: [1] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), pp. 41. [2] L Kovarik, A Stevens, A Liyu et al. Appl. Phys. Lett. 109, 164102 (2016) [3] A Stevens, L Kovarik, P Abellan et al. Adv. Structural and Chemical Imaging 1(10), (2015), pp. 1. [4] MD Guay, W Czaja, MA Aronova et al. Scientific Reports 6, 27614 (2016) [5] Supported by the Chemical Imaging, Signature Discovery, and Analytics in Motion Initiatives at PNNL. PNNL is operated by Battelle Memorial Inst. for the US DOE; contract DE-AC05-76RL01830.
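
    The pixel-subsampling reconstructions discussed above can be caricatured with a toy inpainting scheme: measured pixels are kept fixed and missing pixels are filled in by repeated neighbour averaging. This is a hypothetical stand-in for the compressive-sensing solvers cited in [1-3], which use sparse regularisation rather than plain diffusion:

```python
import numpy as np

rng = np.random.default_rng(2)

def inpaint(image, mask, iterations=200):
    """Fill unmeasured pixels (mask == False) by repeated 4-neighbour
    averaging, keeping measured pixels (mask == True) fixed."""
    out = np.where(mask, image, image[mask].mean())
    for _ in range(iterations):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out = np.where(mask, image, avg)
    return out

# Smooth synthetic "specimen" measured at one third of its pixels
yy, xx = np.mgrid[0:32, 0:32]
truth = np.sin(xx / 6.0) + np.cos(yy / 8.0)
mask = rng.random(truth.shape) < 1.0 / 3.0
recon = inpaint(truth, mask)
err = np.abs(recon - truth).mean()
```

    Even this crude diffusion fill recovers a smooth image far better than leaving the missing two thirds of the pixels at a constant value, which is the intuition behind recovering full tomograms from subsampled pixels and tilts.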

  15. Friendship chemistry: An examination of underlying factors.

    PubMed

    Campbell, Kelly; Holderness, Nicole; Riggs, Matt

    2015-06-01

    Interpersonal chemistry refers to a connection between two individuals that exists upon first meeting. The goal of the current study is to identify beliefs about the underlying components of friendship chemistry. Individuals respond to an online Friendship Chemistry Questionnaire containing items that are derived from interdependence theory and the friendship formation literature. Participants are randomly divided into two subsamples. A principal axis factor analysis with promax rotation is performed on subsample 1 and produces 5 factors: Reciprocal candor, mutual interest, personableness, similarity, and physical attraction. A confirmatory factor analysis is conducted using subsample 2 and provides support for the 5-factor model. Participants with agreeable, open, and conscientious personalities more commonly report experiencing friendship chemistry, as do those who are female, young, and European/white. Responses from participants who have never experienced chemistry are qualitatively analyzed. Limitations and directions for future research are discussed.

  16. Psychometric Analysis of the Revised Memory and Behavior Problems Checklist: Factor Structure of Occurrence and Reaction Ratings

    PubMed Central

    Roth, David L.; Gitlin, Laura N.; Coon, David W.; Stevens, Alan B.; Burgio, Louis D.; Gallagher-Thompson, Dolores; Belle, Steven H.; Burns, Robert

    2008-01-01

    A modified version of the Revised Memory and Behavior Problems Checklist (RMBPC; L. Teri et al., 1992) was administered across 6 different sites to 1,229 family caregivers of community-dwelling adults with dementia. The total sample was divided randomly into 2 subsamples. Principal components analyses on occurrence responses and reaction ratings from the first subsample resulted in a 3-factor solution that closely resembled the originally proposed dimensions (memory-related problems, disruptive behaviors, and depression). Confirmatory factor analyses on data from the second subsample indicated adequate fit for the 3-factor model. Correlations with other caregiver and care-recipient measures supported the convergent and discriminant validity of the RMBPC measures. In addition, female caregivers and White caregivers reported more problems, on average, than male caregivers and African American caregivers, respectively. PMID:14692875
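
    The random split into two subsamples used here (and in several of the other factor-analysis studies in this list) is easy to sketch: respondents are randomly partitioned, one half for the exploratory analysis and the other for confirmation. A generic illustration (the original analyses were of course run in dedicated statistical software):

```python
import numpy as np

rng = np.random.default_rng(5)

def split_half(n_respondents, rng):
    """Randomly divide respondent indices into two disjoint subsamples:
    one for exploratory analysis, one for confirmatory analysis."""
    idx = rng.permutation(n_respondents)
    half = n_respondents // 2
    return idx[:half], idx[half:]

# 1,229 caregivers, as in the RMBPC study above
calibration, validation = split_half(1229, rng)
```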

  17. PULSAR SIGNAL DENOISING METHOD BASED ON LAPLACE DISTRIBUTION IN NO-SUBSAMPLING WAVELET PACKET DOMAIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang

    2016-11-01

    In order to improve the denoising of pulsar signals, a new denoising method is proposed in the no-subsampling wavelet packet domain based on a local Laplace prior model. First, we characterize the wavelet packet coefficient distribution of the true noise-free pulsar signal and construct a Laplace probability density function model for the true signal’s wavelet packet coefficients. Then, we estimate the denoised wavelet packet coefficients from the noisy pulsar wavelet coefficients using the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
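
    The maximum a posteriori step has a well-known closed form in the simplest setting: with additive Gaussian noise of variance sigma² and a zero-mean Laplace prior of scale b, the MAP coefficient estimate is soft thresholding with threshold sigma²/b. The sketch below shows that generic form, not the paper's local-prior variant:

```python
import numpy as np

def laplace_map_shrink(coeffs, noise_var, laplace_scale):
    """MAP estimate of transform coefficients under a Laplace prior and
    Gaussian noise: soft thresholding at noise_var / laplace_scale."""
    threshold = noise_var / laplace_scale
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

noisy = np.array([3.0, -0.5, 1.5, -4.0])
shrunk = laplace_map_shrink(noisy, noise_var=1.0, laplace_scale=1.0)
# Coefficients below the threshold are zeroed; larger ones shrink toward 0.
```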

  18. Extremely Low Frequency (ELF) Communications System Ecological Monitoring Program: Summary of 1986 Progress.

    DTIC Science & Technology

    1987-12-01

    assessment of data collection techniques; quantification of temporal and spatial patterns of variables; assessment of end point variability... nutrient variables are also being examined as covariates. Development of a model to test for differences in growth patterns is continuing. At each of... condition. These variables are recorded at the end of each growing season. For evaluation of height growth patterns, a subsample of 100 seedlings per

  19. The hyacinth project

    NASA Astrophysics Data System (ADS)

    Francis, T.

    2003-04-01

    HYACINTH is the acronym for "Development of HYACE tools in new tests on Hydrates". The project is being carried out by a consortium of six companies and academic institutions from Germany, The Netherlands and the United Kingdom. It is a European Framework Five project whose objective is to bring the pressure corers developed in the earlier HYACE project, together with new core handling technology developed in the HYACINTH project, to the operational stage. Our philosophy is that if all one does with a pressure core is to bleed off the gas it contains, a major scientific opportunity has been missed. The current system enables pressure cores to be acquired, then transferred, without loss of pressure, into laboratory chambers so that they can be geophysically logged. The suite of equipment - HYACE Rotary Corer (HRC), Fugro Pressure Corer (FPC), Shear Transfer Chamber (STC), Logging Chamber (LC), Storage Chamber (SC) and Vertical Multi-Sensor Core Logger (V-MSCL) - will be briefly described. Other developments currently in progress to extend the capabilities of the system will be summarised: - to allow electrical resistivity logging of the pressure cores - to enable pressurised sub-samples to be taken from the cores - to facilitate microbiological experiments on pressurised sub-samples The first scientific results obtained with the HYACE/HYACINTH technology were achieved on ODP Leg 204 and are the subject of another talk at this meeting.

  20. Children's Depression Screener (ChilD-S): Development and Validation of a Depression Screening Instrument for Children in Pediatric Care

    ERIC Educational Resources Information Center

    Fruhe, Barbara; Allgaier, Antje-Kathrin; Pietsch, Kathrin; Baethmann, Martina; Peters, Jochen; Kellnar, Stephan; Heep, Axel; Burdach, Stefan; von Schweinitz, Dietrich; Schulte-Korne, Gerd

    2012-01-01

    The aim of the present study was to develop and validate the Children's Depression Screener (ChilD-S) for use in pediatric care. In two pediatric samples, children aged 9-12 (NI = 200; NII = 246) completed an explorative item pool (subsample I) and a revised item pool (subsample II). Diagnostic accuracy of each of the 22 items from the revised…

  1. An Approach to Unbiased Subsample Interpolation for Motion Tracking

    PubMed Central

    McCormick, Matthew M.; Varghese, Tomy

    2013-01-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced bias approach to subsample displacement estimations that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder–Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with other parabolic and cosine interpolation methods; it is found that the strain SNR ratio is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique. PMID:23493609
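
    For context, the parabolic interpolation that the article benchmarks against can be written in a few lines: fit a parabola through the similarity-function peak and its two neighbours and read off the vertex location as a fractional-sample offset. The article's windowed-sinc approach with numerical optimisation replaces this because the parabolic estimator is biased on real data; on an exactly quadratic peak, as in the toy check below, it is exact:

```python
def parabolic_subsample_offset(c_minus, c_peak, c_plus):
    """Subsample peak offset (in fractions of a sample, -0.5..0.5) from
    three similarity-function samples around the integer-sample peak."""
    denom = c_minus - 2.0 * c_peak + c_plus
    return 0.5 * (c_minus - c_plus) / denom

# A pure quadratic with its true peak at +0.25 samples is recovered exactly
true_offset = 0.25
samples = [-(k - true_offset) ** 2 for k in (-1, 0, 1)]
est = parabolic_subsample_offset(*samples)
```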

  2. Rasch analysis of the Rosenberg Self-Esteem Scale with African Americans.

    PubMed

    Chao, Ruth Chu-Lien; Vidacovich, Courtney; Green, Kathy E

    2017-03-01

    Effectively diagnosing African Americans' self-esteem has posed an unresolved challenge. To address this assessment issue, we conducted exploratory factor analysis and Rasch analysis to assess the psychometric characteristics of the Rosenberg Self-Esteem Scale (RSES, Rosenberg, 1965) for African American college students. The dimensional structure of the RSES was first identified with the first subsample (i.e., calibration subsample) and then held up under cross-validation with a second subsample (i.e., validation subsample). Exploratory factor analysis and Rasch analysis both supported unidimensionality of the measure, with that finding replicated for a random split of the sample. Response scale use was generally appropriate, items were endorsed at a high level reflecting high levels of self-esteem, and person separation and reliability of person separation were adequate, and reflected results similar to those found in prior research. However, as some categories were infrequently used, we also collapsed scale points and found a slight improvement in scale and item indices. No differential item functioning was found by sex or having received professional assistance versus not; there were no mean score differences by age group, marital status, or year in college. Two items were seen as problematic. Implications for theory and research on multicultural mental health are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Interactive radiographic image retrieval system.

    PubMed

    Kundu, Malay Kumar; Chowdhury, Manish; Das, Sudeb

    2017-02-01

    Content based medical image retrieval (CBMIR) systems enable fast diagnosis through quantitative assessment of the visual information and have been an active research topic over the past few decades. Most state-of-the-art CBMIR systems suffer from various problems: they are computationally expensive due to the use of high-dimensional feature vectors and complex classifier/clustering schemes, and they are unable to properly handle the "semantic gap" and the high intra-class versus inter-class variability of medical image databases (such as radiographic image databases). This creates a pressing demand for a highly effective and computationally efficient retrieval system. We propose a novel interactive two-stage CBMIR system for a diverse collection of medical radiographic images. Initially, Pulse Coupled Neural Network based shape features are used to find the most probable (similar) image classes using a novel "similarity positional score" mechanism. This is followed by retrieval using Non-subsampled Contourlet Transform based texture features, considering only the images of the pre-identified classes. The maximal information compression index is used for unsupervised feature selection to achieve better results. To reduce the semantic gap problem, the proposed system uses a novel fuzzy index based relevance feedback mechanism that incorporates the subjectivity of human perception in an analytic manner. Extensive experiments were carried out to evaluate the effectiveness of the proposed CBMIR system on a subset of the Image Retrieval in Medical Applications (IRMA)-2009 database consisting of 10,902 labeled radiographic images of 57 different modalities. We obtained an overall average precision of around 98% after only 2-3 iterations of the relevance feedback mechanism. We assessed the results by comparison with some of the state-of-the-art CBMIR systems for radiographic images.
Unlike most existing CBMIR systems, the proposed two-stage hierarchical framework places its main emphasis on constructing an efficient and compact feature vector representation, reducing the search space, and handling the "semantic gap" problem effectively, without compromising retrieval performance. Experimental results and comparisons show that the proposed system performs efficiently in the radiographic medical image retrieval field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Parallel MR Imaging with Accelerations Beyond the Number of Receiver Channels Using Real Image Reconstruction.

    PubMed

    Ji, Jim; Wright, Steven

    2005-01-01

    Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, the k-space is subsampled and later interpolated using multiple-channel data. Higher subsampling factors result in faster image acquisition; however, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have previously been proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors potentially up to twice the number of channels by using a real image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.
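
    The real-image constraint works because the Fourier transform of a real-valued image is conjugate (Hermitian) symmetric, so roughly half of k-space determines the other half. A minimal single-channel sketch of that symmetry (ignoring coil sensitivities and the phase-estimation step the paper describes):

```python
import numpy as np

rng = np.random.default_rng(3)

# For a real image, X[-u, -v] = conj(X[u, v]) (indices modulo the grid size),
# so the unacquired half of k-space can be synthesised from the acquired half.
image = rng.standard_normal((9, 9))   # real "image"; odd size avoids Nyquist rows
kspace = np.fft.fft2(image)

acquired = kspace.copy()
acquired[5:, :] = 0                   # discard (almost) half of the k-space rows

restored = acquired.copy()
for u in range(5, 9):
    for v in range(9):
        # Conjugate symmetry maps each missing sample to an acquired one
        restored[u, v] = np.conj(acquired[(-u) % 9, (-v) % 9])

recon = np.fft.ifft2(restored)        # matches the original real image
```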

  5. Friendship chemistry: An examination of underlying factors

    PubMed Central

    Campbell, Kelly; Holderness, Nicole; Riggs, Matt

    2015-01-01

    Interpersonal chemistry refers to a connection between two individuals that exists upon first meeting. The goal of the current study is to identify beliefs about the underlying components of friendship chemistry. Individuals respond to an online Friendship Chemistry Questionnaire containing items that are derived from interdependence theory and the friendship formation literature. Participants are randomly divided into two subsamples. A principal axis factor analysis with promax rotation is performed on subsample 1 and produces 5 factors: Reciprocal candor, mutual interest, personableness, similarity, and physical attraction. A confirmatory factor analysis is conducted using subsample 2 and provides support for the 5-factor model. Participants with agreeable, open, and conscientious personalities more commonly report experiencing friendship chemistry, as do those who are female, young, and European/white. Responses from participants who have never experienced chemistry are qualitatively analyzed. Limitations and directions for future research are discussed. PMID:26097283

  6. GF-3 SAR Image Despeckling Based on the Improved Non-Local Means Using Non-Subsampled Shearlet Transform

    NASA Astrophysics Data System (ADS)

    Shi, R.; Sun, Z.

    2018-04-01

    GF-3 synthetic aperture radar (SAR) images are rich in information and have obvious sparse features. However, speckle appears in GF-3 SAR images due to the coherent imaging system, and it seriously hinders the interpretation of the images. Recently, the Shearlet transform has been applied to image processing owing to its near-optimal sparse representation. A new Shearlet-transform-based method built on an improved non-local means is proposed in this paper. Firstly, the logarithmic operation and the non-subsampled Shearlet transformation are applied to the GF-3 SAR image. Secondly, in order to solve the problems that image details are overly smoothed and that the weight distribution is affected by the speckle, a new non-local means is used on the transformed high-frequency coefficients. Thirdly, the Shearlet reconstruction is carried out. Finally, the filtered image is obtained by an exponential operation. Experimental results demonstrate that, compared with other despeckling methods, the proposed method can suppress speckle effectively in homogeneous regions and has better edge-preserving capability.
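
    The outer structure of the pipeline (logarithm to turn multiplicative speckle into additive noise, denoise in a transform domain, exponential to return to the image domain) can be sketched as follows; as a loud simplification, the non-subsampled Shearlet decomposition and improved non-local means are replaced here by a plain smoothing filter on the log image:

```python
import numpy as np

rng = np.random.default_rng(4)

def despeckle_log_domain(image, smooth_iters=5):
    """Homomorphic despeckling skeleton: log -> denoise -> exp.
    The denoising step here is only a crude neighbourhood average,
    standing in for the Shearlet-domain non-local means."""
    out = np.log(image)
    for _ in range(smooth_iters):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return np.exp(out)

clean = np.full((16, 16), 2.0)                    # homogeneous region
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
noisy = clean * speckle                           # multiplicative speckle
filtered = despeckle_log_domain(noisy)
# Variance in the homogeneous region drops after filtering.
```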

  7. Improving imbalanced scientific text classification using sampling strategies and dictionaries.

    PubMed

    Borrajo, L; Romero, R; Iglesias, E L; Redondo Marey, C M

    2011-09-15

    Many real applications suffer from the imbalanced class distribution problem, in which one class is represented by very few cases compared to the others. Systems for the retrieval and classification of scientific documentation are among those affected. Sampling strategies such as oversampling and subsampling are popular for tackling class imbalance. In this work, we study their effects on three types of classifiers (k-NN, SVM, and Naive Bayes) when applied to searches on the PubMed scientific database. A further purpose of this paper is to study the use of dictionaries in the classification of biomedical texts. Experiments were conducted with three different dictionaries (BioCreative, NLPBA, and an ad-hoc subset of the UniProt database named Protein) using the mentioned classifiers and sampling strategies. The best results were obtained with the NLPBA and Protein dictionaries and the SVM classifier using the subsampling balancing technique. These results were compared with those obtained by other authors using the TREC Genomics 2005 public corpus. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
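    The subsampling balancing technique referred to here is random undersampling: the majority class is thinned to the size of the minority class before training. A minimal sketch (function and variable names are our own, not from the paper; real pipelines would subsample per training fold):

```python
import random
from collections import Counter

def subsample_majority(docs, labels, seed=42):
    """Random undersampling: keep every minority-class document and an
    equally sized random subset of the majority class, so the classifier
    trains on a balanced set."""
    rng = random.Random(seed)
    counts = Counter(labels)
    minority = min(counts, key=counts.get)
    keep = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    keep += rng.sample(majority_idx, len(keep))   # balanced draw from the majority
    rng.shuffle(keep)
    return [docs[i] for i in keep], [labels[i] for i in keep]
```

For a 90/10 split of 100 documents this returns 20 documents, 10 per class; the discarded majority examples are simply never seen by the classifier, which is why undersampling can hurt when the majority class is informative.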

  8. A System Approach to Navy Medical Education and Training. Appendix 18. Radiation Technician.

    DTIC Science & Technology

    1974-08-31

    attrition was forecast to approximate twenty percent; final sample and sub-sample sizes were adjusted accordingly. Stratified random sampling...

  9. Opto-mechanical system design of test system for near-infrared and visible target

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhu, Guodong; Wang, Yuchao

    2014-12-01

    Guidance precision is a key index of guided-weapon accuracy. The factors affecting guidance precision include information-processing precision, control-system accuracy, and laser-irradiation accuracy, of which laser-irradiation precision is especially important. Addressing the demand for precision testing of laser irradiators, this paper develops a laser precision test system. The system consists of a modified Cassegrain telescope, a wide-range CCD camera, a tracking turntable, and an industrial PC, and images visible-light and near-infrared targets simultaneously with a near-IR camera. Analysis of the design shows that, for a target at 1000 m, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.

  10. Elephant Seals and Temperature Data: Calibrations and Limitations.

    NASA Astrophysics Data System (ADS)

    Simmons, S. E.; Tremblay, Y.; Costa, D. P.

    2006-12-01

    In recent years with technological advances, instruments deployed on diving marine animals have been used to sample the environment in addition to their behavior. Of all oceanographic variables one of the most valuable and easiest to record is temperature. Here we report on a series of lab calibration and field validation experiments that consider the accuracy of temperature measurements from animal borne ocean samplers. Additionally we consider whether sampling frequency or animal behavior affects the quality of the temperature data collected by marine animals. Rapid response, external temperature sensors on eight Wildlife Computers MK9 time-depth recorders (TDRs) were calibrated using water baths at the Naval Postgraduate School (Monterey, CA). These water baths are calibrated using a platinum thermistor to 0.001 °C. Instruments from different production batches were calibrated before and after deployments on adult female northern elephant seals, to examine tag performance over time and under `normal' usage. Tag performance in the field was validated by comparisons with temperature data from a Seabird CTD. In April/May of 2004, casts to 200m were performed over the Monterey Canyon using a CTD array carrying MK9s. These casts were performed before and after the release of a juvenile elephant seal from the boat. The seal was also carrying an MK9 TDR, allowing the assessment of any animal effect on temperature profiles. Sampling frequency during these field validations was set at one second intervals, and the data from TDRs on both the CTD and the seals was sub-sampled at four, eight, 30, and 300 s (5 min) intervals. The sub-sampled data was used to determine thermocline depth, a thermocline depth zone, and temperature gradients, and to assess whether sampling frequency or animal behavior affects the quality of temperature data. 
Preliminary analyses indicate that temperature sensors deployed on elephant seals can provide water column temperature data of high quality and precision.
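    The sub-sampling experiment described above can be mimicked numerically. The sketch below uses a synthetic logistic temperature profile (not the MK9 data), decimates a high-rate cast to the 4, 8, 30, and 300 s intervals mentioned in the abstract, and re-estimates thermocline depth as the depth of steepest gradient; all values are illustrative assumptions.

```python
import numpy as np

def thermocline_depth(depth, temp):
    """Depth of the steepest temperature gradient, a simple stand-in for
    the thermocline-depth estimate described in the abstract."""
    grad = np.abs(np.diff(temp) / np.diff(depth))
    i = int(np.argmax(grad))
    return 0.5 * (depth[i] + depth[i + 1])

# Synthetic cast to 200 m: warm mixed layer with a sharp drop near 50 m.
# Depth sampled every 0.5 m, roughly 1 Hz at a 0.5 m/s descent rate.
t = np.arange(0, 200.0, 0.5)
temp = 15 - 5 / (1 + np.exp(-(t - 50)))   # logistic thermocline centred at 50 m

for step in (4, 8, 30, 300):              # mimic 4, 8, 30 and 300 s sampling
    sub_d, sub_t = t[::step], temp[::step]
    print(step, thermocline_depth(sub_d, sub_t))
```

At the shorter intervals the estimated thermocline stays near 50 m; at 300 s decimation only two points remain and the estimate degrades badly, the same qualitative trade-off the field validation was designed to quantify.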

  11. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    PubMed

    Crewe, Tara L; Taylor, Philip D; Lepage, Denis

    2015-01-01

    The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. 
Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling) to reduce the probability that individuals will be detected on more than one occasion.
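    The subsampling modification recommended above can be sketched as follows: simulated daily counts with a known constant rate of change are thinned to every third or fifth day before a log-linear trend is estimated. This is a simplification of the paper's simulation (no stopover-duration process is modeled), and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten seasons of daily counts with a known constant 2% annual decline.
years = np.repeat(np.arange(10), 60)        # 60 survey days per season
true_trend = -0.02
expected = 500 * np.exp(true_trend * years)
counts = rng.poisson(expected)

def log_linear_trend(years, counts):
    """Slope of log(counts + 1) on year: a minimal stand-in for the
    population-trend models used in the abstract."""
    return np.polyfit(years, np.log(counts + 1.0), 1)[0]

print("all days :", log_linear_trend(years, counts))
for step in (3, 5):                          # subsample to every 3rd/5th day
    print(f"every {step}:", log_linear_trend(years[::step], counts[::step]))
```

With independent counts, the subsampled estimates stay close to the true trend at the cost of wider confidence intervals; the paper's point is that subsampling also reduces the recounting that biases the full-data estimate when stopover duration changes.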

  12. Development of a Linear Ion Trap Mass Spectrometer (LITMS) Investigation for Future Planetary Surface Missions

    NASA Technical Reports Server (NTRS)

    Brinckerhoff, W.; Danell, R.; Van Ameron, F.; Pinnick, V.; Li, X.; Arevalo, R.; Glavin, D.; Getty, S.; Mahaffy, P.; Chu, P.

    2014-01-01

    Future surface missions to Mars and other planetary bodies will benefit from continued advances in miniature sensor and sample handling technologies that enable high-performance chemical analyses of natural samples. Fine-scale (approx. 1 mm and below) analyses of rock surfaces and interiors, such as exposed on a drill core, will permit (1) the detection of habitability markers including complex organics in association with their original depositional environment, and (2) the characterization of successive layers and gradients that can reveal the time-evolution of those environments. In particular, if broad-based and highly-sensitive mass spectrometry techniques could be brought to such scales, the resulting planetary science capability would be truly powerful. The Linear Ion Trap Mass Spectrometer (LITMS) investigation is designed to conduct fine-scale organic and inorganic analyses of short (approx. 5-10 cm) rock cores such as could be acquired by a planetary lander or rover arm-based drill. LITMS combines both pyrolysis/gas chromatograph mass spectrometry (GCMS) of sub-sampled core fines, and laser desorption mass spectrometry (LDMS) of the intact core surface, using a common mass analyzer, enhanced from the design used in the Mars Organic Molecule Analyzer (MOMA) instrument on the 2018 ExoMars rover. LITMS additionally features developments based on the Sample Analysis at Mars (SAM) investigation on MSL and recent NASA-funded prototype efforts in laser mass spectrometry, pyrolysis, and precision subsampling. LITMS brings these combined capabilities to achieve its four measurement objectives: (1) Organics, Broad Survey: detect organic molecules over a wide range of molecular weight, volatility, electronegativity, concentration, and host mineralogy. (2) Organics, Molecular Structure: characterize internal molecular structure to identify individual compounds and reveal functionalization and processing. (3) Inorganic Host Environment: assess the local chemical/mineralogical makeup of organic host phases to help determine deposition and preservation factors. (4) Chemical Stratigraphy: analyze the fine spatial distribution and variation of key species with depth.

  13. Impacts of heterogeneous organic matter on phenanthrene sorption--Equilibrium and kinetic studies with aquifer material

    USGS Publications Warehouse

    Karapanagioti, Hrissi K.; Kleineidam, Sybille; Sabatini, David A.; Grathwohl, Peter; Ligouis, Bertrand

    2000-01-01

    Sediment organic matter heterogeneity is shown to impact the sorption behavior of contaminants. We investigated the sorptive properties and the composition of organic matter in different subsamples (mainly grain-size fractions) of Canadian River Alluvium (CRA). Organic petrography was used as a new tool to describe and characterize the organic matter in the subsamples. The samples studied contained many different types of organic matter, including bituminous coal particles, and differences in sorption behavior were explained in terms of these types. Subsamples containing predominantly coaly, particulate organic matter showed the highest Koc, the most nonlinear sorption isotherms, and the slowest sorption kinetics. Soil subsamples whose organic matter was present as coatings around the quartz grains showed the lowest Koc, the most linear sorption isotherms, and the fastest sorption kinetics, which were not limited by slow intraparticle diffusion. Because of the high sorption capacity of the coaly particles, even a small fraction of the composite organic content (<3%) causes Koc values much higher than expected for soil organic matter (e.g. from Koc - Kow relationships). The results show that identification and quantification of the coaly particles within a sediment or soil sample is a prerequisite for understanding or predicting the sorption behavior of organic pollutants.
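    The disproportionate effect of a small coaly fraction follows from simple carbon-weighted mixing of domain-specific Koc values. A sketch with illustrative numbers (not measurements from the paper):

```python
def composite_koc(f_coal, koc_coal, koc_background):
    """Organic-carbon-normalized sorption coefficient (L/kg-OC) of a
    mixture of organic-matter domains, weighted by their share of total
    organic carbon. All values passed in below are illustrative."""
    return f_coal * koc_coal + (1.0 - f_coal) * koc_background

# A 3% coal share of the organic carbon with a 100x higher Koc roughly
# quadruples the composite Koc over the background value.
print(composite_koc(0.03, 1.0e6, 1.0e4))
```

This linear mixing ignores the nonlinearity and slow kinetics the paper reports for the coaly domain, but it already shows why bulk Koc-Kow correlations underpredict sorption when coal particles are present.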

  14. Quality of life in survivors of hematological malignancies stratified by cancer type, time since diagnosis and stem cell transplantation.

    PubMed

    Esser, Peter; Kuba, Katharina; Mehnert, Anja; Johansen, Christoffer; Hinz, Andreas; Lordick, Florian; Götze, Heide

    2018-06-01

    Quality of life (QoL) has become an important tool to guide decision making in oncology. Given the heterogeneity among hematological cancer survivors, however, clinicians need comparative data across different subsets. This study recruited survivors of hematological malignancies (≥ 2.5 years after diagnosis) from two German cancer registries. QoL was assessed with the EORTC QLQ-C30. The sample was stratified by cancer type, time since diagnosis, treatment with stem cell transplantation (SCT) and type of SCT. First, levels of QoL were compared across subsamples while controlling for several covariates. Second, we contrasted subsamples with gender- and age-matched controls obtained from the general population. Of 2001 survivors contacted by mail, 922 (46%) participated in the study. QoL did not differ significantly between the subsamples. All subsamples scored significantly lower in functioning and significantly higher in symptom burden compared to population controls (all p < .001). Almost all of these group effects reached clinically meaningful sizes (Cohen's d ≥ .5). Group differences in global health/QoL were mostly non-significant. Hematological cancer survivors show practically relevant impairments irrespective of differences in central medical characteristics. Nevertheless, survivors seem to evaluate their overall situation as relatively good. This article is protected by copyright. All rights reserved.

  15. Can Low Frequency Measurements Be Good Enough? - A Statistical Assessment of Citizen Hydrology Streamflow Observations

    NASA Astrophysics Data System (ADS)

    Davids, J. C.; Rutten, M.; Van De Giesen, N.

    2016-12-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and relatively accurate but expensive monitoring equipment at limited numbers of sites. Consequently, the spatial coverage of the data is limited and costs are high. Achieving adequate maintenance of sophisticated monitoring equipment often exceeds local technical and resource capacity, and permanently deployed monitoring equipment is susceptible to vandalism, theft, and other hazards. Rather than using expensive, vulnerable installations at a few points, SmartPhones4Water (S4W), a form of Citizen Hydrology, leverages widely available mobile technology to gather hydrologic data at many sites in a manner that is repeatable and scalable. However, there is currently a limited understanding of the impact of decreased observational frequency on the accuracy of key streamflow statistics like minimum flow, maximum flow, and runoff. As a first step towards evaluating the tradeoffs between traditional continuous monitoring approaches and emerging Citizen Hydrology methods, we randomly selected 50 active U.S. Geological Survey (USGS) streamflow gauges in California. We used historical 15 minute flow data from 01/01/2008 through 12/31/2014 to develop minimum flow, maximum flow, and runoff values (7 year total) for each gauge. In order to mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, along with their respective distributions, from 50 subsample iterations with four different subsampling intervals (i.e. daily, three day, weekly, and monthly). Based on our results we conclude that, depending on the types of questions being asked, and the watershed characteristics, Citizen Hydrology streamflow measurements can provide useful and accurate information. 
Depending on watershed characteristics, minimum flows were reasonably estimated with subsample intervals ranging from daily to monthly. However, maximum flows in most cases were poorly characterized, even at daily subsample intervals. In general, runoff volumes were accurately estimated from daily, three day, weekly, and even in some cases, monthly observations.
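    The bootstrap randomized subsampling-with-replacement procedure described above can be sketched directly. The flow record below is synthetic (a seasonal base flow plus random storm peaks), not USGS data, and the interval set mirrors the abstract's daily/three-day/weekly/monthly choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 7 years of daily mean flow (m^3/s): seasonal base flow
# plus occasional heavy-tailed storm peaks on ~5% of days.
n_days = 7 * 365
base = 5 + 3 * np.sin(2 * np.pi * np.arange(n_days) / 365.0)
storms = rng.gamma(0.3, 40.0, n_days) * (rng.random(n_days) < 0.05)
flow = base + storms

def subsample_stats(flow, interval, n_iter=50, rng=rng):
    """Bootstrap subsampling sketch after the abstract's procedure:
    draw observation days with replacement at the given spacing and
    recompute minimum flow, maximum flow, and runoff each iteration."""
    idx = np.arange(0, len(flow), interval)
    mins, maxs, runoff = [], [], []
    for _ in range(n_iter):
        pick = rng.choice(idx, size=len(idx), replace=True)
        sample = flow[pick]
        mins.append(sample.min())
        maxs.append(sample.max())
        runoff.append(sample.mean() * len(flow))  # scale mean up to the full record
    return np.mean(mins), np.mean(maxs), np.mean(runoff)

for interval, label in [(1, "daily"), (3, "3-day"), (7, "weekly"), (30, "monthly")]:
    print(label, subsample_stats(flow, interval))
```

As in the study, runoff is well recovered even at coarse intervals because it depends on the mean, while the maximum flow estimate degrades quickly once short-lived storm peaks fall between observation days.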

  16. A simplified and cost-effective enrichment protocol for the isolation of Campylobacter spp. from retail broiler meat without microaerobic incubation

    PubMed Central

    2011-01-01

    Background To simplify the methodology for the isolation of Campylobacter spp. from retail broiler meat, we evaluated 108 samples (breasts and thighs) using an unpaired sample design. The enrichment broths were incubated under aerobic conditions (subsamples A) and, for comparison, under microaerobic conditions (subsamples M) as recommended by current reference protocols. Sensors were used to measure the dissolved oxygen (DO) in the broth and the percentage of oxygen (O2) in the head space of the bags used for enrichment. Campylobacter isolates were identified with multiplex PCR assays and typed using pulsed-field gel electrophoresis (PFGE). Ribosomal intergenic spacer analyses (RISA) and denaturing gradient gel electrophoresis (DGGE) were used to study the bacterial communities of subsamples M and A after 48 h enrichment. Results The number of Campylobacter-positive subsamples was similar for A and M when all samples were combined (P = 0.81) and when samples were analyzed by product (breast: P = 0.75; thigh: P = 1.00). Oxygen sensors showed that DO values in the broth were around 6 ppm and O2 values in the head space were 14-16% throughout incubation. PFGE demonstrated high genomic similarity of isolates in the majority of the samples in which isolates were obtained from subsamples A and M. RISA and DGGE results showed a large variability in the bacterial populations that could be attributed to sample-to-sample variations and not enrichment conditions (aerobic or microaerobic). These data also suggested that current sampling protocols are not optimized to determine the true number of Campylobacter-positive samples in retail broiler meat. Conclusions Decreased DO in enrichment broths is naturally achieved. This simplified, cost-effective enrichment protocol with aerobic incubation could be incorporated into reference methods for the isolation of Campylobacter spp. from retail broiler meat. PMID:21812946

  17. [Social impact of literacy in the household: analysis of the association with smoking in illiterate co-residents in Brazil].

    PubMed

    Ribeiro, Felipe Garcia; Carraro, André; Motta, Janaína Vieira Dos Santos; Gigante, Denise Petrucci

    2016-06-01

    Objective To investigate the social impact of literacy on the smoking behavior of illiterate individuals who share the household with literate individuals. Method This cross-sectional study employed data from the 2008 Brazilian National Household Survey (Pesquisa Nacional por Amostra de Domicílios, PNAD). Smokers were defined as individuals reporting use of any tobacco product daily or less than daily. The literacy profiles of residents were identified. Poisson regressions adjusted for skin color, age, and maximum level of literacy in the household were performed. Four groups were analyzed: men living in rural areas, men living in urban areas, women living in rural areas, and women living in urban areas. Results For urban men, the presence of literate women only in the household was a protection factor against smoking (prevalence ratio, PR: 0.77; 95%CI: 0.71-0.82) vs. households in which all the males were illiterate. The same protective effect was found for rural men (PR: 0.79; 95%CI: 0.73-0.85). In turn, the presence of literate men only living in the same household with illiterate men did not provide protection against smoking in any case (PR: 0.93; 95%CI: 0.83-1.03 for the urban subsample; and PR: 0.99; 95%CI: 0.88-1.11 for the rural subsample). Illiterate women benefited from the presence of both literate men (PR: 0.77; 95%CI: 0.71-0.84 for the urban subsample; and PR: 0.78; 95%CI: 0.69-0.89 for the rural subsample) and literate women (PR: 0.81; 95%CI: 0.72-0.92 for the urban subsample; and PR: 0.75; 95%CI: 0.60-0.93 for the rural subsample). Conclusions Literate women seem to have positively affected illiterate co-residents of both sexes. This result is in agreement with reports showing broad advantages of female schooling.

  18. Preferences and actual chemotherapy decision-making in the greater plains collaborative breast cancer study.

    PubMed

    Berger, Ann M; Buzalko, Russell J; Kupzyk, Kevin A; Gardner, Bret J; Djalilova, Dilorom M; Otte, Julie L

    2017-12-01

    There is renewed interest in identifying breast cancer patients' participation in decision-making about adjuvant chemotherapy. There is a gap in the literature regarding the impact of these decisions on quality of life (QOL) and quality of care (QOC). Our aims were to determine similarities and differences in how patients diagnosed with breast cancer preferred to make decisions with providers about cancer treatment, to examine the patient's recall of her role when the decision was made about chemotherapy, and to determine how preferred and actual roles, as well as congruence between them, relate to QOL and perceived QOC. The Greater Plains Collaborative clinical data research network of PCORnet conducted the 'Share Thoughts on Breast Cancer' survey among women 12-18 months post-diagnosis at eight sites in seven Midwestern US states. Patients recalled their preferred and actual treatment decision-making roles, and three new shared decision-making (SDM) variables were created. Patients completed QOL and QOC measurements. Correlations and t-tests were used. Of 1235 returned surveys, 873 (full sample) and 329 (subsample who received chemotherapy) were used. About one-half of women in both the full (50.7%) and subsample (49.8%) preferred SDM with providers about treatment decisions, but only 41.2% (full) and 42.6% (subsample) reported experiencing SDM. Significant differences were found between preferred versus actual roles in the full (p < .001) and subsample (p < .004). In the full sample, there were no relationships between the five decision-making variables and QOL, but there was an association with QOC. The subsample's decision-making variables related to several QOL scales and QOC items, with a more patient-centered decision than originally preferred being related to higher physical and social/family well-being, overall QOL, and QOC. 
Patients benefit from providers' efforts to identify patient preferences, encourage an active role in SDM, and tailor decision making to their desired choice.

  19. Blocker-tolerant and high-sensitivity ΔΣ correlation digitizer for radar and coherent receiver applications

    DOE PAGES

    Mincey, John S.; Silva-Martinez, Jose; Karsilayan, Aydin Ilker; ...

    2017-03-17

    In this study, a coherent subsampling digitizer for pulsed Doppler radar systems is proposed. Prior to transmission, the radar system modulates the RF pulse with a known pseudorandom binary phase shift keying (BPSK) sequence. Upon reception, the radar digitizer uses a programmable sample-and-hold circuit to multiply the received waveform by a properly time-delayed version of the known a priori BPSK sequence. This operation demodulates the desired echo signal while suppressing the spectrum of all in-band noncorrelated interferers, making them appear as noise in the frequency domain. The resulting demodulated narrowband Doppler waveform is then subsampled at the IF frequency by a delta-sigma modulator. Because the digitization bandwidth within the delta-sigma feedback loop is much less than the input bandwidth to the digitizer, the thermal noise outside of the Doppler bandwidth is effectively filtered prior to quantization, providing an increase in signal-to-noise ratio (SNR) at the digitizer's output compared with the input SNR. In this demonstration, a delta-sigma correlation digitizer is fabricated in a 0.18-μm CMOS technology. The digitizer has a power consumption of 1.12 mW with an IIP3 of 7.5 dBm. The digitizer is able to recover Doppler tones in the presence of blockers up to 40 dB greater than the Doppler tone.
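    The correlation step can be illustrated numerically. The sketch below assumes ideal chip alignment and unit-amplitude chips; the frequencies, amplitudes, and sequence length are illustrative choices, not the fabricated digitizer's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 16384
m = np.arange(n)

chips = rng.choice([-1.0, 1.0], n)               # known a priori BPSK chip sequence
doppler = np.cos(2 * np.pi * 160 * m / n)        # narrowband Doppler tone (bin 160)
blocker = 10 * np.cos(2 * np.pi * 1147 * m / n)  # in-band blocker, 20 dB stronger
received = chips * doppler + blocker             # echo is chip-modulated; blocker is not

# Multiplying by the time-aligned chip sequence (chips**2 == 1) demodulates
# the echo back to a narrowband tone while spreading the non-correlated
# blocker into broadband noise -- a numerical sketch of the correlation step.
demod = received * chips
spec = np.abs(np.fft.rfft(demod)) / n
print(int(np.argmax(spec)))                      # strongest bin is the Doppler tone
```

After despreading, the blocker's energy is distributed across the whole band, so a narrowband delta-sigma loop around the Doppler frequency can digitize the tone without being saturated by the interferer.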

  20. Correcting for initial Th in speleothems to obtain the age of calcite nucleation after a growth hiatus

    NASA Astrophysics Data System (ADS)

    Richards, D. A.; Nita, D. C.; Moseley, G. E.; Hoffmann, D. L.; Standish, C. D.; Smart, P. L.; Edwards, R.

    2013-12-01

    In addition to the many U-Th dated speleothem records (δ18O, δ13C, trace elements) of past environmental change based on continuous phases of calcite growth, discontinuous records also provide important constraints for a wide range of past states of the Earth system, including sea levels, permafrost extent, regional aridity and local cave flooding. Chronological information about human activity or faunal evolution can also be obtained where calcite can be seen to overlie cave art or mammalian bones, for example. Among the important considerations when determining the U-Th age of calcite that nucleates on an exposed surface are (1) initial 230Th/232Th, which can be elevated and variable in some settings, and (2) growth rate and sub-sample density, where extrapolation is required. By way of example, we present sea level data based on U-Th ages of vadose speleothems (i.e. formed above the water table and distinct from 'phreatic' examples) from caves of the circum-Caribbean, where calcite growth was interrupted by rising sea levels and then reinitiated after regression. These estimates demand large corrections, and the derived sea level constraints are compared with alternative data from coral reef terraces, phreatic overgrowths on speleothems or indirect, proxy evidence from oxygen isotopes to constrain rates of ice volume growth. Flowstones from the Bahamas provide useful sea level constraints because they present the longest and most continuous records in such settings (a function of preservation potential in addition to hydrological routing) and also earliest growth post-emergence after sea level fall. We revisit estimates for sea level regression at the end of MIS 5 at ~ 80 ka (Richards et al, 1994; Lundberg and Ford, 1994) and make corrections for non-Bulk Earth initial Th contamination (230Th/232Th activity ratio > 10), based on isochron analysis of alternative stalagmites from the same settings and recent high resolution analysis. 
We also present new U-Th ages for contiguous layers sub-sampled from the first 2-3 mm of flowstone growth after the MIS 5 hiatus, using a sub-sample milling strategy that matches spatial resolution with maximum achievable precision (ThermoFinnigan Neptune MC-ICPMS methodology; 20-30 mg calcite, U ≈ 300 ng g-1, 2σ age uncertainty of ± 600 a at ~80 ka). Isochron methods are used to estimate the range of initial 230Th/232Th ratio and are compared with elevated values obtained from stalagmites from the same cave (Beck et al, 2001; Hoffmann et al, 2010). A similar strategy is presented for a stalagmite with much faster axial growth, and the data are combined with additional sea level information from the same region to estimate the rate and uncertainty of sea level regression at the MIS stage 5/4 boundary. Elevated initial 230Th/232Th values have also been observed in a stalagmite from 6 m below present sea level in a cenote from the Yucatan, Mexico, where 5 phases of calcite between 10 and 5.5 ka are separated by serpulid worm tubes formed during periods of submergence. The transition between each phase provides constraints on age and elevation of relative sea level, but the former is hampered by the uncertainty of the high initial 230Th/232Th correction. We consider the possible sources of elevated Th ratios: hydrogenous, colloidal and carbonate or other detrital components.

  1. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.

    PubMed

    Bornstein, Marc H; Jager, Justin; Putnick, Diane L

    2013-12-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study's target population, whether they yield representative and generalizable estimates of subsamples within a study's target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce "noise" related to variation in subsamples and whether that "noise" can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting.

  3. Search for the Higgs boson in lepton, tau, and jets final states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abazov, V. M.; Abbott, B.; Acharya, B. S.

    2013-09-01

    We present a search for the standard model Higgs boson in final states with an electron or muon and a hadronically decaying tau lepton in association with two or more jets using 9.7 fb-1 of Run II Fermilab Tevatron Collider data collected with the D0 detector. The analysis is sensitive to Higgs boson production via gluon fusion, associated vector boson production, and vector boson fusion, followed by the Higgs boson decay to tau lepton pairs or to W boson pairs. The ratios of 95% C.L. upper limits on the cross section times branching ratio to those predicted by the standard model are obtained for orthogonal subsamples that are enriched in either H → τ τ decays or H → WW decays, and for the combination of these subsample limits. The observed and expected limit ratios for the combined subsamples at a Higgs boson mass of 125 GeV are 11.3 and 9.0, respectively.

  4. Age, growth, and size of Lake Superior Pygmy Whitefish (Prosopium coulterii)

    USGS Publications Warehouse

    Stewart, Taylor; Derek Ogle,; Gorman, Owen T.; Vinson, Mark

    2016-01-01

    Pygmy Whitefish (Prosopium coulterii) are a small, glacial relict species with a disjunct distribution in North America and Siberia. In 2013 we collected Pygmy Whitefish at 28 stations throughout Lake Superior. Total length was recorded for all fish; weight and sex were recorded, and scales and otoliths collected, from a subsample. We compared the precision of estimated ages between readers and between scales and otoliths, estimated von Bertalanffy growth parameters for male and female Pygmy Whitefish, and report the first weight-length relationship for Pygmy Whitefish. Age estimates from scales and otoliths differed significantly, with otolith ages greater for most ages after age-3. Maximum otolith age was nine for females and seven for males, older than previously reported for Pygmy Whitefish from Lake Superior. Growth was initially fast but slowed considerably after age-3 for males and age-4 for females, falling to 3-4 mm per year at the maximum estimated ages. Females were longer than males after age-3. Our results suggest that the size, age, and growth of Pygmy Whitefish in Lake Superior have not changed appreciably since 1953.
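    The growth pattern described (fast early growth that slows sharply with age) is what the von Bertalanffy model the authors fit captures. A sketch of the growth equation with illustrative parameter values, not the paper's estimates:

```python
import numpy as np

def von_bertalanffy(age, l_inf, k, t0):
    """von Bertalanffy growth: L(t) = L_inf * (1 - exp(-K * (t - t0))).
    L_inf is asymptotic length, K the growth coefficient, t0 the
    theoretical age at zero length. Values below are illustrative."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

ages = np.arange(1, 10)
lengths = von_bertalanffy(ages, l_inf=110.0, k=0.45, t0=-0.5)
increments = np.diff(lengths)   # annual growth shrinks toward L_inf
print(np.round(lengths, 1))
print(np.round(increments, 1))
```

Each year's increment shrinks by a constant factor exp(-K), so length rises quickly at first and then flattens, matching the few-mm-per-year growth the abstract reports at the oldest ages.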

  5. An investigation into the effects of temporal resolution on hepatic dynamic contrast-enhanced MRI in volunteers and in patients with hepatocellular carcinoma

    NASA Astrophysics Data System (ADS)

    Gill, Andrew B.; Black, Richard T.; Bowden, David J.; Priest, Andrew N.; Graves, Martin J.; Lomas, David J.

    2014-06-01

    This study investigated the effect of temporal resolution on the dual-input pharmacokinetic (PK) modelling of dynamic contrast-enhanced MRI (DCE-MRI) data from normal volunteer livers and from patients with hepatocellular carcinoma. Eleven volunteers and five patients were examined at 3 T. Two sections, one optimized for the vascular input functions (VIF) and one for the tissue, were imaged within a single heart-beat (HB) using a saturation-recovery fast gradient echo sequence. The data was analysed using a dual-input single-compartment PK model. The VIFs and/or uptake curves were then temporally sub-sampled (at interval Δt = [2-20] s) before being subject to the same PK analysis. Statistical comparisons of tumour and normal tissue PK parameter values using a 5% significance level gave rise to the same study results when temporally sub-sampling the VIFs to HB < Δt < 4 s. However, sub-sampling to Δt > 4 s did adversely affect the statistical comparisons. Temporal sub-sampling of just the liver/tumour tissue uptake curves at Δt ≤ 20 s, whilst using high temporal resolution VIFs, did not substantially affect PK parameter statistical comparisons. In conclusion, there is no practical advantage to be gained from acquiring very high temporal resolution hepatic DCE-MRI data. Instead the high temporal resolution could be usefully traded for increased spatial resolution or SNR.
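
    The temporal sub-sampling step, keeping one point every Δt seconds from a curve sampled once per heart-beat, can be sketched as follows; this is a minimal stand-in, as the study does not specify its resampling implementation beyond the interval:

```python
def subsample_curve(curve, dt_s, hb_s=1.0):
    """Keep every round(dt_s/hb_s)-th point of a curve sampled once per heart-beat (hb_s seconds)."""
    step = max(1, round(dt_s / hb_s))
    return curve[::step]

# A toy uptake curve sampled once per (1 s) heart-beat:
curve = [0.0, 0.8, 1.5, 1.9, 2.0, 1.9, 1.7, 1.5, 1.4, 1.3, 1.2, 1.1]
coarse = subsample_curve(curve, dt_s=4.0)   # keep one sample every 4 s
```

    The coarse curve retains the overall uptake shape but loses the sharp early enhancement, which is why sub-sampling the rapidly varying VIFs degrades the PK fits sooner than sub-sampling the slowly varying tissue curves.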

  6. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.

    PubMed

    Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola

    2017-06-06

    Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.

  7. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application

    PubMed Central

    Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola

    2017-01-01

    Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information’s relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection. PMID:28587299

  8. Segmentation, feature extraction, and multiclass brain tumor classification.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2013-12-01

    Multiclass brain tumor classification is performed by using a diversified dataset of 428 post-contrast T1-weighted MR images from 55 patients. These images are of primary brain tumors namely astrocytoma (AS), glioblastoma multiforme (GBM), childhood tumor-medulloblastoma (MED), meningioma (MEN), secondary tumor-metastatic (MET), and normal regions (NR). Eight hundred fifty-six regions of interest (SROIs) are extracted by a content-based active contour model. Two hundred eighteen intensity and texture features are extracted from these SROIs. In this study, principal component analysis (PCA) is used to reduce the dimensionality of the feature space. The six classes are then classified by an artificial neural network (ANN); hence, this approach is named the PCA-ANN approach. Three sets of experiments have been performed. In the first experiment, classification is performed by the ANN approach alone. In the second experiment, the PCA-ANN approach with random sub-sampling is used, in which SROIs from the same patient may be repeated during testing. The classification accuracy increased from 77% to 91%. PCA-ANN delivered high accuracy for each class: AS-90.74%, GBM-88.46%, MED-85%, MEN-90.70%, MET-96.67%, and NR-93.78%. In the third experiment, to remove bias and to test the robustness of the proposed system, the data are partitioned such that SROIs from the same patient are not shared between the training and testing sets. In this case the proposed system also performed well, delivering an overall accuracy of 85.23%. The individual class accuracies are: AS-86.15%, GBM-65.1%, MED-63.36%, MEN-91.5%, MET-65.21%, and NR-93.3%. A computer-aided diagnostic system comprising the developed methods for segmentation, feature extraction, and classification of brain tumors can be beneficial to radiologists for precise localization, diagnosis, and interpretation of brain tumors on MR images.
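
    The patient-wise partition used in the third experiment, which keeps all SROIs from a given patient out of either the training or the testing set, can be sketched as follows; the helper name, the sample layout, and the 30% test fraction are illustrative assumptions, not the paper's protocol details:

```python
import random

def patient_wise_split(samples, test_frac=0.3, seed=0):
    """Split (patient_id, roi) pairs so no patient appears in both sets."""
    patients = sorted({pid for pid, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    held_out = set(patients[:n_test])
    train = [s for s in samples if s[0] not in held_out]
    test = [s for s in samples if s[0] in held_out]
    return train, test

# Toy data: 10 patients with 3 SROIs each.
samples = [(pid, f"roi{i}") for pid in range(10) for i in range(3)]
train, test = patient_wise_split(samples)
```

    Splitting by patient rather than by SROI is what removes the optimistic bias of random sub-sampling, where near-duplicate regions from one patient can appear on both sides of the split.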

  9. Method of high precision interval measurement in pulse laser ranging system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging offers high measuring precision, fast measuring speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is decided by the precision of its time interval measurement. The principal structure of the laser ranging system is introduced, and a method of high-precision time interval measurement in a pulse laser ranging system is established in this paper. Based on an analysis of the factors that affect range-measurement precision, a pulse rising-edge discriminator is adopted to produce the timing mark for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper can obtain higher range accuracy. Compared with the traditional time interval measurement system, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.

  10. Modeling and Positioning of a PZT Precision Drive System.

    PubMed

    Liu, Che; Guo, Yanling

    2017-11-08

    The fact that piezoelectric ceramic transducer (PZT) precision drive systems in 3D printing are faced with nonlinear positioning problems, such as hysteresis and creep, has had an extremely negative impact on the precision of laser focusing systems. To eliminate the impact of PZT nonlinearity during precision drive movement, mathematical modeling and theoretical analyses of each module comprising the system were carried out in this study; a micro-displacement measurement circuit based on a Position Sensitive Detector (PSD) was constructed; and system closed-loop control and creep control models were established. An XL-80 laser interferometer (Renishaw, Wotton-under-Edge, UK) was used to measure the performance of the precision drive system, showing that the system modeling and control algorithms were correct and that the requirements for precision positioning of the drive system were satisfied.

  11. Modeling and Positioning of a PZT Precision Drive System

    PubMed Central

    Liu, Che; Guo, Yanling

    2017-01-01

    The fact that piezoelectric ceramic transducer (PZT) precision drive systems in 3D printing are faced with nonlinear positioning problems, such as hysteresis and creep, has had an extremely negative impact on the precision of laser focusing systems. To eliminate the impact of PZT nonlinearity during precision drive movement, mathematical modeling and theoretical analyses of each module comprising the system were carried out in this study; a micro-displacement measurement circuit based on a Position Sensitive Detector (PSD) was constructed; and system closed-loop control and creep control models were established. An XL-80 laser interferometer (Renishaw, Wotton-under-Edge, UK) was used to measure the performance of the precision drive system, showing that the system modeling and control algorithms were correct and that the requirements for precision positioning of the drive system were satisfied. PMID:29117140

  12. Study of nanometer-level precise phase-shift system used in electronic speckle shearography and phase-shift pattern interferometry

    NASA Astrophysics Data System (ADS)

    Jing, Chao; Liu, Zhongling; Zhou, Ge; Zhang, Yimo

    2011-11-01

    The nanometer-level precise phase-shift system is designed to realize phase-shift interferometry in electronic speckle shearography pattern interferometry. A PZT is used as the driving component of the phase-shift system, and a flexure-hinge translation component is developed to realize friction-free, clearance-free micro displacement. A closed-loop control system is designed for high-precision micro displacement, in which an embedded digital control system executes the control algorithm and a capacitive sensor serves as the feedback element, measuring the micro displacement in real time. The dynamic and control models of the nanometer-level precise phase-shift system are analyzed, and high-precision micro displacement is realized with a digital PID control algorithm on this basis. Experiments show that the positioning precision of the phase-shift system is less than 2 nm for a step displacement signal and less than 5 nm for a continuous displacement signal, which satisfies the requirements of electronic speckle shearography and phase-shift pattern interferometry. The fringe images of four-step phase-shift interferometry and the final phase-distribution image, correlated with the deformation of objects, are presented to prove the validity of the nanometer-level precise phase-shift system.
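
    A discrete PID update of the kind described, here closing the loop around a crude first-order model of the stage, might look like this sketch; the gains, sampling period, and plant model are invented for illustration and are not the system's actual parameters:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller; state = (integral, prev_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a crude first-order stage model toward a 100 nm setpoint.
setpoint, pos = 100.0, 0.0        # nm
state, dt = (0.0, 0.0), 0.001     # controller state, 1 ms sampling period
for _ in range(2000):
    u, state = pid_step(setpoint - pos, state, kp=0.8, ki=50.0, kd=0.002, dt=dt)
    pos += (u - pos) * 20.0 * dt  # plant: first-order lag, 50 ms time constant
```

    The integral term is what removes the steady-state error, which is essential when the target precision is a few nanometers.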

  13. U-Pb ages of secondary silica at Yucca Mountain, Nevada: Implications for the paleohydrology of the unsaturated zone

    USGS Publications Warehouse

    Neymark, L.A.; Amelin, Y.; Paces, J.B.; Peterman, Z.E.

    2002-01-01

    Uranium, Th and Pb isotopes were analyzed in layers of opal and chalcedony from individual mm- to cm-thick calcite and silica coatings at Yucca Mountain, Nevada, USA, a site that is being evaluated for a potential high-level nuclear waste repository. These calcite and silica coatings on fractures and in lithophysal cavities in Miocene-age tuffs in the unsaturated zone (UZ) precipitated from descending water and record a long history of percolation through the UZ. Opal and chalcedony have high concentrations of U (10 to 780 ppm) and low concentrations of common Pb as indicated by large values of 206Pb/204Pb (up to 53,806), thus making them suitable for U-Pb age determinations. Interpretations of U-Pb isotope systems in opal samples at Yucca Mountain are complicated by the incorporation of excess 234U at the time of mineral formation, resulting in reverse discordance of U-Pb ages. However, the 207Pb/235U ages are much less affected by deviation from initial secular equilibrium and provide reliable ages of most silica deposits between 0.6 and 9.8 Ma. For chalcedony subsamples showing normal age discordance, these ages may represent minimum times of deposition. Typically, 207Pb/235U ages are consistent with the microstratigraphy in the mineral coating samples, such that the youngest ages are for subsamples from outer layers, intermediate ages are from inner layers, and oldest ages are from innermost layers. 234U and 230Th in most silica layers deeper in the coatings are in secular equilibrium with 238U, which is consistent with their old age and closed system behavior during the past ~0.5 Ma. The ages for subsamples of silica layers from different microstratigraphic positions in individual calcite and silica coating samples collected from lithophysal cavities in the welded part of the Topopah Spring Tuff yield slow long-term average growth rates of 1 to 5 mm/Ma.
These data imply that the deeper parts of the UZ at Yucca Mountain maintained long-term hydrologic stability over the past 10 Ma despite significant climate variations. U-Pb ages for subsamples of silica layers from different microstratigraphic positions in individual calcite and silica coating samples collected from fractures in the shallower part of the UZ (welded part of the overlying Tiva Canyon Tuff) indicate larger long-term average growth rates up to 23 mm/Ma and an absence of recently deposited materials (ages of outermost layers are 3-5 Ma). These differences between the characteristics of the coatings for samples from the shallower and deeper parts of the UZ may indicate that the nonwelded tuffs (PTn), located between the welded parts of the Tiva Canyon and Topopah Spring Tuffs, play an important role in moderating UZ flow.

  14. The Nutrition, Aging, and Memory in Elders (NAME) study: design and methods for a study of micronutrients and cognitive function in a homebound elderly population.

    PubMed

    Scott, Tammy M; Peter, Inga; Tucker, Katherine L; Arsenault, Lisa; Bergethon, Peter; Bhadelia, Rafeeque; Buell, Jennifer; Collins, Lauren; Dashe, John F; Griffith, John; Hibberd, Patricia; Leins, Drew; Liu, Timothy; Ordovas, Jose M; Patz, Samuel; Price, Lori Lyn; Qiu, Wei Qiao; Sarnak, Mark; Selhub, Jacob; Smaldone, Lauren; Wagner, Carey; Wang, Lixia; Weiner, Daniel; Yee, Jacqueline; Rosenberg, Irwin; Folstein, Marshal

    2006-06-01

    Micronutrient status can affect cognitive function in the elderly; however, there is much to learn about the precise effects. Understanding mediating factors by which micronutrient status affects cognitive function would contribute to elders' quality of life and their ability to remain in the home. The Nutrition, Aging, and Memory in Elders (NAME) Study is designed to advance the current level of knowledge by investigating potential mediating factors by which micronutrient status contributes to cognitive impairment and central nervous system abnormalities in the elderly. NAME targets homebound elders because they are understudied and particularly at risk for poor nutritional status. Subjects are community-based elders aged 60 and older, recruited through area Aging Services Access Points. The NAME core data include demographics; neuropsychological testing and activities of daily living measures; food frequency, health and behavioral questionnaires; anthropometrics; gene status; plasma micronutrients, homocysteine, and other blood determinants. A neurological examination, psychiatric examination, and brain MRI and volumetric measurements are obtained from a sub-sample. Preliminary data from first 300 subjects are reported. These data show that the NAME protocol is feasible and that the enrolled subjects are racially diverse, at-risk, and had similar basic demographics to the population from which they were drawn. The goal of the NAME study is to evaluate novel relationships between nutritional factors and cognitive impairment. These data may provide important information on potential new therapeutic strategies and supplementation standards for the elderly to maintain cognitive function and potentially reduce the public health costs of dementia.

  15. 230Th/U dating of a late Pleistocene alluvial fan along the southern San Andreas fault

    USGS Publications Warehouse

    Fletcher, Kathryn E.K.; Sharp, Warren D.; Kendrick, Katherine J.; Behr, Whitney M.; Hudnut, Kenneth W.; Hanks, Thomas C.

    2010-01-01

    U-series dating of pedogenic carbonate-clast coatings provides a reliable, precise minimum age of 45.1 ± 0.6 ka (2σ) for the T2 geomorphic surface of the Biskra Palms alluvial fan, Coachella Valley, California. Concordant ages for multiple subsamples from individual carbonate coatings provide evidence that the 238U-234U-230Th system has remained closed since carbonate formation. The U-series minimum age is used to assess previously published 10Be exposure ages of cobbles and boulders. All but one cobble age and some boulder 10Be ages are younger than the U-series minimum age, indicating that surface cobbles and some boulders were partially shielded after deposition of the fan and have been subsequently exhumed by erosion of fine-grained matrix to expose them on the present fan surface. A comparison of U-series and 10Be ages indicates that the interval between final alluvial deposition on the T2 fan surface and accumulation of dateable carbonate is not well resolved at Biskra Palms; however, the “time lag” inherent to dating via U-series on pedogenic carbonate can be no larger than ∼10 k.y., the uncertainty of the 10Be-derived age of the T2 fan surface. Dating of the T2 fan surface via U-series on pedogenic carbonate (minimum age, 45.1 ± 0.6 ka) and 10Be on boulder-top samples using forward modeling (preferred age, 50 ± 5 ka) provides broadly consistent constraints on the age of the fan surface and helps to elucidate its postdepositional development.

  16. 230Th/U dating of a late pleistocene alluvial fan along the southern san andreas fault

    USGS Publications Warehouse

    Fletcher, K.E.K.; Sharp, W.D.; Kendrick, K.J.; Behr, W.M.; Hudnut, K.W.; Hanks, T.C.

    2010-01-01

    U-series dating of pedogenic carbonate-clast coatings provides a reliable, precise minimum age of 45.1 ± 0.6 ka (2σ) for the T2 geomorphic surface of the Biskra Palms alluvial fan, Coachella Valley, California. Concordant ages for multiple subsamples from individual carbonate coatings provide evidence that the 238U-234U-230Th system has remained closed since carbonate formation. The U-series minimum age is used to assess previously published 10Be exposure ages of cobbles and boulders. All but one cobble age and some boulder 10Be ages are younger than the U-series minimum age, indicating that surface cobbles and some boulders were partially shielded after deposition of the fan and have been subsequently exhumed by erosion of fine-grained matrix to expose them on the present fan surface. A comparison of U-series and 10Be ages indicates that the interval between final alluvial deposition on the T2 fan surface and accumulation of dateable carbonate is not well resolved at Biskra Palms; however, the "time lag" inherent to dating via U-series on pedogenic carbonate can be no larger than ~10 k.y., the uncertainty of the 10Be-derived age of the T2 fan surface. Dating of the T2 fan surface via U-series on pedogenic carbonate (minimum age, 45.1 ± 0.6 ka) and 10Be on boulder-top samples using forward modeling (preferred age, 50 ± 5 ka) provides broadly consistent constraints on the age of the fan surface and helps to elucidate its postdepositional development. © 2010 Geological Society of America.

  17. Development and evaluation of a tool for retrospective exposure assessment of selected endocrine disrupting chemicals and EMF in the car manufacturing industry.

    PubMed

    Mester, Birte; Schmeisser, Nils; Lünzmann, Hauke; Pohlabeln, Hermann; Langner, Ingo; Behrens, Thomas; Ahrens, Wolfgang

    2011-08-01

    A system for retrospective occupational exposure assessment combining the efficiency of a job exposure matrix (JEM) and the precision of a subsequent individual expert exposure assessment (IEEA) was developed. All steps of the exposure assessment were performed by an interdisciplinary expert panel in the context of a case-control study on male germ cell cancer nested in the car manufacturing industries. An industry-specific JEM was developed and automatic exposure estimation was performed based on this JEM. A subsample of exposure ratings was reviewed by IEEA to identify determinants of disagreement between the JEM and the individual review. Possible determinants were analyzed by calculating odds ratios (ORs) of disagreement between ratings with regard to different dimensions (e.g. high versus low intensity of exposure). Disagreement in ≥20% of the sampled exposure ratings with a statistically significant OR was chosen as a threshold for inclusion of the exposure ratings into a final IEEA. The most important determinants of disagreement between JEM and individual review were working outside of the production line (disagreement 80%), low probability of exposure (disagreement 25%), and exposure depending on specific activities like usage of specific lacquers (disagreement 32%) for jobs within the production line. These determinants were the selection criteria of exposure ratings for the subsequent final IEEA. Combining a JEM and a subsequent final IEEA for a selected subset of exposure ratings is a feasible and labor-saving approach for exposure assessment in large occupational epidemiological studies.
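
    The disagreement odds ratio and the ≥20% inclusion threshold can be sketched as below; the counts are invented placeholders, not the study's data:

```python
def odds_ratio(d_with, a_with, d_without, a_without):
    """OR of rating disagreement given a candidate determinant.

    d_* = ratings where JEM and individual review disagreed,
    a_* = ratings where they agreed, with/without the determinant present.
    """
    return (d_with / a_with) / (d_without / a_without)

# Invented counts: 40 of 50 sampled ratings disagree when the job is outside
# the production line, versus 20 of 100 otherwise.
or_outside = odds_ratio(40, 10, 20, 80)
rate_outside = 40 / (40 + 10)

# Inclusion rule from the text: >=20% disagreement (plus a significant OR).
include_in_final_ieea = rate_outside >= 0.20
```

    A determinant that clears both the disagreement-rate threshold and statistical significance flags its exposure ratings for the final IEEA.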

  18. Attachment to Life: Psychometric Analyses of the Valuation of Life Scale and Differences Among Older Adults

    PubMed Central

    Gitlin, Laura N.; Parisi, Jeanine; Huang, Jin; Winter, Laraine; Roth, David L.

    2016-01-01

    Purpose of study: Examine psychometric properties of Lawton’s Valuation of Life (VOL) scale, a measure of older adults’ assessments of the perceived value of their lives, and whether ratings differ by race (White, Black/African American) and sex. Design and Methods: The 13-item VOL scale was administered at baseline in 2 separate randomized trials (Advancing Better Living for Elders, ABLE; Get Busy Get Better, GBGB) for a total of 527 older adults. Principal component analyses were applied to a subset of ABLE data (subsample 1) and confirmatory factor analyses were conducted on remaining data (subsample 2 and GBGB). Once the factor structure was identified and confirmed, 2 subscales were created, corresponding to optimism and engagement. Convergent validity of total and subscale scores were examined using measures of depressive symptoms, social support, control-oriented strategies, mastery, and behavioral activation. For discriminant validity, indices of health status, physical function, financial strain, cognitive status, and number of falls were examined. Results: Trial samples (ABLE vs. GBGB) differed by age, race, marital status, education, and employment. Principal component analysis on ABLE subsample 1 (n = 156) yielded two factors subsequently confirmed in confirmatory factor analyses on ABLE subsample 2 (n = 163) and GBGB sample (N = 208) separately. Adequate fit was found for the 2-factor model. Correlational analyses supported strong convergent and discriminant validity. Some statistically significant race and sex differences in subscale scores were found. Implications: VOL measures subjective appraisals of perceived value of life. Consisting of two interrelated subscales, it offers an efficient approach to ascertain personal attributions. PMID:26874189

  19. Prediction of postoperative pulmonary complications in a population-based surgical cohort.

    PubMed

    Canet, Jaume; Gallart, Lluís; Gomar, Carmen; Paluzie, Guillem; Vallès, Jordi; Castillo, Jordi; Sabaté, Sergi; Mazo, Valentín; Briones, Zahara; Sanchis, Joaquín

    2010-12-01

    Current knowledge of the risk for postoperative pulmonary complications (PPCs) rests on studies that narrowly selected patients and procedures. Hypothesizing that PPC occurrence could be predicted from a reduced set of perioperative variables, we aimed to develop a predictive index for a broad surgical population. Patients undergoing surgical procedures given general, neuraxial, or regional anesthesia in 59 hospitals were randomly selected for this prospective, multicenter study. The main outcome was the development of at least one of the following: respiratory infection, respiratory failure, bronchospasm, atelectasis, pleural effusion, pneumothorax, or aspiration pneumonitis. The cohort was randomly divided into a development subsample to construct a logistic regression model and a validation subsample. A PPC predictive index was constructed. Of 2,464 patients studied, 252 events were observed in 123 (5%). Thirty-day mortality was higher in patients with a PPC (19.5%; 95% [CI], 12.5-26.5%) than in those without a PPC (0.5%; 95% CI, 0.2-0.8%). Regression modeling identified seven independent risk factors: low preoperative arterial oxygen saturation, acute respiratory infection during the previous month, age, preoperative anemia, upper abdominal or intrathoracic surgery, surgical duration of at least 2 h, and emergency surgery. The area under the receiver operating characteristic curve was 90% (95% CI, 85-94%) for the development subsample and 88% (95% CI, 84-93%) for the validation subsample. The risk index based on seven objective, easily assessed factors has excellent discriminative ability. The index can be used to assess individual risk of PPC and focus further research on measures to improve patient care.
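
    A points-style risk index plus the area-under-the-ROC-curve check used to judge its discrimination can be sketched as follows; the factor weights and the toy scores are invented placeholders, not the published index coefficients or cohort data:

```python
WEIGHTS = {  # hypothetical point values for the seven factors in the text
    "low_spo2": 3, "recent_resp_infection": 2, "age_over_80": 2,
    "anaemia": 1, "upper_abdominal_or_thoracic": 3,
    "duration_over_2h": 2, "emergency": 1,
}

def risk_score(factors):
    """Additive points score over the risk factors present in a patient."""
    return sum(WEIGHTS[f] for f in factors)

def auc(pos_scores, neg_scores):
    """Probability a random positive outscores a random negative (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Toy validation-subsample scores: patients with a PPC vs. without.
discrimination = auc([5, 7, 9], [1, 2, 5])
```

    Computing the AUC separately on the development and validation subsamples, as the study does, guards against the index merely memorizing the data it was fit on.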

  20. Using a screening tool to evaluate potential use of e-health services for older people with and without cognitive impairment.

    PubMed

    Malinowsky, Camilla; Nygård, Louise; Kottorp, Anders

    2014-01-01

    E-health services are increasingly offered to provide clients with information and a link to healthcare services. The aim of this study is to investigate the perceived access to and the potential to use technologies important for e-health services among older adults with mild cognitive impairment (MCI) or mild Alzheimer's disease (AD) and controls. The perceived access to and perception of difficulty in the use of everyday technology (such as cell phones, coffee machines, computers) was investigated in a sample of older adults (n = 118) comprising three subsamples: adults with MCI (n = 37), with mild AD (n = 37), and controls (n = 44) using the Everyday Technology Use Questionnaire (ETUQ). The use of seven technologies important for e-health services was specifically examined for each subsample and compared between the subsamples. The findings demonstrated that the older adults in all subsamples perceive access to e-health technologies and potentially would use them competently in several e-health services. However, among persons with AD a lower proportion of perceived access to the technology was described, as well as for persons with MCI. To make the benefits of e-health services available and used by all clients, it is important to consider access to the technology required in e-health services and also to support the clients' capabilities to understand and use the technologies. Also, the potential use of the ETUQ to explore the perceived access to and competence in using e-health technologies is a vital issue in the use of e-health services.

  1. Sediment Core Extrusion Method at Millimeter Resolution Using a Calibrated, Threaded-rod.

    PubMed

    Schwing, Patrick T; Romero, Isabel C; Larson, Rebekka A; O'Malley, Bryan J; Fridrik, Erika E; Goddard, Ethan A; Brooks, Gregg R; Hastings, David W; Rosenheim, Brad E; Hollander, David J; Grant, Guy; Mulhollan, Jim

    2016-08-17

    Aquatic sediment core subsampling is commonly performed at cm or half-cm resolution. Depending on the sedimentation rate and depositional environment, this resolution provides records at the annual to decadal scale, at best. An extrusion method, using a calibrated, threaded-rod is presented here, which allows for millimeter-scale subsampling of aquatic sediment cores of varying diameters. Millimeter scale subsampling allows for sub-annual to monthly analysis of the sedimentary record, an order of magnitude higher than typical sampling schemes. The extruder consists of a 2 m aluminum frame and base, two core tube clamps, a threaded-rod, and a 1 m piston. The sediment core is placed above the piston and clamped to the frame. An acrylic sampling collar is affixed to the upper 5 cm of the core tube and provides a platform from which to extract sub-samples. The piston is rotated around the threaded-rod at calibrated intervals and gently pushes the sediment out the top of the core tube. The sediment is then isolated into the sampling collar and placed into an appropriate sampling vessel (e.g., jar or bag). This method also preserves the unconsolidated samples (i.e., high pore water content) at the surface, providing a consistent sampling volume. This mm scale extrusion method was applied to cores collected in the northern Gulf of Mexico following the Deepwater Horizon submarine oil release. Evidence suggests that it is necessary to sample at the mm scale to fully characterize events that occur on the monthly time-scale for continental slope sediments.
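
    The calibrated threaded-rod reduces extrusion depth to rod rotations. Assuming a hypothetical pitch (the published extruder uses a calibrated rod; no pitch is quoted in the abstract), the conversion is:

```python
def turns_for_depth(depth_mm, pitch_mm=1.25):
    """Rod rotations needed to advance the piston (and sediment) by depth_mm.

    pitch_mm (mm of piston travel per full turn) is a hypothetical value,
    not a specification of the published extruder.
    """
    return depth_mm / pitch_mm

# One 1-mm subsampling interval:
turns_per_mm = turns_for_depth(1.0)
```

    Because travel per turn is fixed by the thread, counting rotations gives a repeatable mm-scale sampling interval without any depth sensor.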

  2. Norm comparisons of the Spanish-language and English-language WAIS-III: Implications for clinical assessment and test adaptation.

    PubMed

    Funes, Cynthia M; Rodriguez, Juventino Hernandez; Lopez, Steven Regeser

    2016-12-01

    This study provides a systematic comparison of the norms of 3 Spanish-language Wechsler Adult Intelligence Scales (WAIS-III) batteries from Mexico, Spain, and Puerto Rico, and the U.S. English-language WAIS-III battery. Specifically, we examined the performance of the 4 normative samples on 2 identical subtests (Digit Span and Digit Symbol-Coding) and 1 nearly identical subtest (Block Design). We found that across most age groups the means associated with the Spanish-language versions of the 3 subtests were lower than the means of the U.S. English-language version. In addition, we found that for most age ranges the Mexican subsamples scored lower than the Spanish subsamples. Lower educational levels of Mexicans and Spaniards compared to U.S. residents are consistent with the general pattern of findings. These results suggest that because of the different norms, applying any of the 3 Spanish-language versions of the WAIS-III generally risks underestimating deficits, and that applying the English-language WAIS-III norms risks overestimating deficits of Spanish-speaking adults. There were a few exceptions to these general patterns. For example, the Mexican subsample ages 70 years and above performed significantly better on the Digit Symbol and Block Design than did the U.S. and Spanish subsamples. Implications for the clinical assessment of U.S. Spanish-speaking Latinos and test adaptation are discussed with an eye toward improving the clinical care for this community. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Replicating cluster subtypes for the prevention of adolescent smoking and alcohol use.

    PubMed

    Babbin, Steven F; Velicer, Wayne F; Paiva, Andrea L; Brick, Leslie Ann D; Redding, Colleen A

    2015-01-01

    Substance abuse interventions tailored to the individual level have produced effective outcomes for a wide variety of behaviors. One approach to enhancing tailoring involves using cluster analysis to identify prevention subtypes that represent different attitudes about substance use. This study applied this approach to better understand tailored interventions for smoking and alcohol prevention. Analyses were performed on a sample of sixth graders from 20 New England middle schools involved in a 36-month tailored intervention study. Most adolescents reported being in the Acquisition Precontemplation (aPC) stage at baseline: not smoking or not drinking and not planning to start in the next six months. For smoking (N=4059) and alcohol (N=3973), each sample was randomly split into five subsamples. Cluster analysis was performed within each subsample based on three variables: Pros and Cons (from Decisional Balance Scales), and Situational Temptations. Across all subsamples for both smoking and alcohol, the following four clusters were identified: (1) Most Protected (MP; low Pros, high Cons, low Temptations); (2) Ambivalent (AM; high Pros, average Cons and Temptations); (3) Risk Denial (RD; average Pros, low Cons, average Temptations); and (4) High Risk (HR; high Pros, low Cons, and very high Temptations). Finding the same four clusters within aPC for both smoking and alcohol, replicating the results across the five subsamples, and demonstrating hypothesized relations among the clusters with additional external validity analyses provide strong evidence of the robustness of these results. These clusters demonstrate evidence of validity and can provide a basis for tailoring interventions.

  4. Replicating cluster subtypes for the prevention of adolescent smoking and alcohol use

    PubMed Central

    Babbin, Steven F.; Velicer, Wayne F.; Paiva, Andrea L.; Brick, Leslie Ann D.; Redding, Colleen A.

    2015-01-01

    Introduction Substance abuse interventions tailored to the individual level have produced effective outcomes for a wide variety of behaviors. One approach to enhancing tailoring involves using cluster analysis to identify prevention subtypes that represent different attitudes about substance use. This study applied this approach to better understand tailored interventions for smoking and alcohol prevention. Methods Analyses were performed on a sample of sixth graders from 20 New England middle schools involved in a 36-month tailored intervention study. Most adolescents reported being in the Acquisition Precontemplation (aPC) stage at baseline: not smoking or not drinking and not planning to start in the next six months. For smoking (N= 4059) and alcohol (N= 3973), each sample was randomly split into five subsamples. Cluster analysis was performed within each subsample based on three variables: Pros and Cons (from Decisional Balance Scales), and Situational Temptations. Results Across all subsamples for both smoking and alcohol, the following four clusters were identified: (1) Most Protected (MP; low Pros, high Cons, low Temptations); (2) Ambivalent (AM; high Pros, average Cons and Temptations); (3) Risk Denial (RD; average Pros, low Cons, average Temptations); and (4) High Risk (HR; high Pros, low Cons, and very high Temptations). Conclusions Finding the same four clusters within aPC for both smoking and alcohol, replicating the results across the five subsamples, and demonstrating hypothesized relations among the clusters with additional external validity analyses provide strong evidence of the robustness of these results. These clusters demonstrate evidence of validity and can provide a basis for tailoring interventions. PMID:25222849
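    The five-way split-and-replicate scheme described above can be sketched as follows. The data are synthetic stand-ins for the three clustering variables, and the plain k-means routine is a generic substitute for the study's actual cluster-analysis procedure, which the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: rows = adolescents, columns = the three clustering
# variables (Pros, Cons, Temptations); values are illustrative only.
X = rng.normal(size=(4059, 3))

# Randomly split the sample into five subsamples, as in the study.
folds = np.array_split(rng.permutation(len(X)), 5)

def kmeans(data, k=4, iters=50, seed=0):
    """Minimal k-means: returns one of k cluster labels per row."""
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Cluster each subsample independently, then compare the cluster profiles
# across the five splits to check that the same subtypes replicate.
for i, idx in enumerate(folds):
    labels = kmeans(X[idx])
    print(i, np.bincount(labels, minlength=4))
```

Replication is then judged by whether the four cluster profiles (high/low patterns of Pros, Cons, and Temptations) recur in every subsample, not by label order, which k-means does not preserve across runs.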

  5. Sediment Core Extrusion Method at Millimeter Resolution Using a Calibrated, Threaded-rod

    PubMed Central

    Schwing, Patrick T.; Romero, Isabel C.; Larson, Rebekka A.; O'Malley, Bryan J.; Fridrik, Erika E.; Goddard, Ethan A.; Brooks, Gregg R.; Hastings, David W.; Rosenheim, Brad E.; Hollander, David J.; Grant, Guy; Mulhollan, Jim

    2016-01-01

    Aquatic sediment core subsampling is commonly performed at cm or half-cm resolution. Depending on the sedimentation rate and depositional environment, this resolution provides records at the annual to decadal scale, at best. An extrusion method using a calibrated threaded rod is presented here that allows for millimeter-scale subsampling of aquatic sediment cores of varying diameters. Millimeter-scale subsampling allows for sub-annual to monthly analysis of the sedimentary record, an order of magnitude higher than typical sampling schemes. The extruder consists of a 2 m aluminum frame and base, two core tube clamps, a threaded rod, and a 1 m piston. The sediment core is placed above the piston and clamped to the frame. An acrylic sampling collar is affixed to the upper 5 cm of the core tube and provides a platform from which to extract subsamples. The piston is rotated along the threaded rod at calibrated intervals and gently pushes the sediment out the top of the core tube. The sediment is then isolated in the sampling collar and transferred to an appropriate sampling vessel (e.g., jar or bag). This method also preserves the unconsolidated samples (i.e., those with high pore water content) at the surface, providing a consistent sampling volume. This mm-scale extrusion method was applied to cores collected in the northern Gulf of Mexico following the Deepwater Horizon submarine oil release. Evidence suggests that sampling at the mm scale is necessary to fully characterize events that occur on monthly time scales in continental slope sediments. PMID:27585268

  6. Frequency of hepatocellular fibrillar inclusions in European flounder (Platichthys flesus) from the Douro River estuary, Portugal.

    PubMed

    Carrola, João; Fontaínhas-Fernandes, António; Pires, Maria João; Rocha, Eduardo

    2014-02-01

    Liver lesions in wild fish have been associated with xenobiotic exposure. Facing reports of pollution in the Douro River estuary (north of Portugal), we have been making field surveys using fishes and targeting histopathological biomarkers of exposure and effect. Herein, we intended to better characterize and report the rate of one poorly understood lesion, hepatocellular fibrillar inclusions (HFI), found in European flounder (Platichthys flesus). With this report, we aimed to establish sound baseline data that could be viewed as a starting point for future biomonitoring, while offering the world's second pool of field data on such a toxicopathic liver lesion, which could be compared with data available from the UK estuaries. Sampling was done in the Douro River estuary over 1 year. A total of 72 animals were fished with nets, in spring-summer (SS) and autumn-winter (AW) campaigns. Livers were processed for histopathology and both routine and special staining procedures (alcian blue, periodic acid Schiff (PAS), tetrazonium coupling reaction). Immunohistochemistry targeted AE1/AE3 (pan cytokeratins). The severity of the HFI extent was graded using a system with four levels, varying from 0 (absence of HFI) to 3 (high relative density of cells with HFI). Cells (isolated/groups) with HFI appeared in 35 % or more of the fish, in the total samples of each season, and over 40 % in more homogeneous sub-samples. There were no significant differences when comparing samples versus sub-samples or SS versus AW. When merging the data sets from the two seasons, the frequency of fish with HFI was ≈36 % for the total sample and ≈49 % for the sub-sample. The extreme group (biggest and smallest fish) revealed a HFI frequency of only 16 %, which differed significantly from the total and sub-sampled groups. Immunostaining and PAS were negative for the HFI, and alcian blue could, at times, faintly stain the inclusions. These were positive with the tetrazonium reaction.
We showed the presence of HFI in European flounder from the Douro River estuary, proving that they are essentially protein in nature, that no seasonal changes existed in the HFI frequency, and that it was rarer in the smallest and biggest fish groups. Within the ranges of weight/size of our total sample, we estimate that the frequency of HFI in the local flounder is ≈35 %. That rate stands as a baseline value for future assessments, namely for biomonitoring purposes targeting correlations with the estuary pollution status.

  7. Runaway and pregnant: risk factors associated with pregnancy in a national sample of runaway/homeless female adolescents.

    PubMed

    Thompson, Sanna J; Bender, Kimberly A; Lewis, Carol M; Watkins, Rita

    2008-08-01

    Homeless youth are at particularly high risk for teen pregnancy; research indicates as many as 20% of homeless young women become pregnant. These pregnant and homeless teens lack financial resources and adequate health care, resulting in increased risk for low-birth-weight babies and high infant mortality. This study investigated individual and family-level predictors of teen pregnancy among a national sample of runaway/homeless youth in order to better understand the needs of this vulnerable population. Data from the Runaway/Homeless Youth Management Information System (RHY MIS) provided a national sample of youth seeking services at crisis shelters. A sub-sample of pregnant females and a random sub-sample (matched by age) of nonpregnant females comprised the study sample (N = 951). Chi-square and t tests identified differences between pregnant and nonpregnant runaway females; maximum likelihood logistic regression identified individual and family-level predictors of teen pregnancy. Teen pregnancy was associated with being an ethnic minority, dropping out of school, being away from home for longer periods of time, having a sexually transmitted disease, and feeling abandoned by one's family. Family factors, such as living in a single parent household and experiencing emotional abuse by one's mother, increased the odds of a teen being pregnant. The complex problems associated with pregnant runaway/homeless teens create challenges for short-term shelter services. Suggestions are made for extending shelter services to include referrals and coordination with teen parenting programs and other systems of care.

  8. Comparing Women's and Men's Sexual Offending Using a Statewide Incarcerated Sample: A Two-Study Design.

    PubMed

    Comartin, Erin B; Burgess-Proctor, Amanda; Kubiak, Sheryl; Bender, Kimberly A; Kernsmith, Poco

    2018-05-01

    This study identifies the characteristics that distinguish between women's and men's sexual offending. We compare women and men currently incarcerated for a sex offense in one state using two data sources: administrative data on sex offenders in the state prison (N = 9,235) and subsample surveys (n = 129). Bivariate and logistic regressions were used in these analyses. Women account for a small proportion (1.1%, N = 98) of incarcerated sex offenders. In the population, women and men were convicted of similar types of sex offenses. The subsample was demographically similar to the population. In the subsample, women were more likely than men to have a child victim, be the parent/guardian of the victim, have a co-offender, and repeatedly perpetrate against the same victim. Findings suggest that women convicted and sentenced for a sex offense differ from their male counterparts, with predictive factors being dependent upon the age of their victim(s). Sex offender treatment interventions developed for men are poorly suited to and may have limited efficacy for women.

  9. Galaxy Properties Across and Through the 6dFGS Fundamental Plane

    NASA Astrophysics Data System (ADS)

    Springob, Chris M.; Magoulas, C.; Proctor, R.; Colless, M.; Jones, D. H.; Kobayashi, C.; Campbell, L.; Lucey, J.; Mould, J.; Merson, A.

    2011-05-01

    The 6dF Galaxy Survey (6dFGS) is an all-southern-sky galaxy survey, including 125,000 redshifts and a Fundamental Plane (FP) subsample of 10,000 peculiar velocities, making it the largest peculiar velocity sample to date. We have developed a robust procedure for fitting the FP, performing a maximum likelihood fit to a trivariate Gaussian. We have subsequently examined the variation of a variety of properties across and through the FP, including environment, morphology, metallicity, alpha-enhancement, and stellar age. We find little variation in the FP with global environment. Some variation of morphology is found along the plane, though this is likely a consequence of selection effects. Elemental abundances are found to vary both across and through the FP. The parameter that varies most directly through the FP is stellar age. We find that galaxies with stellar populations with average ages older than 3 Gyr occupy a thinner FP than those younger than 3 Gyr. Thus, a modest improvement in distance errors is realized if one divides the sample into subsamples segregated by age and fits the FP of each subsample independently.

  10. Students' and teachers' perceptions of aggressive behaviour in adolescents with intellectual disability and typically developing adolescents.

    PubMed

    Pavlović, Miroslav; Zunić-Pavlović, Vesna; Glumbić, Nenad

    2013-11-01

    This study investigated aggressive behaviour in Serbian adolescents with intellectual disability (ID) compared to typically developing peers. The sample consisted of both male and female adolescents aged 12-18 years. One hundred of the adolescents had ID, and 348 adolescents did not have ID. The adolescents were asked to complete the Reactive-Proactive Aggression Questionnaire (RPQ), and their teachers provided ratings of aggression for the adolescents using the Children's Scale of Hostility and Aggression: Reactive-Proactive (C-SHARP). Results indicated that adolescents reported a higher prevalence of aggressive behaviour than their teachers. Reactive aggression was more prevalent than proactive aggression in both subsamples. In the subsample of adolescents with ID, there were no sex or age differences for aggression. However, in the normative subsample, boys and older adolescents scored significantly higher on aggression. According to adolescent self-reports, the prevalence of aggression was higher in adolescents without ID, while teachers perceived aggressive behaviour to be more prevalent in adolescents with ID. Scientific and practical implications are discussed.

  11. Psychometric Properties of Creative Self-Efficacy Inventory Among Distinguished Students in Saudi Arabian Universities.

    PubMed

    Alotaibi, Khaled N

    2016-06-01

    This study examined the psychometric properties of the Arabic version of Abbott's Creative Self-Efficacy inventory. Saudi honors students (157 men, 163 women) participated. These students are undergraduates (M age = 19.5 years, SD = 1.9) who completed 30 credit hours with a grade point average of no less than 4.5 out of 5. The results showed that the internal consistency (α = .87) and the test-retest reliability (r = .73) were satisfactory. The study sample was separated into two subsamples. The data from the first subsample (n = 60) were used to conduct an exploratory factor analysis, whereas the data from the second subsample (n = 260) were used to perform a confirmatory factor analysis. The results of exploratory factor analysis and confirmatory factor analysis indicated that creative self-efficacy was not a unidimensional construct but consisted of two factors labeled "creative thinking self-efficacy" and "creative performance self-efficacy." As expected, this two-factor model fit the data adequately, supporting prior research that treated creative self-efficacy as a multidimensional construct.
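    The internal-consistency figure reported above is Cronbach's alpha, which compares the sum of the item variances to the variance of the total score. A minimal sketch of the computation, on made-up item scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Perfectly parallel items (five identical columns) give alpha = 1.0 exactly:
scores = np.tile(np.array([[1.0], [2.0], [3.0], [4.0]]), (1, 5))
print(round(cronbach_alpha(scores), 3))  # 1.0
```

Real item sets are never perfectly parallel, so reported values such as .87 fall below this ceiling.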

  12. Factor structure and psychometric properties of a Spanish translation of the Body Appreciation Scale-2 (BAS-2).

    PubMed

    Swami, Viren; García, Antonio Alías; Barron, David

    2017-09-01

    We examined the psychometric properties of a Spanish translation of the Body Appreciation Scale-2 (BAS-2) in a community sample of 411 women and 389 men in Almería, Spain. Participants completed the 10-item BAS-2 along with measures of appearance evaluation, body areas satisfaction, self-esteem, life satisfaction, and self-reported body mass index (BMI). Exploratory factor analyses with one split-half subsample revealed that BAS-2 scores had a one-dimensional factor structure in women and men. Confirmatory factor analysis with a second split-half subsample showed the one-dimensional factor structure had acceptable fit and was invariant across sex. There were no significant sex differences in BAS-2 scores. BAS-2 scores were significantly and positively correlated with appearance evaluation, body areas satisfaction, self-esteem, and life satisfaction. Body appreciation was significantly and negatively correlated with BMI in men, but associations in women were only significant in the second subsample. Results suggest that the Spanish BAS-2 has adequate psychometric properties.

  13. Semantic evaluations of noise with tonal components in Japan, France, and Germany: a cross-cultural comparison.

    PubMed

    Hansen, Hans; Weber, Reinhard

    2009-02-01

    An evaluation of tonal components in noise using a semantic differential approach yields several perceptual and connotative factors. This study investigates the effect of culture on these factors with the aid of equivalent listening tests carried out in Japan (n=20), France (n=23), and Germany (n=20). The data's equivalence level is determined by a bias analysis. This analysis gives insight into the cross-cultural validity of the scales used for sound character determination. Three factors were extracted by factor analysis in all cultural subsamples: pleasant, metallic, and power. By employing appropriate target rotations of the factor spaces, the rotated factors were compared and yielded high similarities between the different cultural subsamples. To check cross-cultural differences in means, an item bias analysis was conducted. The a priori assumption of unbiased scales is rejected; the differences obtained are partially linked to bias effects. Acoustical sound descriptors were additionally tested for the semantic dimensions. The high agreement in judgments between the different cultural subsamples contrasts with the moderate success of the signal parameters in describing the dimensions.
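    Target rotation of factor spaces is commonly implemented as an orthogonal Procrustes rotation, with factor similarity then scored by Tucker's congruence coefficient; the abstract does not name the exact algorithm, so this is a hedged numpy sketch on synthetic loading matrices, not the study's data:

```python
import numpy as np

def procrustes_rotate(A: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Orthogonal rotation R minimizing the Frobenius norm of A @ R - target."""
    U, _, Vt = np.linalg.svd(A.T @ target)
    return U @ Vt

def tucker_congruence(a: np.ndarray, b: np.ndarray) -> float:
    """Tucker's congruence coefficient between two factor-loading vectors."""
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
target = rng.normal(size=(12, 3))           # e.g. 12 scales x 3 factors
theta = 0.4                                  # an arbitrary rotation angle
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
A = target @ R_true.T                        # a "rotated" second sample

R = procrustes_rotate(A, target)
rotated = A @ R
print(round(tucker_congruence(rotated[:, 0], target[:, 0]), 3))  # 1.0
```

Because the second loading matrix here differs from the target only by a rotation, the Procrustes solution recovers it exactly; with real cross-cultural data the congruence values fall below 1 and are judged against conventional thresholds.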

  14. The revised Generalized Expectancy for Success Scale: a validity and reliability study.

    PubMed

    Hale, W D; Fiedler, L R; Cochran, C D

    1992-07-01

    The Generalized Expectancy for Success Scale (GESS; Fibel & Hale, 1978) was revised and assessed for reliability and validity. The revised version was administered to 199 college students along with other conceptually related measures, including the Rosenberg Self-Esteem Scale, the Life Orientation Test, and Rotter's Internal-External Locus of Control Scale. One subsample of students also completed the Eysenck Personality Inventory, while another subsample performed a criterion-related task that involved risk taking. Item analysis yielded 25 items with correlations of .45 or higher with the total score. Results indicated high internal consistency and test-retest reliability.

  15. Social and cultural influences among Mexican border entrepreneurs.

    PubMed

    Díaz Bretones, Francisco; Cappello, Héctor M; Garcia, Pedro A

    2009-06-01

    Social and cultural conditions (including U.S. border and inland influence, role models within the family, and educational background) that affect locus of control and achievement motivation among Mexican entrepreneurs were explored among 64 selected entrepreneurs in two Mexican towns, one on the Mexico-U.S. border, the other located inland. Analyses showed that the border subsample scored higher on external locus of control; however, in both subsamples the father was an important element in both the locus of control variable and entrepreneur status. No statistically significant mean difference was noted for achievement motivation. Practical applications and limitations are discussed.

  16. Precision digital control systems

    NASA Astrophysics Data System (ADS)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  17. Computer-automated dementia screening using a touch-tone telephone.

    PubMed

    Mundt, J C; Ferber, K L; Rizzo, M; Greist, J H

    2001-11-12

    This study investigated the sensitivity and specificity of a computer-automated telephone system to evaluate cognitive impairment in elderly callers to identify signs of early dementia. The Clinical Dementia Rating Scale was used to assess 155 subjects aged 56 to 93 years (n = 74, 27, 42, and 12, with a Clinical Dementia Rating Scale score of 0, 0.5, 1, and 2, respectively). These subjects performed a battery of tests administered by an interactive voice response system using standard Touch-Tone telephones. Seventy-four collateral informants also completed an interactive voice response version of the Symptoms of Dementia Screener. Sixteen cognitively impaired subjects were unable to complete the telephone call. Performances on 6 of 8 tasks were significantly influenced by Clinical Dementia Rating Scale status. The mean (SD) call length was 12 minutes 27 seconds (2 minutes 32 seconds). A subsample (n = 116) was analyzed using machine-learning methods, producing a scoring algorithm that combined performances across 4 tasks. Results indicated a potential sensitivity of 82.0% and specificity of 85.5%. The scoring model generalized to a validation subsample (n = 39), producing 85.0% sensitivity and 78.9% specificity. The kappa agreement between predicted and actual group membership was 0.64 (P<.001). Of the 16 subjects unable to complete the call, 11 provided sufficient information to permit us to classify them as impaired. Standard scoring of the interactive voice response-administered Symptoms of Dementia Screener (completed by informants) produced a screening sensitivity of 63.5% and 100% specificity. A lower criterion found a 90.4% sensitivity, without lowering specificity. Computer-automated telephone screening for early dementia using either informant or direct assessment is feasible. Such systems could provide wide-scale, cost-effective screening, education, and referral services to patients and caregivers.
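    The screening statistics reported above all follow from a 2x2 confusion table of predicted versus actual impairment. A minimal sketch with illustrative counts (the study's raw cell counts are not given in the abstract):

```python
def screening_stats(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table:
    tp/fn = impaired subjects flagged/missed, tn/fp = unimpaired cleared/flagged."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    observed = (tp + tn) / n                       # raw agreement
    expected = ((tp + fp) * (tp + fn)
                + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Illustrative counts: 41 impaired (37 flagged), 59 unimpaired (50 cleared).
sens, spec, kappa = screening_stats(tp=37, fp=9, fn=4, tn=50)
print(round(sens, 3), round(spec, 3), round(kappa, 3))
```

The study's 82.0%/85.5% figures correspond to the same two ratios computed on its derivation subsample, with kappa reported for predicted versus actual group membership.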

  18. A real-time surface inspection system for precision steel balls based on machine vision

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are among the most fundamental components for motion and power transmission, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested under feeding speeds of 4 pcs s^-1 with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm^2, which meets the requirement for inspecting ISO grade 100 precision steel balls.

  19. Anisotropy of magnetic susceptibility used to detect coring-induced sediment disturbance and filter palaeomagnetic secular variation data: IODP sites M0061 and M0062 (Baltic Sea)

    NASA Astrophysics Data System (ADS)

    Snowball, Ian; Almqvist, Bjarne; Lougheed, Bryan; Svensson, Anna; Wiers, Steffen; Herrero-Bervera, Emilio

    2017-04-01

    Inspired by palaeomagnetic data obtained from two sites (M0061 and M0062) cored during IODP Expedition 347 - Baltic Sea Paleoenvironment we studied the Hemsön Alloformation, which is a series of brackish water muds consisting of horizontal planar and parallel laminated (varved) silty clays free from bioturbation. We determined the anisotropy of magnetic susceptibility (AMS) and characteristic remanence (ChRM) directions of a total of 1,102 discrete samples cut from (i) IODP cores recovered by an Advanced Piston corer and (ii) a series of six sediment cores recovered from the same sites by a Kullenberg piston corer. Systematic core splitting, sub-sampling methods and measurements were applied to all sub-samples. We experimentally tested for field-impressed AMS of these muds, in which titanomagnetite carries magnetic remanence and this test was negative. The AMS is likely determined by paramagnetic minerals. As expected for horizontally bedded sediments, the vast majority of the K1 (maximum) and K2 (intermediate) axes had inclinations close to 0 degrees and the AMS shape parameter (T) indicates an oblate fabric. The declinations of the K1 and K2 directions of the sub-samples taken from Kullenberg cores showed a wide distribution around the bedding plane, with no preferred alignment along any specimen axis. Exceptions are samples from the upper 1.5 m of some of these cores, in which the K1 and K2 directions were vertical, the K3 (minimum) axis shallow and T became prolate. We conclude that the Kullenberg corer, which penetrated the top sediments with a pressure of approximately 15 bar, occasionally under-sampled during penetration and vertically stretched the top sediments. Sub-samples from the upper sections of Kullenberg cores had relatively steep ChRM inclinations and we rejected samples that had a prolate, vertically oriented AMS ellipsoid. 
Surprisingly, the declinations of the K1 axis of all sub-samples taken from IODP APC core sections, which were not oriented relative to each other with respect to azimuth, clustered tightly within the 90-270 degree specimen axis (K2 is within the 0-180 degree axis). This axis is the direction across each core's split surface (i.e. perpendicular to the "push" direction of the sub-sampling boxes). The APC cores were characterized by various degrees of downwards bending of the planar varves towards the inner surface of the core liner. We conclude that the initial hydraulic pressure applied by the APC, which was consistently above 50 bar during Expedition 347, was needlessly high and created a conical sediment structure and the distinct alignment of the magnetic susceptibility axes along specimen axes. APC core sections with marked disturbances were characterized by ChRM inclinations below 65 degrees, which is a lower limit predicted by time-varying geomagnetic field models for the duration of the Hemsön Alloformation (the most recent 6000 years). We rejected samples for palaeomagnetic purposes if the K1 inclination was steeper than 10 degrees. Our study highlights the added value of measuring AMS of discrete sub-samples as an independent control of the suitability of sediments as a source of palaeomagnetic data.
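    The two rejection rules above (K1 inclination steeper than 10 degrees; ChRM inclination below the 65-degree model limit) amount to a simple per-sample filter. A hedged sketch with hypothetical inclination values; the arrays are illustrative, only the two cutoffs come from the study:

```python
import numpy as np

# Hypothetical per-sample measurements, in degrees.
k1_inclination = np.array([3.0, 24.0, 7.0, 55.0, 1.0])
chrm_inclination = np.array([71.0, 40.0, 68.0, 30.0, 75.0])

# Keep a sample only if the K1 (maximum susceptibility) axis is no steeper
# than 10 degrees AND the ChRM inclination meets the 65-degree lower limit
# predicted by geomagnetic field models for this interval.
keep = (k1_inclination <= 10.0) & (chrm_inclination >= 65.0)
print(keep)  # [ True False  True False  True]
```

In the study the same logic also excludes Kullenberg sub-samples with a prolate, vertically oriented AMS ellipsoid, which would add a third boolean condition to the mask.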

  20. GALAXY GROWTH BY MERGING IN THE NEARBY UNIVERSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang Tao; Hogg, David W.; Blanton, Michael R., E-mail: david.hogg@nyu.edu

    2012-11-10

    We measure the mass growth rate by merging for a wide range of galaxy types. We present the small-scale (0.014 h_70^-1 Mpc < r < 11 h_70^-1 Mpc) projected cross-correlation functions w(r_p) of galaxy subsamples from the spectroscopic sample of the NYU Value-Added Galaxy Catalog (5 × 10^5 galaxies at redshifts 0.03 < z < 0.15) with galaxy subsamples from the Sloan Digital Sky Survey imaging (4 × 10^7 galaxies). We use smooth fits to de-project the two-dimensional functions w(r_p) to obtain smooth three-dimensional real-space cross-correlation functions ξ(r) for each of several spectroscopic subsamples with each of several imaging subsamples. Because close pairs are expected to merge, the three-space functions and dynamical evolution time estimates provide galaxy accretion rates. We find that the accretion onto massive blue galaxies and onto red galaxies is dominated by red companions, and that onto small-mass blue galaxies, red and blue galaxies make comparable contributions. We integrate over all types of companions and find that at fixed stellar mass, the total fractional accretion rates onto red galaxies (≈3 h_70 percent per Gyr) are greater than those onto blue galaxies (≈1 h_70 percent per Gyr). These rates are almost certainly overestimates because we have assumed that all close pairs merge as quickly as the merger time that we used. One conclusion of this work is that if the total growth of red galaxies from z = 1 to z = 0 is mainly due to merging, the merger rates must have been higher in the past.

  1. Sexing adult black-legged kittiwakes by DNA, behavior, and morphology

    USGS Publications Warehouse

    Jodice, P.G.R.; Lanctot, Richard B.; Gill, V.A.; Roby, D.D.; Hatch, Shyla A.

    2000-01-01

    We sexed adult Black-legged Kittiwakes (Rissa tridactyla) using DNA-based genetic techniques, behavior and morphology and compared results from these techniques. Genetic and morphology data were collected on 605 breeding kittiwakes and sex-specific behaviors were recorded for a sub-sample of 285 of these individuals. We compared sex classification based on both genetic and behavioral techniques for this sub-sample to assess the accuracy of the genetic technique. DNA-based techniques correctly sexed 97.2% and sex-specific behaviors, 96.5% of this sub-sample. We used the corrected genetic classifications from this sub-sample and the genetic classifications for the remaining birds, under the assumption they were correct, to develop predictive morphometric discriminant function models for all 605 birds. These models accurately predicted the sex of 73-96% of individuals examined, depending on the sample of birds used and the characters included. The most accurate single measurement for determining sex was length of head plus bill, which correctly classified 88% of individuals tested. When both members of a pair were measured, classification levels improved and approached the accuracy of both behavioral observations and genetic analyses. Morphometric techniques were only slightly less accurate than genetic techniques but were easier to implement in the field and less costly. Behavioral observations, while highly accurate, required that birds be easily observable during the breeding season and that birds be identifiable. As such, sex-specific behaviors may best be applied as a confirmation of sex for previously marked birds. All three techniques thus have the potential to be highly accurate, and the selection of one or more will depend on the circumstances of any particular field study.
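    A single-measurement discriminant like the head-plus-bill rule above reduces, under equal group variances and equal priors, to a midpoint cutoff between the two sex means. A minimal sketch on made-up measurements; the values and the simplified cutoff rule are illustrative, not the study's fitted discriminant function:

```python
import numpy as np

# Hypothetical head-plus-bill lengths (mm) for birds of known sex.
male_hb = np.array([93.1, 94.8, 95.2, 96.0])
female_hb = np.array([88.0, 89.5, 90.1, 91.2])

# Equal-variance, equal-prior linear discriminant on one measurement:
# classify by which side of the midpoint between group means a bird falls.
cutoff = (male_hb.mean() + female_hb.mean()) / 2

def predict_sex(head_plus_bill_mm: float) -> str:
    return "male" if head_plus_bill_mm > cutoff else "female"

print(round(cutoff, 2), predict_sex(93.0), predict_sex(89.0))
```

The study's reported 88% correct classification for head plus bill reflects how much the real male and female distributions overlap around such a cutoff; measuring both members of a pair shrinks that overlap.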

  2. Excoriation (skin-picking) disorder in adults: a cross-cultural survey of Israeli Jewish and Arab samples.

    PubMed

    Leibovici, Vera; Koran, Lorrin M; Murad, Sari; Siam, Ihab; Odlaug, Brian L; Mandelkorn, Uri; Feldman-Weisz, Vera; Keuthen, Nancy J

    2015-04-01

We sought to estimate the lifetime prevalence of Excoriation (Skin-Picking) Disorder (SPD) in the Israeli adult population as a whole and compare SPD prevalence in the Jewish and Arab communities. We also explored demographic, medical and psychological correlates of SPD diagnosis. Questionnaires and scales screening for SPD, and assessing the severity of perceived stress, depression, obsessive-compulsive disorder (OCD), body dysmorphic disorder (BDD), alcohol use, illicit drug use, and medical disorders were completed in a sample of 2145 adults attending medical settings. The lifetime prevalence of SPD was 5.4% in the total sample; it did not differ between genders or within Jewish and Arab subsamples. Severity of depression (p<0.001), OCD (p<0.001) and perceived stress (p<0.001) were greater in the SPD-positive sample. Similarly, diagnoses of BDD (p=0.02) and generalized anxiety (p=0.03) were significantly more common in the SPD-positive respondents. Alcohol use and illicit substance use were significantly more common among SPD-positive respondents in the total sample (both p's=0.01) and the Jewish subsample (p=0.03 and p=0.02, respectively). Hypothyroidism was more prevalent in the SPD-positive Jewish subsample (p=0.02). In the total sample, diabetes mellitus was more common in women than in men (p=0.04). Lifetime SPD appears to be relatively common in Israeli adults and associated with other mental disorders. Differences in the self-reported medical and psychiatric comorbidities between the Jewish and Arab subsamples suggest the possibility of cross-cultural variation in the correlates of this disorder. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Investigating Direct Links between Depression, Emotional Control, and Physical Punishment with Adolescent Drive for Thinness and Bulimic Behaviors, Including Possible Moderation by the Serotonin Transporter 5-HTTLPR Polymorphism.

    PubMed

    Rozenblat, Vanja; Ryan, Joanne; Wertheim, Eleanor H; King, Ross; Olsson, Craig A; Krug, Isabel

    2017-01-01

Objectives: To examine the relationship between psychological and social factors (depression, emotional control, sexual abuse, and parental physical punishment) and adolescent drive for Thinness and Bulimic behaviors in a large community sample, and to investigate possible genetic moderation. Method: Data were drawn from the Australian Temperament Project (ATP), a population-based cohort study that has followed a representative sample of 2443 participants from infancy to adulthood across 16 waves since 1983. A subsample of 650 participants (50.2% female) of Caucasian descent who provided DNA were genotyped for a serotonin transporter promoter polymorphism (5-HTTLPR). Adolescent disordered eating attitudes and behaviors were assessed using the Bulimia and Drive for Thinness scales of the Eating Disorder Inventory-2 (15-16 years). Depression and emotional control were examined at the same age using the Short Mood and Feelings Questionnaire, and an ATP-devised measure of emotional control. History of sexual abuse and physical punishment were assessed retrospectively (23-24 years) in a subsample of 467 of those providing DNA. Results: EDI-2 scores were associated with depression, emotional control, and retrospectively reported parental physical punishment. Although there was statistically significant moderation of the relationship between parental physical punishment and bulimic behaviors by 5-HTTLPR (p = 0.0048), genotypes in this subsample were not in Hardy-Weinberg Equilibrium. No other G×E interactions were significant. Conclusion: Findings from this study affirm the central importance of psychosocial processes in disordered eating patterns in adolescence. Evidence of moderation by 5-HTTLPR was not conclusive; however, genetic moderation observed in a subsample not in Hardy-Weinberg Equilibrium warrants further investigation.

  4. Investigating Direct Links between Depression, Emotional Control, and Physical Punishment with Adolescent Drive for Thinness and Bulimic Behaviors, Including Possible Moderation by the Serotonin Transporter 5-HTTLPR Polymorphism

    PubMed Central

    Rozenblat, Vanja; Ryan, Joanne; Wertheim, Eleanor H.; King, Ross; Olsson, Craig A.; Krug, Isabel

    2017-01-01

    Objectives: To examine the relationship between psychological and social factors (depression, emotional control, sexual abuse, and parental physical punishment) and adolescent drive for Thinness and Bulimic behaviors in a large community sample, and to investigate possible genetic moderation. Method: Data were drawn from the Australian Temperament Project (ATP), a population-based cohort study that has followed a representative sample of 2443 participants from infancy to adulthood across 16 waves since 1983. A subsample of 650 participants (50.2% female) of Caucasian descent who provided DNA were genotyped for a serotonin transporter promoter polymorphism (5-HTTLPR). Adolescent disordered eating attitudes and behaviors were assessed using the Bulimia and Drive for Thinness scales of the Eating Disorder Inventory-2 (15–16 years). Depression and emotional control were examined at the same age using the Short Mood and Feelings Questionnaire, and an ATP-devised measure of emotional control. History of sexual abuse and physical punishment were assessed retrospectively (23–24 years) in a subsample of 467 of those providing DNA. Results: EDI-2 scores were associated with depression, emotional control, and retrospectively reported parental physical punishment. Although there was statistically significant moderation of the relationship between parental physical punishment and bulimic behaviors by 5-HTTLPR (p = 0.0048), genotypes in this subsample were not in Hardy–Weinberg Equilibrium. No other G×E interactions were significant. Conclusion: Findings from this study affirm the central importance of psychosocial processes in disordered eating patterns in adolescence. Evidence of moderation by 5-HTTLPR was not conclusive; however, genetic moderation observed in a subsample not in Hardy–Weinberg Equilibrium warrants further investigation. PMID:28848475

  5. Genealogy construction in a historically isolated population: application to genetic studies of rheumatoid arthritis in the Pima Indian.

    PubMed

    Lin, J P; Hirsch, R; Jacobsson, L T; Scott, W W; Ma, L D; Pillemer, S R; Knowler, W C; Kastner, D L; Bale, S J

    1999-01-01

Due to the characteristics of complex traits, many traits may not be amenable to traditional epidemiologic methods. We illustrate an approach that defines an isolated population as the "unit" for carrying out studies of complex disease. We provide an example using the Pima Indians, a relatively isolated population, in which the incidence and prevalence of Type 2 diabetes, gallbladder disease, and rheumatoid arthritis (RA) are significantly increased compared with the general U.S. population. A previous study of RA in the Pima utilizing traditional methods failed to detect a genetic effect on the occurrence of the disease. Our approach involved constructing a genealogy for this population and using a genealogic index to investigate familial aggregation. We developed an algorithm to identify biological relationships among 88 RA cases versus 4,000 subsamples of age-matched individuals from the same population. Kinship coefficients were calculated for all possible pairs of RA cases, and similarly for the subsamples. The sum of the kinship coefficients among all combinations of RA pairs, 5.92, was significantly higher than the average of the 4,000 subsamples, 1.99 (p < 0.001), and was elevated over that of the subsamples to the level of second cousin, supporting a genetic effect in the familial aggregation. The mean inbreeding coefficient for the Pima was 0.00009, similar to that reported for other populations; none of the RA cases were inbred. The Pima genealogy can be anticipated to provide valuable information for the genetic study of diseases other than RA. Defining an isolated population as the "unit" in which to assess familial aggregation may be advantageous, especially if there are a limited number of cases in the study population.
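The genealogic-index comparison described above amounts to a permutation test: sum the pairwise kinship coefficients among cases and compare against random same-size subsamples drawn from the population. A minimal sketch with a toy population and hypothetical kinship values (the study used 88 RA cases against 4,000 age-matched subsamples):

```python
import itertools
import random

def kinship_sum(ids, phi):
    """Sum of pairwise kinship coefficients over all pairs in `ids`."""
    return sum(phi.get(frozenset(p), 0.0) for p in itertools.combinations(ids, 2))

random.seed(1)
pop = list(range(20))
# Hypothetical kinship coefficients: individuals 0-4 form a related cluster
# (0.125 = first cousins); all other pairs are unrelated.
phi = {frozenset((i, j)): 0.125 for i in range(5) for j in range(5) if i < j}

cases = [0, 1, 2, 3, 4]
observed = kinship_sum(cases, phi)
# Null distribution: summed kinship in random subsamples of the same size.
draws = [kinship_sum(random.sample(pop, len(cases)), phi) for _ in range(1000)]
p_value = sum(d >= observed for d in draws) / len(draws)
```

A small p-value indicates the cases are more closely related than random subsamples of the same population, i.e. familial aggregation beyond chance.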

  6. Screening for prenatal substance use: development of the Substance Use Risk Profile-Pregnancy scale.

    PubMed

    Yonkers, Kimberly A; Gotman, Nathan; Kershaw, Trace; Forray, Ariadna; Howell, Heather B; Rounsaville, Bruce J

    2010-10-01

To report on the development of a questionnaire to screen for hazardous substance use in pregnant women and to compare the performance of the questionnaire with other drug and alcohol measures. Pregnant women were administered a modified TWEAK (Tolerance, Worried, Eye-openers, Amnesia, K[C] Cut Down) questionnaire, the 4Ps Plus questionnaire, items from the Addiction Severity Index, and two questions about domestic violence (N=2,684). The sample was divided into "training" (n=1,610) and "validation" (n=1,074) subsamples. We applied recursive partitioning class analysis to the responses from individuals in the training subsample, which resulted in a three-item Substance Use Risk Profile-Pregnancy scale. We examined sensitivity, specificity, and the fit of logistic regression models in the validation subsample to compare the performance of the Substance Use Risk Profile-Pregnancy scale with the modified TWEAK and various scoring algorithms of the 4Ps. The Substance Use Risk Profile-Pregnancy scale comprises three informative questions that can be scored for high- or low-risk populations. The Substance Use Risk Profile-Pregnancy scale algorithm for low-risk populations was most highly predictive of substance use in the validation subsample (Akaike's Information Criterion=579.75, Nagelkerke R2=0.27) with high sensitivity (91%) and adequate specificity (67%). The high-risk algorithm had lower sensitivity (57%) but higher specificity (88%). The Substance Use Risk Profile-Pregnancy scale is simple and flexible with good sensitivity and specificity. The Substance Use Risk Profile-Pregnancy scale can potentially detect a range of substances that may be abused. Clinicians need to further assess women with a positive screen to identify those who require treatment for alcohol or illicit substance use in pregnancy. III.
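The sensitivity and specificity figures above come from cross-tabulating screen results against a reference classification of actual use. A minimal sketch with hypothetical screen outcomes (the study reported 91%/67% for the low-risk algorithm):

```python
# Sensitivity = true positives / all who truly use;
# specificity = true negatives / all who truly do not.
def screen_metrics(results):
    """results: list of (screen_positive: bool, truly_uses: bool)."""
    tp = sum(s and t for s, t in results)
    tn = sum(not s and not t for s, t in results)
    fp = sum(s and not t for s, t in results)
    fn = sum(not s and t for s, t in results)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical validation subsample: 10 users, 10 non-users.
data = ([(True, True)] * 9 + [(False, True)] * 1 +
        [(False, False)] * 6 + [(True, False)] * 4)
sensitivity, specificity = screen_metrics(data)
```

The high-risk versus low-risk scoring algorithms in the scale trade these two quantities off: a lower screening threshold raises sensitivity at the cost of specificity, and vice versa.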

  7. Characterization Data Package for Containerized Sludge Samples Collected from Engineered Container SCS-CON-210

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fountain, Matthew S.; Fiskum, Sandra K.; Baldwin, David L.

This data package contains the K Basin sludge characterization results obtained by Pacific Northwest National Laboratory during processing and analysis of four sludge core samples collected from Engineered Container SCS-CON-210 in 2010, as requested by CH2M Hill Plateau Remediation Company. Sample processing requirements, analytes of interest, detection limits, and quality control sample requirements are defined in KBC-33786, Rev. 2. The core processing scope included reconstitution of a sludge core sample distributed among four to six 4-L polypropylene bottles into a single container. The reconstituted core sample was then mixed and subsampled to support a variety of characterization activities. Additional core sludge subsamples were combined to prepare a container composite. The container composite was fractionated by wet sieving through a 2,000-micron mesh and a 500-micron mesh sieve. Each sieve fraction was sampled to support a suite of analyses. The core composite analysis scope included density determination, radioisotope analysis, and metals analysis, including the Waste Isolation Pilot Plant Hazardous Waste Facility Permit metals (with the exception of mercury). The container composite analysis included most of the core composite analysis scope plus particle size distribution, particle density, rheology, and crystalline phase identification. A summary of the received samples, core sample reconstitution and subsampling activities, container composite preparation and subsampling activities, physical properties, and analytical results is presented. Supporting data and documentation are provided in the appendices. There were no cases of sample or data loss, and all of the available samples and data are reported as required by the Quality Assurance Project Plan/Sampling and Analysis Plan.

  8. Screening for Prenatal Substance Use

    PubMed Central

    Yonkers, Kimberly A.; Gotman, Nathan; Kershaw, Trace; Forray, Ariadna; Howell, Heather B.; Rounsaville, Bruce J.

    2011-01-01

OBJECTIVE To report on the development of a questionnaire to screen for hazardous substance use in pregnant women and to compare the performance of the questionnaire with other drug and alcohol measures. METHODS Pregnant women were administered a modified TWEAK (Tolerance, Worried, Eye-openers, Amnesia, K[C] Cut Down) questionnaire, the 4Ps Plus questionnaire, items from the Addiction Severity Index, and two questions about domestic violence (N=2,684). The sample was divided into “training” (n=1,610) and “validation” (n=1,074) subsamples. We applied recursive partitioning class analysis to the responses from individuals in the training subsample, which resulted in a three-item Substance Use Risk Profile-Pregnancy scale. We examined sensitivity, specificity, and the fit of logistic regression models in the validation subsample to compare the performance of the Substance Use Risk Profile-Pregnancy scale with the modified TWEAK and various scoring algorithms of the 4Ps. RESULTS The Substance Use Risk Profile-Pregnancy scale comprises three informative questions that can be scored for high- or low-risk populations. The Substance Use Risk Profile-Pregnancy scale algorithm for low-risk populations was most highly predictive of substance use in the validation subsample (Akaike’s Information Criterion=579.75, Nagelkerke R2=0.27) with high sensitivity (91%) and adequate specificity (67%). The high-risk algorithm had lower sensitivity (57%) but higher specificity (88%). CONCLUSION The Substance Use Risk Profile-Pregnancy scale is simple and flexible with good sensitivity and specificity. The Substance Use Risk Profile-Pregnancy scale can potentially detect a range of substances that may be abused. Clinicians need to further assess women with a positive screen to identify those who require treatment for alcohol or illicit substance use in pregnancy. PMID:20859145

  9. System precisely controls oscillation of vibrating mass

    NASA Technical Reports Server (NTRS)

    Hancock, D. J.

    1967-01-01

System precisely controls the sinusoidal amplitude of a vibrating mechanical mass. Using two sets of coils, the system regulates the drive signal amplitude at the precise level needed to hold the mass at the desired vibration amplitude once it is reached.

  10. Precision Spectrophotometric Calibration System for Dark Energy Instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schubnell, Michael S.

    2015-06-30

For this research we built a precision calibration system and carried out measurements to demonstrate the precision that can be achieved with a high-precision spectrometric calibration system. It was shown that the system is capable of providing a complete spectrophotometric calibration at the sub-pixel level. The calibration system uses a fast, high-precision monochromator that can quickly and efficiently scan over an instrument’s entire spectral range with a spectral line width of less than 0.01 nm, corresponding to a fraction of a pixel on the CCD. The system was extensively evaluated in the laboratory. Our research showed that a complete spectrophotometric calibration standard for spectroscopic survey instruments such as DESI is possible. The monochromator precision and repeatability to a small fraction of the DESI spectrograph LSF were demonstrated, with re-initialization on every scan and thermal drift compensation by locking to multiple external line sources. A projector system that mimics the telescope aperture for a point source at infinity was also demonstrated.

  11. A Method Based on Artificial Intelligence To Fully Automatize The Evaluation of Bovine Blastocyst Images.

    PubMed

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-08-09

Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was held out for a blind test after fitting, on which the system had an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode for grades). However, when using the ANN to assess this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing a commercial evaluation of its feasibility.
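The agreement figures above compare a predicted grade with the modal (most common) grade across the three embryologists. A minimal sketch with hypothetical grades and predictions:

```python
from collections import Counter

def modal_grade(grades):
    """Most common grade among the evaluators (the 'mode for grades')."""
    return Counter(grades).most_common(1)[0][0]

# Each entry: ([grades from 3 evaluators], predicted grade) - hypothetical.
evaluations = [([1, 1, 2], 1), ([2, 2, 3], 2), ([3, 3, 3], 3), ([1, 2, 2], 1)]
agreement = sum(pred == modal_grade(grades)
                for grades, pred in evaluations) / len(evaluations)
```

Note the asymmetry the abstract reports: individual evaluators agreed with the mode only 54% of the time, while the ANN reached 87.5%, i.e. the model tracked the consensus more consistently than any single human rater.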

  12. Compact "diode-based" multi-energy soft x-ray diagnostic for NSTX.

    PubMed

    Tritz, K; Clayton, D J; Stutman, D; Finkenthal, M

    2012-10-01

A novel and compact, diode-based, multi-energy soft x-ray (ME-SXR) diagnostic has been developed for the National Spherical Torus Experiment. The new edge ME-SXR system tested on NSTX consists of a set of vertically stacked diode arrays, each viewing the plasma tangentially through independent pinholes and filters, providing overlapping views of the plasma midplane and allowing simultaneous SXR measurements with coarse sub-sampling of the x-ray spectrum. Using computed x-ray spectral emission data, combinations of filters can provide fast (>10 kHz) measurements of changes in the electron temperature and density profiles, offering a way to "fill in" the gaps of the multi-point Thomson scattering system.

  13. Spatial grain and the causes of regional diversity gradients in ants.

    PubMed

    Kaspari, Michael; Yuan, May; Alonso, Leeanne

    2003-03-01

Gradients of species richness (S; the number of species of a given taxon in a given area and time) are ubiquitous. A key goal in ecology is to understand whether and how the many processes that generate these gradients act at different spatial scales. Here we evaluate six hypotheses for diversity gradients with 49 New World ant communities, from tundra to rain forest. We contrast their performance at three spatial grains, from S(plot), the average number of ant species nesting in a 1-m² plot, through Fisher's alpha, an index that treats our 30 1-m² plots as subsamples of a locality's diversity. At the smallest grain, S(plot) was tightly correlated (r² = 0.99) with colony abundance in a fashion indistinguishable from the packing of randomly selected individuals into a fixed space. As spatial grain increased, the coaction of two factors linked to high net rates of diversification--warm temperatures and large areas of uniform climate--accounted for 75% of the variation in Fisher's alpha. However, the mechanisms underlying these correlations (i.e., precisely how temperature and area shape the balance of speciation to extinction) remain elusive.
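Fisher's alpha, the index used above to treat the 30 plots as subsamples of a locality's diversity, is defined implicitly by S = α ln(1 + N/α), where S is species richness and N is the number of individuals. A minimal sketch that solves for α by bisection (the S and N values are toy numbers):

```python
import math

def fishers_alpha(S, N, lo=1e-6, hi=1e6, tol=1e-9):
    """Solve S = alpha * ln(1 + N / alpha) for alpha by bisection.

    The left-hand side is monotonically increasing in alpha, so a simple
    bracketing search suffices (requires 0 < S < N).
    """
    f = lambda a: a * math.log(1 + N / a) - S
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

alpha = fishers_alpha(S=10, N=100)
```

Because α depends on S and N jointly rather than on plot area directly, it is less sensitive to sample size than raw richness, which is what makes it suitable for pooling subsampled plots.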

  14. NIHAO VI. The hidden discs of simulated galaxies

    NASA Astrophysics Data System (ADS)

    Obreja, Aura; Stinson, Gregory S.; Dutton, Aaron A.; Macciò, Andrea V.; Wang, Liang; Kang, Xi

    2016-06-01

Detailed studies of galaxy formation require clear definitions of the structural components of galaxies. Precisely defined components also enable better comparisons between observations and simulations. We use a subsample of 18 cosmological zoom-in simulations from the Numerical Investigation of a Hundred Astrophysical Objects (NIHAO) project to derive a robust method for defining stellar kinematic discs in galaxies. Our method uses Gaussian Mixture Models in a 3D space of dynamical variables. The NIHAO galaxies have the right stellar mass for their halo mass, and their angular momenta and Sérsic indices match observations. While the photometric disc-to-total ratios are close to 1 for all the simulated galaxies, the kinematic ratios are around ~0.5. Thus, exponential structure does not imply a cold kinematic disc. Above M* ~ 10^9.5 M⊙, the decomposition leads to thin discs and spheroids that have clearly different properties, in terms of angular momentum, rotational support, ellipticity, [Fe/H] and [O/Fe]. At M* ≲ 10^9.5 M⊙, the decomposition selects discs and spheroids with less distinct properties. At these low masses, both the discs and spheroids have exponential profiles with high minor-to-major axis ratios, i.e. thickened discs.
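The Gaussian Mixture Model decomposition described above can be illustrated in one dimension: fit two Gaussian components by expectation-maximization to a dynamical variable and let the components separate rotation-supported from pressure-supported stars. A minimal from-scratch sketch on synthetic data (NIHAO fits mixtures in a 3D space of dynamical variables; the variable and values here are toy):

```python
import math
import random

def em_two_gaussians(x, iters=200):
    """Fit a 2-component 1-D Gaussian mixture by EM; returns (mu, var, w)."""
    mu = [min(x), max(x)]          # spread the initial means apart
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        r = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            r.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances.
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(x)
            mu[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            var[k] = sum(ri[k] * (xi - mu[k]) ** 2
                         for ri, xi in zip(r, x)) / nk + 1e-6
    return mu, var, w

random.seed(0)
# Toy orbital-circularity values: spheroid stars near 0, disc stars near 1.
spheroid = [random.gauss(0.0, 0.3) for _ in range(200)]
disc = [random.gauss(1.0, 0.1) for _ in range(200)]
mu, var, w = em_two_gaussians(spheroid + disc)
```

Each star is then assigned to the component with the larger responsibility, which is how a mixture fit turns a continuous dynamical variable into a disc/spheroid decomposition without a hand-tuned cut.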

  15. Design and control of the precise tracking bed based on complex electromechanical design theory

    NASA Astrophysics Data System (ADS)

    Ren, Changzhi; Liu, Zhao; Wu, Liao; Chen, Ken

    2010-05-01

Precise tracking technology is widely used in astronomical instruments, satellite tracking, and aeronautic test beds. A precise ultra-low-speed tracking drive, however, is a highly integrated electromechanical system, so a complex electromechanical design method was adopted to improve the efficiency, reliability, and quality of the system across the design and manufacturing cycle. The precise tracking bed is an ultra-exact, ultra-low-speed, high-precision, high-inertia instrument, and its mechanisms and operating regime at ultra-low speed differ from those of conventional drives. This paper describes the design process based on complex electromechanical optimizing design theory; a non-PID control method with CMAC feedforward is used in the servo system of the precise tracking bed, and simulation results are discussed.

  16. Effects of duty-cycled passive acoustic recordings on detecting the presence of beaked whales in the northwest Atlantic.

    PubMed

    Stanistreet, Joy E; Nowacek, Douglas P; Read, Andrew J; Baumann-Pickering, Simone; Moors-Murphy, Hilary B; Van Parijs, Sofie M

    2016-07-01

This study investigated the effects of using duty-cycled passive acoustic recordings to monitor the daily presence of beaked whale species at three locations in the northwest Atlantic. Continuous acoustic records were subsampled to simulate duty cycles of 50%, 25%, and 10% and cycle period durations from 10 to 60 min. Short, frequent listening periods were most effective for assessing the daily presence of beaked whales. Furthermore, subsampling at low duty cycles led to consistently greater underestimation of Mesoplodon species than of either Cuvier's beaked whales or northern bottlenose whales, leading to a potential bias in estimation of relative species occurrence.
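The subsampling scheme above can be simulated directly: keep only detections that fall inside the listening window of each duty cycle and ask whether daily presence is still registered. A minimal sketch with synthetic per-minute detection times (a 60-min cycle with 6 min of listening approximates the 10% duty cycle):

```python
import random

def daily_presence(detections, cycle_min, listen_min):
    """True if any detection time (minute of day) falls in a listening window."""
    return any((t % cycle_min) < listen_min for t in detections)

random.seed(42)
# Synthetic record: 200 days, each with 0-3 detections at random minutes.
days = [[random.randrange(1440) for _ in range(random.randrange(0, 4))]
        for _ in range(200)]

full = sum(bool(d) for d in days)  # days with presence under continuous recording
duty = sum(daily_presence(d, cycle_min=60, listen_min=6) for d in days)
recovered = duty / full            # fraction of presence-days still detected
```

Since a duty cycle can only miss detections, `duty` is bounded above by `full`; the study's finding that short, frequent listening periods work best corresponds to shrinking `cycle_min` while holding the `listen_min`/`cycle_min` ratio fixed.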

  17. VizieR Online Data Catalog: SAMI Galaxy Survey: gas streaming (Cecil+, 2016)

    NASA Astrophysics Data System (ADS)

    Cecil, G.; Fogarty, L. M. R.; Richards, S.; Bland-Hawthorn, J.; Lange, R.; Moffett, A.; Catinella, B.; Cortese, L.; Ho, I.-T.; Taylor, E. N.; Bryant, J. J.; Allen, J. T.; Sweet, S. M.; Croom, S. M.; Driver, S. P.; Goodwin, M.; Kelvin, L.; Green, A. W.; Konstantopoulos, I. S.; Owers, M. S.; Lawrence, J. S.; Lorente, N. P. F.

    2016-08-01

From the first ~830 targets observed in the SGS, we selected 344 rotationally supported galaxies having enough gas to map their CSC. We rejected 8 whose inclination angle to us is too small (i<20°) to be established reliably by photometry, and those very strongly barred or in obvious interactions. Finally, we rejected those whose CSC would be smeared excessively by our PSF (Sect. 2.3.1) because of large inclination (i>71°), compact size, or atrocious observing conditions, leaving 163 galaxies in the SGS GAMA survey sub-sample and 15 in the "cluster" sub-sample with discs. (3 data files).

  18. Belief in narcissistic insecurity: Perceptions of lay raters and their personality and psychopathology relations.

    PubMed

    Stanton, Kasey; Watson, David; Clark, Lee Anna

    2018-02-01

This study advances research on interpersonal perceptions of narcissism by examining the degree to which overt displays of narcissism (e.g. being boastful and arrogant) are viewed by lay raters as resulting from covert insecurity. We wrote a brief set of items to assess this view and collected responses from a large sample of community adults (n = 5,528). We present results both for participants reporting (n = 617; patient subsample) and not reporting (n = 4,911; non-patient subsample) current psychiatric treatment. Results revealed that (1) overt grandiose narcissistic traits generally are viewed as being linked to covert insecurity and vulnerability and (2) items intended to assess this link define a meaningful construct, referred to here as Belief in Narcissistic Insecurity. Patient subsample participants also completed measures of personality and psychopathology. Belief in Narcissistic Insecurity showed modest positive relations with self-rated narcissism and with favourable views of one's personality (i.e. seeing oneself as extraverted and conscientious). These findings contribute to research aimed at explicating how perceptions of narcissism are related to self-views and interpersonal functioning. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Designing long-term fish community assessments in connecting channels: Lessons from the Saint Marys River

    USGS Publications Warehouse

    Schaeffer, Jeff; Rogers, Mark W.; Fielder, David G.; Godby, Neal; Bowen, Anjanette K.; O'Connor, Lisa; Parrish, Josh; Greenwood, Susan; Chong, Stephen; Wright, Greg

    2014-01-01

    Long-term surveys are useful in understanding trends in connecting channel fish communities; a gill net assessment in the Saint Marys River performed periodically since 1975 is the most comprehensive connecting channels sampling program within the Laurentian Great Lakes. We assessed efficiency of that survey, with intent to inform development of assessments at other connecting channels. We evaluated trends in community composition, effort versus estimates of species richness, ability to detect abundance changes for four species, and effects of subsampling yellow perch catches on size and age-structure metrics. Efficiency analysis revealed low power to detect changes in species abundance, whereas reduced effort could be considered to index species richness. Subsampling simulations indicated that subsampling would have allowed reliable estimates of yellow perch (Perca flavescens) population structure, while greatly reducing the number of fish that were assigned ages. Analyses of statistical power and efficiency of current sampling protocols are useful for managers collecting and using these types of data as well as for the development of new monitoring programs. Our approach provides insight into whether survey goals and objectives were being attained and can help evaluate ability of surveys to answer novel questions that arise as management strategies are refined.
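The subsampling simulation idea above can be sketched by drawing a random subsample from a full catch and checking how well a population-structure metric is recovered; here mean length stands in for the size- and age-structure metrics, and the lengths are synthetic:

```python
import random

random.seed(7)
# Synthetic yellow perch catch: 2,000 total lengths (mm).
catch = [random.gauss(180, 25) for _ in range(2000)]

# Subsample 200 fish for detailed processing (e.g. age assignment).
subsample = random.sample(catch, 200)

full_mean = sum(catch) / len(catch)
sub_mean = sum(subsample) / len(subsample)
relative_error = abs(sub_mean - full_mean) / full_mean
```

Repeating the draw many times yields the distribution of the error, which is how a simulation like the one above can justify aging only a subsample: if the metric is reliably recovered, the remaining fish need not be processed.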

  20. Statistical considerations for grain-size analyses of tills

    USGS Publications Warehouse

    Jacobs, A.M.

    1971-01-01

    Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
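The simplified test described above, comparing the mean of the subsample measurements with the standard population's mean, can be sketched as a t-like statistic built from the standard population's standard deviation. All numbers below are hypothetical, and the critical value is the ordinary two-sided 5% Student-t value for n - 1 degrees of freedom (the paper's graphical criteria may differ):

```python
import math

def t_statistic(sub, std_mean, std_sd):
    """(subsample mean - standard mean) / (standard sd / sqrt(n))."""
    n = len(sub)
    sub_mean = sum(sub) / n
    return (sub_mean - std_mean) / (std_sd / math.sqrt(n))

# Hypothetical % sand in 4 subsamples of a new till sample, against a
# standard population with mean 32.0 and standard deviation 1.2.
sub = [32.1, 33.0, 31.5, 32.6]
t = t_statistic(sub, std_mean=32.0, std_sd=1.2)
consistent = abs(t) < 2.776   # two-sided 5% critical value, df = 3
```

If `consistent` is false, the new measurements fall outside the expected range of random variation and experimental error and, per the procedure above, would be repeated.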

  1. An Evaluation of Population Density Mapping and Built up Area Estimates in Sri Lanka Using Multiple Methodologies

    NASA Astrophysics Data System (ADS)

    Engstrom, R.; Soundararajan, V.; Newhouse, D.

    2017-12-01

In this study we examine how well multiple population-density and built-up-area estimates that utilize satellite data compare in Sri Lanka. The population relationship is examined at the Gram Niladhari (GN) level, the lowest administrative unit in the 2011 Sri Lankan census. The study has two spatial domains: the whole country, and a 3,500 km² sub-sample for which we have complete high-spatial-resolution imagery coverage. For both domains we examine how consistently the existing publicly available satellite-derived population measures predict population density. For the sub-sample alone, we examine how well a suite of values derived from high-spatial-resolution satellite imagery predicts population density, and how our built-up-area estimate compares to other publicly available estimates. Population measures were obtained from the Sri Lankan census and downloaded from Facebook, WorldPop, GPW, and Landscan. Percentage built-up area at the GN level was calculated from three sources: Facebook, Global Urban Footprint (GUF), and the Global Human Settlement Layer (GHSL). For the sub-sample we derived a variety of indicators from the high-spatial-resolution imagery using deep-learning convolutional neural networks, an object-oriented approach, and a non-overlapping-block spatial-feature approach. Variables calculated include cars, shadows (a proxy for building height), built-up area, buildings, roof types, roads, type of agriculture, NDVI, Pantex, Histogram of Oriented Gradients (HOG), and others. Results indicate that population estimates are accurate at the higher DS Division level but not necessarily at the GN level. Estimates from Facebook correlated well with census population (GN correlation of 0.91), but measures from GPW and WorldPop are more weakly correlated (0.64 and 0.34). Estimates of built-up area appear to be reliable. 
In the 32-DSD sub-sample, Facebook's built-up area measure is highly correlated with our built-up measure (correlation of 0.9). Preliminary regression results, based on variables selected by Lasso regression, indicate that the satellite indicators have exceptionally strong predictive power for GN-level population and population density, with out-of-sample r-squared values of 0.75 and 0.72, respectively.
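
    The Lasso-based variable selection step described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual GN-level features; the feature count, coefficients, and noise level are placeholder assumptions:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_units, n_features = 500, 12          # hypothetical GN units x image-derived features
X = rng.normal(size=(n_units, n_features))
true_coef = np.zeros(n_features)
true_coef[:3] = [2.0, -1.5, 1.0]       # only a few features truly matter
y = X @ true_coef + 0.5 * rng.normal(size=n_units)  # synthetic log population density

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)  # cross-validated L1 penalty selects variables
selected = np.flatnonzero(model.coef_) # indices of features Lasso retained
r2_out_of_sample = r2_score(y_te, model.predict(X_te))
```

    The out-of-sample r-squared on the held-out split mirrors how figures like the 0.75/0.72 quoted above would be computed.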

  2. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs compared to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE = 1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach: given fixed sample costs, the mixed design was more precise than a simple random design. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
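
    Under a stratified design like the one above, overall map accuracy and its standard error are area-weighted combinations of per-stratum accuracies. A minimal sketch using the standard stratified-random estimator (not the paper's exact mixed-design formulas) and made-up stratum counts:

```python
import math

def stratified_accuracy(strata):
    """Overall accuracy and its SE under stratified random sampling.

    strata: list of (W_h, c_h, n_h) tuples, where W_h is the stratum's
    map-area weight (weights sum to 1), c_h the number of correctly
    classified sample points, and n_h the sample size in stratum h.
    """
    p_hat = sum(W * c / n for W, c, n in strata)
    var = sum(W ** 2 * (c / n) * (1 - c / n) / (n - 1) for W, c, n in strata)
    return p_hat, math.sqrt(var)

# hypothetical example: three strata of differing accuracy
acc, se = stratified_accuracy([(0.5, 80, 100), (0.3, 60, 100), (0.2, 90, 100)])
```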

  3. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ∼70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.

  4. Chemical and kinematical properties of galactic bulge stars surrounding the stellar system Terzan 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massari, D.; Mucciarelli, A.; Ferraro, F. R.

    2014-08-20

    As part of a study aimed at determining the kinematical and chemical properties of Terzan 5, we present the first characterization of the bulge stars surrounding this puzzling stellar system. We observed 615 targets located well beyond the tidal radius of Terzan 5 and found that their radial velocity distribution is well described by a Gaussian function peaked at ⟨vrad⟩ = +21.0 ± 4.6 km s-1 with dispersion σv = 113.0 ± 2.7 km s-1. This is one of the few high-precision spectroscopic surveys of radial velocities for a large sample of bulge stars in such a low and positive latitude environment (b = +1.7°). We found no evidence of the peak at ⟨vrad⟩ ∼ +200 km s-1 found in Nidever et al. Strong contamination of many observed spectra by TiO bands prevented us from deriving the iron abundance for the entire spectroscopic sample, introducing a selection bias. The metallicity distribution was finally derived for a subsample of 112 stars in a magnitude range where the effect of the selection bias is negligible. The distribution is quite broad and roughly peaked at solar metallicity ([Fe/H] ≅ +0.05 dex) with a similar number of stars in the super-solar and sub-solar ranges. The population number ratios in different metallicity ranges agree well with those observed in other low-latitude bulge fields, suggesting (1) the possible presence of a plateau for |b| < 4° in the ratio between stars in the super-solar (0 < [Fe/H] < 0.5 dex) and sub-solar (-0.5 < [Fe/H] < 0 dex) metallicity ranges; and (2) a severe drop in the metal-poor component ([Fe/H] < -0.5) as a function of Galactic latitude.
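
    For a Gaussian radial-velocity distribution like the one above, the maximum-likelihood peak and dispersion are simply the sample mean and standard deviation, with analytic 1σ uncertainties σ/√N and σ/√(2N). A minimal sketch on synthetic velocities drawn to mimic, not reproduce, the quoted values:

```python
import numpy as np

def fit_velocity_distribution(v):
    """Gaussian peak and dispersion of a radial-velocity sample, with 1-sigma errors."""
    v = np.asarray(v, dtype=float)
    n = v.size
    mu = v.mean()
    sigma = v.std(ddof=1)
    return mu, sigma / np.sqrt(n), sigma, sigma / np.sqrt(2.0 * n)

rng = np.random.default_rng(42)
v_synth = rng.normal(21.0, 113.0, size=615)   # 615 targets, as in the survey
mu, mu_err, sigma, sigma_err = fit_velocity_distribution(v_synth)
```

    With N = 615 and σ ≈ 113 km s-1, σ/√N ≈ 4.6 km s-1, consistent with the quoted uncertainty on the peak velocity.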

  5. Eclipsing damped Ly α systems in the Sloan Digital Sky Survey Data Release 12

    NASA Astrophysics Data System (ADS)

    Fathivavsari, H.; Petitjean, P.; Jamialahmadi, N.; Khosroshahi, H. G.; Rahmani, H.; Finley, H.; Noterdaeme, P.; Pâris, I.; Srianand, R.

    2018-07-01

    We present the results of our automatic search for proximate damped Ly α absorption (PDLA) systems in quasar spectra from the Sloan Digital Sky Survey Data Release 12. We constrain our search to those PDLAs lying within 1500 km s-1 of the quasar to ensure that the broad DLA absorption trough masks most of the strong Ly α emission from the broad-line region (BLR) of the quasar. When the Ly α emission from the BLR is blocked by these so-called eclipsing DLAs, narrow Ly α emission from the host galaxy can be revealed as a narrow emission line (NEL) in the DLA trough. We define a statistical sample of 399 eclipsing DLAs with log N(H I) ≥ 21.10. We divide our statistical sample into three subsamples based on the strength of the NEL detected in the DLA trough. By studying the stacked spectra of these subsamples, we find that absorption from high-ionization species is stronger in DLAs with a stronger NEL in their absorption core. Moreover, absorption from the excited states of species such as S III is also stronger in DLAs with a stronger NEL. We also find no correlation between the luminosity of the Ly α NEL and the quasar luminosity. These observations are consistent with a scenario in which the DLAs with stronger NELs are denser and physically closer to the quasar. We propose that these eclipsing DLAs could be the product of the interaction between infalling and outflowing gas. High-resolution spectroscopic observations would be needed to shed light on the nature of these eclipsing DLAs.
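
    Stacking the spectra of a subsample, as done above to compare absorption strengths, amounts to normalizing each spectrum and taking a robust average on a common wavelength grid. A minimal sketch; a real pipeline would also shift each spectrum to a common rest frame and mask bad pixels:

```python
import numpy as np

def stack_spectra(fluxes):
    """Median-stack continuum-normalized spectra sampled on a common grid."""
    fluxes = np.asarray(fluxes, dtype=float)
    norm = fluxes / np.median(fluxes, axis=1, keepdims=True)  # crude continuum normalization
    return np.median(norm, axis=0)            # median is robust to outlier pixels

# three flat toy spectra at different flux levels sharing one absorption line
base = np.ones(10)
base[5] = 0.5
stacked = stack_spectra([2 * base, 3 * base, 4 * base])
```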

  6. Samples: The Story That They Tell and Our Role in Better Connecting Their Physical and Data Lifecycles.

    NASA Astrophysics Data System (ADS)

    Stall, S.

    2016-12-01

    The story of a sample starts with a proposal, a data management plan, and funded research. The sample is created, given a unique identifier (IGSN) and properly cared for during its journey to an appropriate storage location. Through its metadata, and publication information, the sample can become well known and shared with other researchers. Ultimately, a valuable sample can tell its entire story through its IGSN, associated ORCIDs, associated publication DOIs, and DOIs of data generated from sample analysis. This journey, or workflow, is in many ways still manual. Tools exist to generate IGSNs for the sample and subsamples. Publishers are committed to making IGSNs machine readable in their journals, but the connection back to the IGSN management system, specifically the System for Earth Sample Registration (SESAR) is not fully complete. Through encouragement of publishers, like AGU, and improved data management practices, such as those promoted by AGU's Data Management Assessment program, the complete lifecycle of a sample can and will be told through the journey it takes from creation, documentation (metadata), analysis, subsamples, publication, and sharing. Publishers and data facilities are using efforts like the Coalition for Publishing Data in the Earth and Space Sciences (COPDESS) to "implement and promote common policies and procedures for the publication and citation of data across Earth Science journals", including IGSNs. As our community improves its data management practices and publishers adopt and enforce machine readable use of unique sample identifiers, the ability to tell the entire story of a sample is close at hand. Better Data Management results in Better Science.

  7. Runaway and Pregnant: Risk Factors Associated with Pregnancy in a National Sample of Runaway/Homeless Female Adolescents

    PubMed Central

    Thompson, Sanna J.; Bender, Kimberly A.; Lewis, Carol M.; Watkins, Rita

    2009-01-01

    Purpose Homeless youth are at particularly high risk for teen pregnancy; research indicates that as many as 20% of homeless young women become pregnant. These pregnant and homeless teens lack financial resources and adequate health care, resulting in increased risk of low-birth-weight babies and high infant mortality. This study investigated individual- and family-level predictors of teen pregnancy among a national sample of runaway/homeless youth in order to better understand the needs of this vulnerable population. Methods Data from the Runaway/Homeless Youth Management Information System (RHY MIS) provided a national sample of youth seeking services at crisis shelters. A sub-sample of pregnant females and a random sub-sample (matched by age) of nonpregnant females comprised the study sample (N = 951). Chi-square and t tests identified differences between pregnant and nonpregnant runaway females; maximum likelihood logistic regression identified individual- and family-level predictors of teen pregnancy. Results Teen pregnancy was associated with being an ethnic minority, dropping out of school, being away from home for longer periods of time, having a sexually transmitted disease, and feeling abandoned by one's family. Family factors, such as living in a single-parent household and experiencing emotional abuse by one's mother, increased the odds of a teen being pregnant. Conclusions The complex problems associated with pregnant runaway/homeless teens create challenges for short-term shelter services. Suggestions are made for extending shelter services to include referrals and coordination with teen parenting programs and other systems of care. PMID:18639785
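
    The maximum-likelihood logistic regression used above relates binary risk factors to the odds of pregnancy. A minimal sketch on synthetic data; the predictor names, effect sizes, and records below are invented for illustration and are not the RHY MIS data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 951  # matches the study's sample size, but the records are synthetic
# hypothetical binary predictors: [dropped_out, single_parent_home, emotional_abuse]
X = rng.integers(0, 2, size=(n, 3)).astype(float)
logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2]   # assumed true effects
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # 1 = pregnant

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # OR > 1 means the factor raises the odds
```

    Exponentiated coefficients are the odds ratios reported in studies of this kind; here all three recovered ratios exceed 1, matching the positive effects built into the synthetic data.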

  8. Lyα-Lyman continuum connection in 3.5 ≤ z ≤ 4.3 star-forming galaxies from the VUDS survey

    NASA Astrophysics Data System (ADS)

    Marchi, F.; Pentericci, L.; Guaita, L.; Schaerer, D.; Verhamme, A.; Castellano, M.; Ribeiro, B.; Garilli, B.; Fèvre, O. Le; Amorin, R.; Bardelli, S.; Cassata, P.; Durkalec, A.; Grazian, A.; Hathi, N. P.; Lemaux, B. C.; Maccagni, D.; Vanzella, E.; Zucca, E.

    2018-06-01

    Context. To identify the galaxies responsible for the reionization of the Universe, we must rely on the investigation of the Lyman continuum (LyC) properties of z ≲ 5 star-forming galaxies, where we can still directly observe their ionizing radiation. Aims: The aim of this work is to explore the correlation between the LyC emission and some of the proposed indirect indicators of LyC radiation at z ∼ 4, such as bright Lyα emission and a compact UV continuum size. Methods: We selected a sample of 201 star-forming galaxies from the Vimos Ultra Deep Survey (VUDS) at 3.5 ≤ z ≤ 4.3 in the COSMOS, ECDFS, and VVDS-2h fields, including only those with reliable spectroscopic redshifts and a clean spectrum in the LyC range, clearly not contaminated by bright nearby sources in the same slit. For all galaxies we measured the Lyα EW, the Lyα velocity shift with respect to the systemic redshift, the Lyα spatial extension, and the UV continuum effective radius. We then selected different sub-samples according to the properties predicted to be good LyC emission indicators: in particular, we created sub-samples of galaxies with EW(Lyα) ≥ 70 Å, Lyαext ≤ 5.7 kpc, rUV ≤ 0.30 kpc and |ΔvLyα| ≤ 200 km s-1. We stacked all the galaxies in each sub-sample and measured the flux density ratio (fλ(895)/fλ(1470)), which we take as a proxy for LyC emission. We then compared these ratios to those obtained for the complementary samples. Finally, to estimate the statistical contamination from lower redshift interlopers in our samples, we performed dedicated Monte Carlo simulations using an ultradeep U-band image of the ECDFS field. Results: We find that the stacks of galaxies which are UV compact (rUV ≤ 0.30 kpc) and have bright Lyα emission (EW(Lyα) ≥ 70 Å) have much higher LyC fluxes compared to the rest of the galaxy population. These parameters appear to be good indicators of LyC radiation, in agreement with theoretical studies and previous observational works.
In addition, we find that galaxies with a small Lyα spatial extent (Lyαext ≤ 5.7 kpc) have higher LyC flux than the rest of the population. Such a correlation had never been analysed before and seems even stronger than the correlations with high EW(Lyα) and small rUV. These results assume that the stacks from all sub-samples present the same statistical contamination from lower redshift interlopers. If we subtract from the flux density ratios (fλ(895)/fλ(1470)) of the significant sub-samples the statistical contamination from low redshift interlopers obtained with the simulations, we find that these samples contain real LyC leaking flux with a very high probability, although the true average escape fractions remain very uncertain. Conclusions: Our work indicates that galaxies with very high EW(Lyα), small Lyαext and small rUV are very likely the best candidates to show Lyman continuum radiation at z ∼ 4 and could therefore be the galaxies that have contributed most to reionization. Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, under Large Program 185.A-0791.

  9. Robotic Observatory System Design-Specification Considerations for Achieving Long-Term Sustainable Precision Performance

    NASA Astrophysics Data System (ADS)

    Wray, J. D.

    2003-05-01

    The robotic observatory telescope must point precisely on the target object, and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g. seeing) control systems and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure and slack or backlash with respect to focal stability, optical alignment and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished through combining appropriate mechanical and dynamical system characteristics with a full-time real-time telescope mount model feedback computer control system. 
Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking operation via real-time feedback between precision mount model and axis mounted encoders; use of monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air to outside dome; and the principal design criteria: maximizing elastic repeatability while minimizing slack, plastic deformation and hysteresis to facilitate long-term repeatably precise pointing and tracking performance.

  10. Thermal Demagnetization of Mare Basalts 10017 and 10020

    NASA Astrophysics Data System (ADS)

    Suavet, C. R.; Weiss, B. P.; Grove, T. L.

    2012-12-01

    Paleomagnetic studies of lunar rocks 76535 (Garrick-Bethell et al., 2009), 10020 (Shea et al., 2012) and 10017 (Suavet et al., 2012) have shown that the Moon had an active dynamo field at 4.2 Ga, 3.7 Ga, and 3.6 Ga, respectively. These studies were carried out using alternating field (AF) demagnetization, which has the advantage of avoiding sample alteration by heating, but the paleointensity is only constrained to within a factor of 3-5 due to uncertainties in the calibration factor between thermoremanent magnetization (TRM) and anhysteretic remanent magnetization (ARM). Thermal demagnetization is expected to give better estimates of the paleofields. Thellier-Thellier paleointensity experiments on 10017 under reducing atmosphere (Sugiura et al., 1978) and in vacuum (Hoffman et al., 1979) were attempted in the past. Almost full demagnetization was observed after heating from 200 to 300°C, which was interpreted as alteration of magnetic carriers or interaction effects between troilite and kamacite (Pearce et al., 1976). For both experiments, the oxygen fugacity was poorly constrained due to methodological flaws: no oxygen was introduced into the system and further reduction of the rocks could not be mitigated. We designed a controlled-atmosphere apparatus to conduct thermal demagnetization in a mixture of H2 and CO2. Gas mixtures were calibrated by exploring the stability of iron, magnetite, and wustite at temperatures in the range of 300-800°C. We thermally demagnetized the natural remanent magnetization (NRM) of subsamples of 10017 and 10020 in a gas mixture with an oxygen fugacity ~1 log unit below the iron-wustite buffer (Sato et al., 1973). After heating from 200 to 250°C, the magnetization was reduced to 10% of the NRM for 10017 and 20% of the NRM for 10020, and the magnetization directions became unstable.
A subsample of 10020 was given a 0.1 mT ARM, and thermally demagnetized up to 250°C: the magnetization was reduced to 38% of the ARM and the direction became unstable. The fact that the NRM and the ARM have similar behavior upon heating confirms that the magnetization is a TRM. We compared the AF demagnetization of a 0.1 mT ARM before and after heating a subsample of 10017 up to 250°C: there was no change in the coercivity spectrum, which shows that the demagnetization was not due to alteration of the magnetic carriers. The thermal demagnetization of a subsample of 10017 with a saturation isothermal magnetization (SIRM) does not show a Curie point at 250°C. Therefore, the low-temperature demagnetization of mare basalts 10017 and 10020 is real. It could be caused by a defect magnetization of troilite, interaction between troilite and kamacite, presence of cohenite, or an unknown phenomenon.

  11. The EChO science case

    NASA Astrophysics Data System (ADS)

    Tinetti, Giovanna; Drossart, Pierre; Eccleston, Paul; Hartogh, Paul; Isaak, Kate; Linder, Martin; Lovis, Christophe; Micela, Giusi; Ollivier, Marc; Puig, Ludovic; Ribas, Ignasi; Snellen, Ignas; Swinyard, Bruce; Allard, France; Barstow, Joanna; Cho, James; Coustenis, Athena; Cockell, Charles; Correia, Alexandre; Decin, Leen; de Kok, Remco; Deroo, Pieter; Encrenaz, Therese; Forget, Francois; Glasse, Alistair; Griffith, Caitlin; Guillot, Tristan; Koskinen, Tommi; Lammer, Helmut; Leconte, Jeremy; Maxted, Pierre; Mueller-Wodarg, Ingo; Nelson, Richard; North, Chris; Pallé, Enric; Pagano, Isabella; Piccioni, Guseppe; Pinfield, David; Selsis, Franck; Sozzetti, Alessandro; Stixrude, Lars; Tennyson, Jonathan; Turrini, Diego; Zapatero-Osorio, Mariarosa; Beaulieu, Jean-Philippe; Grodent, Denis; Guedel, Manuel; Luz, David; Nørgaard-Nielsen, Hans Ulrik; Ray, Tom; Rickman, Hans; Selig, Avri; Swain, Mark; Banaszkiewicz, Marek; Barlow, Mike; Bowles, Neil; Branduardi-Raymont, Graziella; du Foresto, Vincent Coudé; Gerard, Jean-Claude; Gizon, Laurent; Hornstrup, Allan; Jarchow, Christopher; Kerschbaum, Franz; Kovacs, Géza; Lagage, Pierre-Olivier; Lim, Tanya; Lopez-Morales, Mercedes; Malaguti, Giuseppe; Pace, Emanuele; Pascale, Enzo; Vandenbussche, Bart; Wright, Gillian; Ramos Zapata, Gonzalo; Adriani, Alberto; Azzollini, Ruymán; Balado, Ana; Bryson, Ian; Burston, Raymond; Colomé, Josep; Crook, Martin; Di Giorgio, Anna; Griffin, Matt; Hoogeveen, Ruud; Ottensamer, Roland; Irshad, Ranah; Middleton, Kevin; Morgante, Gianluca; Pinsard, Frederic; Rataj, Mirek; Reess, Jean-Michel; Savini, Giorgio; Schrader, Jan-Rutger; Stamper, Richard; Winter, Berend; Abe, L.; Abreu, M.; Achilleos, N.; Ade, P.; Adybekian, V.; Affer, L.; Agnor, C.; Agundez, M.; Alard, C.; Alcala, J.; Allende Prieto, C.; Alonso Floriano, F. J.; Altieri, F.; Alvarez Iglesias, C. A.; Amado, P.; Andersen, A.; Aylward, A.; Baffa, C.; Bakos, G.; Ballerini, P.; Banaszkiewicz, M.; Barber, R. J.; Barrado, D.; Barton, E. 
J.; Batista, V.; Bellucci, G.; Belmonte Avilés, J. A.; Berry, D.; Bézard, B.; Biondi, D.; Błęcka, M.; Boisse, I.; Bonfond, B.; Bordé, P.; Börner, P.; Bouy, H.; Brown, L.; Buchhave, L.; Budaj, J.; Bulgarelli, A.; Burleigh, M.; Cabral, A.; Capria, M. T.; Cassan, A.; Cavarroc, C.; Cecchi-Pestellini, C.; Cerulli, R.; Chadney, J.; Chamberlain, S.; Charnoz, S.; Christian Jessen, N.; Ciaravella, A.; Claret, A.; Claudi, R.; Coates, A.; Cole, R.; Collura, A.; Cordier, D.; Covino, E.; Danielski, C.; Damasso, M.; Deeg, H. J.; Delgado-Mena, E.; Del Vecchio, C.; Demangeon, O.; De Sio, A.; De Wit, J.; Dobrijévic, M.; Doel, P.; Dominic, C.; Dorfi, E.; Eales, S.; Eiroa, C.; Espinoza Contreras, M.; Esposito, M.; Eymet, V.; Fabrizio, N.; Fernández, M.; Femenía Castella, B.; Figueira, P.; Filacchione, G.; Fletcher, L.; Focardi, M.; Fossey, S.; Fouqué, P.; Frith, J.; Galand, M.; Gambicorti, L.; Gaulme, P.; García López, R. J.; Garcia-Piquer, A.; Gear, W.; Gerard, J.-C.; Gesa, L.; Giani, E.; Gianotti, F.; Gillon, M.; Giro, E.; Giuranna, M.; Gomez, H.; Gomez-Leal, I.; Gonzalez Hernandez, J.; González Merino, B.; Graczyk, R.; Grassi, D.; Guardia, J.; Guio, P.; Gustin, J.; Hargrave, P.; Haigh, J.; Hébrard, E.; Heiter, U.; Heredero, R. L.; Herrero, E.; Hersant, F.; Heyrovsky, D.; Hollis, M.; Hubert, B.; Hueso, R.; Israelian, G.; Iro, N.; Irwin, P.; Jacquemoud, S.; Jones, G.; Jones, H.; Justtanont, K.; Kehoe, T.; Kerschbaum, F.; Kerins, E.; Kervella, P.; Kipping, D.; Koskinen, T.; Krupp, N.; Lahav, O.; Laken, B.; Lanza, N.; Lellouch, E.; Leto, G.; Licandro Goldaracena, J.; Lithgow-Bertelloni, C.; Liu, S. J.; Lo Cicero, U.; Lodieu, N.; Lognonné, P.; Lopez-Puertas, M.; Lopez-Valverde, M. 
A.; Lundgaard Rasmussen, I.; Luntzer, A.; Machado, P.; MacTavish, C.; Maggio, A.; Maillard, J.-P.; Magnes, W.; Maldonado, J.; Mall, U.; Marquette, J.-B.; Mauskopf, P.; Massi, F.; Maurin, A.-S.; Medvedev, A.; Michaut, C.; Miles-Paez, P.; Montalto, M.; Montañés Rodríguez, P.; Monteiro, M.; Montes, D.; Morais, H.; Morales, J. C.; Morales-Calderón, M.; Morello, G.; Moro Martín, A.; Moses, J.; Moya Bedon, A.; Murgas Alcaino, F.; Oliva, E.; Orton, G.; Palla, F.; Pancrazzi, M.; Pantin, E.; Parmentier, V.; Parviainen, H.; Peña Ramírez, K. Y.; Peralta, J.; Perez-Hoyos, S.; Petrov, R.; Pezzuto, S.; Pietrzak, R.; Pilat-Lohinger, E.; Piskunov, N.; Prinja, R.; Prisinzano, L.; Polichtchouk, I.; Poretti, E.; Radioti, A.; Ramos, A. A.; Rank-Lüftinger, T.; Read, P.; Readorn, K.; Rebolo López, R.; Rebordão, J.; Rengel, M.; Rezac, L.; Rocchetto, M.; Rodler, F.; Sánchez Béjar, V. J.; Sanchez Lavega, A.; Sanromá, E.; Santos, N.; Sanz Forcada, J.; Scandariato, G.; Schmider, F.-X.; Scholz, A.; Scuderi, S.; Sethenadh, J.; Shore, S.; Showman, A.; Sicardy, B.; Sitek, P.; Smith, A.; Soret, L.; Sousa, S.; Stiepen, A.; Stolarski, M.; Strazzulla, G.; Tabernero, H. M.; Tanga, P.; Tecsa, M.; Temple, J.; Terenzi, L.; Tessenyi, M.; Testi, L.; Thompson, S.; Thrastarson, H.; Tingley, B. W.; Trifoglio, M.; Martín Torres, J.; Tozzi, A.; Turrini, D.; Varley, R.; Vakili, F.; de Val-Borro, M.; Valdivieso, M. L.; Venot, O.; Villaver, E.; Vinatier, S.; Viti, S.; Waldmann, I.; Waltham, D.; Ward-Thompson, D.; Waters, R.; Watkins, C.; Watson, D.; Wawer, P.; Wawrzaszk, A.; White, G.; Widemann, T.; Winek, W.; Wiśniowski, T.; Yelle, R.; Yung, Y.; Yurchenko, S. N.

    2015-12-01

    The discovery of almost two thousand exoplanets has revealed an unexpectedly diverse planet population. We see gas giants in few-day orbits, whole multi-planet systems within the orbit of Mercury, and new populations of planets with masses between that of the Earth and Neptune—all unknown in the Solar System. Observations to date have shown that our Solar System is certainly not representative of the general population of planets in our Milky Way. The key science questions that urgently need addressing are therefore: What are exoplanets made of? Why are planets as they are? How do planetary systems work and what causes the exceptional diversity observed as compared to the Solar System? The EChO (Exoplanet Characterisation Observatory) space mission was conceived to take up the challenge to explain this diversity in terms of formation, evolution, internal structure and planet and atmospheric composition. This requires in-depth spectroscopic knowledge of the atmospheres of a large and well-defined planet sample for which precise physical, chemical and dynamical information can be obtained. In order to fulfil this ambitious scientific program, EChO was designed as a dedicated survey mission for transit and eclipse spectroscopy capable of observing a large, diverse and well-defined planet sample within its 4-year mission lifetime. The transit and eclipse spectroscopy method, whereby the signal from the star and planet are differentiated using knowledge of the planetary ephemerides, allows us to measure atmospheric signals from the planet at levels of at least 10-4 relative to the star. This can only be achieved in conjunction with a carefully designed stable payload and satellite platform. It is also necessary to provide broad instantaneous wavelength coverage to detect as many molecular species as possible, to probe the thermal structure of the planetary atmospheres and to correct for the contaminating effects of the stellar photosphere. 
This requires wavelength coverage of at least 0.55 to 11 μm with a goal of covering from 0.4 to 16 μm. Only modest spectral resolving power is needed, with R ~ 300 for wavelengths less than 5 μm and R ~ 30 for wavelengths greater than this. The transit spectroscopy technique means that no spatial resolution is required. A telescope collecting area of about 1 m2 is sufficiently large to achieve the necessary spectro-photometric precision: for the Phase A study a 1.13 m2 telescope, diffraction limited at 3 μm, has been adopted. Placing the satellite at L2 provides a cold and stable thermal environment as well as a large field of regard to allow efficient time-critical observation of targets randomly distributed over the sky. EChO has been conceived to achieve a single goal: exoplanet spectroscopy. The spectral coverage and signal-to-noise to be achieved by EChO, thanks to its high stability and dedicated design, would be a game changer by allowing atmospheric composition to be measured with unparalleled precision and accuracy: at least a factor 10 more precise and a factor 10 to 1000 more accurate than current observations. This would enable the detection of molecular abundances three orders of magnitude lower than currently possible and a fourfold increase from the handful of molecules detected to date. Combining these data with estimates of planetary bulk compositions from accurate measurements of their radii and masses would allow degeneracies associated with planetary interior modelling to be broken, giving unique insight into the interior structure and elemental abundances of these alien worlds. EChO would allow scientists to study exoplanets both as a population and as individuals. The mission can target super-Earths, Neptune-like, and Jupiter-like planets, in the very hot to temperate zones (planet temperatures of 300-3000 K) of F to M-type host stars. The EChO core science would be delivered by a three-tier survey.
The EChO Chemical Census: This is a broad survey of a few-hundred exoplanets, which allows us to explore the spectroscopic and chemical diversity of the exoplanet population as a whole. The EChO Origin: This is a deep survey of a subsample of tens of exoplanets for which significantly higher signal to noise and spectral resolution spectra can be obtained to explain the origin of the exoplanet diversity (such as formation mechanisms, chemical processes, atmospheric escape). The EChO Rosetta Stones: This is an ultra-high accuracy survey targeting a subsample of select exoplanets. These will be the bright "benchmark" cases for which a large number of measurements would be taken to explore temporal variations, and to obtain two and three dimensional spatial information on the atmospheric conditions through eclipse-mapping techniques. If EChO were launched today, the exoplanets currently observed would be sufficient to provide a large and diverse sample. The Chemical Census survey would consist of > 160 exoplanets with a range of planetary sizes, temperatures, orbital parameters and stellar host properties. Additionally, over the next 10 years, several new ground- and space-based transit photometric surveys and missions will come on-line (e.g. NGTS, CHEOPS, TESS, PLATO), which will specifically focus on finding bright, nearby systems. The current rapid rate of discovery would allow the target list to be further optimised in the years prior to EChO's launch and enable the atmospheric characterisation of hundreds of planets.

  12. High precision locating control system based on VCM for Talbot lithography

    NASA Astrophysics Data System (ADS)

    Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song

    2016-10-01

    Aiming at the high precision and efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we build a mathematical model of the VCM and analyze its motion characteristics. A double closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop is handled by the driver, in order to achieve rapid following of the system current. The position loop is implemented on a digital signal processor (DSP), with position feedback provided by high precision linear scales. Feedforward control and Proportion Integration Differentiation (PID) position feedback control were applied in order to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrate that the performance of the Z-direction gantry is markedly improved, with high precision, quick response, strong real-time behavior, and easy extension to higher precision.
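
    The position-loop strategy described, PID feedback plus feedforward to compensate dynamic lag, can be sketched in a few lines. The gains and update structure below are illustrative assumptions, not the paper's tuned DSP implementation:

```python
class PIDFeedforward:
    """Discrete-time PID position controller with velocity feedforward."""

    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, setpoint_rate):
        err = setpoint - measured                 # position error from the linear scale
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        # feedforward on the commanded velocity reduces dynamic (tracking) lag
        return (self.kp * err + self.ki * self.integral
                + self.kd * deriv + self.kff * setpoint_rate)

pid = PIDFeedforward(kp=1.0, ki=0.0, kd=0.0, kff=2.0, dt=0.1)
u = pid.update(1.0, 0.0, 0.5)   # P term 1.0 * 1.0 plus feedforward 2.0 * 0.5
```

    In the double-loop scheme, the output `u` would be sent as the command to the driver, whose inner current loop then tracks it.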

  13. A System for Incubations at High Gas Partial Pressure

    PubMed Central

    Sauer, Patrick; Glombitza, Clemens; Kallmeyer, Jens

    2012-01-01

    High pressure is a key feature of deep subsurface environments. The high partial pressure of dissolved gases plays an important role in microbial metabolism, because the thermodynamic feasibility of many reactions depends on the concentration of reactants. For gases, this is controlled by their partial pressure, which can exceed 1 MPa at in situ conditions. Therefore, high hydrostatic pressure alone is not sufficient to recreate true deep subsurface in situ conditions; the partial pressure of dissolved gases has to be controlled as well. We developed an incubation system that allows incubations at hydrostatic pressures up to 60 MPa, temperatures up to 120°C, and high gas partial pressures. The composition and partial pressure of gases can be manipulated during the experiment. To keep costs low, the system is mainly made from off-the-shelf components, with only very few custom-made parts. A flexible and inert PVDF (polyvinylidene fluoride) incubator sleeve, which is almost impermeable to gases, holds the sample and separates it from the pressure fluid. The flexibility of the incubator sleeve allows sub-sampling of the medium without loss of pressure. Experiments can be run in both static and flow-through mode. The incubation system described here is usable for versatile purposes: not only the incubation of microorganisms and determination of growth rates, but also chemical degradation or extraction experiments under high gas saturation, e.g., fluid–gas–rock interactions related to carbon dioxide sequestration. As an application of the system, we extracted organic compounds from sub-bituminous coal using H2O as well as an H2O–CO2 mixture at elevated temperature (90°C) and pressure (5 MPa). Subsamples were taken at different time points during the incubation and analyzed by ion chromatography. Furthermore, we demonstrated the applicability of the system for studies of microbial activity, using samples from the Isis mud volcano. We detected an increase in the sulfate reduction rate upon the addition of methane to the sample. PMID:22347218

  14. Morphology of U3O8 materials following storage under controlled conditions of temperature and relative humidity

    DOE PAGES

    Tamasi, Alison L.; Cash, Leigh J.; Mullen, William Tyler; ...

    2016-07-05

    Changes in the visual characteristics and surface morphology of uranium oxides following storage under different conditions of temperature and relative humidity may provide insight into the history of an unknown sample. Sub-samples of three α-U3O8 materials, one phase-pure and two phase-impure, were stored under controlled conditions for two years. We used scanning electron microscopy to image the oxides before and after storage, and a morphology lexicon to characterize the images. Temporal changes in morphology were observed in some sub-samples, and the changes were greatest following exposure to high relative humidity.

  15. Remanent magnetic properties of unbrecciated eucrites

    NASA Technical Reports Server (NTRS)

    Cisowski, Stanley M.

    1991-01-01

    This study examines the remanent magnetic properties of five unbrecciated eucrites, ranging from the coarse-grained cumulate Moore County to the quenched melt rock ALH 81001 in order to assess the strength of the magnetic field associated with their parent body during their formation. Two of the meteorites are judged as unlikely to have preserved their primary thermal remanence because of large variations in subsample remanence intensity and direction (Ibitira), and lack of NRM resistance to AF and thermal demagnetization (PCA 82502). The lack of a strong (greater than 0.01 mT) magnetizing field during their cooling on the eucrite parent body is inferred from the low normalized NRM intensities for subsamples of ALH 81001 and Yamato 791195.

  16. The observational case for Jupiter being a typical massive planet.

    PubMed

    Lineweaver, Charles H; Grether, Daniel

    2002-01-01

    We identify a subsample of the recently detected extrasolar planets that is minimally affected by the selection effects of the Doppler detection method. With a simple analysis we quantify trends in the surface density of this subsample in the period-Msin(i) plane. A modest extrapolation of these trends puts Jupiter in the most densely occupied region of this parameter space, thus indicating that Jupiter is a typical massive planet rather than an outlier. Our analysis suggests that Jupiter is more typical than indicated by previous analyses. For example, instead of MJup mass exoplanets being twice as common as 2 MJup exoplanets, we find they are three times as common.

  17. Precision medicine and molecular imaging: new targeted approaches toward cancer therapeutic and diagnosis.

    PubMed

    Ghasemi, Mojtaba; Nabipour, Iraj; Omrani, Abdolmajid; Alipour, Zeinab; Assadi, Majid

    2016-01-01

    This paper reviews the importance and role of precision medicine and molecular imaging technologies in cancer diagnosis and treatment. Precision medicine is progressively becoming a hot topic in all disciplines related to biomedical investigation and has the capacity to become the paradigm for clinical practice. The future of medicine lies in early diagnosis and individually tailored treatments, a concept that has been named precision medicine, i.e. delivering the right treatment to the right patient at the right time. Molecular imaging is quickly being recognized as a tool with the potential to ameliorate every aspect of cancer treatment. Meanwhile, emerging high-throughput technologies such as omics techniques and systems approaches have generated a paradigm shift for biological systems in advanced life science research. In this review, we describe precision medicine, the difference between precision medicine and personalized medicine, the precision medicine initiative, systems biology/medicine approaches (such as genomics, radiogenomics, transcriptomics, proteomics, and metabolomics), P4 medicine, the relationship between systems biology/medicine approaches and precision medicine, and molecular imaging modalities and their utility in cancer treatment and diagnosis. Accordingly, precision medicine and molecular imaging will enable us to accelerate and improve cancer management in future medicine.

  19. SEEDS Moving Group Status Update

    NASA Technical Reports Server (NTRS)

    McElwain, Michael

    2011-01-01

    I will summarize the current status of the SEEDS Moving Group category and describe the importance of this sub-sample for the entire SEEDS survey. This presentation will include an analysis of the sensitivity for the Moving Groups, with a general comparison to the other sub-categories. I will discuss the future impact of the Subaru SCExAO system for these targets and the advantage of using a specialized integral field spectrograph. Finally, I will present the impact of a pupil grid mask used to produce fiducial spots in the focal plane that can serve for both photometry and astrometry.

  20. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described that enables real-time accountability of the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are used to characterize and analyze real-time mechanical and control performance during operation.

  1. Platform Precision Autopilot Overview and Mission Performance

    NASA Technical Reports Server (NTRS)

    Strovers, Brian K.; Lee, James A.

    2009-01-01

    The Platform Precision Autopilot is an instrument landing system-interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Unmanned Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.

  2. Platform Precision Autopilot Overview and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Lin, V.; Strovers, B.; Lee, J.; Beck, R.

    2008-01-01

    The Platform Precision Autopilot is an instrument landing system interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Uninhabited Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.

  3. Finding medical care for colorectal cancer symptoms: experiences among those facing financial barriers.

    PubMed

    Thomson, Maria D; Siminoff, Laura A

    2015-02-01

    Financial barriers can substantially delay medical care seeking. Using patient narratives provided by 252 colorectal cancer patients, we explored the experience of financial barriers to care seeking. Of the 252 patients interviewed, 84 identified financial barriers as a significant hurdle to obtaining health care for their colorectal cancer symptoms. Using verbatim transcripts of the narratives collected from patients between 2008 and 2010, three themes were identified: insurance status as a barrier (discussed by n = 84; 100% of the subsample), finding medical care (discussed by n = 30; 36% of the subsample), and insurance companies as barriers (discussed by n = 7; 8% of the subsample). Our analysis revealed that insurance status is more nuanced than the categories insured/uninsured and differentially affects how patients attempt to secure health care. While barriers to medical care for the uninsured have been well documented, the experiences of those who are underinsured are less well understood. To improve outcomes in these patients, it is critical to understand how financial barriers to medical care are manifested. Even with the anticipated changes of the Affordable Care Act, it remains important to understand how perceived financial barriers may influence patient behaviors, particularly among those who have limited health care options due to insufficient health insurance coverage. © 2014 Society for Public Health Education.

  4. Examination of the factor structure of the Schizotypal Personality Questionnaire among British and Trinidadian adults.

    PubMed

    Barron, David; Swami, Viren; Towell, Tony; Hutchinson, Gerard; Morgan, Kevin D

    2015-01-01

    Much debate in schizotypal research has centred on the factor structure of the Schizotypal Personality Questionnaire (SPQ), with research variously showing higher-order dimensionality consisting of two to seven dimensions. In addition, cross-cultural support for the stability of those factors remains limited. Here, we examined the factor structure of the SPQ among British and Trinidadian adults. Participants from a White British subsample (n = 351) resident in the UK and from an African Caribbean subsample (n = 284) resident in Trinidad completed the SPQ. The higher-order factor structure of the SPQ was analysed through confirmatory factor analysis, followed by multiple-group analysis for the model of best fit. Between-group differences for sex and ethnicity were investigated in relation to the higher-order domains using multivariate analysis of variance. The model of best fit was the four-factor structure, which demonstrated measurement invariance across groups. Additionally, the data showed an adequate fit for two alternative models: (a) a three-factor model and (b) a modified four-factor model. The British subsample had significantly higher scores across all domains than the Trinidadian group, and men scored significantly higher on the disorganised domain than women. The four-factor structure received confirmatory support and, importantly, support for use with populations varying in ethnicity and culture.

  5. Precision mechatronics based on high-precision measuring and positioning systems and machines

    NASA Astrophysics Data System (ADS)

    Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert

    2007-06-01

    Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with the smallest possible uncertainties is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.

  6. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.

    2007-09-01

    We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene, with sub-millimetre range precision. The system is based on indirect time-of-flight measurements, heterodyning intensity-modulated illumination with a gain-modulated, intensified digital video camera. Sub-millimetre precision out to beyond 5 m, and 2 mm precision out to 12 m, have been achieved. In this paper, we describe the new sub-millimetre-class range imaging system in detail, and review the important aspects that have been instrumental in achieving high-precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
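
The range and ambiguity relationships underlying such indirect time-of-flight systems are simple to state: range is proportional to the measured phase of the modulation envelope, and it repeats every half modulation wavelength. A minimal sketch follows; the modulation frequencies are illustrative, not the paper's actual values:

```python
# The range-from-phase and ambiguity-interval relations used by indirect
# time-of-flight (homodyne/heterodyne) range imagers. The modulation
# frequencies below are illustrative, not the system's actual values.
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase_rad, f_mod_hz):
    """Range implied by the phase shift of the intensity-modulation envelope."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_interval(f_mod_hz):
    """Unambiguous range: phase wraps every 2*pi, i.e. every c/(2*f_mod)."""
    return C / (2.0 * f_mod_hz)

# Two-frequency disambiguation: measurements at two nearby modulation
# frequencies extend the unambiguous range to that of their difference.
f1, f2 = 80e6, 72e6
print(round(ambiguity_interval(f1), 3))       # → 1.874 (m, at 80 MHz)
print(round(ambiguity_interval(f1 - f2), 1))  # → 18.7 (m, at the 8 MHz beat)
```

This is why combining modulation frequencies, as in the multi-frequency disambiguation methods such papers describe, trades a short unambiguous interval for a much longer effective one.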

  7. The evolution in the stellar mass of brightest cluster galaxies over the past 10 billion years

    NASA Astrophysics Data System (ADS)

    Bellstedt, Sabine; Lidman, Chris; Muzzin, Adam; Franx, Marijn; Guatelli, Susanna; Hill, Allison R.; Hoekstra, Henk; Kurinsky, Noah; Labbe, Ivo; Marchesini, Danilo; Marsan, Z. Cemile; Safavi-Naeini, Mitra; Sifón, Cristóbal; Stefanon, Mauro; van de Sande, Jesse; van Dokkum, Pieter; Weigel, Catherine

    2016-08-01

    Using a sample of 98 galaxy clusters recently imaged in the near-infrared with the European Southern Observatory (ESO) New Technology Telescope, WIYN telescope and William Herschel Telescope, supplemented with 33 clusters from the ESO archive, we measure how the stellar mass of the most massive galaxies in the universe, namely brightest cluster galaxies (BCGs), increases with time. Most of the BCGs in this new sample lie in the redshift range 0.2 < z < 0.6, which has been noted in recent works to mark an epoch over which the growth in the stellar mass of BCGs stalls. From this sample of 132 clusters, we create a subsample of 102 systems that includes only those clusters that have estimates of the cluster mass. We combine the BCGs in this subsample with BCGs from the literature, and find that the growth in stellar mass of BCGs from 10 billion years ago to the present epoch is broadly consistent with recent semi-analytic and semi-empirical models. As in other recent studies, tentative evidence indicates that the stellar mass growth rate of BCGs may be slowing in the past 3.5 billion years. Further work in collecting larger samples, and in better comparing observations with theory using mock images, is required if a more detailed comparison between the models and the data is to be made.

  8. Tank 241-AP-105, cores 208, 209 and 210, analytical results for the final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuzum, J.L.

    1997-10-24

    This document is the final laboratory report for Tank 241-AP-105. Push mode core segments were removed from Risers 24 and 28 between July 2, 1997, and July 14, 1997. Segments were received and extruded at 222-S Laboratory. Analyses were performed in accordance with Tank 241-AP-105 Push Mode Core Sampling and Analysis Plan (TSAP) (Hu, 1997) and Tank Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT), differential scanning calorimetry (DSC) analysis, or total organic carbon (TOC) analysis exceeded the notification limits as stated in the TSAP and DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group, and are not considered in this report. Appearance and Sample Handling: Two cores, each consisting of four segments, were expected from Tank 241-AP-105. Three cores were sampled, and complete cores were not obtained. The TSAP states that core samples should be transported to the laboratory within three calendar days from the time each segment is removed from the tank. This requirement was not met for all cores. Attachment 1 illustrates subsamples generated in the laboratory for analysis and identifies their sources. This reference also relates tank farm identification numbers to their corresponding 222-S Laboratory sample numbers.

  9. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    DTIC Science & Technology

    2015-03-01

    Thesis by Kyle P. Werner, 2Lt, USAF (AFIT-ENG-MS-15-M-048), presented to the faculty of the Department of Electrical and Computer Engineering. Approved for public release; distribution unlimited.

  10. Validation of life-charts documented with the personal life-chart app - a self-monitoring tool for bipolar disorder.

    PubMed

    Schärer, Lars O; Krienke, Ute J; Graf, Sandra-Mareike; Meltzer, Katharina; Langosch, Jens M

    2015-03-14

    Long-term monitoring in bipolar affective disorders constitutes an important therapeutic and preventive method. The present study examines the validity of the Personal Life-Chart App (PLC App) in both German and English. This App is based on the National Institute of Mental Health's Life-Chart Method, the de facto standard for long-term monitoring in the treatment of bipolar disorders. Methods were largely replicated from two previous Life-Chart studies. The participants documented Life-Charts with the PLC App on a daily basis. Clinicians assessed manic and depressive symptoms in clinical interviews using the Inventory of Depressive Symptomatology, clinician-rated (IDS-C) and the Young Mania Rating Scale (YMRS), on average once a month. Spearman correlations of the total scores of the IDS-C and YMRS were calculated with both the Life-Chart functional impairment rating and the mood rating documented with the PLC App. 44 subjects used the PLC App in German and 10 subjects used it in English. 118 clinical interviews from the German sub-sample and 97 from the English sub-sample were analysed separately. The results in both sub-samples are similar to previous Life-Chart validation studies. Again, statistically significant high correlations were found between the Life-Chart function rating assigned through the PLC App and well-established observer-rated methods. Again, correlations were weaker for the Life-Chart mood rating than for the Life-Chart functional impairment rating. No relevant correlation was found between the Life-Chart mood rating and the YMRS in the German sub-sample. This study gives further evidence that the Life-Chart method is a valid tool for the recognition of both manic and depressive episodes. Documenting Life-Charts with the PLC App (English and German) does not seem to impair the validity of patient ratings.
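
The core of such a validation analysis is a rank correlation between app ratings and clinician scales. A minimal, self-contained sketch with invented scores (not study data) is:

```python
# Rank (Spearman) correlation between app-based Life-Chart impairment
# ratings and clinician IDS-C scores. The eight paired scores below are
# invented for illustration; they are not study data.

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

ids_c  = [10, 25, 40, 5, 33, 18, 27, 45]  # clinician depression scores
lc_imp = [1, 2, 3, 0, 3, 2, 2, 4]         # Life-Chart impairment ratings
print(round(spearman(ids_c, lc_imp), 2))  # → 0.97
```

Spearman's rho is the natural choice here because the app's ordinal ratings need only a monotone, not linear, relation to the clinician scales.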

  11. Agility performance in high-level junior basketball players: the predictive value of anthropometrics and power qualities.

    PubMed

    Sisic, Nedim; Jelicic, Mario; Pehar, Miran; Spasic, Miodrag; Sekulic, Damir

    2016-01-01

    In basketball, anthropometric status is an important factor in identifying and selecting talent, while agility is one of the most vital motor performances. The aim of this investigation was to evaluate the influence of anthropometric variables and power capacities on different preplanned agility performances. The participants were 92 high-level, junior-age basketball players (16-17 years of age; 187.6±8.72 cm body height, 78.40±12.26 kg body mass), randomly divided into a validation and a cross-validation subsample. The predictor set consisted of 16 anthropometric variables and three tests of power capacities (Sargent jump, broad jump, and medicine-ball throw). The criteria were three tests of agility: a T-shape test, a zig-zag test, and a test of running with a 180-degree turn (T180). Forward stepwise multiple regressions were calculated for the validation subsample and then cross-validated. Cross-validation included correlations between observed and predicted scores, dependent-samples t-tests between predicted and observed scores, and Bland-Altman graphics. Analysis of variance identified centres as advanced in most of the anthropometric indices and the medicine-ball throw (all at P<0.05), with no significant between-position differences for the other studied motor performances. Multiple regression models originally calculated for the validation subsample were then cross-validated and confirmed for the zig-zag test (R of 0.71 and 0.72 for the validation and cross-validation subsamples, respectively). Anthropometrics were not strongly related to agility performance, but leg length was found to be negatively associated with performance in basketball-specific agility. Power capacities were confirmed to be an important factor in agility. The results highlight the importance of sport-specific tests when studying preplanned agility performance in basketball. Improvement in power capacities will probably result in improved agility in basketball athletes, while anthropometric indices should be used to identify those athletes who can achieve superior agility performance.
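
The validation/cross-validation procedure described above can be sketched as follows: fit a regression on one subsample, then check observed-versus-predicted agreement on the held-out subsample. The single-predictor fit and all numbers are invented for illustration (the study used multi-predictor stepwise regression):

```python
# Hedged sketch of the validation / cross-validation logic: fit a simple
# regression on one subsample, then check observed-vs-predicted agreement
# on the held-out subsample. All numbers are invented for illustration.

def fit_ols(x, y):
    """Least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x) *
           sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

# "validation" subsample: broad jump (cm) vs. zig-zag time (s), invented
jump_v = [201, 215, 230, 245, 260, 224, 238, 252]
zig_v  = [6.4, 6.2, 6.0, 5.8, 5.6, 6.1, 5.9, 5.7]
a, b = fit_ols(jump_v, zig_v)

# held-out "cross-validation" subsample
jump_c = [210, 228, 240, 255]
zig_c  = [6.3, 6.0, 5.85, 5.65]
pred = [a + b * j for j in jump_c]
print(pearson(pred, zig_c) > 0.9)   # → True: predictions track observations
```

The paper additionally compared predicted and observed means (dependent-samples t-test) and their agreement limits (Bland-Altman); the correlation step above is the first of those three checks.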

  12. Evaluating autonomous acoustic surveying techniques for rails in tidal marshes

    USGS Publications Warehouse

    Stiffler, Lydia L.; Anderson, James T.; Katzner, Todd

    2018-01-01

    There is a growing interest toward the use of autonomous recording units (ARUs) for acoustic surveying of secretive marsh bird populations. However, there is little information on how ARUs compare to human surveyors or how best to use ARU data that can be collected continuously throughout the day. We used ARUs to conduct 2 acoustic surveys for king (Rallus elegans) and clapper rails (R. crepitans) within a tidal marsh complex along the Pamunkey River, Virginia, USA, during May–July 2015. To determine the effectiveness of an ARU in replacing human personnel, we compared results of callback point‐count surveys with concurrent acoustic recordings and calculated estimates of detection probability for both rail species combined. The success of ARUs at detecting rails that human observers recorded decreased with distance (P ≤ 0.001), such that at <25 m, 90.3% of human‐recorded rails also were detected by the ARU, but at >75 m, only 34.0% of human‐detected rails were detected by the ARU. To determine a subsampling scheme for continuous ARU data that allows for effective surveying of presence and call rates of rails, we used ARUs to conduct 15 continuous 48‐hr passive surveys, generating 720 hr of recordings. We established 5 subsampling periods of 5, 10, 15, 30, and 45 min to evaluate ARU‐based presence and vocalization detections of rails compared with each of the full 60‐min sampling of ARU‐based detection of rails. All subsampling periods resulted in different (P ≤ 0.001) detection rates and unstandardized vocalization rates compared with the hourly sampling period. However, standardized vocalization counts from the 30‐min subsampling period were not different from vocalization counts of the full hourly sampling period. When surveying rail species in estuarine environments, species‐, habitat‐, and ARU‐specific limitations to ARU sampling should be considered when making inferences about abundances and distributions from ARU data. 
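
The subsampling comparison can be illustrated with a small simulation: keep only the first N minutes of each recorded hour, then compare raw versus per-minute-standardized vocalization counts against the full hour. The call data here are randomly generated, not ARU recordings:

```python
# Simulated version of the ARU subsampling comparison: keep the first N
# minutes of each recorded hour and compare raw vs. per-minute call
# counts. Call data are randomly generated, not field recordings.
import random

random.seed(1)
# one week of hourly recordings: minute-by-minute call counts
hours = [[random.choice([0, 0, 0, 1, 2, 3]) for _ in range(60)]
         for _ in range(168)]

def subsample_counts(hours, n_min):
    raw = [sum(h[:n_min]) for h in hours]   # calls in the first n_min minutes
    std = [cnt / n_min for cnt in raw]      # standardized per-minute rate
    return sum(raw) / len(raw), sum(std) / len(std)

for n in (5, 10, 15, 30, 45, 60):
    raw, std = subsample_counts(hours, n)
    print(n, round(raw, 2), round(std, 3))
# Raw counts grow with window length, while standardized per-minute rates
# stay comparable across windows, mirroring the finding that standardized
# 30-min counts resembled the full hourly sampling.
```

In real data, call timing is clumped rather than uniform, which is why the shorter windows in the study still differed from the hourly baseline until counts were standardized.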

  13. Does daily vitamin D 800 IU and calcium 1000 mg supplementation decrease the risk of falling in ambulatory women aged 65-71 years? A 3-year randomized population-based trial (OSTPRE-FPS).

    PubMed

    Kärkkäinen, Matti K; Tuppurainen, Marjo; Salovaara, Kari; Sandini, Lorenzo; Rikkonen, Toni; Sirola, Joonas; Honkanen, Risto; Arokoski, Jari; Alhava, Esko; Kröger, Heikki

    2010-04-01

    The hypothesis was that calcium and vitamin D supplementation prevents falls at the population level. OSTPRE-FPS was a randomized, population-based open trial with a 3-year follow-up. The supplementation group (n=1566) received daily cholecalciferol 800 IU + calcium carbonate 1000 mg, while the control group (n=1573) received no supplementation or placebo. A randomly selected subsample of 593 subjects underwent a detailed measurement program including serum 25(OH)D measurements. The occurrence of falls was the primary outcome of the study. The participants in the subsample were telephoned at 4-month intervals, and the rest of the trial population was interviewed by phone once a year. In the entire trial population (ETP), there were 812 women with 1832 falls in the intervention group and 833 women with 1944 falls in the control group (risk ratio 0.98, 95% CI 0.92-1.05, P=0.160). The supplementation was not associated with single or multiple falls in the ETP. However, in the subsample, multiple-fall incidence decreased by 30% (odds ratio (OR) 0.70, 95% CI 0.50-0.97, P=0.034) in the supplementation group. Further, the supplementation decreased the incidence of multiple falls requiring medical attention (OR 0.72, 95% CI 0.53-0.97, P=0.031) in the ETP. The mean compliance was 78% in the entire trial population and 79% in the subsample. Overall, the primary analysis showed no association between calcium and vitamin D supplementation and risk of falls. However, the results of a post hoc analysis suggested a decreased risk of multiple falls requiring medical attention; this finding requires confirmation. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
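
The odds ratios reported above are the standard 2x2-table kind. A worked sketch with invented counts (not the trial's data) shows how such an OR and its 95% CI are computed by the common Woolf (log) method:

```python
# Worked example of a 2x2-table odds ratio with a Woolf (log-method) 95%
# CI, in the style of the outcomes above. The counts are invented; they
# are not the OSTPRE-FPS data.
import math

a, b = 70, 230   # supplementation arm: multiple falls yes / no
c, d = 95, 205   # control arm:         multiple falls yes / no

or_ = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 0.66 0.46 0.94
```

An interval whose upper bound stays below 1.0, as here, is what lets the paper call the reduction in multiple falls statistically significant.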

  14. Complex segregation analysis of blood pressure and heart rate measured before and after a 20-week endurance exercise training program: the HERITAGE Family Study.

    PubMed

    An, P; Rice, T; Pérusse, L; Borecki, I B; Gagnon, J; Leon, A S; Skinner, J S; Wilmore, J H; Bouchard, C; Rao, D C

    2000-05-01

    Complex segregation analysis of baseline resting blood pressure (BP) and heart rate (HR) and of their responses to training (post-training minus baseline) was performed in a sample of 482 individuals from 99 white families who participated in the HERITAGE Family Study. Resting BP and HR were measured at baseline and after a 20-week training program. Baseline resting BP and HR were age-adjusted and age-BMI-adjusted, and the responses to training were age-adjusted and age-baseline-adjusted, within four gender-by-generation groups. This study also analyzed the responses to training in two subsets of families: (1) the so-called "high" subsample, 45 families (216 individuals) with at least one member whose baseline resting BP is in the high end of the normal BP range (the upper 95th percentile: systolic BP [SBP] ≥ 135 or diastolic BP [DBP] ≥ 80 mm Hg); and (2) the so-called "nonhigh" subsample, the 54 remaining families (266 individuals). Baseline resting SBP was influenced by a multifactorial component (23%), which was independent of body mass index (BMI). Baseline resting DBP was influenced by a putative recessive locus, which accounted for 31% of the variance. In addition to the major gene effect, which may impact BMI as well, baseline resting DBP was also influenced by a multifactorial component (29%). Baseline resting HR was influenced by a putative dominant locus independent of BMI, which accounted for 31% of the variance. For the responses to training, no familiality was found in the whole sample or in the nonhigh subsample. However, in the high subsample, the resting SBP response to training was influenced by a putative recessive locus, which accounted for 44% of the variance. No familiality was found for the resting DBP response to training. The resting HR response to training was influenced by a major effect (accounting for 35% of the variance), with ambiguous transmission from parents to offspring.

  15. New residence times of the Holocene reworked shells on the west coast of Bohai Bay, China

    NASA Astrophysics Data System (ADS)

    Shang, Zhiwen; Wang, Fu; Li, Jianfen; Marshall, William A.; Chen, Yongsheng; Jiang, Xingyu; Tian, Lizhu; Wang, Hong

    2016-01-01

    Shelly cheniers and shell-rich beds found intercalated in near-shore marine muds and sandy sediments can be used to indicate the location of ancient shorelines, and help to estimate the height of sea level. However, dating the deposition of material within cheniers and shell-rich beds is not straightforward because much of this material is transported and re-worked, creating an unknown temporal offset, i.e., the residence time, between the death of a shell and its subsequent entombment. To quantify the residence time during the Holocene on a section of the northern Chinese coastline, a total of 47 shelly subsamples were taken from 17 discrete layers identified on the west coast of Bohai Bay. This material was AMS 14C dated and the calibrated ages were systematically compared. The subsamples were categorized by type as articulated and disarticulated bivalves, gastropod shells, and undifferentiated shell-hash. It was found that within most individual layers the calibrated ages of the subsamples became younger as the degree of apparent post-mortem re-working decreased. For example, the 14C ages of the bivalve samples trended younger in the order shell-hash → split shells → articulated shells. We propose that the youngest subsample age determined within an individual layer will be the closest to the actual depositional age of the material dated. Using this approach at four Holocene sites we find residence times ranging from 100 to 1260 cal yr, with average values of 600 cal yr for original 14C dates older than 1 ka cal BP and 100 cal yr for original 14C dates younger than 1 ka cal BP, respectively. Using this semi-empirical estimation of the shell residence times we have refined the existing chronology of the Holocene chenier ridges on the west coast of Bohai Bay.
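The layer-by-layer selection rule described above (take the youngest calibrated age in a layer as the depositional age, and read residence times off the older subsamples) can be sketched in a few lines of Python. The ages below are hypothetical, for illustration only, not values from the Bohai Bay layers.

```python
# Sketch: estimate the depositional age of a shell layer as the youngest
# calibrated 14C age among its subsamples, then derive residence times.

def depositional_age(ages_cal_bp):
    """Youngest calibrated age in a layer approximates deposition."""
    return min(ages_cal_bp)

def residence_times(ages_cal_bp):
    """Residence time of each subsample relative to deposition."""
    dep = depositional_age(ages_cal_bp)
    return [a - dep for a in ages_cal_bp]

# One hypothetical layer: shell-hash, split shells, articulated shells
layer = [3950, 3400, 3310]          # cal yr BP, most to least re-worked
print(depositional_age(layer))      # 3310
print(residence_times(layer))       # [640, 90, 0]
```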

  16. Mobile System for Precise Aero Delivery with Global Reach Network Capability

    DTIC Science & Technology

    2009-08-30

    No intention / need for taking advantage of networking with other agents. Atair's Onyx Micro Light (Onyx ML) delivery system (www.atair.com... onyx) (Fig. 9a) is a precision airdrop system designed to address the requirements of the Joint Precision Airdrop System MLW (JPADS-MLW) system of the... autonomous powered paraglider (LEAPP) developed under contract with DARPA (Fig. 9b). Fig. 9. Onyx ML with mock sensor payload release.

  17. Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system

    NASA Astrophysics Data System (ADS)

    Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.

    2016-12-01

    For optimum operation and a precise control system at particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility, and it has been experimentally tested at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we have studied different de-noising methods such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. An evaluation of the noise reduction was performed to verify the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform outperforms the other algorithms, and that the method is suitable for signal noise reduction in beam position monitoring systems.
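Of the de-noising methods compared, direct averaging is the simplest to illustrate. The stdlib-only Python sketch below applies a moving-average (simple FIR) smoother to a synthetic noisy position signal; the signal parameters and window length are assumptions for illustration, not values from the paper.

```python
import random, statistics

def moving_average(signal, window):
    """Direct averaging: each output sample is the mean of up to `window`
    neighbouring input samples (a simple FIR low-pass filter)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

random.seed(0)
true_pos = 100.0                                     # constant beam position
noisy = [true_pos + random.gauss(0, 0.5) for _ in range(2000)]
smooth = moving_average(noisy, 51)

print(round(statistics.stdev(noisy), 3))
print(round(statistics.stdev(smooth), 3))            # markedly smaller
```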

  18. A comparison of Boolean-based retrieval to the WAIS system for retrieval of aeronautical information

    NASA Technical Reports Server (NTRS)

    Marchionini, Gary; Barlow, Diane

    1994-01-01

    An evaluation was conducted comparing an information retrieval system that uses a Boolean-based retrieval engine and an inverted-file architecture with WAIS, which uses a vector-based engine. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database, which was mounted on a WAIS server and available through Dialog File 108, the latter serving as the Boolean-based system (BBS). High-recall and high-precision searches were done in the BBS, and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above those for high-recall BBS searches and consistently below those for high-precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than concentrated at the top ranks. Advantages and limitations of both types of systems are discussed.
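The precision and recall comparisons above use the standard set-based definitions, which can be sketched as follows; the document identifiers and relevance judgments are hypothetical.

```python
def precision_recall(retrieved, relevant):
    """Standard set-based IR metrics: precision = hits / retrieved,
    recall = hits / relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical result sets for one query
bbs = ["d1", "d2", "d3", "d4"]
wais = ["d2", "d3", "d5", "d6", "d7"]
relevant = ["d2", "d3", "d4", "d8"]

print(precision_recall(bbs, relevant))    # (0.75, 0.75)
print(precision_recall(wais, relevant))   # (0.4, 0.5)
```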

  19. Structural analysis of health-relevant policy-making information exchange networks in Canada.

    PubMed

    Contandriopoulos, Damien; Benoît, François; Bryant-Lukosius, Denise; Carrier, Annie; Carter, Nancy; Deber, Raisa; Duhoux, Arnaud; Greenhalgh, Trisha; Larouche, Catherine; Leclerc, Bernard-Simon; Levy, Adrian; Martin-Misener, Ruth; Maximova, Katerina; McGrail, Kimberlyn; Nykiforuk, Candace; Roos, Noralou; Schwartz, Robert; Valente, Thomas W; Wong, Sabrina; Lindquist, Evert; Pullen, Carolyn; Lardeux, Anne; Perroux, Melanie

    2017-09-20

    Health systems worldwide struggle to identify, adopt, and implement, in a timely and system-wide manner, the best evidence-informed policy-level practices. Yet there is still only limited evidence about individual and institutional best practices for fostering the use of scientific evidence in policy-making processes. The present project is the first national-level attempt to (1) map and structurally analyze, quantitatively, health-relevant policy-making networks that connect evidence production, synthesis, interpretation, and use; (2) qualitatively investigate the interaction patterns of a subsample of actors with high centrality metrics within these networks to develop an in-depth understanding of evidence circulation processes; and (3) combine these findings in order to assess a policy network's "absorptive capacity" regarding scientific evidence and integrate them into a conceptually sound and empirically grounded framework. The project is divided into two research components. The first component is based on quantitative analysis of the ties (relationships) that link nodes (participants) in a network. Network data will be collected through a multi-step snowball sampling strategy. Data will be analyzed structurally using social network mapping and analysis methods. The second component is based on qualitative interviews with a subsample of the Web survey participants occupying central, bridging, or atypical positions in the network. Interviews will focus on the process through which evidence circulates and enters practice. Results from both components will then be integrated through an assessment of the network's and subnetworks' effectiveness in identifying, capturing, interpreting, sharing, reframing, and recodifying scientific evidence in policy-making processes.
Knowledge developed from this project has the potential both to strengthen the scientific understanding of how policy-level knowledge transfer and exchange functions and to provide significantly improved advice on how to ensure evidence plays a more prominent role in public policies.

  20. Eclipsing damped Lyα systems in the Sloan Digital Sky Survey Data Release 12★

    NASA Astrophysics Data System (ADS)

    Fathivavsari, H.; Petitjean, P.; Jamialahmadi, N.; Khosroshahi, H. G.; Rahmani, H.; Finley, H.; Noterdaeme, P.; Pâris, I.; Srianand, R.

    2018-04-01

    We present the results of our automatic search for proximate damped Lyα absorption (PDLA) systems in the quasar spectra from the Sloan Digital Sky Survey Data Release 12. We constrain our search to those PDLAs lying within 1500 km s-1 from the quasar to make sure that the broad DLA absorption trough masks most of the strong Lyα emission from the broad line region (BLR) of the quasar. When the Lyα emission from the BLR is blocked by these so-called eclipsing DLAs, narrow Lyα emission from the host galaxy could be revealed as a narrow emission line (NEL) in the DLA trough. We define a statistical sample of 399 eclipsing DLAs with log N(H I) ≥ 21.10. We divide our statistical sample into three subsamples based on the strength of the NEL detected in the DLA trough. By studying the stacked spectra of these subsamples, we found that absorption from high-ionization species is stronger in DLAs with a stronger NEL in their absorption core. Moreover, absorption from the excited states of species like Si II is also stronger in DLAs with a stronger NEL. We also found no correlation between the luminosity of the Lyα NEL and the quasar luminosity. These observations are consistent with a scenario in which the DLAs with stronger NEL are denser and physically closer to the quasar. We propose that these eclipsing DLAs could be the product of the interaction between infalling and outflowing gas. High-resolution spectroscopic observations would be needed to shed some light on the nature of these eclipsing DLAs.

  1. Racial and Ethnic Differences in Total Knee Arthroplasty in the Veterans Affairs Health Care System, 2001-2013.

    PubMed

    Hausmann, Leslie R M; Brandt, Cynthia A; Carroll, Constance M; Fenton, Brenda T; Ibrahim, Said A; Becker, William C; Burgess, Diana J; Wandner, Laura D; Bair, Matthew J; Goulet, Joseph L

    2017-08-01

    To examine black-white and Hispanic-white differences in total knee arthroplasty from 2001 to 2013 in a large cohort of patients diagnosed with osteoarthritis (OA) in the Veterans Affairs (VA) health care system. Data were from the VA Musculoskeletal Disorders cohort, which includes data from electronic health records of more than 5.4 million veterans with musculoskeletal disorders diagnoses. We included white (non-Hispanic), black (non-Hispanic), and Hispanic (any race) veterans, age ≥50 years, with an OA diagnosis from 2001-2011 (n = 539,841). Veterans were followed from their first OA diagnosis until September 30, 2013. As a proxy for increased clinical severity, analyses were also conducted for a subsample restricted to those who saw an orthopedic or rheumatology specialist (n = 148,844). We used Cox proportional hazards regression to examine racial and ethnic differences in total knee arthroplasty by year of OA diagnosis, adjusting for age, sex, body mass index, physical and mental diagnoses, and pain intensity scores. We identified 12,087 total knee arthroplasty procedures in a sample of 473,170 white, 50,172 black, and 16,499 Hispanic veterans. In adjusted models examining black-white and Hispanic-white differences by year of OA diagnosis, total knee arthroplasty rates were lower for black than for white veterans diagnosed in all but 2 years. There were no Hispanic-white differences regardless of when diagnosis occurred. These patterns held in the specialty clinic subsample. Black-white differences in total knee arthroplasty appear to be persistent in the VA, even after controlling for potential clinical confounders. © 2016, American College of Rheumatology.

  2. Portable Airborne Laser System Measures Forest-Canopy Height

    NASA Technical Reports Server (NTRS)

    Nelson, Ross

    2005-01-01

    The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes. 
The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.
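The subsample-and-interleave step in the data-processing software can be sketched with stdlib Python: thin the 2 kHz laser stream and merge it with the 0.5 Hz dGPS stream by timestamp. The record layout, retained rate, and values below are assumptions for illustration, not the PALS file format.

```python
from heapq import merge

# Hypothetical streams of (time_s, tag, value) records
laser = [(i / 2000.0, "laser", 30.0) for i in range(10000)]   # 2 kHz, 5 s
gps   = [(i * 2.0, "gps", (45.0, -93.0)) for i in range(3)]   # 0.5 Hz

def subsample(stream, keep_every):
    """Keep every Nth record to thin the high-rate laser stream."""
    return stream[::keep_every]

laser_thin = subsample(laser, 20)            # 2 kHz -> 100 Hz
interleaved = list(merge(laser_thin, gps))   # ordered by timestamp

print(len(laser_thin))                       # 500
print(interleaved[0][1])
```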

  3. A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.

    PubMed

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang

    2017-06-28

    Integrating the advantages of INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical in guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor, with the aid of an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. To diminish the impact of factors such as sensor and device drift, the innovative aspect of this method is to employ the static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.

  4. Grain-size distribution and selected major and trace element concentrations in bed-sediment cores from the Lower Granite Reservoir and Snake and Clearwater Rivers, eastern Washington and northern Idaho, 2010

    USGS Publications Warehouse

    Braun, Christopher L.; Wilson, Jennifer T.; Van Metre, Peter C.; Weakland, Rhonda J.; Fosness, Ryan L.; Williams, Marshall L.

    2012-01-01

    Fifty subsamples from 15 cores were analyzed for major and trace elements. Concentrations of trace elements were low, with respect to sediment quality guidelines, in most cores. Typically, major and trace element concentrations were lower in the subsamples collected from the Snake River compared to those collected from the Clearwater River, the confluence of the Snake and Clearwater Rivers, and Lower Granite Reservoir. Generally, lower concentrations of major and trace elements were associated with coarser sediments (larger than 0.0625 millimeter) and higher concentrations of major and trace elements were associated with finer sediments (smaller than 0.0625 millimeter).

  5. Description, dissection, and subsampling of Apollo 14 core sample 14230

    NASA Technical Reports Server (NTRS)

    Fryxell, R.; Heiken, G.

    1971-01-01

    Core sample 14230, collected at Triplet Crater near the Fra Mauro landing site of the Apollo 14 mission, was dissected in greater detail than any previous core. Sediment from the actual lunar surface was missing, and 6.7 grams of sediment were removed from the base of the core for a portion of the biotest prime sample. Upper and lower portions of the original 70.7-gram core (12.5 centimeters long) were fractured excessively but not mixed stratigraphically. Three major morphologic units and 11 subdivisions were recognized. Dissection provided 55 subsamples in addition to three others made by removing longitudinal sections of the core impregnated with n-butyl methacrylate for use as a permanent documentary record and for studies requiring particles of known orientation.

  6. Lane-Level Vehicle Positioning : Integrating Diverse Systems for Precision and Reliability

    DOT National Transportation Integrated Search

    2013-05-13

    Integrated global positioning system/inertial navigation system (GPS/INS) technology, the backbone of vehicle positioning systems, cannot provide the precision and reliability needed for vehicle-based, lane-level positioning in all driving environmen...

  7. Navigation for space shuttle approach and landing using an inertial navigation system augmented by data from a precision ranging system or a microwave scan beam landing guidance system

    NASA Technical Reports Server (NTRS)

    Mcgee, L. A.; Smith, G. L.; Hegarty, D. M.; Merrick, R. B.; Carson, T. M.; Schmidt, S. F.

    1970-01-01

    A preliminary study has been made of the navigation performance which might be achieved for the high cross-range space shuttle orbiter during final approach and landing by using an optimally augmented inertial navigation system. Computed navigation accuracies are presented for an on-board inertial navigation system augmented (by means of an optimal filter algorithm) with data from two different ground navigation aids: a precision ranging system and a microwave scanning beam landing guidance system. These results show that augmentation with either type of ground navigation aid is capable of providing a navigation performance at touchdown which should be adequate for the space shuttle. In addition, adequate navigation performance for space shuttle landing is obtainable from the precision ranging system even with a complete dropout of precision range measurements as much as 100 seconds before touchdown.
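The "optimal filter algorithm" used for augmentation is, in modern terms, a Kalman filter. As a minimal sketch (not the study's actual filter, and with purely hypothetical noise levels), a scalar Kalman filter can blend dead-reckoned INS increments with noisy range-derived position fixes:

```python
import random

def kalman_augment(ins_increments, range_meas, q=0.01, r=4.0):
    """Scalar Kalman filter: propagate position with INS increments
    (process noise variance q), then update with a range-derived
    position measurement (noise variance r)."""
    x, p = 0.0, 100.0                  # initial state estimate and variance
    estimates = []
    for dx, z in zip(ins_increments, range_meas):
        x, p = x + dx, p + q           # predict using the INS increment
        k = p / (p + r)                # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p   # measurement update
        estimates.append(x)
    return estimates

random.seed(1)
truth = [(i + 1) * 0.5 for i in range(200)]             # true positions
ins = [0.5 + random.gauss(0, 0.05) for _ in truth]      # noisy increments
meas = [t + random.gauss(0, 2.0) for t in truth]        # noisy range fixes
est = kalman_augment(ins, meas)
print(round(abs(est[-1] - truth[-1]), 2))               # small final error
```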

  8. The Complete Local Volume Groups Sample - I. Sample selection and X-ray properties of the high-richness subsample

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Ewan; Ponman, Trevor J.; Kolokythas, Konstantinos; Raychaudhury, Somak; Babul, Arif; Vrtilek, Jan M.; David, Laurence P.; Giacintucci, Simona; Gitti, Myriam; Haines, Chris P.

    2017-12-01

    We present the Complete Local-Volume Groups Sample (CLoGS), a statistically complete optically selected sample of 53 groups within 80 Mpc. Our goal is to combine X-ray, radio and optical data to investigate the relationship between member galaxies, their active nuclei and the hot intra-group medium (IGM). We describe sample selection, define a 26-group high-richness subsample of groups containing at least four optically bright (log LB ≥ 10.2 LB⊙) galaxies, and report the results of XMM-Newton and Chandra observations of these systems. We find that 14 of the 26 groups are X-ray bright, possessing a group-scale IGM extending at least 65 kpc and with luminosity >1041 erg s-1, while a further three groups host smaller galaxy-scale gas haloes. The X-ray bright groups have masses in the range M500 ≃ 0.5-5 × 1013 M⊙, based on system temperatures of 0.4-1.4 keV, and X-ray luminosities in the range 2-200 × 1041 erg s-1. We find that ∼53-65 per cent of the X-ray bright groups have cool cores, a somewhat lower fraction than found by previous archival surveys. Approximately 30 per cent of the X-ray bright groups show evidence of recent dynamical interactions (mergers or sloshing), and ∼35 per cent of their dominant early-type galaxies host active galactic nuclei with radio jets. We find no groups with unusually high central entropies, as predicted by some simulations, and confirm that CLoGS is in principle capable of detecting such systems. We identify three previously unrecognized groups, and find that they are either faint (LX, R500 < 1042 erg s-1) with no concentrated cool core, or highly disturbed. This leads us to suggest that ∼20 per cent of X-ray bright groups in the local universe may still be unidentified.

  9. A Lane-Level LBS System for Vehicle Network with High-Precision BDS/GPS Positioning

    PubMed Central

    Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo

    2015-01-01

    In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem. PMID:25755665

  10. Predicting Document Retrieval System Performance: An Expected Precision Measure.

    ERIC Educational Resources Information Center

    Losee, Robert M., Jr.

    1987-01-01

    Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…

  11. Precision and repeatability of the Optotrak 3020 motion measurement system.

    PubMed

    States, R A; Pappas, E

    2006-01-01

    Several motion analysis systems are used by researchers to quantify human motion and to perform accurate surgical procedures. The Optotrak 3020 is one of these systems, and despite its widespread use there is no published information on its precision and repeatability. We used a repeated-measures design to evaluate the precision and repeatability of the Optotrak 3020 by measuring distance and angle across three sessions, four distances, and three conditions (motion, static vertical, and static tilted). Precision and repeatability were found to be excellent for both angle and distance, although they decreased with increasing distance from the sensors and with tilt from the plane of the sensors. Motion did not have a significant effect on the precision of the measurements. In conclusion, the measurement error of the Optotrak is minimal. Further studies are needed to evaluate its precision and repeatability under human motion conditions.
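Precision in a repeated-measures design of this kind is typically summarized as the standard deviation of repeated measurements within each condition. A stdlib-Python sketch with hypothetical distance readings (the paper's raw data are not reproduced here):

```python
import statistics

# Hypothetical repeated distance measurements (mm) in three conditions
measurements = {
    "static_vertical": [500.02, 500.01, 500.03, 500.02],
    "static_tilted":   [500.05, 499.98, 500.07, 499.96],
    "motion":          [500.03, 500.00, 500.04, 500.01],
}

def precision_sd(values):
    """Precision as the sample standard deviation of repeated measures."""
    return statistics.stdev(values)

for cond, vals in measurements.items():
    print(cond, round(precision_sd(vals), 4))
```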

  12. Precision thermometry and the quantum speed limit

    NASA Astrophysics Data System (ADS)

    Campbell, Steve; Genoni, Marco G.; Deffner, Sebastian

    2018-04-01

    We assess precision thermometry for an arbitrary single quantum system. For a d-dimensional harmonic system we show that the gap sets a single temperature that can be optimally estimated. Furthermore, we establish a simple linear relationship between the gap and this temperature, and show that the precision exhibits a quadratic relationship. We extend our analysis to explore systems with arbitrary spectra, showing that exploiting anharmonicity and degeneracy can greatly enhance the precision of thermometry. Finally, we critically assess the dynamical features of two thermometry protocols for a two level system. By calculating the quantum speed limit we find that, despite the gap fixing a preferred temperature to probe, there is no evidence of this emerging in the dynamical features.
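For a thermal state, the quantum Fisher information for temperature reduces to the energy variance: F(T) = Var(H)/T^4 (with k_B = 1). The sketch below uses the standard two-level expression rather than any code from the paper, and scans temperature numerically to locate the gap-fixed optimum the abstract refers to; the gap value and grid are assumptions.

```python
import math

def qfi_temperature(gap, T):
    """QFI for T of a thermal two-level system with energy gap `gap`:
    F(T) = Var(H) / T^4, where Var(H) = gap^2 * p * (1 - p) and
    p is the excited-state population."""
    p = 1.0 / (1.0 + math.exp(gap / T))
    return gap**2 * p * (1.0 - p) / T**4

gap = 1.0
Ts = [0.01 * k for k in range(1, 301)]          # temperature grid
best_T = max(Ts, key=lambda T: qfi_temperature(gap, T))
print(round(best_T, 2))                         # optimum near T ~ 0.24 * gap
```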

  13. Application of airborne thermal imagery to surveys of Pacific walrus

    USGS Publications Warehouse

    Burn, D.M.; Webber, M.A.; Udevitz, M.S.

    2006-01-01

    We conducted tests of airborne thermal imagery of Pacific walrus to determine if this technology can be used to detect walrus groups on sea ice and estimate the number of walruses present in each group. In April 2002 we collected thermal imagery of 37 walrus groups in the Bering Sea at spatial resolutions ranging from 1-4 m. We also collected high-resolution digital aerial photographs of the same groups. Walruses were considerably warmer than the background environment of ice, snow, and seawater and were easily detected in thermal imagery. We found a significant linear relation between walrus group size and the amount of heat measured by the thermal sensor at all 4 spatial resolutions tested. This relation can be used in a double-sampling framework to estimate total walrus numbers from a thermal survey of a sample of units within an area and photographs from a subsample of the thermally detected groups. Previous methods used in visual aerial surveys of Pacific walrus have sampled only a small percentage of available habitat, resulting in population estimates with low precision. Results of this study indicate that an aerial survey using a thermal sensor can cover as much as 4 times the area per hour of flight time with greater reliability than visual observation.
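The double-sampling estimator implied above can be sketched as an ordinary least-squares calibration fit on the photographed subsample, then applied to heat measurements from all thermally detected groups. All numbers below are invented for illustration.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (stdlib only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Photographed subsample: (heat, counted walruses), hypothetical values
heat_sub = [10.0, 20.0, 40.0, 80.0]
count_sub = [6.0, 11.0, 21.0, 41.0]
a, b = fit_line(heat_sub, count_sub)

# All thermally detected groups in the survey (heat only)
heat_all = [10.0, 15.0, 30.0, 55.0, 80.0]
total = sum(a + b * h for h in heat_all)
print(round(b, 2), round(total, 1))   # 0.5 100.0
```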

  14. Shell We Date? ESR Dating Sangamon Interglacial Episode Deposits at Hopwood Farm, IL.

    PubMed

    Blackwell, Bonnie A B; Kim, Danny M K; Curry, B Brandon; Grimley, David A; Blickstein, Joel I B; Skinner, Anne R

    2016-12-01

    During the Sangamon Episode, North America occasionally experienced warm climates. At Hopwood Farm, IL, a small kettle lake filled with sediment after the Illinois Episode glaciers retreated from southern Illinois. To date those deposits, 14 mollusc samples newly collected with associated sediment from three depths at Hopwood Farm were dated by standard electron spin resonance (ESR) dating. ESR can date molluscs from ~0.5 ka to >2 Ma in age with 5-10% precision, by comparing the accumulated radiation dose with the total radiation dose rate from the mollusc and its environment. Because all molluscs contained ≤0.6 ppm U, their ages do not depend on the assumed U uptake model. Using five different species, ESR analyses for 14 mollusc subsamples from Hopwood Farm showed that Unit 3, a layer rich in lacustrine molluscs, dates at 102 ± 7 ka to 90 ± 6 ka, which correlates with Marine (Oxygen) Isotope Stage 5c-b. Thus, the period with the highest non-arboreal pollen at Hopwood also correlates with the European Brørup, Dansgaard-Oeschger Event DO 23, a period when climates were cooling and drying somewhat. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
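The dating principle stated above (age = accumulated dose divided by total dose rate) invites a one-line calculation with simple error propagation in quadrature; the dose values here are hypothetical, not the Hopwood Farm results.

```python
def esr_age(dose_gy, dose_err, rate_gy_per_ka, rate_err):
    """ESR age = accumulated dose / total dose rate, with relative
    uncertainties combined in quadrature."""
    age = dose_gy / rate_gy_per_ka                 # age in ka
    rel = ((dose_err / dose_gy) ** 2
           + (rate_err / rate_gy_per_ka) ** 2) ** 0.5
    return age, age * rel

# Hypothetical subsample: 200 Gy accumulated, 2.0 Gy/ka total dose rate
age, err = esr_age(200.0, 10.0, 2.0, 0.1)
print(age, round(err, 1))   # 100.0 7.1
```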

  15. Mass and age of red giant branch stars observed with LAMOST and Kepler

    NASA Astrophysics Data System (ADS)

    Wu, Yaqian; Xiang, Maosheng; Bi, Shaolan; Liu, Xiaowei; Yu, Jie; Hon, Marc; Sharma, Sanjib; Li, Tanda; Huang, Yang; Liu, Kang; Zhang, Xianfei; Li, Yaguang; Ge, Zhishuai; Tian, Zhijia; Zhang, Jinghua; Zhang, Jianwei

    2018-04-01

    Obtaining accurate and precise masses and ages for large numbers of giant stars is of great importance for unraveling the assemblage history of the Galaxy. In this paper, we estimate masses and ages of 6940 red giant branch (RGB) stars with asteroseismic parameters deduced from Kepler photometry and stellar atmospheric parameters derived from LAMOST spectra. The typical uncertainty in mass is a few per cent, and that in age is ~20 per cent. The sample stars reveal two separate sequences in the age-[α/Fe] relation: a high-α sequence with stars older than ~8 Gyr and a low-α sequence composed of stars with ages ranging from younger than 1 Gyr to older than 11 Gyr. We further investigate the feasibility of deducing ages and masses directly from LAMOST spectra with a machine learning method based on kernel-based principal component analysis, taking a sub-sample of these RGB stars as a training data set. We demonstrate that ages thus derived achieve an accuracy of ~24 per cent. We also explored the feasibility of estimating ages and masses based on the spectroscopically measured carbon and nitrogen abundances. The results are quite satisfactory and significantly improved compared to the previous studies.
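Asteroseismic masses of this kind follow from the standard scaling relation linking the frequency of maximum power, the large frequency separation, and the effective temperature. The sketch below shows that bare relation with assumed solar reference values; it is a generic illustration, not the paper's pipeline.

```python
# Standard asteroseismic scaling relation for mass (solar units).
# Solar reference values are assumptions commonly used in the literature.
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def scaling_mass(nu_max, delta_nu, teff):
    """M/Msun ~ (nu_max/nu_max_sun)^3 (dnu/dnu_sun)^-4 (Teff/Teff_sun)^1.5"""
    return ((nu_max / NU_MAX_SUN) ** 3
            * (delta_nu / DNU_SUN) ** -4
            * (teff / TEFF_SUN) ** 1.5)

# A typical RGB star (hypothetical seismic and spectroscopic values)
m = scaling_mass(nu_max=30.0, delta_nu=4.0, teff=4800.0)
print(round(m, 2))   # roughly 0.9 solar masses
```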

  16. How colorful! A feature it is, isn't it?

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2015-01-01

    A display's color subpixel geometry provides an intriguing opportunity for improving the readability of text. TrueType fonts can be positioned at subpixel resolution. With such a constraint in mind, how does one need to design font characteristics? On the other hand, display manufacturers try hard to address the color display's dilemma: smaller pixel pitch and larger display diagonals strongly increase the total number of pixels. Consequently, the cost of column and row drivers as well as power consumption increase. Perceptual color subpixel rendering using color component subsampling may save about 1/3 of the color subpixels (and reduce power dissipation). This talk will try to elaborate on the following questions, based on simulations of several different subpixel matrix layouts: Up to what level are display device constraints compatible with software-specific ideas of rendering text? How much of the color contrast will remain? How best to consider preferred viewing distance for readability of text? How much does visual acuity vary at 20/20 vision? Can simplified models of human visual color perception be easily applied to text rendering on displays? How linear is human visual contrast perception around the band limit of a display's spatial resolution? How colorful does the rendered text appear on the screen? How much does viewing angle influence the performance of subpixel layouts and color subpixel rendering?

  17. On Galactic Density Modeling in the Presence of Dust Extinction

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Rix, Hans-Walter; Green, Gregory M.; Schlafly, Edward F.; Finkbeiner, Douglas P.

    2016-02-01

    Inferences about the spatial density or phase-space structure of stellar populations in the Milky Way require a precise determination of the effective survey volume. The volume observed by surveys such as Gaia or near-infrared spectroscopic surveys, which have good coverage of the Galactic midplane region, is highly complex because of the abundant small-scale structure in the three-dimensional interstellar dust extinction. We introduce a novel framework for analyzing the importance of small-scale structure in the extinction. This formalism demonstrates that the spatially complex effect of extinction on the selection function of a pencil-beam or contiguous sky survey is equivalent to a low-pass filtering of the extinction-affected selection function with the smooth density field. We find that the angular resolution of current 3D extinction maps is sufficient for analyzing Gaia sub-samples of millions of stars. However, the current distance resolution is inadequate and needs to be improved by an order of magnitude, especially in the inner Galaxy. We also present a practical and efficient method for properly taking the effect of extinction into account in analyses of Galactic structure through an effective selection function. We illustrate its use with the selection function of red-clump stars in APOGEE, using and comparing a variety of current 3D extinction maps.

  18. Patient-Centered Precision Health In A Learning Health Care System: Geisinger's Genomic Medicine Experience.

    PubMed

    Williams, Marc S; Buchanan, Adam H; Davis, F Daniel; Faucett, W Andrew; Hallquist, Miranda L G; Leader, Joseph B; Martin, Christa L; McCormick, Cara Z; Meyer, Michelle N; Murray, Michael F; Rahm, Alanna K; Schwartz, Marci L B; Sturm, Amy C; Wagner, Jennifer K; Williams, Janet L; Willard, Huntington F; Ledbetter, David H

    2018-05-01

    Health care delivery is increasingly influenced by the emerging concepts of precision health and the learning health care system. Although not synonymous with precision health, genomics is a key enabler of individualized care. Delivering patient-centered, genomics-informed care based on individual-level data in the current national landscape of health care delivery is a daunting challenge. Problems to overcome include data generation, analysis, storage, and transfer; knowledge management and representation for patients and providers at the point of care; process management; and outcomes definition, collection, and analysis. Development, testing, and implementation of a genomics-informed program require multidisciplinary collaboration and building the concepts of precision health into a multilevel implementation framework. Using the principles of a learning health care system provides a promising solution. This article describes the implementation of population-based genomic medicine in an integrated learning health care system: a working example of a precision health program.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad Allen

    EDENx is a multivariate data visualization tool that allows interactive, user-driven analysis of large-scale data sets with high dimensionality. EDENx builds on our earlier system, called EDEN, to enable analysis of more dimensions and larger scale data sets. EDENx provides an initial overview of summary statistics for each variable in the data set under investigation. EDENx allows the user to interact with graphical summary plots of the data to investigate subsets and their statistical associations. These plots include histograms, binned scatterplots, binned parallel coordinate plots, timeline plots, and graphical correlation indicators. From the EDENx interface, a user can select a subsample of interest and launch a more detailed data visualization via the EDEN system. EDENx is best suited for high-level, aggregate analysis tasks while EDEN is more appropriate for detailed data investigations.

  20. Efficacy of the Social Skills Improvement System Classwide Intervention Program (SSIS-CIP) primary version.

    PubMed

    DiPerna, James Clyde; Lei, Puiwa; Bellinger, Jillian; Cheng, Weiyi

    2015-03-01

    A multisite cluster randomized trial was conducted to examine the effects of the Social Skills Improvement System Classwide Intervention Program (SSIS-CIP; Elliott & Gresham, 2007) on students' classroom social behavior. The final sample included 432 students across 38 second grade classrooms. Social skills and problem behaviors were measured via the SSIS rating scale for all participants, and direct observations were completed for a subsample of participants within each classroom. Results indicated that the SSIS-CIP demonstrated positive effects on teacher ratings of participants' social skills and internalizing behaviors, with the greatest changes occurring in classrooms with students who exhibited lower skill proficiency prior to implementation. Statistically significant differences were not observed between treatment and control participants on teacher ratings of externalizing problem behaviors or direct observation.

  1. Target tracking system based on preliminary and precise two-stage compound cameras

    NASA Astrophysics Data System (ADS)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that must be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound design. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to a small field of view for tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This preliminary and precise two-stage compound approach extends the search scope, improves target tracking accuracy, and has practical value.
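
    The two-group PID idea can be sketched as gain scheduling on the tracking error: coarse gains while the target is far off (wide field of view), gentler gains once it is close (narrow field of view). The gains, plant model, and switching threshold below are illustrative assumptions, not the paper's values:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def track(target, pos=0.0, dt=0.01, switch_err=0.5, steps=2000):
    """Drive `pos` toward `target`: coarse gains in the wide field of
    view, a second gentler gain set once the error falls below
    `switch_err` (the narrow field of view)."""
    coarse, fine = PID(4.0, 0.5, 0.1), PID(1.5, 0.2, 0.05)
    for _ in range(steps):
        err = target - pos
        ctrl = (coarse if abs(err) > switch_err else fine).step(err, dt)
        pos += ctrl * dt          # toy first-order plant
    return pos

print(track(10.0))                # converges near the target
```

    In a real gimbal the switch would also change the optical path, so the threshold must be chosen with hysteresis in mind to avoid chattering between the two fields of view.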

  2. Study on the position accuracy of a mechanical alignment system

    NASA Astrophysics Data System (ADS)

    Cai, Yimin

    In this thesis, we investigated the precision level and established the baseline achieved by a mechanical alignment system using datums and reference surfaces. The factors that affect the accuracy of a mechanical alignment system were studied, and methodology was developed to suppress these factors so that the system can reach its full potential precision. To characterize the mechanical alignment system quantitatively, a new optical position monitoring system using quadrant detectors was developed in this thesis; it can monitor multiple degrees of freedom of mechanical workpieces in real time with high precision. We studied the noise factors inside the system and optimized the optical system. Because one of the major limiting noise factors is shifting of the laser beam, a noise cancellation technique was successfully developed to suppress this noise, and the feasibility of ultra-high resolution (<20 Å) displacement monitoring was demonstrated. Using the optical position monitoring system, repeatability experiments on the mechanical alignment system were conducted on different kinds of samples, including steel, aluminum, glass and plastics, all of size 100 mm x 130 mm. The alignment accuracy was thus studied quantitatively rather than qualitatively, as in earlier work. In a controlled environment, the alignment precision could be improved fivefold by securing the datum without other means of help. The alignment accuracy of an aluminum workpiece with a reference surface produced by milling is about 3 times better than one produced by shearing. We also found that the sample material can have a fairly significant effect on the alignment precision of the system. Contamination trapped between the datum and reference surfaces in a mechanical alignment system can cause registration errors or reduce the level of manufacturing precision. 
In the thesis, artificial and natural dust particles were used to simulate real situations, and their effects on system precision were investigated. In these experiments, we identified two effective cleaning processes.

  3. Assessment of average exposure to organochlorine pesticides in southern Togo from water, maize (Zea mays) and cowpea (Vigna unguiculata).

    PubMed

    Mawussi, G; Sanda, K; Merlina, G; Pinelli, E

    2009-03-01

    Drinking water, cowpea and maize grains were sampled in some potentially exposed agro-ecological areas in Togo and analysed for contamination by some common organochlorine pesticides. A total of 19 organochlorine pesticides were investigated in ten subsamples of maize, ten subsamples of cowpea and nine subsamples of drinking water. Analytical methods included solvent extraction of the pesticide residues and their subsequent quantification using gas chromatography-mass spectrometry (GC/MS). Estimated daily intakes (EDIs) of pesticides were also determined. Pesticide residues in drinking water (0.04-0.40 microg l(-1)) were higher than the maximum residue limit (MRL) (0.03 microg l(-1)) set by the World Health Organization (WHO). Dieldrin, endrin, heptachlor epoxide and endosulfan levels (13.16-98.79 microg kg(-1)) in cowpea grains exceeded MRLs applied in France (10-50 microg kg(-1)). Contaminant levels in maize grains (0.53-65.70 microg kg(-1)) were below the MRLs (20-100 microg kg(-1)) set by the Food and Agriculture Organization (FAO) and the WHO. EDIs of the tested pesticides ranged from 0.02% to 162.07% of the acceptable daily intakes (ADIs). Population exposure levels of dieldrin and heptachlor epoxide were higher than the FAO/WHO standards. A comprehensive national monitoring programme on organochlorine pesticides should be undertaken, covering other relevant sources such as meat, fish, eggs and milk.
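
    The EDI arithmetic behind figures like "162.07% of the ADI" is a simple intake calculation. The residue level, consumption rate, body weight, and ADI below are hypothetical numbers chosen for illustration, not the paper's data:

```python
def edi(residue_ug_per_kg, intake_kg_per_day, body_weight_kg=60.0):
    """Estimated daily intake in µg per kg body weight per day."""
    return residue_ug_per_kg * intake_kg_per_day / body_weight_kg

def percent_of_adi(edi_value, adi_ug_per_kg_bw):
    """EDI expressed as a percentage of the acceptable daily intake."""
    return 100.0 * edi_value / adi_ug_per_kg_bw

# Hypothetical: 50 µg/kg dieldrin in cowpea, 0.12 kg eaten per day,
# 60 kg adult, ADI of 0.1 µg/kg bw/day.
e = edi(50.0, 0.12)
print(round(percent_of_adi(e, 0.1), 1))  # → 100.0
```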

  4. Examining the cosmic acceleration with the latest Union2 supernova data

    NASA Astrophysics Data System (ADS)

    Li, Zhengxiang; Wu, Puxun; Yu, Hongwei

    2011-01-01

    In this Letter, by reconstructing the Om diagnostic and the deceleration parameter q from the latest Union2 Type Ia supernova sample, with and without the systematic error, along with the baryon acoustic oscillation (BAO) and cosmic microwave background (CMB) data, we study the cosmic expansion history using the Chevallier-Polarski-Linder (CPL) parametrization. We find that Union2+BAO favor an expansion with decreasing acceleration at z<0.3. However, once the CMB data are added to the analysis, the cosmic acceleration is found to be still increasing, indicating a tension between low-redshift and high-redshift data. In order to reduce this tension significantly, two different methods are considered and thus two different subsamples of Union2 are selected. We then find that the two subsamples+BAO+CMB give completely different results on the cosmic expansion history when the systematic error is ignored, with one suggesting a decreasing cosmic acceleration and the other just the opposite, although both of them, along with BAO, support that the cosmic acceleration is slowing down. However, once the systematic error is considered, both subsamples of Union2 along with BAO and CMB favor an increasing present cosmic acceleration. Therefore, a clear-cut answer on whether the cosmic acceleration is slowing down calls for more consistent data and more reliable methods of analysis.

  5. Bed-material characteristics of the Sacramento–San Joaquin Delta, California, 2010–13

    USGS Publications Warehouse

    Marineau, Mathieu D.; Wright, Scott A.

    2017-02-10

    The characteristics of bed material at selected sites within the Sacramento–San Joaquin Delta, California, during 2010–13 are described in a study conducted by the U.S. Geological Survey in cooperation with the Bureau of Reclamation. During 2010‒13, six complete sets of samples were collected. Samples were initially collected at 30 sites; however, starting in 2012, samples were collected at 7 additional sites. These sites are generally collocated with an active streamgage. At all but one site, a separate bed-material sample was collected at three locations within the channel (left, right, and center). Bed-material samples were collected using either a US BMH–60 or a US BM–54 (for sites with higher stream velocity) cable-suspended, scoop sampler. Samples from each location were oven-dried and sieved. Bed material finer than 2 millimeters was subsampled using a sieving riffler and processed using a Beckman Coulter LS 13–320 laser diffraction particle-size analyzer. To determine the organic content of the bed material, the loss on ignition method was used for one subsample from each location. Particle-size distributions are presented as cumulative percent finer than a given size. Median and 90th-percentile particle size, and the percentage of subsample mass lost using the loss on ignition method for each sample are also presented in this report.
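
    Percentile sizes such as the median (d50) and 90th percentile (d90) reported above are read off the cumulative particle-size curve. A minimal sketch using an invented sieve curve (not the report's measurements), interpolating in log space as is conventional for grain-size data:

```python
import numpy as np

def percentile_size(sieve_mm, pct_finer, p):
    """Particle size at which `p` percent of the sample is finer,
    interpolated from a cumulative sieve curve in log-size space."""
    logd = np.interp(p, pct_finer, np.log10(sieve_mm))
    return 10 ** logd

# Illustrative cumulative curve: sieve opening (mm) vs. percent finer.
sieve_mm  = np.array([0.0625, 0.125, 0.25, 0.5, 1.0, 2.0])
pct_finer = np.array([5.0, 20.0, 50.0, 80.0, 95.0, 100.0])

d50 = percentile_size(sieve_mm, pct_finer, 50.0)
d90 = percentile_size(sieve_mm, pct_finer, 90.0)
print(round(d50, 3), round(d90, 3))   # → 0.25 0.794
```

    `np.interp` requires the percent-finer values to be increasing, which a well-formed cumulative curve always satisfies.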

  6. Precision Time Protocol-Based Trilateration for Planetary Navigation

    NASA Technical Reports Server (NTRS)

    Murdock, Ron

    2015-01-01

    Progeny Systems Corporation has developed a high-fidelity, field-scalable, non-Global Positioning System (GPS) navigation system that offers precision localization over communications channels. The system is bidirectional, providing position information to both base and mobile units. It is the first-ever wireless use of the Institute of Electrical and Electronics Engineers (IEEE) Precision Time Protocol (PTP) in a bidirectional trilateration navigation system. The innovation provides a precise and reliable navigation capability to support traverse-path planning systems and other mapping applications, and it establishes a core infrastructure for long-term lunar and planetary occupation. Mature technologies are integrated to provide navigation capability and to support data and voice communications on the same network. On Earth, the innovation is particularly well suited for use in unmanned aerial vehicles (UAVs), as it offers a non-GPS precision navigation and location service for use in GPS-denied environments. Its bidirectional capability provides real-time location data to the UAV operator and to the UAV. This approach optimizes assisted GPS techniques and can be used to determine the presence of GPS degradation, spoofing, or jamming.
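
    The position fix in any range-based system of this kind reduces to trilateration from measured distances. The sketch below is a generic textbook linearization, not Progeny's implementation: subtracting the first range equation from the others cancels the quadratic terms, leaving a linear least-squares problem.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linearized least-squares position from ranges to >= 3 anchors.
    From |x - a_i|^2 = r_i^2, subtracting the i = 0 equation gives
    2 (a_i - a_0) . x = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = np.array([3.0, 4.0])
ranges = [np.linalg.norm(truth - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))   # ≈ [3. 4.]
```

    In a PTP-based system the ranges themselves come from precisely timestamped message exchanges, so clock synchronization accuracy translates directly into range (and hence position) accuracy.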

  7. Multi-GNSS real-time precise orbit/clock/UPD products and precise positioning service at GFZ

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Liu, Yang; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2016-04-01

    The rapid development of multi-constellation GNSSs (Global Navigation Satellite Systems, e.g., BeiDou, Galileo, GLONASS, GPS) and the IGS (International GNSS Service) Multi-GNSS Experiment (MGEX) brings great opportunities and challenges for real-time precise positioning services. In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSSs together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using MGEX and IGS data streams, including stations all over the world. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly decreased, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff.

  8. Science 101: How Do Atomic Clocks Work?

    ERIC Educational Resources Information Center

    Science and Children, 2008

    2008-01-01

    You might be wondering why in the world we need such precise measures of time. Well, many systems we use everyday, such as Global Positioning Systems, require precise synchronization of time. This comes into play in telecommunications and wireless communications, also. For purely scientific reasons, we can use precise measurement of time to…

  9. An empirical attempt to measure NRM lock-in depth in organic-rich varved lake sediments

    NASA Astrophysics Data System (ADS)

    Snowball, Ian; Lougheed, Bryan C.; Mellström, Anette

    2014-05-01

    The growing awareness of significant magnetosomal contributions to natural assemblages of magnetic minerals means that much remains to be discovered about how sediments become magnetised by the geomagnetic field and, therefore, the fidelity of the information provided by post-depositional remanent magnetisations (pDRMs). We have investigated the palaeomagnetic properties of organic-rich varves retrieved from Gyltigesjön (southern Sweden). An earlier study of this site by Snowball et al. (2013) compared centennial-millennial trends in inclination, declination and relative paleointensity (RPI) to a regional reference curve, which indicated that the natural remanent magnetisation (NRM) lock-in depth is at least 21 cm. This result prompted us to attempt to improve the recovery of the uppermost sediments and magnetically characterise them to assess the effect of consolidation on NRM acquisition. Fixed piston cores recovered in 2 m drives were kept vertical before capping, and discrete palaeomagnetic subsamples were obtained as close as possible to the sediment-water interface. The timescale was validated by establishing the concentration of lead (Pb) in the palaeomagnetic samples and comparing the downcore trends to the well-known regional atmospheric pollution history. Induced magnetic remanence and magnetic grain-size parameters (including the median destructive field of the anhysteretic remanent magnetization [mdfARM]) show that the concentration of single-domain magnetite grains (magnetosomes) is relatively uniform in the sediments, suggesting that they are produced in the water column. However, the mdfNRM in the uppermost sediment is several mT lower than the mdfARM (approx. 45 mT). The mdfNRM increases downcore and it agrees with the mdfARM at a depth of approx. 80 cm, which corresponds to an age of ca. 210 yrs. 
These observations suggest that a coarse grained clastic component contributes to the NRM close to the sediment surface, while magnetite magnetosomes become more important deeper down, which should cause smoothing of the palaeomagnetic signal. Despite the care we took, the sediment type made it practically impossible to recover precisely oriented subsamples for measurements of palaeomagnetic secular variation (PSV), and scattered results were produced. This empirical study emphasises the fact that a significant palaeomagnetic lock-in delay applies to organic-rich varves, in which magnetite magnetosomes are preserved.

  10. Research on the tool holder mode in high speed machining

    NASA Astrophysics Data System (ADS)

    Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao

    2018-03-01

    High speed machining technology can improve processing efficiency and precision while reducing processing cost, and is therefore widely regarded in the industry. With the extensive application of high-speed machining technology, high-speed tool systems place ever higher requirements on the tool chuck. At present, several new kinds of chucks are used in high speed precision machining, including the heat-shrinkage tool holder, the high-precision spring chuck, the hydraulic tool holder, and the three-rib deformation chuck. Among them, the heat-shrinkage tool holder has the advantages of high precision, high clamping force, high bending rigidity and good dynamic balance, and is widely used. It is therefore of great significance to study the new requirements on the machining tool system. To meet the requirements of high speed precision machining, this paper reviews the common tool holder technologies for high precision machining and proposes how to correctly select a tool clamping system in practice. The characteristics of, and existing problems in, tool clamping systems are analyzed.

  11. Genetic programming based ensemble system for microarray data classification.

    PubMed

    Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To

    2015-01-01

    Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.
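
    The balanced subsampling step used to diversify the base classifiers can be sketched generically (this is an illustration of the idea, not the authors' code): each base classifier is trained on an equal number of samples drawn from every class, which both balances the classes and gives each ensemble member a slightly different view of the data.

```python
import random

def balanced_subsample(X, y, n_per_class, rng=random.Random(0)):
    """Draw up to `n_per_class` samples from each class, without
    replacement, so every base classifier sees a balanced subset."""
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    Xs, ys = [], []
    for label, items in sorted(by_class.items()):
        for xi in rng.sample(items, min(n_per_class, len(items))):
            Xs.append(xi)
            ys.append(label)
    return Xs, ys

X = list(range(100))
y = [0] * 90 + [1] * 10        # imbalanced: 90 vs 10
Xs, ys = balanced_subsample(X, y, n_per_class=10)
print(ys.count(0), ys.count(1))  # → 10 10
```

    Repeating the draw with different seeds yields the differing training sets that make ensemble voting worthwhile.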

  12. Genetic Programming Based Ensemble System for Microarray Data Classification

    PubMed Central

    Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To

    2015-01-01

    Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved. PMID:25810748

  13. IEEE-1588(Trademark) Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems

    DTIC Science & Technology

    2002-12-01

    34th Annual Precise Time and Time Interval (PTTI) Meeting, 243. IEEE-1588™ STANDARD FOR A PRECISION CLOCK SYNCHRONIZATION PROTOCOL FOR... synchronization. 2. Cyclic-systems. In cyclic-systems, timing is periodic and is usually defined by the characteristics of a cyclic network or bus... incommensurate, timing schedules for each device are easily implemented. In addition, synchronization accuracy depends on the accuracy of the common

  14. Compositional characteristics of some Apollo 14 clastic materials.

    NASA Technical Reports Server (NTRS)

    Lindstrom, M. M.; Duncan, A. R.; Fruchter, J. S.; Mckay, S. M.; Stoeser, J. W.; Goles, G. G.; Lindstrom, D. J.

    1972-01-01

    Eighty-two subsamples of Apollo 14 materials have been analyzed by instrumental neutron activation analysis techniques for as many as 25 elements. In many cases, it was necessary to develop new procedures to allow analyses of small specimens. Compositional relationships among Apollo 14 materials indicate that there are small but systematic differences between regolith from the valley terrain and that from Cone Crater ejecta. Fragments from 1-2 mm size fractions of regolith samples may be divided into compositional classes, and the 'soil breccias' among them are very similar to valley soils. Multicomponent linear mixing models have been used as interpretive tools in dealing with data on regolith fractions and subsamples from breccia 14321. These mixing models show systematic compositional variations with inferred age for Apollo 14 clastic materials.
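
    A multicomponent linear mixing model of the kind mentioned reduces to least squares: the sample composition is modeled as a weighted sum of end-member compositions. The end-member numbers below are invented for illustration, and a real petrological model would also enforce non-negative proportions summing to one:

```python
import numpy as np

# Rows = elements, columns = end-member components (illustrative values).
end_members = np.array([
    [10.0, 200.0],
    [50.0,  20.0],
    [ 5.0,  80.0],
])

def mixing_proportions(sample, components):
    """Least-squares proportions of each end-member that best reproduce
    the sample composition (unconstrained sketch)."""
    p, *_ = np.linalg.lstsq(components, sample, rcond=None)
    return p

true_p = np.array([0.3, 0.7])
sample = end_members @ true_p
print(mixing_proportions(sample, end_members))  # ≈ [0.3 0.7]
```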

  15. The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating

    NASA Astrophysics Data System (ADS)

    Vischia, Pietro; Dorigo, Tommaso

    2017-03-01

    For data sets populated by a very well modeled process and by another process of unknown probability density function (PDF), a desired feature when manipulating the fraction of the unknown process (either enhancing or suppressing it) is to avoid modifying the kinematic distributions of the well modeled one. A bootstrap technique is used to identify sub-samples rich in the well modeled process, and to classify each event according to the frequency with which it appears in such sub-samples. Comparisons with general MVA algorithms will be shown, as well as a study of the asymptotic properties of the method, making use of a public domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).

  16. Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  17. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
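
    The near-zero extra runtime of the resampling comes from reusing the statistic already measured on each subsample. A minimal sketch of the leave-one-out jackknife over sky subsamples (generic formula, not this code's API):

```python
import numpy as np

def jackknife(stat_per_subsample):
    """Leave-one-out jackknife mean and error from a statistic measured
    on N subsamples (e.g. HEALPix regions of similar area)."""
    vals = np.asarray(stat_per_subsample, float)
    n = len(vals)
    loo = (vals.sum() - vals) / (n - 1)       # leave-one-out means
    mean = loo.mean()
    err = np.sqrt((n - 1) / n * np.sum((loo - mean) ** 2))
    return mean, err

mean, err = jackknife([1.0, 1.2, 0.9, 1.1, 1.05, 0.95])
print(round(mean, 3), round(err, 3))   # → 1.033 0.044
```

    The (n-1)/n prefactor accounts for the strong correlation between the leave-one-out estimates.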

  18. Evidence for νμ→ντ appearance in the CNGS neutrino beam with the OPERA experiment

    NASA Astrophysics Data System (ADS)

    Agafonova, N.; Aleksandrov, A.; Anokhina, A.; Aoki, S.; Ariga, A.; Ariga, T.; Asada, T.; Autiero, D.; Ben Dhahbi, A.; Badertscher, A.; Bender, D.; Bertolin, A.; Bozza, C.; Brugnera, R.; Brunet, F.; Brunetti, G.; Buonaura, A.; Buontempo, S.; Büttner, B.; Chaussard, L.; Chernyavsky, M.; Chiarella, V.; Chukanov, A.; Consiglio, L.; D'Ambrosio, N.; de Lellis, G.; de Serio, M.; Del Amo Sanchez, P.; di Crescenzo, A.; di Ferdinando, D.; di Marco, N.; Dmitrievski, S.; Dracos, M.; Duchesneau, D.; Dusini, S.; Dzhatdoev, T.; Ebert, J.; Ereditato, A.; Favier, J.; Ferber, T.; Ferone, G.; Fini, R. A.; Fukuda, T.; Galati, G.; Garfagnini, A.; Giacomelli, G.; Goellnitz, C.; Goldberg, J.; Gornushkin, Y.; Grella, G.; Grianti, F.; Guler, M.; Gustavino, C.; Hagner, C.; Hakamata, K.; Hara, T.; Hayakawa, T.; Hierholzer, M.; Hollnagel, A.; Hosseini, B.; Ishida, H.; Ishiguro, K.; Ishikawa, M.; Jakovcic, K.; Jollet, C.; Kamiscioglu, C.; Kamiscioglu, M.; Katsuragawa, T.; Kawada, J.; Kawahara, H.; Kim, J. H.; Kim, S. H.; Kimura, M.; Kitagawa, N.; Klicek, B.; Kodama, K.; Komatsu, M.; Kose, U.; Kreslo, I.; Lauria, A.; Lenkeit, J.; Ljubicic, A.; Longhin, A.; Loverre, P.; Malgin, A.; Mandrioli, G.; Marteau, J.; Matsuo, T.; Matveev, V.; Mauri, N.; Medinaceli, E.; Meregaglia, A.; Migliozzi, P.; Mikado, S.; Miyanishi, M.; Miyashita, E.; Monacelli, P.; Montesi, M. C.; Morishima, K.; Muciaccia, M. T.; Naganawa, N.; Naka, T.; Nakamura, M.; Nakano, T.; Nakatsuka, Y.; Niwa, K.; Ogawa, S.; Okateva, N.; Olshevsky, A.; Omura, T.; Ozaki, K.; Paoloni, A.; Park, B. D.; Park, I. G.; Pastore, A.; Patrizii, L.; Pennacchio, E.; Pessard, H.; Pistillo, C.; Podgrudkov, D.; Polukhina, N.; Pozzato, M.; Pretzl, K.; Pupilli, F.; Rescigno, R.; Roda, M.; Rokujo, H.; Roganova, T.; Rosa, G.; Rostovtseva, I.; Rubbia, A.; Ryazhskaya, O.; Sato, O.; Sato, Y.; Schembri, A.; Schmidt-Parzefal, W.; Shakiryanova, I.; Shchedrina, T.; Sheshukov, A.; Shibuya, H.; Shiraishi, T.; Shoziyoev, G.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Spinetti, M.; Stanco, L.; Starkov, N.; Stellacci, S. M.; Stipcevic, M.; Strauss, T.; Strolin, P.; Suzuki, K.; Takahashi, S.; Tenti, M.; Terranova, F.; Tioukov, V.; Tufanli, S.; Vilain, P.; Vladimirov, M.; Votano, L.; Vuilleumier, J. L.; Wilquet, G.; Wonsak, B.; Yoon, C. S.; Yoshida, J.; Yoshimoto, M.; Zaitsev, Y.; Zemskova, S.; Zghiche, A.; Opera Collaboration

    2014-03-01

    The OPERA experiment is designed to search for νμ→ντ oscillations in appearance mode, i.e., through the direct observation of the τ lepton in ντ-charged current interactions. The experiment has taken data for five years, since 2008, with the CERN Neutrino to Gran Sasso beam. Previously, two ντ candidates with a τ decaying into hadrons were observed in a subsample of data of the 2008-2011 runs. Here we report the observation of a third ντ candidate in the τ-→μ- decay channel coming from the analysis of a subsample of the 2012 run. Taking into account the estimated background, the absence of νμ→ντ oscillations is excluded at the 3.4 σ level.

  19. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends.

    PubMed

    Smith, Andrew B; Lloyd, Graeme T; McGowan, Alistair J

    2012-11-07

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling: a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches.
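
    The subsampling standardization described above is classical rarefaction: draw a fixed quota of fossil occurrences from each interval and count distinct taxa, so richness no longer scales with collection effort. A minimal sketch with invented occurrence data:

```python
import random

def rarefied_richness(occurrences, quota, n_trials=200,
                      rng=random.Random(42)):
    """Mean number of distinct taxa seen when drawing a fixed quota of
    occurrences without replacement (classical rarefaction)."""
    total = 0
    for _ in range(n_trials):
        total += len(set(rng.sample(occurrences, quota)))
    return total / n_trials

# Interval A is heavily sampled, interval B is not: raw richness is biased.
a = ["sp%d" % (i % 30) for i in range(600)]   # 30 taxa, 600 occurrences
b = ["sp%d" % (i % 25) for i in range(100)]   # 25 taxa, 100 occurrences
print(len(set(a)), len(set(b)))               # → 30 25
print(rarefied_richness(a, 50), rarefied_richness(b, 50))
```

    At a common quota of 50 occurrences the two intervals come out much closer than their raw counts suggest, which is the point of standardizing for collection intensity.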

  20. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    PubMed Central

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. 
Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
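The accuracy metric used throughout this abstract can be illustrated with a toy sketch: compute the root-mean-square error (RMSE) between a reconstructed temperature map and the fully sampled "truth". The hot-spot geometry and noise level below are invented for illustration, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully sampled "truth": a simulated 10 degC hot spot at the HIFU focus.
truth = np.zeros((32, 32))
truth[14:18, 14:18] = 10.0

# Stand-in for a subsampled TCR reconstruction: truth plus noise.
recon = truth + rng.normal(scale=0.3, size=truth.shape)

rmse = np.sqrt(np.mean((recon - truth) ** 2))
print(f"RMSE vs fully sampled truth: {rmse:.2f} degC")
```

An RMSE near the per-voxel noise level (about 0.3 degC here) is what separates the accurate sequential schemes (0.35-0.65 degC) from the biased centric ones (above 1 degC) in the study.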

  1. Optimal reference polarization states for the calibration of general Stokes polarimeters in the presence of noise

    NASA Astrophysics Data System (ADS)

    Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui

    2018-07-01

    During calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and the pseudo-inverse estimation method, the measured intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. An analytical solution for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
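The pseudo-inverse estimation step named in the abstract can be sketched as follows. The system matrix, the set of RPSs, and the noise level are all illustrative values, not a real polarimeter calibration: with known reference Stokes vectors S (columns) and measured intensities I = W S + noise, the system matrix is recovered as W_hat = I pinv(S).

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" system matrix of a 4-channel Stokes polarimeter (illustrative).
W_true = rng.uniform(-1, 1, size=(4, 4))

# Reference polarization states: columns are known Stokes vectors,
# with the intensity component normalized to 1.
S = rng.uniform(-1, 1, size=(4, 6))
S[0] = 1.0

# Measured intensities with small additive Gaussian noise.
I = W_true @ S + rng.normal(scale=1e-3, size=(4, 6))

# Pseudo-inverse estimation of the system matrix.
W_hat = I @ np.linalg.pinv(S)
err = np.max(np.abs(W_hat - W_true))
print(f"max element error: {err:.4f}")
```

The paper's contribution is choosing the columns of S so that this estimate degrades as little as possible under noise; well-conditioned RPSs keep pinv(S) from amplifying the intensity errors.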

  2. High precision applications of the global positioning system

    NASA Technical Reports Server (NTRS)

    Lichten, Stephen M.

    1991-01-01

    The Global Positioning System (GPS) is a constellation of U.S. defense navigation satellites which can be used for military and civilian positioning applications. A wide variety of GPS scientific applications were identified and precise positioning capabilities with GPS were already demonstrated with data available from the present partial satellite constellation. Expected applications include: measurements of Earth crustal motion, particularly in seismically active regions; measurements of the Earth's rotation rate and pole orientation; high-precision Earth orbiter tracking; surveying; measurements of media propagation delays for calibration of deep space radiometric data in support of NASA planetary missions; determination of precise ground station coordinates; and precise time transfer worldwide.

  3. Orbit determination of the Next-Generation Beidou satellites with Intersatellite link measurements and a priori orbit constraints

    NASA Astrophysics Data System (ADS)

    Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe

    2017-11-01

    Intersatellite Link (ISL) technology helps to realize autonomous updating of broadcast ephemeris and clock error parameters for a Global Navigation Satellite System (GNSS). ISLs constitute an important approach to both improving the observation geometry and extending the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination can lead to constellation drift and rotation, and even to divergence of the orbit determination. Fortunately, predicted orbits of good precision can be used as a priori information to constrain the estimated satellite orbit parameters. The precision of autonomous satellite orbit determination can therefore be improved by incorporating a priori orbit information, although rotation and translation errors in the a priori orbit will remain in the final result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements, which eliminates satellite clock errors, is presented, and the orbit determination precision is analyzed under different data processing scenarios. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of the determined BDS orbits is improved by constraints from more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry, tracking, and control system), and better than 6 m with an a priori orbit constraint of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring and Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs. 
Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane link and out-of-plane link have different contributions to observation configuration and system observability. The POD with weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
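The core idea of constraining a weakly observed solution with a priori orbits can be illustrated with a generic prior-constrained least-squares sketch. This is not the paper's ISL observation model; the design matrix, prior precision, and dimensions are invented to show only the mechanism of adding the prior as pseudo-observations.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
x_true = rng.normal(size=n)

# Too few "links": 2 observations for 4 unknowns (underdetermined alone).
A = rng.normal(size=(2, n))
b = A @ x_true

# A priori solution (e.g., a predicted orbit) with ~0.1 precision.
sigma = 0.1
x_prior = x_true + rng.normal(scale=sigma, size=n)

# Append the prior as weighted pseudo-observations, then solve jointly.
A_aug = np.vstack([A, np.eye(n) / sigma])
b_aug = np.concatenate([b, x_prior / sigma])
x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

print(f"constrained estimate error: {np.linalg.norm(x_hat - x_true):.3f}")
```

Without the prior rows, directions in the null space of A are unobservable; the prior pins them down, which mirrors how a priori orbits suppress constellation drift and rotation in ISL-only POD.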

  4. Preschoolers' precision of the approximate number system predicts later school mathematics performance.

    PubMed

    Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin

    2011-01-01

    The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities.

  5. Preschoolers' Precision of the Approximate Number System Predicts Later School Mathematics Performance

    PubMed Central

    Mazzocco, Michèle M. M.; Feigenson, Lisa; Halberda, Justin

    2011-01-01

    The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities. PMID:21935362

  6. Optimizing collection of adverse event data in cancer clinical trials supporting supplemental indications.

    PubMed

    Kaiser, Lee D; Melemed, Allen S; Preston, Alaknanda J; Chaudri Ross, Hilary A; Niedzwiecki, Donna; Fyfe, Gwendolyn A; Gough, Jacqueline M; Bushnell, William D; Stephens, Cynthia L; Mace, M Kelsey; Abrams, Jeffrey S; Schilsky, Richard L

    2010-12-01

    Although much is known about the safety of an anticancer agent at the time of initial marketing approval, sponsors customarily collect comprehensive safety data for studies that support supplemental indications. This adds significant cost and complexity to the study but may not provide useful new information. The main purpose of this analysis was to assess the amount of safety and concomitant medication data collected to determine a more optimal approach in the collection of these data when used in support of supplemental applications. Following a prospectively developed statistical analysis plan, we reanalyzed safety data from eight previously completed prospective randomized trials. A total of 107,884 adverse events and 136,608 concomitant medication records were reviewed for the analysis. Of these, four grade 1 to 2 and nine grade 3 and higher events were identified as drug effects that were not included in the previously established safety profiles and could potentially have been missed using subsampling. These events were frequently detected in subsamples of 400 patients or larger. Furthermore, none of the concomitant medication records contributed to labeling changes for the supplemental indications. Our study found that applying the optimized methodologic approach, described herein, has a high probability of detecting new drug safety signals. Focusing data collection on signals that cause physicians to modify or discontinue treatment ensures that safety issues of the highest concern for patients and regulators are captured and has significant potential to relieve strain on the clinical trials system.
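A back-of-envelope calculation suggests why subsamples of roughly 400 patients detect most new drug effects: the probability of observing at least one event of true incidence p in a subsample of n patients is 1 - (1 - p)^n. The incidence values below are illustrative, not taken from the trials analyzed.

```python
def detection_probability(p: float, n: int) -> float:
    """Probability of seeing at least one event of incidence p in n patients."""
    return 1 - (1 - p) ** n

for p in (0.005, 0.01, 0.02):
    print(f"incidence {p:.3f}: detection prob with n=400 is "
          f"{detection_probability(p, 400):.3f}")
```

Even a 1% incidence event is detected with probability above 0.98 in a subsample of 400, consistent with the abstract's observation that the missed events "were frequently detected in subsamples of 400 patients or larger."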

  7. Estimation of Global Network Statistics from Incomplete Data

    PubMed Central

    Bliss, Catherine A.; Danforth, Christopher M.; Dodds, Peter Sheridan

    2014-01-01

    Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We generate scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week. PMID:25338183
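The scaling idea can be shown in its simplest form: under uniform edge sampling with a known proportion p, the observed mean degree underestimates the true mean degree by exactly the factor p, so dividing by p recovers an unbiased estimate. The paper's methods cover richer statistics; this toy sketch (invented network, simplest estimator) shows only that base case.

```python
import random

random.seed(3)

# Build a random simple network with n nodes and m links.
n, m = 1000, 5000
edges = set()
while len(edges) < m:
    u, v = random.sample(range(n), 2)
    edges.add((min(u, v), max(u, v)))

true_mean_degree = 2 * m / n  # 10.0

# Observe only a fraction p of the links (uniform edge subsampling).
p = 0.2
sampled = [e for e in edges if random.random() < p]
observed_mean_degree = 2 * len(sampled) / n

# Scale the subsampled statistic back up to estimate the true value.
estimate = observed_mean_degree / p
print(true_mean_degree, round(estimate, 2))
```

Estimating the full degree *distribution* (rather than its mean) or correcting for missing nodes requires the more careful scaling arguments developed in the paper.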

  8. High precision NC lathe feeding system rigid-flexible coupling model reduction technology

    NASA Astrophysics Data System (ADS)

    Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai

    2017-08-01

    This paper proposes the use of the dynamic substructure method of model-order reduction to achieve effective reduction of the rigid-flexible coupling model of a high-precision NC lathe feeding system. ADAMS is used to establish the rigid-flexible coupling simulation model of the high-precision NC lathe, and vibration simulation with the FD 3D damper shows that reduction of the multi-degree-of-freedom model of the bolted connections in the feed system is very effective. The vibration simulation calculation becomes both more accurate and faster.

  9. High precision detector robot arm system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Deming; Chu, Yong

    A method and high precision robot arm system are provided, for example, for X-ray nanodiffraction with an X-ray nanoprobe. The robot arm system includes duo-vertical-stages and a kinematic linkage system. A two-dimensional (2D) vertical plane ultra-precision robot arm supporting an X-ray detector provides positioning and manipulating of the X-ray detector. A vertical support for the 2D vertical plane robot arm includes spaced apart rails respectively engaging a first bearing structure and a second bearing structure carried by the 2D vertical plane robot arm.

  10. Promoting new concepts of skincare via skinomics and systems biology-From traditional skincare and efficacy-based skincare to precision skincare.

    PubMed

    Jiang, Biao; Jia, Yan; He, Congfen

    2018-05-11

    Traditional skincare involves the subjective classification of skin into 4 categories (oily, dry, mixed, and neutral) prior to skin treatment. Following the development of noninvasive methods in skin and skin imaging technology, scientists have developed efficacy-based skincare products based on the physiological characteristics of skin under different conditions. Currently, the emergence of skinomics and systems biology has facilitated the development of precision skincare. In this article, the evolution of skincare based on the physiological states of the skin (from traditional skincare and efficacy-based skincare to precision skincare) is described. In doing so, we highlight skinomics and systems biology, with particular emphasis on the importance of skin lipidomics and microbiomes in precision skincare. The emerging trends of precision skincare are anticipated. © 2018 Wiley Periodicals, Inc.

  11. Clinical evaluation of the FreeStyle Precision Pro system.

    PubMed

    Brazg, Ronald; Hughes, Kristen; Martin, Pamela; Coard, Julie; Toffaletti, John; McDonnell, Elizabeth; Taylor, Elizabeth; Farrell, Lausanne; Patel, Mona; Ward, Jeanne; Chen, Ting; Alva, Shridhara; Ng, Ronald

    2013-06-05

    New versions of the international standard (ISO 15197) and the CLSI guideline (POCT12), with more stringent accuracy criteria, are near publication. We evaluated the glucose test performance of the FreeStyle Precision Pro system, a new blood glucose monitoring system (BGMS) designed to enhance accuracy for point-of-care testing (POCT). Precision, interference and system accuracy with 503 blood samples from capillary, venous and arterial sources were evaluated in a multicenter study. Study results were analyzed and presented in accordance with the specifications and recommendations of the final draft ISO 15197 and the new POCT12. The FreeStyle Precision Pro system demonstrated acceptable precision (CV <5%), no interference across a hematocrit range of 15-65%, and, except for xylose, no interference from 24 of 25 potentially interfering substances. It also met all accuracy criteria specified in the final draft ISO 15197 and POCT12, with 97.3-98.9% of the individual results of various blood sample types agreeing within ±12 mg/dl of the laboratory analyzer values at glucose concentrations <100 mg/dl and within ±12.5% of the laboratory analyzer values at glucose concentrations ≥100 mg/dl. The FreeStyle Precision Pro system met the tighter accuracy requirements, providing a means for enhancing accuracy for point-of-care blood glucose monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Delay times of a LiDAR-guided precision sprayer control system

    USDA-ARS?s Scientific Manuscript database

    Accurate flow control systems in triggering sprays against detected targets are needed for precision variable-rate sprayer development. System delay times due to the laser-sensor data buffer, software operation, and hydraulic-mechanical component response were determined for a control system used fo...

  13. Data and Time Transfer Using SONET Radio

    NASA Technical Reports Server (NTRS)

    Graceffo, Gary M.

    1996-01-01

    The need for precise knowledge of time and frequency has become ubiquitous throughout our society. The areas of astronomy, navigation, and high speed wide-area networks are among a few of the many consumers of this type of information. The Global Positioning System (GPS) has the potential to be the most comprehensive source of precise timing information developed to date; however, the introduction of selective availability has made it difficult for many users to recover this information from the GPS system with the precision required for today's systems. The system described in this paper is a Synchronous Optical NETwork (SONET) Radio Data and Time Transfer System. The objective of this system is to provide precise time and frequency information to a variety of end-users using a two-way data and time-transfer system. Although time and frequency transfers have been done for many years, this system is unique in that time and frequency information are embedded into existing communications traffic. This eliminates the need to make the transfer of time and frequency information a dedicated function of the communications system. For this system SONET has been selected as the transport format from which precise time is derived. SONET has been selected because of its high data rates and its increasing acceptance throughout the industry. This paper details a proof-of-concept initiative to perform embedded time and frequency transfers using SONET Radio.

  14. Examination Of Sulfur Measurements In DWPF Sludge Slurry And SRAT Product Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.; Wiedenman, B. J.

    2012-11-29

    Savannah River National Laboratory (SRNL) was asked to re-sample the received SB7b WAPS material for wt. % solids, perform an aqua regia digestion and analyze the digested material by inductively coupled plasma - atomic emission spectroscopy (ICP-AES), as well as re-examine the supernate by ICP-AES. The new analyses were requested in order to provide confidence that the initial analytical subsample was representative of the Tank 40 sample received and to replicate the S results obtained on the initial subsample collected. The ICP-AES analyses for S were examined with both axial and radial detection of the sulfur ICP-AES spectroscopic emission lines to ascertain if there was any significant difference in the reported results. The outcome of this second subsample of the Tank 40 WAPS material is the first subject of this report. After examination of the data from the new subsample of the SB7b WAPS material, a team of DWPF and SRNL staff looked for ways to address the question of whether there was in fact insoluble S that was not being accounted for by ion chromatography (IC) analysis. The question of how much S is reaching the melter was thought best addressed by examining a DWPF Slurry Mix Evaporator (SME) Product sample, but the significant dilution of sludge material, containing the S species in question, that results from frit addition was believed to add additional uncertainty to the S analysis of SME Product material. At the time of these discussions it was believed that all S present in a Sludge Receipt and Adjustment Tank (SRAT) Receipt sample would be converted to sulfate during the course of the SRAT cycle. 
A SRAT Product sample would not have the S dilution effect resulting from frit addition, and hence, it was decided that a DWPF SRAT Product sample would be obtained and submitted to SRNL for digestion and sample preparation followed by a round-robin analysis of the prepared samples by the DWPF Laboratory, F/H Laboratories, and SRNL for S and sulfate. The results of this round-robin analytical study are the second subject of this report.

  15. Disparities in dietary intake and physical activity patterns across the urbanization divide in the Peruvian Andes.

    PubMed

    McCloskey, Morgan L; Tarazona-Meza, Carla E; Jones-Smith, Jessica C; Miele, Catherine H; Gilman, Robert H; Bernabe-Ortiz, Antonio; Miranda, J Jaime; Checkley, William

    2017-07-11

    Diet and activity are thought to worsen with urbanization, thereby increasing risk of obesity and chronic diseases. A better understanding of dietary and activity patterns across the urbanization divide may help identify pathways, and therefore intervention targets, leading to the epidemic of overweight seen in low- and middle-income populations. Therefore, we sought to characterize diet and activity in a population-based study of urban and rural residents in Puno, Peru. We compared diet and activity in 1005 (503 urban, 502 rural) participants via a lifestyle questionnaire. We then recruited an age- and sex-stratified random sample of 50 (25 urban, 25 rural) participants to further characterize diet and activity. Among these participants, diet composition and macronutrient intake were assessed by three non-consecutive 24-h dietary recalls and physical activity was assessed using Omron JH-720itc pedometers. Among 1005 participants, we found that urban residents consumed protein-rich foods, refined grains, sugary items, and fresh produce more frequently than rural residents. Among the 50 subsample participants, urban dwellers consumed more protein (47 vs. 39 g; p = 0.05), more carbohydrates (280 vs. 220 g; p = 0.03), more sugary foods (98 vs. 48 g, p = 0.02) and had greater dietary diversity (6.4 vs 5.8; p = 0.04). Rural subsample participants consumed more added salt (3.1 vs 1.7 g, p = 0.006) and tended to consume more vegetable oil. As estimated by pedometers, urban subsample participants burned fewer calories per day (191 vs 270 kcal, p = 0.03). Although urbanization is typically thought to increase consumption of fat, sugar and salt, our 24-h recall results were mixed and showed that lower levels of obesity in rural Puno were not necessarily indicative of nutritionally-balanced diets. 
All subsample participants had relatively traditional lifestyles (low fat intake, limited consumption of processed foods and frequent walking) that may play a role in chronic disease outcomes in this region.

  16. Subsampled open-reference clustering creates consistent, comprehensive OTU definitions and scales to billions of sequences.

    PubMed

    Rideout, Jai Ram; He, Yan; Navas-Molina, Jose A; Walters, William A; Ursell, Luke K; Gibbons, Sean M; Chase, John; McDonald, Daniel; Gonzalez, Antonio; Robbins-Pianka, Adam; Clemente, Jose C; Gilbert, Jack A; Huse, Susan M; Zhou, Hong-Wei; Knight, Rob; Caporaso, J Gregory

    2014-01-01

    We present a performance-optimized algorithm, subsampled open-reference OTU picking, for assigning marker gene (e.g., 16S rRNA) sequences generated on next-generation sequencing platforms to operational taxonomic units (OTUs) for microbial community analysis. This algorithm provides benefits over de novo OTU picking (clustering can be performed largely in parallel, reducing runtime) and closed-reference OTU picking (all reads are clustered, not only those that match a reference database sequence with high similarity). Because more of our algorithm can be run in parallel relative to "classic" open-reference OTU picking, it makes open-reference OTU picking tractable on massive amplicon sequence data sets (though on smaller data sets, "classic" open-reference OTU clustering is often faster). We illustrate that here by applying it to the first 15,000 samples sequenced for the Earth Microbiome Project (1.3 billion V4 16S rRNA amplicons). To the best of our knowledge, this is the largest OTU picking run ever performed, and we estimate that our new algorithm runs in less than 1/5 the time than would be required of "classic" open reference OTU picking. We show that subsampled open-reference OTU picking yields results that are highly correlated with those generated by "classic" open-reference OTU picking through comparisons on three well-studied datasets. An implementation of this algorithm is provided in the popular QIIME software package, which uses uclust for read clustering. All analyses were performed using QIIME's uclust wrappers, though we provide details (aided by the open-source code in our GitHub repository) that will allow implementation of subsampled open-reference OTU picking independently of QIIME (e.g., in a compiled programming language, where runtimes should be further reduced). Our analyses should generalize to other implementations of these OTU picking algorithms. 
Finally, we present a comparison of parameter settings in QIIME's OTU picking workflows and make recommendations on settings for these free parameters to optimize runtime without reducing the quality of the results. These optimized parameters can vastly decrease the runtime of uclust-based OTU picking in QIIME.
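The open-reference split described above (closed-reference assignment first, de novo clustering for the remainder) can be sketched in miniature. This is not QIIME or uclust; the sequences, similarity measure (a naive identity fraction over equal-length toy "reads"), and threshold are all invented for illustration.

```python
def similarity(a: str, b: str) -> float:
    """Naive identity fraction between two equal-length toy reads."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

reference = {"ref1": "ACGTACGT", "ref2": "TTGGCCAA"}
reads = ["ACGTACGA", "TTGGCCAA", "GGGGGGGG", "GGGGGGGA"]
THRESHOLD = 0.85

# Closed-reference step: assign reads matching a reference OTU well enough.
closed, leftovers = {}, []
for read in reads:
    score, name = max((similarity(read, seq), name)
                      for name, seq in reference.items())
    if score >= THRESHOLD:
        closed.setdefault(name, []).append(read)
    else:
        leftovers.append(read)

# De novo step: greedy seeded clustering of the unmatched reads.
denovo = []  # list of (seed, members)
for read in leftovers:
    for seed, members in denovo:
        if similarity(read, seed) >= THRESHOLD:
            members.append(read)
            break
    else:
        denovo.append((read, [read]))

print(closed)       # {'ref1': ['ACGTACGA'], 'ref2': ['TTGGCCAA']}
print(len(denovo))  # 1 de novo OTU containing both G-rich reads
```

The algorithm's speedup comes from the closed-reference step parallelizing trivially; the subsampling refinement in the paper further shrinks the serial de novo step by clustering only a subsample of the leftovers and using those clusters as new references.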

  17. Precise point positioning with the BeiDou navigation satellite system.

    PubMed

    Li, Min; Qu, Lizhong; Zhao, Qile; Guo, Jing; Su, Xing; Li, Xiaotao

    2014-01-08

    By the end of 2012, China had launched 16 BeiDou-2 navigation satellites that include six GEOs, five IGSOs and five MEOs. This has provided initial navigation and precise positioning services in the Asia-Pacific region. In order to assess the navigation and positioning performance of the BeiDou-2 system, Wuhan University has built up a network of BeiDou Experimental Tracking Stations (BETS) around the world. The Position and Navigation Data Analyst (PANDA) software was modified to determine the orbits of BeiDou satellites and provide precise orbit and satellite clock bias products from the BeiDou satellite system for user applications. This article uses the BeiDou/GPS observations of the BeiDou Experimental Tracking Stations to realize BeiDou and BeiDou/GPS static and kinematic precise point positioning (PPP). The results indicate that the precision of BeiDou static and kinematic PPP reaches centimeter level. The precision of BeiDou/GPS kinematic PPP solutions is improved significantly compared to that of BeiDou-only or GPS-only kinematic PPP solutions. The PPP convergence time also decreases with the use of combined BeiDou/GPS systems.

  18. Precise Point Positioning with the BeiDou Navigation Satellite System

    PubMed Central

    Li, Min; Qu, Lizhong; Zhao, Qile; Guo, Jing; Su, Xing; Li, Xiaotao

    2014-01-01

    By the end of 2012, China had launched 16 BeiDou-2 navigation satellites that include six GEOs, five IGSOs and five MEOs. This has provided initial navigation and precise positioning services in the Asia-Pacific region. In order to assess the navigation and positioning performance of the BeiDou-2 system, Wuhan University has built up a network of BeiDou Experimental Tracking Stations (BETS) around the world. The Position and Navigation Data Analyst (PANDA) software was modified to determine the orbits of BeiDou satellites and provide precise orbit and satellite clock bias products from the BeiDou satellite system for user applications. This article uses the BeiDou/GPS observations of the BeiDou Experimental Tracking Stations to realize BeiDou and BeiDou/GPS static and kinematic precise point positioning (PPP). The results indicate that the precision of BeiDou static and kinematic PPP reaches centimeter level. The precision of BeiDou/GPS kinematic PPP solutions is improved significantly compared to that of BeiDou-only or GPS-only kinematic PPP solutions. The PPP convergence time also decreases with the use of combined BeiDou/GPS systems. PMID:24406856

  19. CompareTests-R package

    Cancer.gov

    CompareTests is an R package to estimate agreement and diagnostic accuracy statistics for two diagnostic tests when one is conducted on only a subsample of specimens. A standard test is observed on all specimens.

  20. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yanq, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    To develop a more precise and accurate method of acupoint location, and to identify a procedure for measuring whether an acupoint has been correctly located. On the face, we used acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values based on a consistency information fusion algorithm, through a virtual simulation of a facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, systematic error affects the general weight calculation. First, we corrected each expert's own systematic error in acupoint location, obtaining a rational quantification of the consistency support degree of each expert's acupoint location and pointwise variable-precision fusion results, so that every expert's acupoint location fusion error is refined to pointwise variable precision. We then made more effective use of the measured characteristics of the different experts' acupoint locations, improving the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  1. Development of low-altitude remote sensing systems for crop production management

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture accounts for within-field variability for targeted treatment rather than uniform treatment of an entire field. Precision agriculture is built on agricultural mechanization and state-of-the-art technologies of geographical information systems (GIS), global positioning systems (G...

  2. Exploring structures of the Rochefort Cave (Belgium) with 3D models from LIDAR scans and UAV photoscans.

    NASA Astrophysics Data System (ADS)

    Watlet, A.; Triantafyllou, A.; Kaufmann, O.; Le Mouelic, S.

    2016-12-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different types of environments. Moreover, the photogrammetric processing workflow of digital images taken with UAVs becomes easier with the rise of many affordable software packages (e.g., Agisoft PhotoScan, MicMac, VisualSFM). In this context, we present a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The main chamber of the cave (~10,000 m³) was the principal target of the study. A LIDAR scan and an UAV photoscan were acquired underground, producing respective 3D models. An additional 3D photoscan was performed at the surface, in the sinkhole in direct connection with the main chamber. The main goal of the project is to combine these different datasets to quantify the orientation of inaccessible geological structures (e.g. faults, tectonic and gravitational joints, and sediment bedding), and to compare them to structural data surveyed in the field. To support structural interpretations, we used a subsampling method merging neighbouring model polygons that have similar orientations, allowing statistical analyses of the polygons' spatial distribution. The benefit of this method is to verify the spatial continuity of in-situ structural measurements at larger scales. Roughness and colorimetric/spectral analyses may also be of great interest for several geosciences purposes by discriminating different facies among the geological beddings. 
Amongst others, this study helped to refine the local petrophysical properties associated with particular geological layers, which improved the interpretation of results from an ERT monitoring of karst hydrological processes in terms of groundwater content.
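    The orientation-based subsampling described above can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the function names, the azimuth/plunge parameterization, and the 10° bin width are all assumptions.

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normal of each triangular face (vertices: (V, 3), faces: (F, 3) indices)."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    n = np.cross(a, b)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def orientation_bins(normals, step_deg=10.0):
    """Group face normals into (azimuth, plunge) bins of `step_deg` degrees,
    returning {(az_bin, plunge_bin): face_count} for statistical analysis
    of the dominant structural orientations."""
    az = np.degrees(np.arctan2(normals[:, 1], normals[:, 0])) % 360.0
    plunge = np.degrees(np.arcsin(np.clip(np.abs(normals[:, 2]), 0.0, 1.0)))
    keys = zip((az // step_deg).astype(int), (plunge // step_deg).astype(int))
    counts = {}
    for k in keys:
        counts[k] = counts.get(k, 0) + 1
    return counts

# Toy mesh: two triangles of a horizontal square -> one dominant orientation
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(orientation_bins(face_normals(verts, faces)))  # both faces fall in one bin
```

    On a real cave model, the modal bins would be compared against compass measurements taken in the field.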

  3. Spatio-temporal filtering techniques for the detection of disaster-related communication.

    PubMed

    Fitzhugh, Sean M; Ben Gibson, C; Spiro, Emma S; Butts, Carter T

    2016-09-01

    Individuals predominantly exchange information with one another through informal, interpersonal channels. During disasters and other disrupted settings, information spread through informal channels regularly outpaces official information provided by public officials and the press. Social scientists have long examined this kind of informal communication in the rumoring literature, but studying rumoring in disrupted settings has posed numerous methodological challenges. Measuring features of informal communication (timing, content, location) with any degree of precision has historically been extremely challenging in small studies and infeasible at large scales. We address this challenge by using online, informal communication from a popular microblogging website for which we have precise spatial and temporal metadata. While the online environment provides a new means for observing rumoring, the abundance of data poses challenges for parsing hazard-related rumoring from countless other topics in numerous streams of communication. Rumoring about disaster events is typically temporally and spatially constrained to places where that event is salient. Accordingly, we use spatial and temporal subsampling to increase the resolution of our detection techniques. By filtering out data from known sources of error (per rumor theories), we greatly enhance the signal of disaster-related rumoring activity. We use these spatio-temporal filtering techniques to detect rumoring during a variety of disaster events, from high-casualty events in major population centers to minimally destructive events in remote areas. We consistently find three phases of response: anticipatory excitation, where warnings and alerts are issued ahead of an event; primary excitation in and around the impacted area; and secondary excitation, which frequently brings a convergence of attention from distant locales onto locations impacted by the event. 
Our results demonstrate the promise of spatio-temporal filtering techniques for "tuning" measurement of hazard-related rumoring to enable observation of rumoring at scales that have long been infeasible. Copyright © 2016 Elsevier Inc. All rights reserved.
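    The core filtering idea can be sketched in a few lines. This is a generic illustration, not the authors' pipeline: the window length, radius, message format, and example coordinates are all assumptions.

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def spatio_temporal_filter(messages, event_time, event_lat, event_lon,
                           window_h=6.0, radius_km=200.0):
    """Keep geotagged messages within `window_h` hours of the event and within
    `radius_km` of its epicentre, discarding off-topic background chatter."""
    keep = []
    for t, lat, lon, text in messages:
        if (abs((t - event_time).total_seconds()) <= window_h * 3600
                and haversine_km(lat, lon, event_lat, event_lon) <= radius_km):
            keep.append((t, lat, lon, text))
    return keep

event = datetime(2014, 8, 24, 10, 20)        # hypothetical event timestamp
msgs = [
    (event + timedelta(minutes=5), 38.2, -122.3, "strong shaking!"),  # near, in window
    (event + timedelta(days=2), 38.2, -122.3, "aftermath photos"),    # out of window
    (event + timedelta(minutes=5), 48.9, 2.3, "good morning"),        # far away
]
print(len(spatio_temporal_filter(msgs, event, 38.22, -122.31)))  # -> 1
```

    In practice the window and radius would be tuned per hazard type, since salience decays at different rates for different events.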

  4. Profiling Systems Using the Defining Characteristics of Systems of Systems (SoS)

    DTIC Science & Technology

    2010-02-01

    …system exhaust and emissions system, gas engine, heating and air conditioning system, fuel system, regenerative braking system, safety system… overcome the limitations of these fuzzy scales, measurement scales are often divided into a relatively small number of disjoint categories so that the… precision is not justified. This lack of precision can typically be addressed by breaking the measurement scale into a set of categories, the use of…

  5. Sensitivity and Calibration of Non-Destructive Evaluation Method That Uses Neural-Net Processing of Characteristic Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  6. Precompetitive achievement goals, stress appraisals, emotions, and coping among athletes.

    PubMed

    Nicholls, Adam R; Perry, John L; Calmeiro, Luis

    2014-10-01

    Grounded in Lazarus's (1991, 1999, 2000) cognitive-motivational-relational theory of emotions, we tested a model of achievement goals, stress appraisals, emotions, and coping. We predicted that precompetitive achievement goals would be associated with appraisals, appraisals with emotions, and emotions with coping in our model. The mediating effects of emotions among the overall sample of 827 athletes and two stratified random subsamples were also explored. The results of this study support our proposed model in the overall sample and the stratified subsamples. Further, emotion mediated the relationship between appraisal and coping. Mediation analyses revealed that there were indirect effects of pleasant and unpleasant emotions, which indicates the importance of examining multiple emotions to reveal a more accurate representation of the overall stress process. Our findings indicate that both appraisals and emotions are just as important in shaping coping.

  7. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    PubMed

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample alone, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and improves comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
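    The likelihood modification can be sketched as follows: counted species contribute the Poisson log-normal probability of their observed count, while incidence-only species contribute only the probability of being present, 1 − P(0). This is a simplified illustration of the idea, not the authors' estimator: the grid integration and the omission of zero-truncation are shortcuts.

```python
import numpy as np
from math import lgamma

def pln_pmf(n, mu, sigma, grid=400):
    """Poisson log-normal pmf P(N = n), integrating numerically over
    log-abundance x, with lam = exp(x) and x ~ Normal(mu, sigma)."""
    x = np.linspace(mu - 8 * sigma, mu + 8 * sigma, grid)
    lam = np.exp(x)
    dens = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    log_pois = n * x - lam - lgamma(n + 1)          # log Poisson pmf at rate lam
    return float(np.sum(np.exp(log_pois) * dens) * (x[1] - x[0]))

def mixture_loglik(counts, n_incidence_only, mu, sigma):
    """Log-likelihood combining counted species (full pmf; zero-truncation
    omitted here for brevity) with incidence-only species, which contribute
    log(1 - P(0))."""
    p0 = pln_pmf(0, mu, sigma)
    ll = sum(np.log(pln_pmf(n, mu, sigma)) for n in counts)
    ll += n_incidence_only * np.log(1.0 - p0)
    return ll

counts = [1, 2, 2, 5, 11]    # individuals counted in the small counted fraction
ll = mixture_loglik(counts, n_incidence_only=7, mu=0.5, sigma=1.2)
print(ll)
```

    Maximising this log-likelihood over (mu, sigma) would then yield the species abundance distribution from which diversity indices are derived.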

  8. Subsampling for dataset optimisation

    NASA Astrophysics Data System (ADS)

    Ließ, Mareike

    2017-04-01

    Soil-landscapes have formed by the interaction of soil-forming factors and pedogenic processes. In modelling these landscapes in their pedodiversity and the underlying processes, a representative unbiased dataset is required. This concerns model input as well as output data. However, very often big datasets are available which are highly heterogeneous and were gathered for various purposes, but not to model a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified including considerations regarding scale and resolution. Then the available dataset needs to be optimised via subsampling to well represent this n-dimensional data space. A couple of well-known sampling designs may be adapted to suit this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by a factor analysis to facilitate the subsampling process. (2) Different methods of pattern recognition serve to structure the n-dimensional data space to be modelled into units which then form the basis for the optimisation of an existing dataset through a sensible selection of samples. Along the way, data units for which there is currently insufficient soil data available may be identified. And (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. While being a presupposition to develop data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
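    Strategy (3), replacing random points in the data space with similar samples from the existing dataset, can be sketched as a nearest-neighbour selection. This is a hypothetical illustration (the normalisation, distance metric, and function name are assumptions, not the author's workflow):

```python
import numpy as np

def subsample_by_target_space(available, n_target, rng=None):
    """Draw random points in the normalised n-dimensional data space and
    replace each with its nearest neighbour from the available dataset.
    `available` is an (N, d) array; returns unique row indices of the subsample."""
    rng = np.random.default_rng(rng)
    lo, hi = available.min(axis=0), available.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    scaled = (available - lo) / span                  # normalise each dimension to [0, 1]
    targets = rng.random((n_target, available.shape[1]))
    # squared distance of every target to every available sample
    d2 = ((scaled[None, :, :] - targets[:, None, :]) ** 2).sum(axis=2)
    return np.unique(d2.argmin(axis=1))

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))                      # heterogeneous legacy dataset
idx = subsample_by_target_space(data, n_target=50, rng=1)
print(idx.size)                                       # at most 50 unique samples
```

    Regions of the data space where the nearest available sample is far away would correspond to the data units identified in (2) as insufficiently covered.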

  9. Community based study of sleep bruxism during early childhood

    PubMed Central

    Insana, Salvatore P.; Gozal, David; McNeil, Daniel W.; Montgomery-Downs, Hawley E.

    2012-01-01

    Objectives The aims for this study were to determine the prevalence of sleep-bruxism among young children, explore child behavior problems that may be associated with sleep-bruxism, and identify relations among sleep-bruxism, health problems, and neurocognitive performance. Methods The current study was a retrospective analysis of parent report surveys, and behavioral and neurocognitive assessments. Parents of 1953 preschool and 2888 first grade children indicated their child’s frequency of bruxism during sleep. A subsample of preschool children (n = 249) had additional behavioral, as well as neurocognitive assessments. Among the subsample, parents also reported on their child’s health, and completed the Child Behavioral Checklist; children were administered the Differential Ability Scales, and Pre-Reading Abilities subtests of the Developmental Neuropsychological Assessment. Results 36.8% of preschoolers and 49.6% of first graders were reported to brux ≥ 1 time per week. Among the preschool subsample, bruxing was independently associated with increased internalizing behaviors (β = .17). Bruxism was also associated with increased health problems (β = .19), and increased health problems were associated with decreased neurocognitive performance (β = .22). Conclusions The prevalence of sleep-bruxism was high. A dynamic and potentially clinically relevant relation exists among sleep-bruxism, internalizing behaviors, health, and neurocognition. Pediatric sleep-bruxism may serve as a sentinel marker for possible adverse health conditions, and signal a need for early intervention. These results support the need for an interdisciplinary approach to pediatric sleep medicine, dentistry, and psychology. PMID:23219144

  10. The effect of immigration and acculturation on victimization among a national sample of Latino women.

    PubMed

    Sabina, Chiara; Cuevas, Carlos A; Schally, Jennifer L

    2013-01-01

    The current study examined the effect of immigrant status, acculturation, and the interaction of acculturation and immigrant status on self-reported victimization in the United States among Latino women, including physical assault, sexual assault, stalking, and threatened violence. In addition, immigrant status, acculturation, gender role ideology, and religious intensity were examined as predictors of the count of victimization among the victimized subsample. The Sexual Assault Among Latinas (SALAS) Study surveyed 2,000 adult Latino women who lived in high-density Latino neighborhoods in 2008. The present study reports findings for a subsample of women who were victimized in the United States (n = 568). Immigrant women reported significantly less victimization than U.S.-born Latino women in bivariate analyses. Multivariate models showed that Anglo orientation was associated with greater odds of all forms of victimization, whereas both Latino orientation and being an immigrant were associated with lower odds of all forms of victimization. Latino orientation was more protective for immigrant women than for U.S.-born Latino women with regard to sexual victimization. Among the victimized subsample, being an immigrant, Anglo acculturation, and masculine gender role were associated with a higher victimization count, whereas Latino orientation and religious intensity were associated with a lower victimization count. The findings point to the risk associated with being a U.S. minority, the protective value of Latino cultural maintenance, and the need for services to reach out to Anglo acculturated Latino women.

  11. Needle position estimation from sub-sampled k-space data for MRI-guided interventions

    NASA Astrophysics Data System (ADS)

    Schmitt, Sebastian; Choli, Morwan; Overhoff, Heinrich M.

    2015-03-01

    MRI-guided interventions have gained much interest. They profit from intervention-synchronous data acquisition and image visualization, but long data acquisition durations may impose ergonomic limitations. For a trueFISP MRI data acquisition sequence, a time-saving sub-sampling strategy has been developed that is adapted to amagnetic needle detection. A symmetrical, high-contrast susceptibility needle artifact, i.e. an approximately rectangular gray-scale profile, is assumed. The 1-D Fourier transform of a rectangular function is a sinc function; its periodicity is exploited by sampling only along a few orthogonal trajectories in k-space. Because the needle moves during the intervention, its tip region resembles a rectangle in a time-difference image reconstructed from such sub-sampled k-spaces acquired at different time stamps. In several phantom experiments, a needle was pushed forward along a reference trajectory determined from the needle holder's geometric parameters, and the trajectory of the needle tip was also estimated by the method described above. Only ca. 4 to 5% of the entire k-space data was used for needle tip estimation. The misalignment of needle orientation and needle tip position, i.e. the differences between reference and estimated values, is small: less than 2 mm even in the worst case. The results show that the method is applicable under nearly real conditions. Next steps address the validation of the method with clinical data.
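    The reconstruction idea can be demonstrated on a toy phantom: keep only a cross of central k-space lines (a stand-in for the few orthogonal trajectories), form the difference of two sub-sampled acquisitions, and locate the tip in the difference image. This is a sketch under simplifying assumptions, not the paper's trueFISP sampling scheme or artifact model.

```python
import numpy as np

def subsampled_kspace(img, n_lines=8):
    """Keep only `n_lines` central rows and columns of k-space (a cross of
    orthogonal trajectories), zeroing the rest of the data."""
    k = np.fft.fftshift(np.fft.fft2(img))
    mask = np.zeros_like(k, bool)
    c = img.shape[0] // 2
    mask[c - n_lines // 2:c + n_lines // 2, :] = True
    mask[:, c - n_lines // 2:c + n_lines // 2] = True
    return k * mask

def tip_from_difference(img_t0, img_t1, n_lines=8):
    """Estimate the advancing needle-tip region from the time-difference image
    reconstructed from two sub-sampled k-spaces."""
    dk = subsampled_kspace(img_t1, n_lines) - subsampled_kspace(img_t0, n_lines)
    diff = np.abs(np.fft.ifft2(np.fft.ifftshift(dk)))
    return np.unravel_index(diff.argmax(), diff.shape)

# Phantom: a dark rectangular needle artifact advancing by 10 pixels
frame0, frame1 = np.ones((256, 256)), np.ones((256, 256))
frame0[126:130, 40:100] = 0.0
frame1[126:130, 40:110] = 0.0          # tip moved from column 100 to 110
row, col = tip_from_difference(frame0, frame1)
print(row, col)                         # lands in the newly covered tip region
```

    Because only the needle moves between frames, the difference image suppresses the static anatomy, which is what makes such aggressive sub-sampling tolerable.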

  12. General and disease-specific pain trajectories as predictors of social and political outcomes in arthritis and cancer.

    PubMed

    James, Richard J E; Walsh, David A; Ferguson, Eamonn

    2018-04-09

    While the heterogeneity of pain progression has been studied in chronic diseases, the extent to which patterns of pain progression, both in the general population and across different diseases, affect social, civic and political engagement is unclear. We explore these issues for the first time. Using data from the English Longitudinal Study of Ageing, latent class growth models were used to estimate trajectories of self-reported pain in the entire cohort, and within subsamples reporting diagnoses of arthritis and cancer. These were compared at baseline on physical health (e.g. body mass index, smoking) and over time on social, civic and political engagement. Very similar four-trajectory models fit the whole sample and arthritis subsamples, whereas a three-trajectory model fit the cancer subsample. All samples had a modal group experiencing minimal chronic pain and a group with high chronic pain that showed slight regression (more pronounced in cancer). Biometric indices were more predictive of the most painful trajectory in arthritis than in cancer. In both samples the group experiencing the most pain at baseline reported impairments in social, civic and political engagement. The impact of pain differs between individuals and between diseases. Indicators of physical and psychological health differently predicted membership of the trajectories most affected by pain. These trajectories were associated with differences in engagement with social and civic life, which in turn were associated with poorer health and well-being.

  13. Sampling design by the core-food approach for the Taiwan total diet study on veterinary drugs.

    PubMed

    Chen, Chien-Chih; Tsai, Ching-Lun; Chang, Chia-Chin; Ni, Shih-Pei; Chen, Yi-Tzu; Chiang, Chow-Feng

    2017-06-01

    The core-food (CF) approach, first adopted in the United States in the 1980s, has been widely used by many countries to assess the exposure to dietary hazards at a population level. However, the reliability of exposure estimates (C × CR) depends critically on sampling methods designed so that the detected chemical concentrations (C) of each CF match the corresponding consumption rate (CR) estimated from the surveyed intake data. In order to reduce the uncertainty of food matching, this study presents a sampling design scheme, namely the subsample method, for the 2016 Taiwan total diet study (TDS) on veterinary drugs. We first combined the four sets of national dietary recall data that covered the entire age strata (1-65+ years), and aggregated them into 307 CFs by their similarity in nutritional values, manufacturing and cooking methods. The 40 CFs pertinent to veterinary drug residues were selected for this study, and 16 subsamples for each CF were designed by weighting their quantities in CR, product brands, manufacturing, processing and cooking methods. The calculated food matching rates of each CF from this study were 84.3-97.3%, which were higher than those obtained from many previous studies using the representative food (RF) method (53.1-57.8%). The subsample method not only considers the variety of food processing and cooking methods but also provides better food matching and reduces the uncertainty of exposure assessment.

  14. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.

    PubMed

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-09-01

    Quadrant impressions are commonly used as an alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value, and the values for all test groups were statistically compared. The precision ranged from 18.8 μm (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface, whereas high-frame-rate impression systems differed more in gingival areas. Triple-tray impressions displayed higher local deviation at the occlusal contact areas of the upper and lower jaw. Digital quadrant impression methods achieve a level of precision comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. All tested digital impression systems allow time-efficient capturing of quadrant impressions. 
The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications. Yet the precision differs significantly between the digital impression systems.
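    The Perc90_10 metric is not defined in the abstract; assuming it is the span between the 10th and 90th percentiles of the signed surface deviations (a common convention that discards the most extreme 10% at either tail), it can be computed as:

```python
import numpy as np

def perc90_10(deviations_um):
    """Assumed definition: 90th minus 10th percentile of the signed surface
    deviations (micrometres) between two superimposed models, trimming the
    extreme 10% at each tail."""
    p10, p90 = np.percentile(deviations_um, [10, 90])
    return p90 - p10

rng = np.random.default_rng(42)
# Simulated signed deviations between two superimposed impression models
dev = rng.normal(loc=0.0, scale=12.0, size=100_000)
print(perc90_10(dev))   # ~= 2 * 1.2816 * 12 um for normally distributed deviations
```

    A robust percentile span like this is preferred over the maximum deviation because outliers at scan edges would otherwise dominate the precision figure.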

  15. The Los Alamos National Laboratory precision double crystal spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, D.V.; Stevens, C.J.; Liefield, R.J.

    1994-03-01

    This report discusses the following topics on the LANL precision double crystal X-ray spectrometer: Motivation for construction of the instrument; a brief history of the instrument; mechanical systems; motion control systems; computer control system; vacuum system; alignment program; scan programs; observations of the copper K{alpha} lines; and characteristics and specifications.

  16. Precision Attitude Determination System (PADS) design and analysis. Two-axis gimbal star tracker

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Development of the Precision Attitude Determination System (PADS) focused chiefly on the two-axis gimballed star tracker and electronics design improved from that of Precision Pointing Control System (PPCS), and application of the improved tracker for PADS at geosynchronous altitude. System design, system analysis, software design, and hardware design activities are reported. The system design encompasses the PADS configuration, system performance characteristics, component design summaries, and interface considerations. The PADS design and performance analysis includes error analysis, performance analysis via attitude determination simulation, and star tracker servo design analysis. The design of the star tracker and electronics are discussed. Sensor electronics schematics are included. A detailed characterization of the application software algorithms and computer requirements is provided.

  17. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers (each requiring 64 bits of memory) to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass, for the first time, three tiers of variables representing large-, medium- and small-scale features. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models run in mixed precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
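    The precision experiment can be illustrated on the standard single-tier Lorenz '96 system (the paper's three-tier extension is not reproduced here). The forward-Euler integrator and parameter choices below are assumptions for the sketch; the point is only how a reduced-precision run diverges from a double-precision one.

```python
import numpy as np

def l96_step(x, F, dt):
    """One forward-Euler step of the single-tier Lorenz '96 system:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * dxdt

def integrate(x0, F=8.0, dt=0.005, steps=200, dtype=np.float64):
    """Integrate in the requested floating-point precision throughout."""
    x = x0.astype(dtype)
    F, dt = dtype(F), dtype(dt)
    for _ in range(steps):
        x = l96_step(x, F, dt)
    return x

x0 = 8.0 + 0.01 * np.sin(np.arange(40))        # perturbed rest state, K = 40
x64 = integrate(x0, dtype=np.float64)          # 64-bit reference
x16 = integrate(x0, dtype=np.float16)          # half-precision run
print(np.abs(x64 - x16.astype(np.float64)).max())  # rounding-induced divergence
```

    In a chaotic system this divergence grows with lead time, which is why the paper weighs the rounding error of reduced precision against the resolution gained by spending the saved compute on extra tiers.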

  18. Computer program documentation for the patch subsampling processor

    NASA Technical Reports Server (NTRS)

    Nieves, M. J.; Obrien, S. O.; Oney, J. K. (Principal Investigator)

    1981-01-01

    The programs presented are intended to provide a way to extract a sample from a full-frame scene and summarize it in a useful way. The sample in each case was chosen to fill a 512-by-512 pixel (sample-by-line) image since this is the largest image that can be displayed on the Integrated Multivariant Data Analysis and Classification System. This sample size provides one megabyte of data for manipulation and storage and contains about 3% of the full-frame data. A patch image processor computes means for 256 32-by-32 pixel squares which constitute the 512-by-512 pixel image. Thus, 256 measurements are available for 8 vegetation indexes over a 100-mile square.
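    The patch summary described above, the means of the 256 constituent 32-by-32 pixel squares of a 512-by-512 sample, is a single reshape in modern array libraries. This is a re-expression of what the processor computes, not the original 1981 program:

```python
import numpy as np

# A 512x512 sample image (placeholder data standing in for the extracted scene)
img = np.arange(512 * 512, dtype=np.float64).reshape(512, 512)

# Split into a 16x16 grid of 32x32 blocks and average within each block:
# axes 1 and 3 index the pixels inside a block, axes 0 and 2 index the blocks.
patch_means = img.reshape(16, 32, 16, 32).mean(axis=(1, 3))
print(patch_means.shape)   # (16, 16) -> 256 patch means
```

    Each of the 256 means then serves as one measurement per patch, e.g. for the 8 vegetation indexes mentioned above.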

  19. Testing if Social Services Prevent Fatal Child Maltreatment Among a Sample of Children Previously Known to Child Protective Services.

    PubMed

    Douglas, Emily M

    2016-08-01

    The purpose of this article was to examine the potential impact of child welfare services on the risk for fatal child maltreatment. This was conducted using a subsample of children who were identified as "prior victims" in the National Child Abuse and Neglect Data System from 2008 to 2012. At the multivariate level, the analyses show that case management services act to protect children from death, as do family support services, family preservation services, and foster care, but the results vary by type of maltreatment experienced. The author recommends that additional research in this area is warranted before strong conclusions are drawn. © The Author(s) 2016.

  20. Apparatus for precision micromachining with lasers

    DOEpatents

    Chang, J.J.; Dragon, E.P.; Warner, B.E.

    1998-04-28

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone. 1 fig.

  1. Apparatus for precision micromachining with lasers

    DOEpatents

    Chang, Jim J.; Dragon, Ernest P.; Warner, Bruce E.

    1998-01-01

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone.

  2. Indexing system for optical beam steering

    NASA Technical Reports Server (NTRS)

    Sullivan, Mark T.; Cannon, David M.; Debra, Daniel B.; Young, Jeffrey A.; Mansfield, Joseph A.; Carmichael, Roger E.; Lissol, Peter S.; Pryor, G. M.; Miklosy, Les G.; Lee, Jeffrey H.

    1990-01-01

    This paper describes the design and testing of an indexing system for optical-beam steering. The cryogenic beam-steering mechanism is a 360-degree rotation device capable of discrete, high-precision alignment positions. It uses low-precision components for its rough alignment and a kinematic design to meet its stringent repeatability and stability requirements (of about 5 arcsec). The principal advantages of this design include a decoupling of the low-precision, large angular motion from the high-precision alignment, and a power-off alignment position that potentially extends the life or hold time of cryogenic systems. An alternate design, which takes advantage of these attributes while reducing overall motion, is also presented. Preliminary test results show the kinematic mount capable of sub-arcsecond repeatability.

  3. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50 m to 500 m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as microthruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for a Heliophysics dual spacecraft PFF imaging mission concept.

  4. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50 m to 500 m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as micro-thruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for a Heliophysics dual spacecraft PFF imaging mission concept.

  5. Reduction to Outside the Atmosphere and Statistical Tests Used in Geneva Photometry

    NASA Technical Reports Server (NTRS)

    Rufener, F.

    1984-01-01

    Conditions for creating a precise photometric system are investigated. The analytical and discriminatory power of a photometric system obviously results from the placement of the passbands in the spectrum; it also, however, depends critically on the precision attained. This precision is the result of two different types of precautions. Two procedures that contribute efficiently to achieving greater precision are examined; these two approaches are known as hardware-related precision and software-related precision.

  6. Wireless inertial measurement of head kinematics in freely-moving rats

    PubMed Central

    Pasquet, Matthieu O.; Tihy, Matthieu; Gourgeon, Aurélie; Pompili, Marco N.; Godsil, Bill P.; Léna, Clément; Dugué, Guillaume P.

    2016-01-01

    While miniature inertial sensors offer a promising means for precisely detecting, quantifying and classifying animal behaviors, versatile inertial sensing devices adapted for small, freely-moving laboratory animals are still lacking. We developed a standalone and cost-effective platform for performing high-rate wireless inertial measurements of head movements in rats. Our system is designed to enable real-time bidirectional communication between the head-borne inertial sensing device and third-party systems, which can be used for precise data timestamping and low-latency motion-triggered applications. We illustrate the usefulness of our system in diverse experimental situations. We show that our system can be used for precisely quantifying motor responses evoked by external stimuli, for characterizing head kinematics during normal behavior, and for monitoring head posture under normal conditions and under pathological conditions induced by unilateral vestibular lesions. We also introduce and validate a novel method for automatically quantifying behavioral freezing during Pavlovian fear conditioning experiments, which offers superior performance in terms of precision, temporal resolution and efficiency. Thus, this system precisely acquires movement information in freely-moving animals, and can enable objective and quantitative behavioral scoring methods in a wide variety of experimental situations. PMID:27767085

  7. Terrain matching image pre-process and its format transform in autonomous underwater navigation

    NASA Astrophysics Data System (ADS)

    Cao, Xuejun; Zhang, Feizhou; Yang, Dongkai; Yang, Bogang

    2007-06-01

    Underwater passive navigation is one of the important development directions in modern navigation. With its advantages of autonomy, stealth at sea, anti-jamming capability, and high precision, passive navigation fully meets practical navigation requirements and has become a standard navigation method for underwater vehicles, attracting considerable attention from researchers in the field. Underwater passive navigation can supply accurate navigation information, such as position and velocity, to a main Inertial Navigation System (INS) over long periods. With the development of micro-electronics technology, AUV navigation is now based primarily on an INS assisted by other methods such as terrain matching navigation, which provides long-duration navigation capability, corrects INS errors, and spares the AUV from having to surface periodically. Using terrain matching navigation, assisted by digital charts and ocean geographic-feature sensors, underwater image matching can achieve higher positioning precision and thereby satisfy the underwater, long-duration, high-precision, all-weather requirements of an Autonomous Underwater Vehicle navigation system. Terrain-assisted navigation (TAN) relies directly on image (map) information of the navigation area to assist the primary navigation system along a pre-planned path. In TAN, a factor as important as system operation itself is the precision and practicability of the stored images and of the database from which the image data are produced: if the data used for feature extraction are unsuitable, navigation precision will be low.
Compared with terrain matching assisted navigation, image matching navigation is a high-precision, low-cost assisted navigation approach whose matching precision directly determines the final precision of the integrated navigation system. Image matching assisted navigation spatially registers two underwater scene images of the same scene acquired by two different sensors, in order to determine the relative displacement between them. In this way the vehicle's position can be obtained in the known geographic frame of the reference image, and the precise position fix from image matching is fed back to the INS to eliminate its position error, greatly enhancing the navigation precision of the vehicle. Analysis and processing of the digital image data used for image matching in underwater passive navigation are therefore important. For underwater geographic data analysis, we focus on the acquisition, processing, analysis, representation, and measurement of database information. These analyses form one of the key elements of underwater terrain matching and help characterize the seabed terrain of the navigation area, so that the most favorable seabed terrain districts and a dependable navigation algorithm can be selected, improving the precision and reliability of the terrain-assisted navigation system. This paper describes the pre-processing and format transformation of digital images for underwater image matching. The terrain in the navigation area requires further study to provide reliable terrain-feature and underwater-cover data for navigation. By supporting route selection, danger-area prediction, and navigation-algorithm analysis, TAN can achieve higher positioning precision and probability, and thus provides technological support for image matching in underwater passive navigation.
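    The displacement-estimation step described above can be sketched with an exhaustive normalized cross-correlation (NCC) search. This is a generic template-matching illustration in NumPy with synthetic image data, not the paper's pre-processing pipeline.

```python
import numpy as np

def ncc_displacement(ref, tpl):
    """Estimate the (row, col) offset of template `tpl` inside reference
    image `ref` by exhaustive normalized cross-correlation."""
    rh, rw = ref.shape
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_rc = -np.inf, (0, 0)
    for r in range(rh - th + 1):
        for c in range(rw - tw + 1):
            w = ref[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# synthetic "seabed" image: a template cut from a known offset is recovered
rng = np.random.default_rng(0)
sea = rng.random((40, 40))
tpl = sea[12:20, 17:25]
print(ncc_displacement(sea, tpl))  # (12, 17)
```

    In a real system the recovered offset, mapped through the reference image's geographic frame, is what would be fed back to the INS as a position fix.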

  8. 40 CFR 761.350 - Subsampling from composite samples.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-liter sample, stir the composite using a broom handle or similar long, narrow, sturdy rod that reaches the bottom of the container. Stir the mixture for a minimum of 10 complete revolutions of the stirring...

  9. Design of automatic leveling and centering system of theodolite

    NASA Astrophysics Data System (ADS)

    Liu, Chun-tong; He, Zhen-Xin; Huang, Xian-xiang; Zhan, Ying

    2012-09-01

    To automate the theodolite and improve azimuth-angle measurement, an automatic leveling and centering system with leveling error compensation is designed, covering the overall system solution, selection of key components, the mechanical structure for leveling and centering, and the system software. The redesigned leveling feet are driven by DC servo motors, and an electronically controlled centering device is installed. High-precision tilt sensors serve as the horizontal-skew detection sensors, ensuring effective leveling error compensation. The center of the aiming round mark is located by digital image processing of a surface-array CCD image, and centering measurement precision can reach the pixel level, making accurate centering of the theodolite possible. Finally, experiments were conducted with the automatic leveling and centering system. The results show that the system operates automatically with a centering accuracy of 0.04 mm, and that the measurement precision of the orientation angle after leveling error compensation is improved compared with the traditional method. The automatic leveling and centering system thus satisfies the requirements for measurement precision and automation.

  10. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18

  11. Precision optical slit for high heat load or ultra high vacuum

    DOEpatents

    Andresen, N.C.; DiGennaro, R.S.; Swain, T.L.

    1995-01-24

    This invention relates generally to slits used in optics that must be precisely aligned and adjusted. The optical slits of the present invention are useful in x-ray optics, x-ray beam lines, and optical systems in which the entrance slit is critical for high wavelength resolution. The invention is particularly useful in ultra high vacuum systems, where lubricants are difficult to use and designs which avoid the movement of metal parts against one another are important, such as monochromators for high wavelength resolution in ultra high vacuum systems. The invention further relates to optical systems in which the temperature characteristics of the slit materials are important. The present invention additionally relates to precision slits wherein the opposing edges of the slit must be precisely moved relative to a center line between the edges, with each edge retaining its parallel orientation with respect to the other edge and/or the center line. 21 figures.

  12. Precision optical slit for high heat load or ultra high vacuum

    DOEpatents

    Andresen, Nord C.; DiGennaro, Richard S.; Swain, Thomas L.

    1995-01-01

    This invention relates generally to slits used in optics that must be precisely aligned and adjusted. The optical slits of the present invention are useful in x-ray optics, x-ray beam lines, and optical systems in which the entrance slit is critical for high wavelength resolution. The invention is particularly useful in ultra high vacuum systems, where lubricants are difficult to use and designs which avoid the movement of metal parts against one another are important, such as monochromators for high wavelength resolution in ultra high vacuum systems. The invention further relates to optical systems in which the temperature characteristics of the slit materials are important. The present invention additionally relates to precision slits wherein the opposing edges of the slit must be precisely moved relative to a center line between the edges, with each edge retaining its parallel orientation with respect to the other edge and/or the center line.

  13. Modeling and Assessment of Precise Time Transfer by Using BeiDou Navigation Satellite System Triple-Frequency Signals.

    PubMed

    Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun

    2018-03-29

    This study proposes two models for precise time transfer using BeiDou Navigation Satellite System triple-frequency signals: an ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and an ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time frequency) and one with a long baseline are used for performance assessment. The results show that both the IF-PPP1 and IF-PPP2 models can be used for precise time transfer with BeiDou Navigation Satellite System (BDS) triple-frequency signals, and that the accuracy and stability of the time transfer are the same in both cases, except for a constant systematic bias caused by the hardware delays of the different frequencies, which can be removed by parameter estimation and prediction over long-term datasets or by a priori calibration.
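    The ionosphere-free combinations underlying both PPP models exploit the fact that the first-order ionospheric delay scales as 1/f². A minimal dual-frequency sketch, assuming the BeiDou B1I/B3I carrier frequencies (1561.098 and 1268.520 MHz) and a synthetic pseudorange pair:

```python
# B1I/B3I carrier frequencies in Hz (BeiDou signal-plan values, assumed here)
F_B1 = 1561.098e6
F_B3 = 1268.520e6

def iono_free(p1, p3, f1=F_B1, f3=F_B3):
    # dual-frequency ionosphere-free combination:
    #   P_IF = (f1^2 * P1 - f3^2 * P3) / (f1^2 - f3^2)
    return (f1**2 * p1 - f3**2 * p3) / (f1**2 - f3**2)

rho = 22_000_000.0       # geometric range plus clock terms (m)
A = 4.0e17               # first-order ionospheric term (m * Hz^2), synthetic
p1 = rho + A / F_B1**2   # ionospheric delay scales as 1/f^2
p3 = rho + A / F_B3**2
print(abs(iono_free(p1, p3) - rho))  # ~0: first-order delay cancels
```

    The combination removes the ionospheric term exactly in this first-order model; what remains in practice, as the abstract notes, is a constant inter-frequency hardware bias that must be estimated or calibrated.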

  14. Systems biology for nursing in the era of big data and precision health.

    PubMed

    Founds, Sandra

    2017-12-02

    The systems biology framework was previously synthesized with the person-environment-health-nursing metaparadigm. The purpose of this paper is to present a nursing discipline-specific perspective on the association of systems biology with big data and precision health. The fields of systems biology, big data, and precision health are overviewed, from their origins through their later expansions, with examples of work being done by nurses in each area of science. Technological advances continue to expand omics and other varieties of big data that inform a person's phenotype and health outcomes for precision care. Meanwhile, millions of participants in the United States are being recruited for health-care research initiatives aimed at building an information commons of digital health data. Implications and opportunities abound when the integration of these fields is conceptualized through the nursing metaparadigm. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick

    2014-09-15

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. 
Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.
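    A variable-density subsampling pattern of the kind compared above can be sketched as a 1D random phase-encode mask whose sampling probability falls off away from the k-space center. The density profile, acceleration factor, and fully sampled center fraction below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def vardens_mask(n, accel=4, center_frac=0.08, power=3.0, seed=0):
    """Random 1D k-space line mask: fully sampled center block plus random
    lines drawn with probability falling off as a power of |k|."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n) - n // 2) / (n / 2)   # normalized |k|, 0 at center
    prob = (1.0 - k) ** power                     # denser near the center
    n_keep = n // accel                           # total lines to acquire
    center = k <= center_frac                     # center taken deterministically
    prob[center] = 0.0                            # don't re-draw center lines
    extra = n_keep - center.sum()
    idx = rng.choice(n, size=max(extra, 0), replace=False, p=prob / prob.sum())
    mask = center.copy()
    mask[idx] = True
    return mask

m = vardens_mask(144)
print(m.sum(), m[72 - 5:72 + 6].all())  # 36 True: 144/4 lines, full center
```

    In a 2D variable-density scheme the same idea is applied along both phase-encode axes, which is what gave the best sequential-sampling accuracy in the results quoted above.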

  16. Using high frequency water quality data to assess sampling strategies for the EU Water Framework Directive

    NASA Astrophysics Data System (ADS)

    Skeffington, R. A.; Halliday, S. J.; Wade, A. J.; Bowes, M. J.; Loewenthal, M.

    2015-01-01

    The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to one of 3 or 4 WFD classes with 95% confidence, whereas with weekly sampling this was 1 or 2 classes for the same cases. In the most extreme case, random sampling effects could result in the same water body being assigned to any of the 5 WFD quality classes. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Confining sampling to the working week compared to all seven days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. 
For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
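    The core subsampling idea above can be sketched with synthetic hourly data: drawing repeated random subsamples of "weekly" versus "monthly" size shows the tighter spread of annual means obtained with more frequent sampling. The series and the random subsampling scheme below are simplified illustrations, not the paper's data or its regular sampling calendar.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(365 * 24)
# synthetic hourly determinand: seasonal cycle + diel cycle + noise
conc = (9.0 + 1.5 * np.sin(2 * np.pi * hours / (365 * 24))
        + 0.8 * np.sin(2 * np.pi * hours / 24)
        + rng.normal(0.0, 0.3, hours.size))

def subsample_means(series, n_samples, n_draws=500):
    # annual means from repeated random subsamples of a given size
    return np.array([series[rng.choice(series.size, n_samples,
                                       replace=False)].mean()
                     for _ in range(n_draws)])

weekly = subsample_means(conc, 52)    # ~weekly sampling effort
monthly = subsample_means(conc, 12)   # ~monthly sampling effort
print(weekly.std() < monthly.std())   # True: weekly means spread less
```

    The spread of the subsampled means is what widens the confidence interval of the classification statistic; near a class boundary, the wider monthly spread translates directly into more candidate WFD classes.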

  17. Dairy farmers with larger herd sizes adopt more precision dairy technologies.

    PubMed

    Gargiulo, J I; Eastwood, C R; Garcia, S C; Lyons, N A

    2018-06-01

    An increase in the average herd size on Australian dairy farms has also increased the labor and animal management pressure on farmers, thus potentially encouraging the adoption of precision technologies for enhanced management control. A survey was undertaken in 2015 in Australia to identify the relationship between herd size, current precision technology adoption, and perception of the future of precision technologies. Additionally, differences between farmers and service providers in relation to perception of future precision technology adoption were also investigated. Responses from 199 dairy farmers, and 102 service providers, were collected between May and August 2015 via an anonymous Internet-based questionnaire. Of the 199 dairy farmer responses, 10.4% corresponded to farms that had fewer than 150 cows, 37.7% had 151 to 300 cows, 35.5% had 301 to 500 cows; 6.0% had 501 to 700 cows, and 10.4% had more than 701 cows. The results showed that farmers with more than 500 cows adopted between 2 and 5 times more specific precision technologies, such as automatic cup removers, automatic milk plant wash systems, electronic cow identification systems and herd management software, when compared with smaller farms. Only minor differences were detected in perception of the future of precision technologies between either herd size or farmers and service providers. In particular, service providers expected a higher adoption of automatic milking and walk over weighing systems than farmers. Currently, the adoption of precision technology has mostly been of the type that reduces labor needs; however, respondents indicated that by 2025 adoption of data capturing technology for monitoring farm system parameters would be increased. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. ON GALACTIC DENSITY MODELING IN THE PRESENCE OF DUST EXTINCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovy, Jo; Rix, Hans-Walter; Schlafly, Edward F.

    Inferences about the spatial density or phase-space structure of stellar populations in the Milky Way require a precise determination of the effective survey volume. The volume observed by surveys such as Gaia or near-infrared spectroscopic surveys, which have good coverage of the Galactic midplane region, is highly complex because of the abundant small-scale structure in the three-dimensional interstellar dust extinction. We introduce a novel framework for analyzing the importance of small-scale structure in the extinction. This formalism demonstrates that the spatially complex effect of extinction on the selection function of a pencil-beam or contiguous sky survey is equivalent to a low-pass filtering of the extinction-affected selection function with the smooth density field. We find that the angular resolution of current 3D extinction maps is sufficient for analyzing Gaia sub-samples of millions of stars. However, the current distance resolution is inadequate and needs to be improved by an order of magnitude, especially in the inner Galaxy. We also present a practical and efficient method for properly taking the effect of extinction into account in analyses of Galactic structure through an effective selection function. We illustrate its use with the selection function of red-clump stars in APOGEE using and comparing a variety of current 3D extinction maps.

  19. The 2-degree Field Lensing Survey: photometric redshifts from a large new training sample to r < 19.5

    NASA Astrophysics Data System (ADS)

    Wolf, C.; Johnson, A. S.; Bilicki, M.; Blake, C.; Amon, A.; Erben, T.; Glazebrook, K.; Heymans, C.; Hildebrandt, H.; Joudaki, S.; Klaes, D.; Kuijken, K.; Lidman, C.; Marin, F.; Parkinson, D.; Poole, G.

    2017-04-01

    We present a new training set for estimating empirical photometric redshifts of galaxies, which was created as part of the 2-degree Field Lensing Survey project. This training set is located in a ˜700 deg2 area of the Kilo-Degree Survey South field and is randomly selected and nearly complete at r < 19.5. We investigate the photometric redshift performance obtained with ugriz photometry from VST-ATLAS and W1/W2 from WISE, based on several empirical and template methods. The best redshift errors are obtained with kernel-density estimation (KDE), as are the lowest biases, which are consistent with zero within statistical noise. The 68th percentiles of the redshift scatter for magnitude-limited samples at r < (15.5, 17.5, 19.5) are (0.014, 0.017, 0.028). In this magnitude range, there are no known ambiguities in the colour-redshift map, consistent with a small rate of redshift outliers. In the fainter regime, the KDE method produces p(z) estimates per galaxy that represent unbiased and accurate redshift frequency expectations. The p(z) sum over any subsample is consistent with the true redshift frequency plus Poisson noise. Further improvements in redshift precision at r < 20 would mostly be expected from filter sets with narrower passbands to increase the sensitivity of colours to small changes in redshift.
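    A kernel-density-style photometric redshift estimate of the general kind used above can be sketched as a Gaussian-weighted mean of training redshifts in colour space. The bandwidth and the synthetic, noiseless linear colour-redshift relation below are illustrative assumptions, not the survey's calibration.

```python
import numpy as np

def kde_photoz(train_colors, train_z, query_color, h=0.05):
    # Gaussian kernel weights in colour space -> weighted mean redshift
    d2 = np.sum((train_colors - query_color) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / h ** 2)
    return float(np.sum(w * train_z) / np.sum(w))

# synthetic training set: one colour with a noiseless relation c = 2z
train_z = np.linspace(0.0, 0.3, 301)
train_colors = (2.0 * train_z)[:, None]
est = kde_photoz(train_colors, train_z, np.array([0.30]))
print(round(est, 3))  # 0.15: the redshift whose colour matches the query
```

    In a full KDE method the same weights also yield a normalized p(z) per galaxy rather than just a point estimate, which is what makes the subsample p(z) sums quoted above meaningful.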

  20. Calibrating the Planck Cluster Mass Scale with Cluster Velocity Dispersions

    NASA Astrophysics Data System (ADS)

    Amodeo, Stefania; Mei, Simona; Stanford, Spencer A.; Bartlett, James G.; Melin, Jean-Baptiste; Lawrence, Charles R.; Chary, Ranga-Ram; Shim, Hyunjin; Marleau, Francine; Stern, Daniel

    2017-08-01

    We measure the Planck cluster mass bias using dynamical mass measurements based on velocity dispersions of a subsample of 17 Planck-detected clusters. The velocity dispersions were calculated using redshifts determined from spectra obtained at the Gemini observatory with the GMOS multi-object spectrograph. We correct our estimates for effects due to finite aperture, Eddington bias, and correlated scatter between velocity dispersion and the Planck mass proxy. The result for the mass bias parameter, (1 − b), depends on the value of the galaxy velocity bias, b_v, adopted from simulations: (1 − b) = (0.51 ± 0.09) b_v³. Using a velocity bias of b_v = 1.08 from Munari et al., we obtain (1 − b) = 0.64 ± 0.11, i.e., an error of 17% on the mass bias measurement with 17 clusters. This mass bias value is consistent with most previous weak-lensing determinations. It lies within 1σ of the value that is needed to reconcile the Planck cluster counts with the Planck primary cosmic microwave background constraints. We emphasize that uncertainty in the velocity bias severely hampers the precision of measurements of the mass bias using velocity dispersions. On the other hand, when we fix the Planck mass bias using the constraints from Penna-Lima et al., based on weak-lensing measurements, we obtain a positive velocity bias of b_v ≳ 0.9 at 3σ.
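    The quoted numbers can be checked directly: substituting the simulated velocity bias into the scaling relation (1 − b) = (0.51 ± 0.09) b_v³ reproduces the reported central value and uncertainty.

```python
# worked check of (1 - b) = (0.51 ± 0.09) * b_v**3
bv = 1.08                       # galaxy velocity bias from Munari et al.
central = 0.51 * bv ** 3
err = 0.09 * bv ** 3
print(round(central, 2), round(err, 2))  # 0.64 0.11, as reported
```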

  1. [New design of the Health Survey of Catalonia (Spain, 2010-2014): a step forward in health planning and evaluation].

    PubMed

    Alcañiz-Zanón, Manuela; Mompart-Penina, Anna; Guillén-Estany, Montserrat; Medina-Bustos, Antonia; Aragay-Barbany, Josep M; Brugulat-Guiteras, Pilar; Tresserras-Gaju, Ricard

    2014-01-01

    This article presents the genesis of the Health Survey of Catalonia (Spain, 2010-2014) with its semiannual subsamples and explains the basic characteristics of its multistage sampling design. In comparison with previous surveys, the organizational advantages of this new statistical operation include rapid data availability and the ability to continuously monitor the population. The main benefits are timeliness in the production of indicators and the possibility of introducing new topics through the supplemental questionnaire as a function of needs. Limitations consist of the complexity of the sample design and the lack of longitudinal follow-up of the sample. Suitable sampling weights for each specific subsample are necessary for any statistical analysis of micro-data. Accuracy in the analysis of territorial disaggregation or population subgroups increases if annual samples are accumulated. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.

  2. Sleepiness and health in midlife women: results of the National Sleep Foundation's 2007 Sleep in America poll.

    PubMed

    Chasens, Eileen R; Twerski, Sarah R; Yang, Kyeongra; Umlauf, Mary Grace

    2010-01-01

    The 2007 Sleep in America poll, a random-sample telephone survey, provided data for this study of sleep in community-dwelling women aged 40 to 60 years. The majority of the respondents were post- or perimenopausal, overweight, married or living with someone, and reported good health. A subsample (20%) reported sleepiness that consistently interfered with daily life; the sleepy subsample reported more symptoms of insomnia, restless legs syndrome, obstructive sleep apnea, depression and anxiety, as well as more problems with health-promoting behaviors, drowsy driving, job performance, household duties, and personal relationships. Hierarchical regression showed that sleepiness along with depressive symptoms, medical comorbidities, obesity, and lower education were associated with poor self-rated health, whereas menopause status (pre-, peri- or post-) was not. These results suggest that sleep disruptions and daytime sleepiness negatively affect the daily life of midlife women.

  3. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain.

    PubMed

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-04-11

    Fusing infrared and visible images typically involves many hand-tuned parameters. To overcome the lack of detail in fused images caused by the resulting artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so that the fused image retains more detail, while the low-frequency bands are fused under an image-saliency constraint, so that the targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that this method is more effective at preserving details and highlighting targets than other state-of-the-art methods.
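    Band-wise fusion rules of the general kind described above can be sketched with two toy rules: a choose-max rule for high-frequency coefficients and a weighted average for low-frequency coefficients. These are stand-ins that illustrate the structure only; the paper's actual gradient and saliency constraints, and the Nash-equilibrium update, are not implemented here.

```python
import numpy as np

def fuse_high(a, b):
    # high-band fusion: keep, per coefficient, the larger absolute value
    # (a common stand-in for a gradient-constrained rule)
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_low(a, b, w=0.5):
    # low-band fusion: weighted average, where in a saliency-constrained
    # scheme `w` would come from a saliency map (fixed here)
    return w * a + (1 - w) * b

ir_high = np.array([[0.9, -0.1], [0.0, 0.4]])
vis_high = np.array([[0.2, -0.8], [0.5, 0.3]])
print(fuse_high(ir_high, vis_high))  # [[0.9 -0.8] [0.5 0.4]]
```

    After fusing each subband, the inverse transform (here the inverse NSST) reconstructs the final fused image from the combined coefficients.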

  4. Prevalence of smoking among bar workers prior to the Republic of Ireland smokefree workplace legislation.

    PubMed

    Mullally, B J; Greiner, B A; Allwright, S; Paul, G; Perry, I J

    2008-12-01

    This study establishes the baseline prevalence of smoking and cigarette consumption among Cork bar workers prior to the Republic of Ireland's (ROI) smokefree workplace legislation, compares gender- and age-specific smoking rates, and estimates the adjusted odds of being a smoker for Cork bar workers relative to the general population. A cross-sectional random sample of bar workers in Cork city and a cross-sectional random telephone survey of the general population were conducted prior to the smokefree legislation. Self-reported smoking prevalence among Cork bar workers (n = 129) was 54% (58% using cotinine-validated measures), with particularly high rates among women (70%) and 18-28 year olds (72%). Within the ROI general population sub-sample (n = 1,240), rates were substantially lower at 28%. Bar workers were twice as likely to be smokers as the general population sub-sample (OR = 2.15). Cork bar workers constitute an occupational group with an extremely high smoking prevalence.
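    For readers unfamiliar with the odds-ratio comparison used above, a minimal unadjusted calculation from a 2×2 table can be sketched as follows (counts reconstructed approximately from the reported percentages; the paper's OR = 2.15 is an adjusted estimate, so the raw figure differs):

```python
def odds_ratio(smokers_a, nonsmokers_a, smokers_b, nonsmokers_b):
    """Unadjusted odds ratio of smoking in group A relative to group B."""
    return (smokers_a / nonsmokers_a) / (smokers_b / nonsmokers_b)

# Approximate counts: ~70 of 129 bar workers smoked (54%), vs. ~28% of
# the 1,240-person general-population sub-sample (~347 smokers).
or_bar = odds_ratio(70, 59, 347, 893)  # roughly 3.1 before adjustment
```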

  5. Validation of the respiratory toxics exposure score (RTES) for chronic obstructive pulmonary disease screening.

    PubMed

    Salameh, Pascale; Khayat, Georges; Waked, Mirna

    2011-12-01

    Our aim was to evaluate the validity of exhaled carbon monoxide (CO) and of a newly created score as markers of chronic obstructive pulmonary disease (COPD). The CO level was measured in a derivation subsample of a cross-sectional study and linked to COPD diagnosis; its predictors were evaluated, and a scale was constructed. The scale was then evaluated in a validation subsample and in a clinical setting. Individuals with COPD had higher CO levels than healthy individuals. Significant predictors of CO level were cigarettes per day, waterpipes per week, lower age, male gender, living close to diesel exhaust, heating the home with diesel, and having indoor family smokers. A score composed of the CO predictors significantly predicted COPD (ORa = 4-7.5). Coupled with the clinical judgment of physicians, this scale would be an excellent low-cost tool for screening for COPD in the absence of spirometry.

  6. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain

    PubMed Central

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-01-01

    There are many artificial parameters when fuse infrared and visible images, to overcome the lack of detail in the fusion image because of the artifacts, a novel fusion algorithm for infrared and visible images that is based on different constraints in non-subsampled shearlet transform (NSST) domain is proposed. There are high bands and low bands of images that are decomposed by the NSST. After analyzing the characters of the bands, fusing the high level bands by the gradient constraint, the fused image can obtain more details; fusing the low bands by the constraint of saliency in the images, the targets are more salient. Before the inverse NSST, the Nash equilibrium is used to update the coefficient. The fused images and the quantitative results demonstrate that our method is more effective in reserving details and highlighting the targets when compared with other state-of-the-art methods. PMID:29641505

  7. Medication adherence, comorbidities, and health risk impacts on workforce absence and job performance.

    PubMed

    Loeppke, Ronald; Haufle, Vince; Jinnett, Kim; Parry, Thomas; Zhu, Jianping; Hymel, Pamela; Konicki, Doris

    2011-06-01

    To understand the impacts of medication adherence, comorbidities, and health risks on workforce absence and job performance. Retrospective observational study using employees' medical/pharmacy claims and self-reported health risk appraisals. Statin medication adherence in individuals with coronary artery disease was a significant predictor (P < 0.05) of decreased absenteeism. Insulin, oral hypoglycemic, or metformin medication adherence in type 2 diabetics was a significant predictor (P < 0.05) of decreased job performance. The number of comorbidities was a significant predictor (P < 0.05) of absenteeism in five of nine subsamples. Significant links (P < 0.05) between high health risks and lower job performance were found across all nine subsamples. The results suggest that integrated health and productivity management strategies should emphasize primary and secondary prevention to reduce health risks, in addition to tertiary prevention efforts of disease management and medication management.

  8. Pods: a Powder Delivery System for Mars In-situ Organic, Mineralogic and Isotopic Analysis Instruments

    NASA Technical Reports Server (NTRS)

    Saha, C. P.; Bryson, C. E.; Sarrazin, P.; Blake, D. F.

    2005-01-01

    Many Mars in situ instruments require fine-grained, high-fidelity samples of rocks or soil. Included are instruments for the determination of mineralogy as well as organic and isotopic chemistry. Powder can be obtained as a primary objective of a sample collection system (e.g., by collecting powder as a surface is abraded by a rotary abrasion tool (RAT)) or as a secondary objective (e.g., by collecting drill powder as a core is drilled). In the latter case, a properly designed system could be used to monitor drilling in real time as well as to deliver powder to analytical instruments that would perform analyses complementary to those later performed on the intact core. In addition, once a core or other sample is collected, a system that could transfer intelligently collected subsamples of powder from the intact core to a suite of analytical instruments would be highly desirable. We have conceptualized, developed, and tested a breadboard Powder Delivery System (PoDS) intended to satisfy the collection, processing, and distribution requirements of powder samples for Mars in-situ mineralogic, organic, and isotopic measurement instruments.

  9. Search for light curve modulations among Kepler candidates. Three very low-mass transiting companions

    NASA Astrophysics Data System (ADS)

    Lillo-Box, J.; Ribas, A.; Barrado, D.; Merín, B.; Bouy, H.

    2016-07-01

    Context. Light curve modulations in the sample of Kepler planet candidates allow the nature of the transiting object to be disentangled by photometrically measuring its mass. This is possible by detecting the effects of the gravitational pull of the companion (ellipsoidal modulations) and, in some cases, the photometric imprints of the Doppler effect when observing in a broad band (Doppler beaming). Aims: We aim to photometrically unveil the nature of some transiting objects showing clear light curve modulations in the phase-folded Kepler light curve. Methods: We selected a subsample among the large crop of Kepler objects of interest (KOIs) based on their chances of showing detectable light curve modulations, i.e., close-in (a < 12 R⋆) and large (in terms of radius, according to their transit signal) candidates. We modeled their phase-folded light curves with consistent equations for the three effects, namely reflection, ellipsoidal, and beaming (known as REB modulations). Results: We provide detailed general equations for the fit of the REB modulations in the case of eccentric orbits. These equations are accurate to the photometric precisions achievable by current and forthcoming instruments and space missions. Using this mathematical apparatus, we find three close-in very low-mass companions (two of them in the brown dwarf mass domain) orbiting main-sequence stars (KOI-554, KOI-1074, and KOI-3728), and reject the planetary nature of the transiting objects (thus classifying them as false positives). At the same time, the detection of the REB modulations and the transit/eclipse signal allows the measurement of their mass and radius, which can provide important constraints for modeling their interiors, since just a few cases of low-mass eclipsing binaries are known. Additionally, these new systems can help constrain the similarities in the formation process of the more massive and close-in planets (hot Jupiters), brown dwarfs, and very low-mass companions.
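    A minimal sketch of a circular-orbit REB light-curve model may clarify the three effects named above (sign conventions and amplitudes here are illustrative assumptions, not the paper's general eccentric-orbit equations):

```python
import numpy as np

def reb_model(phase, a_refl, a_ellip, a_beam, f0=1.0):
    """Relative flux vs. orbital phase (phase 0 = mid-transit) for a
    circular orbit: reflection, ellipsoidal, and Doppler-beaming terms."""
    phi = 2.0 * np.pi * phase
    reflection = -a_refl * np.cos(phi)          # companion's day side hidden at transit
    ellipsoidal = -a_ellip * np.cos(2.0 * phi)  # two tidal-bulge maxima per orbit
    beaming = a_beam * np.sin(phi)              # Doppler beaming, sign convention illustrative
    return f0 + reflection + ellipsoidal + beaming
```

    Fitting the amplitudes of these terms to a phase-folded light curve is what yields a photometric mass estimate for the companion.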

  10. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses, with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and decays more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with a gradually changing phase good candidates for implementing schemes for quantum computation and coherent information processing.

  11. A bio-physical basis of mathematics in synaptic function of the nervous system: a theory.

    PubMed

    Dempsher, J

    1980-01-01

    The purpose of this paper is to present a bio-physical basis of mathematics. The essence of the theory is that function in the nervous system is mathematical. The mathematics arises as a result of the interaction of energy (a wave with a precise curvature in space and time) and matter (a molecular or ionic structure with a precise form in space and time). In this interaction, both energy and matter play an active role; that is, the interaction results in a change in form of both energy and matter. There are at least six mathematical operations in a simple synaptic region. It is believed that the forms of both energy and matter are specific, and that their interaction is specific; that is, function is specific. Mathematics is thus taken out of the 'mind' and placed where it belongs - in nature and in the synaptic regions of the nervous system; in both places it results from a precise interaction between energy (in a precise form) and matter (in a precise structure).

  12. Precision experiments on mirror transitions at Notre Dame

    NASA Astrophysics Data System (ADS)

    Brodeur, Maxime; TwinSol Collaboration

    2016-09-01

    Thanks to extensive experimental efforts that led to a precise determination of important experimental quantities of superallowed pure Fermi transitions, we now have a very precise value for Vud that leads to a stringent test of the CKM matrix unitarity. Despite this achievement, measurements in other systems remain relevant, as conflicting results could uncover unknown systematic effects or even new physics. One such system is the superallowed mixed mirror transition, which can help refine the theoretical corrections used for pure Fermi transitions and improve the accuracy of Vud. However, as a corrected Ft-value determination from these systems requires the more challenging determination of the Fermi to Gamow-Teller mixing ratio, only five transitions, spanning from 19Ne to 37Ar, are currently fully characterized. To rectify the situation, an experimental program of precision experiments on mirror transitions, including precision half-life measurements and, in the future, the determination of the Fermi to Gamow-Teller mixing ratio, has started at the University of Notre Dame. This work is supported in part by the National Science Foundation.

  13. Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision

    PubMed Central

    Yang, Bingwei; Xie, Xinhao; Li, Duan

    2018-01-01

    Time-of-flight (TOF) based light detection and ranging (LiDAR) calculates distance from the time of flight between start and stop signals. In our lab-built LiDAR, the two ranging systems for measuring the flight time between start/stop signals are a time-to-digital converter (TDC) that counts the time between trigger signals and an analog-to-digital converter (ADC) that processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the range accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a novel statistical method, WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Thus, keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
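    A hedged sketch of waveform-based peak ranging in the spirit of WR-PK (the paper's exact detection method is not reproduced; the sample period and Gaussian pulse shapes below are assumptions), using parabolic interpolation for sub-sample peak timing:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def peak_time(samples, dt):
    """Sub-sample peak location via parabolic interpolation around the
    maximum of a digitized pulse; dt is the ADC sample period.
    Assumes the peak is not at the first or last sample."""
    i = int(np.argmax(samples))
    y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex of fitted parabola
    return (i + delta) * dt

def distance(start_wave, stop_wave, dt):
    """Range from the time between start- and stop-pulse peaks."""
    tof = peak_time(stop_wave, dt) - peak_time(start_wave, dt)
    return 0.5 * C * tof
```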

  14. Precision monitoring of bridge deck curvature change during replacement.

    DOT National Transportation Integrated Search

    2016-05-01

    This project was focused on development and deployment of a system for monitoring vertical displacement in bridge decks and bridge spans. The system uses high precision wireless inclinometer sensors to monitor inclinations at various points of a ...

  15. Advanced Pressure Coring System for Deep Earth Sampling (APRECOS)

    NASA Astrophysics Data System (ADS)

    Anders, E.; Rothfuss, M.; Müller, W. H.

    2009-04-01

    Nowadays the recovery of cores from boreholes is a standard operation. However, during that process the mechanical, physical, and chemical properties as well as living conditions for microorganisms are significantly altered. In-situ sampling is one approach to overcome the severe scientific limitations of conventional, depressurized core investigations by recovering, processing, and conducting experiments in the laboratory while maintaining unchanged environmental parameters. The most successful equipment today is the suite of tools developed within the EU-funded projects HYACE (Hydrate Autoclave Coring Equipment) and HYACINTH (Deployment of HYACE tools In New Tests on Hydrates) between 1997 and 2005. Within several DFG (German Research Foundation) projects, the Technical University Berlin is currently working on concepts to increase the present working pressure of 250 bar as well as to reduce logistical and financial expenses by merging redundant and analogous procedures and scaling down the considerable size of key components. It is also proposed to extend the range of applications for the wireline rotary pressure corer and the sub-sampling and transfer system to all types of soil conditions (soft to highly consolidated). New modifications enable the tools to be used in other pressure-related fields of research, such as unconventional gas exploration (coal-bed methane, tight gas, gas hydrate), CO2 sequestration, and microbiology of the deep biosphere. Expedient enhancement of an overall solution for pressure core retrieval, processing, and investigation will open the way for complete on-site, all-purpose, in-situ equipment. The advanced assembly would allow for executing the whole operation sequence of coring, non-destructive measurement, sub-sampling, and transfer into storage, measurement, and transportation chambers, all in sterile, anaerobic conditions and without depressurisation, in quick succession.
Extensive post-cruise handling and interim storage would be dispensable. The complete core processing and preparation of in-situ sample sections for worldwide shipping could be conducted within hours after retrieval.

  16. JPSS-1 VIIRS Version 2 At-Launch Relative Spectral Response Characterization and Performance

    NASA Technical Reports Server (NTRS)

    Moeller, Chris; Schwarting, Thomas; McIntire, Jeff; Moyer, Dave; Zeng, Jinan

    2017-01-01

    The relative spectral response (RSR) characterization of the JPSS-1 VIIRS spectral bands achieved at-launch status in the VIIRS Data Analysis Working Group February 2016 Version 2 RSR release. The Version 2 release improves upon the June 2015 Version 1 release by including December 2014 NIST T-SIRCUS spectral measurements of the VIIRS VisNIR bands in the analysis and by correcting the CO2 influence on the band M13 RSR. The T-SIRCUS based characterization is merged with the summer 2014 SpMA based characterization of the VisNIR bands (Version 1 release) to yield a fused RSR for these bands, combining the strengths of the T-SIRCUS and SpMA measurement systems. The M13 RSR is updated by applying a model-based correction to mitigate CO2 attenuation of the SpMA source signal that occurred during the M13 spectral measurements. The Version 2 release carries forward the Version 1 RSR for those bands that were not updated (M8-M12, M14-M16AB, I3-I5, DNBMGS). The Version 2 release includes band-average (over all detectors and subsamples) RSR plus supporting RSR for each detector and subsample. The at-launch band-average RSR have been used to populate Look-Up Tables supporting the sensor data record and environmental data record at-launch science products. Spectral performance metrics show that the JPSS-1 VIIRS RSR are compliant with specifications, with a few minor exceptions. The Version 2 release, which replaces the Version 1 release, is currently available on the password-protected NASA JPSS-1 eRooms under EAR99 control.

  17. Technical Note: Interleaved Bipolar Acquisition and Low-rank Reconstruction for Water-Fat Separation in MRI.

    PubMed

    Cho, JaeJin; Park, HyunWook

    2018-05-17

    To acquire interleaved bipolar data and reconstruct the full data using the low-rank property for water-fat separation. Bipolar acquisition suffers from issues related to gradient switching, the opposite gradient polarities, and other system imperfections, which prevent accurate water-fat separation. In this study, an interleaved bipolar acquisition scheme and a low-rank reconstruction method are proposed to reduce the issues from the bipolar gradients while achieving a short imaging time. The proposed interleaved bipolar acquisition scheme collects echo-time signals from both gradient polarities; however, the sequence increases the imaging time. To reduce the imaging time, the signals were subsampled along every dimension of k-space. The low-rank property of the bipolar acquisition was defined and exploited to estimate the full data from the acquired subsampled data. To eliminate the bipolar issues, the water-fat separation was performed separately for each gradient polarity, and the results for the positive and negative gradient polarities were combined after the separation. A phantom study and in-vivo experiments were conducted on a 3T Siemens Verio system. The results of the proposed method were compared with those of the fully sampled interleaved bipolar acquisition and of Soliman's method, the previous water-fat separation approach for reducing the issues of bipolar gradients and accelerating the interleaved bipolar acquisition. The proposed method provided accurate water and fat images without the issues of bipolar gradients and demonstrated better performance than the previous methods. Water-fat separation using bipolar acquisition has several benefits, including a short echo-spacing time, but it suffers from bipolar-gradient issues such as strong gradient switching, system imperfection, and eddy-current effects. This study demonstrated that accurate water-fat separated images can be obtained with the proposed interleaved bipolar acquisition and low-rank reconstruction, retaining the benefits of the bipolar acquisition while reducing the bipolar-gradient issues with a short imaging time. This article is protected by copyright. All rights reserved.
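    The paper's specific low-rank reconstruction is not reproduced here, but the general idea of estimating full data from subsampled data via a low-rank prior can be sketched with generic singular value thresholding (the threshold and iteration count below are arbitrary assumptions):

```python
import numpy as np

def svt_complete(y, mask, tau=None, n_iter=300):
    """Estimate a full matrix from subsampled data `y` (zeros at unknown
    entries; `mask` True where sampled) by iterative singular value
    thresholding with a data-consistency step."""
    if tau is None:
        tau = 0.02 * np.linalg.norm(y, ord=2)  # heuristic shrinkage threshold
    x = y.copy()
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        x = (u * np.maximum(s - tau, 0.0)) @ vt  # soft-threshold singular values
        x[mask] = y[mask]                        # keep the measured samples
    return x
```

    In MRI the same principle is applied to structured matrices built from multi-echo k-space data rather than to the raw image directly.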

  18. Does the Presence of Planets Affect the Frequency and Properties of Extrasolar Kuiper Belts? Results from the Herschel Debris and Dunes Surveys

    NASA Astrophysics Data System (ADS)

    Moro-Martín, A.; Marshall, J. P.; Kennedy, G.; Sibthorpe, B.; Matthews, B. C.; Eiroa, C.; Wyatt, M. C.; Lestrade, J.-F.; Maldonado, J.; Rodriguez, D.; Greaves, J. S.; Montesinos, B.; Mora, A.; Booth, M.; Duchêne, G.; Wilner, D.; Horner, J.

    2015-03-01

    The study of the planet-debris disk connection can shed light on the formation and evolution of planetary systems and may help “predict” the presence of planets around stars with certain disk characteristics. In preliminary analyses of subsamples of the Herschel DEBRIS and DUNES surveys, Wyatt et al. and Marshall et al. identified a tentative correlation between debris and the presence of low-mass planets. Here we use the cleanest possible sample out of these Herschel surveys to assess the presence of such a correlation, discarding stars without known ages, with ages < 1 Gyr, and with binary companions < 100 AU to rule out possible correlations due to effects other than planet presence. In our resulting subsample of 204 FGK stars, we do not find evidence that debris disks are more common or more dusty around stars harboring high-mass or low-mass planets compared to a control sample without identified planets. There is no evidence either that the characteristic dust temperature of the debris disks around planet-bearing stars is any different from that in debris disks without identified planets, nor that debris disks are more or less common (or more or less dusty) around stars harboring multiple planets compared to single-planet systems. Diverse dynamical histories may account for the lack of correlations. The data show a correlation between the presence of high-mass planets and stellar metallicity, but no correlation between the presence of low-mass planets or debris and stellar metallicity. Comparing the observed cumulative distribution of fractional luminosity to those expected from a Gaussian distribution in logarithmic scale, we find that a distribution centered on the solar system’s value fits the data well, while one centered at 10 times this value can be rejected. 
This is of interest in the context of future terrestrial planet detection and characterization because it indicates that there are good prospects for finding a large number of debris disk systems (i.e., with evidence of harboring planetesimals, the building blocks of planets) with exozodiacal emission low enough to be appropriate targets for an ATLAST-type mission to search for biosignatures.

  19. Evaluation of the precision agricultural landscape modeling system (PALMS) in the semiarid Texas southern high plains

    USDA-ARS?s Scientific Manuscript database

    Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...

  20. Evaluation of the Precision Agricultural Landscape Modeling System (PALMS) in the Semiarid Texas Southern High Plains

    USDA-ARS?s Scientific Manuscript database

    Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...

  1. Single photon ranging system using two wavelengths laser and analysis of precision

    NASA Astrophysics Data System (ADS)

    Chen, Yunfei; He, Weiji; Miao, Zhuang; Gu, Guohua; Chen, Qian

    2013-09-01

    Laser ranging systems based on time-correlated single photon counting technology and single-photon detectors feature high precision, low emitted energy, and other advantages. In this paper, we established a single photon laser ranging system that uses a supercontinuum laser as the light source and two wavelengths (532 nm and 830 nm) of the echo signal as the stop signal. We propose a new method capable of improving the performance of a single photon ranging system. The method uses two single-photon detectors to receive the two different wavelength signals simultaneously. We extracted the firings of the two detectors triggered by the same laser pulse and took the mean time of the two firings as the combined detection time of flight. Detection through two channels at two wavelengths effectively improves the detection precision and decreases the false alarm probability. Finally, an experimental single photon ranging system was established. Through extensive experiments, we measured the system precision using both single and dual wavelengths and verified the effectiveness of the method.
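    The combination rule described above, averaging the two detectors' firings from the same pulse, improves timing precision by roughly √2 when the channel jitters are independent, which can be checked numerically (the jitter value and target range below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
C = 299_792_458.0  # speed of light, m/s

true_tof = 66.7e-9   # time of flight for a ~10 m target
jitter = 0.5e-9      # assumed per-detector timing jitter (std dev)
n = 10_000           # number of detected pulse pairs

t532 = true_tof + jitter * rng.standard_normal(n)  # 532 nm channel firings
t830 = true_tof + jitter * rng.standard_normal(n)  # 830 nm channel firings
t_comb = 0.5 * (t532 + t830)  # mean of the two firings per pulse

# The combined channel's spread is ~1/sqrt(2) of a single channel's.
range_precision = 0.5 * C * np.std(t_comb)  # one-sigma range precision, m
```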

  2. Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Guan, Weipeng; Li, Simin; Wu, Yuxiang

    2018-04-01

    To improve the precision of indoor positioning and realize three-dimensional positioning, a reversed indoor positioning system based on visible light communication (VLC) using a genetic algorithm (GA) is proposed. To solve the problem of interference between signal sources, CDMA modulation is used: each light-emitting diode (LED) in the system broadcasts a unique identity (ID) code. The receiver receives a mixed signal from every LED reference point; by the orthogonality of the spreading codes in CDMA modulation, the ID information and intensity-attenuation information from every LED can be obtained. According to the positioning principle of received signal strength (RSS), the coordinates of the receiver can be determined. Due to system noise and imperfections of the devices used in the system, the distances between receiver and transmitters deviate from their real values, resulting in positioning error. By introducing error-correction factors into the global parallel search of the genetic algorithm, the coordinates of the receiver in three-dimensional space can be determined precisely. Both simulation and experimental results show that in practical application scenarios the proposed positioning system can realize high-precision positioning service.
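    A hedged sketch of the RSS positioning step (a simple inverse-square channel and a brute-force grid search stand in for the Lambertian VLC channel model and the modified genetic algorithm; the LED layout and calibration constant are assumptions):

```python
import numpy as np

# Assumed LED reference points on a 3 m ceiling (x, y, z in meters).
leds = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0],
                 [0.0, 4.0, 3.0], [4.0, 4.0, 3.0]])
P0 = 1.0  # assumed power received at 1 m from an LED

def rss(pos):
    """Inverse-square received-power model, a simplification of the
    Lambertian VLC channel used in such systems."""
    d = np.linalg.norm(leds - pos, axis=1)
    return P0 / d**2

def locate(measured, step=0.05):
    """Brute-force grid search minimizing squared RSS residuals
    (a stand-in for the GA's global parallel search)."""
    xs = np.arange(0.0, 4.0 + step, step)
    zs = np.arange(0.0, 2.5 + step, step)
    grid = np.array([[x, y, z] for x in xs for y in xs for z in zs])
    d = np.linalg.norm(grid[:, None, :] - leds[None, :, :], axis=2)
    err = np.sum((P0 / d**2 - measured) ** 2, axis=1)
    return grid[np.argmin(err)]
```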

  3. Precision Farming and Precision Pest Management: The Power of New Crop Production Technologies

    PubMed Central

    Strickland, R. Mack; Ess, Daniel R.; Parsons, Samuel D.

    1998-01-01

    The use of new technologies including Geographic Information Systems (GIS), the Global Positioning System (GPS), Variable Rate Technology (VRT), and Remote Sensing (RS) is gaining acceptance in the present high-technology, precision agricultural industry. GIS provides the ability to link multiple data values for the same geo-referenced location, and provides the user with a graphical visualization of such data. When GIS is coupled with GPS and RS, management decisions can be applied in a more precise "micro-managed" manner by using VRT techniques. Such technology holds the potential to reduce agricultural crop production costs as well as crop and environmental damage. PMID:19274236

  4. Measurement of whole tire profile

    NASA Astrophysics Data System (ADS)

    Yang, Yongyue; Jiao, Wenguang

    2010-08-01

    In this paper, a precision measuring device is developed for obtaining the characteristic curve of a tire profile and its geometric parameters. It consists of a laser displacement measurement unit, a closed-loop precision two-dimensional coordinate table, a step motor control system, and a fast data acquisition and analysis system. Based on laser triangulation, a data map of the tire profile and the coordinate values of all points can be obtained through the corresponding data transformation. The device has a compact structure, convenient control, a simple hardware circuit design, and high measurement precision. Experimental results indicate that the measurement precision meets the customer accuracy requirement of +/-0.02 mm.

  5. High-precision multiband spectroscopy of ultracold fermions in a nonseparable optical lattice

    NASA Astrophysics Data System (ADS)

    Fläschner, Nick; Tarnowski, Matthias; Rem, Benno S.; Vogel, Dominik; Sengstock, Klaus; Weitenberg, Christof

    2018-05-01

    Spectroscopic tools are fundamental for the understanding of complex quantum systems. Here, we demonstrate high-precision multiband spectroscopy in a graphenelike lattice using ultracold fermionic atoms. From the measured band structure, we characterize the underlying lattice potential with a relative error of 1.2 ×10-3 . Such a precise characterization of complex lattice potentials is an important step towards precision measurements of quantum many-body systems. Furthermore, we explain the excitation strengths into different bands with a model and experimentally study their dependency on the symmetry of the perturbation operator. This insight suggests the excitation strengths as a suitable observable for interaction effects on the eigenstates.

  6. Design and testing of a 750MHz CW-EPR digital console for small animal imaging.

    PubMed

    Sato-Akaba, Hideo; Emoto, Miho C; Hirata, Hiroshi; Fujii, Hirotada G

    2017-11-01

    This paper describes the development of a digital console for three-dimensional (3D) continuous wave electron paramagnetic resonance (CW-EPR) imaging of a small animal, to improve the signal-to-noise ratio and lower the cost of the EPR imaging system. An RF generation board, an RF acquisition board, and a digital signal processing (DSP) & control board were built for the digital EPR detection. Direct sampling of the reflected RF signal from a resonator (approximately 750MHz), which contains the EPR signal, was carried out using a band-pass subsampling method. A direct automatic control system to reduce the reflection from the resonator was proposed and implemented in the digital EPR detection scheme. All DSP tasks were carried out in field programmable gate array ICs. In vivo 3D imaging of nitroxyl radicals in a mouse's head was successfully performed.
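    The band-pass subsampling idea, deliberately sampling below the carrier frequency so the signal folds to a convenient lower frequency, can be illustrated with the standard aliasing formula (the console's actual sampling rate is not stated in the abstract; 400 MS/s is an assumed example):

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency after sampling: the carrier folds into the
    first Nyquist zone [0, f_sample/2] (band-pass subsampling)."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# Example: a 750 MHz resonator signal sampled at an assumed 400 MS/s
# folds to 50 MHz (750 % 400 = 350, then 400 - 350 = 50).
f_if = alias_frequency(750e6, 400e6)
```

    The design constraint is that the signal bandwidth must fit inside one Nyquist zone so the fold does not overlap the signal with its own image.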

  7. Design and testing of a 750 MHz CW-EPR digital console for small animal imaging

    NASA Astrophysics Data System (ADS)

    Sato-Akaba, Hideo; Emoto, Miho C.; Hirata, Hiroshi; Fujii, Hirotada G.

    2017-11-01

    This paper describes the development of a digital console for three-dimensional (3D) continuous wave electron paramagnetic resonance (CW-EPR) imaging of a small animal, to improve the signal-to-noise ratio and lower the cost of the EPR imaging system. An RF generation board, an RF acquisition board, and a digital signal processing (DSP) & control board were built for the digital EPR detection. Direct sampling of the reflected RF signal from a resonator (approximately 750 MHz), which contains the EPR signal, was carried out using a band-pass subsampling method. A direct automatic control system to reduce the reflection from the resonator was proposed and implemented in the digital EPR detection scheme. All DSP tasks were carried out in field programmable gate array ICs. In vivo 3D imaging of nitroxyl radicals in a mouse's head was successfully performed.

  8. Modernizing Systems and Software: How Evolving Trends in Future Trends in Systems and Software Technology Bode Well for Advancing the Precision of Technology

    DTIC Science & Technology

    2009-04-23

    Fragmented briefing excerpt: "... of Software-Intensive Systems is Increasing"; "Need for increased functionality will be a forcing function to bring the fields of software and systems engineering [into] Continued Partnership"; "How Evolving Trends in Systems and Software Technologies Bode Well for Advancing the Precision of Technology".

  9. Design of a laser navigation system for the inspection robot used in substation

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Sun, Yanhe; Sun, Deli

    2017-01-01

    Aimed at the deficiencies of the magnetic-guide and RFID systems currently used by substation inspection robots, a laser navigation system is designed, and the system structure and the methods of map building and positioning are introduced. The system performance was tested in a 500 kV substation, and the results show that the repeatability of the navigation system is precise enough for the robot to fulfill its inspection tasks.

  10. Fourier Transform Fringe-Pattern Analysis of an Absolute Distance Michelson Interferometer for Space-Based Laser Metrology.

    NASA Astrophysics Data System (ADS)

    Talamonti, James Joseph

    1995-01-01

    Future NASA proposals include the placement of optical interferometer systems in space for a wide variety of astrophysical studies including a vastly improved deflection test of general relativity, a precise and direct calibration of the Cepheid distance scale, and the determination of stellar masses (Reasenberg et al., 1988). There are also plans for placing large array telescopes on the moon with the ultimate objective of being able to measure angular separations of less than 10 micro-arcseconds (Burns, 1990). These and other future projects will require interferometric measurement of the (baseline) distance between the optical elements comprising the systems. Eventually, space-qualifiable interferometers capable of picometer (10^-12 m) relative precision and nanometer (10^-9 m) absolute precision will be required. A numerical model was developed to emulate the capabilities of systems performing interferometric noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer using a frequency-scanned laser. By processing computer-simulated data through our model, the ultimate precision is projected for ideal data and for data containing AM/FM noise. The precision is shown to be limited by non-linearities in the laser scan. A laboratory system was developed by implementing ultra-stable external cavity diode lasers into existing interferometric measuring techniques. The capabilities of the system were evaluated and increased by using the computer modeling results as guidelines for the data analysis. Experimental results measured 1-3 meter baselines with <20 micron precision. Comparison of the laboratory and modeling results showed that the laboratory precisions obtained were of the same order of magnitude as those predicted for computer-generated results under similar conditions. We believe that our model can be implemented as a tool in the design of new metrology systems capable of meeting the precisions required by space-based interferometers.
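
    The window comparison underlying the model can be sketched briefly: for a tone that does not fall exactly on an FFT bin, windows with stronger tapering suppress spectral leakage far from the peak, which is what makes isolating a spectral peak more accurate. All parameters below are arbitrary illustrations, not the dissertation's values:

```python
import numpy as np

N = 1024
fs = 1000.0
f0 = 123.4                        # deliberately off any FFT bin centre
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)

leak = {}
for name, w in [("rect", np.ones(N)),
                ("hanning", np.hanning(N)),
                ("blackman", np.blackman(N))]:
    spec = np.abs(np.fft.rfft(x * w))
    spec /= spec.max()
    p = int(np.argmax(spec))
    far = np.abs(np.arange(len(spec)) - p) > 20   # bins well away from the peak
    leak[name] = 20 * np.log10(spec[far].max())
    print(f"{name:9s} worst far-off-peak leakage: {leak[name]:7.1f} dB")
```

The rectangular window leaks the most, Hanning substantially less, and Blackman the least; the trade-off (not shown here) is a progressively wider main lobe.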

  11. Albedos of Centaurs, Jovian Trojans and Hildas

    NASA Astrophysics Data System (ADS)

    Romanishin, William

    2017-01-01

    I present optical V band albedo distributions for samples of outer solar system minor bodies including Centaurs, Jovian Trojans and Hildas. Diameters come almost entirely from the NEOWISE catalog (Mainzer et al. 2016, Planetary Data System). Optical photometry (H values) for about two-thirds of the approximately 2700 objects studied is from Pan-STARRS (Veres et al. 2015, Icarus 261, 34). The Pan-STARRS optical photometry is supplemented by H values from JPL Horizons (corrected to be on the same photometric system as the Pan-STARRS data) for the objects in the NEOWISE catalog that are not in the Pan-STARRS catalog. I compare the albedo distributions of various pairs of subsamples using the nonparametric Wilcoxon rank sum test. Examples of potentially interesting comparisons include: (1) the median L5 Trojan cloud albedo is about 10% darker than that of the L4 cloud at a high level of statistical significance and (2) the median albedo of the gray Centaurs lies between that of the L4 and L5 Trojan groups.
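
    The Wilcoxon rank-sum comparison used here is straightforward to sketch in pure NumPy. The albedo values below are made up purely for illustration (drawn so that one subsample is ~10% darker, echoing the L4/L5 finding); the normal approximation ignores ties, which is fine for continuous data:

```python
import numpy as np

def rank_sum_z(a, b):
    """Wilcoxon rank-sum z statistic (normal approximation, ties ignored)."""
    data = np.concatenate([a, b])
    ranks = data.argsort().argsort() + 1.0     # ranks 1..N of the pooled sample
    n1, n2 = len(a), len(b)
    w = ranks[:n1].sum()                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma

rng = np.random.default_rng(0)
# Hypothetical V-band albedo subsamples (values invented for the sketch)
l4 = rng.normal(0.072, 0.015, 300)
l5 = rng.normal(0.065, 0.015, 300)
z = rank_sum_z(l4, l5)
print(f"rank-sum z = {z:.1f}")
```

A |z| this large corresponds to a very small two-sided p-value, i.e. a highly significant median difference, as the abstract reports for the L4/L5 clouds.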

  12. Extended use of electronic health records by primary care physicians: Does the electronic health record artefact matter?

    PubMed

    Raymond, Louis; Paré, Guy; Marchand, Marie

    2017-04-01

    The deployment of electronic health record systems is deemed to play a decisive role in the transformations currently being implemented in primary care medical practices. This study aims to characterize electronic health record systems from the perspective of family physicians. To achieve this goal, we conducted a survey of physicians practising in private clinics located in Quebec, Canada. We used valid responses from 331 respondents who were found to be representative of the larger population. Data provided by the physicians using the top three electronic health record software products were analysed in order to obtain statistically adequate sub-sample sizes. Significant differences were observed among the three products with regard to their functional capability. The extent to which each of the electronic health record functionalities are used by physicians also varied significantly. Our results confirm that the electronic health record artefact 'does matter', its clinical functionalities explaining why certain physicians make more extended use of their system than others.

  13. Measuring health system resource use for economic evaluation: a comparison of data sources.

    PubMed

    Pollicino, Christine; Viney, Rosalie; Haas, Marion

    2002-01-01

    A key challenge for evaluators and health system planners is the identification, measurement and valuation of resource use for economic evaluation. Accurately capturing all significant resource use is particularly difficult in the Australian context where there is no comprehensive database from which researchers can draw. Evaluators and health system planners need to consider different approaches to data collection for estimating resource use for economic evaluation, and the relative merits of the different data sources available. This paper illustrates the issues that arise in using different data sources using a sub-sample of the data being collected for an economic evaluation. Specifically, it compares the use of Australia's largest administrative database on resource use, the Health Insurance Commission database, with the use of patient-supplied data. The extent of agreement and discrepancies between the two data sources is investigated. Findings from this study and recommendations as to how to deal with different data sources are presented.

  14. A radial measurement of the galaxy tidal alignment magnitude with BOSS data

    NASA Astrophysics Data System (ADS)

    Martens, Daniel; Hirata, Christopher M.; Ross, Ashley J.; Fang, Xiao

    2018-07-01

    The anisotropy of galaxy clustering in redshift space has long been used to probe the rate of growth of cosmological perturbations. However, if galaxies are aligned by large-scale tidal fields, then a sample with an orientation-dependent selection effect has an additional anisotropy imprinted on to its correlation function. We use the LOWZ and CMASS catalogues of SDSS-III BOSS Data Release 12 to divide galaxies into two subsamples based on their offset from the Fundamental Plane, which should be correlated with orientation. These subsamples must trace the same underlying cosmology, but have opposite orientation-dependent selection effects. We measure the clustering parameters of each subsample and compare them in order to calculate the dimensionless parameter B, a measure of how strongly galaxies are aligned by gravitational tidal fields. We found that for CMASS (LOWZ), the measured B was -0.024 ± 0.015 (-0.030 ± 0.016). This result can be compared to the theoretical predictions of Hirata, who argued that since galaxy formation physics does not depend on the direction of the `observer,' the same intrinsic alignment parameters that describe galaxy-ellipticity correlations should also describe intrinsic alignments in the radial direction. We find that the ratio of observed to theoretical values is 0.51 ± 0.32 (0.77 ± 0.41) for CMASS (LOWZ). We combine the results to obtain a total Obs/Theory = 0.61 ± 0.26. This measurement constitutes evidence (between 2σ and 3σ) for radial intrinsic alignments, and is consistent with theoretical expectations (<2σ difference).
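
    The quoted combination of the two catalogue ratios can be reproduced, up to rounding, by standard inverse-variance weighting. A sketch assuming the two measurements are independent (the paper's own combination may handle correlations differently, which would explain the small difference in the combined uncertainty):

```python
import numpy as np

# Obs/Theory ratios quoted for the two catalogues: (value, 1-sigma error)
measurements = [(0.51, 0.32),   # CMASS
                (0.77, 0.41)]   # LOWZ

w = np.array([1 / s**2 for _, s in measurements])   # inverse-variance weights
x = np.array([v for v, _ in measurements])
combined = (w * x).sum() / w.sum()
sigma = 1 / np.sqrt(w.sum())
print(f"combined Obs/Theory = {combined:.2f} ± {sigma:.2f}")
```

This recovers the central value 0.61 exactly; the uncertainty comes out near 0.25 under the independence assumption, close to the quoted 0.26.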

  15. Mindfulness and Psychological Health Outcomes: A Latent Profile Analysis among Military Personnel and College Students.

    PubMed

    Bravo, Adrian J; Pearson, Matthew R; Kelley, Michelle L

    2018-02-01

    Previous research on trait mindfulness facets using person-centered analyses (e.g., latent profile analysis [LPA]) has identified four distinct mindfulness profiles among college students: a high mindfulness group (high on all facets of the Five-Factor Mindfulness Questionnaire [FFMQ]), a judgmentally observing group (highest on observing, but low on non-judging of inner experience and acting with awareness), a non-judgmentally aware group (high on non-judging of inner experience and acting with awareness, but very low on observing), and a low mindfulness group (low on all facets of the FFMQ). In the present study, we used LPA to identify distinct mindfulness profiles in a community-based sample of U.S. military personnel (majority veterans; n = 407) and non-military college students (n = 310) and compare these profiles on symptoms of psychological health outcomes (e.g., suicidality, PTSD, anxiety, rumination) and percentage of participants exceeding clinically significant cut-offs for depressive symptoms, substance use, and alcohol use. In the subsample of college students, we replicated previous research and found four distinct mindfulness profiles; however, in the military subsample we found three distinct mindfulness profiles (a combined low mindfulness/judgmentally observing class). In both subsamples, we found that the most adaptive profile was the "high mindfulness" profile (i.e., demonstrated the lowest scores on all psychological symptoms and the lowest probability of exceeding clinical cut-offs). Based on these findings, we suggest that the comprehensive examination of an individual's mindfulness profile could help clinicians tailor interventions/treatments that capitalize on individuals' specific strengths and work to address their specific deficits.
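
    Latent profile analysis is closely related to fitting a Gaussian mixture over the facet scores and reading each component as a "profile." A bare-bones EM sketch on synthetic data (the profile means, cluster sizes, spherical-covariance simplification, and deterministic initialization are all illustrative assumptions, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic FFMQ-like facet scores (5 facets) for three invented profiles:
# low, high, and judgmentally-observing (high observe, low elsewhere)
true_means = np.array([[2.0, 2.0, 2.0, 2.0, 2.0],
                       [4.0, 4.0, 4.0, 4.0, 4.0],
                       [4.2, 2.5, 2.0, 1.8, 2.5]])
X = np.vstack([rng.normal(m, 0.3, (120, 5)) for m in true_means])

def fit_profiles(X, init_idx, iters=50):
    """Bare-bones EM for a spherical Gaussian mixture: an LPA-style analogue."""
    k = len(init_idx)
    mu = X[init_idx].copy()
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)    # squared distances
        logp = np.log(pi) - 0.5 * d2 / var - 2.5 * np.log(var)
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)                      # responsibilities
        nk = r.sum(0)
        mu = (r.T @ X) / nk[:, None]
        var = (r * d2).sum(0) / (5 * nk)
        pi = nk / len(X)
    return mu, r.argmax(1)

# Seed one component inside each block (indices chosen for the sketch)
mu, labels = fit_profiles(X, [0, 120, 240])
print(np.round(mu, 1))    # recovered profile means
```

In practice LPA software also compares solutions with different numbers of classes via fit indices (BIC, entropy), which is how the three- vs four-profile difference between samples would be detected.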

  16. Trajectories of Multidimensional Caregiver Burden in Chinese Informal Caregivers for Dementia: Evidence from Exploratory and Confirmatory Factor Analysis of the Zarit Burden Interview.

    PubMed

    Li, Dan; Hu, Nan; Yu, Yueyi; Zhou, Aihong; Li, Fangyu; Jia, Jianping

    2017-01-01

    Despite its popularity, the latent structure of the 22-item Zarit Burden Interview (ZBI) remains unclear, and no study has explored how multidimensional caregiver burden changes. The aim of this work was to validate the latent structure of the ZBI and to investigate how multidimensional burden evolves with increasing global burden. We studied 1,132 dyads of dementia patients and their informal caregivers. The caregivers completed the ZBI and a questionnaire regarding caregiving. The total sample was randomly split into two equal subsamples. Exploratory factor analysis (EFA) was performed in the first subsample. In the second subsample, confirmatory factor analysis (CFA) was conducted to validate models generated from EFA. The mean weighted factor score was calculated to assess the change of dimensional burden against the increasing ZBI total score. The results of EFA and CFA supported a five-factor structure, including role strain, personal strain, incompetency, dependency, and guilt, as having the best goodness-of-fit. The trajectories of multidimensional burden suggested that three different dimensions (guilt, role strain and personal strain) became the main subtype of burden in sequence as the ZBI total score increased from mild to moderate. The dependency factor contributed prominently to the total burden in the severe stage. The five-factor ZBI is a psychometrically robust measure for assessing multidimensional burden in Chinese caregivers. The changes of multidimensional burden have deepened our understanding of the psychological characteristics of caregiving beyond a single total score and may be useful for developing interventions to reduce caregiver burden.
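
    The split-sample strategy (explore on one random half, confirm on the other) can be sketched with principal-axis loadings and Tucker's congruence coefficient. The two-factor synthetic data below stand in for the real 22 ZBI items, whose responses are not available; loadings and sample sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 22-item responses driven by two latent factors
n, items = 1000, 22
load = np.zeros((items, 2))
load[:13, 0], load[13:, 1] = 0.8, 0.7
X = rng.normal(size=(n, 2)) @ load.T + rng.normal(scale=0.6, size=(n, items))

# Random split into two equal subsamples
idx = rng.permutation(n)
A, B = X[idx[:500]], X[idx[500:]]

def loadings(M, k=2):
    """Principal-axis loadings from the eigendecomposition of the correlation matrix."""
    R = np.corrcoef(M, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(vals[order])

LA, LB = loadings(A), loadings(B)
# Tucker's congruence coefficient between matched factors across the halves
phi = [abs(LA[:, j] @ LB[:, j]) / (np.linalg.norm(LA[:, j]) * np.linalg.norm(LB[:, j]))
       for j in range(2)]
print("factor congruence across halves:", np.round(phi, 3))
```

Congruence coefficients near 1 indicate that the structure found by EFA in one half is reproduced in the other, which is the informal logic behind following EFA with CFA on an independent subsample.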

  17. Impact of Menthol Smoking on Nicotine Dependence for Diverse Racial/Ethnic Groups of Daily Smokers

    PubMed Central

    Soulakova, Julia N.; Danczak, Ryan R.

    2017-01-01

    Introduction: The aims of this study were to evaluate whether menthol smoking and race/ethnicity are associated with nicotine dependence in daily smokers. Methods: The study used two subsamples of U.S. daily smokers who responded to the 2010–2011 Tobacco Use Supplement to the Current Population Survey. The larger subsample consisted of 18,849 non-Hispanic White (NHW), non-Hispanic Black (NHB), and Hispanic (HISP) smokers. The smaller subsample consisted of 1112 non-Hispanic American Indian/Alaska Native (AIAN), non-Hispanic Asian (ASIAN), non-Hispanic Hawaiian/Pacific Islander (HPI), and non-Hispanic Multiracial (MULT) smokers. Results: For larger (smaller) groups the rates were 45% (33%) for heavy smoking (16+ cig/day), 59% (51%) for smoking within 30 min of awakening (Sw30), and 14% (14%) for night-smoking. Overall, the highest prevalence of menthol smoking corresponded to NHB and HPI (≥65%), followed by MULT and HISP (31%–37%), and then by AIAN, NHW, and ASIAN (22%–27%) smokers. For larger racial/ethnic groups, menthol smoking was negatively associated with heavy smoking, not associated with Sw30, and positively associated with night-smoking. For smaller groups, menthol smoking was not associated with any measure, but the rates of heavy smoking, Sw30, and night-smoking varied across the groups. Conclusions: The diverse associations between menthol smoking and nicotine dependence may be due to distinctions among the nicotine dependence measures, i.e., individually, each measure assesses a specific smoking behavior. Menthol smoking may be associated with promoting smoking behaviors. PMID:28085040

  18. Characterizing user engagement with health app data: a data mining approach.

    PubMed

    Serrano, Katrina J; Coa, Kisha I; Yu, Mandi; Wolff-Hughes, Dana L; Atienza, Audie A

    2017-06-01

    The use of mobile health applications (apps) especially in the area of lifestyle behaviors has increased, thus providing unprecedented opportunities to develop health programs that can engage people in real-time and in the real-world. Yet, relatively little is known about which factors relate to the engagement of commercially available apps for health behaviors. This exploratory study examined behavioral engagement with a weight loss app, Lose It! and characterized higher versus lower engaged groups. Cross-sectional, anonymized data from Lose It! were analyzed (n = 12,427,196). This dataset was randomly split into 24 subsamples and three were used for this study (total n = 1,011,008). Classification and regression tree methods were used to identify subgroups of user engagement with one subsample, and descriptive analyses were conducted to examine other group characteristics associated with engagement. Data mining validation methods were conducted with two separate subsamples. On average, users engaged with the app for 29 days. Six unique subgroups were identified, and engagement for each subgroup varied, ranging from 3.5 to 172 days. Highly engaged subgroups were primarily distinguished by the customization of diet and exercise. Those less engaged were distinguished by weigh-ins and the customization of diet. Results were replicated in further analyses. Commercially-developed apps can reach large segments of the population, and data from these apps can provide insights into important app features that may aid in user engagement. Getting users to engage with a mobile health app is critical to the success of apps and interventions that are focused on health behavior change.
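
    The heart of the classification-and-regression-tree method used above is choosing, at each node, the split that most reduces within-node variance of the outcome. A single-split sketch on made-up engagement data, where the feature names and effect sizes are invented to mimic the reported finding that diet customization dominates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented binary app-usage features and a synthetic engagement outcome (days)
n = 5000
diet = rng.integers(0, 2, n)
exercise = rng.integers(0, 2, n)
weighins = rng.integers(0, 2, n)
days = 5 + 60 * diet + 40 * diet * exercise + 10 * weighins + rng.exponential(5, n)

def best_split(X, y, names):
    """One CART step: choose the binary feature whose split most reduces SSE."""
    base = ((y - y.mean()) ** 2).sum()
    gains = []
    for j, name in enumerate(names):
        left, right = y[X[:, j] == 0], y[X[:, j] == 1]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        gains.append((base - sse, name))
    return max(gains)

X = np.column_stack([diet, exercise, weighins])
gain, split_on = best_split(X, days, ["diet", "exercise", "weigh-ins"])
print("first split on:", split_on)
```

Recursing the same step inside each child node, with a stopping rule, yields the tree whose leaves correspond to the engagement subgroups described in the abstract.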

  19. A Radial Measurement of the Galaxy Tidal Alignment Magnitude with BOSS Data

    NASA Astrophysics Data System (ADS)

    Martens, Daniel; Hirata, Christopher M.; Ross, Ashley J.; Fang, Xiao

    2018-05-01

    The anisotropy of galaxy clustering in redshift space has long been used to probe the rate of growth of cosmological perturbations. However, if galaxies are aligned by large-scale tidal fields, then a sample with an orientation-dependent selection effect has an additional anisotropy imprinted onto its correlation function. We use the LOWZ and CMASS catalogs of SDSS-III BOSS Data Release 12 to divide galaxies into two sub-samples based on their offset from the Fundamental Plane, which should be correlated with orientation. These sub-samples must trace the same underlying cosmology, but have opposite orientation-dependent selection effects. We measure the clustering parameters of each sub-sample and compare them in order to calculate the dimensionless parameter B, a measure of how strongly galaxies are aligned by gravitational tidal fields. We found that for CMASS (LOWZ), the measured B was -0.024 ± 0.015 (-0.030 ± 0.016). This result can be compared to the theoretical predictions of Hirata (2009), who argued that since galaxy formation physics does not depend on the direction of the "observer," the same intrinsic alignment parameters that describe galaxy-ellipticity correlations should also describe intrinsic alignments in the radial direction. We find that the ratio of observed to theoretical values is 0.51 ± 0.32 (0.77 ± 0.41) for CMASS (LOWZ). We combine the results to obtain a total Obs/Theory = 0.61 ± 0.26. This measurement constitutes evidence (between 2 and 3σ) for radial intrinsic alignments, and is consistent with theoretical expectations (<2σ difference).

  20. Differences in the rotational properties of multiple stellar populations in M13: a faster rotation for the `extreme' chemical subpopulation

    NASA Astrophysics Data System (ADS)

    Cordero, M. J.; Hénault-Brunet, V.; Pilachowski, C. A.; Balbinot, E.; Johnson, C. I.; Varri, A. L.

    2017-03-01

    We use radial velocities from spectra of giants obtained with the WIYN telescope, coupled with existing chemical abundance measurements of Na and O for the same stars, to probe the presence of kinematic differences among the multiple populations of the globular cluster (GC) M13. To characterize the kinematics of various chemical subsamples, we introduce a method using Bayesian inference along with a Markov chain Monte Carlo algorithm to fit a six-parameter kinematic model (including rotation) to these subsamples. We find that the so-called extreme population (Na-enhanced and extremely O-depleted) exhibits faster rotation around the centre of the cluster than the other cluster stars, in particular, when compared with the dominant `intermediate' population (moderately Na-enhanced and O-depleted). The most likely difference between the rotational amplitude of this extreme population and that of the intermediate population is found to be ~4 km s⁻¹, with a 98.4 per cent probability that the rotational amplitude of the extreme population is larger than that of the intermediate population. We argue that the observed difference in rotational amplitudes, obtained when splitting subsamples according to their chemistry, is not a product of the long-term dynamical evolution of the cluster, but more likely a surviving feature imprinted early in the formation history of this GC and its multiple populations. We also find an agreement (within uncertainties) in the inferred position angle of the rotation axis of the different subpopulations considered. We discuss the constraints that these results may place on various formation scenarios.
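
    The Bayesian rotation fit can be sketched with a plain Metropolis sampler on synthetic radial velocities. The simple sinusoidal rotation model, flat priors, and all numbers below are illustrative assumptions; the paper's six-parameter model is considerably richer:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic giant-star radial velocities with a rotation signal:
# v = A * sin(PA - PA0) + Gaussian dispersion, A = 4 km/s, PA0 = 30 deg
n = 200
pa = rng.uniform(0, 2 * np.pi, n)
v = 4.0 * np.sin(pa - np.radians(30.0)) + rng.normal(0.0, 5.0, n)

def log_post(theta):
    """Log-posterior with flat priors on amplitude, axis angle and dispersion."""
    amp, pa0, sig = theta
    if not (0.0 < amp < 20.0 and 0.0 < sig < 20.0):
        return -np.inf
    resid = v - amp * np.sin(pa - pa0)
    return -0.5 * np.sum(resid**2) / sig**2 - n * np.log(sig)

# Plain Metropolis sampling of (amplitude, rotation-axis angle, dispersion)
theta = np.array([1.0, 0.0, 3.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.3, 0.1, 0.2])
    lp_new = log_post(prop)
    if np.log(rng.uniform()) < lp_new - lp:
        theta, lp = prop, lp_new
    chain.append(theta)
chain = np.array(chain[5000:])               # discard burn-in
print(f"posterior mean rotation amplitude: {chain[:, 0].mean():.1f} km/s")
```

Fitting the same model separately to chemically selected subsamples and comparing the posterior amplitude distributions is how a probability statement like "98.4 per cent" arises.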

  1. Development and psychometric testing of a new instrument to measure the caring behaviour of nurses in Italian acute care settings.

    PubMed

    Piredda, Michela; Ghezzi, Valerio; Fenizia, Elisa; Marchetti, Anna; Petitti, Tommasangelo; De Marinis, Maria Grazia; Sili, Alessandro

    2017-12-01

    To develop and psychometrically test the Italian-language Nurse Caring Behaviours Scale, a short measure of nurse caring behaviour as perceived by inpatients. Patient perceptions of nurses' caring behaviours are a predictor of care quality. Caring behaviours are culture-specific, but no measure of patient perceptions has previously been developed in Italy. Moreover, existing tools show unclear psychometric properties, are burdensome for respondents, or are not widely applicable. Instrument development and psychometric testing. Item generation included identifying and adapting items from existing measures of caring behaviours as perceived by patients. A pool of 28 items was evaluated for face validity. Content validity indexes were calculated for the resulting 15-item scale; acceptability and clarity were pilot tested with 50 patients. To assess construct validity, a sample of 2,001 consecutive adult patients admitted to a hospital in 2014 completed the scale and was split into two groups. Reliability was evaluated using nonlinear structural equation modelling coefficients. Measurement invariance was tested across subsamples. Item 15 loaded poorly in the exploratory factor analysis (n = 983) and was excluded from the final solution, positing a single latent variable with 14 indicators. This model fitted the data moderately well. The confirmatory factor analysis (n = 1018) returned similar results. Internal consistency was excellent in both subsamples. Full scalar invariance was reached, and no significant latent mean differences were detected across subsamples. The new instrument shows reasonable psychometric properties and is a promising short and widely applicable measure of inpatient perceptions of nurse caring behaviours. © 2017 John Wiley & Sons Ltd.

  2. SPECTROSCOPIC ABUNDANCES AND MEMBERSHIP IN THE WOLF 630 MOVING GROUP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bubar, Eric J.; King, Jeremy R., E-mail: ebubar@gmail.co, E-mail: jking2@ces.clemson.ed

    The concept of kinematic assemblages evolving from dispersed stellar clusters has remained contentious since Eggen's initial formulation of moving groups in the 1960s. With high-quality parallaxes from the Hipparcos space astrometry mission, distance measurements for thousands of nearby, seemingly isolated stars are currently available. With these distances, a high-resolution spectroscopic abundance analysis can be brought to bear on the alleged members of these moving groups. If a structure is a relic of an open cluster, the members can be expected to be monolithic in age and abundance in as much as homogeneity is observed in young open clusters. In this work, we have examined 34 putative members of the proposed Wolf 630 moving group using high-resolution stellar spectroscopy. The stars of the sample have been chemically tagged to determine abundance homogeneity and confirm the existence of a homogeneous subsample of 19 stars. Fitting the homogeneous subsample with Yale-Yonsei isochrones yields a single evolutionary sequence of ≈2.7 ± 0.5 Gyr. It is concluded that this 19-star subsample of the Wolf 630 moving group sample of 34 stars could represent a dispersed cluster with ⟨[Fe/H]⟩ = -0.01 ± 0.02 and an age of 2.7 ± 0.5 Gyr. In addition, chemical abundances of Na and Al in giants are examined for indications of enhancements as observed in field giants of old open clusters; overexcitation/ionization effects are explored in the cooler dwarfs of the sample; and oxygen is derived from the infrared triplet and the forbidden line at λ6300.

  3. Fraternal Birth Order and Extreme Right-Handedness as Predictors of Sexual Orientation and Gender Nonconformity in Men.

    PubMed

    Kishida, Mariana; Rahman, Qazi

    2015-07-01

    The present study explored whether there were relationships between number of older brothers, handedness, recalled childhood gender nonconformity (CGN), and sexual orientation in men. We used data from previous British studies conducted in our laboratory (N = 1,011 heterosexual men and 921 gay men). These men had completed measures of demographic variables, number and sex of siblings, CGN, and the Edinburgh Handedness Inventory. The results did not replicate the fraternal birth order effect. However, gay men had fewer "other siblings" than heterosexual men (even after controlling for the stopping-rule and family size). In a sub-sample (425 gay men and 478 heterosexual men) with data available on both sibling sex composition and handedness scores, gay men were found to show a significantly greater likelihood of extreme right-handedness and non-right-handedness compared to heterosexual men. There were no significant effects of sibling sex composition in this sub-sample. In a further sub-sample (N = 487) with data available on sibling sex composition, handedness, and CGN, we found that men with feminine scores on CGN were more extremely right-handed and had fewer other-siblings compared to masculine scoring men. Mediation analysis revealed that handedness was associated with sexual orientation directly and also indirectly through the mediating factor of CGN. We were unable to replicate the fraternal birth order effect in our archived dataset but there was evidence for a relationship among handedness, sexual orientation, and CGN. These data help narrow down the number of possible neurodevelopmental pathways leading to variations in male sexual orientation.

  4. Like Mother, Like Daughter? Dietary and Non-Dietary Bone Fracture Risk Factors in Mothers and Their Daughters

    PubMed Central

    SOBAS, Kamila; WADOLOWSKA, Lidia; SLOWINSKA, Malgorzata Anna; CZLAPKA-MATYASIK, Magdalena; WUENSTEL, Justyna; NIEDZWIEDZKA, Ewa

    2015-01-01

    Background: The aim of this study was to demonstrate similarities and differences between mothers and daughters regarding dietary and non-dietary risk factors for bone fractures and osteoporosis. Methods: The study was carried out in 2007–2010 on 712 mother–daughter pairs (mothers aged 29–59 years, daughters 12–21 years). In a sub-sample of 170 family pairs, forearm bone mineral density (BMD) was measured by dual-energy X-ray absorptiometry (DXA). The consumption of dairy products was determined with a semi-quantitative food frequency questionnaire (ADOS-Ca) and calcium intake from the daily diet was calculated. Results: The presence of risk factors for bone fractures in mothers and daughters was significantly correlated. The Spearman rank coefficient for dietary fracture-risk factors was 0.87 (P<0.05) in the whole sub-sample, 0.94 (P<0.05) in the bottom tercile of BMD, 0.82 (P<0.05) in the middle tercile, and 0.54 (P>0.05) in the upper tercile; for non-dietary fracture-risk factors it was 0.83 (P<0.05) in the whole sub-sample, 0.86 (P<0.05) in the bottom tercile, 0.93 (P<0.05) in the middle tercile, and 0.65 (P<0.05) in the upper tercile. Conclusions: Our results confirm the role of the family environment for bone health and document the stronger effect of negative family-environment factors, as compared to positive factors, on bone fracture risk. PMID:26576372
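
    The Spearman coefficient used here is simply the Pearson correlation of the ranks. A small sketch on synthetic mother-daughter scores generated to correlate at roughly the reported strength for dietary factors (0.87); the data are invented:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties here)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(5)
# Hypothetical fracture-risk scores for 170 mother-daughter pairs
mothers = rng.normal(size=170)
daughters = 0.87 * mothers + np.sqrt(1 - 0.87**2) * rng.normal(size=170)
rho = spearman(mothers, daughters)
print(f"Spearman rho = {rho:.2f}")
```

Because it works on ranks, Spearman's rho is insensitive to monotone transformations of the scores, which suits ordinal risk-factor scales like those summed here.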

  5. Design and initial evaluation of a portable in situ runoff and sediment monitoring device

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Cruse, Richard M.; Chen, Qiang; Li, Hao; Song, Chunyu; Zhang, Xingyi

    2014-11-01

    An inexpensive portable runoff and sediment monitoring device (RSMD) requiring no external electric power was developed for measuring water runoff and associated sediment loss from field plots ranging from 0.005 to 0.1 ha. The device consists of a runoff gauge and sediment-mixing and sectional-subsampling assemblies. The runoff hydrograph is determined using a calibrated tipping bucket. The sediment mixing assembly minimizes fluid splash while mixing the runoff water/sediment mixture prior to subsampling this material. Automatic flow-proportional sampling utilizes mechanical power supplied by the tipping bucket action, with power transmitted to the sample collection assembly via the tipping bucket pivot bar. Runoff is well-mixed and subdivided twice before subsamples are collected for analysis. The resolution of this device for a 100 m2 plot is 0.025 mm of runoff; the device is able to capture maximum flow rates up to 82 mm h⁻¹ in a plot of the same dimensions. Calibration results indicated the maximum error is 2.1% for estimating flow rate and less than 10% for sediment concentration in most of the flow range. The RSMD was assessed by measuring field runoff and soil loss from different tillage and slope treatments for a single natural rainfall event. Results were in close agreement with those in published literature, giving additional evidence that this device is performing acceptably well. The RSMD is uniquely adapted for a wide range of field sites, especially for those without electric power, making it a useful tool for studying soil management strategies.
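
    The quoted resolution is easy to check by converting the depth figures into volumes: 0.025 mm of runoff depth over a 100 m² plot corresponds to a 2.5 L tipping-bucket volume, and the 82 mm h⁻¹ maximum flow to about 2.3 L/s:

```python
# Resolution check: one tip of the calibrated bucket corresponds to 0.025 mm
# of runoff depth over a 100 m^2 plot
plot_area_m2 = 100.0
depth_per_tip_m = 0.025e-3                      # 0.025 mm in metres
tip_volume_m3 = depth_per_tip_m * plot_area_m2  # 2.5e-3 m^3 = 2.5 L
print(f"volume per tip: {tip_volume_m3 * 1000:.2f} L")

# Maximum measurable flow, 82 mm/h over the same plot, as a volumetric rate
q_m3_per_h = 82e-3 * plot_area_m2               # 8.2 m^3/h
print(f"peak flow: {q_m3_per_h * 1000 / 3600:.2f} L/s")
```

These volumes set the mechanical scale of the tipping bucket and of the subsampling splitters downstream of it.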

  6. Proposal for the creation of a national strategy for precision medicine in cancer: a position statement of SEOM, SEAP, and SEFH.

    PubMed

    Garrido, P; Aldaz, A; Vera, R; Calleja, M A; de Álava, E; Martín, M; Matías-Guiu, X; Palacios, J

    2018-04-01

    Precision medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person. Precision medicine is transforming clinical and biomedical research, as well as health care itself from a conceptual, as well as a methodological viewpoint, providing extraordinary opportunities to improve public health and lower the costs of the healthcare system. However, the implementation of precision medicine poses ethical-legal, regulatory, organizational, and knowledge-related challenges. Without a national strategy, precision medicine, which will be implemented one way or another, could take place without the appropriate planning that can guarantee technical quality, equal access of all citizens to the best practices, violating the rights of patients and professionals, and jeopardizing the solvency of the healthcare system. With this paper from the Spanish Societies of Medical Oncology, Pathology, and Hospital Pharmacy, we highlight the need to institute a consensual national strategy for the development of precision medicine in our country, review the national and international context, comment on the opportunities and challenges for implementing precision medicine, and outline the objectives of a national strategy on precision medicine in cancer.

  7. Development of an Integrated Thermocouple for the Accurate Sample Temperature Measurement During High Temperature Environmental Scanning Electron Microscopy (HT-ESEM) Experiments.

    PubMed

    Podor, Renaud; Pailhon, Damien; Ravaux, Johann; Brau, Henri-Pierre

    2015-04-01

    We have developed two integrated thermocouple (TC) crucible systems that allow precise measurement of sample temperature when using a furnace associated with an environmental scanning electron microscope (ESEM). Sample temperatures measured with these systems are precise (±5°C) and reliable. The TC crucible systems allow working with solids and liquids (silicate melts or ionic liquids), independent of the gas composition and pressure. These sample holder designs will allow end users to perform experiments at high temperature in the ESEM chamber with high precision control of the sample temperature.

  8. Toward 1-mm depth precision with a solid state full-field range imaging system

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine phase, and hence distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision on the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
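
    The per-pixel phase-to-range step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame count, the single simulated pixel, and the assumption that the stack spans exactly one beat cycle are all hypothetical; only the 10 MHz modulation frequency comes from the abstract.

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
F_MOD = 10e6     # 10 MHz modulation frequency (from the abstract)
N_FRAMES = 100   # frames spanning exactly one beat cycle (assumed)

def range_from_beat(frames):
    """Per-pixel range from a stack of beat-signal frames.

    frames: shape (N_FRAMES, H, W). With one beat cycle per stack,
    the beat phase sits in DFT bin 1 along the time axis.
    """
    bin1 = np.fft.fft(frames, axis=0)[1]   # complex beat amplitude
    phase = np.angle(bin1) % (2 * np.pi)   # phase delay in [0, 2*pi)
    # One phase cycle corresponds to half a modulation wavelength,
    # because the light travels out and back.
    return phase * C / (4 * np.pi * F_MOD)

# Simulate a single pixel viewing a target at 3 m.
true_range = 3.0
phi = 4 * np.pi * F_MOD * true_range / C
t = np.arange(N_FRAMES)
frames = (1.0 + 0.5 * np.cos(2 * np.pi * t / N_FRAMES + phi))[:, None, None]
est = range_from_beat(frames)[0, 0]
```

    At 10 MHz the unambiguous range is C / (2 * F_MOD) = 15 m, which is why raising the operating frequency improves precision at the cost of ambiguity interval.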

  9. Active-passive hybrid piezoelectric actuators for high-precision hard disk drive servo systems

    NASA Astrophysics Data System (ADS)

    Chan, Kwong Wah; Liao, Wei-Hsin

    2006-03-01

    Positioning precision is crucial to today's increasingly high-speed, high-capacity, high-data-density, and miniaturized hard disk drives (HDDs). The demand for higher-bandwidth servo systems that can quickly and precisely position the read/write head at high track densities is becoming more pressing. Recently, the idea of applying dual-stage actuators to track servo systems has been studied. Push-pull piezoelectric actuated devices have been developed as micro actuators for fine and fast positioning, while the voice coil motor performs large-stroke but coarse track seeking. However, the current dual-stage actuator design uses piezoelectric patches only, without passive damping. In this paper, we propose a dual-stage servo system using enhanced active-passive hybrid piezoelectric actuators. The proposed actuators will improve on existing dual-stage actuators in precision and shock resistance, owing to the incorporation of passive damping in the design. We aim to develop this hybrid servo system not only to increase the speed of track seeking but also to improve the precision of track-following servos in HDDs. New piezoelectrically actuated suspensions with passive damping have been designed and fabricated. In order to evaluate positioning and track-following performance for the dual-stage track servo systems, experimental efforts are carried out to implement the synthesized active-passive suspension structure with enhanced piezoelectric actuators using a composite nonlinear feedback controller.

  10. Precise stacking of decellularized extracellular matrix based 3D cell-laden constructs by a 3D cell printing system equipped with heating modules.

    PubMed

    Ahn, Geunseon; Min, Kyung-Hyun; Kim, Changhwan; Lee, Jeong-Seok; Kang, Donggu; Won, Joo-Yun; Cho, Dong-Woo; Kim, Jun-Young; Jin, Songwan; Yun, Won-Soo; Shim, Jin-Hyung

    2017-08-17

    Three-dimensional (3D) cell printing systems allow the controlled and precise deposition of multiple cells in 3D constructs. Hydrogel materials have been used extensively as printable bioinks owing to their ability to safely encapsulate living cells. However, hydrogel-based bioinks have drawbacks for cell printing, e.g. inappropriate crosslinking and liquid-like rheological properties, which hinder precise 3D shaping. Therefore, in this study, we investigated the influence of various factors (e.g. bioink concentration, viscosity, and extent of crosslinking) on cell printing and established a new 3D cell printing system equipped with heating modules for the precise stacking of decellularized extracellular matrix (dECM)-based 3D cell-laden constructs. Because the pH-adjusted bioink isolated from native tissue is safely gelled at 37 °C, our heating system facilitated the precise stacking of dECM bioinks by enabling simultaneous gelation during printing. We observed greater printability compared with that of a non-heating system. These results were confirmed by mechanical testing and 3D construct stacking analyses. We also confirmed that our heating system did not elicit negative effects, such as cell death, in the printed cells. Conclusively, these results hold promise for the application of 3D bioprinting to tissue engineering and drug development.

  11. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
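
    The "1% or better precision (relative standard deviation)" quoted above is a standard replicate-measurement statistic. A minimal illustration, with hypothetical replicate isotope-ratio readings that are not the patent's data:

```python
import statistics

def relative_std_dev_pct(values):
    """Percent relative standard deviation (RSD): the precision
    metric quoted in the abstract, i.e. sample stdev / mean * 100."""
    return 100.0 * statistics.stdev(values) / statistics.fmean(values)

# Hypothetical replicate isotope-ratio measurements (illustrative only).
replicates = [0.00725, 0.00721, 0.00728, 0.00723, 0.00726]
rsd = relative_std_dev_pct(replicates)   # under 1% meets the stated spec
```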

  12. Precision Pointing Control System (PPCS) star tracker test

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Tests performed on the TRW precision star tracker are described. The unit tested was a two-axis gimballed star tracker designed to provide star LOS data to an accuracy of 1 to 2 sec. The tracker features a unique bearing system and utilizes thermal and mechanical symmetry techniques to achieve high precision, which can be demonstrated in a one-g environment. The test program included a laboratory evaluation of tracker functional operation, sensitivity, repeatability, and thermal stability.

  13. F-16 Task Analysis Criterion-Referenced Objective and Objectives Hierarchy Report. Volume 4

    DTIC Science & Technology

    1981-03-01

    Initiation cues: Engine flameout. Systems presenting cues: Aircraft fuel, engine. STANDARD: Authority: TACR 60-2. Performance precision: TD in first 1/3 of...task: None. Initiation cues: On short final. Systems presenting cues: N/A. STANDARD: Authority: 60-2. Performance precision: +/- .5 AOA; TD zone 150-1000...precision: +/- .05 AOA; TD Zone 150-1000. Computational accuracy: N/A... TASK NO.: 1.9.4. BEHAVIOR: Perform short field landing

  14. New Technologies Smart, or Harm Work-Family Boundaries Management? Gender Differences in Conflict and Enrichment Using the JD-R Theory

    PubMed Central

    Ghislieri, Chiara; Emanuel, Federica; Molino, Monica; Cortese, Claudio G.; Colombo, Lara

    2017-01-01

    Background: The relationship between technology-assisted supplemental work and well-being outcomes is a recent issue in the scientific literature. Whether the use of technology for work purposes in off-work time has a positive or negative impact on work-family balance remains an open question, and the role of gender in this relationship is poorly understood. Aim: According to the JD-R theory, this study aimed to investigate the relationship between off-work hours technology-assisted job demand (off-TAJD) and both work-family conflict (WFC) and work-family enrichment (WFE). Moreover, it considered two general job demands, workload and emotional dissonance, and one job resource, supervisory coaching. Method: The hypotheses were tested with a convenience sample of 671 workers. Data were collected with a self-report questionnaire and analyzed with SPSS 23 and through a multi-group structural equation model (SEM; Mplus 7). Results: The estimated SEM [Chi-square (510) = 1041.29; p < 0.01; CFI = 0.95; TLI = 0.95; RMSEA = 0.06 (0.05, 0.06); SRMR = 0.05; M = 319/F = 352] showed that off-TAJD was positively related to WFC in both subsamples; off-TAJD was also positively related to WFE, but only in the Male group. Workload was positively related to WFC in both the Male and Female subsamples. Emotional dissonance was positively related to WFC in both subsamples and was negatively related to WFE. Supervisory coaching was strongly and positively related to WFE in both groups, and, only in the Male subsample, showed a weak negative relationship with WFC. Conclusion: This study contributes to the literature on new challenges in the work-life interface by analyzing the association between off-TAJD and both WFC and WFE. Our findings suggest it is important to pay attention to gender differences when studying the impact on the work-life interface of supplemental work carried out with technology during off-work hours.
In fact, employees' perception of company demands to be available during off-work time, through the use of technology, may have different consequences for men and women, indicating potential differences in the centrality of the work role. Practical implications, at both cultural and organizational levels, should address the use of technology during leisure time. PMID:28713300

  15. Sediment and water chemistry of the San Juan River and Escalante River deltas of Lake Powell, Utah, 2010-2011

    USGS Publications Warehouse

    Hornewer, Nancy J.

    2014-01-01

    Recent studies have documented the presence of trace elements, organic compounds including polycyclic aromatic hydrocarbons, and radionuclides in sediment from the Colorado River delta and from sediment in some side canyons in Lake Powell, Utah and Arizona. The fate of many of these contaminants is of significant concern to the resource managers of the National Park Service Glen Canyon National Recreation Area because of potential health impacts to humans and aquatic and terrestrial species. In 2010, the U.S. Geological Survey began a sediment-core sampling and analysis program in the San Juan River and Escalante River deltas in Lake Powell, Utah, to help the National Park Service further document the presence or absence of contaminants in deltaic sediment. Three sediment cores were collected from the San Juan River delta in August 2010 and three sediment cores and an additional replicate core were collected from the Escalante River delta in September 2011. Sediment from the cores was subsampled and composited for analysis of major and trace elements. Fifty-five major and trace elements were analyzed in 116 subsamples and 7 composited samples for the San Juan River delta cores, and in 75 subsamples and 9 composited samples for the Escalante River delta cores. Six composited sediment samples from the San Juan River delta cores and eight from the Escalante River delta cores also were analyzed for 55 low-level organochlorine pesticides and polychlorinated biphenyls, 61 polycyclic aromatic hydrocarbon compounds, gross alpha and gross beta radionuclides, and sediment-particle size. Additionally, water samples were collected from the sediment-water interface overlying each of the three cores collected from the San Juan River and Escalante River deltas. Each water sample was analyzed for 57 major and trace elements. 
Most of the major and trace elements analyzed were detected at concentrations greater than reporting levels for the sediment-core subsamples and composited samples. Low-level organochlorine pesticides and polychlorinated biphenyls were not detected in any of the samples. Only one polycyclic aromatic hydrocarbon compound was detected at a concentration greater than the reporting level for one San Juan composited sample. Gross alpha and gross beta radionuclides were detected at concentrations greater than reporting levels for all samples. Most of the major and trace elements analyzed were detected at concentrations greater than reporting levels for water samples.

  16. Exploring the role narrative free-text plays in discrepancies between physician coding and the InterVA regarding determination of malaria as cause of death, in a malaria holo-endemic region

    PubMed Central

    2012-01-01

    Background In countries where tracking mortality and clinical cause of death is not routinely undertaken, gathering verbal autopsies (VA) is the principal method of estimating cause of death. The most common method for determining probable cause of death from the VA interview is Physician-Certified Verbal Autopsy (PCVA). A recent alternative method to interpret Verbal Autopsy (InterVA) is a computer model using a Bayesian approach to derive posterior probabilities for causes of death, given an a priori distribution at population level and a set of interview-based indicators. The model uses the same input information as PCVA, with the exception of narrative text information, which physicians can consult but which is not input into the model. Comparing the results of physician coding with the model, large differences could be due to difficulties in diagnosing malaria, especially in holo-endemic regions. Thus, the aim of the study was to explore whether physicians' access to electronically unavailable narrative text helps to explain the large discrepancy in malaria cause-specific mortality fractions (CSMFs) in physician coding versus the model. Methods Free-texts of electronically available records (N = 5,649) were summarised and incorporated into the InterVA version 3 (InterVA-3) for three sub-groups: (i) a 10% representative subsample (N = 493), (ii) records diagnosed as malaria by physicians and not by the model (N = 1,035), and (iii) records diagnosed by the model as malaria, but not by physicians (N = 332). CSMF results before and after free-text incorporation were compared. Results CSMFs changed by between 5.5% and 10.2% after free-text incorporation. No impact on malaria CSMFs was seen in the representative sub-sample, but the proportion of malaria as cause of death increased in the physician sub-sample (2.7%) and decreased substantially in the InterVA subsample (9.9%).
Information on 13/106 indicators appeared at least once in the free-texts that had not been matched to any item in the structured, electronically available portion of the Nouna questionnaire. Discussion Free-texts are helpful in gathering information not adequately captured in VA questionnaires, though access to free-text does not explain differences in physician and model determination of malaria as cause of death. PMID:22353802
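
    The Bayesian update at the core of the InterVA approach described above can be sketched in miniature. The two-cause, two-indicator probability table below is invented for illustration and is not the InterVA-3 table; the sketch assumes conditionally independent indicators, which is the usual naive-Bayes simplification.

```python
from math import prod

def cause_posteriors(priors, cond_probs, observed):
    """Posterior P(cause | indicators) from an a priori cause
    distribution and per-cause indicator probabilities, assuming
    conditionally independent indicators."""
    unnorm = {}
    for cause, prior in priors.items():
        like = prod(
            p if ind in observed else 1.0 - p
            for ind, p in cond_probs[cause].items()
        )
        unnorm[cause] = prior * like
    z = sum(unnorm.values())                 # normalising constant
    return {c: v / z for c, v in unnorm.items()}

# Invented two-cause, two-indicator table (NOT the InterVA-3 tables):
# 'fever' is made to suggest malaria, 'cough' to suggest other causes.
priors = {"malaria": 0.4, "other": 0.6}
cond = {"malaria": {"fever": 0.9, "cough": 0.3},
        "other":   {"fever": 0.3, "cough": 0.6}}
post = cause_posteriors(priors, cond, observed={"fever"})
```

    Summing the most probable cause over all deaths in a population yields the cause-specific mortality fractions (CSMFs) compared in the study.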

  17. New Technologies Smart, or Harm Work-Family Boundaries Management? Gender Differences in Conflict and Enrichment Using the JD-R Theory.

    PubMed

    Ghislieri, Chiara; Emanuel, Federica; Molino, Monica; Cortese, Claudio G; Colombo, Lara

    2017-01-01

    Background: The relationship between technology-assisted supplemental work and well-being outcomes is a recent issue in the scientific literature. Whether the use of technology for work purposes in off-work time has a positive or negative impact on work-family balance remains an open question, and the role of gender in this relationship is poorly understood. Aim: According to the JD-R theory, this study aimed to investigate the relationship between off-work hours technology-assisted job demand (off-TAJD) and both work-family conflict (WFC) and work-family enrichment (WFE). Moreover, it considered two general job demands, workload and emotional dissonance, and one job resource, supervisory coaching. Method: The hypotheses were tested with a convenience sample of 671 workers. Data were collected with a self-report questionnaire and analyzed with SPSS 23 and through a multi-group structural equation model (SEM; Mplus 7). Results: The estimated SEM [Chi-square (510) = 1041.29; p < 0.01; CFI = 0.95; TLI = 0.95; RMSEA = 0.06 (0.05, 0.06); SRMR = 0.05; M = 319/F = 352] showed that off-TAJD was positively related to WFC in both subsamples; off-TAJD was also positively related to WFE, but only in the Male group. Workload was positively related to WFC in both the Male and Female subsamples. Emotional dissonance was positively related to WFC in both subsamples and was negatively related to WFE. Supervisory coaching was strongly and positively related to WFE in both groups, and, only in the Male subsample, showed a weak negative relationship with WFC. Conclusion: This study contributes to the literature on new challenges in the work-life interface by analyzing the association between off-TAJD and both WFC and WFE. Our findings suggest it is important to pay attention to gender differences when studying the impact on the work-life interface of supplemental work carried out with technology during off-work hours.
In fact, employees' perception of company demands to be available during off-work time, through the use of technology, may have different consequences for men and women, indicating potential differences in the centrality of the work role. Practical implications, at both cultural and organizational levels, should address the use of technology during leisure time.

  18. Exploring the role narrative free-text plays in discrepancies between physician coding and the InterVA regarding determination of malaria as cause of death, in a malaria holo-endemic region.

    PubMed

    Rankin, Johanna C; Lorenz, Eva; Neuhann, Florian; Yé, Maurice; Sié, Ali; Becher, Heiko; Ramroth, Heribert

    2012-02-21

    In countries where tracking mortality and clinical cause of death is not routinely undertaken, gathering verbal autopsies (VA) is the principal method of estimating cause of death. The most common method for determining probable cause of death from the VA interview is Physician-Certified Verbal Autopsy (PCVA). A recent alternative method to interpret Verbal Autopsy (InterVA) is a computer model using a Bayesian approach to derive posterior probabilities for causes of death, given an a priori distribution at population level and a set of interview-based indicators. The model uses the same input information as PCVA, with the exception of narrative text information, which physicians can consult but which is not input into the model. Comparing the results of physician coding with the model, large differences could be due to difficulties in diagnosing malaria, especially in holo-endemic regions. Thus, the aim of the study was to explore whether physicians' access to electronically unavailable narrative text helps to explain the large discrepancy in malaria cause-specific mortality fractions (CSMFs) in physician coding versus the model. Free-texts of electronically available records (N = 5,649) were summarised and incorporated into the InterVA version 3 (InterVA-3) for three sub-groups: (i) a 10% representative subsample (N = 493), (ii) records diagnosed as malaria by physicians and not by the model (N = 1,035), and (iii) records diagnosed by the model as malaria, but not by physicians (N = 332). CSMF results before and after free-text incorporation were compared. CSMFs changed by between 5.5% and 10.2% after free-text incorporation. No impact on malaria CSMFs was seen in the representative sub-sample, but the proportion of malaria as cause of death increased in the physician sub-sample (2.7%) and decreased substantially in the InterVA subsample (9.9%).
Information on 13/106 indicators appeared at least once in the free-texts that had not been matched to any item in the structured, electronically available portion of the Nouna questionnaire. Free-texts are helpful in gathering information not adequately captured in VA questionnaires, though access to free-text does not explain differences in physician and model determination of malaria as cause of death.

  19. Advances in the Control System for a High Precision Dissolved Organic Carbon Analyzer

    NASA Astrophysics Data System (ADS)

    Liao, M.; Stubbins, A.; Haidekker, M.

    2017-12-01

    Dissolved organic carbon (DOC) is a master variable in aquatic ecosystems. DOC in the ocean is one of the largest carbon stores on Earth. Studies of the dynamics of DOC in the ocean and other low-DOC systems (e.g., groundwater) are hindered by the lack of high-precision (sub-micromolar) analytical techniques. Results are presented from efforts to construct and optimize a flow-through, wet-chemical DOC analyzer. This study focused on the design, integration, and optimization of the high-precision components and control systems required for such a system (mass flow controller, syringe pumps, gas extraction, and a reactor chamber with controlled UV and temperature). Results of the approaches developed are presented.

  20. Evaluation of French and English MeSH Indexing Systems with a Parallel Corpus

    PubMed Central

    Névéol, Aurélie; Mork, James G.; Aronson, Alan R.; Darmoni, Stefan J.

    2005-01-01

    Objective This paper presents the evaluation of two MeSH® indexing systems for French and English on a parallel corpus. Material and methods We describe two automatic MeSH indexing systems - MTI for English and MAIF for French. The French version of the evaluation resources has been manually indexed with MeSH keyword/qualifier pairs. This professional indexing is used as our gold standard in the evaluation of both systems on keyword retrieval. Results The English system (MTI) obtains significantly better precision and recall (78% precision and 21% recall at rank 1, vs. 37% precision and 6% recall for MAIF). Moreover, the performance of both systems can be optimised by the breakage function used by the French system (MAIF), which selects an adaptive number of descriptors for each resource indexed. Conclusion MTI achieves better performance. However, both systems have features that can benefit each other. PMID:16779103
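
    The rank-based precision and recall figures reported above follow the standard definitions, which can be computed as below. The descriptor lists are hypothetical examples, not the evaluation corpus.

```python
def precision_recall_at_k(assigned, gold, k):
    """Precision and recall of the top-k automatically assigned
    descriptors against the manual gold-standard indexing."""
    top = set(assigned[:k])   # top-k system descriptors
    gold = set(gold)          # professional (gold-standard) indexing
    hits = len(top & gold)
    return hits / k, hits / len(gold)

# Hypothetical descriptor lists for a single document.
auto_terms = ["Neoplasms", "Humans", "Liver"]
gold_terms = ["Neoplasms", "Liver Diseases", "Humans", "Female"]
p1, r1 = precision_recall_at_k(auto_terms, gold_terms, 1)
```

    Averaging these per-document scores over the corpus gives the percentages quoted in the Results.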

  1. Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device

    NASA Astrophysics Data System (ADS)

    Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin

    A measure and control system for precision pesticide-deployment simulating equipment is designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and carry out precise, simultaneous measurement of every factor affecting pesticide deployment effects. The hardware and software follow a modular structural design: the system is divided into distinct hardware and software function modules, and the corresponding modules are developed. The modules' interfaces are uniformly defined, which simplifies module connection, enhances the system's universality, development efficiency, and reliability, and makes the program easy to extend and maintain. Relevant hardware and software modules can be readily adapted to other measure and control systems. The paper introduces the design of the special numeric control system, the main module of the information acquisition system, and the speed acquisition module in order to illustrate the module design process.

  2. Aerial imaging with manned aircraft for precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Over the last two decades, numerous commercial and custom-built airborne imaging systems have been developed and deployed for diverse remote sensing applications, including precision agriculture. More recently, unmanned aircraft systems (UAS) have emerged as a versatile and cost-effective platform f...

  3. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high-dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
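
    The compensated summation mentioned above is Kahan's algorithm, which tracks the rounding error lost at each addition and feeds it back into the next one. This is a generic CPU illustration of the technique (simulating float32 with NumPy scalars), not the authors' CUDA code:

```python
import numpy as np

def kahan_sum_f32(values):
    """Compensated (Kahan) summation carried out entirely in float32."""
    s = np.float32(0.0)
    c = np.float32(0.0)              # compensation for lost low-order bits
    for v in values:
        y = np.float32(v) - c        # apply correction from last step
        t = np.float32(s + y)        # big + small: low bits of y are lost
        c = np.float32(t - s) - y    # recover the rounding error
        s = t
    return float(s)

def naive_sum_f32(values):
    s = np.float32(0.0)
    for v in values:
        s = np.float32(s + np.float32(v))
    return float(s)

vals = [0.1] * 100_000               # exact sum is 10000.0
kahan = kahan_sum_f32(vals)
naive = naive_sum_f32(vals)
```

    The naive float32 sum drifts by whole units while the compensated sum stays within a few ulps of the exact result, which is why the authors can keep most arithmetic in fast single precision.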

  4. A novel method for soil aggregate stability measurement by laser granulometry with sonication

    NASA Astrophysics Data System (ADS)

    Rawlins, B. G.; Lark, R. M.; Wragg, J.

    2012-04-01

    Regulatory authorities need to establish rapid, cost-effective methods to measure soil physical indicators - such as aggregate stability - which can be applied to large numbers of soil samples to detect changes of soil quality through monitoring. Limitations of sieve-based methods to measure the stability of soil macro-aggregates include: i) the mass of stable aggregates is measured, only for a few, discrete sieve/size fractions, ii) no account is taken of the fundamental particle size distribution of the sub-sampled material, and iii) they are labour intensive. These limitations could be overcome by measurements with a Laser Granulometer (LG) instrument, but this technology has not been widely applied to the quantification of aggregate stability of soils. We present a novel method to quantify macro-aggregate (1-2 mm) stability. We measure the difference between the mean weight diameter (MWD; μm) of aggregates that are stable in circulating water of low ionic strength, and the MWD of the fundamental particles of the soil to which these aggregates are reduced by sonication. The suspension is circulated rapidly through a LG analytical cell from a connected vessel for ten seconds; during this period hydrodynamic forces associated with the circulating water lead to the destruction of unstable aggregates. The MWD of stable aggregates is then measured by LG. In the next step, the aggregates - which are kept in the vessel at a minimal water circulation speed - are subject to sonication (18W for ten minutes) so the vast majority of the sample is broken down into its fundamental particles. The suspension is then recirculated rapidly through the LG and the MWD measured again. We refer to the difference between these two measurements as disaggregation reduction (DR) - the reduction in MWD on disaggregation by sonication. Soil types with more stable aggregates have larger values of DR. 
The stable aggregates - which are resistant to both slaking and mechanical breakdown by the hydrodynamic forces during circulation - are disrupted only by sonication. We used this method to compare macro-aggregate (1-2 mm) stability of air-dried agricultural topsoils under conventional tillage developed from two contrasting parent material types and compared the results with an alternative sieve-based technique. The first soil from the Midlands of England (developed from sedimentary mudstone; mean soil organic carbon (SOC) 2.5%) contained a substantially larger amount of illite/smectite (I/S) minerals compared to the second from the Wensum catchment in eastern England (developed from sands and glacial deposits; mean SOC=1.7%). The latter soils are prone to large erosive losses of fine sediment. Both sets of samples had been stored air-dried for 6 months prior to aggregate analyses. The mean values of DR (n=10 repeated subsample analyses) for the Midlands soil was 178μm; mean DR (n=10 repeat subsample analyses) for the Wensum soil was 30μm. The large difference in DR is most likely due to differences in soil mineralogy. The coefficient of variation of mean DR for duplicate analyses of sub-samples from the two topsoil types is around 10%. The majority of this variation is likely to be related to the difference in composition of the sub-samples. A standard, aggregated material could be included in further analyses to determine the relative magnitude of sub-sampling and analytical variance for this measurement technique. We then used the technique to investigate whether - as previously observed - variations (range 1000 - 4000 mg kg-1) in the quantity of amorphous (oxalate extractable) iron oxyhydroxides in a variety of soil samples (n=30) from the Wensum area (range SOC 1 - 2%) could account for differences in aggregate stability of these samples.
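
    The disaggregation reduction (DR) statistic defined above is the difference of two volume-weighted mean diameters. A minimal sketch of the computation; the size bins and volume fractions below are invented for illustration and are not the paper's LG output:

```python
import numpy as np

def mean_weight_diameter(diam_um, vol_frac):
    """Volume-weighted mean particle diameter (MWD, um) from binned
    laser-granulometer output."""
    frac = np.asarray(vol_frac, dtype=float)
    frac = frac / frac.sum()                  # normalise fractions to 1
    return float(np.dot(np.asarray(diam_um, dtype=float), frac))

# Invented bin mid-points (um) and volume fractions; sonication moves
# volume from stable macro-aggregates down to fundamental particles.
bins = [2.0, 20.0, 200.0, 1500.0]
stable_frac = [0.10, 0.20, 0.30, 0.40]      # before sonication
sonicated_frac = [0.30, 0.40, 0.25, 0.05]   # after 18 W, 10 min sonication

mwd_stable = mean_weight_diameter(bins, stable_frac)
mwd_sonicated = mean_weight_diameter(bins, sonicated_frac)
dr = mwd_stable - mwd_sonicated             # disaggregation reduction (DR)
```

    A large DR, as for the Midlands mudstone soils (178 μm), indicates that much of the measured size was held in sonication-destructible aggregates; a small DR, as for the Wensum soils (30 μm), indicates weakly aggregated material.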

  5. Teaching Systems Thinking in the Context of the Water Cycle

    NASA Astrophysics Data System (ADS)

    Lee, Tammy D.; Gail Jones, M.; Chesnutt, Katherine

    2017-06-01

    Complex systems affect every part of our lives from the ecosystems that we inhabit and share with other living organisms to the systems that supply our water (i.e., water cycle). Evaluating events, entities, problems, and systems from multiple perspectives is known as a systems thinking approach. New curriculum standards have made explicit the call for teaching with a systems thinking approach in our science classrooms. However, little is known about how elementary in-service or pre-service teachers understand complex systems especially in terms of systems thinking. This mixed methods study investigated 67 elementary in-service teachers' and 69 pre-service teachers' knowledge of a complex system (e.g., water cycle) and their knowledge of systems thinking. Semi-structured interviews were conducted with a sub-sample of participants. Quantitative and qualitative analyses of content assessment data and questionnaires were conducted. Results from this study showed elementary in-service and pre-service teachers applied different levels of systems thinking from novice to intermediate. Common barriers to complete systems thinking were identified with both in-service and pre-service teachers and included identifying components and processes, recognizing multiple interactions and relationships between subsystems and hidden dimensions, and difficulty understanding the human impact on the water cycle system.

  6. What Friends Are For: Collaborative Intelligence Analysis and Search

    DTIC Science & Technology

    2014-06-01

    14. SUBJECT TERMS: Intelligence Community, information retrieval, recommender systems, search engines, social networks, user profiling, Lucene...improvements over existing search systems. The improvements are shown to be robust to high levels of human error and low similarity between users...precision; NOLH: nearly orthogonal Latin hypercubes; P@: precision at documents; RS: recommender systems; TREC: Text REtrieval Conference; USM: user

  7. Cobalt: Development and Maturation of GN&C Technologies for Precision Landing

    NASA Technical Reports Server (NTRS)

    Carson, John M.; Restrepo, Carolina; Seubert, Carl; Amzajerdian, Farzin

    2016-01-01

    The CoOperative Blending of Autonomous Landing Technologies (COBALT) instrument is a terrestrial test platform for development and maturation of guidance, navigation and control (GN&C) technologies for precision landing. The project is developing a third-generation Langley Research Center (LaRC) navigation doppler lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the Jet Propulsion Laboratory (JPL) lander vision system (LVS) for terrain relative navigation (TRN) position estimates. These technologies together provide precise navigation knowledge that is critical for a controlled and precise touchdown. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive vertical test bed (VTB) developed by Masten Space Systems, and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).

  8. Sensor-based precision fertilization for field crops

    USDA-ARS?s Scientific Manuscript database

    Dating from the development of the first viable variable-rate fertilizer systems in the upper Midwest USA, precision agriculture is now approaching three decades old. Early precision fertilization practice relied on laboratory analysis of soil samples collected on a spatial pattern to define the nutrient-s...

  9. VizieR Online Data Catalog: KiDS Survey for solar system objects mining (Mahlke+, 2018)

    NASA Astrophysics Data System (ADS)

    Mahlke, M.; Bouy, H.; Altieri, B.; Verdoes Kleijn, G.; Carry, B.; Bertin, E.; de Jong, J. T. A.; Kuijken, K.; McFarland, J.; Valentijn, E.

    2017-10-01

    Provided are the observations of the 28,290 SSO candidates recovered from the KiDS survey. The candidates are split up into two subsamples; the first contains 20,221 candidates with an estimated false-positive content of less than 0.05%. The second sample contains 8,069 candidates with only three observations each or close to bright stars, with an estimated false-positive content of approximately 24%. Provided are the recovered positions in right ascension and declination, the observation epochs, the calculated proper motions, the magnitudes, the observation bands, and the object name and expected visual magnitude if the object was matched to a SkyBoT object (entries are empty if no match was found). (2 data files).

  10. The impact of the minimum wage on health.

    PubMed

    Andreyeva, Elena; Ukert, Benjamin

    2018-03-07

    This study evaluates the effect of the minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System and employ a difference-in-differences strategy that utilizes time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases days with functional limitations while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and the decrease in fruit and vegetable intake are driven by the older population, married individuals, and whites. The improvement in self-reported health is especially strong among non-whites, females, and married individuals.
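
In its simplest two-group, two-period form, the difference-in-differences strategy mentioned above reduces to comparing the outcome change in states that raised the minimum wage with the change in states that did not. The sketch below is illustrative only; the outcome values are hypothetical, not estimates from the study:

```python
# 2x2 difference-in-differences: the change among treated states minus
# the change among control states. Numbers are hypothetical, not results
# from the study.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical obesity rates (%) before/after a minimum wage increase:
print(did_estimate(26.0, 28.5, 25.5, 26.5))  # 1.5 percentage points
```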

  11. Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.

    PubMed

    Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A

    2010-01-10

    As the bandwidth and linearity of frequency modulated continuous wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed for their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.
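
For context, the textbook FMCW relation between chirp bandwidth B and range resolution, ΔR = c/(2B), shows why a several-THz chirp is attractive; the bandwidth value below is an assumed example, not a figure from the paper:

```python
# Textbook FMCW relation: range resolution improves with chirp bandwidth.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """delta_R = c / (2 * B) for a linear chirp of bandwidth B."""
    return C / (2.0 * bandwidth_hz)

# An assumed 2 THz chirp ("several THz" in the paper's terms):
print(range_resolution(2e12))  # ~7.5e-05 m, i.e. 75 micrometres
```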

  12. Droplet-counting Microtitration System for Precise On-site Analysis.

    PubMed

    Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo

    2018-01-01

    A new microtitration system based on the counting of titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The error of titration was 3% in a simulated titration with 20 droplets. The pre-addition of a titrant was proposed for precise titration within an error of 0.5%. The analytical performances were evaluated for chelate titration, redox titration and acid-base titration.
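
One way to see why pre-addition helps is a simple quantization model (our assumption, not the paper's derivation): counting N droplets of fixed volume resolves the endpoint to about half a droplet, so the relative error on the total titrant volume is 0.5 divided by the total number of droplet-equivalents delivered:

```python
# Quantization model (our assumption): the endpoint is resolved to about
# half a droplet, so relative error = 0.5 / (total droplet-equivalents).

def titration_relative_error(counted_droplets: int,
                             preadded_droplet_equivalents: float = 0.0) -> float:
    total = counted_droplets + preadded_droplet_equivalents
    return 0.5 / total

# 20 counted droplets, no pre-addition: 2.5%, of the order of the ~3% reported
print(titration_relative_error(20))
# Pre-adding 90% of the titrant (180 droplet-equivalents), then counting 20:
print(titration_relative_error(20, 180.0))  # 0.0025, consistent with ~0.5%
```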

  13. Relationships Between the Performance of Time/Frequency Standards and Navigation/Communication Systems

    NASA Technical Reports Server (NTRS)

    Hellwig, H.; Stein, S. R.; Walls, F. L.; Kahan, A.

    1978-01-01

    The relationship between system performance and clock or oscillator performance is discussed. Tradeoffs discussed include: short term stability versus bandwidth requirements; frequency accuracy versus signal acquisition time; flicker of frequency and drift versus resynchronization time; frequency precision versus communications traffic volume; spectral purity versus bit error rate; and frequency standard stability versus frequency selection and adjustability. The benefits and tradeoffs of using precise frequency and time signals at various levels of precision and accuracy are emphasized.

  14. Remote sensing with unmanned aircraft systems for precision agriculture applications

    USDA-ARS?s Scientific Manuscript database

    The Federal Aviation Administration is revising regulations for using unmanned aircraft systems (UAS) in the national airspace. An important potential application of UAS may be as a remote-sensing platform for precision agriculture, but simply down-scaling remote sensing methodologies developed usi...

  15. Modeling and Assessment of Precise Time Transfer by Using BeiDou Navigation Satellite System Triple-Frequency Signals

    PubMed Central

    Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun

    2018-01-01

    This study proposes two models for precise time transfer using the BeiDou Navigation Satellite System triple-frequency signals: ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time frequency) and a long baseline are used for performance assessments. The results show that IF-PPP1 and IF-PPP2 models can both be used for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals, and the accuracy and stability of time transfer is the same in both cases, except for a constant system bias caused by the hardware delay of different frequencies, which can be removed by the parameter estimation and prediction with long time datasets or by a priori calibration. PMID:29596330
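
The ionosphere-free combinations underlying both models follow the standard dual-frequency form. The sketch below is illustrative; the B1I/B2I carrier frequencies are assumed from the public BDS interface control documents, and the pseudorange values are synthetic:

```python
# Standard first-order ionosphere-free pseudorange combination (textbook
# GNSS). Carrier frequencies below are assumed BDS-2 B1I/B2I values.
F_B1 = 1561.098e6  # Hz
F_B2 = 1207.140e6  # Hz

def ionosphere_free(p1: float, p2: float, f1: float, f2: float) -> float:
    """Combine two pseudoranges so the first-order 1/f^2 ionospheric
    term cancels."""
    denom = f1**2 - f2**2
    return (f1**2 * p1 - f2**2 * p2) / denom

# Synthetic check: inject a 5 m ionospheric delay that scales as 1/f^2.
rho = 22_000_000.0                      # geometry + clocks, metres
p_b1 = rho + 5.0
p_b2 = rho + 5.0 * (F_B1 / F_B2) ** 2   # larger delay on the lower frequency
print(ionosphere_free(p_b1, p_b2, F_B1, F_B2))  # ~22000000.0, delay removed
```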

  16. CHARACTERISTICS OF SCHOOL BUILDINGS IN THE U.S.

    EPA Science Inventory

    The report gives results of visiting a subsample of 100 schools from the Environmental Protection Agency's (EPA's) National School Radon Survey to obtain information on building structure, location of utility lines, and the type of heating, ventilating, and air conditioning (HVAC...

  17. Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system

    NASA Astrophysics Data System (ADS)

    Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.

    2017-11-01

    Currently, most climatic systems operate in a stationary, design-point mode only. At the same time, many modern industrial sites require constant or periodic changes in the technological process: for about 80% of the time the industrial site does not require the ventilation system at its design point, yet high precision of the climatic parameters must still be maintained. When a climatic system that serves several rooms in parallel is not in constant use, balancing the duct network becomes a problem. For this problem, an algorithm for quantity (flow-rate) regulation with minimal changes was created. Dynamic duct system: a parallel control system for the air balance was developed that maintains high precision of the climatic parameters. The algorithm keeps the pressure in the main duct constant under varying air flows, so the terminal devices have only one regulation parameter: the open area of their flaps. The precision of regulation increases, and the climatic system maintains temperature and humidity with high precision (0.5 °C for temperature, 5% for relative humidity). Result: the research was performed in the CFD system PHOENICS. Results were obtained for the air velocity and pressure in the ducts in different operating modes, along with an equation for the air-valve positions under different room climate parameters. The energy-saving potential of the dynamic duct system was calculated for different types of rooms.
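
The single-parameter idea described above can be caricatured in a few lines: hold the main-duct static pressure at a setpoint by adjusting the fan, so that terminal flap area remains the only local control. This is our toy model, not the authors' algorithm; the plant equation, setpoint, and gain are invented for illustration:

```python
# Toy constant-pressure control of a main duct (our illustration, not the
# paper's algorithm): a proportional loop drives the fan so that static
# pressure holds the setpoint while rooms change their flap openings,
# leaving flap area as the only per-room parameter.
SETPOINT = 250.0  # Pa, assumed target static pressure

def simulate(steps: int = 200, kp: float = 0.001) -> float:
    fan = 0.5       # normalized fan command
    pressure = 0.0
    for t in range(steps):
        flap_area = 0.6 if t < steps // 2 else 1.0  # rooms open more flaps
        pressure = 600.0 * fan / flap_area          # invented duct model
        fan = min(1.0, max(0.0, fan + kp * (SETPOINT - pressure)))
    return pressure

print(round(simulate(), 1))  # 250.0: setpoint held despite the flap change
```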

  18. Fast and precise thermoregulation system in physiological brain slice experiment

    NASA Astrophysics Data System (ADS)

    Sheu, Y. H.; Young, M. S.

    1995-12-01

    We have developed a fast and precise thermoregulation system incorporated within a physiological experiment on a brain slice. The thermoregulation system is used to control the temperature of a recording chamber in which the brain slice is placed. It consists of a single-chip microcomputer, a set command module, a display module, and an FLC module. A fuzzy control algorithm was developed and a fuzzy logic controller then designed for achieving fast, smooth thermostatic performance and providing precise temperature control with accuracy to 0.1 °C, from room temperature through 42 °C (experimental temperature range). The fuzzy logic controller is implemented by microcomputer software and related peripheral hardware circuits. Six operating modes of thermoregulation are offered with the system and this can be further extended according to experimental needs. The test results of this study demonstrate that the fuzzy control method is easily implemented by a microcomputer and also verifies that this method provides a simple way to achieve fast and precise high-performance control of a nonlinear thermoregulation system in a physiological brain slice experiment.

  19. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  20. Study on application of adaptive fuzzy control and neural network in the automatic leveling system

    NASA Astrophysics Data System (ADS)

    Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng

    2015-04-01

    This paper discusses the application of adaptive fuzzy control and the neural-network BP algorithm in a large-platform automatic leveling control system. The purpose is to develop a measurement system with fast platform leveling, so that precision measurement work can be started quickly and the efficiency of precision measurement improved. The paper focuses on the analysis of an automatic leveling system based on a fuzzy controller. By combining the fuzzy controller with a BP neural network, the BP algorithm is used to refine the empirical rules, constructing an adaptive fuzzy control system. The learning rate of the BP algorithm is also adjusted at run time to accelerate convergence. The simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.

  1. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information on different modality imaging can be integrated and comprehensively analyzed by imaging fusion system. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords of "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literatures published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate location, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems could provide more imaging information of tumors from different dimensions and angles, thereby offing strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  2. Location Technologies for Apparel Assembly

    DTIC Science & Technology

    1991-09-01

    ADDRESS (City, State, and ZIP Code): School of Textile & Fiber Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0295, 206 O’Keefe ... at a cost of less than $500. A review is also given of state-of-the-art vision systems. These systems have the necessary accuracy and precision for apparel manufacturing applications and could ...

  3. High-precision measurements of cementless acetabular components using model-based RSA: an experimental study.

    PubMed

    Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld

    2007-08-01

    In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.

  4. Development of the One Centimeter Accuracy Geoid Model of Latvia for GNSS Measurements

    NASA Astrophysics Data System (ADS)

    Balodis, J.; Silabriedis, G.; Haritonova, D.; Kaļinka, M.; Janpaule, I.; Morozova, K.; Jumāre, I.; Mitrofanovs, I.; Zvirgzds, J.; Kaminskis, J.; Liepiņš, I.

    2015-11-01

    There is an urgent need for a highly accurate and reliable geoid model to enable prompt determination of normal heights from GNSS coordinates, driven by the high precision requirements in geodesy, construction, and high-precision road building. Additionally, the Latvian height system is in transition from BAS-77 (Baltic Height System) to the EVRS2007 system. The accuracy of the geoid model must approach ∼1 cm in view of Baltic Rail and other large projects. The use of all available and verified data sources is planned, including an enlarged set of GNSS/levelling data, gravimetric measurement data, and, additionally, vertical deflection measurements over the territory of Latvia. The work is proceeding stepwise; here, the issue of GNSS reference network stability is discussed. In order to achieve a ∼1 cm precision geoid, a homogeneous high-precision GNSS network is required as the basis for ellipsoidal height determination at GNSS/levelling points. Both the LatPos and EUPOS® - Riga networks have been examined in this article.
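
Geoid modelling from GNSS/levelling data rests on the basic textbook relation N = h − H between the ellipsoidal height h, the levelled normal height H, and the geoid (quasigeoid) height N; the benchmark values below are hypothetical:

```python
# GNSS/levelling relation N = h - H (textbook): geoid height from an
# ellipsoidal height and a levelled normal height. Values are hypothetical.

def geoid_height(h_ellipsoidal_m: float, H_normal_m: float) -> float:
    return h_ellipsoidal_m - H_normal_m

# Hypothetical benchmark: GNSS gives h = 45.5 m, levelling gives H = 23.25 m
print(geoid_height(45.5, 23.25))  # 22.25 m
```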

  5. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  6. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image.
To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
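
The subsampling and stretch operations described above can be sketched as follows, assuming an image stored as a list of rows of DN values (the function names and 8-bit output range are our choices, not IMDISP's):

```python
# Sketch of the two operations, with an image as a list of rows of DN
# values; names and the 0-255 output range are our choices, not IMDISP's.

def subsample(image, factor):
    """Keep every `factor`-th pixel of every `factor`-th line,
    starting from the upper-left corner."""
    return [row[::factor] for row in image[::factor]]

def stretch(image, low, high):
    """Linear contrast stretch: DN <= low -> 0 (black), DN >= high -> 255
    (white), values in between scaled linearly."""
    def s(dn):
        if dn <= low:
            return 0
        if dn >= high:
            return 255
        return round(255 * (dn - low) / (high - low))
    return [[s(dn) for dn in row] for row in image]

img = [[10, 40, 80, 120],
       [20, 60, 100, 140],
       [30, 70, 110, 150],
       [40, 80, 120, 160]]
print(subsample(img, 2))         # [[10, 80], [30, 110]]
print(stretch(img, 40, 140)[0])  # [0, 0, 102, 204]
```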

  7. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2007-03-20

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  8. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2005-03-08

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  9. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2013-04-02

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  10. Usability of light-emitting diodes in precision approach path indicator systems by individuals with marginal color vision.

    DOT National Transportation Integrated Search

    2014-05-01

    To save energy, the FAA is planning to convert from incandescent lights to light-emitting diodes (LEDs) in : precision approach path indicator (PAPI) systems. Preliminary work on the usability of LEDs by color vision-waivered pilots (Bullough, Skinne...

  11. A comparison of precision mobile drip irrigation, LESA and LEPA

    USDA-ARS?s Scientific Manuscript database

    Precision mobile drip irrigation (PMDI) is a surface drip irrigation system fitted onto moving sprinkler systems that applies water through the driplines as they are dragged across the field. This application method can conserve water by limiting runoff, and reducing evaporative losses since the wat...

  12. Video-rate or high-precision: a flexible range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating at a relatively high resolution (512-by-512 pixels) and high precision (0.4 mm best case), but with a slow measurement rate (one measurement every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
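
At its core, the heterodyne measurement estimates the phase of the beat signal from N equally spaced samples per beat cycle, with range proportional to that phase. The sketch below (our illustration, not the authors' processing chain) recovers the phase via the first DFT bin, using five samples per cycle as an example of "more than four":

```python
import cmath
import math

# Estimate the beat-signal phase from N equally spaced samples per beat
# cycle via the first DFT bin (our sketch; the camera's actual processing
# is not described at this level in the abstract).

def beat_phase(samples):
    n = len(samples)
    bin1 = sum(s * cmath.exp(-2j * math.pi * k / n)
               for k, s in enumerate(samples))
    return cmath.phase(bin1)

true_phase = 1.0   # radians, hypothetical
n = 5              # "more than four samples per beat cycle"
samples = [math.cos(2 * math.pi * k / n + true_phase) for k in range(n)]
print(beat_phase(samples))  # ≈ 1.0, recovering the assumed phase
```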

  13. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    PubMed

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster and ImageJ; (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg count negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using a second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. With regard to the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ (p<0.0001). The smartphone and ImageJ performed with equal variance. With regard to the second aim of the study, the second smartphone system prototype had significantly better precision than the McMaster (p<0.0001) and Mini-FLOTAC (p<0.0001) methods, and the Mini-FLOTAC was significantly more precise than the McMaster (p=0.0228).
    Mean accuracies for the Mini-FLOTAC, McMaster, and smartphone system were 64.51%, 21.67%, and 32.53%, respectively. The Mini-FLOTAC was significantly more accurate than the McMaster (p<0.0001) and the smartphone system (p<0.0001), while the smartphone and McMaster counts did not have statistically different accuracies. Overall, the smartphone system compared favorably to manual methods with regard to precision, and reasonably with regard to accuracy. With further refinement, this system could become useful in veterinary practice.
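
The coefficient of variation used here for precision is simply the standard deviation divided by the mean across repeated counts; the replicate counts below are invented for illustration:

```python
import statistics

# Precision as coefficient of variation (CV = s / mean) across repeated
# egg counts; the replicate values are hypothetical eggs-per-gram counts.

def coefficient_of_variation(counts):
    return statistics.stdev(counts) / statistics.mean(counts)

replicate_counts = [510, 540, 495, 525, 480]
print(round(coefficient_of_variation(replicate_counts), 3))  # 0.047 (~4.7%)
```

A lower CV across replicates indicates a more repeatable, and hence more precise, counting technique.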

  14. Accuracy and Precision of a Veterinary Neuronavigation System for Radiation Oncology Positioning

    PubMed Central

    Ballegeer, Elizabeth A.; Frey, Stephen; Sieffert, Rob

    2018-01-01

    Conformal radiation treatment plans such as IMRT and other radiosurgery techniques require very precise patient positioning, typically within a millimeter of error for best results. CT cone beam, real-time navigation, and infrared position sensors are potential options for success but rarely present in veterinary radiation centers. A neuronavigation system (Brainsight Vet, Rogue Research) was tested 22 times on a skull for positioning accuracy and precision analysis. The first 6 manipulations allowed the authors to become familiar with the system but were still included in the analyses. Overall, the targeting mean error in 3D was 1.437 mm with SD 1.242 mm. This system could be used for positioning for radiation therapy or radiosurgery. PMID:29666822

  15. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Aiming at practical issues in gun-bore flaw image collection, such as accurate optical design, complex algorithms, and precise technical demands, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly includes a computer, an electrical control box, a stepping motor, and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction, and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can efficiently resolve the ambiguity problem produced by uniform or repeated textures. At the same time, the system has a faster measurement speed and higher measurement precision.

  16. Precision machining of optical surfaces with subaperture correction technologies MRF and IBF

    NASA Astrophysics Data System (ADS)

    Schmelzer, Olaf; Feldkamp, Roman

    2015-10-01

    Precision optical elements are used in a wide range of technical instrumentation. Many optical systems, e.g. semiconductor inspection modules, laser heads for laser material processing, or high-end movie cameras, contain precision optics, including aspherical or freeform surfaces. Critical parameters for such systems are wavefront error, image field curvature, and scattered light. Following these demands, the lens parameters are also critical concerning power, RMSi of the surface form error, and micro-roughness. How can these requirements be reached? The emphasis of this discussion is set on the application of subaperture correction technologies in the fabrication of high-end aspheres and freeforms. The presentation focuses on the technology chain necessary for the production of high-precision aspherical optical components and on the characterization of the applied subaperture finishing tools MRF (magneto-rheological finishing) and IBF (ion beam figuring). These technologies open up the possibility of improving the performance of optical systems.

  17. The Daya Bay antineutrino detector filling system and liquid mass measurement

    NASA Astrophysics Data System (ADS)

    Band, H. R.; Cherwinka, J. J.; Draeger, E.; Heeger, K. M.; Hinrichs, P.; Lewis, C. A.; Mattison, H.; McFarlane, M. C.; Webber, D. M.; Wenman, D.; Wang, W.; Wise, T.; Xiao, Q.

    2013-09-01

    The Daya Bay Reactor Neutrino Experiment has measured the neutrino mixing angle θ13 to world-leading precision. The experiment uses eight antineutrino detectors filled with 20 tons of gadolinium-doped liquid scintillator to detect antineutrinos emitted from the Daya Bay nuclear power plant through the inverse beta decay reaction. The precision measurement of sin²2θ13 relies on the relative antineutrino interaction rates between detectors at near (400 m) and far (roughly 1.8 km) distances from the nuclear reactors. The measured interaction rate in each detector is directly proportional to the number of protons in the liquid scintillator target. A precision detector filling system was developed to simultaneously fill the three liquid zones of the antineutrino detectors and measure the relative target mass between detectors to < 0.02%. This paper describes the design, operation, and performance of the system and the resulting precision measurement of the detectors' target liquid masses.

  18. A geometric modeler based on a dual-geometry representation polyhedra and rational b-splines

    NASA Technical Reports Server (NTRS)

    Klosterman, A. L.

    1984-01-01

    For speed and database reasons, solid geometric modeling of large, complex practical systems is usually approximated by a polyhedra representation. Precise parametric surface and implicit algebraic modelers are available, but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast, the GEOMOD geometric modeling system was built so that a polyhedra abstraction of the geometry would be available for interactive modeling without losing the precise definition of the geometry. Part of the reason that polyhedra modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons). This permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation to describe a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.

  19. Identification of Patients with Family History of Pancreatic Cancer--Investigation of an NLP System Portability.

    PubMed

    Mehrabi, Saeed; Krishnan, Anand; Roch, Alexandra M; Schmidt, Heidi; Li, DingCheng; Kesterson, Joe; Beesley, Chris; Dexter, Paul; Schmidt, Max; Palakal, Mathew; Liu, Hongfang

    2015-01-01

    In this study we have developed a rule-based natural language processing (NLP) system to identify patients with a family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. The family history identification precision was consistent across the institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. The family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6%, and 82.6%, respectively. Negation detection resulted in a precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however, customization of the algorithm on the new dataset improves its performance.
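
    The reported F-measure is the harmonic mean of the reported precision and recall; a quick sketch verifying the arithmetic:

```python
def f_measure(precision, recall):
    """F-measure (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported above for family-member relation discovery:
p, r = 0.753, 0.916
print(f"F-measure: {100 * f_measure(p, r):.1f}%")
```

    The result lands within about 0.1 percentage point of the reported 82.6%; the small residual is consistent with rounding in the published precision and recall.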

  20. The Geoscience Laser Altimetry/Ranging System (GLARS)

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.; Degnan, J. J.; Bufton, J. L.; Garvin, J. B.; Abshire, J. B.

    1986-01-01

    The Geoscience Laser Altimetry/Ranging System (GLARS) is a highly precise distance measurement system to be used for making extremely accurate geodetic observations from a space platform. It combines the attributes of a pointable laser ranging system making observations to cube corner retroreflectors placed on the ground with those of a nadir-looking laser altimeter making height observations to ground, ice sheet, and oceanic surfaces. In the ranging mode, centimeter-level precise baseline and station coordinate determinations will be made on grids consisting of 100 to 200 targets separated by distances from a few tens of kilometers to about 1000 km. These measurements will be used for studies of seismic zone crustal deformations and tectonic plate motions. Ranging measurements will also be made to a coarser, but globally distributed, array of retroreflectors for both precise geodetic and orbit determination applications. In the altimetric mode, relative height determinations will be obtained with approximately decimeter vertical precision and 70 to 100 meter horizontal resolution. The height data will be used to study surface topography and roughness, ice sheet and lava flow thickness, and ocean dynamics. Waveform digitization will provide a measure of the vertical extent of topography within each footprint. The planned Earth Observing System is an attractive candidate platform for GLARS since the GLARS data can be used both for direct analyses and for the highly precise orbit determination needed in the reduction of data from other sensors on the multi-instrument platform. A Nd:YAG laser operating at 1064, 532, and 355 nm meets the performance specifications for the system.

  1. Being an Informed Consumer of Health Information and Assessment of Electronic Health Literacy in a National Sample of Internet Users: Validity and Reliability of the e-HLS Instrument.

    PubMed

    Seçkin, Gül; Yeatts, Dale; Hughes, Susan; Hudson, Cassie; Bell, Valarie

    2016-07-11

    The Internet, with its capacity to provide information that transcends time and space barriers, continues to transform how people find and apply information to their own lives. With the current explosion in electronic sources of health information, including thousands of websites and hundreds of mobile phone health apps, electronic health literacy is gaining increasing prominence in health and medical research. An important dimension of electronic health literacy is the ability to appraise the quality of information that will facilitate everyday health care decisions. Health information seekers explore their care options by gathering information from health websites, blogs, Web-based forums, social networking websites, and advertisements, despite the fact that information quality on the Internet varies greatly. Nonetheless, research has lagged behind in establishing multidimensional instruments, in part due to the evolving construct of health literacy itself. The purpose of this study was to examine psychometric properties of a new electronic health literacy (ehealth literacy) measure in a national sample of Internet users with specific attention to older users. Our paper is motivated by the fact that ehealth literacy is an underinvestigated area of inquiry. Our sample was drawn from a panel of more than 55,000 participants maintained by Knowledge Networks, the largest national probability-based research panel for Web-based surveys. We examined the factor structure of a 19-item electronic Health Literacy Scale (e-HLS) through exploratory factor analysis (EFA) and confirmatory factor analysis, internal consistency reliability, and construct validity on a sample of adults (n=710) and a subsample of older adults (n=194).
The AMOS graphics program 21.0 was used to construct a measurement model, linking latent factors obtained from EFA with 19 indicators to determine whether this factor structure achieved a good fit with our entire sample and the subsample (age ≥ 60 years). Linear regression analyses were performed in separate models to examine: (1) the construct validity of the e-HLS and (2) its association with respondents' demographic characteristics and health variables. The EFA produced a 3-factor solution: communication (2 items), trust (4 items), and action (13 items). The 3-factor structure of the e-HLS was found to be invariant for the subsample. Fit indices obtained were as follows: full sample: χ2 (710)=698.547, df=131, P<.001, comparative fit index (CFI)=0.94, normed fit index (NFI)=0.92, root mean squared error of approximation (RMSEA)=0.08; and for the older subsample (age ≥ 60 years): χ2 (194)=275.744, df=131, P<.001, CFI=0.95, NFI=0.90, RMSEA=0.08. The analyses supported the e-HLS validity and internal reliability for the full sample and subsample. The overwhelming majority of our respondents reported a great deal of confidence in their ability to appraise the quality of information obtained from the Internet, yet less than half reported performing quality checks contained on the e-HLS.

  2. Being an Informed Consumer of Health Information and Assessment of Electronic Health Literacy in a National Sample of Internet Users: Validity and Reliability of the e-HLS Instrument

    PubMed Central

    Yeatts, Dale; Hughes, Susan; Hudson, Cassie; Bell, Valarie

    2016-01-01

    Background The Internet, with its capacity to provide information that transcends time and space barriers, continues to transform how people find and apply information to their own lives. With the current explosion in electronic sources of health information, including thousands of websites and hundreds of mobile phone health apps, electronic health literacy is gaining increasing prominence in health and medical research. An important dimension of electronic health literacy is the ability to appraise the quality of information that will facilitate everyday health care decisions. Health information seekers explore their care options by gathering information from health websites, blogs, Web-based forums, social networking websites, and advertisements, despite the fact that information quality on the Internet varies greatly. Nonetheless, research has lagged behind in establishing multidimensional instruments, in part due to the evolving construct of health literacy itself. Objective The purpose of this study was to examine psychometric properties of a new electronic health literacy (ehealth literacy) measure in a national sample of Internet users with specific attention to older users. Our paper is motivated by the fact that ehealth literacy is an underinvestigated area of inquiry. Methods Our sample was drawn from a panel of more than 55,000 participants maintained by Knowledge Networks, the largest national probability-based research panel for Web-based surveys. We examined the factor structure of a 19-item electronic Health Literacy Scale (e-HLS) through exploratory factor analysis (EFA) and confirmatory factor analysis, internal consistency reliability, and construct validity on a sample of adults (n=710) and a subsample of older adults (n=194).
The AMOS graphics program 21.0 was used to construct a measurement model, linking latent factors obtained from EFA with 19 indicators to determine whether this factor structure achieved a good fit with our entire sample and the subsample (age ≥ 60 years). Linear regression analyses were performed in separate models to examine: (1) the construct validity of the e-HLS and (2) its association with respondents’ demographic characteristics and health variables. Results The EFA produced a 3-factor solution: communication (2 items), trust (4 items), and action (13 items). The 3-factor structure of the e-HLS was found to be invariant for the subsample. Fit indices obtained were as follows: full sample: χ2 (710)=698.547, df=131, P<.001, comparative fit index (CFI)=0.94, normed fit index (NFI)=0.92, root mean squared error of approximation (RMSEA)=0.08; and for the older subsample (age ≥ 60 years): χ2 (194)=275.744, df=131, P<.001, CFI=0.95, NFI=0.90, RMSEA=0.08. Conclusions The analyses supported the e-HLS validity and internal reliability for the full sample and subsample. The overwhelming majority of our respondents reported a great deal of confidence in their ability to appraise the quality of information obtained from the Internet, yet less than half reported performing quality checks contained on the e-HLS. PMID:27400726

  3. The emergence of precision therapeutics: New challenges and opportunities for Canada's health leaders.

    PubMed

    Slater, Jim; Shields, Laura; Racette, Ray J; Juzwishin, Donald; Coppes, Max

    2015-11-01

    In the era of personalized and precision medicine, the approach to healthcare is quickly changing. Genetic and other molecular information are being increasingly demanded by clinicians and expected by patients for prevention, screening, diagnosis, prognosis, health promotion, and treatment of an increasing number of conditions. As a result of these developments, Canadian health leaders must understand and be prepared to lead the necessary changes associated with these disruptive technologies. This article focuses on precision therapeutics but also provides background on the concepts and terminology related to personalized and precision medicine and explores Canadian health leadership and system issues that may pose barriers to their implementation. The article is intended to inspire, educate, and mobilize Canadian health leaders to initiate dialogue around the transformative changes necessary to ready the healthcare system to realize the benefits of precision therapeutics. © 2015 Collège canadien des leaders en santé

  4. From Lift-Off to Light-Off

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Shuttle's propellant measurement system is produced by Simmonds Precision. Company has extensive experience in fuel management systems and other equipment for military and commercial aircraft. A separate corporate entity, Industrial Controls Division was formed due to a number of non-aerospace spinoffs. One example is a "custody transfer" system for measuring and monitoring liquefied natural gas (LNG). LNG is transported aboard large tankers at minus 260 degrees Fahrenheit. Value of a single shipload may reach $15 million. Precision's LNG measurement and monitoring system aids accurate financial accounting and enhances crew safety. Custody transfer systems have been provided for 10 LNG tankers, built by Owing Shipbuilding. Simmonds also provided measurement systems for several liquefied petroleum gas (LPG) production and storage installations. Another spinoff developed by Simmonds Precision is an advanced ignition system for industrial boilers that offers savings of millions of gallons of fuel, and a computer based monitoring and control system for improving safety and reliability in electrical utility applications. Simmonds produces a line of safety systems for nuclear and non-nuclear electrical power plants.

  5. Relative receiver autonomous integrity monitoring for future GNSS-based aircraft navigation

    NASA Astrophysics Data System (ADS)

    Gratton, Livio Rafael

    The Global Positioning System (GPS) has enabled reliable, safe, and practical aircraft positioning for en-route and non-precision phases of flight for more than a decade. Intense research is currently devoted to extending the use of Global Navigation Satellite Systems (GNSS), including GPS, to precision approach and landing operations. In this context, this work is focused on the development, analysis, and verification of the concept of Relative Receiver Autonomous Integrity Monitoring (RRAIM) and its potential applications to precision approach navigation. RRAIM fault detection algorithms are developed, and associated mathematical bounds on position error are derived. These are investigated as possible solutions to some current key challenges in precision approach navigation, discussed below. Augmentation systems serving continent-size areas (like the Wide Area Augmentation System or WAAS) allow certain precision approach operations within the covered region. More and better satellites, with dual frequency capabilities, are expected to be in orbit in the mid-term future, which will potentially allow WAAS-like capabilities worldwide with a sparse ground station network. Two main challenges in achieving this goal are (1) ensuring that navigation fault detection functions are fast enough to alert worldwide users of hazardously misleading information, and (2) minimizing situations in which navigation is unavailable because the user's local satellite geometry is insufficient for safe position estimation. Local augmentation systems (implemented at individual airports, like the Local Area Augmentation System or LAAS) have the potential to allow precision approach and landing operations by providing precise corrections to user-satellite range measurements. An exception to these capabilities arises during ionospheric storms (caused by solar activity), when hazardous situations can exist with residual range errors several orders of magnitudes higher than nominal. 
Until dual-frequency civil GPS signals are available, the ability to provide integrity during ionospheric storms, without excessive loss of availability, is a major challenge. For all users, with or without augmentation, some situations cause short-duration losses of satellites in view. Two examples are aircraft banking during turns and ionospheric scintillation. The loss of range signals can translate into gaps in good satellite geometry, and the resulting challenge is to ensure navigation continuity by bridging these gaps while simultaneously maintaining high integrity. It is shown that the RRAIM methods developed in this research can be applied to mitigate each of these obstacles to safe and reliable precision aircraft navigation.

  6. Precision Pointing Control System (PPCS) system design and analysis. [for gimbaled experiment platforms

    NASA Technical Reports Server (NTRS)

    Frew, A. M.; Eisenhut, D. F.; Farrenkopf, R. L.; Gates, R. F.; Iwens, R. P.; Kirby, D. K.; Mann, R. J.; Spencer, D. J.; Tsou, H. S.; Zaremba, J. G.

    1972-01-01

    The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to perform orientation of up to six independent gimbaled experiment platforms to design goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24 hour geosynchronous, with a design goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control where gimbal orientation is controlled, open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.

  7. The mean and variance of phylogenetic diversity under rarefaction

    PubMed Central

    Matsen, Frederick A.

    2013-01-01

    Summary Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701
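
    The contrast drawn above, exact analytical rarefaction formulae versus Monte Carlo subsampling, can be illustrated with the classical species-richness case (the paper derives the analogous formulae for PD). A sketch with a hypothetical abundance vector; the exact expectation uses the standard hypergeometric rarefaction formula:

```python
import math
import random

def rarefied_richness_mean(counts, n):
    """Exact expected species richness in a random subsample of n
    individuals (classical hypergeometric formula for richness; the
    cited paper derives the analogue for phylogenetic diversity)."""
    N = sum(counts)
    total = math.comb(N, n)
    return sum(1 - math.comb(N - Ni, n) / total for Ni in counts)

def rarefied_richness_mc(counts, n, reps=20000, seed=1):
    """Monte Carlo estimate via repeated random subsampling."""
    pool = [sp for sp, Ni in enumerate(counts) for _ in range(Ni)]
    rng = random.Random(seed)
    return sum(len(set(rng.sample(pool, n))) for _ in range(reps)) / reps

counts = [50, 20, 10, 5, 1]  # hypothetical abundances of 5 species
exact = rarefied_richness_mean(counts, 10)
approx = rarefied_richness_mc(counts, 10)
print(exact, approx)  # the Monte Carlo value converges to the exact one
```

    As the abstract notes, the Monte Carlo route needs many draws to stabilize, while the analytical form is exact at no extra cost.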

  8. The mean and variance of phylogenetic diversity under rarefaction.

    PubMed

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD.We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome.There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance.Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve.The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required.

  9. Free-breathing pediatric chest MRI: Performance of self-navigated golden-angle ordered conical ultrashort echo time acquisition.

    PubMed

    Zucker, Evan J; Cheng, Joseph Y; Haldipur, Anshul; Carl, Michael; Vasanawala, Shreyas S

    2018-01-01

    To assess the feasibility and performance of conical k-space trajectory free-breathing ultrashort echo time (UTE) chest magnetic resonance imaging (MRI) versus four-dimensional (4D) flow, and the effects of 50% data subsampling and soft-gated motion correction. Thirty-two consecutive children who underwent both 4D flow and UTE ferumoxytol-enhanced chest MR (mean age: 5.4 years, range: 6 days to 15.7 years) in one 3T exam were recruited. From UTE k-space data, three image sets were reconstructed: 1) one with all data, 2) one using the first 50% of data, and 3) a final set with soft-gating motion correction, leveraging the signal magnitude immediately after each excitation. Two radiologists in blinded fashion independently scored image quality of anatomical landmarks on a 5-point scale. Ratings were compared using Wilcoxon rank-sum, Wilcoxon signed-ranks, and Kruskal-Wallis tests. Interobserver agreement was assessed with the intraclass correlation coefficient (ICC). For fully sampled UTE, mean scores for all structures were ≥4 (good-excellent). Full UTE surpassed 4D flow for lungs and airways (P < 0.001), with similar pulmonary artery (PA) quality (P = 0.62). 50% subsampling only slightly degraded all landmarks (P < 0.001), as did motion correction. Subsegmental PA visualization was possible in >93% of scans for all techniques (P = 0.27). Interobserver agreement was excellent for combined scores (ICC = 0.83). High-quality free-breathing conical UTE chest MR is feasible, surpassing 4D flow for lungs and airways, with equivalent PA visualization. Data subsampling only mildly degraded images, favoring shorter scan times. Soft-gating motion correction overall did not improve image quality. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:200-209. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Psychometric properties of a German organizational justice questionnaire (G-OJQ) and its association with self-rated health: findings from the Mannheim Industrial Cohort Studies (MICS).

    PubMed

    Herr, Raphael M; Li, Jian; Bosch, Jos A; Schmidt, Burkhard; DeJoy, David M; Fischer, Joachim E; Loerbroks, Adrian

    2014-01-01

    The objective of the present study was to validate a German 11-item organizational justice questionnaire (G-OJQ) that consists of two subscales, referred to as "procedural justice" (PJ) and "interactional justice" (IJ), adapted from Moorman's organizational justice (OJ) questionnaire. A second objective was to determine associations of the G-OJQ with self-rated health. This study used cross-sectional data from an occupational cohort of 1518 factory workers from Germany (87.7% male; mean age = 38.8, SD = 11.9). After splitting the sample into two random subsamples, we assessed structural validity by exploratory factor analysis in one subsample and by confirmatory factor analysis in the other subsample. Internal consistency was assessed by Cronbach's α. Associations with self-reported poor health were estimated by logistic regression. The full scale and its subscales yielded Cronbach's α values of ≥0.9, and item-total correlations were ≥0.5. Factor analyses confirmed the expected 2-factor structure, labeled "interactional justice" (IJ, 4 items, λ 0.43-0.94) and "procedural justice" (PJ, 7 items, λ 0.46-0.83), respectively, and showed an acceptable fit to the data (χ2 = 61; p = .001; CFI = 0.995; RMSEA = 0.037). The OJ total score as well as subscale scores in the lowest quartile, when compared to the highest quartile, were associated with ≥2.3-fold increased odds of reporting poor health. The G-OJQ seems to be a valid and useful tool for observational and intervention studies in occupational settings. Future studies may additionally explore longitudinal associations and test the generalizability of the present findings to other populations and health outcomes.
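
    The validation workflow above, a random split into two subsamples plus an internal-consistency check, can be sketched as follows. The data here are simulated single-factor scores standing in for the 11-item questionnaire, not the MICS data, and the α computation is the standard formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Simulated single-factor responses: one latent score plus item noise.
rng = np.random.default_rng(42)
latent = rng.normal(size=(1518, 1))
scores = latent + 0.5 * rng.normal(size=(1518, 11))

# Random split into two subsamples, as done for the EFA/CFA cross-check.
idx = rng.permutation(len(scores))
half_a, half_b = scores[idx[:759]], scores[idx[759:]]

print(cronbach_alpha(half_a), cronbach_alpha(half_b))
```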

  11. A study of uniformity of elements deposition on glass fiber filters after collection of airborne particulate matter (PM-10), using a high-volume sampler.

    PubMed

    Marrero, Julieta; Rebagliati, Raúl Jiménez; Gómez, Darío; Smichowski, Patricia

    2005-12-15

    A study was conducted to evaluate the homogeneity of the distribution of metals and metalloids deposited on glass fiber filters collected using a high-volume sampler equipped with a PM-10 sampling head. The airborne particulate matter (APM)-loaded glass fiber filters (with an active surface of about 500 cm²) were weighed, and each filter was then cut into five small discs of 6.5 cm diameter. Each disc was mineralized by acid-assisted microwave (MW) digestion using a mixture of nitric, perchloric and hydrofluoric acids. Analysis was performed by axial-view inductively coupled plasma optical emission spectrometry (ICP OES), and the elements considered were Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Ti and V. The procedure was validated by analysis of the standard reference material NIST 1648, urban particulate matter. To compare the possible variability in trace element distribution within a particular filter, the mean concentration for each element over the five positions (discs) was calculated, and each element concentration was normalized to this mean value. Scatter plots of the normalized concentrations were examined for all elements and all sub-samples. We considered an element homogeneously distributed if its normalized concentrations in the 45 sub-samples were within ±15% of the mean value, i.e. between 0.85 and 1.15. The study demonstrated that the 12 elements tested showed different distribution patterns. Aluminium, Cu and V showed the most homogeneous patterns, while Cd and Ni exhibited the largest departures from the mean value, in 13 out of the 45 discs analyzed. No preferential deposition was noticed in any sub-sample.
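
    The ±15% homogeneity criterion above can be expressed directly: normalize each element's 45 sub-sample concentrations to their mean and flag any value outside [0.85, 1.15]. A sketch with simulated concentrations (the study's raw data are not reproduced here):

```python
import numpy as np

# Simulated concentrations for one element across the 45 sub-samples
# (5 disc positions x 9 filters); illustrative values only.
rng = np.random.default_rng(7)
conc = rng.normal(100.0, 5.0, size=45)

# Normalize each sub-sample concentration to the element's mean.
normalized = conc / conc.mean()

# Homogeneity criterion from the study: all normalized values within
# +/-15% of the mean, i.e. between 0.85 and 1.15.
outliers = np.flatnonzero((normalized < 0.85) | (normalized > 1.15))
print("homogeneous:", outliers.size == 0, "| outlier positions:", outliers)
```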

  12. Floating on Air: Fulfillment and Self-in-Context for Distressed Japanese Women

    PubMed Central

    Arnault, Denise Saint; Shimabukuro, Shizuka

    2017-01-01

    This research was part of a larger mixed-methods study examining culture, distress, and help seeking. We surveyed 209 Japanese women living in the United States recruited from clinic and community-based sites, and carried out semi-structured ethnographic interviews with a highly distressed subsample of 25 Japanese women. Analytic Ethnography revealed that women described themselves as a “self-in-context,” negotiating situations using protective resources or experiencing risk exposure. Women experienced quality of life (QOL) when they were successful. However, a related goal of achieving Ikigai (or purpose in life) was differentiated from QOL, and was defined as an ongoing process of searching for balance between achieving social and individual fulfillment. Our resulting hypothetical model suggested that symptom level would be related to risk and protective factors (tested for the full sample) and to specific risk and protective phenomena (tested in the distressed subsample). The t tests in the full sample found that women who were above threshold for depressive symptoms (n = 26) had higher social stressor and lower social support means. Women who were above the threshold for physical symptoms (n = 99) had higher social stressor means. Analysis of the interviewed subsample found that low self-validation and excessive responsibilities were related to high physical symptoms. We conclude that perceived lack of balance between culturally defined, and potentially opposing, markers of success can create a stressful dilemma for first-generation immigrant Japanese women, requiring new skills to achieve balance. Perceptions of health, as well as illness, are part of complex culturally based interpretations that have implications for intervention for immigrant Japanese women living in the United States. PMID:26896391

  13. Time-jittered marine seismic data acquisition via compressed sensing and sparsity-promoting wavefield reconstruction

    NASA Astrophysics Data System (ADS)

    Wason, H.; Herrmann, F. J.; Kumar, R.

    2016-12-01

    Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--one that is sparse or compressible in some transform domain--from far fewer measurements than the Nyquist sampling criterion requires. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step beyond multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme in which single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and faster acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which must be separated for further processing. We demonstrate successful recovery of conventional seismic data from jittered data by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps to control the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable surveys.
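    The key practical point of the abstract above — jittered subsampling bounds the maximum gap between acquired samples, whereas purely random subsampling does not — can be sketched in a few lines. This is a minimal illustration of the principle only, not the authors' acquisition design; the function names and the 1-D grid are our own assumptions.

```python
import random

def jittered_subsample(n, factor):
    """Pick one point uniformly at random inside each consecutive
    window of `factor` grid points (n // factor windows in total).
    The gap between consecutive picks is then at most 2*factor - 1."""
    return [w * factor + random.randrange(factor) for w in range(n // factor)]

def random_subsample(n, factor):
    """Pick the same number of points uniformly at random over the
    whole grid; gap sizes are uncontrolled and can be arbitrarily large."""
    return sorted(random.sample(range(n), n // factor))

def max_gap(indices):
    """Largest spacing between consecutive chosen points."""
    return max(b - a for a, b in zip(indices, indices[1:]))
```

    The bounded gap is what makes localized sparsifying transforms viable for reconstruction: no region of the wavefield is ever left entirely unsampled.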

  14. The Origin of Faint Tidal Features around Galaxies in the RESOLVE Survey

    NASA Astrophysics Data System (ADS)

    Hood, Callie E.; Kannappan, Sheila J.; Stark, David V.; Dell’Antonio, Ian P.; Moffett, Amanda J.; Eckert, Kathleen D.; Norris, Mark A.; Hendel, David

    2018-04-01

    We study tidal features around galaxies in the REsolved Spectroscopy Of a Local VolumE (RESOLVE) survey. Our sample consists of 1048 RESOLVE galaxies that overlap with the DECam Legacy Survey, which reaches an r-band 3σ depth of ∼27.9 mag arcsec‑2 for a 100 arcsec2 feature. Images were masked, smoothed, and inspected for tidal features such as streams, shells, or tails/arms. We find tidal features in 17±2% of our galaxies, setting a lower limit on the true frequency. The frequency of tidal features in the gas-poor (gas-to-stellar mass ratio <0.1) subsample is lower than in the gas-rich subsample (13±3% versus 19±2%). Within the gas-poor subsample, galaxies with tidal features have higher stellar and halo masses, ∼3× closer distances to nearest neighbors (in the same group), and possibly fewer group members at fixed halo mass than galaxies without tidal features, but similar specific star formation rates. These results suggest tidal features in gas-poor galaxies are typically streams/shells from dry mergers or satellite disruption. In contrast, the presence of tidal features around gas-rich galaxies does not correlate with stellar or halo mass, suggesting these tidal features are often tails/arms from resonant interactions. Similar to tidal features in gas-poor galaxies, tidal features in gas-rich galaxies imply 1.7× closer nearest neighbors in the same group; however, they are associated with diskier morphologies, higher star formation rates, and higher gas content. In addition to interactions with known neighbors, we suggest that tidal features in gas-rich galaxies may arise from accretion of cosmic gas and/or gas-rich satellites below the survey limit.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaumberg, Andrew

    The Omics Tools package provides several small, trivial tools for work in genomics. This single portable package, the “omics.jar” file, is a toolbox that works in any Java-based environment, including PCs, Macs, and supercomputers. The number of tools is expected to grow. One tool (called cmsearch.hadoop or cmsearch.local) calls the external cmsearch program to predict non-coding RNA in a genome. The cmsearch program is part of the third-party Infernal package; Omics Tools does not contain Infernal, which may be installed separately. The cmsearch.hadoop subtool requires Apache Hadoop and runs on a supercomputer, whereas cmsearch.local does not and runs on a server. Omics Tools does not contain Hadoop; Hadoop may be installed separately. The other tools (cmgbk, cmgff, fastats, pal, randgrp, randgrpr, randsub) do not interface with third-party tools. Omics Tools is written in the Java and Scala programming languages. Invoking the “help” command shows the currently available tools, as shown below:

      schaumbe@gpint06:~/proj/omics$ java -jar omics.jar help
      Known commands are:
        cmgbk           : compare cmsearch and GenBank Infernal hits
        cmgff           : compare hits among two GFF (version 3) files
        cmsearch.hadoop : find Infernal hits in a genome, on your supercomputer
        cmsearch.local  : find Infernal hits in a genome, on your workstation
        fastats         : FASTA stats, e.g. # bases, GC content
        pal             : stem-loop motif detection by palindromic sequence search (code stub)
        randgrp         : random subsample without replacement, of groups
        randgrpr        : random subsample with replacement, of groups (fast)
        randsub         : random subsample without replacement, of file lines

      For more help regarding a particular command, use: java -jar omics.jar command help
      Usage: java -jar omics.jar command args
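    As a rough illustration of what a line-level subsampler such as randsub does — random subsampling of file lines without replacement — the following sketch shows the idea. This is our own Python illustration, not the package's Java/Scala implementation; the function name and order-preserving behavior are assumptions.

```python
import random

def subsample_lines(lines, k, seed=None):
    """Return k lines drawn without replacement, preserving the
    lines' original relative order (illustrative sketch only)."""
    rng = random.Random(seed)  # seeded for reproducible draws
    keep = sorted(rng.sample(range(len(lines)), k))
    return [lines[i] for i in keep]
```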

  16. Bidirectional Associations Between Cannabis Use and Depressive Symptoms From Adolescence Through Early Adulthood Among At-Risk Young Men.

    PubMed

    Womack, Sean R; Shaw, Daniel S; Weaver, Chelsea M; Forbes, Erika E

    2016-03-01

    Previous studies have established a relationship between cannabis use and affective problems among adolescents and young adults; however, the direction of these associations remains a topic of debate. The present study sought to examine bidirectional associations between cannabis use and depressive symptoms, specifically testing the validity of two competing hypotheses: the cannabis effect hypothesis, which suggests that cannabis use contributes to the onset of later depressive symptoms; and the self-medication hypothesis, which posits that individuals increase their use of a substance to alleviate distressing psychological symptoms. Participants in this study were 264 low-socioeconomic-status males assessed at ages 17, 20, and 22. Cross-lag panel models were fit to test bidirectional associations between cannabis use frequency and depressive symptoms across the transition from adolescence to early adulthood. In addition, analyses were conducted within two high-risk subsamples to examine whether associations between cannabis use frequency (ranging from never used to daily use) and depressive symptoms differed among regular cannabis users (used cannabis more than once per week) or subjects reporting at least mild levels of depressive symptoms. Cannabis use and depressive symptoms were concurrently correlated. Cannabis use predicted increases in later depressive symptoms, but only among the mild-depression subsample. Depressive symptoms predicted only slight increases in later cannabis use, among the subsample of regular cannabis users. Temporal patterns of cannabis use and depressive symptoms provide evidence for the cannabis effect but limited evidence for the self-medication hypothesis. Adolescents higher in depressive symptoms may be vulnerable to the adverse psychological effects of using cannabis. Results are discussed in terms of implications for basic research, prevention, and intervention.

  17. Prediabetes, undiagnosed diabetes, and diabetes among Mexican adults: findings from the Mexican Health and Aging Study.

    PubMed

    Kumar, Amit; Wong, Rebeca; Ottenbacher, Kenneth J; Al Snih, Soham

    2016-03-01

    The purpose of the study was to examine the prevalence and determinants of prediabetes, undiagnosed diabetes, and diabetes among Mexican adults from a subsample of the Mexican Health and Aging Study. We examined 2012 participants from a subsample of the Mexican Health and Aging Study. Measures included sociodemographic characteristics, body mass index, central obesity, medical conditions, cholesterol, high-density lipoprotein cholesterol, hemoglobin A1c, and vitamin D. Logistic regression was performed to identify factors associated with prediabetes, undiagnosed diabetes, and self-reported diabetes. Prevalence of prediabetes, undiagnosed diabetes, and self-reported diabetes in this cohort was 44.2%, 18.0%, and 21.4%, respectively. Participants with high waist-hip ratio (1.61, 95% confidence interval [CI] = 1.05-2.45) and high cholesterol (1.85, 95% CI = 1.36-2.51) had higher odds of prediabetes. Overweight (1.68, 95% CI = 1.07-2.64), obesity (2.38, 95% CI = 1.41-4.02), and high waist circumference (1.60, 95% CI = 1.06-2.40) were significantly associated with higher odds of having undiagnosed diabetes. Those residing in a Mexican state with high U.S. migration had lower odds of prediabetes (0.61, 95% CI = 0.45-0.82) and undiagnosed diabetes (0.53, 95% CI = 0.41-0.70). Those engaged in regular physical activity had lower odds of undiagnosed diabetes (0.74, 95% CI = 0.57-0.97). There is a high prevalence of prediabetes and undiagnosed diabetes among Mexican adults in this subsample. Findings suggest the need for resources to prevent, identify, and treat persons with prediabetes and undiagnosed diabetes. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Bidirectional Associations Between Cannabis Use and Depressive Symptoms From Adolescence Through Early Adulthood Among At-Risk Young Men

    PubMed Central

    Womack, Sean R.; Shaw, Daniel S.; Weaver, Chelsea M.; Forbes, Erika E.

    2016-01-01

    Objective: Previous studies have established a relationship between cannabis use and affective problems among adolescents and young adults; however, the direction of these associations remains a topic of debate. The present study sought to examine bidirectional associations between cannabis use and depressive symptoms, specifically testing the validity of two competing hypotheses: the cannabis effect hypothesis, which suggests that cannabis use contributes to the onset of later depressive symptoms; and the self-medication hypothesis, which posits that individuals increase their use of a substance to alleviate distressing psychological symptoms. Method: Participants in this study were 264 low-socioeconomic-status males assessed at ages 17, 20, and 22. Cross-lag panel models were fit to test bidirectional associations between cannabis use frequency and depressive symptoms across the transition from adolescence to early adulthood. In addition, analyses were conducted within two high-risk subsamples to examine whether associations between cannabis use frequency (ranging from never used to daily use) and depressive symptoms differed among regular cannabis users (used cannabis more than once per week) or subjects reporting at least mild levels of depressive symptoms. Results: Cannabis use and depressive symptoms were concurrently correlated. Cannabis use predicted increases in later depressive symptoms, but only among the mild-depression subsample. Depressive symptoms predicted only slight increases in later cannabis use, among the subsample of regular cannabis users. Conclusions: Temporal patterns of cannabis use and depressive symptoms provide evidence for the cannabis effect but limited evidence for the self-medication hypothesis. Adolescents higher in depressive symptoms may be vulnerable to the adverse psychological effects of using cannabis. Results are discussed in terms of implications for basic research, prevention, and intervention. PMID:26997187

  19. New insights into the correlation structure of DSM-IV depression symptoms in the general population v. subsamples of depressed individuals.

    PubMed

    Foster, S; Mohler-Kuo, M

    2018-06-01

    Previous research failed to uncover a replicable dimensional structure underlying the symptoms of depression. We aimed to examine two neglected methodological issues in this research: (a) adjusting symptom correlations for overall depression severity; and (b) analysing general population samples v. subsamples of currently depressed individuals. Using population-based cross-sectional and longitudinal data from two nations (Switzerland, 5883 young men; USA, 2174 young men and 2244 young women) we assessed the dimensions of the nine DSM-IV depression symptoms in young adults. In each general-population sample and each subsample of currently depressed participants, we conducted a standardised process of three analytical steps, based on exploratory and confirmatory factor and bifactor analysis, to reveal any replicable dimensional structure underlying symptom correlations while controlling for overall depression severity. We found no evidence of a replicable dimensional structure across samples when adjusting symptom correlations for overall depression severity. In the general-population samples, symptoms correlated strongly and a single dimension of depression severity was revealed. Among depressed participants, symptom correlations were surprisingly weak and no replicable dimensions were identified, regardless of severity adjustment. First, caution is warranted when considering studies assessing dimensions of depression because general population-based studies and studies of depressed individuals generate different data that can lead to different conclusions. This problem likely generalises to other models based on the symptoms' inter-relationships, such as network models. Second, whereas the overall severity aligns individuals on a continuum of disorder intensity that allows non-affected individuals to be distinguished from affected individuals, the clinical evaluation and treatment of depressed individuals should focus directly on each individual's symptom profile.

  20. Individual Placement and Support in Spinal Cord Injury: A Longitudinal Observational Study of Employment Outcomes.

    PubMed

    Ottomanelli, Lisa; Goetz, Lance L; Barnett, Scott D; Njoh, Eni; Dixon, Thomas M; Holmes, Sally Ann; LePage, James P; Ota, Doug; Sabharwal, Sunil; White, Kevin T

    2017-08-01

    To determine the effects of a 24-month program of Individual Placement and Support (IPS) supported employment (SE) on employment outcomes for veterans with spinal cord injury (SCI). Longitudinal, observational multisite study of a single-arm, nonrandomized cohort. SCI centers in the Veterans Health Administration (n=7). Veterans with SCI (N=213) enrolled during an episode of either inpatient hospital care (24.4%) or outpatient care (75.6%). More than half the sample (59.2%) had a history of traumatic brain injury (TBI). IPS SE for 24 months. Competitive employment. Over the 24-month period, 92 of 213 IPS participants obtained competitive jobs for an overall employment rate of 43.2%. For the subsample of participants without TBI enrolled as outpatients (n=69), 36 obtained competitive jobs for an overall employment rate of 52.2%. Overall, employed participants averaged 38.2±29.7 weeks of employment, with an average time to first employment of 348.3±220.0 days. Nearly 25% of first jobs occurred within 4 to 6 months of beginning the program. Similar employment characteristics were observed in the subsample without TBI history enrolled as outpatients. Almost half of the veterans with SCI participating in the 24-month IPS program as part of their ongoing SCI care achieved competitive employment, consistent with their expressed preferences at the start of the study. Among a subsample of veterans without TBI history enrolled as outpatients, employment rates were >50%. Time to first employment was highly variable, but quite long in many instances. These findings support offering continued IPS services as part of ongoing SCI care to achieve positive employment outcomes. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Floating on Air: Fulfillment and Self-in-Context for Distressed Japanese Women.

    PubMed

    Saint Arnault, Denise; Shimabukuro, Shizuka

    2016-05-01

    This research was part of a larger mixed-methods study examining culture, distress, and help seeking. We surveyed 209 Japanese women living in the United States recruited from clinic and community-based sites, and carried out semi-structured ethnographic interviews with a highly distressed subsample of 25 Japanese women. Analytic Ethnography revealed that women described themselves as a "self-in-context," negotiating situations using protective resources or experiencing risk exposure. Women experienced quality of life (QOL) when they were successful. However, a related goal of achieving Ikigai (or purpose in life) was differentiated from QOL, and was defined as an ongoing process of searching for balance between achieving social and individual fulfillment. Our resulting hypothetical model suggested that symptom level would be related to risk and protective factors (tested for the full sample) and to specific risk and protective phenomena (tested in the distressed subsample). The t tests in the full sample found that women who were above threshold for depressive symptoms (n = 26) had higher social stressor and lower social support means. Women who were above the threshold for physical symptoms (n = 99) had higher social stressor means. Analysis of the interviewed subsample found that low self-validation and excessive responsibilities were related to high physical symptoms. We conclude that perceived lack of balance between culturally defined, and potentially opposing, markers of success can create a stressful dilemma for first-generation immigrant Japanese women, requiring new skills to achieve balance. Perceptions of health, as well as illness, are part of complex culturally based interpretations that have implications for intervention for immigrant Japanese women living in the United States. © The Author(s) 2016.

  2. X-ray versus infrared selection of distant galaxy clusters: A case study using the XMM-LSS and SpARCS cluster samples

    NASA Astrophysics Data System (ADS)

    Willis, J. P.; Ramos-Ceja, M. E.; Muzzin, A.; Pacaud, F.; Yee, H. K. C.; Wilson, G.

    2018-04-01

    We present a comparison of two samples of z > 0.8 galaxy clusters selected using different wavelength-dependent techniques and examine the physical differences between them. We consider 18 clusters from the X-ray selected XMM-LSS distant cluster survey and 92 clusters from the optical-MIR selected SpARCS cluster survey. Both samples are selected from the same approximately 9 square degree sky area and we examine them using common XMM-Newton, Spitzer-SWIRE and CFHT Legacy Survey data. Clusters from each sample are compared employing aperture measures of X-ray and MIR emission. We divide the SpARCS distant cluster sample into three sub-samples: a) X-ray bright, b) X-ray faint, MIR bright, and c) X-ray faint, MIR faint clusters. We determine that X-ray and MIR selected clusters display very similar surface brightness distributions of galaxy MIR light. In addition, the average location and amplitude of the galaxy red sequence as measured from stacked colour histograms is very similar in the X-ray and MIR-selected samples. The sub-sample of X-ray faint, MIR bright clusters displays a distribution of BCG-barycentre position offsets which extends to higher values than all other samples. This observation indicates that such clusters may exist in a more disturbed state compared to the majority of the distant cluster population sampled by XMM-LSS and SpARCS. This conclusion is supported by stacked X-ray images for the X-ray faint, MIR bright cluster sub-sample that display weak, centrally-concentrated X-ray emission, consistent with a population of growing clusters accreting from an extended envelope of material.

  3. [Anthropometric model for the prediction of appendicular skeletal muscle mass in Chilean older adults].

    PubMed

    Lera, Lydia; Albala, Cecilia; Ángel, Bárbara; Sánchez, Hugo; Picrin, Yaisy; Hormazabal, María José; Quiero, Andrea

    2014-03-01

    To develop a predictive model of appendicular skeletal muscle mass (ASM) based on anthropometric measurements in elderly people from Santiago, Chile. 616 community-dwelling, non-disabled subjects ≥ 60 years (mean 69.9 ± 5.2 years) living in Santiago, 64.6% female, participating in the ALEXANDROS study. Anthropometric measurements, handgrip strength, mobility tests and DEXA were performed. Stepwise linear regression models were used to associate ASM from DEXA with anthropometric variables, age and sex. The sample was divided at random into two subsamples to obtain a prediction equation for each, and the equations were mutually validated by double cross-validation. The high correlation between the observed and predicted ASM values in both subsamples and the low degree of shrinkage allowed the final prediction equation to be developed from the total sample. The cross-validity coefficients of the prediction models from the subsamples (0.941 and 0.9409) and the shrinkage values (0.004 and 0.006) were similar for both equations. The final prediction model obtained from the total sample was: ASM (kg) = 0.107 (weight in kg) + 0.251 (knee height in cm) + 0.197 (calf circumference in cm) + 0.047 (dynamometry in kg) - 0.034 (hip circumference in cm) + 3.417 (male) - 0.020 (age in years) - 7.646 (R2 = 0.89). The mean ASM obtained by the prediction equation and the DEXA measurement were similar (16.8 ± 4.0 vs 16.9 ± 3.7) and highly concordant according to the Bland and Altman (95% CI: -2.6 to 2.7) and Lin (concordance correlation coefficient = 0.94) methods. We obtained a low-cost anthropometric equation to determine appendicular skeletal muscle mass, useful for the screening of sarcopenia in older adults. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
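    The prediction equation quoted in the abstract can be transcribed directly into code. The helper below is our own illustrative encoding of the published coefficients; the function name and argument names are assumptions, and the male term is treated as a 0/1 indicator as the "(male)" term implies.

```python
def predict_asm(weight_kg, knee_height_cm, calf_circ_cm, grip_kg,
                hip_circ_cm, is_male, age_years):
    """Appendicular skeletal muscle mass (kg) from the anthropometric
    equation reported in the abstract (R2 = 0.89)."""
    return (0.107 * weight_kg
            + 0.251 * knee_height_cm
            + 0.197 * calf_circ_cm
            + 0.047 * grip_kg          # handgrip dynamometry
            - 0.034 * hip_circ_cm
            + 3.417 * (1 if is_male else 0)
            - 0.020 * age_years
            - 7.646)
```

    For example, a 70-year-old man weighing 70 kg with 50 cm knee height, 35 cm calf circumference, 30 kg grip strength and 100 cm hip circumference gives roughly 19.3 kg of predicted ASM, in the range of the sample mean (16.8 ± 4.0 kg) reported above.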

  4. The specialty of the treating physician affects the likelihood of tumor-free resection margins for basal cell carcinoma: results from a multi-institutional retrospective study.

    PubMed

    Fleischer, A B; Feldman, S R; Barlow, J O; Zheng, B; Hahn, H B; Chuang, T Y; Draft, K S; Golitz, L E; Wu, E; Katz, A S; Maize, J C; Knapp, T; Leshin, B

    2001-02-01

    Basal cell carcinoma (BCC) is the most common cutaneous malignancy. Surgical experience and physician specialty may affect the outcome quality of surgical excision of BCC. We performed a multicenter retrospective study of BCC excisions submitted to the respective Departments of Pathology at 4 major university medical centers. Our outcome measure was presence of histologic evidence of tumor present in surgical margins of excision specimens (incomplete excision). Clinician experience was defined as the number of excisions that a clinician performed during the study interval. The analytic sample pool included 1459 tumors that met all inclusion and exclusion criteria. Analyses included univariate and multivariate techniques involving the entire sample and separate subsample analyses that excluded 2 outlying dermatologists. Tumor was present at the surgical margins in 243 (16.6%) of 1459 specimens. A patient's sex, age, and tumor size were not significantly related to the presence of tumor in the surgical margin. Physician experience did not demonstrate a significant difference either in the entire sample (P <.09) or in the subsample analysis (P >.30). Tumors of the head and neck were more likely to be incompletely excised than truncal tumors in all the analyses (P <.03). Compared with dermatologists, otolaryngologists (P <.02) and plastic surgeons (P <.008) were more likely to incompletely excise tumors; however, subsample analysis for plastic surgeons found only a trend toward significance (P <.10). Dermatologists and general surgeons did not differ in the likelihood of performing an incomplete excision (P >.4). The physician specialty may affect the quality of care in the surgical management of BCC.

  5. Psychometric properties of the Haitian Creole version of the Resilience Scale with a sample of adult survivors of the 2010 earthquake.

    PubMed

    Cénat, Jude Mary; Derivois, Daniel; Hébert, Martine; Eid, Patricia; Mouchenik, Yoram

    2015-11-01

    Resilience is defined as the ability of people to cope with disasters and significant life adversities. The present paper aims to investigate the underlying structure of the Haitian Creole version of the Resilience Scale and its psychometric properties using a sample of adult survivors of the 2010 earthquake. A parallel analysis was conducted to determine the number of factors to extract, and confirmatory factor analysis was performed using a sample of 1355 adult survivors of the 2010 earthquake, recruited from areas where the earthquake occurred, with an average age of 31.57 years (SD = 14.42). All participants completed the Creole version of the Resilience Scale (RS), the Impact of Event Scale-Revised (IES-R), the Beck Depression Inventory (BDI) and the Social Support Questionnaire (SQQ-6). To facilitate exploratory (EFA) and confirmatory factor analysis (CFA), the sample was divided into two subsamples (subsample 1 for EFA and subsample 2 for CFA). Parallel analysis and confirmatory factor analysis results showed a well-fitting 3-factor structure. Cronbach's α coefficients were .79, .74 and .72 for factors 1, 2 and 3, respectively, and the factors were correlated with each other. Construct validity of the Resilience Scale was supported by significant correlations with measures of depression and social support satisfaction, but no correlation was found with the posttraumatic stress disorder measure, except for factor 2. The results reveal a different factorial structure including 25 items of the RS. Nevertheless, the Haitian Creole version of the RS is a valid and reliable measure for assessing resilience among adults in Haiti. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Confirmatory factor analysis of the Frommelt Attitude Toward Care of the Dying Scale (FATCOD-B) among Italian medical students.

    PubMed

    Leombruni, Paolo; Loera, Barbara; Miniotti, Marco; Zizzi, Francesca; Castelli, Lorys; Torta, Riccardo

    2015-10-01

    A steady increase in the number of patients requiring end-of-life care has been observed during the last decades. The assessment of healthcare students' attitudes toward end-of-life care is an important step in their curriculum, as it provides information about their disposition to practice palliative medicine. The Frommelt Attitude Toward Care of the Dying Scale (FATCOD-B) was developed to detect such a disposition, but its psychometric properties are yet to be clearly defined. A convenience sample of 608 second-year medical students participated in our study in the 2012/2013 and 2013/2014 academic years. All participants completed the FATCOD-B. The sample was randomly divided into two subsamples. In the item analysis, reliability (Cronbach's α), internal consistency (item-total correlations), and an exploratory factor analysis (EFA) were conducted using the first subsample (n = 300). Using the second subsample (n = 308), confirmatory factor analysis (CFA) was performed using the robust ML method in the Lisrel program. Reliability for all items was 0.699. Item-total correlations, ranging from 0.03 to 0.39, were weak. EFA identified a two-dimensional orthogonal solution, explaining 20% of total variance. CFA upheld the two-dimensional model, but the loadings of the dimensions on their respective indicators were weak, and equal to zero for certain items. The findings of the present study suggest that the FATCOD-B measures a two-dimensional construct and that several items seem in need of revision. Future research oriented toward building a revised version of the scale should pay attention to item ambiguity and take particular care to distinguish among items that concern emotions and beliefs related to end-of-life care, as well as their subjects (e.g., the healthcare provider, the patient, his family).

  7. Local stellar kinematics from RAVE data—VIII. Effects of the Galactic disc perturbations on stellar orbits of red clump stars

    NASA Astrophysics Data System (ADS)

    Önal Taş, Ö.; Bilir, S.; Plevne, O.

    2018-02-01

    We aim to probe the dynamic structure of the extended Solar neighborhood by calculating the radial metallicity gradients from orbit properties, which are obtained for axisymmetric and non-axisymmetric potential models, of red clump (RC) stars selected from the RAdial Velocity Experiment's Fourth Data Release. Distances are obtained by assuming a single absolute magnitude value in near-infrared, i.e. M_{Ks}=-1.54±0.04 mag, for each RC star. Stellar orbit parameters are calculated by using the potential functions: (i) for the MWPotential2014 potential, (ii) for the same potential with perturbation functions of the Galactic bar and transient spiral arms. The stellar age is calculated with a method based on Bayesian statistics. The radial metallicity gradients are evaluated based on the maximum vertical distance (z_{max}) from the Galactic plane and the planar eccentricity (ep) of RC stars for both of the potential models. The largest radial metallicity gradient in the 0< z_{max} ≤0.5 kpc distance interval is -0.065±0.005 dex kpc^{-1} for a subsample with ep≤0.1, while the lowest value is -0.014±0.006 dex kpc^{-1} for the subsample with ep≤0.5. We find that at z_{max}>1 kpc, the radial metallicity gradients have zero or positive values and they do not depend on ep subsamples. There is a large radial metallicity gradient for thin disc, but no radial gradient found for thick disc. Moreover, the largest radial metallicity gradients are obtained where the outer Lindblad resonance region is effective. We claim that this apparent change in radial metallicity gradients in the thin disc is a result of orbital perturbation originating from the existing resonance regions.

  8. The VMC Survey. XXVII. Young Stellar Structures in the LMC’s Bar Star-forming Complex

    NASA Astrophysics Data System (ADS)

    Sun, Ning-Chen; de Grijs, Richard; Subramanian, Smitha; Bekki, Kenji; Bell, Cameron P. M.; Cioni, Maria-Rosa L.; Ivanov, Valentin D.; Marconi, Marcella; Oliveira, Joana M.; Piatti, Andrés E.; Ripepi, Vincenzo; Rubele, Stefano; Tatton, Ben L.; van Loon, Jacco Th.

    2017-11-01

    Star formation is a hierarchical process, forming young stellar structures of star clusters, associations, and complexes over a wide range of scales. The star-forming complex in the bar region of the Large Magellanic Cloud is investigated with upper main-sequence stars observed by the VISTA Survey of the Magellanic Clouds. The upper main-sequence stars exhibit highly nonuniform distributions. Young stellar structures inside the complex are identified from the stellar density map as density enhancements of different significance levels. We find that these structures are hierarchically organized such that larger, lower-density structures contain one or several smaller, higher-density ones. They follow power-law size and mass distributions, as well as a lognormal surface density distribution. All these results support a scenario of hierarchical star formation regulated by turbulence. The temporal evolution of young stellar structures is explored by using subsamples of upper main-sequence stars with different magnitude and age ranges. While the youngest subsample, with a median age of log(τ/yr) = 7.2, contains the most substructure, progressively older ones are less and less substructured. The oldest subsample, with a median age of log(τ/yr) = 8.0, is almost indistinguishable from a uniform distribution on spatial scales of 30-300 pc, suggesting that the young stellar structures are completely dispersed on a timescale of ˜100 Myr. These results are consistent with the characteristics of the 30 Doradus complex and the entire Large Magellanic Cloud, suggesting no significant environmental effects. We further point out that the fractal dimension may be method dependent for stellar samples with significant age spreads.

  9. Ochratoxin A in raisins and currants: basic extraction procedure used in two small marketing surveys of the occurrence and control of the heterogeneity of the toxins in samples.

    PubMed

    Möller, T E; Nyberg, M

    2003-11-01

    A basic extraction procedure for the analysis of ochratoxin A (OTA) in currants and raisins is described, as well as the occurrence of OTA and a control of the heterogeneity of the toxin in samples bought for two small marketing surveys in 1999/2000 and 2001/02. Most samples in the surveys were divided into two subsamples that were individually prepared as slurries and analysed separately. The limit of quantification for the method was estimated as 0.1 microg kg(-1), and recoveries of 85, 90 and 115% were achieved in recovery experiments at 10, 5 and 0.1 microg kg(-1), respectively. Of all 118 subsamples analysed in the surveys, 96 (84%) contained ochratoxin A at levels above the quantification limit and five samples (4%) contained more than the European Community legislative limit of 10 microg kg(-1). The OTA concentrations found in the first survey were in the range < 0.1-19.0 microg kg(-1) with a median concentration of 0.9 microg kg(-1). In the 2001/02 study, the range was < 0.1-34.6 microg kg(-1) with a median of 0.2 microg kg(-1). Large differences were often observed between individual subsamples of the same original sample, indicating a highly heterogeneous distribution of the toxin. Data from the repeatability test as well as recovery experiments on the same slurries showed that preparation of slurries as described here seemed to give a homogeneous and representative sample. The extraction with the basic sodium bicarbonate-methanol mixture used in the surveys gave similar or somewhat higher OTA values for some samples tested in a comparison with a weak phosphoric acid-water-methanol extraction mixture.
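    The two quality-control quantities in the abstract, spike recovery and duplicate-subsample agreement, are simple ratios. A minimal sketch (the helper names are ours; only the 85% recovery figure comes from the abstract):

```python
# Illustrative arithmetic only: spike recovery and duplicate-subsample
# agreement, with hypothetical concentration values.
def recovery_pct(measured, spiked):
    """Recovery of a spiked ochratoxin A amount, in percent."""
    return 100.0 * measured / spiked

def relative_pct_difference(a, b):
    """Relative percent difference (RPD) between duplicate subsamples."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# A spiked level of 10 ug/kg recovered at 8.5 ug/kg gives the 85% recovery
# reported in the abstract.
print(recovery_pct(8.5, 10.0))            # 85.0
# Two duplicate subsamples reading 0.9 and 1.5 ug/kg differ by 50% RPD --
# the kind of discrepancy that signals a heterogeneous toxin distribution.
print(relative_pct_difference(0.9, 1.5))  # 50.0
```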

  10. Microbial oxidation of arsenite in a subarctic environment: diversity of arsenite oxidase genes and identification of a psychrotolerant arsenite oxidiser

    USGS Publications Warehouse

    Osborne, Thomas H.; Jamieson, Heather E.; Hudson-Edwards, Karen A.; Nordstrom, D. Kirk; Walker, Stephen R.; Ward, Seamus A.; Santini, Joanne M.

    2010-01-01

    Background: Arsenic is toxic to most living cells. The two soluble inorganic forms of arsenic are arsenite (+3) and arsenate (+5), with arsenite the more toxic. Prokaryotic metabolism of arsenic has been reported in both thermal and moderate environments and has been shown to be involved in the redox cycling of arsenic. No arsenic metabolism (either dissimilatory arsenate reduction or arsenite oxidation) has ever been reported in cold environments (i.e. < 10°C). Results: Our study site is located 512 kilometres south of the Arctic Circle in the Northwest Territories, Canada, in an inactive gold mine that contains mine waste water with arsenic concentrations in excess of 50 mM. Several thousand tonnes of arsenic trioxide dust are stored in underground chambers, and microbial biofilms grow on the chamber walls below seepage points rich in arsenite-containing solutions. We compared the arsenite oxidisers in two subsamples (which differed in arsenite concentration) collected from one biofilm. 'Species' (sequence) richness did not differ between subsamples, but the relative importance of the three identifiable clades did. An arsenite-oxidising bacterium (designated GM1) was isolated and shown to oxidise arsenite in the early exponential growth phase and to grow over a broad range of temperatures (4-25°C). Its arsenite oxidase was constitutively expressed and functioned over a broad temperature range. Conclusions: The diversity of arsenite oxidisers does not differ significantly between two subsamples of a microbial biofilm that vary in arsenite concentration. GM1 is the first psychrotolerant arsenite oxidiser to be isolated with the ability to grow below 10°C. This ability to grow at low temperatures could be harnessed for arsenic bioremediation in moderate to cold climates.

  11. Cluster candidates around low-power radio galaxies at z ∼ 1-2 in COSMOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castignani, G.; Celotti, A.; De Zotti, G.

    2014-09-10

    We search for high-redshift (z ∼ 1-2) galaxy clusters using low-power radio galaxies (FR Is) as beacons and our newly developed Poisson probability method based on photometric redshift information and galaxy number counts. We use a sample of 32 FR Is within the Cosmic Evolution Survey (COSMOS) field from the Chiaberge et al. catalog. We derive a reliable subsample of 21 bona fide low-luminosity radio galaxies (LLRGs) and a subsample of 11 high-luminosity radio galaxies (HLRGs) on the basis of photometric redshift information and NRAO VLA Sky Survey radio fluxes. The LLRGs are selected to have 1.4 GHz rest-frame luminosities lower than the fiducial FR I/FR II divide. This also allows us to estimate the comoving space density of sources with L_1.4 ≅ 10^32.3 erg s^-1 Hz^-1 at z ≅ 1.1, which strengthens the case for a strong cosmological evolution of these sources. In the fields of the LLRGs and HLRGs we find evidence that 14 and 8 of them, respectively, reside in rich groups or galaxy clusters. Thus, overdensities are found around ∼70% of the FR Is, independently of the considered subsample. This rate agrees with the fraction found for low-redshift FR Is and is significantly higher than that for FR IIs at all redshifts. Although our method is primarily introduced for the COSMOS survey, it may be applied to both present and future wide-field surveys such as the Sloan Digital Sky Survey Stripe 82, LSST, and Euclid. Furthermore, cluster candidates found with our method are excellent targets for next-generation space telescopes such as the James Webb Space Telescope.
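    The core of a Poisson-probability overdensity test of the kind described can be sketched briefly. This is our illustration of the general idea, not the authors' actual method: the background mean and observed count are invented.

```python
# A minimal Poisson overdensity test: given a mean background galaxy count
# (e.g. in a photo-z slice around a radio galaxy), how unlikely is the
# observed count? Both numbers below are hypothetical.
from scipy.stats import poisson

background_mean = 12.0   # expected galaxy count in the aperture (assumed)
observed = 25            # galaxies actually counted near the FR I (assumed)

# Probability of observing at least `observed` galaxies by chance alone;
# a small value flags a cluster candidate.
p_chance = poisson.sf(observed - 1, background_mean)
print(f"P(N >= {observed} | lambda = {background_mean}) = {p_chance:.2e}")
```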

  12. High-Precision Image Aided Inertial Navigation with Known Features: Observability Analysis and Performance Evaluation

    PubMed Central

    Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun

    2014-01-01

    A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSSs) when satellite-based navigation systems are unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision position (centimeter-level) and attitude (half-degree-level) integrated solutions can be achieved in a global reference frame. PMID:25330046
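    The distinguishing step of an iterated EKF is relinearizing the measurement model about each refined estimate rather than only about the prior. A generic sketch of that update (the 2-D state, range measurement model, and noise values are ours, not the paper's):

```python
# Generic iterated EKF (IEKF) measurement update, shown on a toy problem:
# estimating a 2-D position from a single nonlinear range measurement.
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """Iterated EKF update: relinearize h() about the refined estimate."""
    x_i = x.copy()
    for _ in range(n_iter):
        H = H_jac(x_i)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # Standard IEKF iterate: innovation evaluated at x_i, corrected
        # for the shift of the linearization point away from the prior x.
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_i, P_new

# Toy measurement model: range from the origin.
h = lambda x: np.array([np.hypot(x[0], x[1])])
H_jac = lambda x: np.array([[x[0] / np.hypot(x[0], x[1]),
                             x[1] / np.hypot(x[0], x[1])]])

x0 = np.array([3.5, 4.5])            # prior state (range ~5.70)
P0 = np.eye(2) * 0.5                 # prior covariance
z = np.array([5.0])                  # measured range
x1, P1 = iekf_update(x0, P0, z, h, H_jac, R=np.array([[0.01]]))
print(x1)                            # posterior range pulled toward 5.0
```

    In the paper's tightly-coupled setting the measurement model would instead map the INS state to image feature observations, but the iteration structure is the same.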

  13. Precision laser range finder system design for Advanced Technology Laboratory applications

    NASA Technical Reports Server (NTRS)

    Golden, K. E.; Kohn, R. L.; Seib, D. H.

    1974-01-01

    Preliminary system design of a pulsed precision ruby laser rangefinder system is presented, which has a potential range resolution of 0.4 cm when atmospheric effects are negligible. The system proposed for flight testing on the Advanced Technology Laboratory (ATL) consists of a mode-locked ruby laser transmitter, coarse and vernier rangefinder receivers, an optical beacon retroreflector tracking system, and a network of ATL tracking retroreflectors. Performance calculations indicate that spacecraft-to-ground ranging accuracies of 1 to 2 cm are possible.
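    The quoted range resolution maps directly onto a round-trip timing requirement, a quick check we can do from first principles (only the 0.4 cm figure comes from the abstract):

```python
# Back-of-the-envelope check: a pulsed rangefinder measures range as
# R = c * t / 2, so a range resolution dR needs timing resolution
# dt = 2 * dR / c (the pulse travels out and back).
C = 299_792_458.0  # speed of light, m/s

def timing_resolution_s(range_resolution_m):
    return 2.0 * range_resolution_m / C

# The quoted 0.4 cm range resolution implies ~27-picosecond timing.
dt = timing_resolution_s(0.004)
print(f"{dt * 1e12:.1f} ps")
```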

  14. Helicopter precision approach capability using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Kaufmann, David N.

    1992-01-01

    The period between 1 July and 31 December 1992 was spent developing a research plan, as well as a navigation system document and flight test plan, to investigate helicopter precision approach capability using the Global Positioning System (GPS). In addition, all hardware and software required for the research was acquired, developed, installed, and verified on both the test aircraft and the ground-based reference station.

  15. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
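    Row-wise digital binning as described above is a simple post-processing step: sum groups of adjacent pixel rows to trade spatial resolution for signal-to-noise ratio. A sketch with synthetic data (only the bin factor of 8 rows comes from the abstract; the image size is an assumption):

```python
# Sketch of row-wise digital binning: a post-processing analogue of
# on-sensor binning, applied to a synthetic noisy frame.
import numpy as np

rng = np.random.default_rng(1)
image = rng.poisson(lam=5.0, size=(512, 256)).astype(float)  # fake frame

def bin_rows(img, factor):
    """Sum every `factor` adjacent rows; drops any leftover rows."""
    h, w = img.shape
    h_trim = (h // factor) * factor
    return img[:h_trim].reshape(h_trim // factor, factor, w).sum(axis=1)

binned = bin_rows(image, 8)          # 8 rows ~ thickness of tagged region
print(image.shape, "->", binned.shape)
```

    Summing 8 rows of roughly independent shot noise raises the signal by 8x while the noise grows only by about sqrt(8), which is why binning helped most on the low-SNR un-intensified cameras.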

  16. How do precision medicine and system biology response to human body's complex adaptability?

    PubMed

    Yuan, Bing

    2016-12-01

    In the field of life sciences, although systems biology and "precision medicine" introduce complex scientific methods and techniques, both remain based, as a whole, on the "analysis-reconstruction" concept of reductionist theory. The adaptability of a complex system increases the uncertainty of its behaviour as well as the difficulty of precise identification and control, and it also puts systems biology research in difficulty. To grasp the behaviour and characteristics of the organism fundamentally, systems biology has to abandon the "analysis-reconstruction" concept. In accordance with the guidelines of complexity science, systems biology should build models of the organism at the holistic level, much as Chinese medicine does in dealing with the human body and disease. When we study the living body at the holistic level, we find that the adaptability of a complex system is not an obstacle that increases the difficulty of problem solving; rather, it is an exceptional "right-hand man" that helps us deal with the complexity of life more effectively.

  17. Drying method has no substantial effect on δ(15)N or δ(13)C values of muscle tissue from teleost fishes.

    PubMed

    Bessey, Cindy; Vanderklift, Mathew A

    2014-02-15

    Stable isotope analysis (SIA) is a powerful tool in many fields of research that enables quantitative comparisons among studies, if similar methods have been used. The goal of this study was to determine if three different drying methods commonly used to prepare samples for SIA yielded different δ(15)N and δ(13)C values. Muscle subsamples from 10 individuals each of three teleost species were dried using three methods: (i) oven, (ii) food dehydrator, and (iii) freeze-dryer. All subsamples were analysed for δ(15)N and δ(13)C values, and nitrogen and carbon content, using a continuous flow system consisting of a Delta V Plus mass spectrometer and a Flash 1112 elemental analyser via a Conflo IV universal interface. The δ(13)C values were normalized to constant lipid content using the equations proposed by McConnaughey and McRoy. Although statistically significant, the differences in δ(15)N values between the drying methods were small (mean differences ≤0.21‰). The differences in δ(13)C values between the drying methods were not statistically significant, and normalizing the δ(13)C values to constant lipid content reduced the mean differences for all treatments to ≤0.65‰. A statistically significant difference of ~2% in C content existed between tissues dried in a food dehydrator and those dried in a freeze-dryer for two fish species. There was no significant effect of fish size on the differences between methods. No substantial effect of drying method was found on the δ(15)N or δ(13)C values of teleost muscle tissue. Copyright © 2013 John Wiley & Sons, Ltd.
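    The lipid normalization attributed above to McConnaughey and McRoy adjusts δ(13)C using the tissue carbon-to-nitrogen ratio. The sketch below follows the form of that normalization as it is commonly applied in stable-isotope studies, with constants D = 6.00 and I = -0.207; treat the exact coefficients as an assumption and verify them against the original 1979 paper before real use.

```python
# Hedged sketch of C:N-based lipid normalization of delta13C values.
def lipid_content(c_to_n):
    """Proportion of lipid (L) estimated from the carbon-to-nitrogen ratio."""
    return 93.0 / (1.0 + (0.246 * c_to_n - 0.775) ** -1)

def normalize_d13c(d13c, c_to_n, D=6.00, I=-0.207):
    """Lipid-normalized delta13C (per mil)."""
    L = lipid_content(c_to_n)
    return d13c + D * (I + 3.90 / (1.0 + 287.0 / L))

# A lipid-rich tissue (high C:N) receives a larger positive correction;
# lean tissue (C:N near 4) is left almost unchanged.
print(normalize_d13c(-18.0, 4.0))
print(normalize_d13c(-18.0, 8.0))
```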

  18. Prevalence and Mental Health Correlates of Insomnia in First-Encounter Veterans with and without Military Sexual Trauma.

    PubMed

    Jenkins, Melissa M; Colvonen, Peter J; Norman, Sonya B; Afari, Niloofar; Allard, Carolyn B; Drummond, Sean P A

    2015-10-01

    There is limited information about the prevalence of insomnia in general populations of veterans of recent wars in Iraq and Afghanistan, and no studies have examined insomnia in veterans with military sexual trauma (MST). We assess the prevalence of insomnia, identify types of services sought by veterans with insomnia, and examine correlates of insomnia in veterans with and without MST, in a cross-sectional study of first-encounter veterans registering to establish care at the Veterans Affairs San Diego Healthcare System. Nine hundred seventeen veterans completed questionnaires assessing insomnia, MST, service needs, traumatic brain injury, resilience, and symptoms of depression, posttraumatic stress disorder (PTSD), pain, alcohol misuse, and hypomania. 53.1% of veterans without MST and 60.8% of veterans with MST had clinically significant insomnia symptoms, with the MST subsample reporting more severe symptoms, P < 0.05. Insomnia was more prevalent than depression, hypomania, PTSD, and substance misuse. Veterans with insomnia were more likely to seek care for physical health problems and primary care versus mental health concerns, P < 0.001. For the veteran sample without MST, age, combat service, traumatic brain injury, pain, and depression were associated with worse insomnia, P < 0.001. For the MST subsample, employment status, pain, and depression were associated with worse insomnia, P < 0.001. Study findings indicate a higher rate of insomnia in veterans compared to what has been found in the general population. Insomnia is more prevalent, and more severe, in veterans with military sexual trauma. Routine insomnia assessments and referrals to providers who can provide evidence-based treatment are crucial. © 2015 Associated Professional Sleep Societies, LLC.

  19. Manifestations, acquisition and diagnostic categories of dental fear in a self-referred population.

    PubMed

    Moore, R; Brødsgaard, I; Birn, H

    1991-01-01

    This study aimed to clarify how manifestations and acquisition relate to diagnostic categories of dental fear in a population of self-referred dental fear patients, since diagnostic criteria specifically related to dental fear have not been validated. DSM III-R diagnostic criteria for phobias were compared with four existing dental fear diagnostic categories, referred to as the Seattle system. Subjects were 208 persons with dental fear who were interviewed by telephone, of whom a subsample of 155 responded to a mailed Dental Anxiety Scale (DAS), State-Trait Anxiety Inventory and a modified FSS-II Geer Fear Scale (GFS). Personal interviews and a Dental Beliefs Scale of perceived trust and social interaction with dentists were also used to evaluate a subsample of 80 patients selected by sex and high dental fear. Results showed that the majority of the 80 patients (66%) suffered from social embarrassment about their dental fear problem and their inability to do something about it. The most commonly reported cause of their fear (84%) was traumatic dental experiences, especially in childhood (70%). A minority of patients (16%) could not isolate traumatic experiences and had a history of general fearfulness or anxiety. Analysis of GFS data for the 155 subjects showed that fear of snakes and injuries were highest among women; heights and injections among men. Fear of blood was rarely reported. Spearman correlations between GFS individual items and DAS scores indicated functional independence between dental fear and common fears such as blood, injections and enclosures in most cases. Only in specific types of dental fear did these results support Rachman and Lopatka's contention that fears summate.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. JPSS-1 VIIRS version 2 at-launch relative spectral response characterization and performance

    NASA Astrophysics Data System (ADS)

    Moeller, Chris; Schwarting, Tom; McIntire, Jeff; Moyer, David I.; Zeng, Jinan

    2016-09-01

    The relative spectral response (RSR) characterization of the JPSS-1 VIIRS spectral bands has achieved "at launch" status in the VIIRS Data Analysis Working Group February 2016 Version 2 RSR release. The Version 2 release improves upon the June 2015 Version 1 release by including December 2014 NIST T-SIRCUS spectral measurements of VIIRS VisNIR bands in the analysis and by correcting for CO2 influence on the band M13 RSR. The T-SIRCUS-based characterization is merged with the summer 2014 SpMA-based characterization of the VisNIR bands (the Version 1 release) to yield a "fused" RSR for these bands, combining the strengths of the T-SIRCUS and SpMA measurement systems. The M13 RSR is updated by applying a model-based correction to mitigate CO2 attenuation of the SpMA source signal that occurred during M13 spectral measurements. The Version 2 release carries forward the Version 1 RSR for those bands that were not updated (M8-M12, M14-M16A/B, I3-I5, DNBMGS). The Version 2 release includes band-average (over all detectors and subsamples) RSR plus supporting RSR for each detector and subsample. The at-launch band-average RSR have been used to populate Look-Up Tables supporting the sensor data record and environmental data record at-launch science products. Spectral performance metrics show that the JPSS-1 VIIRS RSR are compliant with specifications, with a few minor exceptions. The Version 2 release, which replaces the Version 1 release, is currently available on the password-protected NASA JPSS-1 eRooms under EAR99 control.
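    Conceptually, a "band average over all detectors and subsamples" is an average of per-detector, per-subsample response curves followed by renormalization. A sketch with synthetic curves (the array shapes, wavelength grid, and peak normalization are our assumptions, not VIIRS specifics):

```python
# Illustrative only: band-average relative spectral response over detectors
# and subsamples, using synthetic Gaussian-like response curves.
import numpy as np

rng = np.random.default_rng(2)
n_detectors, n_subsamples, n_wavelengths = 16, 2, 200

wl = np.linspace(0.60, 0.70, n_wavelengths)   # wavelength grid (assumed, um)
# Slightly jittered band centers per detector/subsample.
center = 0.65 + rng.normal(0, 1e-4, (n_detectors, n_subsamples, 1))
rsr = np.exp(-0.5 * ((wl - center) / 0.01) ** 2)   # shape (16, 2, 200)

# Average over the detector and subsample axes, then renormalize to peak 1.
band_avg = rsr.mean(axis=(0, 1))
band_avg /= band_avg.max()
print(band_avg.shape, band_avg.max())
```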
