Sample records for priori defined reference

  1. A set-theoretic model reference adaptive control architecture for disturbance rejection and uncertainty suppression with strict performance guarantees

    NASA Astrophysics Data System (ADS)

    Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.

    2018-05-01

    Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions. The key feature of this framework is that it allows the system error between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to remain less than an a priori, user-defined worst-case performance bound, and hence it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.
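
    The mechanism can be illustrated with a generic barrier-type ("restricted potential") function; the exact construction used in the paper may differ, so the following is only a sketch of the idea, with notation chosen here (e denotes the system error and ε the a priori, user-defined worst-case bound):

      % Illustrative restricted potential (barrier) function, notation ours:
      \phi\bigl(\lVert e\rVert\bigr) \;=\; \frac{\lVert e\rVert^{2}}{\varepsilon^{2}-\lVert e\rVert^{2}},
      \qquad 0 \le \lVert e\rVert < \varepsilon .
      % \phi grows without bound as \lVert e\rVert \to \varepsilon, so an adaptive law whose
      % learning gain scales with d\phi/d\lVert e\rVert^{2} reacts ever more strongly as the
      % error approaches the bound, keeping \lVert e(t)\rVert < \varepsilon for all t \ge 0.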

  2. An Empirical Study of the Weigl-Goldstein-Scheerer Color-Form Test According to a Developmental Frame of Reference.

    ERIC Educational Resources Information Center

    Strauss, Helen; Lewin, Isaac

    1982-01-01

    Analyzed the Weigl-Goldstein-Scheerer Color-Form Test using a sample of Danish children. Distinguished three dimensions: configuration of sorting, verbalization of the sorting principle, and the flexibility of switching sorting principles. The three dimensions proved to constitute the a priori defined gradients. Results indicated a…

  3. The Mediterranean Diet: its definition and evaluation of a priori dietary indexes in primary cardiovascular prevention.

    PubMed

    D'Alessandro, Annunziata; De Pergola, Giovanni

    2018-01-18

    We have analysed the definition of the Mediterranean Diet in 28 studies included in six meta-analyses evaluating the relation between the Mediterranean Diet and primary prevention of cardiovascular disease. Some typical foods of this dietary pattern, such as whole cereals, olive oil and red wine, were taken into account in only a few a priori indexes, and the dietary pattern defined as Mediterranean showed many differences among the studies and compared with the traditional Mediterranean Diet of the early 1960s. Altogether, the analysed studies show a protective effect of the Mediterranean Diet against cardiovascular disease but different effects on specific conditions such as cerebrovascular disease and coronary heart disease. These different effects might depend on the definition of the Mediterranean Diet and on the indexes used to evaluate adherence to it. To compare the effects of the Mediterranean Diet against cardiovascular disease, coronary heart disease and stroke, a single reference model of the Mediterranean Diet should be established, and it might be represented by the Modern Mediterranean Diet Pyramid. The a priori index used to evaluate adherence to the Mediterranean Diet might be the Mediterranean-Style Dietary Pattern Score, which has some advantages in comparison with the other a priori indexes.

  4. A Methodology to Separate and Analyze a Seismic Wide Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach for defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities, we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile, we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem and subsequent resolution of structural heterogeneities through a divergence analysis is a new and simple way of defining a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo-based approach in our case study is the obtained knowledge of velocity-depth distributions. Certainly, the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity-depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of results.
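
    As a rough illustration of the Monte Carlo workflow described above (our simplification, not the authors' implementation; the forward model, observed travel times and velocity bounds are hypothetical placeholders), one can draw velocity-depth models from an a priori box, keep those whose travel-time misfit is acceptable, and summarise the accepted ensemble by its a posteriori mean and covariance:

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_prior(n_layers, v_lo=1.5, v_hi=8.5):
          """Draw one layered velocity model (km/s) uniformly from an a priori box."""
          return rng.uniform(v_lo, v_hi, size=n_layers)

      def misfit(model, observed, forward):
          """RMS travel-time misfit between forward-modelled and observed data."""
          return np.sqrt(np.mean((forward(model) - observed) ** 2))

      def monte_carlo_posterior(observed, forward, n_layers, n_draws=100_000, tol=0.05):
          """Keep prior draws whose misfit is below tol; return posterior mean and covariance."""
          accepted = [m for m in (sample_prior(n_layers) for _ in range(n_draws))
                      if misfit(m, observed, forward) < tol]
          if not accepted:
              raise ValueError("no model accepted; widen the a priori box or relax tol")
          ensemble = np.array(accepted)
          return ensemble.mean(axis=0), np.cov(ensemble.T)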

  5. The inverse problem of refraction travel times, part I: Types of Geophysical Nonuniqueness through Minimization

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.

    2005-01-01

    In a set of two papers, we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems, different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.

  6. Solution of underdetermined systems of equations with gridded a priori constraints.

    PubMed

    Stiros, Stathis C; Saltogianni, Vasso

    2014-01-01

    The TOPINV, Topological Inversion algorithm (or TGS, Topological Grid Search) initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to an R^n grid containing an approximation of the real solution. The TOPINV algorithm does not focus on point-solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
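
    A minimal sketch of the grid-search idea described above (our simplification, not the published TOPINV/TGS implementation): evaluate every node of an R^n grid built from the a priori bounds, keep the nodes at which all observation equations are satisfied within a tolerance, and take the centre of gravity of the retained nodes as the solution:

      import itertools
      import numpy as np

      def topological_grid_search(equations, bounds, steps, tol):
          """
          equations : list of callables f_i(x) whose target value is 0
          bounds    : list of (low, high) a priori intervals, one per unknown
          steps     : number of grid nodes per axis
          tol       : misfit tolerance defining the acceptable set
          Returns the centre of gravity of all grid nodes satisfying every equation.
          """
          axes = [np.linspace(lo, hi, steps) for lo, hi in bounds]
          kept = [np.array(node) for node in itertools.product(*axes)
                  if all(abs(f(np.array(node))) <= tol for f in equations)]
          if not kept:
              raise ValueError("no grid node satisfies the constraints; widen tol or bounds")
          return np.mean(kept, axis=0)

      # toy usage: two unknowns, three (redundant) non-linear observation equations
      eqs = [lambda x: x[0] ** 2 + x[1] ** 2 - 25.0,
             lambda x: x[0] - 2.0 * x[1],
             lambda x: np.sin(x[0]) - np.sin(2.0 * x[1])]
      print(topological_grid_search(eqs, bounds=[(0, 10), (0, 10)], steps=201, tol=0.5))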

  7. Semi-automatic segmentation of myocardium at risk in T2-weighted cardiovascular magnetic resonance.

    PubMed

    Sjögren, Jane; Ubachs, Joey F A; Engblom, Henrik; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2012-01-31

    T2-weighted cardiovascular magnetic resonance (CMR) has been shown to be a promising technique for determination of ischemic myocardium, referred to as myocardium at risk (MaR), after an acute coronary event. Quantification of MaR in T2-weighted CMR has been proposed to be performed by manual delineation or the threshold methods of two standard deviations from remote (2SD), full width half maximum intensity (FWHM) or Otsu. However, manual delineation is subjective and threshold methods have inherent limitations related to threshold definition and lack of a priori information about cardiac anatomy and physiology. Therefore, the aim of this study was to develop an automatic segmentation algorithm for quantification of MaR using anatomical a priori information. Forty-seven patients with first-time acute ST-elevation myocardial infarction underwent T2-weighted CMR within 1 week after admission. Endocardial and epicardial borders of the left ventricle, as well as the hyperenhanced MaR regions, were manually delineated by experienced observers and used as the reference method. A new automatic segmentation algorithm, called Segment MaR, defines the MaR region as the continuous region most likely to be MaR, by estimating the intensities of normal myocardium and MaR with an expectation maximization algorithm and restricting the MaR region by an a priori model of the maximal extent for the user-defined culprit artery. The segmentation by Segment MaR was compared against interobserver variability of manual delineation and the threshold methods of 2SD, FWHM and Otsu. MaR was 32.9 ± 10.9% of left ventricular mass (LVM) when assessed by the reference observer and 31.0 ± 8.8% of LVM assessed by Segment MaR. The bias and correlation were -1.9 ± 6.4% of LVM, R = 0.81 (p < 0.001) for Segment MaR, -2.3 ± 4.9%, R = 0.91 (p < 0.001) for interobserver variability of manual delineation, -7.7 ± 11.4%, R = 0.38 (p = 0.008) for 2SD, -21.0 ± 9.9%, R = 0.41 (p = 0.004) for FWHM, and 5.3 ± 9.6%, R = 0.47 (p < 0.001) for Otsu. There is a good agreement between automatic Segment MaR and manually assessed MaR in T2-weighted CMR. Thus, the proposed algorithm seems to be a promising, objective method for standardized MaR quantification in T2-weighted CMR.
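
    The intensity-estimation step can be illustrated with a generic two-component Gaussian-mixture EM over myocardial pixel intensities (a simplified sketch of that step only; it omits the a priori culprit-artery model and the continuity constraint that Segment MaR adds):

      import numpy as np

      def em_two_class(intensities, n_iter=100):
          """Estimate normal-myocardium and MaR intensity distributions with EM."""
          x = np.asarray(intensities, dtype=float)
          # initialise the two classes from the lower and upper quartiles of the data
          lo, hi = np.percentile(x, [25, 75])
          mu = np.array([lo, hi])
          sigma = np.array([x.std(), x.std()]) + 1e-6
          pi = np.array([0.5, 0.5])
          for _ in range(n_iter):
              # E-step: posterior probability of each pixel belonging to each class
              pdf = (pi / (sigma * np.sqrt(2 * np.pi))
                     * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
              resp = pdf / pdf.sum(axis=1, keepdims=True)
              # M-step: update mixture weights, means and standard deviations
              nk = resp.sum(axis=0)
              pi = nk / nk.sum()
              mu = (resp * x[:, None]).sum(axis=0) / nk
              sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
          return pi, mu, sigma, resp[:, 1]  # resp[:, 1] ~ per-pixel probability of the brighter (MaR-like) class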

  8. Comparing dietary patterns derived by two methods and their associations with obesity in Polish girls aged 13-21 years: the cross-sectional GEBaHealth study.

    PubMed

    Wadolowska, Lidia; Kowalkowska, Joanna; Czarnocinska, Jolanta; Jezewska-Zychowicz, Marzena; Babicz-Zielinska, Ewa

    2017-05-01

    To compare dietary patterns (DPs) derived by two methods and to assess them as a factor of obesity in girls aged 13-21 years. Data from a cross-sectional study conducted among a representative sample of Polish females (n = 1,107) aged 13-21 years were used. Subjects were randomly selected. Dietary information was collected using three short, validated food frequency questionnaires (FFQs) regarding fibre intake, fat intake and overall food intake variety. DPs were identified by two methods: an a priori approach (a priori DPs) and cluster analysis (data-driven DPs). The association between obesity and DPs and three single dietary characteristics was examined using multiple logistic regression analysis. Four data-driven DPs were obtained: 'Low-fat-Low-fibre-Low-varied' (21.2%), 'Low-fibre' (29.1%), 'Low-fat' (25.0%) and 'High-fat-Varied' (24.7%). Three a priori DPs were pre-defined: 'Non-healthy' (16.6%), 'Neither-pro-healthy-nor-non-healthy' (79.1%) and 'Pro-healthy' (4.3%). Girls with 'Low-fibre' DP were less likely to have central obesity (adjusted odds ratio (OR) = 0.36; 95% confidence interval (CI): 0.17, 0.75) than girls with 'Low-fat-Low-fibre-Low-varied' DP (reference group, OR = 1.00). No significant associations were found between a priori DPs and overweight including obesity or central obesity. The majority of girls with 'Non-healthy' DP were also classified as 'Low-fibre' DP in the total sample, in girls with overweight including obesity and in girls with central obesity (81.7%, 80.6% and 87.3%, respectively), while most girls with 'Pro-healthy' DP were classified as 'Low-fat' DP (67.8%, 87.6% and 52.1%, respectively). We found that the a priori approach as well as cluster analysis can be used to derive opposite health-oriented DPs in Polish females. Both methods have provided disappointing outcomes in explaining the association between obesity and DPs. The cluster analysis, in comparison with the a priori approach, was more useful for finding any relationship between DPs and central obesity. Our study highlighted the importance of the method used to derive DPs in exploring associations between diet and obesity.

  9. A Self-Contained Mapping Closure Approximation for Scalar Mixing

    DTIC Science & Technology

    2003-12-01

    hierarchy in statistical mechanics (Balescu 1975), where the correlations are specified a priori and then fixed. The MCA approach does not invoke...and thus the scalar fields. Unlike usual treatments in the BBGKY hierarchy (Balescu 1975), where the representations are specified a priori, the...discussions. This work was supported by the Special Funds for Major Basic Research Project G. 2000077305, P. R. China. REFERENCES BALESCU, R. 1975

  10. 'Aussie normals': an a priori study to develop clinical chemistry reference intervals in a healthy Australian population.

    PubMed

    Koerbin, G; Cavanaugh, J A; Potter, J M; Abhayaratna, W P; West, N P; Glasgow, N; Hawkins, C; Armbruster, D; Oakman, C; Hickman, P E

    2015-02-01

    Development of reference intervals is difficult, time-consuming, expensive and beyond the scope of most laboratories. The Aussie Normals study is a direct a priori study to determine reference intervals in healthy Australian adults. All volunteers completed a health and lifestyle questionnaire and exclusion was based on conditions such as pregnancy, diabetes, renal or cardiovascular disease. Up to 91 biochemical analyses were undertaken on a variety of analytical platforms using serum samples collected from 1856 volunteers. We report on our findings for 40 of these analytes and two calculated parameters performed on the Abbott ARCHITECT ci8200/ci16200 analysers. Not all samples were analysed for all assays due to volume requirements or assay/instrument availability. Results with elevated interference indices and those deemed unsuitable after clinical evaluation were removed from the database. Reference intervals were partitioned based on the method of Harris and Boyd into three scenarios: combined gender; males and females; and age and gender. We have performed a detailed reference interval study on a healthy Australian population considering the effects of sex, age and body mass. These reference intervals may be adapted to other manufacturers' analytical methods using method transference.

  11. Promoting A-Priori Interoperability of HLA-Based Simulations in the Space Domain: The SISO Space Reference FOM Initiative

    NASA Technical Reports Server (NTRS)

    Moller, Bjorn; Garro, Alfredo; Falcone, Alberto; Crues, Edwin Z.; Dexter, Daniel E.

    2016-01-01

    Distributed and Real-Time Simulation plays a key role in the Space domain, being exploited for missions and systems analysis and engineering as well as for crew training and operational support. One of the most popular standards is the IEEE 1516-2010 Standard for Modeling and Simulation (M&S) High Level Architecture (HLA). HLA supports the implementation of distributed simulations (called Federations) in which a set of simulation entities (called Federates) can interact using a Run-Time Infrastructure (RTI). In a given Federation, a Federate can publish and/or subscribe to objects and interactions on the RTI only in accordance with their structure as defined in a FOM (Federation Object Model). Currently, the Space domain is characterized by a set of incompatible FOMs that, although they meet the specific needs of different organizations and projects, increase the long-term cost of interoperability. In this context, the availability of a reference FOM for the Space domain will enable the development of interoperable HLA-based simulators for related joint projects and collaborations among worldwide organizations involved in the Space domain (e.g. NASA, ESA, Roscosmos, and JAXA). The paper presents a first set of results achieved by a SISO standardization effort that aims at providing a Space Reference FOM for international collaboration on Space systems simulations.

  12. Conventional Principles in Science: On the foundations and development of the relativized a priori

    NASA Astrophysics Data System (ADS)

    Ivanova, Milena; Farr, Matt

    2015-11-01

    The present volume consists of a collection of papers originally presented at the conference Conventional Principles in Science, held at the University of Bristol, August 2011, which featured contributions on the history and contemporary development of the notion of 'relativized a priori' principles in science, from Henri Poincaré's conventionalism to Michael Friedman's contemporary defence of the relativized a priori. In Science and Hypothesis, Poincaré assessed the problematic epistemic status of Euclidean geometry and Newton's laws of motion, famously arguing that each has the status of 'convention' in that their justification is neither analytic nor empirical in nature. In The Theory of Relativity and A Priori Knowledge, Hans Reichenbach, in light of the general theory of relativity, proposed an updated notion of the Kantian synthetic a priori to account for the dynamic inter-theoretic status of geometry and other non-empirical physical principles. Reichenbach noted that one may reject the 'necessarily true' aspect of the synthetic a priori whilst preserving the feature of being constitutive of the object of knowledge. Such constitutive principles are theory-relative, as illustrated by the privileged role of non-Euclidean geometry in general relativity theory. This idea of relativized a priori principles in spacetime physics has been analysed and developed at great length in the modern literature in the work of Michael Friedman, in particular the roles played by the light postulate and the equivalence principle - in special and general relativity respectively - in defining the central terms of their respective theories and connecting the abstract mathematical formalism of the theories with their empirical content. The papers in this volume guide the reader through the historical development of conventional and constitutive principles in science, from the foundational work of Poincaré, Reichenbach and others, to contemporary issues and applications of the relativized a priori concerning the notion of measurement, physical possibility, and the interpretation of scientific theories.

  13. Operator for object recognition and scene analysis by estimation of set occupancy with noisy and incomplete data sets

    NASA Astrophysics Data System (ADS)

    Rees, S. J.; Jones, Bryan F.

    1992-11-01

    Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.
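
    The measure sketched in the abstract can be illustrated in set terms as follows (a toy rendering, not the authors' operator): for an observed feature set F and a reference model R, the minimum additions are R \ F, and the occupancy is the fraction of R already present:

      def partial_occupancy(observed, reference):
          """Return (occupancy in [0, 1], minimum additions needed for full occupancy)."""
          observed, reference = set(observed), set(reference)
          missing = reference - observed          # features that would have to be added
          occupancy = 1.0 - len(missing) / len(reference)
          return occupancy, missing

      # toy usage: an object model with five features, two of them obscured in the image
      occ, needed = partial_occupancy({"f1", "f3", "f5"}, {"f1", "f2", "f3", "f4", "f5"})
      print(occ, needed)   # 0.6 and {'f2', 'f4'} -> claim recognition with quantified uncertainty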

  14. Communication of scientific uncertainty: international case studies on the development of folate and vitamin D Dietary Reference Values.

    PubMed

    Brown, Kerry A; de Wit, Liesbeth; Timotijevic, Lada; Sonne, Anne-Mette; Lähteenmäki, Liisa; Brito Garcia, Noé; Jeruszka-Bielak, Marta; Sicińska, Ewa; Moore, Alana N; Lawrence, Mark; Raats, Monique M

    2015-06-01

    Transparent evidence-based decision making has been promoted worldwide to engender trust in science and policy making. Yet, little attention has been given to transparency implementation. The degree of transparency (focused on how uncertain evidence was handled) during the development of folate and vitamin D Dietary Reference Values was explored in three a priori defined areas: (i) value request; (ii) evidence evaluation; and (iii) final values. Qualitative case studies (semi-structured interviews and desk research). A common protocol was used for data collection, interview thematic analysis and reporting. Results were coordinated via cross-case synthesis. Australia and New Zealand, Netherlands, Nordic countries, Poland, Spain and UK. Twenty-one interviews were conducted in six case studies. Transparency of process was not universally observed across countries or areas of the recommendation setting process. Transparency practices were most commonly seen surrounding the request to develop reference values (e.g. access to risk manager/assessor problem formulation discussions) and evidence evaluation (e.g. disclosure of risk assessor data sourcing/evaluation protocols). Fewer transparency practices were observed to assist with handling uncertainty in the evidence base during the development of quantitative reference values. Implementation of transparency policies may be limited by a lack of dedicated resources and best practice procedures, particularly to assist with the latter stages of reference value development. Challenges remain regarding the best practice for transparently communicating the influence of uncertain evidence on the final reference values. Resolving this issue may assist the evolution of nutrition risk assessment and better inform the recommendation setting process.

  15. SOCIODEMOGRAPHIC DOMAINS OF DEPRIVATION AND PRETERM BIRTH

    EPA Science Inventory

    Area-level deprivation is consistently associated with poor health outcomes. Using US census data (2000) and principal components analysis, a priori defined socio-demographic indices of poverty, housing, residential stability, occupation, employment and education were created fo...

  16. IGS preparations for the next reprocessing and ITRF

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Rebischung, P.; Garayt, B.; Ray, J.

    2012-04-01

    The International GNSS Service (IGS) is preparing for a second reanalysis of the full history of data collected by the global network using the latest models and methodologies. This effort is designed to obtain improved, consistent satellite orbits, station and satellite clocks, Earth orientation parameters (EOPs) and terrestrial frame products using the current IGS framework, IGS08/igs08.atx. It follows a successful first reprocessing campaign, which provided the IGS input to ITRF2008. Likewise, this second campaign (repro2) should provide the IGS contribution to the next ITRF. We will discuss the analysis standards adopted for repro2, including treatment of and mitigation against non-tidal loading effects, and improvements expected with respect to the first reprocessing campaign. The International Earth Rotation and Reference Systems Service (IERS) Conventions of 2010 are expected to be implemented. However, no improvements in the diurnal and semidiurnal EOP tide models will be made, so the associated errors will remain. Adoption of new orbital force models and consistent handling of satellite attitude changes are expected to improve IGS clock and orbit products. A priori Earth-reflected radiation pressure models should nearly eliminate the ~2.5 cm orbit radial bias previously observed using laser ranging methods. Also, a priori modeling of radiation forces exerted in signal transmission should improve the orbit products. Use of consistent satellite attitude models should also help with satellite clock estimation during Earth and Moon eclipses. Improvements of the terrestrial frame products are expected from, for example, the inclusion of second order ionospheric corrections and also the a priori modeling of Earth-reflected radiation pressure. Because of remaining unmodeled orbital forces, systematic errors will, however, likely continue to affect the origin of the repro2 frames and prevent a contribution of GNSS to the origin of the next ITRF. On the other hand, the planned inclusion of satellite phase center offsets in the long-term stacking of the repro2 frames could help in defining the scale rate of the next ITRF.

  17. Maximizing the probability of satisfying the clinical goals in radiation therapy treatment planning under setup uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders

    2015-07-15

    Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.

  18. Coupling GIS and multivariate approaches to reference site selection for wadeable stream monitoring.

    PubMed

    Collier, Kevin J; Haigh, Andy; Kelly, Johlene

    2007-04-01

    A Geographic Information System (GIS) was used to identify potential reference sites for wadeable stream monitoring, and multivariate analyses were applied to test whether invertebrate communities reflected a priori spatial and stream type classifications. We identified potential reference sites in segments with unmodified vegetation cover adjacent to the stream and in >85% of the upstream catchment. We then used various landcover, amenity and environmental impact databases to eliminate sites that had potential anthropogenic influences upstream and that fell into a range of access classes. Each site identified by this process was coded by four dominant stream classes and seven zones, and 119 candidate sites were randomly selected for follow-up assessment. This process yielded 16 sites conforming to reference site criteria using a conditional-probabilistic design, and these were augmented by an additional 14 existing or special interest reference sites. Non-metric multidimensional scaling (NMS) analysis of percent abundance invertebrate data indicated significant differences in community composition among some of the zones and stream classes identified a priori, providing qualified support for this framework in reference site selection. NMS analysis of range-standardised condition and diversity metrics derived from the invertebrate data indicated a core set of 26 closely related sites, and four outliers that were considered atypical of reference site conditions and subsequently dropped from the network. Use of GIS linked to stream typology, available spatial databases and aerial photography greatly enhanced the objectivity and efficiency of reference site selection. The multi-metric ordination approach reduced variability among stream types and bias associated with non-random site selection, and provided an effective way to identify representative reference sites.

  19. Automated identification of ERP peaks through Dynamic Time Warping: an application to developmental dyslexia.

    PubMed

    Assecondi, Sara; Bianchi, A M; Hallez, H; Staelens, S; Casarotto, S; Lemahieu, I; Chiarenza, G A

    2009-10-01

    This article proposes a method to automatically identify and label event-related potential (ERP) components with high accuracy and precision. We present a framework, referred to as peak-picking Dynamic Time Warping (ppDTW), where a priori knowledge about the ERPs under investigation is used to define a reference signal. We developed a combination of peak-picking and Dynamic Time Warping (DTW) that makes the temporal intervals for peak-picking adaptive on the basis of the morphology of the data. We tested the procedure on experimental data recorded from a control group and from children diagnosed with developmental dyslexia. We compared our results with the traditional peak-picking. We demonstrated that our method achieves better performance than peak-picking, with an overall precision, recall and F-score of 93%, 86% and 89%, respectively, versus 93%, 80% and 85% achieved by peak-picking. We showed that our hybrid method outperforms peak-picking, when dealing with data involving several peaks of interest. The proposed method can reliably identify and label ERP components in challenging event-related recordings, thus assisting the clinician in an objective assessment of amplitudes and latencies of peaks of clinical interest.
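
    For readers unfamiliar with DTW, the classic dynamic-programming recursion at the core of such methods is sketched below (a generic DTW distance, not the authors' ppDTW combination of peak-picking and warping):

      import numpy as np

      def dtw_distance(a, b):
          """Classic Dynamic Time Warping distance between two 1-D sequences."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  # each warped step may advance one series, the other, or both
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      # toy usage: a template ERP-like waveform and a latency-shifted single trial
      template = np.sin(np.linspace(0, np.pi, 50))
      trial = np.sin(np.linspace(0, np.pi, 60)) + 0.05
      print(dtw_distance(template, trial))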

  20. Multichannel blind deconvolution of spatially misaligned images.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2005-07-01

    Existing multichannel blind restoration techniques assume perfect spatial alignment of the channels and correct estimation of the blur size, and they are prone to noise. We developed an alternating minimization scheme based on maximum a posteriori estimation, with the a priori distribution of the blurs derived from the multichannel framework and the a priori distribution of the original image defined by a variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
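
    In generic notation (ours, not taken from the paper), alternating-minimization MAP schemes of this kind minimize a joint energy in the latent image u and the per-channel blurs h_k, given observed channels z_k:

      % Illustrative MAP energy for K degraded channels z_k (notation ours):
      E(u, h_1, \dots, h_K) \;=\; \sum_{k=1}^{K} \lVert h_k * u - z_k \rVert_2^{2}
      \;+\; \lambda \int \lvert \nabla u \rvert \, \mathrm{d}x
      \;+\; \gamma \, Q(h_1, \dots, h_K)
      % where the variational (total-variation) integral plays the role of the image prior,
      % Q encodes the multichannel prior on the blurs, and alternating minimization updates
      % u with the blurs fixed and then the blurs with u fixed.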

  1. Contribution of TIGA reprocessing to the ITRF densification

    NASA Astrophysics Data System (ADS)

    Rudenko, S.; Dähnn, M.; Gendt, G.; Brandt, A.; Nischan, T.

    2009-04-01

    Analysis of tide gauge measurements for sea level change investigations requires a well-defined reference frame. Such a reference frame can be realized through precise positions of GPS stations located at or near tide gauges (TIGA stations) and analyzed within the IGS GPS Tide Gauge Benchmark Monitoring Pilot Project (TIGA). To tie this reference frame to the International Terrestrial Reference Frame (ITRF), one should simultaneously process GPS data from TIGA and IGS stations included in the ITRF. A time series of GPS station positions has recently been derived by reprocessing GPS data from about 400 globally distributed GPS stations, covering in total the time span from 1998 to 2008, using the EPOS-Potsdam software developed at GFZ and improved in recent years. The analysis is based on the use of the IERS Conventions 2003, ITRF2005 as the a priori reference frame, the FES2004 ocean tide loading model, absolute phase centre variations for GPS satellite transmit and ground receive antennae and other models. About 220 stations of the solution are IGS stations and about 180 are TIGA GPS stations that are not part of the IGS. The solution includes weekly coordinates of GPS stations, daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. On the other hand, our new solution can contribute to the ITRF densification by providing positions of about 200 stations not present in ITRF2005. The solution can also be used for the integration of regional frames. The paper presents the results of the analysis and the comparison of our solution with ITRF2005 and the solutions of other TIGA and IGS Analysis Centres.

  2. Expedient Metrics to Describe Plant Community Change Across Gradients of Anthropogenic Influence

    NASA Astrophysics Data System (ADS)

    Marcelino, José A. P.; Weber, Everett; Silva, Luís; Garcia, Patrícia V.; Soares, António O.

    2014-11-01

    Human influence associated with land use may cause considerable biodiversity losses, particularly in oceanic islands such as the Azores. Our goal was to identify plant indicator species across two gradients of increasing anthropogenic influence and management (arborescent and herbaceous communities) and determine the similarity of plant communities in uncategorized vegetation plots to those in reference gradients using metrics derived from R programming. We intend to test and provide an expedient way to determine the conservation value of a given uncategorized vegetation plot based on the number of native, endemic, introduced, and invasive indicator species present. Using the metric IndVal, plant taxa with a significant indicator value for each community type in the two anthropogenic gradients were determined. A new metric, ComVal, was developed to assess the similarity of an uncategorized vegetation plot to a reference community type, based on (i) the percentage of pre-defined indicator species from reference communities present in the vegetation plots, and (ii) the percentage of indicator species, specific to a given reference community type, present in the vegetation plot. Using a data resampling approach, the communities were randomly used as training or validation sets to classify vegetation plots based on ComVal. The percentage match with reference community types ranged from 77 to 100% and from 79 to 100% for herbaceous and arborescent vegetation plots, respectively. Both IndVal and ComVal are part of a suite of useful tools for characterizing plant communities and plant community change along gradients of anthropogenic influence without a priori knowledge of their biology and ecology.

  3. The mere exposure effect depends on an odor's initial pleasantness.

    PubMed

    Delplanque, Sylvain; Coppin, Géraldine; Bloesch, Laurène; Cayeux, Isabelle; Sander, David

    2015-01-01

    The mere exposure phenomenon refers to improvement of one's attitude toward an a priori neutral stimulus after its repeated exposure. The extent to which such a phenomenon influences evaluation of a priori emotional stimuli remains under-investigated. Here we investigated this question by presenting participants with different odors varying in a priori pleasantness during different sessions spaced over time. Participants were requested to report each odor's pleasantness, intensity, and familiarity. As expected, participants became more familiar with all stimuli after the repetition procedure. However, while neutral and mildly pleasant odors showed an increase in pleasantness ratings, unpleasant and very pleasant odors remained unaffected. Correlational analyses revealed an inverse U-shape between the magnitude of the mere exposure effect and the initial pleasantness of the odor. Consequently, the initial pleasantness of the stimuli appears to modulate the impact of repeated exposures on an individual's attitude. These data underline the limits of mere exposure effect and are discussed in light of the biological relevance of odors for individual survival.

  4. The mere exposure effect depends on an odor’s initial pleasantness

    PubMed Central

    Delplanque, Sylvain; Coppin, Géraldine; Bloesch, Laurène; Cayeux, Isabelle; Sander, David

    2015-01-01

    The mere exposure phenomenon refers to improvement of one’s attitude toward an a priori neutral stimulus after its repeated exposure. The extent to which such a phenomenon influences evaluation of a priori emotional stimuli remains under-investigated. Here we investigated this question by presenting participants with different odors varying in a priori pleasantness during different sessions spaced over time. Participants were requested to report each odor’s pleasantness, intensity, and familiarity. As expected, participants became more familiar with all stimuli after the repetition procedure. However, while neutral and mildly pleasant odors showed an increase in pleasantness ratings, unpleasant and very pleasant odors remained unaffected. Correlational analyses revealed an inverse U-shape between the magnitude of the mere exposure effect and the initial pleasantness of the odor. Consequently, the initial pleasantness of the stimuli appears to modulate the impact of repeated exposures on an individual’s attitude. These data underline the limits of mere exposure effect and are discussed in light of the biological relevance of odors for individual survival. PMID:26191021

  5. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
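
    The multilevel idea can be illustrated with a generic telescoping-sum estimator (a sketch under assumed names; solve_qoi(level, seed) is a hypothetical stand-in for evaluating the quantity of interest at a given discretization level):

      import numpy as np

      def mlmc_estimate(solve_qoi, samples_per_level, rng=np.random.default_rng(0)):
          """
          Multilevel Monte Carlo estimate of E[Q_L] via the telescoping sum
          E[Q_0] + sum_{l>=1} E[Q_l - Q_{l-1}], using coupled samples per level.
          solve_qoi(level, seed) -> scalar quantity of interest (hypothetical solver).
          samples_per_level      -> list of sample counts N_0, ..., N_L.
          """
          estimate = 0.0
          for level, n_samples in enumerate(samples_per_level):
              seeds = rng.integers(0, 2**31, size=n_samples)
              if level == 0:
                  diffs = [solve_qoi(0, s) for s in seeds]
              else:
                  # the same seed couples the fine and coarse solves of one sample
                  diffs = [solve_qoi(level, s) - solve_qoi(level - 1, s) for s in seeds]
              estimate += np.mean(diffs)
          return estimate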

  6. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  7. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is customary to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the family-wise error rate (FWER) is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
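
    For contrast, the conventional approach that the abstract argues against can be illustrated with a standard aggregation over a priori defined windows, here using Fisher's method (shown only as a generic baseline; it is not the LandScape algorithm):

      import numpy as np
      from scipy.stats import chi2

      def fisher_combine(pvals):
          """Fisher's method: -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom."""
          pvals = np.asarray(pvals, dtype=float)
          stat = -2.0 * np.log(pvals).sum()
          return chi2.sf(stat, df=2 * pvals.size)

      def windowed_fisher(pvals, window):
          """Aggregate a sequence of p-values over fixed, a priori defined windows."""
          return [fisher_combine(pvals[i:i + window])
                  for i in range(0, len(pvals), window)]

      # toy usage: 1000 test p-values aggregated in windows of 50
      rng = np.random.default_rng(1)
      print(windowed_fisher(rng.uniform(size=1000), window=50)[:3])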

  8. Covariation bias for food-related control is associated with eating disorders symptoms in normal adolescents.

    PubMed

    Mayer, Birgit; Muris, Peter; Kramer Freher, Nancy; Stout, Janne; Polak, Marike

    2012-12-01

    Covariation bias refers to the phenomenon of overestimating the contingency between certain stimuli and negative outcomes, which is considered as a heuristic playing a role in the maintenance of certain types of psychopathology. In the present study, covariation bias was investigated within the context of eating pathology. In a sample of 148 adolescents (101 girls, 47 boys; mean age 15.3 years), a priori and a posteriori contingencies were measured between words referring to control and loss of control over eating behavior, on the one hand, and fear, disgust, positive and neutral outcomes, on the other hand. Results indicated that all adolescents displayed an a priori covariation bias reflecting an overestimation of the contingency of words referring to loss of control over eating behavior and fear- and disgust-relevant outcomes, while words referring to control over eating behavior were more often associated with positive and neutral outcomes. This bias was unrelated to level of eating disorder symptoms. In the case of a posteriori contingency estimates no overall bias could be observed, but some evidence was found indicating that girls with higher levels of eating disorder symptoms displayed a stronger covariation bias. These findings provide further support for the notion that covariation bias is involved in eating pathology, and also demonstrate that this type of cognitive distortion is already present in adolescents. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. A priori discretization quality metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns depends first on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units, etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justifications and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad-hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. For the first time, the metrics quantify the routing-relevant information loss due to discretization, based on the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes on the input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scales are greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss increases monotonically as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest rather than just the watershed outlet should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently, since subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).
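
    One of the ideas above, the area-weighted accumulation of spatial pattern changes, can be illustrated with a toy calculation (our simplification under hypothetical array names; the paper's metrics additionally account for in-channel routing length and flow velocity):

      import numpy as np

      def aggregation_loss(values, areas, unit_ids):
          """
          Area-weighted loss of spatial pattern when input cells are lumped into units:
          the area-weighted mean absolute difference between each cell's value and the
          area-weighted mean of the discretization unit it is assigned to.
          values, areas, unit_ids : 1-D arrays over the input-data cells.
          """
          values, areas, unit_ids = map(np.asarray, (values, areas, unit_ids))
          loss = 0.0
          for unit in np.unique(unit_ids):
              mask = unit_ids == unit
              unit_mean = np.average(values[mask], weights=areas[mask])
              loss += np.sum(areas[mask] * np.abs(values[mask] - unit_mean))
          return loss / areas.sum()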

  10. Does Science Presuppose Naturalism (or Anything at All)?

    NASA Astrophysics Data System (ADS)

    Fishman, Yonatan I.; Boudry, Maarten

    2013-05-01

    Several scientists, scientific institutions, and philosophers have argued that science is committed to Methodological Naturalism (MN), the view that science, by virtue of its methods, is limited to studying `natural' phenomena and cannot consider or evaluate hypotheses that refer to supernatural entities. While they may in fact exist, gods, ghosts, spirits, and extrasensory or psi phenomena are inherently outside the domain of scientific investigation. Recently, Mahner (Sci Educ 3:357-371, 2012) has taken this position one step further, proposing the more radical view that science presupposes an a priori commitment not just to MN, but also to ontological naturalism (ON), the metaphysical thesis that supernatural entities and phenomena do not exist. Here, we argue that science presupposes neither MN nor ON and that science can indeed investigate supernatural hypotheses via standard methodological approaches used to evaluate any `non-supernatural' claim. Science, at least ideally, is committed to the pursuit of truth about the nature of reality, whatever it may be, and hence cannot exclude the existence of the supernatural a priori, be it on methodological or metaphysical grounds, without artificially limiting its scope and power. Hypotheses referring to the supernatural or paranormal should be rejected not because they violate alleged a priori methodological or metaphysical presuppositions of the scientific enterprise, but rather because they fail to satisfy basic explanatory criteria, such as explanatory power and parsimony, which are routinely considered when evaluating claims in science and everyday life. Implications of our view for science education are discussed.

  11. State-of-Science Approaches to Determine Sensitive Taxa for Water Quality Criteria Derivation

    EPA Science Inventory

    Current Ambient Water Quality Criteria (AWQC) guidelines specify pre-defined taxa diversity requirements, which has limited chemical-specific criteria development in the U.S. to less than 100 chemicals. A priori knowledge of sensitive taxa to toxicologically similar groups of che...

  12. αAMG based on Weighted Matching for Systems of Elliptic PDEs Arising From Displacement and Mixed Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Ambra, P.; Vassilevski, P. S.

    2014-05-30

    Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a priori knowledge or assumptions on the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as “the compatible weighted matching”. In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.

  13. Prediction of allosteric sites and mediating interactions through bond-to-bond propensities

    NASA Astrophysics Data System (ADS)

    Amor, B. R. C.; Schaub, M. T.; Yaliraki, S. N.; Barahona, M.

    2016-08-01

    Allostery is a fundamental mechanism of biological regulation, in which binding of a molecule at a distant location affects the active site of a protein. Allosteric sites provide targets to fine-tune protein activity, yet we lack computational methodologies to predict them. Here we present an efficient graph-theoretical framework to reveal allosteric interactions (atoms and communication pathways strongly coupled to the active site) without a priori information of their location. Using an atomistic graph with energy-weighted covalent and weak bonds, we define a bond-to-bond propensity quantifying the non-local effect of instantaneous bond fluctuations propagating through the protein. Significant interactions are then identified using quantile regression. We exemplify our method with three biologically important proteins: caspase-1, CheY, and h-Ras, correctly predicting key allosteric interactions, whose significance is additionally confirmed against a reference set of 100 proteins. The almost-linear scaling of our method renders it suitable for high-throughput searches for candidate allosteric sites.

  14. Prediction of allosteric sites and mediating interactions through bond-to-bond propensities

    PubMed Central

    Amor, B. R. C.; Schaub, M. T.; Yaliraki, S. N.; Barahona, M.

    2016-01-01

    Allostery is a fundamental mechanism of biological regulation, in which binding of a molecule at a distant location affects the active site of a protein. Allosteric sites provide targets to fine-tune protein activity, yet we lack computational methodologies to predict them. Here we present an efficient graph-theoretical framework to reveal allosteric interactions (atoms and communication pathways strongly coupled to the active site) without a priori information of their location. Using an atomistic graph with energy-weighted covalent and weak bonds, we define a bond-to-bond propensity quantifying the non-local effect of instantaneous bond fluctuations propagating through the protein. Significant interactions are then identified using quantile regression. We exemplify our method with three biologically important proteins: caspase-1, CheY, and h-Ras, correctly predicting key allosteric interactions, whose significance is additionally confirmed against a reference set of 100 proteins. The almost-linear scaling of our method renders it suitable for high-throughput searches for candidate allosteric sites. PMID:27561351

  15. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
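
    The core trade-off such a protocol weighs can be sketched with the standard design-effect approximation. The snippet below is a back-of-envelope illustration only (not the authors' evaluation protocol): it shows how an assumed intracluster correlation of classification error inflates the variance of an accuracy estimate and shrinks the effective sample size.

```python
# Back-of-envelope sketch: how within-cluster correlation inflates variance via
# the design effect deff = 1 + (m - 1) * rho, where m is the number of sample
# units per cluster and rho is the intracluster correlation of classification
# error. The numbers below are illustrative assumptions.
def design_effect(m, rho):
    return 1.0 + (m - 1) * rho

def effective_sample_size(n_total, m, rho):
    """Sample size of an equivalent simple random sample."""
    return n_total / design_effect(m, rho)

n_total = 1000          # total reference pixels
for m in (10, 25, 50):  # pixels per cluster
    for rho in (0.05, 0.2, 0.5):
        print(f"m={m:3d} rho={rho:.2f} "
              f"deff={design_effect(m, rho):5.2f} "
              f"n_eff={effective_sample_size(n_total, m, rho):7.1f}")
```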

  16. Analysis and compensation of reference frequency mismatch in multiple-frequency feedforward active noise and vibration control system

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Yang, Liangdong; Gao, Jiawei; Zhang, Xingwu

    2017-11-01

    In the field of active noise and vibration control (ANVC), a considerable part of unwelcome noise and vibration results from rotational machines, giving the response signal a multiple-frequency spectrum. Narrowband filtered-x least mean square (NFXLMS) is a very popular algorithm to suppress such noise and vibration. It performs well because a priori knowledge of the fundamental frequency of the noise source (called the reference frequency) is exploited. However, if this a priori knowledge is inaccurate, the control performance will be dramatically degraded. This phenomenon is called reference frequency mismatch (RFM). In this paper, a novel narrowband ANVC algorithm with an orthogonal pair-wise reference frequency regulator is proposed to compensate for the RFM problem. Firstly, the RFM phenomenon in traditional NFXLMS is closely investigated both analytically and numerically. The results show that RFM changes the parameter estimation problem of the adaptive controller into a parameter tracking problem. Then, adaptive sinusoidal oscillators with output rectification are introduced as the reference frequency regulator to compensate for the RFM problem. The simulation results show that the proposed algorithm can dramatically suppress the multiple-frequency noise and vibration with an improved convergence rate whether or not there is RFM. Finally, case studies using experimental data are conducted under conditions of no, small and large RFM. The shaft radial run-out signal of a rotor test-platform is applied to simulate the primary noise, and an IIR model identified from a real steel structure is applied to simulate the secondary path. The results further verify the robustness and effectiveness of the proposed algorithm.
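
    To make the RFM effect concrete, here is a minimal single-frequency sketch of narrowband adaptive cancellation with an orthogonal cosine/sine reference pair generated from an assumed reference frequency. The secondary path is taken as identity for brevity (so this is plain LMS rather than full FxLMS), the frequency regulator proposed in the paper is not modelled, and all signal parameters are invented; setting f_ref different from f_true mimics the mismatch.

```python
# Minimal single-frequency sketch of narrowband adaptive cancellation with an
# orthogonal (cos/sin) reference pair. Secondary path = identity for brevity,
# so this is plain LMS, not the paper's algorithm. f_ref != f_true illustrates
# the reference frequency mismatch (RFM) effect.
import numpy as np

fs = 1000.0                 # sampling rate [Hz]
f_true, f_ref = 50.0, 50.5  # actual and assumed fundamental frequency [Hz]
mu = 0.01                   # LMS step size
n = np.arange(20000)
d = np.sin(2 * np.pi * f_true * n / fs)       # primary (unwanted) vibration

w = np.zeros(2)                               # adaptive weights for cos/sin pair
e = np.zeros_like(d)
for k in n:
    x = np.array([np.cos(2 * np.pi * f_ref * k / fs),
                  np.sin(2 * np.pi * f_ref * k / fs)])
    y = w @ x                                 # anti-noise output
    e[k] = d[k] - y                           # residual error
    w += 2 * mu * e[k] * x                    # LMS weight update

print("residual RMS over the last second:", np.sqrt(np.mean(e[-1000:] ** 2)))
```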

  17. SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure

    PubMed Central

    Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.

    2017-01-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341

  18. SPARSE-A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure.

    PubMed

    Davis, Sean L; Jacobs, Gustaaf B; Sen, Oishik; Udaykumar, H S

    2017-03-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles.

  19. A priori mesh grading for the numerical calculation of the head-related transfer functions

    PubMed Central

    Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr

    2017-01-01

    Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM as a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs in the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects including meshes of three human listeners and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A-priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception based predictions of sound-localization performance, and computational costs. PMID:28239186
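
    A grading function of the kind described can be sketched in a few lines: each vertex is assigned a target edge length that grows with distance from a region of interest (here the pinna, placed at the origin) and is clipped between a fine limit of roughly six elements per shortest wavelength and a coarser limit elsewhere. The linear growth law and all numerical parameters are assumptions for illustration, not the grading functions evaluated in the paper.

```python
# Illustrative a priori grading function: target edge length grows with distance
# from a region of interest and is clipped between a fine limit (about six
# elements per shortest wavelength) and a coarse limit elsewhere. The linear
# form and all parameters are assumptions.
import numpy as np

C = 343.0                       # speed of sound [m/s]
F_MAX = 20000.0                 # highest frequency of interest [Hz]
H_FINE = C / F_MAX / 6.0        # ~ lambda_min / 6 near the pinna [m]
H_COARSE = 0.02                 # allowed edge length far from the pinna [m]

def target_edge_length(vertices, roi_center, growth=0.5):
    """Target edge length per vertex, graded by distance from roi_center."""
    dist = np.linalg.norm(vertices - roi_center, axis=1)
    return np.clip(H_FINE + growth * dist, H_FINE, H_COARSE)

# Example: grade a random cloud of head-mesh vertices around an ear at the origin.
verts = np.random.default_rng(1).uniform(-0.15, 0.15, size=(1000, 3))
h = target_edge_length(verts, roi_center=np.zeros(3))
print(h.min(), h.max())
```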

  20. Undersampling strategies for compressed sensing accelerated MR spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Hu, Houchun Harry; Bikkamane Jayadev, Nutandev; Chang, John C.; Kodibagkar, Vikram D.

    2017-03-01

    Compressed sensing (CS) can accelerate magnetic resonance spectroscopic imaging (MRSI), facilitating its widespread clinical integration. The objective of this study was to assess the effect of different undersampling strategies on CS-MRSI reconstruction quality. Phantom data were acquired on a Philips 3 T Ingenia scanner. Four types of undersampling masks, one for each strategy (low resolution, variable density, iterative design, and a priori), were simulated in Matlab and retrospectively applied to the test 1X MRSI data to generate undersampled datasets corresponding to 2X-5X and 7X accelerations for each type of mask. Reconstruction parameters were kept the same in each case (all masks and accelerations) to ensure that any resulting differences can be attributed to the type of mask being employed. The reconstructed datasets from each mask were statistically compared with the reference 1X data and assessed using metrics such as the root mean square error and metabolite ratios. Simulation results indicate that both the a priori and variable density undersampling masks maintain high fidelity with the 1X data up to five-fold acceleration. Reconstructions based on the low resolution mask showed statistically significant differences from the 1X data, with the reconstruction failing at 3X, while the iterative design reconstructions maintained fidelity with the 1X data up to 4X acceleration. In summary, a pilot study was conducted to identify an optimal sampling mask in CS-MRSI. Simulation results demonstrate that the a priori and variable density masks can provide statistically similar results to the fully sampled reference. Future work would involve implementing these two masks prospectively on a clinical scanner.
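
    As an illustration of one of these strategies, the sketch below generates a variable-density random undersampling mask for a 2-D phase-encode grid, with sampling probability falling off polynomially away from the k-space centre and a small fully sampled calibration core. The density law and the calibration-region size are assumptions for illustration, not the masks used in the study.

```python
# Sketch of one mask type only: a variable-density random undersampling mask
# for a 2-D phase-encode grid, denser near the k-space centre. The polynomial
# density and the fully sampled calibration core are illustrative assumptions.
import numpy as np

def variable_density_mask(ny, nx, accel=4, power=3, calib=4, seed=0):
    """Boolean mask with ~ny*nx/accel samples, biased toward the k-space centre."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                         indexing="ij")
    r = np.sqrt(ky**2 + kx**2) / np.sqrt(2)        # normalised radius in [0, 1]
    pdf = (1 - r) ** power                         # higher probability near centre
    pdf *= (ny * nx / accel) / pdf.sum()           # scale to the sampling budget
    mask = rng.random((ny, nx)) < pdf
    mask[ny//2 - calib//2:ny//2 + calib//2,
         nx//2 - calib//2:nx//2 + calib//2] = True # fully sampled calibration core
    return mask

m = variable_density_mask(32, 32, accel=4)
print("sampled fraction:", m.mean())
```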

  1. Pressure Scalings and Influence Region Research

    DTIC Science & Technology

    2018-01-01

    the results are briefly discussed. Additionally, updated experimental results are presented along with discussion of collaborative research efforts...with upstream and downstream influence, where the influence lengths are defined in terms of a-priori quantities (freestream conditions and...governing equations and the result is briefly discussed. Additionally, updated experimental results are presented along with discussion of

  2. Self-configurable radio receiver system and method for use with signals without prior knowledge of signal defining characteristics

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor); Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Tkacenko, Andre (Inventor)

    2013-01-01

    A method, radio receiver, and system to autonomously receive and decode a plurality of signals having a variety of signal types without a priori knowledge of the defining characteristics of the signals is disclosed. The radio receiver is capable of receiving a signal of an unknown signal type and, by estimating one or more defining characteristics of the signal, determine the type of signal. The estimated defining characteristic(s) is/are utilized to enable the receiver to determine other defining characteristics. This in turn, enables the receiver, through multiple iterations, to make a maximum-likelihood (ML) estimate for each of the defining characteristics. After the type of signal is determined by its defining characteristics, the receiver selects an appropriate decoder from a plurality of decoders to decode the signal.

  3. A Priori and a Posteriori Dietary Patterns during Pregnancy and Gestational Weight Gain: The Generation R Study

    PubMed Central

    Tielemans, Myrte J.; Erler, Nicole S.; Leermakers, Elisabeth T. M.; van den Broek, Marion; Jaddoe, Vincent W. V.; Steegers, Eric A. P.; Kiefte-de Jong, Jessica C.; Franco, Oscar H.

    2015-01-01

    Abnormal gestational weight gain (GWG) is associated with adverse pregnancy outcomes. We examined whether dietary patterns are associated with GWG. Participants included 3374 pregnant women from a population-based cohort in the Netherlands. Dietary intake during pregnancy was assessed with food-frequency questionnaires. Three a posteriori-derived dietary patterns were identified using principal component analysis: a “Vegetable, oil and fish”, a “Nuts, high-fiber cereals and soy”, and a “Margarine, sugar and snacks” pattern. The a priori-defined dietary pattern was based on national dietary recommendations. Weight was repeatedly measured around 13, 20 and 30 weeks of pregnancy; pre-pregnancy and maximum weight were self-reported. Normal weight women with high adherence to the “Vegetable, oil and fish” pattern had higher early-pregnancy GWG than those with low adherence (43 g/week (95% CI 16; 69) for highest vs. lowest quartile (Q)). Adherence to the “Margarine, sugar and snacks” pattern was associated with a higher prevalence of excessive GWG (OR 1.45 (95% CI 1.06; 1.99) Q4 vs. Q1). Normal weight women with higher scores on the “Nuts, high-fiber cereals and soy” pattern had more moderate GWG than women with lower scores (−0.01 (95% CI −0.02; −0.00) per SD). The a priori-defined pattern was not associated with GWG. To conclude, specific dietary patterns may play a role in early pregnancy but are not consistently associated with GWG. PMID:26569303

  4. Plant biomass and species composition along an environmental gradient in montane riparian meadows

    Treesearch

    Kathleen A. Dwire; J. Boone Kauffman; E. N. Jack Brookshire; John E. Baham

    2004-01-01

    In riparian meadows, narrow zonation of the dominant vegetation frequently occurs along the elevational gradient from the stream edge to the floodplain terrace. We measured plant species composition and above- and belowground biomass in three riparian plant communities - a priori defined as wet, moist, and dry meadow - along short streamside topographic gradients in...

  5. National heritage areas: examining organizational development and the role of the National Park Service as federal partner

    Treesearch

    Susan Martin-Williams; Steven Selin

    2007-01-01

    Understanding the organizational development of National Heritage Areas (NHAs) and defining the National Park Service's (NPS) role within individual NHAs guided this qualitative study. Information gained during telephone interviews led to the development of an a priori model of the evolutionary stages of NHAs' organizational development and...

  6. Direct recovery of mean gravity anomalies from satellite to satellite tracking

    NASA Technical Reports Server (NTRS)

    Hajela, D. P.

    1974-01-01

    The direct recovery of mean gravity anomalies from summed range rate observations was investigated, the signal path being ground station to a geosynchronous relay satellite to a close satellite significantly perturbed by the short wave features of the earth's gravitational field. To ensure realistic observations, these were simulated with the nominal orbital elements for the relay satellite corresponding to ATS-6, and for two different close satellites (one at about 250 km height, and the other at about 900 km height) corresponding to the nominal values for GEOS-C. The earth's gravitational field was represented by a reference set of potential coefficients up to degree and order 12, considered as known values, and by residual gravity anomalies obtained by subtracting the anomalies, implied by the potential coefficients, from their terrestrial estimates. It was found that gravity anomalies could be recovered from a strong signal without using any a-priori terrestrial information, i.e. considering their initial values as zero and also assigning them a zero weight matrix. While recovering them from a weak signal, it was necessary to use the a-priori estimate of the standard deviation of the anomalies to form their a-priori diagonal weight matrix.
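
    The two estimation regimes in the last sentences can be sketched with a generic weighted least-squares solve in which the a priori diagonal weight matrix is either omitted (strong signal) or built from an assumed standard deviation of the anomalies (weak signal). The design matrix, noise level, and anomaly statistics below are synthetic placeholders, not the actual summed range-rate observation model.

```python
# Sketch of the estimation step: least squares with an optional a priori
# diagonal weight matrix built from the assumed standard deviation of the
# anomalies. A, y and the noise level are synthetic placeholders.
import numpy as np

def estimate_anomalies(A, y, sigma_obs, sigma_prior=None):
    """Solve (A^T W A + P0) x = A^T W y with optional a priori weights P0."""
    W = np.eye(len(y)) / sigma_obs**2
    N = A.T @ W @ A
    if sigma_prior is not None:                      # weak-signal case: add prior
        N = N + np.diag(1.0 / np.asarray(sigma_prior)**2)
    return np.linalg.solve(N, A.T @ W @ y)

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 20))                       # synthetic design matrix
x_true = rng.normal(scale=10.0, size=20)             # "residual anomalies" [mGal]
y = A @ x_true + rng.normal(scale=0.5, size=200)     # noisy observations

x_free = estimate_anomalies(A, y, sigma_obs=0.5)                     # zero weight
x_prior = estimate_anomalies(A, y, sigma_obs=0.5,
                             sigma_prior=np.full(20, 15.0))          # a priori sigma
print(np.abs(x_free - x_true).mean(), np.abs(x_prior - x_true).mean())
```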

  7. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; Wang, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography doesn't need any a priori information, nor does it require large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, even when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is also present in their probability tomography results. So we use some rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result which is used for extracting a priori information, and then incorporate the information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Some magnetic synthetic examples with and without a priori information extracted from the probability tomography results were run for comparison; the results show that the former are more concentrated and resolve the source body edges more sharply. This method is finally applied to an iron mine in China with field measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).

  8. Prospective Elementary Teachers' Perceptions of the Processes of Modeling: A Case Study

    ERIC Educational Resources Information Center

    Fazio, Claudio; Di Paola, Benedetto; Guastella, Ivan

    2012-01-01

    In this paper we discuss a study on the approaches to modeling of students of the 4-year elementary school teacher program at the University of Palermo, Italy. The answers to a specially designed questionnaire are analyzed on the basis of an "a priori" analysis made using a general scheme of reference on the epistemology of mathematics…

  9. Template based rotation: A method for functional connectivity analysis with a priori templates☆

    PubMed Central

    Schultz, Aaron P.; Chhatwal, Jasmeer P.; Huijbers, Willem; Hedden, Trey; van Dijk, Koene R.A.; McLaren, Donald G.; Ward, Andrew M.; Wigman, Sarah; Sperling, Reisa A.

    2014-01-01

    Functional connectivity magnetic resonance imaging (fcMRI) is a powerful tool for understanding the network level organization of the brain in research settings and is increasingly being used to study large-scale neuronal network degeneration in clinical trial settings. Presently, a variety of techniques, including seed-based correlation analysis and group independent components analysis (with either dual regression or back projection) are commonly employed to compute functional connectivity metrics. In the present report, we introduce template based rotation,1 a novel analytic approach optimized for use with a priori network parcellations, which may be particularly useful in clinical trial settings. Template based rotation was designed to leverage the stable spatial patterns of intrinsic connectivity derived from out-of-sample datasets by mapping data from novel sessions onto the previously defined a priori templates. We first demonstrate the feasibility of using previously defined a priori templates in connectivity analyses, and then compare the performance of template based rotation to seed based and dual regression methods by applying these analytic approaches to an fMRI dataset of normal young and elderly subjects. We observed that template based rotation and dual regression are approximately equivalent in detecting fcMRI differences between young and old subjects, demonstrating similar effect sizes for group differences and similar reliability metrics across 12 cortical networks. Both template based rotation and dual-regression demonstrated larger effect sizes and comparable reliabilities as compared to seed based correlation analysis, though all three methods yielded similar patterns of network differences. When performing inter-network and sub-network connectivity analyses, we observed that template based rotation offered greater flexibility, larger group differences, and more stable connectivity estimates as compared to dual regression and seed based analyses. This flexibility owes to the reduced spatial and temporal orthogonality constraints of template based rotation as compared to dual regression. These results suggest that template based rotation can provide a useful alternative to existing fcMRI analytic methods, particularly in clinical trial settings where predefined outcome measures and conserved network descriptions across groups are at a premium. PMID:25150630

  10. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    NASA Astrophysics Data System (ADS)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.

    2018-05-01

    Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
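
    The construction sketched in the abstract can be illustrated compactly: build a covariance matrix from a correlation model evaluated between irregular cell centroids, eigendecompose it, and assemble an inverse square root to act as the regularization operator. The isotropic exponential correlation model, its range, and the dense linear algebra below are illustrative assumptions; a practical 3-D implementation would need to exploit structure to remain tractable.

```python
# Sketch: exponential covariance over (irregular) cell centroids, then C^{-1/2}
# via eigendecomposition, for use as a geostatistical regularization operator.
# The correlation model and its range are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import cdist

def geostat_operator(centroids, corr_range=10.0, variance=1.0, eps=1e-8):
    """Return W such that the model objective term ||W m||^2 equals m^T C^-1 m."""
    h = cdist(centroids, centroids)                 # lag distances between cells
    C = variance * np.exp(-h / corr_range)          # exponential covariance model
    lam, V = np.linalg.eigh(C)                      # eigendecomposition (C is SPD)
    lam = np.maximum(lam, eps)                      # guard tiny/negative eigenvalues
    return V @ np.diag(1.0 / np.sqrt(lam)) @ V.T    # C^{-1/2}

# Example on an irregular 2-D mesh represented by random cell centroids.
cells = np.random.default_rng(3).uniform(0, 100, size=(300, 2))
W = geostat_operator(cells, corr_range=15.0)
print(W.shape)
```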

  11. A multivariate assessment of changes in wetland habitat for waterbirds at Moosehorn National Wildlife Refuge, Maine, USA

    USGS Publications Warehouse

    Hierl, L.A.; Loftin, C.S.; Longcore, J.R.; McAuley, D.G.; Urban, D.L.

    2007-01-01

    We assessed changes in vegetative structure of 49 impoundments at Moosehorn National Wildlife Refuge (MNWR), Maine, USA, between the periods 1984-1985 and 2002 with a multivariate, adaptive approach that may be useful in a variety of wetland and other habitat management situations. We used Mahalanobis Distance (MD) analysis to classify the refuge's wetlands as poor or good waterbird habitat based on five variables: percent emergent vegetation, percent shrub, percent open water, relative richness of vegetative types, and an interspersion juxtaposition index that measures adjacency of vegetation patches. Mahalanobis Distance is a multivariate statistic that examines whether a particular data point is an outlier or a member of a data cluster while accounting for correlations among inputs. For each wetland, we used MD analysis to quantify a distance from a reference condition defined a priori by habitat conditions measured in MNWR wetlands used by waterbirds. Twenty-five wetlands declined in quality between the two periods, whereas 23 wetlands improved. We identified specific wetland characteristics that may be modified to improve habitat conditions for waterbirds. The MD analysis seems ideal for instituting an adaptive wetland management approach because metrics can be easily added or removed, ranges of target habitat conditions can be defined by field-collected data, and the analysis can identify priorities for single or multiple management objectives.
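
    The classification step is straightforward to sketch: compute each wetland's Mahalanobis distance from the mean and covariance of a reference sample of wetlands known to be used by waterbirds. The five columns mirror the variables listed in the abstract, but the data and the distance threshold below are synthetic placeholders.

```python
# Sketch of the classification step: Mahalanobis distance of each wetland from
# a reference condition defined by wetlands used by waterbirds. Data synthetic.
import numpy as np

def mahalanobis_to_reference(X, X_ref):
    """Distance of each row of X from the mean of the reference sample X_ref."""
    mu = X_ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(4)
# columns: % emergent, % shrub, % open water, richness, interspersion index
X_ref = rng.normal([40, 10, 35, 5, 60], [8, 4, 8, 1, 10], size=(30, 5))
X_all = rng.normal([30, 20, 30, 4, 50], [15, 10, 15, 2, 20], size=(49, 5))
d = mahalanobis_to_reference(X_all, X_ref)
print("wetlands within the reference cluster:", int((d < 3.0).sum()))
```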

  12. Twenty First Century Cyberbullying Defined: An Analysis of Intent, Repetition and Emotional Response

    ERIC Educational Resources Information Center

    Walker, Carol Marie

    2012-01-01

    The purpose of this study was to analyze the extent and impact that cyberbullying has on the undergraduate college student and provide a current definition for the event. A priori power analysis guided this research to provide an 80 percent probability of detecting a real effect with medium effect size. Adequate research power was essential to…

  13. Obtaining Rubric Weights for Assessments by More than One Lecturer Using a Pairwise Learning Model

    ERIC Educational Resources Information Center

    Quevedo, J. R.; Montanes, E.

    2009-01-01

    Specifying the criteria of a rubric to assess an activity, establishing the different quality levels of proficiency of development and defining weights for every criterion is not as easy as one a priori might think. Besides, the complexity of these tasks increases when they involve more than one lecturer. Reaching an agreement about the criteria…

  14. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
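
    The propagation step referred to above can be sketched in a few lines: an orbital-element covariance P is mapped to a positional covariance through the matrix Phi of partial derivatives of position with respect to the elements at the target epoch, P_pos = Phi P Phi^T, whose eigendecomposition yields the axes of the uncertainty ellipsoid. Phi and P below are synthetic placeholders, not a real asteroid solution.

```python
# Sketch of the propagation step only: law of error propagation
# P_pos = Phi P Phi^T, then the ellipsoid axes from an eigendecomposition.
# Phi and P are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
L = rng.normal(size=(6, 6))
P = L @ L.T * 1e-10                 # covariance of 6 orbital elements (synthetic)
Phi = rng.normal(size=(3, 6))       # d(position)/d(elements) at the target epoch

P_pos = Phi @ P @ Phi.T             # 3x3 positional covariance
axes_len, axes_dir = np.linalg.eigh(P_pos)
print("1-sigma ellipsoid semi-axes:", np.sqrt(axes_len))
```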

  15. Improved pedagogy for linear differential equations by reconsidering how we measure the size of solutions

    NASA Astrophysics Data System (ADS)

    Tisdell, Christopher C.

    2017-11-01

    For over 50 years, the learning and teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order, linear problems with constant coefficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establish a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of pedagogy from the literature and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.

  16. A numerical fragment basis approach to SCF calculations.

    NASA Astrophysics Data System (ADS)

    Hinde, Robert J.

    1997-11-01

    The counterpoise method is often used to correct for basis set superposition error in calculations of the electronic structure of bimolecular systems. One drawback of this approach is the need to specify a "reference state" for the system; for reactive systems, the choice of an unambiguous reference state may be difficult. An example is the reaction F⁻ + HCl → HF + Cl⁻. Two obvious reference states for this reaction are F⁻ + HCl and HF + Cl⁻; however, different counterpoise-corrected interaction energies are obtained using these two reference states. We outline a method for performing SCF calculations which employs numerical basis functions; this method attempts to eliminate basis set superposition errors in an a priori fashion. We test the proposed method on two one-dimensional, three-center systems and discuss the possibility of extending our approach to include electron correlation effects.

  17. An Empirical Verification of a-priori Learning Models on Mailing Archives in the Context of Online Learning Activities of Participants in Free\\Libre Open Source Software (FLOSS) Communities

    ERIC Educational Resources Information Center

    Mukala, Patrick; Cerone, Antonio; Turini, Franco

    2017-01-01

    Free\\Libre Open Source Software (FLOSS) environments are increasingly dubbed as learning environments where practical software engineering skills can be acquired. Numerous studies have extensively investigated how knowledge is acquired in these environments through a collaborative learning model that defines a learning process. Such a learning…

  18. Investigating the Factor Structure and Measurement Invariance of Phonological Abilities in a Sufficiently Transparent Language

    ERIC Educational Resources Information Center

    Papadopoulos, Timothy C.; Kendeou, Panayiota; Spanoudis, George

    2012-01-01

    Theory-driven conceptualizations of phonological abilities in a sufficiently transparent language (Greek) were examined in children ages 5 years 8 months to 7 years 7 months, by comparing a set of a priori models. Specifically, the fit of 9 different models was evaluated, as defined by the Number of Factors (1 to 3; represented by rhymes,…

  19. Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy

    NASA Astrophysics Data System (ADS)

    Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre

    An automatic and markerless tracking method of deformable structures (digestive organs) during laparoscopic cholecystectomy intervention that uses particle swarm optimisation (PSO) behaviour and preoperative a priori knowledge is presented. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colored images. The swarm behaviour is directed by a new fitness function to be optimized to improve the detection and tracking performance. The function is defined by a linear combination of two terms, namely, the human a priori knowledge term (H) and the particle's density term (D). Under the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, it outperforms existing methods such as active contours, deformable models and Gradient Vector Flow in accuracy and convergence rate, without the need for explicit initialization.

  20. A meta-learning system based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experiences or meta-knowledge. We suggest using genetic algorithms as the basis of an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimation of the future states of parameters and is also applied to the finding of optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research in genetic algorithms and their capability to learn to learn.

  1. Estimating clinical chemistry reference values based on an existing data set of unselected animals.

    PubMed

    Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe

    2008-11-01

    In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, such a high number of samples and laboratory analysis is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
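
    The a posteriori approach described above can be sketched as: iteratively trim outliers from the existing, unselected laboratory data, then report the central 95% interval of what remains. Tukey's fences are used below as a simple stand-in for the paper's outlier-detection rule, and the data are synthetic.

```python
# Sketch of the a posteriori idea: trim outliers from an existing, unselected
# data set and report the central 95% interval. Tukey fences are a stand-in for
# the paper's outlier-detection rule; all data are synthetic.
import numpy as np

def reference_interval(values, k=1.5, max_iter=10):
    x = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        q1, q3 = np.percentile(x, [25, 75])
        lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
        kept = x[(x >= lo) & (x <= hi)]
        if len(kept) == len(x):           # no further outliers removed
            break
        x = kept
    return np.percentile(x, [2.5, 97.5]), len(x)

# Example: a laboratory data set contaminated by values from unhealthy animals.
rng = np.random.default_rng(6)
healthy = rng.normal(65.0, 6.0, 950)      # e.g. total protein, g/L (synthetic)
sick = rng.normal(95.0, 15.0, 50)
interval, n_used = reference_interval(np.concatenate([healthy, sick]))
print(interval, n_used)
```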

  2. Testing the hypothesis of neurodegeneracy in respiratory network function with a priori transected arterially perfused brain stem preparation of rat

    PubMed Central

    Jones, Sarah E.

    2016-01-01

    Degeneracy of respiratory network function would imply that anatomically discrete aspects of the brain stem are capable of producing respiratory rhythm. To test this theory we a priori transected brain stem preparations before reperfusion and reoxygenation at 4 rostrocaudal levels: 1.5 mm caudal to obex (n = 5), at obex (n = 5), and 1.5 (n = 7) and 3 mm (n = 6) rostral to obex. The respiratory activity of these preparations was assessed via recordings of phrenic and vagal nerves and lumbar spinal expiratory motor output. Preparations with a priori transection at level of the caudal brain stem did not produce stable rhythmic respiratory bursting, even when the arterial chemoreceptors were stimulated with sodium cyanide (NaCN). Reperfusion of brain stems that preserved the pre-Bötzinger complex (pre-BötC) showed spontaneous and sustained rhythmic respiratory bursting at low phrenic nerve activity (PNA) amplitude that occurred simultaneously in all respiratory motor outputs. We refer to this rhythm as the pre-BötC burstlet-type rhythm. Conserving circuitry up to the pontomedullary junction consistently produced robust high-amplitude PNA at lower burst rates, whereas sequential motor patterning across the respiratory motor outputs remained absent. Some of the rostrally transected preparations expressed both burstlet-type and regular PNA amplitude rhythms. Further analysis showed that the burstlet-type rhythm and high-amplitude PNA had 1:2 quantal relation, with burstlets appearing to trigger high-amplitude bursts. We conclude that no degenerate rhythmogenic circuits are located in the caudal medulla oblongata and confirm the pre-BötC as the primary rhythmogenic kernel. The absence of sequential motor patterning in a priori transected preparations suggests that pontine circuits govern respiratory pattern formation. PMID:26888109

  3. Testing the hypothesis of neurodegeneracy in respiratory network function with a priori transected arterially perfused brain stem preparation of rat.

    PubMed

    Jones, Sarah E; Dutschmann, Mathias

    2016-05-01

    Degeneracy of respiratory network function would imply that anatomically discrete aspects of the brain stem are capable of producing respiratory rhythm. To test this theory we a priori transected brain stem preparations before reperfusion and reoxygenation at 4 rostrocaudal levels: 1.5 mm caudal to obex (n = 5), at obex (n = 5), and 1.5 (n = 7) and 3 mm (n = 6) rostral to obex. The respiratory activity of these preparations was assessed via recordings of phrenic and vagal nerves and lumbar spinal expiratory motor output. Preparations with a priori transection at level of the caudal brain stem did not produce stable rhythmic respiratory bursting, even when the arterial chemoreceptors were stimulated with sodium cyanide (NaCN). Reperfusion of brain stems that preserved the pre-Bötzinger complex (pre-BötC) showed spontaneous and sustained rhythmic respiratory bursting at low phrenic nerve activity (PNA) amplitude that occurred simultaneously in all respiratory motor outputs. We refer to this rhythm as the pre-BötC burstlet-type rhythm. Conserving circuitry up to the pontomedullary junction consistently produced robust high-amplitude PNA at lower burst rates, whereas sequential motor patterning across the respiratory motor outputs remained absent. Some of the rostrally transected preparations expressed both burstlet-type and regular PNA amplitude rhythms. Further analysis showed that the burstlet-type rhythm and high-amplitude PNA had 1:2 quantal relation, with burstlets appearing to trigger high-amplitude bursts. We conclude that no degenerate rhythmogenic circuits are located in the caudal medulla oblongata and confirm the pre-BötC as the primary rhythmogenic kernel. The absence of sequential motor patterning in a priori transected preparations suggests that pontine circuits govern respiratory pattern formation. Copyright © 2016 the American Physiological Society.

  4. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies.

    PubMed

    Diamond, Ivan R; Grant, Robert C; Feldman, Brian M; Pencharz, Paul B; Ling, Simon C; Moore, Aideen M; Wales, Paul W

    2014-04-01

    To investigate how consensus is operationalized in Delphi studies and to explore the role of consensus in determining the results of these studies. Systematic review of a random sample of 100 English language Delphi studies, from two large multidisciplinary databases [ISI Web of Science (Thomson Reuters, New York, NY) and Scopus (Elsevier, Amsterdam, NL)], published between 2000 and 2009. Of the 100 studies, 98 purported to assess consensus, although a definition of consensus was provided in only 72 of the studies (64 a priori). The most common definition of consensus was percent agreement (25 studies), with 75% being the median threshold to define consensus. Although the authors concluded in 86 of the studies that consensus was achieved, consensus was only specified a priori (with a threshold value) in 42 of these studies. Achievement of consensus was related to the decision to stop the Delphi study in only 23 studies, with 70 studies terminating after a specified number of rounds. Although consensus generally is felt to be of primary importance to the Delphi process, definitions of consensus vary widely and are poorly reported. Improved criteria for reporting of methods of Delphi studies are required. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Multi-edge X-ray absorption spectroscopy study of road dust samples from a traffic area of Venice using stoichiometric and environmental references

    NASA Astrophysics Data System (ADS)

    Valotto, Gabrio; Cattaruzza, Elti; Bardelli, Fabrizio

    2017-02-01

    The appropriate selection of representative pure compounds to be used as references is a crucial step for successful analysis of X-ray absorption near edge spectroscopy (XANES) data, and it is often not a trivial task. This is particularly true when complex environmental matrices are investigated, since their elemental speciation is a priori unknown. In this paper, an investigation of the speciation of Cu, Zn, and Sb based on the use of conventional (stoichiometric compounds) and non-conventional (environmental samples or relevant certified materials) references is presented. This approach can be useful when the effectiveness of XANES analysis is limited by the difficulty of obtaining a set of references sufficiently representative of the investigated samples. Road dust samples collected along the bridge connecting Venice to the mainland were used to show the potentialities and the limits of this approach.

  6. Rotating full- and reduced-dimensional quantum chemical models of molecules

    NASA Astrophysics Data System (ADS)

    Fábri, Csaba; Mátyus, Edit; Császár, Attila G.

    2011-02-01

    A flexible protocol, applicable to semirigid as well as floppy polyatomic systems, is developed for the variational solution of the rotational-vibrational Schrödinger equation. The kinetic energy operator is expressed in terms of curvilinear coordinates, describing the internal motion, and rotational coordinates, characterizing the orientation of the frame fixed to the nonrigid body. Although the analytic form of the kinetic energy operator might be very complex, it does not need to be known a priori within this scheme as it is constructed automatically and numerically whenever needed. The internal coordinates can be chosen to best represent the system of interest and the body-fixed frame is not restricted to an embedding defined with respect to a single reference geometry. The features of the technique mentioned make it especially well suited to treat large-amplitude nuclear motions. Reduced-dimensional rovibrational models can be defined straightforwardly by introducing constraints on the generalized coordinates. In order to demonstrate the flexibility of the protocol and the associated computer code, the inversion-tunneling of the ammonia (14NH3) molecule is studied using one, two, three, four, and six active vibrational degrees of freedom, within both vibrational and rovibrational variational computations. For example, the one-dimensional inversion-tunneling model of ammonia is considered also for nonzero rotational angular momenta. It turns out to be difficult to significantly improve upon this simple model. Rotational-vibrational energy levels are presented for rotational angular momentum quantum numbers J = 0, 1, 2, 3, and 4.

  7. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.

    1993-01-01

    Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them to the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
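
    The two update styles contrasted above can be sketched side by side, together with brute-force normalization. The quaternion convention (scalar-last), the small error quaternion, and the numerical values below are illustrative assumptions, not output of an actual filter.

```python
# Sketch contrasting the multiplicative and additive quaternion updates, plus
# brute-force normalization. Convention: q = [qx, qy, qz, qw] (scalar last);
# the error values are arbitrary illustrative numbers.
import numpy as np

def quat_mult(p, q):
    """Hamilton product, scalar-last convention."""
    pv, pw = p[:3], p[3]
    qv, qw = q[:3], q[3]
    return np.concatenate([pw * qv + qw * pv + np.cross(pv, qv),
                           [pw * qw - pv @ qv]])

q_prior = np.array([0.0, 0.0, np.sin(0.2), np.cos(0.2)])   # a priori estimate

# MEKF-style update: small rotation dq applied multiplicatively.
dtheta = np.array([0.01, -0.02, 0.005])                    # estimated small angles
dq = np.concatenate([0.5 * dtheta, [1.0]])
q_mekf = quat_mult(q_prior, dq)

# AEKF-style update: algebraic four-component correction added directly.
q_aekf = q_prior + np.array([0.004, -0.011, 0.002, -0.001])

for name, q in (("MEKF", q_mekf), ("AEKF", q_aekf)):
    print(name, "norm before:", np.linalg.norm(q),
          "after normalization:", np.linalg.norm(q / np.linalg.norm(q)))
```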

  8. An a priori study of different tabulation methods for turbulent pulverised coal combustion

    NASA Astrophysics Data System (ADS)

    Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Jin, Hanhui; Fan, Jianren

    2018-05-01

    In many practical pulverised coal combustion systems, different oxidiser streams exist, e.g. the primary- and secondary-air streams in the power plant boilers, which makes the modelling of these systems challenging. In this work, three tabulation methods for modelling pulverised coal combustion are evaluated through an a priori study. Pulverised coal flames stabilised in a three-dimensional turbulent counterflow, consisting of different oxidiser streams, are simulated with detailed chemistry first. Then, the thermo-chemical quantities calculated with different tabulation methods are compared to those from detailed chemistry solutions. The comparison shows that the conventional two-stream flamelet model with a fixed oxidiser temperature cannot predict the flame temperature correctly. The conventional two-stream flamelet model is then modified to set the oxidiser temperature equal to the fuel temperature, both of which are varied in the flamelets. By this means, the variations of oxidiser temperature can be considered. It is found that this modified tabulation method performs very well on prediction of the flame temperature. The third tabulation method is an extended three-stream flamelet model that was initially proposed for gaseous combustion. The results show that the reference gaseous temperature profile can be overall reproduced by the extended three-stream flamelet model. Interestingly, it is found that the predictions of major species mass fractions are not sensitive to the oxidiser temperature boundary conditions for the flamelet equations in the a priori analyses.

  9. Bayesian nonparametric adaptive control using Gaussian processes.

    PubMed

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  10. A digitally facilitated citizen-science driven approach accelerates participant recruitment and increases study population diversity.

    PubMed

    Puhan, Milo A; Steinemann, Nina; Kamm, Christian P; Müller, Stephanie; Kuhle, Jens; Kurmann, Roland; Calabrese, Pasquale; Kesselring, Jürg; von Wyl, Viktor; Swiss Multiple Sclerosis Registry (SMSR)

    2018-05-16

    Our aim was to assess whether a novel approach of digitally facilitated, citizen-science research, as followed by the Swiss Multiple Sclerosis Registry (Swiss MS Registry), leads to accelerated participant recruitment and more diverse study populations compared with traditional research studies where participants are mostly recruited in study centres without the use of digital technology. The Swiss MS Registry is a prospective, longitudinal, observational study covering all Switzerland. Participants actively contribute to the Swiss MS Registry, from defining research questions to providing data (online or on a paper form) and co-authoring papers. We compared the recruitment dynamics over the first 18 months with the a priori defined recruitment goals and assessed whether a priori defined groups were enrolled who are likely to be missed by traditional research studies. The goal to recruit 400 participants in the first year was reached after only 20 days, and by the end of 18 months 1700 participants had enrolled in the Swiss MS Registry, vastly exceeding expectations. Of the a priori defined groups with potential underrepresentation in other studies, 645 participants (46.5%) received care at a private neurology practice, 167 participants (12%) did not report any use of healthcare services in the past 12 months, 32 (2.3%) participants lived in rural mountainous areas, and 20 (2.0% of the 1041 for whom this information was available) lived in a long-term care facility. Having both online and paper options increased diversity of the study population in terms of geographic origin and type and severity of disease, as well as use of health care services. In particular, paper enrolees tended to be older, more frequently affected by progressive MS types and more likely to have accessed healthcare services in the past 12 months. Academic and industry-driven medical research faces substantial challenges in terms of patient involvement, recruitment, relevance and generalisability. Digital studies and stakeholder engagement may have enormous potential for medical research. But many digital studies are based on limited participant information and/or informed consent and unclear data ownership, and are subject to selection bias, confounding and information bias. The Swiss MS Registry serves as an example of a digitally enhanced, citizen-science study that leverages the advantages of both traditional medical research, with its established research methods, and novel societal and technological developments, while mitigating their ethical and legal disadvantages and risks.

  11. An iterative shrinkage approach to total-variation image restoration.

    PubMed

    Michailovich, Oleg V

    2011-05-01

    The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information, commonly referred to simply as priors, is essential for image restoration, rendering it stable and robust to noise. Moreover, using the priors makes the recovered images exhibit some plausible features of their original counterpart. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based upon the method of iterative shrinkage (aka iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method consists in working directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae. Finally, a number of standard examples of image deblurring are demonstrated, in which the proposed method can provide restoration results of superior quality as compared to the case of sparse-wavelet deconvolution.
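
    The two building blocks named above (linear filtering and soft thresholding) are the same ones that appear in the generic iterated-thresholding (ISTA) recursion. The sketch below runs plain ISTA on a synthetic 1-D deconvolution problem with a sparsity prior, purely to illustrate the filter-then-shrink structure; it is not the paper's TV-specific algorithm, and all problem data are invented.

```python
# Generic iterated-thresholding (ISTA) sketch: a gradient (linear filtering)
# step on the data-fidelity term followed by soft thresholding, on a synthetic
# 1-D deconvolution problem. Not the paper's TV-specific recursion.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(7)
n = 256
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 2, 8)   # sparse signal
h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)                # blur kernel
h /= h.sum()
A = np.array([np.convolve(np.eye(n)[i], h, mode="same") for i in range(n)]).T
y = A @ x_true + 0.01 * rng.normal(size=n)                      # blurred + noisy

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2                          # 1 / Lipschitz const.
x = np.zeros(n)
for _ in range(300):
    grad = A.T @ (A @ x - y)                            # linear filtering step
    x = soft_threshold(x - step * grad, step * lam)     # shrinkage step
print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```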

  12. Object-based modeling, identification, and labeling of medical images for content-based retrieval by querying on intervals of attribute values

    NASA Astrophysics Data System (ADS)

    Thies, Christian; Ostwald, Tamara; Fischer, Benedikt; Lehmann, Thomas M.

    2005-04-01

    The classification and measuring of objects in medical images are important in radiological diagnostics and education, especially when using large databases as knowledge resources, for instance a picture archiving and communication system (PACS). The main challenge is the modeling of medical knowledge and the diagnostic context to label the sought objects. This task is referred to as closing the semantic gap between low-level pixel information and high level application knowledge. This work describes an approach which allows labeling of a-priori unknown objects in an intuitive way. Our approach consists of four main components. At first, an image is completely decomposed into all visually relevant partitions on different scales. This provides a hierarchically organized set of regions. Afterwards, for each of the obtained regions a set of descriptive features is computed. In this data structure objects are represented by regions with characteristic attributes. The actual object identification is the formulation of a query. It consists of attributes on which intervals are defined, describing those regions that correspond to the sought objects. Since the objects are a-priori unknown, they are described by a medical expert by means of an intuitive graphical user interface (GUI). This GUI is the fourth component. It enables complex object definitions by browsing the data structure and examining the attributes to formulate the query. The query is executed, and if the sought objects have not been identified, its parameterization is refined. By using this heuristic approach, object models for hand radiographs have been developed to extract bones from a single hand in different anatomical contexts. This demonstrates the applicability of the labeling concept. By using a rule for metacarpal bones on a series of 105 images, this type of bone could be retrieved with a precision of 0.53% and a recall of 0.6%.
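    The query mechanism described above amounts to selecting regions whose attribute values fall within user-defined intervals. A minimal sketch of that idea follows, with entirely hypothetical attribute names and values rather than the system's actual feature set.

    ```python
    def select_regions(regions, query):
        """Return the regions whose attributes fall inside every queried interval.
        'regions' is a list of attribute dictionaries; 'query' maps attribute name
        to an (lo, hi) interval. Attribute names here are illustrative only."""
        return [r for r in regions
                if all(lo <= r[attr] <= hi for attr, (lo, hi) in query.items())]

    # toy usage: a "metacarpal-like" rule expressed as attribute intervals
    regions = [
        {"id": 1, "area": 520, "elongation": 3.1, "mean_gray": 0.62},
        {"id": 2, "area": 1800, "elongation": 1.2, "mean_gray": 0.35},
    ]
    hits = select_regions(regions, {"area": (300, 900), "elongation": (2.5, 5.0)})
    ```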

  13. A Priori Calculations of Thermodynamic Functions

    DTIC Science & Technology

    1991-12-01

    Ten closes this work with a brief summary and offers suggestions for improving the model and for future research. CHAPTER TWO: In this chapter, we... we must first define the theoretical model. The molecules studied in this work contain up to 10 non-hydrogen atoms and, in general, are not... is given by equation (2-31) for two different geometries or two different theoretical models. Equation (2-31) shows the error in the force constant has

  14. Precise regional baseline estimation using a priori orbital information

    NASA Technical Reports Server (NTRS)

    Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey

    1990-01-01

    A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline, located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning January 20, 21, 25, 26, and 27, 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optimal set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.
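    In the simplest weighted-least-squares setting, constraining estimated parameters with a priori information can be expressed as adding pseudo-observations with an a priori weight matrix. The sketch below is a generic illustration of that idea, not the GIPSY estimation strategy itself; all matrices and values are toy placeholders.

    ```python
    import numpy as np

    def constrained_lsq(A, y, W, x0, P0):
        """Weighted least squares with a priori (pseudo-observation) constraints:
        minimize (y - A x)^T W (y - A x) + (x - x0)^T P0 (x - x0).
        A large P0 pulls the solution toward the a priori values x0; P0 -> 0 removes the constraint."""
        N = A.T @ W @ A + P0
        b = A.T @ W @ y + P0 @ x0
        return np.linalg.solve(N, b)

    # toy usage: 3 "orbit" parameters, 10 observations, a tight constraint on the third parameter
    rng = np.random.default_rng(1)
    A = rng.standard_normal((10, 3))
    x_true = np.array([1.0, -0.5, 2.0])
    y = A @ x_true + 0.01 * rng.standard_normal(10)
    W = np.eye(10) / 0.01**2
    x0 = np.array([0.9, -0.4, 1.8])              # a priori parameter values
    P0 = np.diag([1e2, 1e2, 1e6])                # a priori weights (1/sigma^2)
    x_hat = constrained_lsq(A, y, W, x0, P0)
    ```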

  15. BELM: Bayesian extreme learning machine.

    PubMed

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; obtains the confidence intervals (CIs) without the need to apply computationally intensive methods, e.g., bootstrap; and presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, reduced probability of model overfitting, and use of a priori knowledge.
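    A minimal sketch of the idea follows, assuming fixed prior precision alpha and noise precision beta (the paper may treat these hyperparameters differently): the hidden layer is random as in a standard ELM, and a Bayesian linear regression on the output weights yields predictive means and variances, hence confidence intervals without bootstrapping.

    ```python
    import numpy as np

    def belm_fit(X, y, n_hidden=50, alpha=1.0, beta=25.0, seed=0):
        """Bayesian ELM sketch: random hidden layer + Bayesian linear regression
        on the output weights. alpha: prior precision, beta: noise precision."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (ELM)
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                            # hidden-layer activations
        S = np.linalg.inv(alpha * np.eye(n_hidden) + beta * H.T @ H)  # posterior covariance
        m = beta * S @ H.T @ y                            # posterior mean of output weights
        return dict(W=W, b=b, m=m, S=S, beta=beta)

    def belm_predict(model, X):
        H = np.tanh(X @ model["W"] + model["b"])
        mean = H @ model["m"]
        # Predictive variance gives confidence intervals without bootstrap
        var = 1.0 / model["beta"] + np.einsum("ij,jk,ik->i", H, model["S"], H)
        return mean, np.sqrt(var)

    # toy usage: 95% CI ~ mean +/- 1.96 * std
    X = np.linspace(-3, 3, 200)[:, None]
    y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).standard_normal(200)
    model = belm_fit(X, y)
    mu, sd = belm_predict(model, X)
    ```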

  16. A model for medical decision making and problem solving.

    PubMed

    Werner, M

    1995-08-01

    Clinicians confront the classical problem of decision making under uncertainty, but a universal procedure by which they deal with this situation, both in diagnosis and therapy, can be defined. This consists of choosing a specific course of action from the available alternatives so as to reduce uncertainty. Formal analysis shows that the expected value of this process depends on the a priori probabilities confronted, the discriminatory power of the action chosen, and the values and costs associated with possible outcomes. Clinical problem-solving represents the construction of a systematic strategy from multiple decisional building blocks. Depending on the level of uncertainty physicians attach to their working hypothesis, they can choose among at least four prototype strategies: pattern recognition, the hypothetico-deductive process, arborization, and exhaustion. However, the resolution of real-life problems can involve a combination of these game plans. Formal analysis of each strategy permits definition of its appropriate a priori probabilities, action characteristics, and cost implications.
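    The structure sketched in the abstract (a priori probability, discriminatory power of the chosen action, and outcome values/costs) can be made concrete with Bayes' rule and an expected-utility comparison; all numbers below are purely hypothetical.

    ```python
    def posterior(prior, sensitivity, specificity, test_positive):
        """Bayes' rule for a binary diagnostic test; sensitivity/specificity stand in
        for the 'discriminatory power' of the chosen action."""
        if test_positive:
            p_d, p_nd = sensitivity, 1.0 - specificity
        else:
            p_d, p_nd = 1.0 - sensitivity, specificity
        return p_d * prior / (p_d * prior + p_nd * (1.0 - prior))

    def expected_utility(p_disease, u_if_disease, u_if_healthy):
        # Expected value of a course of action, given the (posterior) disease probability
        return p_disease * u_if_disease + (1.0 - p_disease) * u_if_healthy

    # hypothetical numbers: a priori probability 0.2, test sensitivity 0.9, specificity 0.85
    p_post = posterior(0.2, 0.9, 0.85, test_positive=True)                  # ~0.60
    ev_treat = expected_utility(p_post, u_if_disease=1.0, u_if_healthy=-0.2)
    ev_wait = expected_utility(p_post, u_if_disease=-1.0, u_if_healthy=0.0)
    decision = "treat" if ev_treat > ev_wait else "do not treat"
    ```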

  17. Potential and Limitations of Cochrane Reviews in Pediatric Cardiology: A Systematic Analysis.

    PubMed

    Poryo, Martin; Khosrawikatoli, Sara; Abdul-Khaliq, Hashim; Meyer, Sascha

    2017-04-01

    Evidence-based medicine has contributed substantially to the quality of medical care in pediatric and adult cardiology. However, our impression from the bedside is that a substantial number of Cochrane reviews generate inconclusive data that are of limited clinical benefit. We performed a systematic synopsis of Cochrane reviews published between 2001 and 2015 in the field of pediatric cardiology. Main outcome parameters were the number and percentage of conclusive, partly conclusive, and inconclusive reviews as well as their recommendations and their development over three a priori defined intervals. In total, 69 reviews were analyzed. Most of them examined preterm and term neonates (36.2%), whereas 33.3% also included non-pediatric patients. Leading topics were pharmacological issues (71.0%) followed by interventional (10.1%) and operative procedures (2.9%). The largest proportion of reviews was inconclusive (42.9%), while 36.2% were conclusive and 21.7% partly conclusive. Although the number of published reviews increased during the three a priori defined time intervals, reviews with "no specific recommendations" remained stable while "recommendations in favor of an intervention" clearly increased. Main reasons for missing recommendations were insufficient data (n = 41) as well as an insufficient number of trials (n = 22) or poor study quality (n = 19). There is still a need for high-quality research, which will likely yield a greater number of Cochrane reviews with conclusive results.

  18. A Lightweight Intelligent Virtual Cinematography System for Machinima Production

    DTIC Science & Technology

    2007-01-01

    portmanteau of machine and cinema, machinima refers to the innovation of leveraging video game technology to greatly ease the creation of computer... selecting camera angles to capture the action of an a priori unknown script as aesthetically appropriate cinema. There are a number of challenges therein... Proc. of the 4th International Conf. on Autonomous Agents. Young, R.M. and Riedl, M.O. 2003. Towards an Architecture for Intelligent Control of Narrative in Interactive Virtual Worlds. In Proc. of IUI 2003.

  19. Multi-edge X-ray absorption spectroscopy study of road dust samples from a traffic area of Venice using stoichiometric and environmental references.

    PubMed

    Valotto, Gabrio; Cattaruzza, Elti; Bardelli, Fabrizio

    2017-02-15

    The appropriate selection of representative pure compounds to be used as references is a crucial step for successful analysis of X-ray absorption near edge spectroscopy (XANES) data, and it is often not a trivial task. This is particularly true when complex environmental matrices are investigated, since their elemental speciation is a priori unknown. In this paper, an investigation of the speciation of Cu, Zn, and Sb based on the use of conventional (stoichiometric compounds) and non-conventional (environmental samples or relevant certified materials) references is explored. This method can be useful when the effectiveness of XANES analysis is limited because of the difficulty in obtaining a set of references sufficiently representative of the investigated samples. Road dust samples collected along the bridge connecting Venice to the mainland were used to show the potentialities and the limits of this approach. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Lorentz Invariance of Gravitational Lagrangians in the Space of Reference Frames

    NASA Astrophysics Data System (ADS)

    Cognola, G.

    1980-06-01

    The recently proposed theories of gravitation in the space of reference frames S are based on a Lagrangian invariant with respect to the homogeneous Lorentz group. However, in theories of this kind, the Lorentz invariance is not a necessary consequence of some physical principles, as in the theories formulated in space-time, but rather a purely aesthetic requirement. In the present paper, we give a systematic method for the construction of gravitational theories in the space S, without assuming a priori the Lorentz invariance of the Lagrangian. The Einstein-Cartan equations of gravitation are obtained by requiring only that the Lagrangian is invariant under proper rotations and has particular transformation properties under space reflections and space-time dilatations.

  1. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    NASA Astrophysics Data System (ADS)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in many research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and that of the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It eliminates the need to measure the first imaging distance. With a designed update formula, it significantly increases the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.

  2. Development of a Publicly Available, Comprehensive Database of Fiber and Health Outcomes: Rationale and Methods

    PubMed Central

    Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.

    2016-01-01

    Background Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) Use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which this data can be extracted and updated. PMID:27348733

  3. Automated segmentation of intraretinal layers from macular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Haeker, Mona; Sonka, Milan; Kardon, Randy; Shah, Vinay A.; Wu, Xiaodong; Abràmoff, Michael D.

    2007-03-01

    Commercially available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated series (up to six, when available) were acquired for each eye and were first registered and averaged together, resulting in a composite image for each angular location. The six surfaces defining the five layers were then found on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori-determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries were independently defined by two human experts on one raw scan of each eye. Using the average of the experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 ± 4.0 μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two observers (p < 0.05, pixel size of 50 × 2 μm).

  4. Persistent Identifiers as Boundary Objects

    NASA Astrophysics Data System (ADS)

    Parsons, M. A.; Fox, P. A.

    2017-12-01

    In 1989, Leigh Star and Jim Griesemer defined the seminal concept of 'boundary objects'. These 'objects' are what Latour calls 'immutable mobiles' that enable communication and collaboration across difference by helping meaning to be understood in different contexts. As Star notes, they are a sort of arrangement that allows different groups to work together without (a priori) consensus. Part of the idea is to recognize and allow for the 'interpretive flexibility' that is central to much of the 'constructivist' approach in the sociology of science. Persistent Identifiers (PIDs) can clearly act as boundary objects, but people do not usually assume that they enable interpretive flexibility. After all, they are meant to be unambiguous, machine-interpretable identifiers of defined artifacts. In this paper, we argue that PIDs can fill at least two roles: 1) that of the standardized form, where there is strong agreement on what is being represented and how, and 2) that of the idealized type, a more abstract concept that allows many different representations. We further argue that these seemingly abstract conceptions actually help us implement PIDs more effectively to link data, publications, various other artifacts, and especially people. Considering PIDs as boundary objects can help us address issues such as what level of granularity is necessary for PIDs, what metadata should be directly associated with PIDs, and what purpose the PID is serving (reference, provenance, credit, etc.). In short, sociological theory can improve data sharing standards and their implementation in a way that enables broad interdisciplinary data sharing and reuse. We will illustrate this with several specific examples of Earth science data.

  5. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
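    For reference, a standard first-order velocity-pressure-vorticity system of the kind analyzed here can be written as follows (sign and scaling conventions may differ from the paper); the least-squares functional is then the sum of the squared residuals of these equations:

    ```latex
    \begin{aligned}
    \nabla\cdot\mathbf{u} &= 0, \\
    \omega - \nabla\times\mathbf{u} &= 0, \\
    \nu\,\nabla\times\omega + \nabla p &= \mathbf{f},
    \end{aligned}
    \qquad
    J(\mathbf{u},p,\omega)
    = \|\nabla\cdot\mathbf{u}\|_0^2
    + \|\omega - \nabla\times\mathbf{u}\|_0^2
    + \|\nu\,\nabla\times\omega + \nabla p - \mathbf{f}\|_0^2 .
    ```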

  6. [Generalization of the results of clinical studies through the analysis of subgroups].

    PubMed

    Costa, João; Fareleira, Filipa; Ascensão, Raquel; Vaz Carneiro, António

    2012-01-01

    Subgroup analyses in clinical trials are usually performed to define the potential heterogeneity of treatment effect in relation to the baseline risk, physiopathology, practical application of therapy, or the under-utilization in clinical practice of effective interventions due to uncertainties about their benefit/risk ratio. When appropriately planned, subgroup analyses are a valid methodology to define benefits in subgroups of patients, thus providing good quality evidence to support clinical decision making. However, in order to be correct, subgroup analyses should be defined a priori, limited in number, fully reported and, most importantly, supported by statistical tests for interaction. In this paper we present an example from the treatment of post-menopausal osteoporosis, in which the benefits of an intervention (the higher the fracture risk, the greater the benefit) with a specific agent (bazedoxifene) were only revealed after a post-hoc analysis of the initial global trial sample.

  7. FRUIT: An operational tool for multisphere neutron spectrometry in workplaces

    NASA Astrophysics Data System (ADS)

    Bedogni, Roberto; Domingo, Carles; Esposito, Adolfo; Fernández, Francisco

    2007-10-01

    FRUIT (Frascati Unfolding Interactive Tool) is an unfolding code for Bonner sphere spectrometers (BSS) developed under the LabVIEW environment at the INFN-Frascati National Laboratory. It models a generic neutron spectrum as the superposition of up to four components (thermal, epithermal, fast and high energy), fully defined by up to seven positive parameters. Different physical models are available to unfold the sphere counts, covering the majority of the neutron spectra encountered in workplaces. The iterative algorithm uses Monte Carlo methods to vary the parameters and derive the final spectrum as the limit of a succession of spectra fulfilling the established convergence criteria. Uncertainties on the final results are evaluated taking into consideration the different sources of uncertainty affecting the input data. Relevant features of FRUIT are (1) a high level of interactivity, allowing the user to follow the convergence process, (2) the possibility of modifying the convergence tolerances during the run, allowing rapid achievement of meaningful solutions, and (3) the reduced dependence of the results on the initial hypothesis. This provides a useful instrument for spectrometric measurements in workplaces, where detailed a priori information is usually unavailable. This paper describes the characteristics of the code and presents the results of performance tests over a significant variety of reference and workplace neutron spectra ranging from thermal up to hundreds of MeV.

  8. Bayesian inference on earthquake size distribution: a case study in Italy

    NASA Astrophysics Data System (ADS)

    Licia, Faenza; Carlo, Meletti; Laura, Sandri

    2010-05-01

    This paper focuses on the statistical distribution of earthquake sizes using Bayesian inference. The strategy consists of defining an a priori distribution based on instrumental seismicity, modeled as a power law. By using the observed historical data, the power law is then modified in order to obtain the posterior distribution. The aim of this paper is to define the earthquake size distribution using all the available seismic databases (i.e., instrumental and historical catalogs) and a robust statistical technique. We apply this methodology to the Italian seismicity, dividing the territory into source zones as done for the seismic hazard assessment, taken here as a reference model. The results suggest that each area has its own peculiar trend: while the power law is able to capture the mean aspect of the earthquake size distribution, the posterior emphasizes different slopes in different areas. Our results are in general agreement with the ones used in the seismic hazard assessment in Italy. However, there are areas in which a flattening in the curve is shown, meaning a significant departure from the power law behavior and implying that there are some local aspects that a power law distribution is not able to capture.
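    A minimal sketch of the general idea (not the authors' implementation): encode the instrumental power-law information as a prior on the Gutenberg-Richter b-value and update it with historical magnitudes through an exponential likelihood above a completeness magnitude. All numbers are hypothetical.

    ```python
    import numpy as np

    def posterior_b(historical_m, m_min, b_grid, prior_mean, prior_sd):
        """Grid posterior for the Gutenberg-Richter b-value: Gaussian prior (from
        instrumental seismicity) times an exponential likelihood for magnitudes >= m_min."""
        beta = b_grid * np.log(10.0)                      # GR exponent in natural log
        n = len(historical_m)
        s = np.sum(historical_m - m_min)
        loglik = n * np.log(beta) - beta * s              # exponential GR likelihood
        logprior = -0.5 * ((b_grid - prior_mean) / prior_sd) ** 2
        logpost = loglik + logprior
        post = np.exp(logpost - logpost.max())
        return post / np.trapz(post, b_grid)              # normalized posterior density

    b_grid = np.linspace(0.5, 2.0, 301)
    hist_m = np.array([5.8, 6.1, 5.5, 6.4, 5.9])          # hypothetical historical magnitudes
    pdf = posterior_b(hist_m, m_min=5.5, b_grid=b_grid, prior_mean=1.0, prior_sd=0.15)
    b_map = b_grid[np.argmax(pdf)]
    ```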

  9. Data-free and data-driven spectral perturbations for RANS UQ

    NASA Astrophysics Data System (ADS)

    Edeling, Wouter; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    Despite recent developments in high-fidelity turbulent flow simulations, RANS modeling is still widely used by industry, due to its inherent low cost. Since accuracy is a concern in RANS modeling, model-form UQ is an essential tool for assessing the impacts of this uncertainty on quantities of interest. Applying the spectral decomposition to the modeled Reynolds-Stress Tensor (RST) allows for the introduction of decoupled perturbations into the baseline intensity (kinetic energy), shape (eigenvalues), and orientation (eigenvectors). This constitutes a natural methodology to evaluate the model-form uncertainty associated with different aspects of RST modeling. In a predictive setting, one frequently encounters an absence of any relevant reference data. To make data-free predictions with quantified uncertainty, we employ physical bounds to a-priori define maximum spectral perturbations. When propagated, these perturbations yield intervals of engineering utility. High-fidelity data opens up the possibility of inferring a distribution of uncertainty, by means of various data-driven machine-learning techniques. We will demonstrate our framework on a number of flow problems where RANS models are prone to failure. This research was partially supported by the Defense Advanced Research Projects Agency under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo), and the DOE PSAAP-II program.
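    A minimal sketch of the shape (eigenvalue) part of the perturbation follows, assuming a given modeled Reynolds-stress tensor; the intensity and eigenvector perturbations, and the physically derived bounds used in the data-free case, are not reproduced here.

    ```python
    import numpy as np

    def perturb_reynolds_stress(tau, delta=0.5, target=(2/3, -1/3, -1/3)):
        """Shape perturbation of a modeled Reynolds-stress tensor: move the anisotropy
        eigenvalues a fraction 'delta' toward a limiting state (here the one-component
        limit), keeping kinetic energy and eigenvectors fixed."""
        k = 0.5 * np.trace(tau)                           # turbulent kinetic energy
        a = tau / (2.0 * k) - np.eye(3) / 3.0             # anisotropy tensor
        eigval, eigvec = np.linalg.eigh(a)                # ascending eigenvalues
        eigval_pert = (1.0 - delta) * eigval + delta * np.sort(np.array(target))
        a_pert = eigvec @ np.diag(eigval_pert) @ eigvec.T
        return 2.0 * k * (a_pert + np.eye(3) / 3.0)

    tau = np.array([[0.9, 0.1, 0.0],
                    [0.1, 0.6, 0.0],
                    [0.0, 0.0, 0.5]])
    tau_1c = perturb_reynolds_stress(tau, delta=1.0)      # fully collapsed to the 1C limit
    ```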

  10. Morphological integration of anatomical, developmental, and functional postcranial modules in the crab-eating macaque (Macaca fascicularis).

    PubMed

    Conaway, Mark A; Schroeder, Lauren; von Cramon-Taubadel, Noreen

    2018-03-22

    Integration and modularity reflect the coordinated action of past evolutionary processes and, in turn, constrain or facilitate phenotypic evolvability. Here, we analyze magnitudes of integration in the macaque postcranium to test whether 20 a priori defined modules are (1) more tightly integrated than random sets of postcranial traits, and (2) differentiated by mode of definition, with developmental modules expected to be more integrated than functional or anatomical modules. The 3D morphometric data collected for eight limb and girdle bones for 60 macaques were collated into anatomical, developmental, and functional modules. A resampling technique was used to create random samples of integration values for each module for statistical comparison. Our results showed that not all a priori defined modules were more strongly integrated than random samples of postcranial traits and that specific types of modules did not present consistent patterns of integration. Rather, girdle and joint modules were consistently less integrated than limb modules, and forelimb elements were less integrated than hindlimbs. The results suggest that morphometrically complex modules tend to be less integrated than simple limb bones, irrespective of the number of available traits. However, differences in integration of the fore- and hindlimb more likely reflect the multitude of locomotory, feeding, and social functions involved. It remains to be tested whether patterns of integration identified here are primate universals, and to what extent they vary depending on phylogenetic or functional factors. © 2018 Wiley Periodicals, Inc.

  11. A priori and a posteriori analysis of the flow around a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.

    2017-11-01

    The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results present in the literature. In the present work, we aim to assess this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field in reproducing the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution which balances the competing needs of reducing the computational cost and of adequately resolving the flow dynamics. The effectiveness of the proposed resolution is then verified by means of an a posteriori analysis of actual LES data obtained with the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.

  12. Symbolic Regression for the Estimation of Transfer Functions of Hydrological Models

    NASA Astrophysics Data System (ADS)

    Klotz, D.; Herrnegger, M.; Schulz, K.

    2017-11-01

    Current concepts for parameter regionalization of spatially distributed rainfall-runoff models rely on the a priori definition of transfer functions that globally map land surface characteristics (such as soil texture, land use, and digital elevation) into the model parameter space. However, these transfer functions are often chosen ad hoc or derived from small-scale experiments. This study proposes and tests an approach for inferring the structure and parametrization of possible transfer functions from runoff data to potentially circumvent these difficulties. The concept uses context-free grammars to generate candidate propositions for transfer functions. The resulting structure can then be parametrized with classical optimization techniques. Several virtual experiments are performed to examine the potential for an appropriate estimation of transfer functions, all of them using a very simple conceptual rainfall-runoff model with data from the Austrian Mur catchment. The results suggest that a priori defined transfer functions are in general well identifiable by the method. However, the deduction process might be inhibited, e.g., by noise in the runoff observation data, often leading to transfer function estimates of lower structural complexity.
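    A toy illustration of the grammar-based generation step follows, with a deliberately tiny hypothetical grammar (one land-surface attribute 's' and placeholder constants 'c' to be calibrated later); the study's grammar and parametrization are of course richer.

    ```python
    import random

    # Hypothetical toy grammar for expressions mapping an attribute 's' to a parameter.
    GRAMMAR = {
        "<expr>": [["<expr>", "+", "<expr>"], ["<expr>", "*", "<expr>"],
                   ["(", "<expr>", ")"], ["<var>"], ["<const>"]],
        "<var>": [["s"]],
        "<const>": [["c"]],   # placeholder constant, calibrated later by optimization
    }

    def expand(symbol, depth=0, max_depth=4):
        """Randomly expand a grammar symbol into an expression string."""
        if symbol not in GRAMMAR:
            return symbol                      # terminal token
        rules = GRAMMAR[symbol]
        if depth >= max_depth:
            rules = rules[-2:]                 # force terminal rules near the depth limit
        rule = random.choice(rules)
        return "".join(expand(t, depth + 1, max_depth) for t in rule)

    random.seed(3)
    candidates = [expand("<expr>") for _ in range(5)]   # e.g. 'c*(s+c)', 's', ...
    ```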

  13. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
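    As a concrete illustration of the Tikhonov option only (a crude discretization, not the authors' compact-set machinery): build a simple quadrature matrix for the forward Abel transform and solve the regularized normal equations with a second-difference smoothness operator.

    ```python
    import numpy as np

    def abel_matrix(r):
        """Simple quadrature for the forward Abel transform
        g(y_i) = 2 * integral_{y_i}^{R} f(r) r / sqrt(r^2 - y_i^2) dr,
        evaluated at cell midpoints to keep the kernel finite (illustration only)."""
        n = len(r)
        dr = r[1] - r[0]
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                rj = r[j] + 0.5 * dr
                A[i, j] = 2.0 * rj * dr / np.sqrt(rj**2 - r[i]**2)
        return A

    def tikhonov_invert(A, g, lam):
        """Tikhonov-regularized solution with a second-difference smoothness operator."""
        n = A.shape[1]
        D = np.diff(np.eye(n), 2, axis=0)      # discrete second derivative
        return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ g)

    r = np.linspace(0.0, 1.0, 100)
    f_true = np.exp(-((r / 0.4) ** 2))         # synthetic radial profile
    A = abel_matrix(r)
    g = A @ f_true + 0.01 * np.random.default_rng(0).standard_normal(len(r))
    f_rec = tikhonov_invert(A, g, lam=1e-3)
    ```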

  14. Keyword extraction by entropy difference between the intrinsic and extrinsic mode

    NASA Astrophysics Data System (ADS)

    Yang, Zhen; Lei, Jianjun; Fan, Kefeng; Lai, Yingxu

    2013-10-01

    This paper proposes a new metric to evaluate and rank the relevance of words in a text. The method uses the Shannon entropy difference between the intrinsic and extrinsic mode, which refers to the fact that relevant words significantly reflect the author’s writing intention, i.e., their occurrences are modulated by the author’s purpose, while the irrelevant words are distributed randomly in the text. By using The Origin of Species by Charles Darwin as a representative text sample, the performance of our detector is demonstrated and compared to previous proposals. Since a reference text “corpus” (all of an author's writings, books, papers, etc., i.e., his collected works) is not needed, our approach is especially suitable for single documents for which no a priori information is available.
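    A simplified reading of the idea (not the paper's exact estimator): compare the Shannon entropy of a word's distribution over text chunks in the real text (intrinsic) with that in randomly shuffled text (extrinsic); words whose real distribution is much more concentrated than the shuffled baseline are ranked as keywords.

    ```python
    import math, random, re
    from collections import Counter

    def chunk_entropy(tokens, word, n_chunks=20):
        """Shannon entropy of a word's counts across equal-sized chunks of the text."""
        size = max(1, len(tokens) // n_chunks)
        counts = [tokens[i:i + size].count(word) for i in range(0, len(tokens), size)]
        total = sum(counts)
        if total == 0:
            return 0.0
        probs = [c / total for c in counts if c > 0]
        return -sum(p * math.log(p) for p in probs)

    def rank_keywords(text, n_chunks=20, n_shuffles=5, min_count=10, seed=0):
        """Score each frequent word by the entropy gap between shuffled (extrinsic)
        and real (intrinsic) text; a larger gap means stronger clustering."""
        rng = random.Random(seed)
        tokens = re.findall(r"[a-z]+", text.lower())
        vocab = [w for w, c in Counter(tokens).items() if c >= min_count]
        scores = {}
        for w in vocab:
            h_int = chunk_entropy(tokens, w, n_chunks)
            h_ext = 0.0
            for _ in range(n_shuffles):
                shuffled = tokens[:]
                rng.shuffle(shuffled)
                h_ext += chunk_entropy(shuffled, w, n_chunks)
            scores[w] = h_ext / n_shuffles - h_int
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    ```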

  15. Reflecting on the structure of soil classification systems: insights from a proposal for integrating subsoil data into soil information systems

    NASA Astrophysics Data System (ADS)

    Dondeyne, Stefaan; Juilleret, Jérôme; Vancampenhout, Karen; Deckers, Jozef; Hissler, Christophe

    2017-04-01

    Classification of soils in both the World Reference Base for soil resources (WRB) and Soil Taxonomy hinges on the identification of diagnostic horizons and characteristics. However, as these features often occur within the first 100 cm, these classification systems convey little information on subsoil characteristics. An integrated knowledge of the soil, soil-to-substratum and deeper substratum continuum is required when dealing with environmental issues such as vegetation ecology, water quality or the Critical Zone in general. Therefore, we recently proposed a classification system of the subsolum complementing current soil classification systems. By reflecting on the structure of the subsoil classification system, which is inspired by WRB, we aim to foster a discussion on some potential future developments of WRB. For classifying the subsolum we define Regolite, Saprolite, Saprock and Bedrock as four Subsolum Reference Groups, each corresponding to a different weathering stage of the subsoil. Principal qualifiers can be used to categorize intergrades of these Subsolum Reference Groups, while morphologic and lithologic characteristics can be presented with supplementary qualifiers. We argue that adopting a low hierarchical structure (akin to WRB and in contrast to the strong hierarchical structure of Soil Taxonomy) offers the advantage of having an open classification system, avoiding the need for a priori knowledge of all possible combinations which may be encountered in the field. Just as in WRB, we also propose to use principal and supplementary qualifiers as a second level of classification. However, in contrast to WRB, we propose to reserve the principal qualifiers for intergrades and to regroup the supplementary qualifiers into thematic categories (morphologic or lithologic). Structuring the qualifiers in this manner should facilitate the integration and handling of both soil and subsoil classification units into soil information systems and calls for paying attention to these structural issues in future developments of WRB.

  16. Venus: Mantle convection, hotspots, and tectonics

    NASA Technical Reports Server (NTRS)

    Phillips, R. J.

    1989-01-01

    The putative paradigm that planets of the same size and mass have the same tectonic style led to the adaptation of the mechanisms of terrestrial plate tectonics as the a priori model of the way Venus should behave. Data acquired over the last decade by Pioneer Venus, Venera, and ground-based radar have modified this view sharply and have illuminated the lack of detailed understanding of the plate tectonic mechanism. For reference, terrestrial mechanisms are briefly reviewed. Venusian lithospheric divergence, hotspot model, and horizontal deformation theories are proposed and examined.

  17. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for a future space station. It has gone through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid increase of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, could induce semi-major axis errors of up to a few kilometers and overall position errors of several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of the diurnal mean densities. This series reflects the recent variation of the atmospheric density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach can serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
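    In its simplest form, the "fit the recent diurnal-mean density series, then extrapolate" step could be a linear trend plus a few periodic terms fitted by least squares. The periods below (solar-rotation-like) and all data are hypothetical placeholders, not the actual analysis.

    ```python
    import numpy as np

    def fit_and_predict_density(t_days, rho, periods=(27.0, 13.5), t_future=None):
        """Least-squares fit of a mean-density series with a linear trend plus
        harmonics of the given periods (in days), then optional extrapolation."""
        def design(t):
            cols = [np.ones_like(t), t]
            for P in periods:
                cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
            return np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(design(t_days), rho, rcond=None)
        if t_future is None:
            return coef
        return design(t_future) @ coef

    # toy usage: 60 days of accumulated "observations", 20-day prediction span
    t = np.arange(0.0, 60.0)
    rho = 3e-12 * (1.0 + 0.1 * np.sin(2 * np.pi * t / 27.0)) \
          + 1e-14 * np.random.default_rng(2).standard_normal(t.size)
    rho_pred = fit_and_predict_density(t, rho, t_future=np.arange(60.0, 80.0))
    ```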

  18. Stability of the Associations between Early Life Risk Indicators and Adolescent Overweight over the Evolving Obesity Epidemic

    PubMed Central

    Graversen, Lise; Sørensen, Thorkild I. A.; Petersen, Liselotte; Sovio, Ulla; Kaakinen, Marika; Sandbæk, Annelli; Laitinen, Jaana; Taanila, Anja; Pouta, Anneli

    2014-01-01

    Background Pre- and perinatal factors and preschool body size may help identify children developing overweight, but these factors might have changed during the development of the obesity epidemic. Objective We aimed to assess the associations between early life risk indicators and overweight at the age of 9 and 15 years at different stages of the obesity epidemic. Methods We used two population-based Northern Finland Birth Cohorts including 4111 children born in 1966 (NFBC1966) and 5414 children born in 1985–1986 (NFBC1986). In both cohorts, we used the same a priori defined prenatal factors, maternal body mass index (BMI), birth weight, infant weight (age 5 months and 1 year), and preschool BMI (age 2–5 years). We used internal references in early childhood to define percentiles of body size (<50, 50–75, 75–90 and >90) and generalized linear models to study the association with overweight, according to the International Obesity Taskforce (IOTF) definitions, at the ages of 9 and 15 years. Results The prevalence of overweight at the age of 15 was 9% for children born in 1966 and 16% for children born in 1986. However, medians of infant weight and preschool BMI changed little between the cohorts, and we found similar associations between maternal BMI, infant weight, preschool BMI, and later overweight in the two cohorts. At 5 years, children above the 90th percentile had approximately a 12 times higher risk of being overweight at the age of 15 years compared to children below the 50th percentile in both cohorts. Conclusions The associations between early body size and adolescent overweight showed remarkable stability, despite the increase in prevalence of overweight over the 20 years between the cohorts. Using consistently defined internal percentiles may be a valuable tool in clinical practice. PMID:24748033

  19. The Historical and InstruMental SEismic cataLogue for France (HIMSELF)

    NASA Astrophysics Data System (ADS)

    Manchuel, Kevin; Traversa, Paola; Baumont, David; Cara, Michel; Nayman, Emmanuelle; Durouchoux, Christophe

    2017-04-01

    In regions that undergo low deformation rates, as is the case for metropolitan France, the use of historical seismicity, in addition to instrumental seismicity, is necessary when dealing with seismic hazard assessment. The goal is to extend the observation time window to better assess the seismogenic behavior of the crust and of specific geological structures. This paper presents the strategy adopted to develop a parametric earthquake catalogue, using Mw as the reference magnitude scale, that covers metropolitan France for both instrumental and historical times. Works performed in the frame of the SiHex (Cara et al., 2015) and SIGMA (EDF-CEA-AREVA-ENEL) projects, on instrumental and historical earthquakes respectively, are combined to produce the Historical and InstruMental SEismic cataLogue for France (HIMSELF). The SiHex catalogue is composed of 40 000 natural earthquakes, for which the hypocentral location (inferred from a homogeneous 1D location process and regional observatory estimates) and the Mw magnitude (from specific analyses of crustal-wave coda for ML-LDG > 4.0, and from magnitude conversion laws) are given. In the frame of the SIGMA research program, an integrated study of historical seismicity is carried out, from the calibration of Empirical Macroseismic Prediction Equations (EMPEs) in Mw (Baumont et al., submitted) to their application to earthquakes of the SISFRANCE macroseismic database (BRGM, EDF, IRSN), through a dedicated strategy developed by Traversa et al. (submitted) to compute their Mw magnitude and depth. This inversion process allows the main macroseismic field specificities reported by SISFRANCE to be taken into account with a Logic Tree (LT) approach. It also permits capturing the epistemic uncertainties associated with the macroseismic data and with the EMPE selection. For events that exhibit a poorly constrained macroseismic field (mainly old, cross-border or offshore earthquakes), joint inversion of Mw and depth is not possible and an a priori depth needs to be set to calculate Mw. Regional a priori depths are defined here based on an analysis of the distribution of depths computed for earthquakes with a well constrained macroseismic field, for which joint inversion of Mw and depth is possible. In the end, the seismological parameters of 27% of the SISFRANCE earthquakes are jointly inverted, and for the other 73% Mw is calculated assuming a priori depths. The HIMSELF catalogue is composed of the SIGMA historical parametric catalogue from 463 to 1965 and of the SiHex instrumental catalogue from 1965 to 2009. All magnitudes are expressed in Mw, which makes this catalogue directly usable as an input for seismic hazard studies, carried out in either a probabilistic or a deterministic way. Uncertainties on magnitudes and depths for historical earthquakes are provided in this study following the calculation scheme presented in Traversa et al. (submitted). Uncertainties on magnitudes for instrumental events are from Cara et al. (2016).

  20. Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean

    NASA Astrophysics Data System (ADS)

    Yin, Mengtao; Liu, Guosheng

    2017-12-01

    A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. The standard deviations of the difference between observed and simulated brightness temperatures are of a similar magnitude to the observation errors defined for the observation error covariance matrix after the 1D-Var optimization, indicating that this variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicated that the 1D-Var approach has a positive impact on the GMI-retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. Global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. Results showed that the two estimates have a similar pattern of global distribution, and the difference between their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
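    The 1D-Var step has the generic form of minimizing a background-plus-observation cost function. The sketch below uses a hypothetical linear observation operator H in place of the study's radiative transfer model, simply to show the structure of the optimization.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def one_d_var(x_b, B, y_obs, R, H):
        """Generic 1D-Var: minimize J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx).
        H is a stand-in linear operator; the study uses a full radiative transfer model."""
        Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
        def cost(x):
            dx, dy = x - x_b, y_obs - H @ x
            return float(dx @ Binv @ dx + dy @ Rinv @ dy)
        return minimize(cost, x_b, method="BFGS").x

    # toy usage: 5-layer snow water profile, 3 simulated channels
    rng = np.random.default_rng(0)
    x_b = np.full(5, 0.1)                      # background profile (arbitrary units)
    B = 0.05**2 * np.eye(5)                    # background error covariance
    H = rng.uniform(0.5, 1.5, size=(3, 5))     # hypothetical linear "radiative transfer"
    R = 1.0**2 * np.eye(3)                     # observation error covariance
    y_obs = H @ (x_b + 0.05) + rng.normal(0, 1.0, 3)
    x_opt = one_d_var(x_b, B, y_obs, R, H)
    ```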

  1. Dual and mixed nonsymmetric stress-based variational formulations for coupled thermoelastodynamics with second sound effect

    NASA Astrophysics Data System (ADS)

    Tóth, Balázs

    2018-03-01

    Some new dual and mixed variational formulations based on a priori nonsymmetric stresses will be developed for linearly coupled irreversible thermoelastodynamic problems associated with the second sound effect according to the Lord-Shulman theory. Having introduced the entropy flux vector instead of the entropy field and defining the dissipation and the relaxation potential as functions of the entropy flux, a seven-field dual and mixed variational formulation will be derived from the complementary Biot-Hamilton-type variational principle, using the Lagrange multiplier method. The momentum-, the displacement- and the infinitesimal rotation vector, and the a priori nonsymmetric stress tensor, the temperature change, the entropy field and its flux vector are considered as the independent field variables of this formulation. In order to handle appropriately the six different groups of temporal prescriptions in the relaxed and/or the strong form, two variational integrals will be incorporated into the seven-field functional. Then, eliminating the entropy from this formulation through the strong fulfillment of the constitutive relation for the temperature change, with the use of the Legendre transformation between the enthalpy and the Gibbs potential, a six-field dual and mixed action functional is obtained. As a further development, the elimination of the momentum- and the velocity vector from the six-field principle through the a priori satisfaction of the kinematic equation and the constitutive relation for the momentum vector leads to a five-field variational formulation. These principles are suitable for the transient analyses of structures exposed to a thermal shock of short duration or a large heat flux.

  2. The consistency of the current conventional celestial and terrestrial reference frames and the conventional EOP series

    NASA Astrophysics Data System (ADS)

    Heinkelmann, R.; Belda-Palazon, S.; Ferrándiz, J.; Schuh, H.

    2015-08-01

    For applications in Earth sciences, navigation, and astronomy, the celestial (ICRF) and terrestrial (ITRF) reference frames, as well as the orientation between them, the Earth orientation parameters (EOP), have to be consistent at the level of 1 mm and 0.1 mm/yr (GGOS recommendations). We assess the effect of unmodelled geophysical signals in the regularized coordinates and the sensitivity with respect to different a priori EOP and celestial reference frames. The EOP are determined using the same VLBI data but with station coordinates fixed on different TRFs. The conclusion is that within the time span of data incorporated into ITRF2008 (Altamimi et al., 2011) the ITRF2008 and the IERS 08 C04 are consistent. This consistency implies that non-linear station motions, such as unmodelled geophysical signals, partly affect the IERS 08 C04 EOP. There are small but not negligible inconsistencies between the conventional celestial reference frame, ICRF2 (Fey et al., 2009), the ITRF2008 and the conventional EOP that are quantified by comparing VTRF2008 (Böckmann et al., 2010) and ITRF2008.

  3. On the Execution Control of HLA Federations using the SISO Space Reference FOM

    NASA Technical Reports Server (NTRS)

    Moller, Bjorn; Garro, Alfredo; Falcone, Alberto; Crues, Edwin Z.; Dexter, Daniel E.

    2017-01-01

    In the Space domain, the High Level Architecture (HLA) is one of the reference standards for Distributed Simulation. However, for the different organizations involved in the Space domain (e.g. NASA, ESA, Roscosmos, and JAXA) and their industrial partners, it is difficult to implement HLA simulators (called Federates) able to interact and interoperate in the context of a distributed HLA simulation (called a Federation). The lack of a common FOM (Federation Object Model) for the Space domain is one of the main reasons that precludes a-priori interoperability between heterogeneous federates. To fill this gap, a Product Development Group (PDG) has recently been established in the Simulation Interoperability Standards Organization (SISO) with the aim of providing a Space Reference FOM (SRFOM) for international collaboration on Space systems simulations. Members of the PDG come from several countries and contribute experiences from projects within NASA, ESA and other organizations. Participants represent government, academia and industry. The paper presents an overview of the ongoing Space Reference FOM standardization initiative by focusing on the solution provided for managing the execution of an SRFOM-based Federation.

  4. Establishing key components of yoga interventions for musculoskeletal conditions: a Delphi survey

    PubMed Central

    2014-01-01

    Background Evidence suggests yoga is a safe and effective intervention for the management of physical and psychosocial symptoms associated with musculoskeletal conditions. However, heterogeneity in the components and reporting of clinical yoga trials impedes both the generalization of study results and the replication of study protocols. The aim of this Delphi survey was to address these issues of heterogeneity, by developing a list of recommendations of key components for the design and reporting of yoga interventions for musculoskeletal conditions. Methods Recognised experts involved in the design, conduct, and teaching of yoga for musculoskeletal conditions were identified from a systematic review, and invited to contribute to the Delphi survey. Forty-one of the 58 experts contacted, representing six countries, agreed to participate. A three-round Delphi was conducted via electronic surveys. Round 1 presented an open-ended question, allowing panellists to individually identify components they considered key to the design and reporting of yoga interventions for musculoskeletal conditions. Thematic analysis of Round 1 identified items for quantitative rating in Round 2; items not reaching consensus were forwarded to Round 3 for re-rating. Results Thirty-six panellists (36/41; 88%) completed the three rounds of the Delphi survey. Panellists provided 348 comments to the Round 1 question. These comments were reduced to 49 items, grouped under five themes, for rating in subsequent rounds. A priori group consensus of ≥80% was reached on 28 items related to five themes concerning defining the yoga intervention, types of yoga practices to include in an intervention, delivery of the yoga protocol, domains of outcome measures, and reporting of yoga interventions for musculoskeletal conditions. Additionally, a priori consensus of ≥50% was reached on five items relating to minimum values for intervention parameters. Conclusions Expert consensus has provided a non-prescriptive reference list for the design and reporting of yoga interventions for musculoskeletal conditions. It is anticipated future research incorporating the Delphi guidelines will facilitate high quality international research in this field, increase homogeneity of intervention components and parameters, and enhance the comparison and reproducibility of research into the use of yoga for the management of musculoskeletal conditions. PMID:24942270

  5. New multirate sampled-data control law structure and synthesis algorithm

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng

    1992-01-01

    A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.

  6. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.

  7. Revealing plant cryptotypes: defining meaningful phenotypes among infinite traits.

    PubMed

    Chitwood, Daniel H; Topp, Christopher N

    2015-04-01

    The plant phenotype is infinite. Plants vary morphologically and molecularly over developmental time, in response to the environment, and genetically. Exhaustive phenotyping remains not only out of reach, but is also the limiting factor to interpreting the wealth of genetic information currently available. Although phenotyping methods are always improving, an impasse remains: even if we could measure the entirety of phenotype, how would we interpret it? We propose the concept of cryptotype to describe latent, multivariate phenotypes that maximize the separation of a priori classes. Whether the infinite points comprising a leaf outline or shape descriptors defining root architecture, statistical methods to discern the quantitative essence of an organism will be required as we approach measuring the totality of phenotype. Copyright © 2015 Elsevier Ltd. All rights reserved.
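    One concrete instance of a latent axis that "maximizes the separation of a priori classes" is the Fisher linear discriminant. The sketch below computes such an axis from a trait matrix, as an illustration of the kind of multivariate statistic the authors describe rather than their specific pipeline; the data are synthetic.

    ```python
    import numpy as np

    def lda_axis(X, labels):
        """Fisher linear discriminant: a single latent axis separating a priori classes.
        X: (individuals x traits) matrix; labels: class assignment per individual."""
        classes = np.unique(labels)
        mean_all = X.mean(axis=0)
        Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
        Sb = np.zeros_like(Sw)                    # between-class scatter
        for c in classes:
            Xc = X[labels == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        eigval, eigvec = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        w = np.real(eigvec[:, np.argmax(np.real(eigval))])
        return w / np.linalg.norm(w)

    # toy usage: two a priori classes measured on five traits
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (30, 5)), rng.normal(1.0, 1.0, (30, 5))])
    labels = np.array([0] * 30 + [1] * 30)
    axis = lda_axis(X, labels)
    cryptotype_scores = X @ axis               # one latent value per individual
    ```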

  8. SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis.

    PubMed

    Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan

    2017-09-01

    With the rapid development of big data, multi-subject functional magnetic resonance imaging (fMRI) data analysis is becoming increasingly important. As a kind of blind source separation technique, group independent component analysis (GICA) has been widely applied for multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA due to its disadvantages. In this paper, to overcome these issues and to exploit the fact that the performance of GICA for fMRI data analysis can be improved by adding a priori information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of the a priori information extracted from the group subjects; a multi-objective optimization strategy is then used to implement this method. Finally, the post-processing means of principal component analysis and anti-reconstruction are used to obtain the group spatial components and the individual temporal components in the group, respectively. The experimental results show that the proposed SCGICAR method has a better performance on both single-subject and multi-subject fMRI data analysis compared with classical methods. It not only detects more accurate spatial and temporal components for each subject of the group, but also obtains a better group component in both the temporal and spatial domains. These results demonstrate that the proposed SCGICAR method has its own advantages in comparison with classical methods, and it can better reflect the commonness of subjects in the group. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Local digital control of power electronic converters in a dc microgrid based on a-priori derivation of switching surfaces

    NASA Astrophysics Data System (ADS)

    Banerjee, Bibaswan

    In power-electronic-based microgrids, the computational requirements needed to implement an optimized online control strategy can be prohibitive. The work presented in this dissertation proposes a generalized method of derivation of geometric manifolds in a dc microgrid that is based on the a-priori computation of the optimal reactions and trajectories for classes of events in a dc microgrid. The proposed states are the stored energies in all the energy storage elements of the dc microgrid and the power flowing into them. It is anticipated that calculating a large enough set of dissimilar transient scenarios will also span many scenarios not specifically used to develop the surface. These geometric manifolds will then be used as reference surfaces in any type of controller, such as a sliding mode hysteretic controller. The presence of switched power converters in microgrids involves different control actions for different system events. The control of the switch states of the converters is essential for steady state and transient operations. A digital memory look-up based controller that uses a hysteretic sliding mode control strategy is an effective technique to generate the proper switch states for the converters. An example dc microgrid with three dc-dc boost converters and resistive loads is considered for this work. The geometric manifolds are successfully generated for transient events, such as step changes in the loads and the sources. The surfaces corresponding to a specific case of a step change in the loads are then used as reference surfaces in an EEPROM for experimentally validating the control strategy. The required switch states corresponding to this specific transient scenario are programmed in the EEPROM as a memory table. This controls the switching of the dc-dc boost converters and drives the system states to the reference manifold. In this work, it is shown that this strategy effectively controls the system for a transient condition such as step changes in the loads for the example case.

  10. PICKLE 2.0: A human protein-protein interaction meta-database employing data integration via genetic information ontology

    PubMed Central

    Gioutlakis, Aris; Klapa, Maria I.

    2017-01-01

    It has been acknowledged that source databases recording experimentally supported human protein-protein interactions (PPIs) exhibit limited overlap. Thus, the reconstruction of a comprehensive PPI network requires appropriate integration of multiple heterogeneous primary datasets, presenting the PPIs at various genetic reference levels. Existing PPI meta-databases perform integration via normalization; namely, PPIs are merged after being converted to a certain target level. Hence, the node set of the integrated network depends each time on the number and type of the combined datasets. Moreover, the irreversible a priori normalization process hinders the identification of normalization artifacts in the integrated network, which originate from the nonlinearity characterizing the genetic information flow. PICKLE (Protein InteraCtion KnowLedgebasE) 2.0 implements a new architecture for this recently introduced human PPI meta-database. Its main novel feature over the existing meta-databases is its approach to primary PPI dataset integration via genetic information ontology. Building upon the PICKLE principles of using the reviewed human complete proteome (RHCP) of UniProtKB/Swiss-Prot as the reference protein interactor set, and filtering out protein interactions with a low probability of being direct based on the available evidence, PICKLE 2.0 first assembles the RHCP genetic information ontology network by connecting the corresponding genes, nucleotide sequences (mRNAs) and proteins (UniProt entries), and then integrates PPI datasets by superimposing them on the ontology network without any a priori transformations. Importantly, this process allows the resulting heterogeneous integrated network to be reversibly normalized to any level of genetic reference without loss of the original information, the latter being used for the identification of normalization biases, and enables the appraisal of potential false positive interactions through PPI source database cross-checking. The PICKLE web-based interface (www.pickle.gr) allows for the simultaneous query of multiple entities and provides integrated human PPI networks at either the protein (UniProt) or the gene level, in three PPI filtering modes. PMID:29023571
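
    A toy Python sketch of the integration idea (hypothetical identifiers, not PICKLE's actual schema or API): PPIs from two source datasets are superimposed on a small gene-mRNA-protein ontology graph and only afterwards normalized to the gene level by walking the ontology edges, so the original protein-level records stay intact.

```python
# Toy sketch (hypothetical identifiers, not PICKLE's schema): superimpose PPIs on a
# gene -> mRNA -> protein ontology graph, then normalize to gene level reversibly
# by walking the ontology edges instead of converting the source data beforehand.
import networkx as nx

onto = nx.DiGraph()
# Genetic information ontology: gene encodes mRNA, mRNA is translated to protein.
for gene, mrna, prot in [("GENE_A", "MRNA_A", "PROT_A"),
                         ("GENE_B", "MRNA_B1", "PROT_B1"),
                         ("GENE_B", "MRNA_B2", "PROT_B2")]:   # one gene, two isoforms
    onto.add_edge(gene, mrna, relation="encodes")
    onto.add_edge(mrna, prot, relation="translated_to")

# PPIs from two (hypothetical) source databases, kept at their original protein level.
ppi = nx.MultiGraph()
ppi.add_edge("PROT_A", "PROT_B1", source="db1")
ppi.add_edge("PROT_A", "PROT_B2", source="db2")

def gene_of(protein):
    """Walk ontology edges backwards from a protein to its encoding gene."""
    node = protein
    while True:
        preds = list(onto.predecessors(node))
        if not preds:
            return node
        node = preds[0]

# Reversible normalization to gene level: the protein-level edges stay untouched.
gene_level = nx.MultiGraph()
for u, v, data in ppi.edges(data=True):
    gene_level.add_edge(gene_of(u), gene_of(v), **data)

print(gene_level.edges(data=True))
# Both source records collapse onto the same GENE_A -- GENE_B pair at gene level.
```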

  11. Comparison of a priori versus provisional heparin therapy on radial artery occlusion after transradial coronary angiography and patent hemostasis (from the PHARAOH Study).

    PubMed

    Pancholy, Samir B; Bertrand, Olivier F; Patel, Tejas

    2012-07-15

    Systemic anticoagulation decreases the risk of radial artery occlusion (RAO) after transradial catheterization and standard occlusive hemostasis. We compared the efficacy and safety of provisional heparin use, given only when the technique of patent hemostasis was not achievable, to standard a priori heparin administration after radial sheath introduction. Patients referred for coronary angiography were randomized in 2 groups. In the a priori group, 200 patients received intravenous heparin (50 IU/kg) immediately after sheath insertion. In the provisional group, 200 patients did not receive heparin during the procedure. After sheath removal, hemostasis was obtained using a TR band (Terumo Corporation, Tokyo, Japan) with a plethysmography-guided patent hemostasis technique. In the provisional group, no heparin was given if radial artery patency could be obtained and maintained. If radial patency was not achieved, a bolus of heparin (50 IU/kg) was given. Radial artery patency was evaluated at 24 hours (early RAO) and 30 days after the procedure (late RAO) by plethysmography. Patent hemostasis was obtained in 67% in the a priori group and 74% in the provisional group (p = 0.10). The incidence of RAO remained similar in the 2 groups at the early (7.5% vs 7.0%, p = 0.84) and late (4.5% vs 5.0%, p = 0.83) evaluations. Women, patients with diabetes, patients who had not received heparin, and patients without radial artery patency during hemostasis had more RAO. By multivariate analysis, a patent radial artery during hemostasis (odds ratio [OR] 0.03, 95% confidence interval [CI] 0.004 to 0.28, p = 0.002) and diabetes (OR 11, 95% CI 3 to 38, p <0.0001) were independent predictors of late RAO, whereas heparin was not (OR 0.45, 95% CI 0.13 to 1.54, p = 0.20). In conclusion, our results suggest that maintenance of radial artery patency during hemostasis is the most important parameter to decrease the risk of RAO. In selected cases, provisional use of heparin appears feasible and safe when patent hemostasis is maintained. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Interactive Reference Point Procedure Based on the Conic Scalarizing Function

    PubMed Central

    2014-01-01

    In multiobjective optimization methods, multiple conflicting objectives are typically converted into a single objective optimization problem with the help of scalarizing functions. The conic scalarizing function is a general characterization of Benson proper efficient solutions of non-convex multiobjective problems in terms of saddle points of scalar Lagrangian functions. This approach preserves convexity. The conic scalarizing function, as a part of a posteriori or a priori methods, has successfully been applied to several real-life problems. In this paper, we propose a conic scalarizing function based interactive reference point procedure where the decision maker actively takes part in the solution process and directs the search according to her or his preferences. An algorithmic framework for the interactive solution of multiple objective optimization problems is presented and is utilized for solving some illustrative examples. PMID:24723795
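
    As a sketch of what such a scalarization looks like (one commonly cited conic form, g(y) = w·(y − r) + α·Σ|y_i − r_i| with reference point r; not the paper's interactive procedure), the following Python snippet minimizes it for a toy bi-objective problem:

```python
# Hedged sketch of a conic-type reference point scalarization (one common form,
# g(y) = w.(y - r) + alpha * sum|y_i - r_i|); not the paper's interactive procedure.
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    """Toy bi-objective problem with a Pareto front between x = 0 and x = 2."""
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

def conic_scalarization(x, r, w, alpha):
    y = objectives(x) - r
    return float(w @ y + alpha * np.abs(y).sum())

# Decision maker's reference point and preference weights (updated interactively in practice).
r = np.array([0.5, 0.5])
w = np.array([0.6, 0.4])
alpha = 0.3   # kept small relative to the weights, as this form of scalarization requires

res = minimize(conic_scalarization, x0=[1.5], args=(r, w, alpha), method="Nelder-Mead")
print("compromise solution x =", res.x, "objectives =", objectives(res.x))
```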

  13. Mediterranean diet and cognitive health: Initial results from the Hellenic Longitudinal Investigation of Ageing and Diet.

    PubMed

    Anastasiou, Costas A; Yannakoulia, Mary; Kosmidis, Mary H; Dardiotis, Efthimios; Hadjigeorgiou, Giorgos M; Sakka, Paraskevi; Arampatzi, Xanthi; Bougea, Anastasia; Labropoulos, Ioannis; Scarmeas, Nikolaos

    2017-01-01

    The Mediterranean dietary pattern has been associated with a decreased risk of many degenerative diseases and cognitive function in particular; however, relevant information from Mediterranean regions, where the prototype Mediterranean diet is typically adhered to, has been very limited. Additionally, predefined Mediterranean diet (MeDi) scores with a priori cut-offs have been used very rarely, limiting comparisons between different populations and thus the external validity of the associations. Finally, associations between individual components of the MeDi (i.e., food groups, macronutrients) and particular aspects of cognitive performance have rarely been explored. We evaluated the association of adherence to an a priori defined Mediterranean dietary pattern and its components with dementia and specific aspects of cognitive function in a representative population cohort in Greece. Participants from the Hellenic Longitudinal Investigation of Ageing and Diet (HELIAD), an on-going population-based study exploring potential associations between diet and cognitive performance in a representative sample from Greek regions, were included in this analysis. Diagnosis of dementia was made by a full clinical and neuropsychological evaluation, while cognitive performance was assessed according to five cognitive domains (memory, language, attention-speed, executive functioning, visuospatial perception) and a composite cognitive score. Adherence to the MeDi was evaluated by an a priori score (range 0-55), derived from a detailed food frequency questionnaire. Among 1,865 individuals (mean age 73±6 years, 41% male), 90 were diagnosed with dementia and 223 with mild cognitive impairment. Each unit increase in the Mediterranean dietary score (MedDietScore) was associated with a 10% decrease in the odds for dementia. Adherence to the MeDi was also associated with better performance in memory, language, visuospatial perception and the composite cognitive score; the associations were strongest for memory. Fish consumption was negatively associated with dementia, and cognitive performance was positively associated with non-refined cereal consumption. Our results suggest that adherence to the MeDi is associated with better cognitive performance and lower dementia rates in Greek elders. Thus, the MeDi in its a priori constructed prototype form may have cognitive benefits in traditional Mediterranean populations.

  14. Mediterranean diet and cognitive health: Initial results from the Hellenic Longitudinal Investigation of Ageing and Diet

    PubMed Central

    Yannakoulia, Mary; Kosmidis, Mary H.; Dardiotis, Efthimios; Hadjigeorgiou, Giorgos M.; Sakka, Paraskevi; Arampatzi, Xanthi; Bougea, Anastasia; Labropoulos, Ioannis; Scarmeas, Nikolaos

    2017-01-01

    Background The Mediterranean dietary pattern has been associated with a decreased risk of many degenerative diseases and cognitive function in particular; however, relevant information from Mediterranean regions, where the prototype Mediterranean diet is typically adhered to, has been very limited. Additionally, predefined Mediterranean diet (MeDi) scores with a priori cut-offs have been used very rarely, limiting comparisons between different populations and thus the external validity of the associations. Finally, associations between individual components of the MeDi (i.e., food groups, macronutrients) and particular aspects of cognitive performance have rarely been explored. We evaluated the association of adherence to an a priori defined Mediterranean dietary pattern and its components with dementia and specific aspects of cognitive function in a representative population cohort in Greece. Methods Participants from the Hellenic Longitudinal Investigation of Ageing and Diet (HELIAD), an on-going population-based study exploring potential associations between diet and cognitive performance in a representative sample from Greek regions, were included in this analysis. Diagnosis of dementia was made by a full clinical and neuropsychological evaluation, while cognitive performance was assessed according to five cognitive domains (memory, language, attention-speed, executive functioning, visuospatial perception) and a composite cognitive score. Adherence to the MeDi was evaluated by an a priori score (range 0–55), derived from a detailed food frequency questionnaire. Results Among 1,865 individuals (mean age 73±6 years, 41% male), 90 were diagnosed with dementia and 223 with mild cognitive impairment. Each unit increase in the Mediterranean dietary score (MedDietScore) was associated with a 10% decrease in the odds for dementia. Adherence to the MeDi was also associated with better performance in memory, language, visuospatial perception and the composite cognitive score; the associations were strongest for memory. Fish consumption was negatively associated with dementia, and cognitive performance was positively associated with non-refined cereal consumption. Conclusions Our results suggest that adherence to the MeDi is associated with better cognitive performance and lower dementia rates in Greek elders. Thus, the MeDi in its a priori constructed prototype form may have cognitive benefits in traditional Mediterranean populations. PMID:28763509

  15. The constitutive a priori and the distinction between mathematical and physical possibility

    NASA Astrophysics Data System (ADS)

    Everett, Jonathan

    2015-11-01

    This paper is concerned with Friedman's recent revival of the notion of the relativized a priori. It is particularly concerned with addressing the question as to how Friedman's understanding of the constitutive function of the a priori has changed since his defence of the idea in his Dynamics of Reason. Friedman's understanding of the a priori remains influenced by Reichenbach's initial defence of the idea; I argue that this notion of the a priori does not naturally lend itself to describing the historical development of space-time physics. Friedman's analysis of the role of the rotating frame thought experiment in the development of general relativity - which he suggests made the mathematical possibility of four-dimensional space-time a genuine physical possibility - has a central role in his argument. I analyse this thought experiment and argue that it is better understood by following Cassirer and placing emphasis on regulative principles. Furthermore, I argue that Cassirer's Kantian framework enables us to capture Friedman's key insights into the nature of the constitutive a priori.

  16. A neurogenetics approach to defining differential susceptibility to institutional care

    PubMed Central

    Brett, Zoe H.; Sheridan, Margaret; Humphreys, Kate; Smyke, Anna; Gleason, Mary Margaret; Fox, Nathan; Zeanah, Charles; Nelson, Charles; Drury, Stacy

    2014-01-01

    An individual's neurodevelopmental and cognitive sequelae to negative early experiences may, in part, be explained by genetic susceptibility. We examined whether extreme differences in the early caregiving environment, defined as exposure to severe psychosocial deprivation associated with institutional care compared to normative rearing, interacted with a biologically informed genoset comprising BDNF (rs6265), COMT (rs4680), and SIRT1 (rs3758391) to predict distinct outcomes of neurodevelopment at age 8 (N = 193, 97 males and 96 females). Ethnicity was categorized as Romanian (71%), Roma (21%), unknown (7%), or other (1%). We identified a significant interaction between early caregiving environment (i.e., institutionalized versus never institutionalized children) and the a priori defined genoset for full-scale IQ, two spatial working memory tasks, and prefrontal cortex gray matter volume. Model validation was performed using a bootstrap resampling procedure. Although we hypothesized that the effect of this genoset would operate in a manner consistent with differential susceptibility, our results demonstrate a complex interaction where vantage susceptibility, diathesis stress, and differential susceptibility are implicated. PMID:25663728

  17. A neurogenetics approach to defining differential susceptibility to institutional care.

    PubMed

    Brett, Zoe H; Sheridan, Margaret; Humphreys, Kate; Smyke, Anna; Gleason, Mary Margaret; Fox, Nathan; Zeanah, Charles; Nelson, Charles; Drury, Stacy

    2015-03-01

    An individual's neurodevelopmental and cognitive sequelae to negative early experiences may, in part, be explained by genetic susceptibility. We examined whether extreme differences in the early caregiving environment, defined as exposure to severe psychosocial deprivation associated with institutional care compared to normative rearing, interacted with a biologically informed genoset comprising BDNF (rs6265), COMT (rs4680), and SIRT1 (rs3758391) to predict distinct outcomes of neurodevelopment at age 8 (N = 193, 97 males and 96 females). Ethnicity was categorized as Romanian (71%), Roma (21%), unknown (7%), or other (1%). We identified a significant interaction between early caregiving environment (i.e., institutionalized versus never institutionalized children) and the a priori defined genoset for full-scale IQ, two spatial working memory tasks, and prefrontal cortex gray matter volume. Model validation was performed using a bootstrap resampling procedure. Although we hypothesized that the effect of this genoset would operate in a manner consistent with differential susceptibility, our results demonstrate a complex interaction where vantage susceptibility, diathesis stress, and differential susceptibility are implicated.

  18. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
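
    The following Python sketch illustrates the underlying idea rather than the authors' MRAC law: inverse kinematics with no a priori kinematic model, using only Cartesian end-effector sensing and an online Broyden-style Jacobian estimate; the two-link plant, gains and target are invented for the example.

```python
# Minimal sketch (not the authors' MRAC formulation): inverse kinematics with no a priori
# kinematic model, using Cartesian end-effector sensing and an online Broyden-style
# Jacobian estimate to convert task-space errors into joint increments.
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths, known only to the simulated "plant"

def forward_kinematics(q):
    """Plant: returns the sensed end-effector position for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

q = np.array([0.3, 0.5])                 # initial joint angles
J_hat = np.eye(2)                        # crude initial Jacobian estimate
x = forward_kinematics(q)
target = np.array([1.2, 0.9])
gain = 0.3

for _ in range(200):
    error = target - x
    dq = gain * np.linalg.pinv(J_hat) @ error        # joint increment from Cartesian error
    q = q + dq
    x_new = forward_kinematics(q)                    # Cartesian sensing only
    dx = x_new - x
    # Broyden update: correct the Jacobian estimate using the observed motion.
    J_hat = J_hat + np.outer(dx - J_hat @ dq, dq) / (dq @ dq + 1e-12)
    x = x_new

print("final end-effector position:", x, "target:", target)
```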

  19. Assessment of Higher-Order RANS Closures in a Decelerated Planar Wall-Bounded Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Jeyapaul, Elbert; Coleman, Gary N.; Rumsey, Christopher L.

    2014-01-01

    A reference DNS database is presented, which includes third- and fourth-order moment budgets for unstrained and strained planar channel flow. Existing RANS closure models for third- and fourth-order terms are surveyed, and new model ideas are introduced. The various models are then compared with the DNS data term by term using a priori testing of the higher-order budgets of turbulence transport, velocity-pressure-gradient, and dissipation for both the unstrained and strained databases. Generally, the models for the velocity-pressure-gradient terms are most in need of improvement.

  20. Practical interior tomography with radial Hilbert filtering and a priori knowledge in a small round area.

    PubMed

    Tang, Shaojie; Yang, Yi; Tang, Xiangyang

    2012-01-01

    The interior tomography problem can be solved using the so-called differentiated backprojection-projection onto convex sets (DBP-POCS) method, which requires a priori knowledge within a small area interior to the region of interest (ROI) to be imaged. In theory, the small area wherein the a priori knowledge is required can be of any shape, but most of the existing implementations carry out the Hilbert filtering either horizontally or vertically, leading to a vertical or horizontal strip that may extend across a large area of the object. In this work, we implement a practical DBP-POCS method with radial Hilbert filtering so that the small area with the a priori knowledge can be roughly round (e.g., a sinus or ventricles among other anatomic cavities in the human or animal body). We also conduct an experimental evaluation to verify the performance of this practical implementation. We specifically re-derive the reconstruction formula in the DBP-POCS fashion with radial Hilbert filtering to assure that only a small round area with the a priori knowledge is needed (henceforth the radial DBP-POCS method). The performance of the practical DBP-POCS method with radial Hilbert filtering and a priori knowledge in a small round area is evaluated with projection data of the standard and modified Shepp-Logan phantoms simulated by computer, followed by a verification using real projection data acquired by a computed tomography (CT) scanner. The preliminary performance study shows that, if a priori knowledge in a small round area is available, the radial DBP-POCS method can solve the interior tomography problem in a more practical way at high accuracy. In comparison to the implementations of the DBP-POCS method demanding the a priori knowledge in a horizontal or vertical strip, the radial DBP-POCS method requires the a priori knowledge within a small round area only. Such a relaxed requirement on the availability of a priori knowledge can be readily met in practice, because a variety of small round areas (e.g., air-filled sinuses or fluid-filled ventricles among other anatomic cavities) exist in the human or animal body. Therefore, the radial DBP-POCS method with a priori knowledge in a small round area is more feasible in clinical and preclinical practice.

  1. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a-priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
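
    For orientation, here is a minimal Python sketch of the normal-parameterization Hough accumulation discussed above; the quantization steps d_theta and d_rho are exactly the design parameters whose choice the paper's convergence analysis addresses (the test line and noise levels are made up).

```python
# Minimal sketch of the straight-line Hough transform in normal parameterization
# (rho = x cos(theta) + y sin(theta)); the quantization steps d_theta and d_rho are
# the parameters whose choice governs whether the true maximum can be missed.
import numpy as np

def hough_lines(points, d_theta=np.pi / 180, d_rho=1.0, rho_max=200.0):
    thetas = np.arange(0.0, np.pi, d_theta)
    rhos = np.arange(-rho_max, rho_max, d_rho)
    accumulator = np.zeros((rhos.size, thetas.size), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)      # one sinusoid per edge point
        idx = np.round((rho + rho_max) / d_rho).astype(int)
        valid = (idx >= 0) & (idx < rhos.size)
        accumulator[idx[valid], np.arange(thetas.size)[valid]] += 1
    peak = np.unravel_index(accumulator.argmax(), accumulator.shape)
    return rhos[peak[0]], thetas[peak[1]], accumulator

# Noisy points on the line y = x + 10, i.e. rho ~ 7.07, theta ~ 135 degrees.
rng = np.random.default_rng(1)
xs = rng.uniform(0, 100, 150)
pts = np.column_stack([xs, xs + 10 + rng.normal(0, 1.0, xs.size)])
rho, theta, _ = hough_lines(pts)
print(f"detected line: rho = {rho:.1f}, theta = {np.degrees(theta):.1f} deg")
```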

  2. Scalar flux modeling in turbulent flames using iterative deconvolution

    NASA Astrophysics Data System (ADS)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
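
    A one-dimensional Python sketch of van Cittert-type iterative deconvolution in an a priori setting (synthetic fields and a Gaussian filter standing in for the DNS database and LES filter; not the authors' exact algorithm or flame configuration):

```python
# 1-D sketch of van Cittert-type iterative deconvolution as used in a priori LES modeling:
# approximately invert the filter G, then explicitly re-filter products of the deconvolved
# fields to model unclosed terms such as the scalar flux. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filt(f, sigma=4.0):
    """The LES filter G (here a Gaussian), applied explicitly."""
    return gaussian_filter1d(f, sigma, mode="wrap")

def deconvolve(f_bar, n_iter=5):
    """van Cittert iterations: phi_{k+1} = phi_k + (f_bar - G * phi_k)."""
    phi = f_bar.copy()
    for _ in range(n_iter):
        phi = phi + (f_bar - filt(phi))
    return phi

# Synthetic "DNS" fields: velocity u and scalar c on a periodic 1-D grid.
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
rng = np.random.default_rng(0)
u = np.sin(3 * x) + 0.3 * rng.standard_normal(x.size)
c = 0.5 * (1 + np.tanh(4 * np.cos(x))) + 0.1 * rng.standard_normal(x.size)

u_bar, c_bar = filt(u), filt(c)
exact_flux = filt(u * c) - u_bar * c_bar                    # exact subfilter scalar flux
u_star, c_star = deconvolve(u_bar), deconvolve(c_bar)
model_flux = filt(u_star * c_star) - u_bar * c_bar          # deconvolution-based model

corr = np.corrcoef(exact_flux, model_flux)[0, 1]
print(f"a priori correlation between exact and modeled flux: {corr:.2f}")
```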

  3. On the interfacial thermodynamics of nanoscale droplets and bubbles

    NASA Astrophysics Data System (ADS)

    Corti, David S.; Kerr, Karl J.; Torabi, Korosh

    2011-07-01

    We present a new self-consistent thermodynamic formalism for the interfacial properties of nanoscale embryos whose interiors do not exhibit bulklike behavior and are in complete equilibrium with the surrounding mother phase. In contrast to the standard Gibbsian analysis, whereby a bulk reference pressure based on the same temperature and chemical potentials of the mother phase is introduced, our approach naturally incorporates the normal pressure at the center of the embryo as an appropriate reference pressure. While the interfacial properties of small embryos that follow from the use of these two reference pressures are different, both methods yield by construction the same reversible work of embryo formation as well as consistency between their respective thermodynamic and mechanical routes to the surface tension. Hence, there is no a priori reason to select one method over another. Nevertheless, we argue, and demonstrate via a density-functional theory (with the local density approximation) analysis of embryo formation in the pure component Lennard-Jones fluid, that our new method generates more physically appealing trends. For example, within the new approach the surface tension at all locations of the dividing surface vanishes at the spinodal where the density profile spanning the embryo and mother phase becomes completely uniform (only the surface tension at the Gibbs surface of tension vanishes in the Gibbsian method at this same limit). Also, for bubbles, the location of the surface of tension now diverges at the spinodal, similar to the divergent behavior exhibited by the equimolar dividing surface (in the Gibbsian method, the location of the surface of tension vanishes instead). For droplets, the new method allows for the appearance of negative surface tensions (the Gibbsian method always yields positive tensions) when the normal pressures within the interior of the embryo become less than the bulk pressure of the surrounding vapor phase. Such a prediction, which is allowed by thermodynamics, is consistent with the interpretation that the mother phase's attempted compression of the droplet is counterbalanced by the negative surface tension, or free energy cost to decrease the interfacial area. Furthermore, for these same droplets, the surface of tension can no longer be meaningfully defined (the surface of tension always remains well defined in the Gibbsian method). Within the new method, the dividing surface at which the surface tension equals zero emerges as a new lengthscale, which has various thermodynamic analogs to and similar behavior as the surface of tension.

  4. Some Simultaneous Inference Procedures for A Priori Contrasts.

    ERIC Educational Resources Information Center

    Convey, John J.

    The testing of a priori contrasts, post hoc contrasts, and experimental error rates are discussed. Methods for controlling the experimental error rate for a set of a priori contrasts tested simultaneously have been developed by Dunnett, Dunn, Sidak, and Krishnaiah. Each of these methods is discussed and contrasted as to applicability, power, and…

  5. Adaptability and phenotypic stability of common bean genotypes through Bayesian inference.

    PubMed

    Corrêa, A M; Teodoro, P E; Gonçalves, M C; Barroso, L M A; Nascimento, M; Santos, A; Torres, F E

    2016-04-27

    This study used Bayesian inference to investigate the genotype x environment interaction in common bean grown in Mato Grosso do Sul State, and it also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 13 common bean genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian inference was effective for the selection of upright common bean genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. According to Bayesian inference, the EMGOPA-201, BAMBUÍ, CNF 4999, CNF 4129 A 54, and CNFv 8025 genotypes had specific adaptability to favorable environments, while the IAPAR 14 and IAC CARIOCA ETE genotypes had specific adaptability to unfavorable environments.

  6. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  7. [Overdiagnosis in cancer screening].

    PubMed

    Cervera Deval, J; Sentís Crivillé, M; Zulueta, J J

    2015-01-01

    In screening programs, overdiagnosis is defined as the detection of a disease that would have gone undetected without screening when that disease would not have resulted in morbimortality and was treated unnecessarily. Overdiagnosis is a bias inherent in screening and an undesired effect of secondary prevention and improved sensitivity of diagnostic techniques. It is difficult to discriminate a priori between clinically relevant diagnoses and those in which treatment is unnecessary. To minimize the effects of overdiagnosis, screening should be done in patients at risk. Copyright © 2014 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  8. Remaining lifetime modeling using State-of-Health estimation

    NASA Astrophysics Data System (ADS)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and system components undergo gradual degradation over time. Continuous degradation occurring in a system is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is indispensable to guarantee at least the predefined lifetime of the system specified by the manufacturer or, even better, to extend the lifetime beyond it. However, a precondition for lifetime extension is accurate estimation of SoH as well as the estimation and prediction of Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches with accompanying advantages and disadvantages are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. The estimation of SoH is here conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH, but the model includes a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as the model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each of them corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm. In the second approach, model selection is conditioned on the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with related advantages and disadvantages. Verification of the approach is done using cross-fold validation, exchanging training and test data. The newly introduced data-driven parametric models can be easily established, providing detailed information about remaining useful/consumed lifetime, valid for systems with constant load but stochastically occurring damage.

  9. The combined effects of self-referent information processing and ruminative responses on adolescent depression.

    PubMed

    Black, Stephanie Winkeljohn; Pössel, Patrick

    2013-08-01

    Adolescents who develop depression have worse interpersonal and affective experiences and are more likely to develop substance problems and/or suicidal ideation compared to adolescents who do not develop depression. This study examined the combined effects of negative self-referent information processing and rumination (i.e., brooding and reflection) on adolescent depressive symptoms. It was hypothesized that the interaction of negative self-referent information processing and brooding would significantly predict depressive symptoms, while the interaction of negative self-referent information processing and reflection would not predict depressive symptoms. Adolescents (n = 92; 13-15 years; 34.7% female) participated in a 6-month longitudinal study. Self-report instruments measured depressive symptoms and rumination; a cognitive task measured information processing. Path modelling in Amos 19.0 was used to analyze the data. The interaction of negative information processing and brooding significantly predicted an increase in depressive symptoms 6 months later. The interaction of negative information processing and reflection did not significantly predict depression; however, the model did not meet a priori standards for accepting the null hypothesis. Results suggest clinicians working with adolescents at risk for depression should consider focusing on the reduction of brooding and negative information processing to reduce long-term depressive symptoms.

  10. Time Domain Simulations of Arm Locking in LISA

    NASA Technical Reports Server (NTRS)

    Thorpe, J. I.; Maghami, P.; Livas, Jeff

    2011-01-01

    Arm locking is a technique that has been proposed for reducing laser frequency fluctuations in the Laser Interferometer Space Antenna (LISA), a gravitational-wave observatory sensitive in the milliHertz frequency band. Arm locking takes advantage of the geometric stability of the triangular constellation of three spacecraft that comprise LISA to provide a frequency reference with a stability in the LISA measurement band that exceeds that available from a standard reference such as an optical cavity or molecular absorption line. We have implemented a time-domain simulation of arm locking including the expected limiting noise sources (shot noise, clock noise, spacecraft jitter noise, and residual laser frequency noise). The effect of imperfect a priori knowledge of the LISA heterodyne frequencies and the associated "pulling" of an arm-locked laser is included. We find that our implementation meets requirements on both the noise and dynamic range of the laser frequency.

  11. Terrestrial reference frame solution with the Vienna VLBI Software VieVS and implication of tropospheric gradient estimation

    NASA Astrophysics Data System (ADS)

    Spicakova, H.; Plank, L.; Nilsson, T.; Böhm, J.; Schuh, H.

    2011-07-01

    The Vienna VLBI Software (VieVS) has been developed at the Institute of Geodesy and Geophysics at TU Vienna since 2008. In this presentation, we present the module Vie_glob, the part of VieVS that allows parameter estimation from multiple VLBI sessions in a so-called global solution. We focus on the determination of the terrestrial reference frame (TRF) using all suitable VLBI sessions since 1984. We compare different analysis options, such as the choice of loading corrections or of one of the models for the tropospheric delays. The effect on station heights of neglecting atmosphere loading corrections at the observation level will be shown. Time series of station positions (using a previously determined TRF as a priori values) are presented and compared to other estimates of site positions from individual IVS (International VLBI Service for Geodesy and Astrometry) Analysis Centers.

  12. Validating Affordances as an Instrument for Design and a Priori Analysis of Didactical Situations in Mathematics

    ERIC Educational Resources Information Center

    Sollervall, Håkan; Stadler, Erika

    2015-01-01

    The aim of the presented case study is to investigate how coherent analytical instruments may guide the a priori and a posteriori analyses of a didactical situation. In the a priori analysis we draw on the notion of affordances, as artefact-mediated opportunities for action, to construct hypothetical trajectories of goal-oriented actions that have…

  13. Towards Improving Satellite Tropospheric NO2 Retrieval Products: Impacts of the spatial resolution and lighting NOx production from the a priori chemical transport model

    NASA Astrophysics Data System (ADS)

    Smeltzer, C. D.; Wang, Y.; Zhao, C.; Boersma, F.

    2009-12-01

    Polar orbiting satellite retrievals of tropospheric nitrogen dioxide (NO2) columns are important to a variety of scientific applications. These NO2 retrievals rely on a priori profiles from chemical transport models and radiative transfer models to derive the vertical columns (VCs) from slant column measurements. In this work, we compare the retrieval results using a priori profiles from a global model (TM4) and a higher resolution regional model (REAM) at the OMI overpass hour of 1330 local time, implementing the Dutch OMI NO2 (DOMINO) retrieval. We also compare the retrieval results using a priori profiles from REAM model simulations with and without lightning NOx (NO + NO2) production. A priori model resolution and lightning NOx production are both found to have a large impact on the satellite retrievals, because they shift the NO2 vertical distribution interpreted by the radiative transfer model and thus alter the retrieval sensitivity to a particular observation. The retrieved tropospheric NO2 VCs may increase by 25-100% in urban regions and be reduced by 50% in rural regions if the a priori profiles from REAM simulations are used during the retrievals instead of the profiles from TM4 simulations. A priori profiles with lightning NOx may result in a 25-50% reduction of the retrieved tropospheric NO2 VCs compared to a priori profiles without lightning. As a first priority, a priori vertical NO2 profiles from a high-resolution chemical transport model, which can better simulate urban-rural NO2 gradients in the boundary layer and make use of observation-based parameterizations of lightning NOx production, should be implemented to obtain more accurate NO2 retrievals over the United States, where NOx source regions are spatially separated and lightning NOx production is significant. Given the variability of a priori NO2 profiles resulting from lightning and model resolution, daytime observations from a geostationary satellite with sufficient spatial resolution would be the next step towards a more complete NO2 data product. Both the improved retrieval algorithm and the proposed next-generation geostationary satellite observations would thus improve emission inventories, better validate model simulations, and help optimize region-specific ozone control strategies.

  14. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    PubMed

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
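
    The structure described above can be sketched as follows (a simplified stand-in, not the paper's exact NNLC or its stability proof): a PD feedback term plus an RBF network whose inputs are the reference signals, so that the approximation domain is fixed a priori by the reference trajectory; the plant, gains and adaptation law are illustrative only.

```python
# Minimal sketch (not the paper's exact NNLC): PD feedback plus an RBF-NN feedforward
# term driven by the *reference* signals, with a simple gradient-type weight adaptation.
import numpy as np

# RBF centers cover the known range of the reference trajectory (hence known a priori).
centers = np.linspace(-1.5, 1.5, 9)
width = 0.5

def rbf_features(ref, ref_dot):
    # Gaussian RBFs on a 2-D grid of centers built from the 1-D center list.
    cx, cy = np.meshgrid(centers, centers)
    return np.exp(-((ref - cx) ** 2 + (ref_dot - cy) ** 2) / (2 * width ** 2)).ravel()

kp, kd, gamma = 25.0, 10.0, 2.0          # PD gains and adaptation rate (illustrative)
w = np.zeros(centers.size ** 2)          # NN weights, adapted online

dt, x, x_dot = 0.001, 0.0, 0.0
for k in range(20000):
    t = k * dt
    ref, ref_dot = np.sin(t), np.cos(t)            # recurrent reference trajectory
    e, e_dot = ref - x, ref_dot - x_dot
    phi = rbf_features(ref, ref_dot)
    u = kp * e + kd * e_dot + w @ phi              # feedback servo + feedforward NN
    w = w + gamma * (e + e_dot) * phi * dt         # gradient-type adaptation law
    # Uncertain plant (unknown to the controller): x_ddot = u - 2*x_dot - 5*sin(x).
    x_ddot = u - 2.0 * x_dot - 5.0 * np.sin(x)
    x_dot += x_ddot * dt
    x += x_dot * dt

print(f"tracking error after {t + dt:.0f} s: {abs(np.sin(t + dt) - x):.4f}")
```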

  15. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R-package and a Java on-line tool developed at the EC-Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor analytical models (source identification) and the model performance evaluation. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profiles and time series similarity. In this study, a sensitivity analysis of the model performance criteria is accomplished using the results of a synthetic dataset where "a priori" references are available. The consensus modulated standard deviation punc gives the best choice for the model performance evaluation when a conservative approach is adopted.

  16. Housing and sexual health among street-involved youth.

    PubMed

    Kumar, Maya M; Nisenbaum, Rosane; Barozzino, Tony; Sgro, Michael; Bonifacio, Herbert J; Maguire, Jonathon L

    2015-10-01

    Street-involved youth (SIY) carry a disproportionate burden of sexually transmitted diseases (STD). Studies among adults suggest that improving housing stability may be an effective primary prevention strategy for improving sexual health. Housing options available to SIY offer varying degrees of stability and adult supervision. This study investigated whether housing options offering more stability and adult supervision are associated with fewer STD and related risk behaviors among SIY. A cross-sectional study was performed using public health survey and laboratory data collected from Toronto SIY in 2010. Three exposure categories were defined a priori based on housing situation: (1) stable and supervised housing, (2) stable and unsupervised housing, and (3) unstable and unsupervised housing. Multivariate logistic regression was used to test the association between housing category and current or recent STD. Secondary analyses were performed using the following secondary outcomes: blood-borne infection, recent binge-drinking, and recent high-risk sexual behavior. The final analysis included 184 SIY. Of these, 28.8% had a current or recent STD. The housing situation was stable and supervised for 12.5%, stable and unsupervised for 46.2%, and unstable and unsupervised for 41.3%. Compared to stable and supervised housing, neither stable and unsupervised housing nor unstable and unsupervised housing was significantly associated with current or recent STD. There was no significant association between housing category and risk of blood-borne infection, binge-drinking, or high-risk sexual behavior. Although we did not demonstrate a significant association between stable and supervised housing and lower STD risk, our incorporation of both housing stability and adult supervision into a priori defined exposure groups may inform future studies of housing-related prevention strategies among SIY. Multi-modal interventions beyond housing alone may also be required to prevent sexual morbidity among these vulnerable youth.

  17. Microstructural White Matter Alterations in the Corpus Callosum of Girls With Conduct Disorder.

    PubMed

    Menks, Willeke Martine; Furger, Reto; Lenz, Claudia; Fehlbaum, Lynn Valérie; Stadler, Christina; Raschle, Nora Maria

    2017-03-01

    Diffusion tensor imaging (DTI) studies in adolescent conduct disorder (CD) have demonstrated white matter alterations of tracts connecting functionally distinct fronto-limbic regions, but only in boys or mixed-gender samples. So far, no study has investigated white matter integrity in girls with CD on a whole-brain level. Therefore, our aim was to investigate white matter alterations in adolescent girls with CD. We collected high-resolution DTI data from 24 girls with CD and 20 typically developing control girls using a 3T magnetic resonance imaging system. Fractional anisotropy (FA) and mean diffusivity (MD) were analyzed for whole-brain as well as a priori-defined regions of interest, while controlling for age and intelligence, using a voxel-based analysis and an age-appropriate customized template. Whole-brain findings revealed white matter alterations (i.e., increased FA) in girls with CD bilaterally within the body of the corpus callosum, expanding toward the right cingulum and left corona radiata. The FA and MD results in a priori-defined regions of interest were more widespread and included changes in the cingulum, corona radiata, fornix, and uncinate fasciculus. These results were not driven by age, intelligence, or attention-deficit/hyperactivity disorder comorbidity. This report provides the first evidence of white matter alterations in female adolescents with CD as indicated through white matter reductions in callosal tracts. This finding enhances current knowledge about the neuropathological basis of female CD. An increased understanding of gender-specific neuronal characteristics in CD may influence diagnosis, early detection, and successful intervention strategies. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. Deformation integrity monitoring for GNSS positioning services including local, regional and large scale hazard monitoring - the Karlsruhe approach and software(MONIKA)

    NASA Astrophysics Data System (ADS)

    Jaeger, R.

    2007-05-01

    GNSS positioning services like SAPOS/ascos in Germany, and many others in Europe, America and worldwide, quickly come into interdisciplinary, country-wide use for precise geo-referencing, replacing traditional low-order geodetic networks. It therefore becomes necessary that possible changes of the reference stations' coordinates are detected ad hoc. The GNSS reference-station MONitoring by the KArlsruhe approach and software (MONIKA) is designed for that task. The developments at Karlsruhe University of Applied Sciences, in cooperation with the State Survey of Baden-Württemberg, are further motivated by the 2006 official resolution of the German state survey departments' association (Arbeitsgemeinschaft der Vermessungsverwaltungen Deutschland, AdV) on coordinate monitoring as a quality-control duty of the GNSS positioning service provider. The presented approach can - besides the coordinate control of GNSS positioning services - also be used to set up any GNSS service for the tasks of an area-wide geodynamical and natural disaster-prevention service. The mathematical model of the approach, which enables a multivariate and multi-epochal design, takes the GNSS observations of the service's RINEX data as input, followed by fully automatic processing of baselines and/or sessions, and a near-online set-up of epoch-state vectors and their covariance matrices in a rigorous 3D network adjustment. In large-scale and long-term monitoring situations, geodynamical standard trends (datum drift, plate movements, etc.) are accordingly considered and included in the mathematical model of MONIKA. The coordinate-based deformation monitoring, as the third step of the stepwise adjustments, is based on the above epoch-state vectors and - splitting off geodynamic trends - on a multivariate and multi-epochal congruency test. As long as no other information exists, all points are assumed to be stable and congruent reference points. Stations that are a priori assumed to be moving - in that way local monitoring areas can be included - are monitored and analyzed with reference to the stable reference points. In that way, a high sensitivity for the detection of GNSS station displacements can be achieved, both for assumed stable points and for a priori moving points. The results for the concept are shown for the example of a monitoring campaign using the MONIKA software in the 300 x 300 km area of the state of Baden-Württemberg, Germany.

  19. VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)

    NASA Astrophysics Data System (ADS)

    Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.

    2017-11-01

    t-SNE is a dimensionality reduction algorithm that is particularly well suited for the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper) are given, ordered by similarity. (3 data files).
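
    The visualization step can be reproduced in outline as follows (assuming scikit-learn and a stand-in random distance matrix, not the catalog's actual spectra or distances):

```python
# Sketch of the visualization step: project a precomputed distance matrix to 2-D
# with t-SNE (assumes scikit-learn; not the catalog's actual pipeline or data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import TSNE

# Stand-in for the spectral distance matrix: pairwise distances between random "spectra".
rng = np.random.default_rng(0)
spectra = rng.standard_normal((300, 50))
dist = squareform(pdist(spectra))            # symmetric (n_objects x n_objects) matrix

tsne = TSNE(n_components=2, metric="precomputed", init="random", perplexity=30,
            random_state=0)
embedding = tsne.fit_transform(dist)         # 2-D map of the objects
print(embedding.shape)                       # (300, 2)

# Nearest neighbours by distance (analogous to listing the most similar spectra per star):
most_similar = np.argsort(dist, axis=1)[:, 1:100]   # indices of the 99 closest objects
```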

  20. Establishment of Biological Reference Intervals and Reference Curve for Urea by Exploratory Parametric and Non-Parametric Quantile Regression Models.

    PubMed

    Sarkar, Rajarshi

    2013-07-01

    The validity of the entire set of renal function tests as a diagnostic tool depends substantially on the Biological Reference Interval (BRI) of urea. Establishment of the BRI of urea is difficult, partly because the exclusion criteria for selection of reference data are quite rigid and partly due to the compartmentalization considerations regarding age and sex of the reference individuals. Moreover, construction of the Biological Reference Curve (BRC) of urea is imperative to highlight the partitioning requirements. This a priori study examines data collected by measuring serum urea of 3202 age- and sex-matched individuals, aged between 1 and 80 years, by a kinetic UV Urease/GLDH method on a Roche Cobas 6000 auto-analyzer. The Mann-Whitney U test of the reference data confirmed the partitioning requirement by both age and sex. Further statistical analysis revealed the incompatibility of the data with a proposed parametric model. Hence the data were analysed non-parametrically. The BRI was found to be identical for both sexes until the 2nd decade, and the BRI for males increased progressively from the 6th decade onwards. Four non-parametric models were postulated for construction of the BRC: Gaussian kernel, double kernel, local mean and local constant, of which the last one generated the best-fitting curves. Clinical decision making should become easier and the diagnostic implications of renal function tests should become more meaningful if this BRI is followed and the BRC is used as a desktop tool in conjunction with similar data for serum creatinine.
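
    As a crude stand-in for the non-parametric models mentioned above (not the paper's kernel or local-constant estimators, and using synthetic data), age-resolved reference limits can be sketched as sliding-window percentiles:

```python
# Crude stand-in (synthetic data; not the paper's models): age-resolved reference limits
# taken as the 2.5th and 97.5th percentiles of urea within sliding age windows.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(1, 80, 3000)
# Synthetic male-like data: urea rising gently after the 6th decade, as described above.
urea = rng.normal(25 + 0.15 * np.clip(age - 50, 0, None), 5)

window = 10.0   # width of the sliding age window in years
ages_eval = np.arange(5, 80, 5)
for a in ages_eval:
    sel = urea[np.abs(age - a) <= window / 2]
    lo, hi = np.percentile(sel, [2.5, 97.5])
    print(f"age {a:2.0f}: reference interval {lo:5.1f} - {hi:5.1f} (synthetic units)")
```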

  1. Staging on the Internet: research on online photo album users in Taiwan with the spectacle/performance paradigm.

    PubMed

    Hsu, Chiung-wen

    2007-08-01

    This study explores the motivations of online photo album users in Taiwan and the distinctive "staging" phenomenon, using media gratifications and an a priori theoretical framework, the spectacle/performance paradigm (SPP). Media drenching, performance, function and reference are "new" gratifications for which no prior research was found. These gratifications are consistent with the argument of the "diffused audience" on the Internet. This study verifies that the process-content distinction may not be applicable in the Internet setting because distinctions between the real world and the mediated world are vanishing, which is also the main argument of the SPP paradigm.

  2. Refined discrete and empirical horizontal gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Böhm, Johannes

    2018-02-01

    Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, there is also the possibility to determine the gradients beforehand from different data sources than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values referred to as GRAD for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489-20502, 1997. https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017. https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, as they are fully consistent with each other. From VLBI analyses with the Vienna VLBI and Satellite Software (VieVS), it becomes evident that baseline length repeatabilities (BLRs) are improved on average by 5% when using a priori gradients GRAD instead of estimating the gradients. The reason for this improvement is that the gradient estimation yields poor results for VLBI sessions with a small number of observations, while the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. Although it is able to describe only the systematic component of azimuthal asymmetry and no short-term variations at all, even these empirical a priori gradients slightly reduce (improve) the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are actually more important for VLBI analysis than previously assumed, as particularly the discrete model GRAD as well as the empirical model GPT3 are indeed able to refine and improve the results.
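
    For reference, the commonly used gradient form by Chen and Herring (1997), on which the GRAD values are based, can be written as follows (a sketch; the constant C ≈ 0.0032 is the value usually quoted for total gradients, and the paper's higher-order models extend the azimuthal expansion):

```latex
% Standard form of the Chen & Herring (1997) horizontal gradient model (sketch).
% G_N, G_E: north and east gradients; a: azimuth; e: elevation angle.
\Delta L_{\mathrm{grad}}(a, e) = m_{\mathrm{g}}(e)\,\bigl[G_N \cos a + G_E \sin a\bigr],
\qquad
m_{\mathrm{g}}(e) = \frac{1}{\sin e \tan e + C}, \quad C \approx 0.0032 .
```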

  3. The gravity field model IGGT_R1 based on the second invariant of the GOCE gravitational gradient tensor

    NASA Astrophysics Data System (ADS)

    Lu, Biao; Luo, Zhicai; Zhong, Bo; Zhou, Hao; Flechtner, Frank; Förste, Christoph; Barthelmes, Franz; Zhou, Rui

    2017-11-01

    Based on tensor theory, three invariants of the gravitational gradient tensor (IGGT) are independent of the gradiometer reference frame (GRF). Compared to traditional methods for calculation of gravity field models based on the Gravity field and steady-state Ocean Circulation Explorer (GOCE) data, which are affected by errors in the attitude indicator, using IGGT and the least squares method avoids the problem of inaccurate rotation matrices. The IGGT approach as studied in this paper is a quadratic function of the gravity field model's spherical harmonic coefficients. The linearized observation equations for the least squares method are obtained using a Taylor expansion, and the weighting equation is derived using the law of error propagation. We also investigate the linearization errors using existing gravity field models and find that this error can be ignored, since the a-priori model EIGEN-5C used here is sufficiently accurate. One problem when using this approach is that it needs all six independent gravitational gradients (GGs), but the components V_{xy} and V_{yz} of GOCE are of lower quality due to the less sensitive axes of the GOCE gradiometer. Therefore, we use synthetic GGs, derived from the a-priori gravity field model EIGEN-5C, for these two inaccurate gravitational gradient components. Another problem is that the GOCE GGs are measured in a band-limited manner. Therefore, a forward and backward finite impulse response band-pass filter is applied to the data, which also eliminates filter-caused phase changes. The spherical cap regularization approach (SCRA) and the Kaula rule are then applied to solve the polar gap problem caused by GOCE's inclination of 96.7°. With the techniques described above, a degree/order 240 gravity field model called IGGT_R1 is computed. Since the synthetic components of V_{xy} and V_{yz} are not band-pass filtered, the signals outside the measurement bandwidth are replaced by the a-priori model EIGEN-5C. Therefore, this model is practically a combined gravity field model which contains GOCE GG signals and long wavelength signals from the a-priori model EIGEN-5C. Finally, IGGT_R1's accuracy is evaluated by comparison with other gravity field models in terms of difference degree amplitudes, the geostrophic velocity in the Agulhas current area, and gravity anomaly differences, as well as by comparison to GNSS/leveling data.
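
    For context, the three frame-independent invariants of the symmetric gravitational gradient tensor V are the standard matrix invariants (stated here as a reminder, not as the paper's derivation; in vacuum, Laplace's equation makes the first invariant vanish):

```latex
% The three frame-independent invariants of the (symmetric) gravitational gradient tensor V.
% In vacuum, Laplace's equation gives I_1 = 0.
I_1 = \operatorname{tr}(V) = V_{xx} + V_{yy} + V_{zz},
\qquad
I_2 = \tfrac{1}{2}\bigl[(\operatorname{tr} V)^2 - \operatorname{tr}(V^2)\bigr],
\qquad
I_3 = \det(V).
```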

  4. Orbit determination of the Next-Generation Beidou satellites with Intersatellite link measurements and a priori orbit constraints

    NASA Astrophysics Data System (ADS)

    Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe

    2017-11-01

    Intersatellite Link (ISL) technology helps to realize the autonomous updating of broadcast ephemeris and clock error parameters for Global Navigation Satellite Systems (GNSS). ISL constitutes an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination might lead to constellation drift and rotation, and even to divergence of the orbit determination. Fortunately, predicted orbits with good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of autonomous satellite orbit determination can be improved by taking a priori orbit information into account, and vice versa. However, the rotation and translation errors in the a priori orbit will remain in the ultimate result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements eliminating satellite clock errors is presented, and the orbit determination precision is analyzed with different data processing backgrounds. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of the determined BDS orbits is improved by constraints with more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry tracking and control system), and is better than 6 m with precise a priori orbit constraints of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring and Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs. Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane and out-of-plane links have different contributions to the observation configuration and system observability. POD with a weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
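
    A minimal sketch, under generic assumptions, of how an a priori orbit constraint enters a least squares adjustment: the predicted orbit acts as a pseudo-observation of the parameters with its own weight matrix, so a tighter (more precise) a priori orbit pulls the solution more strongly. This is a generic weighted least squares illustration, not the authors' actual POD estimator.

    ```python
    import numpy as np

    def constrained_lsq(A, y, W, x_prior, W_prior):
        # Minimise (y - A x)^T W (y - A x) + (x - x_prior)^T W_prior (x - x_prior):
        # the a priori term plays the role of the predicted-orbit constraint.
        N = A.T @ W @ A + W_prior
        b = A.T @ W @ y + W_prior @ x_prior
        return np.linalg.solve(N, b)

    # Toy example: 3 unknowns, 4 observations, weak vs. strong a priori weight.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    y = A @ x_true + 0.01 * rng.standard_normal(4)
    W = np.eye(4)
    x_prior = np.array([0.9, -1.9, 0.6])
    for sigma_prior in (100.0, 0.1):                 # loose, then tight constraint
        W_prior = np.eye(3) / sigma_prior**2
        print(sigma_prior, constrained_lsq(A, y, W, x_prior, W_prior))
    ```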

  5. Zonal management of arsenic contaminated ground water in Northwestern Bangladesh.

    PubMed

    Hill, Jason; Hossain, Faisal; Bagtzoglou, Amvrossios C

    2009-09-01

    This paper used ordinary kriging to spatially map arsenic contamination in shallow aquifers of Northwestern Bangladesh (total area approximately 35,000 km(2)). The Northwestern region was selected because it represents a relatively safer source of large-scale and affordable water supply for the rest of Bangladesh, which is currently faced with extensive arsenic contamination in drinking water (such as the Southern regions). Hence, the work explored sustainability issues by building upon a previously published study (Hossain et al., 2007; Water Resources Management, vol. 21: 1245-1261) in which a more general nation-wide assessment afforded by kriging was presented. The arsenic database for reference comprised the nation-wide survey (of 3534 drinking wells) completed in 1999 by the British Geological Survey (BGS) in collaboration with the Department of Public Health Engineering (DPHE) of Bangladesh. Randomly sampled networks of zones from this reference database were used to develop an empirical variogram and to produce maps of zonal arsenic concentration for the Northwestern region. The remaining non-sampled zones from the reference database were used to assess the accuracy of the kriged maps. Two additional criteria were explored: (1) the ability of geostatistical interpolators such as kriging to extrapolate information on the spatial structure of arsenic contamination beyond small-scale exploratory domains; (2) the impact of a priori knowledge of anisotropic variability on the effectiveness of geostatistically based management. On average, the kriging method was found to have a 90% probability of successful prediction of safe zones according to the WHO safe limit of 10 ppb, while for the Bangladesh safe limit of 50 ppb the safe-zone prediction probability was 97%. Compared to the previous study by Hossain et al. (2007) over the rest of the contaminated countryside, the probability of successful detection of safe zones in the Northwest is observed to be about 25% higher. An a priori knowledge of anisotropy was found to have an inconclusive impact on the effectiveness of kriging. It was, however, hypothesized that a preferential sampling strategy that honored anisotropy could be necessary to reach a more definitive conclusion on this issue.
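
    A minimal ordinary kriging sketch in the spirit of the mapping described above, assuming an exponential variogram with purely illustrative parameters (the study fitted its empirical variogram to the BGS/DPHE data):

    ```python
    import numpy as np

    def exp_variogram(h, sill=1.0, corr_len=20.0, nugget=0.1):
        # Assumed exponential variogram model; parameters are illustrative only.
        return nugget + sill * (1.0 - np.exp(-h / corr_len))

    def ordinary_krige(xy_obs, z_obs, xy_target):
        # Ordinary kriging of a single target location from observed points.
        n = len(z_obs)
        d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
        K = np.ones((n + 1, n + 1))
        K[:n, :n] = exp_variogram(d)
        np.fill_diagonal(K[:n, :n], 0.0)        # gamma(0) = 0 by definition
        K[n, n] = 0.0
        rhs = np.ones(n + 1)
        rhs[:n] = exp_variogram(np.linalg.norm(xy_obs - xy_target, axis=-1))
        w = np.linalg.solve(K, rhs)             # kriging weights + Lagrange multiplier
        return float(w[:n] @ z_obs)

    # Toy data: arsenic concentrations (ppb) at four wells, prediction at (5, 5) km.
    xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    z = np.array([5.0, 60.0, 8.0, 45.0])
    print(ordinary_krige(xy, z, np.array([5.0, 5.0])))
    ```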

  6. Are we closer to the vision? A proposed framework for incorporating omics into environmental assessments.

    PubMed

    Martyniuk, Christopher J

    2018-04-01

    Environmental science has benefited a great deal from omics-based technologies. High-throughput toxicology has defined adverse outcome pathways (AOPs), prioritized chemicals of concern, and identified novel actions of environmental chemicals. While many of these approaches are conducted under rigorous laboratory conditions, a significant challenge has been the interpretation of omics data in "real-world" exposure scenarios. A lack of clarity in the interpretation of these data limits their use in environmental monitoring programs. In recent years, one overarching objective of many researchers has been to address fundamental questions concerning experimental design and the robustness of data collected under the broad umbrella of environmental genomics. These questions include: (1) the likelihood that molecular profiles return to a predefined baseline level following remediation efforts, (2) how reference site selection in an urban environment influences interpretation of omics data and (3) what is the most appropriate species to monitor in the environment from an omics point of view. In addition, inter-genomics studies have been conducted to assess transcriptome reproducibility in toxicology studies. One lesson learned from inter-genomics studies is that there are core molecular networks that can be identified by multiple laboratories using the same platform. This supports the idea that "omics networks" defined a priori may be a viable approach moving forward for evaluating environmental impacts over time. Both spatial and temporal variability in ecosystem structure is expected to influence molecular responses to environmental stressors, and it is important to recognize how these variables, as well as individual factors (e.g. sex, age, maturation), may confound interpretation of network responses to chemicals. This mini-review synthesizes the progress made towards adopting these tools into environmental monitoring and identifies future challenges to be addressed as we move into the next era of high-throughput sequencing. A conceptual framework for validating and incorporating molecular networks into environmental monitoring programs is proposed. As AOPs become more defined and their potential in environmental monitoring assessments becomes more recognized, the AOP framework may prove to be the conduit between omics and penultimate ecological responses for environmental risk assessments. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).

    PubMed

    Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie

    2017-01-01

    This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.

  8. Shape: A 3D Modeling Tool for Astrophysics.

    PubMed

    Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus

    2011-04-01

    We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.

  9. On some properties of force-free magnetic fields in infinite regions of space

    NASA Technical Reports Server (NTRS)

    Aly, J. J.

    1984-01-01

    Techniques for solving boundary value problems (BVP) for a force free magnetic field (FFF) in infinite space are presented. A priori inequalities are defined which must be satisfied by the force-free equations. It is shown that upper bounds may be calculated for the magnetic energy of the region provided the value of the magnetic normal component at the boundary of the region can be shown to decay sufficiently fast at infinity. The results are employed to prove a nonexistence theorem for the BVP for the FFF in the spatial region. The implications of the theory for modeling the origins of solar flares are discussed.

  10. Optimizing the design of vertical seismic profiling (VSP) for imaging fracture zones over hardrock basement geothermal environments

    NASA Astrophysics Data System (ADS)

    Reiser, Fabienne; Schmelzbach, Cedric; Maurer, Hansruedi; Greenhalgh, Stewart; Hellwig, Olaf

    2017-04-01

    A primary focus of geothermal seismic imaging is to map dipping faults and fracture zones that control rock permeability and fluid flow. Vertical seismic profiling (VSP) is therefore a most valuable means to image the immediate surroundings of an existing borehole in order to guide, for example, the placing of new boreholes to optimize production from known faults and fractures. We simulated 2D and 3D acoustic synthetic seismic data and processed them through to pre-stack depth migration in order to optimize VSP survey layouts for mapping moderately to steeply dipping fracture zones within possible basement geothermal reservoirs. Our VSP survey optimization procedure for sequentially selecting source locations, to define the area where source points are best located for optimal imaging, makes use of a cross-correlation statistic, by which a subset of migrated shot gathers is compared with a target or reference image obtained from a comprehensive set of source gathers. In geothermal exploration at established sites, it is reasonable to assume that sufficient a priori information is available to construct such a target image. We generally obtained good results with a relatively small number of optimally chosen source positions distributed over an ideal source location area for different fracture zone scenarios (different dips, azimuths, and distances from the surveying borehole). Adding further sources outside the optimal source area did not necessarily improve the results, but rather resulted in image distortions. It was found that fracture zones located at borehole-receiver depths and laterally offset from the borehole by 300 m can be imaged reliably for a range of different dips, but more source positions and large offsets between the sources and the borehole are required for imaging steeply dipping interfaces. When such features cross-cut the borehole, they are particularly difficult to image. For fracture zones with different azimuths, 3D effects are observed. Far-offset source positions contribute less to the image quality as the fracture zone azimuth increases. Our optimization methodology is best suited for designing future field surveys with a favorable benefit-cost ratio in areas with significant a priori knowledge. Moreover, our optimization workflow is valuable for selecting useful subsets of acquired data for optimum target-oriented processing.

  11. Outlier analysis of functional genomic profiles enriches for oncology targets and enables precision medicine.

    PubMed

    Zhu, Zhou; Ihle, Nathan T; Rejto, Paul A; Zarrinkar, Patrick P

    2016-06-13

    Genome-scale functional genomic screens across large cell line panels provide a rich resource for discovering tumor vulnerabilities that can lead to the next generation of targeted therapies. Their data analysis has typically focused on identifying genes whose knockdown enhances response in various pre-defined genetic contexts, which are limited by biological complexities as well as the incompleteness of our knowledge. We thus introduce a complementary data mining strategy to identify genes with exceptional sensitivity in subsets, or outlier groups, of cell lines, allowing an unbiased analysis without any a priori assumption about the underlying biology of dependency. Genes with outlier features are strongly and specifically enriched with those known to be associated with cancer and relevant biological processes, despite no a priori knowledge being used to drive the analysis. Identification of exceptional responders (outliers) may lead not only to new candidates for therapeutic intervention, but also to tumor indications and response biomarkers for companion precision medicine strategies. Several tumor suppressors have an outlier sensitivity pattern, supporting and generalizing the notion that tumor suppressors can play context-dependent oncogenic roles. The novel application of outlier analysis described here demonstrates a systematic and data-driven analytical strategy to decipher large-scale functional genomic data for oncology target and precision medicine discoveries.
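
    A minimal sketch of one way such an outlier-group analysis can be framed: for each gene, flag cell lines whose sensitivity score lies far below the robust centre of the panel. The median/MAD statistic and thresholds here are illustrative assumptions, not the exact statistic used in the study.

    ```python
    import numpy as np

    def outlier_cell_lines(scores, n_mads=3.0, min_group=3):
        # Flag cell lines whose knockdown sensitivity score for one gene lies far
        # below the robust centre of the panel (more negative = more sensitive).
        # A gene is called an 'outlier gene' only if at least min_group lines are
        # flagged.  Thresholds are illustrative, not the published statistic.
        med = np.median(scores)
        mad = np.median(np.abs(scores - med)) or 1e-9
        flags = scores < med - n_mads * 1.4826 * mad
        return flags if flags.sum() >= min_group else np.zeros_like(flags, bool)

    # Toy panel: 20 cell lines, 4 of them exceptionally sensitive to knockdown.
    rng = np.random.default_rng(1)
    panel = rng.normal(0.0, 0.3, 20)
    panel[[2, 7, 11, 15]] -= 3.0
    print(np.where(outlier_cell_lines(panel))[0])
    ```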

  12. Optimal phase estimation with arbitrary a priori knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal

    2011-06-15

    The optimal phase-estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  13. Intervention fidelity in primary care complex intervention trials: qualitative study using telephone interviews of patients and practitioners.

    PubMed

    Dyas, Jane V; Togher, Fiona; Siriwardena, A Niroshan

    2014-01-01

    Treatment fidelity has previously been defined as the degree to which a treatment or intervention is delivered to participants as intended. Underreporting of fidelity in primary care randomised controlled trials (RCTs) of complex interventions reduces our confidence that findings are due to the treatment or intervention being investigated, rather than unknown confounders. We aimed to investigate treatment fidelity (for the purpose of this paper, hereafter referred to as intervention fidelity) of an educational intervention delivered to general practice teams and designed to improve the primary care management of insomnia. We conducted telephone interviews with patients and practitioners participating in the intervention arm of the trial to explore trial fidelity. Qualitative analysis was undertaken using constant comparison and a priori themes (categories): 'adherence to the delivery of the intervention', 'patients received and understood intervention' and 'patient enactment'. If the intervention protocol was not adhered to by the practitioner, then patient receipt, understanding and enactment levels were reduced. Recruitment difficulties, in terms of the gap between initially being recruited into the study and attending an intervention consultation, also reduced the effectiveness of the intervention. Patient attributes such as motivation to learn and engage contributed to the success of the uptake of the intervention. Qualitative methods using brief telephone interviews are an effective way of collecting the depth of data required to assess intervention fidelity. Intervention fidelity monitoring should be an important element of definitive trial design. Trial registration: ISRCTN 55001433 - www.controlled-trials.com/isrctn55001433.

  14. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for the calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  15. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
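
    A minimal sketch of why a non-uniform a priori density shortens acquisition: if cells are searched in order of decreasing prior probability rather than swept uniformly, the expected number of cells examined before the true code phase is found drops substantially. This single-dwell, no-miss toy model is an illustrative assumption, not the DSN analysis of the report.

    ```python
    import numpy as np

    def expected_cells_searched(prior, order):
        # Expected number of code cells examined before the true cell is reached,
        # when cells are visited in the given order and the true cell is drawn
        # from `prior` (single dwell, no missed detections -- an idealisation).
        ranks = np.empty_like(order)
        ranks[order] = np.arange(1, len(order) + 1)
        return float(np.sum(prior * ranks))

    n = 1001
    cells = np.arange(n)
    prior = np.exp(-0.5 * ((cells - n // 2) / 60.0) ** 2)   # Gaussian a priori pdf
    prior /= prior.sum()
    uniform_sweep = cells                                    # left-to-right sweep
    prior_ordered = np.argsort(-prior)                       # most probable cells first
    print(expected_cells_searched(prior, uniform_sweep))
    print(expected_cells_searched(prior, prior_ordered))     # markedly fewer cells
    ```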

  16. Fuzzy logic of Aristotelian forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlovsky, L.I.

    1996-12-31

    Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a-priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: the theory of Forms and logic. While the theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining a-priority and adaptivity, implements the Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.

  17. The Quality of Life in Hand Eczema Questionnaire (QOLHEQ): validation of the German version of a new disease-specific measure of quality of life for patients with hand eczema.

    PubMed

    Ofenloch, R F; Weisshaar, E; Dumke, A-K; Molin, S; Diepgen, T L; Apfelbacher, C

    2014-08-01

    Health-related quality of life (HRQOL) is widely used as a patient-reported outcome to evaluate clinical trials. In routine care it can also be used to improve treatment strategies or to enhance patients' self-awareness and empowerment. Therefore a disease-specific instrument is needed that assesses in detail all the impairments caused by the disease of interest. For patients with hand eczema (HE) such an instrument was developed by an international expert group, but its measurement properties are unknown. To validate the German version of the Quality of Life in Hand Eczema Questionnaire (QOLHEQ), which covers the domains of (i) symptoms, (ii) emotions, (iii) functioning and (iv) treatment and prevention. The QOLHEQ was assessed up to three times in 316 patients with HE to test reliability and sensitivity to change. To test construct validity we also assessed several reference measures. The scale structure was analysed using the Rasch model for each subscale, and a structural equation model was used to test the multi-domain structure of the QOLHEQ. After minor adaptations of the scoring structure, all four subscales of the QOLHEQ did not significantly misfit the Rasch model (α > 0.05). The fit indices of the structural equation model showed a good fit of the multi-domain construct with four subscales assessing HRQOL. Nearly all a priori-defined hypotheses relating to construct validity could be confirmed. The QOLHEQ showed a sensitivity to change that was superior to that of all reference measures. The QOLHEQ is ready to be used in its German version as a sensitive outcome measure in clinical trials and for routine monitoring. The treatment-relevant subscales enable its use to enhance patients' self-awareness and to monitor treatment decisions. © 2014 British Association of Dermatologists.

  18. TIGA Tide Gauge Data Reprocessing at GFZ

    NASA Astrophysics Data System (ADS)

    Deng, Zhiguo; Schöne, Tilo; Gendt, Gerd

    2014-05-01

    To analyse tide gauge measurements for the purpose of global long-term sea level change research, a well-defined absolute reference frame is required by the oceanographic community. To create such a frame, data from a global GNSS network located at or near tide gauges are processed. The International GNSS Service (IGS) Tide Gauge Benchmark Monitoring Working Group (TIGA-WG) is responsible for analyzing these GNSS data on a preferably continuous basis. As one of the TIGA Analysis Centers, the German Research Centre for Geosciences (GFZ) is contributing to the IGS TIGA Reprocessing Campaign. The solutions of the TIGA Reprocessing Campaign will also contribute to the 2nd IGS Data Reprocessing Campaign as the GFZ IGS reprocessing solution. After the first IGS reprocessing finished in 2010, several improvements were implemented in the latest GFZ software version EPOS.P8: the reference frame IGb08 based on ITRF2008, the antenna calibration igs08.atx, the geopotential model EGM2008, higher-order ionospheric effects, a new a priori meteorological model (GPT2), the VMF mapping function, and other minor improvements. GPS data of the globally distributed tracking network of 794 stations for the time span from 1994 until the end of 2012 are used for the TIGA reprocessing. To handle such a large network, a new processing strategy was developed and is described in detail. In the TIGA reprocessing, the GPS@TIGA data are processed in precise point positioning (PPP) mode to clean the data, using the IGS reprocessing orbit and clock products. To validate the quality of the PPP coordinate results, the vertical movement rates of 80 GPS@TIGA stations are estimated from the PPP results using the Maximum Likelihood Estimation (MLE) method. The rates are compared with the solution of the University of La Rochelle Consortium (ULR) (named ULR5). 56 of the 80 stations have a difference in vertical velocity below 1 mm/yr. The error bars of the PPP rates are significantly larger than those of ULR5, which indicates large time-correlated noise in the PPP solutions.

  19. Adolescent physical activity and health: a systematic review.

    PubMed

    Hallal, Pedro C; Victora, Cesar G; Azevedo, Mario R; Wells, Jonathan C K

    2006-01-01

    Physical activity in adolescence may contribute to the development of healthy adult lifestyles, helping reduce chronic disease incidence. However, definition of the optimal amount of physical activity in adolescence requires addressing a number of scientific challenges. This article reviews the evidence on short- and long-term health effects of adolescent physical activity. Systematic reviews of the literature were undertaken using a reference period between 2000 and 2004, based primarily on the MEDLINE/PubMed database. Relevant studies were identified by examination of titles, abstracts and full papers, according to inclusion criteria defined a priori. A conceptual framework is proposed to outline how adolescent physical activity may contribute to adult health, including the following pathways: (i) pathway A--tracking of physical activity from adolescence to adulthood; (ii) pathway B--direct influence of adolescent physical activity on adult morbidity; (iii) pathway C--role of physical activity in treating adolescent morbidity; and (iv) pathway D - short-term benefits of physical activity in adolescence on health. The literature reviews showed consistent evidence supporting pathway 'A', although the magnitude of the association appears to be moderate. Thus, there is an indirect effect on all health benefits resulting from adult physical activity. Regarding pathway 'B', adolescent physical activity seems to provide long-term benefits on bone health, breast cancer and sedentary behaviours. In terms of pathway 'C', water physical activities in adolescence are effective in the treatment of asthma, and exercise is recommended in the treatment of cystic fibrosis. Self-esteem is also positively affected by adolescent physical activity. Regarding pathway 'D', adolescent physical activity provides short-term benefits; the strongest evidence refers to bone and mental health. Appreciation of different mechanisms through which adolescent physical activity may influence adult health is essential for drawing recommendations; however, the amount of exercise needed for achieving different benefits may vary. Physical activity promotion must start in early life; although the 'how much' remains unknown and needs further research, the lifelong benefits of adolescent physical activity on adult health are unequivocal.

  20. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246

  1. Development of a harmonised multi sensor retrieval scheme for HCHO within the Quality Assurance For Essential Climate Variables (QA4ECV) project

    NASA Astrophysics Data System (ADS)

    De Smedt, Isabelle; Richter, Andreas; Beirle, Steffen; Danckaert, Thomas; Van Roozendael, Michel; Yu, Huan; Bösch, Tim; Hilboll, Andreas; Peters, Enno; Doerner, Steffen; Wagner, Thomas; Wang, Yang; Lorente, Alba; Eskes, Henk; Van Geffen, Jos; Boersma, Folkert

    2016-04-01

    One of the main goals of the QA4ECV project is to define community best practices for the generation of multi-decadal ECV data records from satellite instruments. QA4ECV will develop retrieval algorithms for the Land ECVs surface albedo, leaf area index (LAI), and fraction of absorbed photosynthetically active radiation (fAPAR), as well as for the Atmosphere ECV ozone and aerosol precursors nitrogen dioxide (NO2), formaldehyde (HCHO), and carbon monoxide (CO). Here we assess best practices and provide recommendations for the retrieval of HCHO. Best practices are established based on (1) a detailed intercomparison exercise between the QA4ECV partners for each specific algorithm processing step, (2) the feasibility of implementation, and (3) the requirement to generate consistent multi-sensor, multi-decadal data records. We propose a fitting window covering the 328.5-346 nm spectral interval for the morning sensors (GOME, SCIAMACHY and GOME-2) and an extension to 328.5-359 nm for OMI and GOME-2, allowed by the improved quality of the recorded spectra. A high level of consistency between the groups' algorithms is found when the retrieval settings are carefully aligned. However, the retrieval of slant columns is highly sensitive to any change in the selected settings. The use of a mean background radiance as the DOAS reference spectrum allows for a stabilization of the retrievals. A background correction based on the reference sector method is recommended for implementation in the QA4ECV HCHO algorithm as it further reduces retrieval uncertainties. HCHO AMFs computed with different radiative transfer codes show a good overall consistency when harmonized settings are used. As for NO2, it is proposed to use a priori HCHO profiles from the TM5 model. These are provided on a 1°x1° latitude-longitude grid.

  2. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often, no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate by between one and ten centimeters from tachymeter reference measurements.

  3. 11 CFR 109.21 - What is a “coordinated communication”?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 100.29. (2) A public communication, as defined in 11 CFR 100.26, that disseminates, distributes, or... public communication, as defined in 11 CFR 100.26, that expressly advocates, as defined in 11 CFR 100.22... section: (i) References to House and Senate candidates. The public communication refers to a clearly...

  4. 11 CFR 109.21 - What is a “coordinated communication”?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 100.29. (2) A public communication, as defined in 11 CFR 100.26, that disseminates, distributes, or... public communication, as defined in 11 CFR 100.26, that expressly advocates, as defined in 11 CFR 100.22... section: (i) References to House and Senate candidates. The public communication refers to a clearly...

  5. 11 CFR 109.21 - What is a “coordinated communication”?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 100.29. (2) A public communication, as defined in 11 CFR 100.26, that disseminates, distributes, or... public communication, as defined in 11 CFR 100.26, that expressly advocates, as defined in 11 CFR 100.22... section: (i) References to House and Senate candidates. The public communication refers to a clearly...

  6. 11 CFR 109.21 - What is a “coordinated communication”?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 100.29. (2) A public communication, as defined in 11 CFR 100.26, that disseminates, distributes, or... public communication, as defined in 11 CFR 100.26, that expressly advocates, as defined in 11 CFR 100.22... section: (i) References to House and Senate candidates. The public communication refers to a clearly...

  7. Anomaly transform methods based on total energy and ocean heat content norms for generating ocean dynamic disturbances for ensemble climate forecasts

    NASA Astrophysics Data System (ADS)

    Romanova, Vanya; Hense, Andreas

    2017-08-01

    In our study we use the anomaly transform (AT), a special case of the ensemble transform method, in which a selected set of initial oceanic anomalies in space, time and variables is defined and orthogonalized. The resulting orthogonal perturbation patterns are designed such that they pick up typical balanced anomaly structures in space and time and between variables. The metric used to set up the eigenproblem is taken either as the weighted total energy, with its zonal and meridional kinetic and available potential energy terms having equal contributions, or as the weighted ocean heat content, in which a disturbance is applied only to the initial temperature fields. The choices of a reference state for defining the initial anomalies are such that perturbations on either seasonal or interannual timescales are constructed. These project a priori only onto the slow modes of the ocean physical processes, such that the disturbances grow mainly in the Western Boundary Currents, in the Antarctic Circumpolar Current and in the El Niño Southern Oscillation regions. An additional set of initial conditions is designed to fit, in a least squares sense, data from a global ocean reanalysis. Applying the AT-produced sets of disturbances to oceanic initial conditions initialized by observations of the MPIOM-ESM coupled model at T63L47/GR15 resolution, four ensemble experiments and one hindcast experiment were performed. The weighted total energy norm is used to monitor the amplitudes and rates of the fastest growing error modes. The results showed a minor dependence of the instabilities, or error growth, on the selected metric, but a considerable change due to the magnitude of the scaling amplitudes of the perturbation patterns. In contrast to similar atmospheric applications, we find an energy conversion from kinetic to available potential energy, which suggests a different source of uncertainty generation in the ocean than in the atmosphere, mainly associated with changes in the density field.

  8. On the numerical treatment of selected oscillatory evolutionary problems

    NASA Astrophysics Data System (ADS)

    Cardone, Angelamaria; Conte, Dajana; D'Ambrosio, Raffaele; Paternoster, Beatrice

    2017-07-01

    We focus on evolutionary problems whose qualitative behaviour is known a-priori and exploited in order to provide efficient and accurate numerical schemes. For classical numerical methods, depending on constant coefficients, the required computational effort can be quite heavy, due to the very small stepsizes needed to accurately reproduce the qualitative behaviour of the solution. In these situations, it may be convenient to use special purpose formulae, i.e. non-polynomially fitted formulae based on basis functions adapted to the problem (see [16, 17] and references therein). We show examples of special purpose strategies to solve two families of evolutionary problems exhibiting periodic solutions, i.e. partial differential equations and Volterra integral equations.

  9. Orbit computation of the TELECOM-2D satellite with a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Deleflie, Florent; Coulot, David; Vienne, Alain; Decosta, Romain; Richard, Pascal; Lasri, Mohammed Amjad

    2014-07-01

    In order to test a preliminary orbit determination method, we fit an orbit of the geostationary satellite TELECOM-2D as if we did not know any a priori information on its trajectory. The method is based on a genetic algorithm coupled to an analytical propagator of the trajectory, which is used over a couple of days and which uses a whole set of altazimuthal data acquired by the tracking network made up of the two TAROT telescopes. The adjusted orbit is then compared to a numerical reference. The method is described, and the results are analyzed, as a step towards an operational method of preliminary orbit determination for uncatalogued objects.

  10. Venus spherical harmonic gravity model to degree and order 60

    NASA Technical Reports Server (NTRS)

    Konopliv, Alex S.; Sjogren, William L.

    1994-01-01

    The Magellan and Pioneer Venus Orbiter radiometric tracking data sets have been combined to produce a 60th degree and order spherical harmonic gravity field. The Magellan data include the high-precision X-band gravity tracking from September 1992 to May 1993 and post-aerobraking data up to January 5, 1994. Gravity models are presented from the application of Kaula's power rule for Venus and an alternative a priori method using surface accelerations. Results are given as vertical gravity acceleration at the reference surface, geoid, vertical Bouguer, and vertical isostatic maps with errors for the vertical gravity and geoid maps included. Correlation of the gravity with topography for the different models is also discussed.

  11. Model-based segmentation of hand radiographs

    NASA Astrophysics Data System (ADS)

    Weiler, Frank; Vogelsang, Frank

    1998-06-01

    An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of the skeletal maturity with an appropriate database of reference bones, similar to atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a-priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated in a complex hierarchical image model that is used for the image analysis task.

  12. 76 FR 16712 - Participation by Religious Organizations in USAID Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-25

    ... are defined without reference to religion, (iii) has the effect of furthering a development objective... available to a wide range of organizations and beneficiaries which are defined without reference to religion...

  13. Rate determination from vector observations

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.

    1993-01-01

    Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process and numerous algorithms, such as the TRIAD algorithm, exist. These algorithms require measurement of the line of sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which the vehicle rate is then derived.
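
    A minimal sketch of the classical TRIAD construction mentioned above, used 'differentially' between two epochs so that the epoch-to-epoch rotation (and hence an average angular rate) is obtained without knowing the nominal LOS directions. The vectors, time step and rotation below are illustrative assumptions, not the paper's filter.

    ```python
    import numpy as np

    def triad(v1_body, v2_body, v1_ref, v2_ref):
        # Classical TRIAD: direction cosine matrix taking reference-frame vectors
        # into the body frame, from two non-parallel vector observations.
        def make_triad(a, b):
            t1 = a / np.linalg.norm(a)
            t2 = np.cross(a, b)
            t2 /= np.linalg.norm(t2)
            return np.column_stack((t1, t2, np.cross(t1, t2)))
        return make_triad(v1_body, v2_body) @ make_triad(v1_ref, v2_ref).T

    # "Differential" use between epochs t and t+dt: the epoch-t body vectors serve
    # as the reference set, so the result is the incremental rotation over dt.
    v1_t = np.array([0.0, 0.0, 1.0])
    v2_t = np.array([1.0, 0.0, 0.0])
    angle = 0.01                                   # true rotation (rad) about body z
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
    dC = triad(Rz @ v1_t, Rz @ v2_t, v1_t, v2_t)
    rate = np.arccos((np.trace(dC) - 1.0) / 2.0) / 1.0   # dt = 1 s assumed
    print(rate)                                          # ~0.01 rad/s
    ```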

  14. On the validity of the dispersion model of hepatic drug elimination when intravascular transit time densities are long-tailed.

    PubMed

    Weiss, M; Stedtler, C; Roberts, M S

    1997-09-01

    The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for the transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than that predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to accurately describe outflow curves when using them as a basis for determining pharmacokinetic parameters with hepatic elimination models.
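
    A minimal sketch of the proposed empirical model, assuming illustrative parameter values: the transit time density is a weighted sum of two inverse Gaussian densities.

    ```python
    import numpy as np

    def inverse_gaussian_pdf(t, mean, lam):
        # Inverse Gaussian (Wald) density with mean `mean` and shape `lam`.
        return np.sqrt(lam / (2.0 * np.pi * t**3)) * \
            np.exp(-lam * (t - mean)**2 / (2.0 * mean**2 * t))

    def two_ig_transit_density(t, w, mean1, lam1, mean2, lam2):
        # Sum of two inverse Gaussian densities (weights w and 1 - w), the more
        # flexible empirical transit time model proposed in the abstract; the
        # parameter values used below are illustrative only.
        return w * inverse_gaussian_pdf(t, mean1, lam1) + \
            (1.0 - w) * inverse_gaussian_pdf(t, mean2, lam2)

    t = np.linspace(0.1, 60.0, 6)                     # seconds
    print(two_ig_transit_density(t, 0.8, 8.0, 20.0, 25.0, 10.0))
    ```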

  15. Defining ischemic burden after traumatic brain injury using 15O PET imaging of cerebral physiology.

    PubMed

    Coles, Jonathan P; Fryer, Tim D; Smielewski, Peter; Rice, Kenneth; Clark, John C; Pickard, John D; Menon, David K

    2004-02-01

    Whereas postmortem ischemic damage is common in head injury, antemortem demonstration of ischemia has proven to be elusive. Although 15O positron emission tomography may be useful in this area, the technique has traditionally analyzed data within regions of interest (ROIs) to improve statistical accuracy. In head injury, such techniques are limited because of the lack of a priori knowledge regarding the location of ischemia, coexistence of hyperaemia, and difficulty in defining ischemic cerebral blood flow (CBF) and cerebral oxygen metabolism (CMRO2) levels. We report a novel method for defining disease pathophysiology following head injury. Voxel-based approaches are used to define the distribution of oxygen extraction fraction (OEF) across the entire brain; the standard deviation of this distribution provides a measure of the variability of OEF. These data are also used to integrate voxels above a threshold OEF value to produce an ROI based upon coherent physiology rather than spatial contiguity (the ischemic brain volume; IBV). However, such approaches may suffer from poor statistical accuracy, particularly in regions with low blood flow. The magnitude of these errors has been assessed in modeling experiments using the Hoffman brain phantom and modified control datasets. We conclude that this technique is a valid and useful tool for quantifying ischemic burden after traumatic brain injury.
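
    A minimal sketch of the voxel-based summary described above: the spread of the whole-brain OEF distribution and an ischemic brain volume obtained by integrating voxels above an OEF threshold. The threshold and voxel size are illustrative assumptions, not the study's calibrated values.

    ```python
    import numpy as np

    def oef_summary(oef_voxels, oef_threshold=0.75, voxel_vol_ml=0.008):
        # Spread of the whole-brain OEF distribution, plus an 'ischemic brain
        # volume' from integrating voxels above a threshold OEF (threshold and
        # voxel size are illustrative assumptions).
        spread = float(np.std(oef_voxels))
        ibv_ml = float(np.sum(oef_voxels > oef_threshold)) * voxel_vol_ml
        return spread, ibv_ml

    rng = np.random.default_rng(3)
    oef = np.clip(rng.normal(0.40, 0.08, 200_000), 0.0, 1.0)   # synthetic voxels
    print(oef_summary(oef))
    ```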

  16. Harmonising Reference Intervals for Three Calculated Parameters used in Clinical Chemistry.

    PubMed

    Hughes, David; Koerbin, Gus; Potter, Julia M; Glasgow, Nicholas; West, Nic; Abhayaratna, Walter P; Cavanaugh, Juleen; Armbruster, David; Hickman, Peter E

    2016-08-01

    For more than a decade there has been a global effort to harmonise all phases of the testing process, with particular emphasis on the most frequently utilised measurands. In addition, it is recognised that calculated parameters derived from these measurands should also be a target for harmonisation. Using data from the Aussie Normals study we report reference intervals for three calculated parameters: serum osmolality, serum anion gap and albumin-adjusted serum calcium. The Aussie Normals study was an a priori study that analysed samples from 1856 healthy volunteers. The nine analytes used for the calculations in this study were measured on Abbott Architect analysers. The data demonstrated normal (Gaussian) distributions for the albumin-adjusted serum calcium, the anion gap (using potassium in the calculation) and the calculated serum osmolality (using both the Bhagat et al. and the Smithline and Gardner formulae). To assess the suitability of these reference intervals for use as harmonised reference intervals, we reviewed data from the Royal College of Pathologists of Australasia/Australasian Association of Clinical Biochemists (RCPA/AACB) bias survey. We conclude that the reference intervals for the calculated serum osmolality (using the Smithline and Gardner formula) may be suitable for use as a common reference interval. Although a common reference interval for albumin-adjusted serum calcium may be possible, further investigations (including a greater range of albumin concentrations) are needed. This is due to the bias between the Bromocresol Green (BCG) and Bromocresol Purple (BCP) methods at lower serum albumin concentrations. Problems with the measurement of total CO2 in the bias survey meant that we could not use the data to assess the suitability of a common reference interval for the anion gap. Further study is required.
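
    For orientation, the commonly used textbook forms of the three calculated parameters are sketched below; the correction factor for albumin-adjusted calcium and the osmolality formula are assumptions in their widely quoted forms and may differ from the exact Bhagat et al. and Smithline and Gardner formulae used in the study.

    ```python
    def anion_gap(na, k, cl, hco3):
        # Anion gap including potassium, all inputs in mmol/L (common textbook form).
        return (na + k) - (cl + hco3)

    def albumin_adjusted_calcium(ca, albumin_g_l):
        # Albumin-adjusted calcium in mmol/L using the widely quoted correction of
        # 0.02 mmol/L per g/L of albumin below 40 g/L (an assumption; the study's
        # exact adjustment formula may differ).
        return ca + 0.02 * (40.0 - albumin_g_l)

    def calc_osmolality(na, glucose, urea):
        # Calculated serum osmolality (mmol/kg) with all inputs in mmol/L, using
        # the common 2*Na + glucose + urea form; the formulae cited in the
        # abstract differ slightly.
        return 2.0 * na + glucose + urea

    print(anion_gap(140, 4.0, 104, 24))                 # ~16 mmol/L
    print(albumin_adjusted_calcium(2.20, 30.0))         # ~2.40 mmol/L
    print(calc_osmolality(140, 5.0, 5.0))               # ~290 mmol/kg
    ```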

  17. System and method for calibrating inter-star-tracker misalignments in a stellar inertial attitude determination system

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Wu, Yeong-Wei Andy (Inventor); Hein, Douglas H. (Inventor)

    2004-01-01

    A method and apparatus for determining star tracker misalignments is disclosed. The method comprises the steps of defining a reference frame for the star tracker assembly according to a boresight of the primary star tracker and a boresight of a second star tracker, wherein the boresight of the primary star tracker and a plane spanned by the boresight of the primary star tracker and the boresight of the second star tracker at least partially define a datum for the reference frame of the star tracker assembly; and determining the misalignment of the at least one star tracker as a rotation of the defined reference frame.
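
    A minimal sketch of the frame definition described in the claim: the primary boresight fixes one axis, the normal to the plane spanned by both boresights fixes a second, and the third axis completes a right-handed triad. The boresight vectors below are illustrative.

    ```python
    import numpy as np

    def tracker_assembly_frame(b_primary, b_secondary):
        # Orthonormal reference frame from two star tracker boresights: x along the
        # primary boresight, z normal to the plane spanned by both boresights,
        # y completing the right-handed triad (a sketch of the frame definition
        # described in the abstract, not the patented implementation).
        x = b_primary / np.linalg.norm(b_primary)
        z = np.cross(b_primary, b_secondary)
        z /= np.linalg.norm(z)
        y = np.cross(z, x)
        return np.vstack((x, y, z))                   # rows are the frame axes

    b1 = np.array([1.0, 0.0, 0.0])
    b2 = np.array([0.2, 1.0, 0.1])
    print(tracker_assembly_frame(b1, b2))
    ```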

  18. A Discontinuous Galerkin Method for Parabolic Problems with Modified hp-Finite Element Approximation Technique

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x-y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.

  19. The importance of using dynamical a-priori profiles for infrared O3 retrievals : the case of IASI.

    NASA Astrophysics Data System (ADS)

    Peiro, H.; Emili, E.; Le Flochmoen, E.; Barret, B.; Cariolle, D.

    2016-12-01

    Tropospheric ozone (O3) is a trace gas involved in the global greenhouse effect. To quantify its contribution to global warming, an accurate determination of O3 profiles is necessary. The instrument IASI (Infrared Atmospheric Sounding Interferometer), on board the MetOp-A satellite, is the most sensitive sensor to tropospheric O3 with a high spatio-temporal coverage. Satellite retrievals are often based on the inversion of the measured radiance data with a variational approach. This requires an a priori profile and the corresponding error covariance matrix (COV) as ancillary input. Previous studies have shown some biases (~20%) in IASI retrievals of the tropospheric column in the Southern Hemisphere (SH). A possible source of error is the a priori profile. This study aims to (i) build dynamical a priori O3 profiles with a Chemistry Transport Model (CTM), and (ii) integrate these a priori profiles into IASI retrievals and demonstrate their benefit. Global O3 profiles are retrieved from IASI radiances with the SOFRID (Software for a fast Retrieval of IASI Data) algorithm. It is based on the RTTOV (Radiative Transfer for TOVS) code and a 1D-Var retrieval scheme. Until now, a constant a priori profile, named here CLIM PR, was based on a combination of MOZAIC, WOUDC-SHADOZ and Aura/MLS data. The global CTM MOCAGE (Modèle de Chimie Atmosphérique à Grande Echelle) has been used with a linear O3 chemistry scheme to assimilate Microwave Limb Sounder (MLS) data. A model resolution of 2°x2°, with 60 sigma-hybrid vertical levels covering the stratosphere, has been used. MLS level 2 products have been assimilated with a 4D-Var variational algorithm to constrain stratospheric O3 and obtain high-quality a priori O3 profiles above the tropopause. From this reanalysis, we built a priori profiles at a 6 h frequency on a coarser 10°x20° resolution grid, named MOCAGE+MLS PR. Statistical comparisons between retrievals and ozonesondes have shown better correlations and smaller biases for MOCAGE+MLS PR than for CLIM PR. We found biases of 6% instead of 33% in the SH, showing that the a priori plays an important role in infrared O3 retrievals. Improvements of the IASI retrievals have been obtained in the free troposphere and lower stratosphere by inserting dynamical a priori profiles from a CTM into SOFRID. A possible further advancement would be to insert dynamical COVs into SOFRID.

  20. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    PubMed

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Reference intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for the genders combined. Gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if they are validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.
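
    A minimal sketch of the nonparametric 95% central range underlying the CLSI C28-A-style approach mentioned above (synthetic data, illustrative only):

    ```python
    import numpy as np

    def reference_interval(values, central=0.95):
        # Nonparametric reference interval: the central 95% of results from
        # apparently healthy reference individuals (2.5th and 97.5th percentiles).
        lo = (1.0 - central) / 2.0
        return tuple(np.quantile(np.asarray(values, float), [lo, 1.0 - lo]))

    # Toy data standing in for a blood-donor analyte (arbitrary units).
    rng = np.random.default_rng(42)
    donors = rng.normal(100.0, 10.0, 500)
    print(reference_interval(donors))                 # roughly (80, 120)
    ```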

  1. A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement

    NASA Astrophysics Data System (ADS)

    Koner, P.; Battaglia, A.; Simmer, C.

    2009-04-01

    The retrieval of rain rate from an attenuating radar (e.g. the Cloud Profiling Radar on board CloudSat, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (such as path-integrated attenuation (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. It is generally argued, on the basis of optimal estimation theory, that in the case of appreciable attenuation there is no solution without constraining the problem, because there is not enough information content to solve it. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the spontaneous question: is all the information enclosed in this additional measurement? This also appears to contradict information theory, because one measurement can introduce only one degree of freedom into the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be answered using the estimation and information theory of the OEM alone. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a priori and error covariance matrices. The regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces into the state space in an ill-posed inversion. In this work, the above-mentioned question will be discussed on the basis of regularization theory, error mitigation and eigenvalue mathematics. References 1. L'Ecuyer TS and Stephens G. An estimation-based precipitation retrieval algorithm for attenuating radars. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
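    The claim that the a priori term acts as a stabilizer can be illustrated numerically: adding a regularization term to the normal equations lowers the condition number of the matrix being inverted. A short Python sketch with a deliberately near-singular, made-up Jacobian (not a radar forward model):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.normal(size=(20, 10))                        # illustrative Jacobian
K[:, -1] = K[:, -2] + 1e-6 * rng.normal(size=20)     # nearly collinear columns

gamma = 1.0                                          # regularization strength
A_unreg = K.T @ K                                    # unconstrained normal equations
A_reg = K.T @ K + gamma * np.eye(K.shape[1])         # a priori term as stabilizer

print("condition number without regularization:", np.linalg.cond(A_unreg))
print("condition number with regularization:   ", np.linalg.cond(A_reg))
```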

  2. Experimental verification of an indefinite causal order

    PubMed Central

    Rubino, Giulia; Rozema, Lee A.; Feix, Adrien; Araújo, Mateus; Zeuner, Jonas M.; Procopio, Lorenzo M.; Brukner, Časlav; Walther, Philip

    2017-01-01

    Investigating the role of causal order in quantum mechanics has recently revealed that the causal relations of events may not be a priori well defined in quantum theory. Although this has triggered a growing interest on the theoretical side, creating processes without a causal order remains a challenging experimental task. We report the first decisive demonstration of a process with an indefinite causal order. To do this, we quantify how incompatible our setup is with a definite causal order by measuring a “causal witness.” This mathematical object incorporates a series of measurements that are designed to yield a certain outcome only if the process under examination is not consistent with any well-defined causal order. In our experiment, we perform a measurement in a superposition of causal orders—without destroying the coherence—to acquire information both inside and outside of a “causally nonordered process.” Using this information, we experimentally determine a causal witness, demonstrating by almost 7 SDs that the experimentally implemented process does not have a definite causal order. PMID:28378018

  3. A novel adaptive scoring system for segmentation validation with multiple reference masks

    NASA Astrophysics Data System (ADS)

    Moltz, Jan H.; Rühaak, Jan; Hahn, Horst K.; Peitgen, Heinz-Otto

    2011-03-01

    The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks, such as the one used at several MICCAI segmentation challenges, do not fulfill these criteria, and we propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score shows a more adequate assessment of the segmentation results than the MICCAI framework.
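    One illustrative way to make such a score adaptive is to compare the algorithm-to-reference agreement with the agreement among the reference masks themselves. The Python sketch below does this with the Dice coefficient as a stand-in for the combination of volume- and surface-based metrics described above; it is a simplified illustration, not the proposed scoring system.

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice overlap of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def adaptive_score(algorithm_mask, reference_masks):
    """Ratio of mean algorithm-to-reference agreement to mean inter-reference
    agreement; values near or above 1 mean the algorithm agrees with the
    experts roughly as well as the experts agree with each other."""
    algo_vs_refs = np.mean([dice(algorithm_mask, r) for r in reference_masks])
    inter_ref = np.mean([dice(r1, r2)
                         for r1, r2 in combinations(reference_masks, 2)])
    return algo_vs_refs / inter_ref
```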

  4. How things fall apart: understanding the nature of internalizing through its relationship with impairment.

    PubMed

    Markon, Kristian E

    2010-08-01

    The literature suggests that internalizing psychopathology relates to impairment incrementally and gradually. However, the form of this relationship has not been characterized. This form is critical to understanding internalizing psychopathology, as it is possible that internalizing may accelerate in effect at some level of severity, defining a natural boundary of abnormality. Here, a novel method-semiparametric structural equation modeling-was used to model the relationship between internalizing and impairment in a sample of 8,580 individuals from the 2000 British Office for National Statistics Survey of Psychiatric Morbidity, a large, population-representative study of psychopathology. This method allows one to model relationships between latent internalizing and impairment without assuming any particular form a priori, and to compare them with models in which the relationship is constant and linear. Results suggest that the relationship between internalizing and impairment is in fact linear and constant across the entire range of internalizing variation and that it is impossible to nonarbitrarily define a specific level of internalizing beyond which consequences suddenly become catastrophic in nature. Results demonstrate the phenomenological continuity of internalizing psychopathology, highlight the importance of impairment as well as symptoms, and have clear implications for defining mental disorder. Copyright 2010 APA, all rights reserved

  5. Wildlife Habitat Restoration: Chapter 12

    USGS Publications Warehouse

    Conway, Courtney J.; Borgmann, Kathi L.; Morrison, Michael L.; Mathewson, Heather A.

    2015-01-01

    As the preceding chapters point out, many wildlife species and the habitat they depend on are in peril. However, opportunities exist to restore habitat for many imperiled wildlife species. But what is wildlife habitat restoration? We begin this chapter by defining habitat restoration and then provide recommendations on how to maximize success of future habitat restoration efforts for wildlife. Finally, we evaluate whether we have been successful in restoring wildlife habitat and supply recommendations to advance habitat restoration. Successful restoration requires clear and explicit goals that are based on our best understanding of what the habitat was like prior to the disturbing event. Ideally, a restoration project would include: (1) a summary of prerestoration conditions that define the existing status of wildlife populations and their habitat; (2) a description of habitat features required by the focal or indicator species for persistence; (3) an a priori description of measurable, quantitative metrics that define restoration goals and measures of success; (4) a monitoring plan; (5) postrestoration comparisons of habitat features and wildlife populations with adjacent unmodified areas that are similar to the restoration site; and (6) expert review of the entire restoration plan (i.e., the five aforementioned components).

  6. Resting State Network Estimation in Individual Subjects

    PubMed Central

    Hacker, Carl D.; Laumann, Timothy O.; Szrama, Nicholas P.; Baldassarre, Antonello; Snyder, Abraham Z.

    2014-01-01

    Resting-state functional magnetic resonance imaging (fMRI) has been used to study brain networks associated with both normal and pathological cognitive function. The objective of this work is to reliably compute resting state network (RSN) topography in single participants. We trained a supervised classifier (multi-layer perceptron; MLP) to associate blood oxygen level dependent (BOLD) correlation maps corresponding to pre-defined seeds with specific RSN identities. Hard classification of maps obtained from a priori seeds was highly reliable across new participants. Interestingly, continuous estimates of RSN membership retained substantial residual error. This result is consistent with the view that RSNs are hierarchically organized, and therefore not fully separable into spatially independent components. After training on a priori seed-based maps, we propagated voxel-wise correlation maps through the MLP to produce estimates of RSN membership throughout the brain. The MLP generated RSN topography estimates in individuals consistent with previous studies, even in brain regions not represented in the training data. This method could be used in future studies to relate RSN topography to other measures of functional brain organization (e.g., task-evoked responses, stimulation mapping, and deficits associated with lesions) in individuals. The multi-layer perceptron was directly compared to two alternative voxel classification procedures, specifically, dual regression and linear discriminant analysis; the perceptron generated more spatially specific RSN maps than either alternative. PMID:23735260
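    A toy sketch of the supervised classification step, using scikit-learn's multi-layer perceptron on synthetic data: each row stands in for a seed-based BOLD correlation map and each label for an RSN identity. The shapes, labels and data are hypothetical and the model is untuned; this is not the authors' trained network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_maps, n_voxels, n_networks = 200, 1000, 7
X_train = rng.normal(size=(n_maps, n_voxels))        # seed-based correlation maps
y_train = rng.integers(0, n_networks, size=n_maps)   # RSN identity of each seed

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Propagate a new voxel-wise correlation map through the perceptron to obtain
# soft (continuous) estimates of RSN membership for that voxel.
new_map = rng.normal(size=(1, n_voxels))
print(clf.predict_proba(new_map))
```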

  7. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults.

    PubMed

    Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey

    2015-10-01

    Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging.

  8. Multiagent data warehousing and multiagent data mining for cerebrum/cerebellum modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Ran

    2002-03-01

    An algorithm named Neighbor-Miner is outlined for multiagent data warehousing and multiagent data mining. The algorithm is defined in an evolving dynamic environment with autonomous or semiautonomous agents. Instead of mining frequent itemsets from customer transactions, the new algorithm discovers new agents and mines agent associations in first-order logic from agent attributes and actions. While the Apriori algorithm uses frequency as an a priori threshold, the new algorithm uses agent similarity as a priori knowledge. The concept of agent similarity leads to the notions of agent cuboid, orthogonal multiagent data warehousing (MADWH), and multiagent data mining (MADM). Based on agent similarities and action similarities, Neighbor-Miner is proposed and illustrated in a MADWH/MADM approach to cerebrum/cerebellum modeling. It is shown that (1) semiautonomous neurofuzzy agents can be identified for uniped locomotion and gymnastic training based on attribute relevance analysis; (2) new agents can be discovered and agent cuboids can be dynamically constructed in an orthogonal MADWH, which resembles an evolving cerebrum/cerebellum system; and (3) dynamic motion laws can be discovered as association rules in first-order logic. Although examples in legged robot gymnastics are used to illustrate the basic ideas, the new approach is generally suitable for a broad category of data mining tasks where knowledge can be discovered collectively by a set of agents from a geographically or geometrically distributed but relevant environment, especially in scientific and engineering data environments.

  9. Mediterranean Diet and Cardiovascular Disease: A Critical Evaluation of A Priori Dietary Indexes

    PubMed Central

    D’Alessandro, Annunziata; De Pergola, Giovanni

    2015-01-01

    The aim of this paper is to analyze the a priori dietary indexes used in the studies that have evaluated the role of the Mediterranean Diet in influencing the risk of developing cardiovascular disease. All the studies show that this dietary pattern protects against cardiovascular disease, but studies show quite different effects on specific conditions such as coronary heart disease or cerebrovascular disease. A priori dietary indexes used to measure dietary exposure imply quantitative and/or qualitative divergences from the traditional Mediterranean Diet of the early 1960s, and, therefore, it is very difficult to compare the results of different studies. Based on real cultural heritage and traditions, we believe that the a priori indexes used to evaluate adherence to the Mediterranean Diet should consider classifying whole grains and refined grains, olive oil and monounsaturated fats, and wine and alcohol differently. PMID:26389950

  10. Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for prediction of emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about these events can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, an LSTM (Long Short-term Memory) dynamic prediction and a priori information sequence generation model, combining RNN-LSTM with a priori information about public events. In prediction tasks, the model is able to determine trends, and its accuracy has been validated. The model generates better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to the changes of a time sequence; and LSTM can be widely applied to the same type of prediction tasks and to other prediction tasks related to time sequences.
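    A minimal PyTorch sketch of the general idea: an LSTM whose input at each time step combines the observed series with an extra feature channel carrying a priori information gathered externally, and which predicts the next value. The shapes, features and architecture are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class PriorAwareLSTM(nn.Module):
    """LSTM regressor over (observation, a priori information) pairs."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next value from the last state

model = PriorAwareLSTM()
x = torch.randn(8, 24, 2)              # 8 sequences, 24 time steps, 2 features
print(model(x).shape)                  # torch.Size([8, 1])
```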

  11. The frequency of U-shaped dose responses in the toxicological literature.

    PubMed

    Calabrese, E J; Baldwin, L A

    2001-08-01

    Hormesis has been defined as a dose-response relationship in which there is a stimulatory response at low doses, but an inhibitory response at high doses, resulting in a U- or inverted U-shaped dose response. To assess the proportion of studies satisfying criteria for evidence of hormesis, a database was created from published toxicological literature using rigorous a priori entry and evaluative criteria. One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria. Subsequent application of evaluative criteria revealed that 245 (37% of 668) dose-response relationships from 86 articles (0.4% of 20,285) satisfied requirements for evidence of hormesis. Quantitative evaluation of false-positive and false-negative responses indicated that the data were not very susceptible to such influences. A complementary analysis of all dose responses assessed by hypothesis testing or distributional analyses, where the units of comparison were treatment doses below the NOAEL, revealed that of 1089 doses below the NOAEL, 213 (19.5%) satisfied statistical significance or distributional data evaluative criteria for hormesis, 869 (80%) did not differ from the control, and 7 (0.6%) displayed evidence of false-positive values. The 32.5-fold (19.5% vs 0.6%) greater occurrence of hormetic responses than a response of similar magnitude in the opposite (negative) direction strongly supports the nonrandom nature of hormetic responses. This study, which provides the first documentation of a data-derived frequency of hormetic responses in the toxicologically oriented literature, indicates that when the study design satisfies a priori criteria (i.e., a well-defined NOAEL, ≥2 doses below the NOAEL, and the end point measured has the capacity to display either stimulatory or inhibitory responses), hormesis is frequently encountered and is broadly represented according to agent, model, and end point. These findings have broad-based implications for study design, risk assessment methods, and the establishment of optimal drug doses and suggest important evolutionarily adaptive strategies for dose-response relationships.

  12. Risk factors for shunt malfunction in pediatric hydrocephalus: a multicenter prospective cohort study.

    PubMed

    Riva-Cambrin, Jay; Kestle, John R W; Holubkov, Richard; Butler, Jerry; Kulkarni, Abhaya V; Drake, James; Whitehead, William E; Wellons, John C; Shannon, Chevis N; Tamber, Mandeep S; Limbrick, David D; Rozzelle, Curtis; Browd, Samuel R; Simon, Tamara D

    2016-04-01

    OBJECT The rate of CSF shunt failure remains unacceptably high. The Hydrocephalus Clinical Research Network (HCRN) conducted a comprehensive prospective observational study of hydrocephalus management, the aim of which was to isolate specific risk factors for shunt failure. METHODS The study followed all first-time shunt insertions in children younger than 19 years at 6 HCRN centers. The HCRN Investigator Committee selected, a priori, 21 variables to be examined, including clinical, radiographic, and shunt design variables. Shunt failure was defined as shunt revision, subsequent endoscopic third ventriculostomy, or shunt infection. Important a priori-defined risk factors as well as those significant in univariate analyses were then tested for independence using multivariate Cox proportional hazard modeling. RESULTS A total of 1036 children underwent initial CSF shunt placement between April 2008 and December 2011. Of these, 344 patients experienced shunt failure, including 265 malfunctions and 79 infections. The mean and median lengths of follow-up for the entire cohort were 400 days and 264 days, respectively. The Cox model found that age younger than 6 months at first shunt placement (HR 1.6 [95% CI 1.1-2.1]), a cardiac comorbidity (HR 1.4 [95% CI 1.0-2.1]), and endoscopic placement (HR 1.9 [95% CI 1.2-2.9]) were independently associated with reduced shunt survival. The following had no independent associations with shunt survival: etiology, payer, center, valve design, valve programmability, the use of ultrasound or stereotactic guidance, and surgeon experience and volume. CONCLUSIONS This is the largest prospective study reported on children with CSF shunts for hydrocephalus. It confirms that a young age and the use of the endoscope are risk factors for first shunt failure and that valve type has no impact. A new risk factor, an existing cardiac comorbidity, was also associated with shunt failure.
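    The survival analysis reported here uses multivariate Cox proportional hazards modelling. The sketch below shows that kind of model with the lifelines package on simulated data; the column names, coefficients and data are hypothetical, not the HCRN cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 300
age_lt_6mo = rng.integers(0, 2, n)
cardiac = rng.integers(0, 2, n)
endoscopic = rng.integers(0, 2, n)

# Simulate failure times whose hazard increases with the three risk factors,
# with administrative censoring at two years.
baseline = rng.exponential(scale=600, size=n)
time = baseline / np.exp(0.5 * age_lt_6mo + 0.35 * cardiac + 0.65 * endoscopic)
failed = (time < 730).astype(int)
time = np.minimum(time, 730)

df = pd.DataFrame({"time": time, "failed": failed,
                   "age_under_6_months": age_lt_6mo,
                   "cardiac_comorbidity": cardiac,
                   "endoscopic_placement": endoscopic})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="failed")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```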

  13. Level statistics of words: Finding keywords in literary texts and symbolic sequences

    NASA Astrophysics Data System (ADS)

    Carpena, P.; Bernaola-Galván, P.; Hackenberg, M.; Coronado, A. V.; Oliver, J. L.

    2009-03-01

    Using a generalization of the level statistics analysis of quantum disordered systems, we present an approach able to extract automatically keywords in literary texts. Our approach takes into account not only the frequencies of the words present in the text but also their spatial distribution along the text, and is based on the fact that relevant words are significantly clustered (i.e., they self-attract each other), while irrelevant words are distributed randomly in the text. Since a reference corpus is not needed, our approach is especially suitable for single documents for which no a priori information is available. In addition, we show that our method works also in generic symbolic sequences (continuous texts without spaces), thus suggesting its general applicability.
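    A compact way to capture the "clustered versus randomly distributed" distinction is to look, for each word, at the dispersion of the gaps between its successive occurrences. The Python sketch below scores words by the coefficient of variation of those gaps (values well above 1 suggest self-attraction); it is a simplified illustration in the spirit of the level-statistics approach, not the authors' exact estimator.

```python
import re
from collections import defaultdict
import numpy as np

def clustering_scores(text, min_count=5):
    """Score each word by the coefficient of variation of the gaps between
    its successive occurrences; clustered words score higher than words
    spread randomly through the text."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    positions = defaultdict(list)
    for i, w in enumerate(words):
        positions[w].append(i)

    scores = {}
    for w, pos in positions.items():
        if len(pos) < min_count:
            continue
        gaps = np.diff(pos)
        scores[w] = gaps.std() / gaps.mean()
    return scores
```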

  14. Transformation of apparent ocean wave spectra observed from an aircraft sensor platform

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1976-01-01

    The problem considered was transformation of a unidirectional apparent ocean wave spectrum observed from an aircraft sensor platform into the true spectrum that would be observed from a stationary platform. Spectral transformation equations were developed in terms of the linear wave dispersion relationship and the wave group speed. An iterative solution to the equations was outlined and used to transform reference theoretical apparent spectra for several assumed values of average water depth. Results show that changing the average water depth leads to a redistribution of energy density among the various frequency bands of the transformed spectrum. This redistribution is most severe when much of the energy density is expected, a priori, to reside at relatively low true frequencies.
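    The transformation rests on the linear dispersion relation and the associated group speed. The Python sketch below solves the dispersion relation for the wavenumber by Newton iteration and evaluates the group speed, illustrating the kind of iterative step the spectral transformation requires; it is not the paper's full algorithm.

```python
import numpy as np

def wavenumber(omega, depth, g=9.81, tol=1e-12, max_iter=50):
    """Solve the linear dispersion relation omega**2 = g*k*tanh(k*depth)
    for k by Newton iteration, starting from the deep-water value."""
    k = omega**2 / g
    for _ in range(max_iter):
        t = np.tanh(k * depth)
        f = g * k * t - omega**2
        fp = g * t + g * k * depth * (1.0 - t**2)
        k_next = k - f / fp
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k

def group_speed(omega, depth, g=9.81):
    """Group speed c_g = (c/2) * (1 + 2*k*depth / sinh(2*k*depth))."""
    k = wavenumber(omega, depth, g)
    c = omega / k
    return 0.5 * c * (1.0 + 2.0 * k * depth / np.sinh(2.0 * k * depth))

print(group_speed(2 * np.pi / 8.0, depth=20.0))   # 8-s wave over 20 m of water
```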

  15. Approaches to defining reference regimes for river restoration planning

    NASA Astrophysics Data System (ADS)

    Beechie, T. J.

    2014-12-01

    Reference conditions or reference regimes can be defined using three general approaches, historical analysis, contemporary reference sites, and theoretical or empirical models. For large features (e.g., floodplain channels and ponds) historical data and maps are generally reliable. For smaller features (e.g., pools and riffles in small tributaries), field data from contemporary reference sites are a reasonable surrogate for historical data. Models are generally used for features that have no historical information or present day reference sites (e.g., beaver pond habitat). Each of these approaches contributes to a watershed-wide understanding of current biophysical conditions relative to potential conditions, which helps create not only a guiding vision for restoration, but also helps quantify and locate the largest or most important restoration opportunities. Common uses of geomorphic and biological reference conditions include identifying key areas for habitat protection or restoration, and informing the choice of restoration targets. Examples of use of each of these three approaches to define reference regimes in western USA illustrate how historical information and current research highlight key restoration opportunities, focus restoration effort in areas that can produce the largest ecological benefit, and contribute to estimating restoration potential and assessing likelihood of achieving restoration goals.

  16. A New Comprehensive Model for Crustal and Upper Mantle Structure of the European Plate

    NASA Astrophysics Data System (ADS)

    Morelli, A.; Danecek, P.; Molinari, I.; Postpischl, L.; Schivardi, R.; Serretti, P.; Tondi, M. R.

    2009-12-01

    We present a new comprehensive model of crustal and upper mantle structure of the whole European Plate — from the North Atlantic ridge to the Urals, and from North Africa to the North Pole — describing seismic speeds (P and S) and density. Our description of crustal structure merges information from previous studies: large-scale compilations, seismic prospection, receiver functions, inversion of surface wave dispersion measurements and Green functions from noise correlation. We use a simple description of crustal structure, with laterally varying sediment and crystalline layer thicknesses and seismic parameters. Most original information refers to P-wave speed, from which we derive S-wave speed and density through scaling relations. This a priori crustal model by itself improves the overall fit to observed Bouguer anomaly maps, as derived from GRACE satellite data, over CRUST2.0. The new crustal model is then used as a constraint in the inversion for mantle shear wave speed, based on fitting Love and Rayleigh surface wave dispersion. In the inversion for transversely isotropic mantle structure, we use group speed measurements made on European event-to-station paths, and use a global a priori model (S20RTS) to ensure a fair rendition of earth structure at depth and in border areas with little coverage from our data. The new mantle model appreciably improves over global S models in the imaging of shallow asthenospheric (slow) anomalies beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We map compressional wave speed by inverting ISC travel times (reprocessed by Engdahl et al.) with a non-linear inversion scheme making use of finite-difference travel time calculation. The inversion is based on an a priori model obtained by scaling the 3D mantle S-wave speed to P. The new model substantially confirms images of descending lithospheric slabs and back-arc shallow asthenospheric regions, shown in other more local high-resolution tomographic studies, but covers the whole range of the European Plate. We also obtain three-dimensional mantle density structure by inversion of GRACE Bouguer anomalies, locally adjusting density and the scaling relation between seismic wave speeds and density. We validate the new comprehensive model through comparison of recorded seismograms with numerical simulations based on SPECFEM3D. This work is a contribution towards the definition of a reference earth model for Europe. To this end, in order to improve model dissemination and comparison, we propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages. We provide tools for manipulating and visualising models, described in this standard format, in Google Earth and the GEON IDV.

  17. Ex Priori: Exposure-based Prioritization across Chemical Space

    EPA Science Inventory

    EPA's Exposure Prioritization (Ex Priori) is a simplified, quantitative visual dashboard that makes use of data from various inputs to provide rank-ordered internalized dose metric. This complements other high throughput screening by viewing exposures within all chemical space si...

  18. Reporting and methodological quality of sample size calculations in cluster randomized trials could be improved: a review.

    PubMed

    Rutterford, Clare; Taljaard, Monica; Dixon, Stephanie; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    To assess the quality of reporting and accuracy of a priori estimates used in sample size calculations for cluster randomized trials (CRTs). We reviewed 300 CRTs published between 2000 and 2008. The prevalence of reporting sample size elements from the 2004 CONSORT recommendations was evaluated and a priori estimates compared with those observed in the trial. Of the 300 trials, 166 (55%) reported a sample size calculation. Only 36 of 166 (22%) reported all recommended descriptive elements. Elements specific to CRTs were the worst reported: a measure of within-cluster correlation was specified in only 58 of 166 (35%). Only 18 of 166 articles (11%) reported both a priori and observed within-cluster correlation values. Except in two cases, observed within-cluster correlation values were either close to or less than a priori values. Even with the CONSORT extension for cluster randomization, the reporting of sample size elements specific to these trials remains below that necessary for transparent reporting. Journal editors and peer reviewers should implement stricter requirements for authors to follow CONSORT recommendations. Authors should report observed and a priori within-cluster correlation values to enable comparisons between these over a wider range of trials. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
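    The within-cluster correlation discussed here is exactly the a priori quantity that inflates a cluster randomized trial's sample size through the standard design effect, 1 + (m - 1) * ICC for average cluster size m. A small Python sketch of that textbook calculation, with illustrative numbers:

```python
def crt_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size by the design effect
    1 + (m - 1) * ICC, where m is the average cluster size and ICC is the
    a priori within-cluster correlation."""
    design_effect = 1.0 + (cluster_size - 1.0) * icc
    return n_individual * design_effect

# 300 participants per arm under individual randomization, clusters of 20,
# assumed ICC of 0.05  ->  585 participants per arm.
print(crt_sample_size(300, 20, 0.05))
```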

  19. How Can TOLNet Help to Better Understand Tropospheric Ozone? A Satellite Perspective

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew S.

    2018-01-01

    Potential sources of a priori ozone (O3) profiles for use in Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite tropospheric O3 retrievals are evaluated with observations from multiple Tropospheric Ozone Lidar Network (TOLNet) systems in North America. An O3 profile climatology (tropopause-based O3 climatology (TB-Clim), currently proposed for use in the TEMPO O3 retrieval algorithm) derived from ozonesonde observations and O3 profiles from three separate models (the operational Goddard Earth Observing System (GEOS-5) Forward Processing (FP) product, the reanalysis product from Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2), and the GEOS-Chem chemical transport model (CTM)) were: 1) evaluated with TOLNet measurements on various temporal scales (seasonal, daily, hourly) and 2) implemented as a priori information in theoretical TEMPO tropospheric O3 retrievals in order to determine how each a priori impacts the accuracy of retrieved tropospheric (0-10 km) and lowermost tropospheric (LMT, 0-2 km) O3 columns. We found that all sources of a priori O3 profiles evaluated in this study generally reproduced the vertical structure of summer-averaged observations. However, larger differences between the a priori profiles and lidar observations were observed when evaluating inter-daily and diurnal variability of tropospheric O3. The TB-Clim O3 profile climatology was unable to replicate observed inter-daily and diurnal variability of O3, while model products, in particular GEOS-Chem simulations, displayed more skill in reproducing these features. Because the models, primarily the CTM used in this study, were on average able to capture the inter-daily and diurnal variability of tropospheric and LMT O3 columns, using a priori profiles from CTM simulations resulted in TEMPO retrievals with the best statistical comparison with lidar observations. Furthermore, important from an air quality perspective, when high LMT O3 values were observed, using CTM a priori profiles resulted in TEMPO LMT O3 retrievals with the least bias. The application of time-specific (non-climatological) hourly/daily model predictions as the a priori profile in TEMPO O3 retrievals will be best suited to studies of air quality or event-based processes, since the standard retrieval algorithm will still need to use a climatology product. Follow-on studies to this work are currently being conducted to investigate the application of different CTM-predicted O3 climatology products in the standard TEMPO retrieval algorithm. Finally, similar methods to those used in this study can be easily applied by TEMPO data users to recalculate tropospheric O3 profiles provided from the standard retrieval using a different source of a priori information.

  20. Quint: An R package for the identification of subgroups of clients who differ in which treatment alternative is best for them.

    PubMed

    Dusseldorp, Elise; Doove, Lisa; Mechelen, Iven van

    2016-06-01

    In the analysis of randomized controlled trials (RCTs), treatment effect heterogeneity often occurs, implying differences across (subgroups of) clients in treatment efficacy. This phenomenon is typically referred to as treatment-subgroup interactions. The identification of subgroups of clients, defined in terms of pretreatment characteristics that are involved in a treatment-subgroup interaction, is a methodologically challenging task, especially when many characteristics are available that may interact with treatment and when no comprehensive a priori hypotheses on relevant subgroups are available. A special type of treatment-subgroup interaction occurs if the ranking of treatment alternatives in terms of efficacy differs across subgroups of clients (e.g., for one subgroup treatment A is better than B and for another subgroup treatment B is better than A). These are called qualitative treatment-subgroup interactions and are most important for optimal treatment assignment. The method QUINT (Qualitative INteraction Trees) was recently proposed to induce subgroups involved in such interactions from RCT data. The result of an analysis with QUINT is a binary tree from which treatment assignment criteria can be derived. The implementation of this method, the R package quint, is the topic of this paper. The analysis process is described step-by-step using data from the Breast Cancer Recovery Project, showing the reader all functions included in the package. The output is explained and given a substantive interpretation. Furthermore, an overview is given of the tuning parameters involved in the analysis, along with possible motivational concerns associated with choice alternatives that are available to the user.

  1. [Cause-specific mortality in an area of Campania with numerous waste disposal sites].

    PubMed

    Altavista, Pierluigi; Belli, Stefano; Bianchi, Fabrizio; Binazzi, Alessandra; Comba, Pietro; Del Giudice, Raffaele; Fazzo, Lucia; Felli, Angelo; Mastrantonio, Marina; Menegozzo, Massimo; Musmeci, Loredana; Pizzuti, Renato; Savarese, Anna; Trinca, Stefania; Uccelli, Raffaella

    2004-01-01

    To investigate cause-specific mortality in an area of the Campania region, in the surroundings of Naples, characterized by many toxic waste dumping sites and by widespread burning of urban wastes. The study area was characterized by examining the spatial distribution of waste disposal sites and toxic waste dumping grounds, using a geographic information system (GIS). Mortality (1986-2000) was studied in the three municipalities of Giugliano in Campania, Qualiano and Villaricca, encompassing a population of about 150,000 inhabitants. Mortality rates of the population resident in the Campania region were used in order to generate expected figures. Causes of death of a priori interest were those previously associated with residence in the neighbourhood of (toxic) waste sites, including lung cancer, bladder cancer, leukemia and liver cancer. Overall, 39 waste sites, 27 of which were characterized by the likely presence of toxic wastes, were identified in the area of interest. A good agreement was found between two independent surveys by the Regional Environmental Protection Agency and the environmentalist association Legambiente. Cancer mortality was significantly increased, with special reference to malignant neoplasms of the lung, pleura, larynx, bladder, liver and brain. Circulatory diseases were also significantly in excess, and diabetes showed some increases. Mortality statistics provide preliminary evidence of the disease load in the area. Mapping waste dumping grounds provides information for defining high-risk areas. Improvements in exposure assessment, together with the use of a range of health data (hospital discharge records, malformation notifications, observations of general practitioners), will contribute to second-generation studies aimed at inferring causal relationships.

  2. Semantic solutions to Heliophysics data access

    NASA Astrophysics Data System (ADS)

    Narock, T. W.; Vandegriff, J. D.; Weigel, R. S.

    2011-12-01

    Within the domain of Heliophysics, data discovery is being actively addressed. However, data diversity in the returned results has proven to be a significant barrier to integrated multi-mission analysis. Software is being actively developed (e.g. Vandegriff and Brown, 2010) that is data format and measurement type agnostic. However, such approaches rely on an a priori definition of common baseline parameters, units, and coordinate systems onto which all data will be mapped. In this work, we describe our efforts at utilizing a task ontology (Guarino, 1998) to model the steps involved in data transformation within Heliophysics. Thus, given Heliophysics logic and heterogeneous input data, we are able to develop software that is able to infer the set of steps required to compute user-specified parameters. Such a framework offers flexibility by allowing users to define their own preferred sets of parameters, units, and coordinate systems they would like in their analysis. In addition, the storage of this information as ontology instances means they are external to source code and are easily shareable and extensible. The additional inclusion of a provenance ontology allows us to capture the historical record of each data analysis session for future review. We describe our use of existing task and provenance ontologies and provide example use cases as well as potential future applications. References: J. Vandegriff and L. Brown (2010), A framework for reading and unifying heliophysics time series data, Earth Science Informatics, Volume 3, Numbers 1-2, Pages 75-86. N. Guarino (1998), Formal Ontology in Information Systems, Proceedings of FOIS'98, Trento, Italy, 6-8 June 1998. Amsterdam, IOS Press, pp. 3-15.

  3. The association between AHA CPR quality guideline compliance and clinical outcomes from out-of-hospital cardiac arrest.

    PubMed

    Cheskes, Sheldon; Schmicker, Robert H; Rea, Tom; Morrison, Laurie J; Grunau, Brian; Drennan, Ian R; Leroux, Brian; Vaillancourt, Christian; Schmidt, Terri A; Koller, Allison C; Kudenchuk, Peter; Aufderheide, Tom P; Herren, Heather; Flickinger, Katharyn H; Charleston, Mark; Straight, Ron; Christenson, Jim

    2017-07-01

    Measures of chest compression fraction (CCF), compression rate, compression depth and pre-shock pause have all been independently associated with improved outcomes from out-of-hospital cardiac arrest (OHCA). However, it is unknown whether compliance with American Heart Association (AHA) guidelines incorporating all the aforementioned metrics is associated with improved survival from OHCA. We performed a secondary analysis of prospectively collected data from the Resuscitation Outcomes Consortium Epistry-Cardiac Arrest database. As per the 2015 American Heart Association (AHA) guidelines, guideline-compliant cardiopulmonary resuscitation (CPR) was defined as CCF >0.8, chest compression rate 100-120/minute, chest compression depth 50-60 mm, and pre-shock pause <10 s. Multivariable logistic regression models controlling for Utstein variables were used to assess the relationship between global guideline compliance and survival to hospital discharge and neurologically intact survival with MRS ≤3. Due to potential confounding between CPR quality metrics and cases that achieved early ROSC, we performed an a priori subgroup analysis restricted to patients who obtained ROSC after ≥10 min of EMS resuscitation. After allowing for study exclusions, 19,568 defibrillator records were collected over a 4-year period ending in June 2015. For all reported models, the reference standard included all cases who did not meet all CPR quality benchmarks. For the primary model (CCF, rate, depth), there was no significant difference in survival for resuscitations that met all CPR quality benchmarks (guideline compliant) compared to the reference standard (OR 1.26; 95% CI: 0.80, 1.97). When the dataset was restricted to patients obtaining ROSC after ≥10 min of EMS resuscitation (n=4,158), survival was significantly higher for those resuscitations that were guideline compliant (OR 2.17; 95% CI: 1.11, 4.27) compared to the reference standard. Similar findings were obtained for neurologically intact survival with MRS ≤3 (OR 3.03; 95% CI: 1.12, 8.20). In this observational study, compliance with AHA guidelines for CPR quality was not associated with improved outcomes from OHCA. Conversely, when restricting the cohort to those with late ROSC, compliance with guidelines was associated with improved clinical outcomes. Strategies to improve overall guideline compliance may have a significant impact on outcomes from OHCA. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Fracture risk among older men: osteopenia and osteoporosis defined using cut-points derived from female versus male reference data.

    PubMed

    Pasco, J A; Lane, S E; Brennan, S L; Timney, E N; Bucki-Smith, G; Dobbins, A G; Nicholson, G C; Kotowicz, M A

    2014-03-01

    We explored the effect of using male and female reference data in a male sample to categorise areal bone mineral density (BMD). Using male reference data, a large proportion of fractures arose from osteopenia, whereas using female reference data shifted the fracture burden into normal BMD. The purpose of this study was to describe fracture risk associated with osteopenia and osteoporosis in older men, defined by areal BMD and using cut-points derived from male and female reference data. As part of the Geelong Osteoporosis Study, we followed 619 men aged 60-93 years after BMD assessments (performed 2001-2006) until 2010, fracture, death or emigration. Post-baseline fractures were radiologically confirmed, and proportions of fractures in each BMD category were age-standardised to national profiles. Based on World Health Organization criteria, and using male reference data, 207 men had normal BMD at the femoral neck, 357 were osteopenic and 55 were osteoporotic. Using female reference data, corresponding numbers were 361, 227 and 31. During the study, 130 men died, 15 emigrated and 63 sustained at least one fracture. Using male reference data, most (86.5 %) of the fractures occurred in men without osteoporosis on BMD criteria (18.4 % normal BMD, 68.1 % osteopenia). The pattern differed when female reference data were used; while most fractures arose from men without osteoporosis (88.2 %), the burden shifted from those with osteopenia (34.8 %) to those with normal BMD (53.4 %). Decreasing BMD categories defined increasing risk of fracture. Although men with osteoporotic BMD were at greatest risk, they made a relatively small contribution to the total burden of fractures. Using male reference data, two-thirds of the fractures arose from men with osteopenia. However, using female reference data, approximately half of the fractures arose from those with normal BMD. Using female reference data to define osteoporosis in men does not appear to be the optimal approach.
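    The categorisation at issue rests on the T-score, the number of reference-population standard deviations by which a measured BMD falls below the young-adult reference mean; changing the reference data changes that mean and SD, and can move the same man between WHO categories. A Python sketch with purely illustrative reference values (not Geelong or NHANES figures):

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score: standard deviations from the young-adult reference mean."""
    return (bmd - ref_mean) / ref_sd

def who_category(t):
    """WHO BMD categories: normal (T >= -1.0), osteopenia (-2.5 < T < -1.0),
    osteoporosis (T <= -2.5)."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# A hypothetical femoral-neck BMD categorised against illustrative male and
# female young-adult reference values: osteopenic by one, normal by the other.
bmd = 0.76
print(who_category(t_score(bmd, ref_mean=0.93, ref_sd=0.14)))  # male reference
print(who_category(t_score(bmd, ref_mean=0.86, ref_sd=0.12)))  # female reference
```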

  5. Artificial intelligence in robot control systems

    NASA Astrophysics Data System (ADS)

    Korikov, A.

    2018-05-01

    This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that works intelligently and optimally. The author proposes to use optimization methods for the design of intelligent robot control systems. The article provides a formalization of the problems of robotic control system design as a class of extremum problems with constraints. Solving these problems is rather complicated due to their high dimensionality, polymodality and a priori uncertainty. Decomposition of the extremum problems according to the method suggested by the author reduces them to a sequence of simpler problems that can be successfully solved by modern computing technology. Several possible approaches to solving such problems are considered in the article.

  6. Estimation of the physiological mechanical conditioning in vascular tissue engineering by a predictive fluid-structure interaction approach.

    PubMed

    Tresoldi, Claudia; Bianchi, Elena; Pellegata, Alessandro Filippo; Dubini, Gabriele; Mantero, Sara

    2017-08-01

    The in vitro replication of physiological mechanical conditioning through bioreactors plays a crucial role in the development of functional Small-Caliber Tissue-Engineered Blood Vessels. An in silico scaffold-specific model under pulsatile perfusion provided by a bioreactor was implemented using a fluid-structure interaction (FSI) approach for viscoelastic tubular scaffolds (e.g. decellularized swine arteries, DSA). Results of working pressures, circumferential deformations, and wall shear stress on DSA fell within the desired physiological range and indicated the ability of this model to correctly predict the mechanical conditioning acting on the cells-scaffold system. Consequently, the FSI model allowed us to a priori define the stimulation pattern, driving in vitro physiological maturation of scaffolds, especially with viscoelastic properties.

  7. Minimax terminal approach problem in two-level hierarchical nonlinear discrete-time dynamical system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorikov, A. F., E-mail: afshorikov@mail.ru

    We consider a discrete-time dynamical system consisting of three controllable objects. The motions of all objects are given by corresponding nonlinear or linear discrete-time recurrent vector relations, and the control system has two levels: a basic (first, or I) level that is dominant and a subordinate (second, or II) level; the two levels have different criteria of functioning and are united a priori by informational and control connections defined in advance. For the dynamical system in question, we propose a mathematical formalization in the form of solving a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information, and we give a general scheme for its solution.

  8. Two types of rate-determining step in chemical and biochemical processes.

    PubMed Central

    Yagisawa, S

    1989-01-01

    Close examination of the concept of the rate-determining step (RDS) shows that there are two types of RDS depending on the definition of 'rate'. One is represented by the highest peak of the free-energy diagram of consecutive reactions and holds true where the rate is defined in terms of the concentration of the first reactant. The other is represented by the peak showing the maximum free-energy difference, where the free-energy difference is the height of a peak measured from the bottom of any preceding troughs, where the definition of the rate is in terms of the total reactant concentration including intermediates. There are no criteria a priori for selecting one of them. PMID:2597141

  9. Astrophysics of Reference Frame Tie Objects

    NASA Technical Reports Server (NTRS)

    Johnston, Kenneth J.; Boboltz, David; Fey, Alan Lee; Gaume, Ralph A.; Zacharias, Norbert

    2004-01-01

    The Astrophysics of Reference Frame Tie Objects Key Science program will investigate the underlying physics of SIM grid objects. Extragalactic objects in the SIM grid will be used to tie the SIM reference frame to the quasi-inertial reference frame defined by extragalactic objects and to remove any residual frame rotation with respect to the extragalactic frame. The current realization of the extragalactic frame is the International Celestial Reference Frame (ICRF). The ICRF is defined by the radio positions of 212 extragalactic objects and is the IAU sanctioned fundamental astronomical reference frame. This key project will advance our knowledge of the physics of the objects which will make up the SIM grid, such as quasars and chromospherically active stars, and relates directly to the stability of the SIM reference frame. The following questions concerning the physics of reference frame tie objects will be investigated.

  10. Accuracy Assessment of Geometrical Elements for Setting-Out in Horizontal Plane of Conveying Chambers at the Bauxite Mine "KOSTURI" Srebrenica

    NASA Astrophysics Data System (ADS)

    Milutinović, Aleksandar; Ganić, Aleksandar; Tokalić, Rade

    2014-03-01

    Setting-out of objects in the exploitation field of a mine, both in surface mining and in underground mines, is governed by the specified setting-out accuracy of the reference points that best define the spatial position of the projected object. For the purpose of achieving the specified accuracy, it is necessary to perform an a priori accuracy assessment of the parameters that are to be used when performing the setting-out. Based on the a priori accuracy assessment, the following are carried out: verification of the quality of the geometrical setting-out elements specified in the layout; definition of the accuracy for setting-out of the geometrical elements; selection of the setting-out method; and selection of the type and class of instruments and tools that need to be applied in order to achieve the predefined accuracy. The paper presents the accuracy assessment of the geometrical elements for setting-out of the main haul gallery, the haul downcast and the helical conveying downcasts in the shape of an inclined helix in the horizontal plane, using the example of the underground bauxite mine »Kosturi«, Srebrenica. Setting-out of objects in the mining field, in both underground and open-pit mines, depends to a large extent on the specified accuracy of setting out the reference points, by means of which the spatial position of the remaining objects is subsequently determined. In order to achieve the assumed accuracy, a preliminary analysis of the accuracy of estimation of the parameters that will later be used in the setting-out process should be carried out. Based on the results of the preliminary accuracy analysis, the quality of the geometrical setting-out of the elements marked on the sketch is verified; taking these results into account, a suitable setting-out method should be chosen, along with the type and class of instruments and tools to be used, so as to achieve the assumed level of accuracy. The paper presents the assessment of the accuracy of setting out the geometrical elements of the main haul gallery, the haul downcast and the inlet shafts, projected onto the horizontal plane, for the underground bauxite mine »Kosturi« in Srebrenica.

  11. Single-trial event-related potential extraction through one-unit ICA-with-reference

    NASA Astrophysics Data System (ADS)

    Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation thus resulting in faster ERP extraction. Significance. In addition to that, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online application.

  12. Single-trial event-related potential extraction through one-unit ICA-with-reference.

    PubMed

    Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation thus resulting in faster ERP extraction. In addition to that, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online application.

  13. Impact of the galactic acceleration on the terrestrial reference frame and the scale factor in VLBI

    NASA Astrophysics Data System (ADS)

    Krásná, Hana; Titov, Oleg

    2017-04-01

    The relative motion of the solar system barycentre around the galactic centre can also be described as an acceleration of the solar system directed towards the centre of the Galaxy. So far, this effect has been omitted in the a priori modelling of the Very Long Baseline Interferometry (VLBI) observable. Therefore, it results in a systematic dipole proper motion (Secular Aberration Drift, SAD) of extragalactic radio sources building the celestial reference frame with a theoretical maximum magnitude of 5-7 microarcsec/year. In this work, we present our estimation of the SAD vector obtained within a global adjustment of the VLBI measurements (1979.0 - 2016.5) using the software VieVS. We focus on the influence of the observed radio sources with the maximum SAD effect on the terrestrial reference frame. We show that the scale factor from the VLBI measurements estimated for each source individually discloses a clear systematic aligned with the direction to the Galactic centre-anticentre. Therefore, the radio sources located near Galactic anticentre may cause a strong systematic effect, especially, in early VLBI years. For instance, radio source 0552+398 causes a difference up to 1 mm in the estimated baseline length. Furthermore, we discuss the scale factor estimated for each radio source after removal of the SAD systematic.

  14. 49 CFR 385.321 - What failures of safety management practices disclosed by the safety audit will result in a...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... occurrence. This violation refers to a driver operating a CMV as defined under § 383.5. 9. § 387.7(a... unqualified driver Single occurrence. This violation refers to a driver operating a CMV as defined under § 390...

  15. A spatial haplotype copying model with applications to genotype imputation.

    PubMed

    Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan

    2015-05-01

    Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads both to improved accuracy and to a lower computational runtime than the standard approach. Finally, we show that our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.
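    The core difference from the standard model is the a priori weight given to each reference haplotype in the copying process. The Python sketch below contrasts the uniform prior with a spatially aware prior that down-weights geographically distant haplotypes through an exponential kernel; the kernel and scale are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def spatial_copying_prior(distances, scale=1.0):
    """A priori probability that each reference haplotype contributes to the
    copying process, down-weighting geographically distant haplotypes."""
    weights = np.exp(-np.asarray(distances, dtype=float) / scale)
    return weights / weights.sum()

# Distances (arbitrary units) from the target individual to the sampling
# locations of four reference haplotypes.
d = [0.5, 1.0, 5.0, 10.0]
print(np.full(len(d), 1.0 / len(d)))          # standard model: uniform prior
print(spatial_copying_prior(d, scale=2.0))    # spatial model: nearby favoured
```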

  16. Non-linear motions in reprocessed GPS station position time series

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Gendt, Gerd

    2010-05-01

    Global Positioning System (GPS) data from about 400 globally distributed stations, spanning 1998 to 2007, were reprocessed using the GFZ Potsdam EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, with the purpose of determining weekly precise coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution GT1 is based on an absolute phase-center variation model, ITRF2005 as the a priori reference frame, and other new models; it also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.
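
    A common first step when screening such series for non-linear behaviour is to remove a linear trend plus annual and semiannual harmonics and inspect the residuals. The sketch below illustrates that step on synthetic weekly heights; the model form and the synthetic numbers are assumptions for illustration, not values from the study.

```python
import numpy as np

def fit_height_model(t_years, h_mm):
    """Least-squares fit of a linear trend plus annual and semiannual terms.

    Residuals from this standard model are one simple way to expose
    non-linear station motions in a weekly height time series.
    """
    w = 2.0 * np.pi  # one cycle per year
    A = np.column_stack([
        np.ones_like(t_years), t_years,
        np.cos(w * t_years), np.sin(w * t_years),
        np.cos(2 * w * t_years), np.sin(2 * w * t_years),
    ])
    coeffs, *_ = np.linalg.lstsq(A, h_mm, rcond=None)
    residuals = h_mm - A @ coeffs
    return coeffs, residuals

# Synthetic weekly series, 1998-2007: 2 mm/yr uplift + 3 mm annual signal + noise
t = np.arange(1998.0, 2008.0, 1.0 / 52.0)
h = 2.0 * (t - t[0]) + 3.0 * np.sin(2 * np.pi * t) + np.random.normal(0, 1, t.size)
coeffs, res = fit_height_model(t - t[0], h)
print("estimated rate (mm/yr):", coeffs[1])
```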

  17. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, such as may be obtained in practice from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
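
    The overall loop (forward model, misfit against sparse reference data, derivative-free search) can be sketched in a few lines. The toy exponential "deposit" model, the well positions, and the use of Nelder-Mead in place of the study's surrogate-based optimizer are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "forward model": deposit thickness along x from an initial suspended
# sediment height h0 and a settling parameter s (purely illustrative, not the
# direct numerical simulation used in the study).
x = np.linspace(0.0, 10.0, 50)

def forward_model(params):
    h0, s = params
    return h0 * np.exp(-s * x)

# Localized reference data, e.g. deposit thickness at a few "well" positions.
true_params = (2.0, 0.35)
well_idx = [5, 20, 40]
reference = forward_model(true_params)[well_idx]

def misfit(params):
    return np.sum((forward_model(params)[well_idx] - reference) ** 2)

# Derivative-free search for the best-fit initial conditions.
result = minimize(misfit, x0=[1.0, 0.1], method="Nelder-Mead")
print("recovered parameters:", result.x)        # ~ (2.0, 0.35)
print("predicted full deposit:", forward_model(result.x)[:5])
```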

  18. Multimodal Fusion with Reference: Searching for Joint Neuromarkers of Working Memory Deficits in Schizophrenia

    PubMed Central

    Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; McEwen, Sarah; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi

    2017-01-01

    Multimodal fusion is an effective approach to take advantage of cross-information among multiple imaging data to better understand brain diseases. However, most current fusion approaches are blind, without adopting any prior information. To date, there is increasing interest in uncovering the neurocognitive mapping of specific behavioral measurements on enriched brain imaging data; hence, a supervised, goal-directed model that uses a priori information as a reference to guide multimodal data fusion is needed and is a natural option. Here we proposed a fusion-with-reference model, called “multi-site canonical correlation analysis with reference plus joint independent component analysis” (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on estimation accuracy of both target component decomposition and modality linkage detection; MCCAR+jICA outperformed the others with higher precision. In human imaging data, working memory performance was utilized as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects, respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, and MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the potential of such results to serve as neuromarkers for mental disorders. PMID:28708547

  19. Nonimaging optical illumination system

    DOEpatents

    Winston, R.; Ries, H.

    1996-12-17

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source, a light reflecting surface, and a family of light edge rays defined along a reference line with the reflecting surface defined in terms of the reference line as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line, and D is a distance from a point on the reference line to the reflection surface along the desired edge ray through the point. 35 figs.
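
    The parametric construction R(t) = k(t) + D(t)u(t) is easy to evaluate numerically once a reference line, edge-ray directions and distances are specified. The particular choices of k, u and D below are purely illustrative assumptions, not the geometry claimed in the patent.

```python
import numpy as np

def reflector_points(ts, k, u, D):
    """Evaluate R(t) = k(t) + D(t) * u(t) for an array of parameter values.

    k(t): point on the reference line
    u(t): unit vector of the desired edge ray through k(t)
    D(t): distance from k(t) to the reflector along that edge ray
    """
    return np.array([k(t) + D(t) * u(t) for t in ts])

# Illustrative choices (not from the patent): a straight reference line along x,
# edge rays fanning from 90 to 45 degrees, and a linearly growing distance.
k = lambda t: np.array([t, 0.0])
u = lambda t: np.array([np.cos(np.pi / 2 - t * np.pi / 4),
                        np.sin(np.pi / 2 - t * np.pi / 4)])
D = lambda t: 1.0 + 0.5 * t

ts = np.linspace(0.0, 1.0, 11)
print(reflector_points(ts, k, u, D))   # sampled points on the reflecting surface
```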

  20. Nonimaging optical illumination system

    DOEpatents

    Winston, R.; Ries, H.

    1998-10-06

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source, a light reflecting surface, and a family of light edge rays defined along a reference line with the reflecting surface defined in terms of the reference line as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line, and D is a distance from a point on the reference line to the reflection surface along the desired edge ray through the point. 35 figs.

  1. Defining Top-of-Atmosphere Flux Reference Level for Earth Radiation Budget Studies

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Kato, S.; Wielicki, B. A.

    2002-01-01

    To estimate the earth's radiation budget at the top of the atmosphere (TOA) from satellite-measured radiances, it is necessary to account for the finite geometry of the earth and recognize that the earth is a solid body surrounded by a translucent atmosphere of finite thickness that attenuates solar radiation differently at different heights. As a result, in order to account for all of the reflected solar and emitted thermal radiation from the planet by direct integration of satellite-measured radiances, the measurement viewing geometry must be defined at a reference level well above the earth's surface (e.g., 100 km). This ensures that all radiation contributions, including radiation escaping the planet along slant paths above the earth's tangent point, are accounted for. By using a field-of-view (FOV) reference level that is too low (such as the surface reference level), TOA fluxes for most scene types are systematically underestimated by 1-2 W/sq m. In addition, since TOA flux represents a flow of radiant energy per unit area, and varies with distance from the earth according to the inverse-square law, a reference level is also needed to define satellite-based TOA fluxes. From theoretical radiative transfer calculations using a model that accounts for spherical geometry, the optimal reference level for defining TOA fluxes in radiation budget studies for the earth is estimated to be approximately 20 km. At this reference level, there is no need to explicitly account for horizontal transmission of solar radiation through the atmosphere in the earth radiation budget calculation. In this context, therefore, the 20-km reference level corresponds to the effective radiative top of atmosphere for the planet. Although the optimal flux reference level depends slightly on scene type due to differences in effective transmission of solar radiation with cloud height, the difference in flux caused by neglecting the scene-type dependence is less than 0.1%. If an inappropriate TOA flux reference level is used to define satellite TOA fluxes, and horizontal transmission of solar radiation through the planet is not accounted for in the radiation budget equation, systematic errors in net flux of up to 8 W/sq m can result. Since climate models generally use a plane-parallel model approximation to estimate TOA fluxes and the earth radiation budget, they implicitly assume zero horizontal transmission of solar radiation in the radiation budget equation, and do not need to specify a flux reference level. By defining satellite-based TOA flux estimates at a 20-km flux reference level, comparisons with plane-parallel climate model calculations are simplified since there is no need to explicitly correct plane-parallel climate model fluxes for horizontal transmission of solar radiation through a finite earth.
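
    The inverse-square dependence of flux on the chosen reference level can be written down directly. The sketch below rescales a flux between reference heights using an assumed mean Earth radius; the 240 W/m^2 input value is only an example.

```python
import numpy as np

R_EARTH_KM = 6371.0  # mean Earth radius (assumed value)

def flux_at_reference_level(flux_ref, h_ref_km, h_new_km):
    """Rescale a TOA flux from one reference level to another.

    Flux is a flow of energy per unit area, so for the same total outgoing
    power it falls off with the square of the distance from Earth's centre.
    """
    r_ref = R_EARTH_KM + h_ref_km
    r_new = R_EARTH_KM + h_new_km
    return flux_ref * (r_ref / r_new) ** 2

# A 240 W/m^2 outgoing flux defined at the surface reference level corresponds
# to a slightly smaller flux when defined at the 20 km reference level.
print(flux_at_reference_level(240.0, 0.0, 20.0))   # ~238.5 W/m^2
print(flux_at_reference_level(240.0, 0.0, 100.0))  # ~232.6 W/m^2
```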

  2. The effect of directivity in a PSHA framework

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2012-09-01

    We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented towards the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. The first considers the seismic source as an extended source, and is valid when the PSHA seismogenetic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent in the hazard level for dip-slip faults with a uniform distribution of hypocentre location, in terms of the spectral acceleration response at 5 s with an exceedance probability of 10 per cent in 50 yr. The second concerns the more general problem of seismogenetic areas, where each point is a seismogenetic source having the same chance of nucleating a seismic event. In our proposal, the point source is associated with rupture-related parameters defined using a statistical description. As an example, we consider a source point in an area characterized by a strike-slip faulting style. With the introduction of the directivity correction, the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not increase the hazard level uniformly, but rather redistributes the estimate in a way that is consistent with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip and orientation of the faults associated with the majority of the seismogenetic zones of present seismic hazard maps. The percentage of variation obtained depends strongly on the analytical model chosen to represent the directivity effect. Our aim is therefore to emphasize the methodology, by which all the information collected may be converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.

  3. An FP7 "Space" project: Aphorism "Advanced PRocedures for volcanic and Seismic Monitoring"

    NASA Astrophysics Data System (ADS)

    Di Iorio, A., Sr.; Stramondo, S.; Bignami, C.; Corradini, S.; Merucci, L.

    2014-12-01

    The APHORISM project proposes the development and testing of two new methods to combine Earth Observation satellite data from different sensors with ground data. The aim is to demonstrate that these two types of data, appropriately managed and integrated, can provide new and improved GMES products useful for seismic and volcanic crisis management. The first method, APE - A Priori information for Earthquake damage mapping, concerns the generation of maps for the detection and estimation of damage caused by an earthquake. The use of satellite data to investigate earthquake damage is not new; a wide literature and many projects address this issue, but the approach is usually based only on change-detection techniques and classification algorithms. The novelty of APE lies in the exploitation of a priori information derived from InSAR time series to measure surface movements, shake maps obtained from seismological data, and vulnerability information. This a priori information is then integrated with the change-detection map to improve accuracy and to limit false alarms. The second method deals with volcanic crisis management. The method, MACE - Multi-platform volcanic Ash Cloud Estimation, concerns the exploitation of GEO (geosynchronous Earth orbit) sensor platforms, LEO (low Earth orbit) satellite sensors and ground measurements to improve ash detection and retrieval and to characterize volcanic ash clouds. The basic idea of MACE consists of improving volcanic ash retrievals in space and time by using both the LEO and GEO estimates and in situ data. Indeed, the standard ash thermal infrared retrieval is integrated with data covering a wider spectral range from visible to microwave. Ash detection is also extended to cases of cloudy atmosphere or steam plumes. The APE and MACE methods have been defined to provide products oriented toward the next ESA Sentinel satellite missions. The project is funded under the European Union FP7 programme, and the kick-off meeting was held at the INGV premises in Rome on 18 December 2013.

  4. A Priori Knowledge and Heuristic Reasoning in Architectural Design.

    ERIC Educational Resources Information Center

    Rowe, Peter G.

    1982-01-01

    It is proposed that the various classes of a priori knowledge incorporated in heuristic reasoning processes exert a strong influence over architectural design activity. Some design problems require exercise of some provisional set of rules, inference, or plausible strategy which requires heuristic reasoning. A case study illustrates this concept.…

  5. THE NUMBER OF TIDAL DWARF SATELLITE GALAXIES IN DEPENDENCE OF BULGE INDEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López-Corredoira, Martín; Kroupa, Pavel, E-mail: martinlc@iac.es, E-mail: pavel@astro.uni-bonn.de

    We show that a significant correlation (up to 5σ) emerges between the bulge index, defined to be larger for a larger bulge/disk ratio, in spiral galaxies with similar luminosities in the Galaxy Zoo 2 of the Sloan Digital Sky Survey and the number of tidal-dwarf galaxies in the catalog by Kaviraj et al. In the standard cold or warm dark matter cosmological models, the number of satellite galaxies correlates with the circular velocity of the dark matter host halo. In generalized gravity models without cold or warm dark matter, such a correlation does not exist, because host galaxies cannot capture infalling dwarf galaxies due to the absence of dark-matter-induced dynamical friction. However, in such models, a correlation is expected to exist between the bulge mass and the number of satellite galaxies because bulges and tidal-dwarf satellite galaxies form in encounters between host galaxies. This is not predicted by dark matter models in which bulge mass and the number of satellites are a priori uncorrelated because higher bulge/disk ratios do not imply higher dark/luminous ratios. Hence, our correlation reproduces the prediction of scenarios without dark matter, whereas an explanation is not found readily from the a priori predictions of the standard scenario with dark matter. Further research is needed to explore whether some application of the standard theory may explain this correlation.

  6. Reporting standards for Bland-Altman agreement analysis in laboratory research: a cross-sectional survey of current practice.

    PubMed

    Chhapola, Viswas; Kanwal, Sandeep Kumar; Brar, Rekha

    2015-05-01

    To carry out a cross-sectional survey of laboratory research papers published after 2012 and available in common search engines (PubMed, Google Scholar), assessing the quality of statistical reporting in method comparison studies using Bland-Altman (B-A) analysis. Fifty clinical studies were identified which had undertaken method comparison of laboratory analytes using B-A. The reporting of B-A was evaluated using a predesigned checklist with the following six items: (1) correct representation of the x-axis on the B-A plot, (2) representation and correct definition of the limits of agreement (LOA), (3) reporting of the confidence interval (CI) of the LOA, (4) comparison of the LOA with a priori defined clinical criteria, (5) evaluation of the pattern of the relationship between the difference (y-axis) and the average (x-axis) and (6) measures of repeatability. The x-axis and LOA were presented correctly in 94%, comparison with a priori clinical criteria in 74%, CI reporting in 6%, evaluation of pattern in 28% and repeatability assessment in 38% of studies. There is incomplete reporting of B-A in published clinical studies. Despite its simplicity, B-A appears not to be completely understood by researchers, reviewers and editors of journals. There appear to be differences in the reporting of B-A between laboratory medicine journals and other clinical journals. Uniform reporting of the B-A method will enhance the generalizability of results. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
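
    For reference, the quantities covered by checklist items (1)-(3) can be computed in a few lines. The sketch below uses the usual large-sample approximation for the standard error of a limit of agreement and made-up measurements; it is not the survey's own analysis code.

```python
import numpy as np
from scipy import stats

def bland_altman(method_a, method_b, alpha=0.05):
    """Bias, 95% limits of agreement (LOA), and CIs of the LOA.

    The CI formulas follow the usual large-sample approximation
    (SE of a limit ~ sqrt(3/n) * sd of the differences).
    """
    diff = np.asarray(method_a) - np.asarray(method_b)
    n = diff.size
    bias, sd = diff.mean(), diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    se_loa = sd * np.sqrt(3.0 / n)
    ci = [(l - t * se_loa, l + t * se_loa) for l in loa]
    # The x-axis of the plot should be the mean of the two methods (item 1).
    mean_axis = (np.asarray(method_a) + np.asarray(method_b)) / 2.0
    return {"bias": bias, "loa": loa, "loa_ci": ci, "mean_axis": mean_axis}

# Toy comparison of two analysers measuring the same analyte.
rng = np.random.default_rng(0)
truth = rng.normal(100, 15, 60)
a = truth + rng.normal(0.5, 2.0, 60)   # method A: small positive bias
b = truth + rng.normal(0.0, 2.0, 60)
res = bland_altman(a, b)
print(res["bias"], res["loa"], res["loa_ci"])
```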

  7. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches add such constraints either as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of the inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisitions, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  8. Development of a Clinical Framework for Mirror Therapy in Patients with Phantom Limb Pain: An Evidence-based Practice Approach.

    PubMed

    Rothgangel, Andreas; Braun, Susy; de Witte, Luc; Beurskens, Anna; Smeets, Rob

    2016-04-01

    To describe the development and content of a clinical framework for mirror therapy (MT) in patients with phantom limb pain (PLP) following amputation. Based on an a priori formulated theoretical model, 3 sources of data collection were used to develop the clinical framework. First, a review of the literature took place on important clinical aspects and the evidence on the effectiveness of MT in patients with phantom limb pain. In addition, questionnaires and semi-structured interviews were used to analyze clinical experiences and preferences of physical and occupational therapists and patients suffering from PLP regarding the application of MT. All data were finally clustered into main and subcategories and were used to complement and refine the theoretical model. For every main category of the a priori formulated theoretical model, several subcategories emerged from the literature search, patient, and therapist interviews. Based on these categories, we developed a clinical flowchart that incorporates the main and subcategories in a logical way according to the phases in methodical intervention defined by the Royal Dutch Society for Physical Therapy. In addition, we developed a comprehensive booklet that illustrates the individual steps of the clinical flowchart. In this study, a structured clinical framework for the application of MT in patients with PLP was developed. This framework is currently being tested for its effectiveness in a multicenter randomized controlled trial. © 2015 World Institute of Pain.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartkiewicz, Karol; Miranowicz, Adam

    We find an optimal quantum cloning machine, which clones qubits of arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning, in contrast to the standard phase-covariant cloning for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and fidelity of the clones. As an illustration, we analyze cloning of a qubit state described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine for which quantum circuits are known.

  10. Least Squares Solution of Small Sample Multiple-Master PSInSAR System

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ding, Xiao Li; Lu, Zhong

    2010-03-01

    In this paper we propose a least-squares-based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on the arcs constructed between two nearby points at which there are no phase ambiguities. During the estimation, an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of the correlated observations in the model. The parameters at the points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified by a set of simulated data.
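
    The weighting step named in the abstract (pseudoinverse of an a priori covariance as the weight of correlated observations) amounts to weighted least squares on the arc network. The sketch below shows that estimator on a made-up four-arc, three-point network with one fixed reference point; the numbers and network layout are assumptions for illustration.

```python
import numpy as np

def weighted_lsq(A, y, Q_apriori):
    """Weighted least-squares estimate with W = pseudoinverse of the a priori
    variance-covariance matrix of the (possibly correlated) observations."""
    W = np.linalg.pinv(Q_apriori)
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    return x_hat, np.linalg.inv(N)   # estimates and their covariance

# Toy example: deformation rates at 3 points estimated from 4 "arcs" (rate
# differences between nearby points); the reference point has its rate fixed to 0.
true = np.array([2.0, 3.0, 5.0])        # mm/yr at points 1, 2, 3
A = np.array([[ 1.,  0.,  0.],          # arc reference - point 1
              [-1.,  1.,  0.],          # arc point 1 - point 2
              [ 0., -1.,  1.],          # arc point 2 - point 3
              [ 0.,  0.,  1.]])         # arc reference - point 3
y = A @ true + np.random.normal(0, 0.1, 4)
Q = 0.01 * (np.eye(4) + 0.3 * (np.ones((4, 4)) - np.eye(4)))  # correlated arcs
x_hat, cov = weighted_lsq(A, y, Q)
print(x_hat)   # ~ [2, 3, 5] mm/yr
```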

  11. Development of an Axisymmetric Afterbody Test Case for Turbulent Flow Separation Validation

    NASA Technical Reports Server (NTRS)

    Disotell, Kevin J.; Rumsey, Christopher L.

    2017-01-01

    As identified in the CFD Vision 2030 Study commissioned by NASA, validation of advanced RANS models and scale-resolving methods for computing turbulent flows must be supported by improvements in high-quality experiments designed specifically for CFD implementation. A new test platform referred to as the Axisymmetric Afterbody allows for a range of flow behaviors to be studied on interchangeable afterbodies while facilitating access to higher Reynolds number facilities. A priori RANS computations are reported for a risk-reduction configuration to demonstrate critical variation among turbulence model results for a given afterbody, ranging from barely-attached to mildly separated flow. The effects of body nose geometry and tunnel-wall boundary condition on the computed afterbody flow are explored to inform the design of an experimental test program.

  12. A weighted ℓ1-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
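
    A simple way to see how coefficient weights enter is to solve the LASSO form of the problem with per-coefficient penalties via iterative soft thresholding. This is only a stand-in for the paper's weighted ℓ1 (basis pursuit) formulation; the toy measurement matrix, sparsity pattern and weight values below are assumptions.

```python
import numpy as np

def weighted_ista(A, y, weights, lam=0.1, n_iter=500):
    """Weighted l1 (LASSO-form) recovery by iterative soft thresholding:
        minimize 0.5*||A x - y||^2 + lam * sum_i weights[i] * |x_i|
    Coefficients expected a priori to be large get small weights, so they
    are penalized less.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        thresh = lam * weights / L
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

# Toy sparse recovery: 20 unknown PC coefficients, 12 random samples.
rng = np.random.default_rng(1)
A = rng.normal(size=(12, 20)) / np.sqrt(12)
x_true = np.zeros(20)
x_true[[0, 3, 7]] = [1.0, -0.8, 0.5]
y = A @ x_true

uniform = np.ones(20)                     # standard (unweighted) l1
informed = np.ones(20)
informed[[0, 3, 7]] = 0.1                 # a priori decay information
print(np.round(weighted_ista(A, y, uniform), 2))
print(np.round(weighted_ista(A, y, informed), 2))
```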

  13. Using the GOCE star trackers for validating the calibration of its accelerometers

    NASA Astrophysics Data System (ADS)

    Visser, P. N. A. M.

    2017-12-01

    A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations that was originally tested by an end-to-end simulation, has been updated and applied to real data from GOCE. It is shown that the method provides estimates of scale factors for all three axes of the six GOCE accelerometers that are consistent at a level significantly better than 0.01 compared to the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.

  14. The structure of paranoia in the general population.

    PubMed

    Bebbington, Paul E; McBride, Orla; Steel, Craig; Kuipers, Elizabeth; Radovanovic, Mirjana; Brugha, Traolach; Jenkins, Rachel; Meltzer, Howard I; Freeman, Daniel

    2013-06-01

    Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.

  15. Gene integrated set profile analysis: a context-based approach for inferring biological endpoints

    PubMed Central

    Kowalski, Jeanne; Dwivedi, Bhakti; Newman, Scott; Switchenko, Jeffery M.; Pauly, Rini; Gutman, David A.; Arora, Jyoti; Gandhi, Khanjan; Ainslie, Kylie; Doho, Gregory; Qin, Zhaohui; Moreno, Carlos S.; Rossi, Michael R.; Vertino, Paula M.; Lonial, Sagar; Bernal-Mizrachi, Leon; Boise, Lawrence H.

    2016-01-01

    The identification of genes with specific patterns of change (e.g. down-regulated and methylated) as phenotype drivers or samples with similar profiles for a given gene set as drivers of clinical outcome, requires the integration of several genomic data types for which an ‘integrate by intersection’ (IBI) approach is often applied. In this approach, results from separate analyses of each data type are intersected, which has the limitation of a smaller intersection with more data types. We introduce a new method, GISPA (Gene Integrated Set Profile Analysis) for integrated genomic analysis and its variation, SISPA (Sample Integrated Set Profile Analysis) for defining respective genes and samples with the context of similar, a priori specified molecular profiles. With GISPA, the user defines a molecular profile that is compared among several classes and obtains ranked gene sets that satisfy the profile as drivers of each class. With SISPA, the user defines a gene set that satisfies a profile and obtains sample groups of profile activity. Our results from applying GISPA to human multiple myeloma (MM) cell lines contained genes of known profiles and importance, along with several novel targets, and their further SISPA application to MM coMMpass trial data showed clinical relevance. PMID:26826710

  16. Hybrid Parallel Contour Trees, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher; Fasel, Patricia; Carr, Hamish

    A common operation in scientific visualization is to compute and render a contour of a data set. Given a function of the form f : R^d -> R, a level set is defined as an inverse image f^-1(h) for an isovalue h, and a contour is a single connected component of a level set. The Reeb graph can then be defined to be the result of contracting each contour to a single point, and is well defined for Euclidean spaces or for general manifolds. For simple domains, the graph is guaranteed to be a tree, and is called the contour tree. Analysis can then be performed on the contour tree in order to identify isovalues of particular interest, based on various metrics, and render the corresponding contours, without having to know such isovalues a priori. This code is intended to be the first data-parallel algorithm for computing contour trees. Our implementation will use the portable data-parallel primitives provided by Nvidia's Thrust library, allowing us to compile our same code for both GPUs and multi-core CPUs. Native OpenMP and purely serial versions of the code will likely also be included. It will also be extended to provide a hybrid data-parallel / distributed algorithm, allowing scaling beyond a single GPU or CPU.

  17. Department of Defense Strategy to Support Multi-Agency Bat Conservation Initiative within the State of Utah

    DTIC Science & Technology

    2008-02-28

    Datum: Geometric reference surface. Original Site Location datum is defined by the user's map datum, e.g. NAD27 Conus or NAD83. Calculated and recorded automatically if the fields UTM_N and UTM_E or Township, Range, and Section are entered.

  18. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    PubMed

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimation one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.
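
    To make the "adaptive a priori" idea concrete, the sketch below computes an expected a posteriori (EAP) proficiency estimate on a grid and lets the prior mean be re-centred on a provisional estimate. The Rasch model, the grid, and the item difficulties are assumptions for illustration, not the article's exact setup.

```python
import numpy as np

def eap_estimate(responses, difficulties, prior_mean=0.0, prior_sd=1.0,
                 grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori (EAP) proficiency estimate under a Rasch model.

    The prior mean is a parameter so that it can be made "adaptive",
    i.e. re-centred on a provisional estimate instead of being fixed at 0.
    """
    theta = grid
    # Rasch response probabilities for each item at each grid point
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - np.array(difficulties)[None, :])))
    r = np.array(responses)[None, :]
    likelihood = np.prod(p ** r * (1 - p) ** (1 - r), axis=1)
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    eap = np.sum(theta * posterior)
    psd = np.sqrt(np.sum((theta - eap) ** 2 * posterior))   # posterior SD
    return eap, psd

# Same response pattern, fixed vs. adaptively re-centred prior.
resp = [1, 1, 1, 0, 1]
diff = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(eap_estimate(resp, diff))                  # prior centred at 0
print(eap_estimate(resp, diff, prior_mean=1.0))  # prior re-centred on a provisional estimate
```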

  19. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
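
    The q-method step mentioned in the abstract (the exact optimal attitude for given parameter values, with no a priori attitude needed) has a compact standard implementation: build Davenport's K matrix from the weighted vector observations and take the eigenvector of its largest eigenvalue. The sketch below shows only that step; the iterative estimation of the other parameters described in the paper is not included.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport q-method: optimal attitude quaternion minimizing Wahba's loss.

    body_vecs, ref_vecs : lists of unit vectors in the body and reference frames
    weights             : non-negative observation weights
    Returns the vector-first quaternion of the optimal attitude (sign and
    scalar-part convention follow the usual attitude-determination literature).
    """
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigval, eigvec = np.linalg.eigh(K)
    return eigvec[:, -1]       # eigenvector of the largest eigenvalue

# Toy check: observations related by a 90-degree rotation about the z axis.
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
refs = [np.array([1., 0., 0.]), np.array([0., 0., 1.])]
bodies = [Rz @ r for r in refs]
print(q_method(bodies, refs, [1.0, 1.0]))   # ~ +/-(0, 0, 0.71, -0.71): the 90-degree z rotation
```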

  20. Invariant polarimetric contrast parameters of light with Gaussian fluctuations in three dimensions.

    PubMed

    Réfrégier, Philippe; Roche, Muriel; Goudail, François

    2006-01-01

    We propose a rigorous definition of the minimal set of parameters that characterize the difference between two partially polarized states of light whose electric fields vary in three dimensions with Gaussian fluctuations. Although two such states are a priori defined by eighteen parameters, we demonstrate that the performance of processing tasks such as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by three scalar functions of these parameters. These functions define a "polarimetric contrast" that simplifies the analysis and the specification of processing techniques on polarimetric signals and images. This result can also be used to analyze the definition of the degree of polarization of a three-dimensional state of light with Gaussian fluctuations by comparing it, in terms of its polarimetric contrast parameters, with totally depolarized light. We show that these contrast parameters are a simple function of the degrees of polarization previously proposed by Barakat [Opt. Acta 30, 1171 (1983)] and Setälä et al. [Phys. Rev. Lett. 88, 123902 (2002)]. Finally, we analyze the dimension of the set of contrast parameters in different particular situations.

  1. Encouraging an ecological evolution of data infrastructure

    NASA Astrophysics Data System (ADS)

    Parsons, M. A.

    2015-12-01

    Infrastructure is often thought of as a complex physical construct usually designed to transport information or things (e.g. electricity, water, cars, money, sound, data…). The Research Data Alliance (RDA) takes a more holistic view and considers infrastructure as a complex body of relationships between people, machines, and organisations. This paper will describe how this more ecological perspective leads RDA to define and govern an agile virtual organization. We seek to harness the power of volunteers through an open problem-solving approach that focusses on the problems of our individual members and their organisations. We focus on implementing solutions that make data sharing work better without defining a priori what is necessary. We do not judge the fitness of a solution per se, but instead assess how broadly the solution is adopted, recognizing that adoption is often the social challenge of a technical problem. We seek to encourage a bottom-up approach with light guidance on principles from the top. The goal is to develop community solutions that solve real problems today yet are adaptive to changing technologies and needs.

  2. A scenario elicitation methodology to identify the drivers of electricity infrastructure cost in South America

    NASA Astrophysics Data System (ADS)

    Moksnes, Nandi; Taliotis, Constantinos; Broad, Oliver; de Moura, Gustavo; Howells, Mark

    2017-04-01

    Developing a set of scenarios to assess a proposed policy or future development pathways requires a certain level of information, as well as establishing the socio-economic context. As the future is difficult to predict, great care in defining the selected scenarios is needed; even so, it can be difficult to assess whether the selected scenarios cover the possible solution space. Instead, this paper's methodology develops a large set of scenarios (324) in OSeMOSYS using the SAMBA 2.0 (South America Model Base) model to assess long-term electricity supply scenarios, and applies a scenario-discovery statistical data mining algorithm, the Patient Rule Induction Method (PRIM). By creating a multidimensional space, regions related to high and low cost can be identified, as well as their key drivers. The six key drivers are defined a priori at three (high, medium, low) or two (high, low) levels: 1) Demand projected from GDP, population, urbanization and transport, 2) Fossil fuel price, 3) Climate change impact on hydropower, 4) Renewable technology learning rate, 5) Discount rate, 6) CO2 emission targets.
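
    Enumerating such a factorial scenario set is straightforward; the sketch below assumes four drivers at three levels and two at two levels (an illustrative split that reproduces the 324 count, since the record does not state the exact assignment).

```python
from itertools import product

# Assumed split of the six drivers into 3-level and 2-level ones; 3^4 * 2^2 = 324.
levels = {
    "demand":            ["low", "medium", "high"],
    "fossil_fuel_price": ["low", "medium", "high"],
    "hydro_climate":     ["low", "medium", "high"],
    "learning_rate":     ["low", "medium", "high"],
    "discount_rate":     ["low", "high"],
    "co2_target":        ["low", "high"],
}

# One dictionary per scenario, combining one level for each driver.
scenarios = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(scenarios))   # 324
print(scenarios[0])
```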

  3. High Bar Swing Performance in Novice Adults: Effects of Practice and Talent

    ERIC Educational Resources Information Center

    Busquets, Albert; Marina, Michel; Irurtia, Alfredo; Ranz, Daniel; Angulo-Barroso, Rosa M.

    2011-01-01

    An individual's a priori talent can affect movement performance during learning. Also, task requirements and motor-perceptual factors are critical to the learning process. This study describes changes in high bar swing performance after a 2-month practice period. Twenty-five novice participants were divided by a priori talent level…

  4. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  5. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information. The uncertain on-orbit parameters will eventually influence the performance of the guidance, navigation and control system. In this paper, a novel calibration method for on-orbit star sensors that requires no a priori information is proposed. First, a simplified back-propagation neural network is designed for focal length and main point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, main point and distortion. The proposed method is self-initializing, and no attitude information or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experimental results demonstrate that the calibration is easy to operate, with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.

  6. Effects of daily, high spatial resolution a priori profiles of satellite-derived NOx emissions

    NASA Astrophysics Data System (ADS)

    Laughner, J.; Zare, A.; Cohen, R. C.

    2016-12-01

    The current generation of space-borne NO2 column observations provides a powerful means of constraining NOx emissions, thanks to the spatial resolution and global coverage afforded by the Ozone Monitoring Instrument (OMI). The greater resolution available from next-generation instruments such as TROPOMI and from the geosynchronous platforms TEMPO, Sentinel-4, and GEMS will provide even greater capability in this regard, but we must apply lessons learned from the current generation of retrieval algorithms to make the best use of these instruments. Here, we focus on the effect of the resolution of the a priori NO2 profiles used in the retrieval algorithms. We show that for an OMI retrieval, using daily high-resolution a priori profiles results in changes in the retrieved VCDs of up to 40% compared to a retrieval using monthly average profiles at the same resolution. Further, comparing a retrieval with daily high spatial resolution a priori profiles to a more standard one, we show that derived emissions increase by 100% when using the optimized retrieval.

  7. Nonimaging optical illumination system

    DOEpatents

    Winston, Roland; Ries, Harald

    2000-01-01

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.

  8. Nonimaging optical illumination system

    DOEpatents

    Winston, Roland; Ries, Harald

    1998-01-01

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.

  9. Nonimaging optical illumination system

    DOEpatents

    Winston, Roland; Ries, Harald

    1996-01-01

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.

  10. Reconstruction of the experimentally supported human protein interactome: what can we learn?

    PubMed

    Klapa, Maria I; Tsafou, Kalliopi; Theodoridis, Evangelos; Tsakalidis, Athanasios; Moschonas, Nicholas K

    2013-10-02

    Understanding the topology and dynamics of the human protein-protein interaction (PPI) network will significantly contribute to biomedical research, therefore its systematic reconstruction is required. Several meta-databases integrate source PPI datasets, but the protein node sets of their networks vary depending on the PPI data combined. Due to this inherent heterogeneity, the way in which the human PPI network expands via multiple dataset integration has not been comprehensively analyzed. We aim at assembling the human interactome in a global structured way and exploring it to gain insights of biological relevance. First, we defined the UniProtKB manually reviewed human "complete" proteome as the reference protein-node set and then we mined five major source PPI datasets for direct PPIs exclusively between the reference proteins. We updated the protein and publication identifiers and normalized all PPIs to the UniProt identifier level. The reconstructed interactome covers approximately 60% of the human proteome and has a scale-free structure. No apparent differentiating gene functional classification characteristics were identified for the unrepresented proteins. The source dataset integration augments the network mainly in PPIs. Polyubiquitin emerged as the highest-degree node, but the inclusion of most of its identified PPIs may be reconsidered. The high number (>300) of connections of the subsequent fifteen proteins correlates well with their essential biological role. According to the power-law network structure, the unrepresented proteins should mainly have up to four connections with equally poorly-connected interactors. Reconstructing the human interactome based on the a priori definition of the protein nodes enabled us to identify the currently included part of the human "complete" proteome, and discuss the role of the proteins within the network topology with respect to their function. As the network expansion has to comply with the scale-free theory, we suggest that the core of the human interactome has essentially emerged. Thus, it could be employed in systems biology and biomedical research, despite the considerable number of currently unrepresented proteins. The latter are probably involved in specialized physiological conditions, justifying the scarcity of related PPI information, and their identification can assist in designing relevant functional experiments and targeted text mining algorithms.

  11. Recording multiple spatially-heterodyned direct to digital holograms in one digital image

    DOEpatents

    Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN

    2008-03-25

    Systems and methods are described for recording multiple spatially-heterodyned direct to digital holograms in one digital image. A method includes digitally recording, at a first reference beam-object beam angle, a first spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded first spatially-heterodyned hologram by shifting a first original origin of the recorded first spatially-heterodyned hologram to sit on top of a first spatial-heterodyne carrier frequency defined by the first reference beam-object beam angle; digitally recording, at a second reference beam-object beam angle, a second spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded second spatially-heterodyned hologram by shifting a second original origin of the recorded second spatially-heterodyned hologram to sit on top of a second spatial-heterodyne carrier frequency defined by the second reference beam-object beam angle; applying a first digital filter to cut off signals around the first original origin and define a first result; performing a first inverse Fourier transform on the first result; applying a second digital filter to cut off signals around the second original origin and define a second result; and performing a second inverse Fourier transform on the second result, wherein the first reference beam-object beam angle is not equal to the second reference beam-object beam angle and a single digital image includes both the first spatially-heterodyned hologram and the second spatially-heterodyned hologram.
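
    The processing chain in the abstract (Fourier transform, shift so a spatial-heterodyne carrier sits at the origin, digital filter, inverse transform, repeated once per reference-beam angle) can be sketched with standard FFT operations. The carrier positions, filter radius and random stand-in image below are assumptions for illustration, and the exact filtering arrangement in the patent may differ.

```python
import numpy as np

def demodulate(hologram, carrier, cutoff):
    """Recover one spatially-heterodyned hologram from a combined digital image.

    carrier : (fy, fx) spatial-heterodyne carrier frequency in FFT bins,
              set by the reference-beam/object-beam angle
    cutoff  : radius (in bins) of the digital filter applied around the
              shifted origin
    """
    F = np.fft.fft2(hologram)
    # Shift the spectrum so the chosen carrier frequency sits at the origin
    F = np.roll(F, shift=(-carrier[0], -carrier[1]), axis=(0, 1))
    ny, nx = hologram.shape
    fy = np.fft.fftfreq(ny) * ny
    fx = np.fft.fftfreq(nx) * nx
    mask = (fy[:, None] ** 2 + fx[None, :] ** 2) <= cutoff ** 2
    return np.fft.ifft2(F * mask)     # complex field: amplitude and phase

# Two holograms recorded at different reference-beam angles share one image;
# each is pulled out by shifting to its own carrier and filtering.
img = np.random.rand(256, 256)        # stand-in for the recorded digital image
field_1 = demodulate(img, carrier=(30, 0), cutoff=15)
field_2 = demodulate(img, carrier=(0, 45), cutoff=15)
print(field_1.shape, field_2.dtype)
```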

  12. Principal component analysis of TOF-SIMS spectra, images and depth profiles: an industrial perspective

    NASA Astrophysics Data System (ADS)

    Pacholski, Michaeleen L.

    2004-06-01

    Principal component analysis (PCA) has been successfully applied to time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra, images and depth profiles. Although SIMS spectral data sets can be small (in comparison to data sets typically discussed in the literature for other analytical techniques such as gas or liquid chromatography), each spectrum has thousands of ions, which can make comparison of samples difficult. Analysis of industrially derived samples means the identity of most surface species is unknown a priori, and samples must be analyzed rapidly to satisfy customer demands. PCA enables rapid assessment of spectral differences (or lack thereof) between samples and, for images, identification of chemically different areas on sample surfaces. Depth profile analysis helps define interfaces and identify low-level components in the system.
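
    The basic workflow (assemble a samples-by-peaks intensity matrix, scale it, project onto a few principal components, inspect scores and explained variance) is shown below on a toy matrix. The preprocessing choice (standard scaling) and the synthetic two-group data are assumptions; real TOF-SIMS practice often uses other normalizations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy spectra matrix: rows = samples, columns = peak intensities for selected ions.
# Real TOF-SIMS spectra have thousands of peaks; the workflow is the same.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[5, 1, 3, 0.5], scale=0.2, size=(10, 4))
group_b = rng.normal(loc=[5, 3, 1, 0.5], scale=0.2, size=(10, 4))
X = np.vstack([group_a, group_b])

X_scaled = StandardScaler().fit_transform(X)   # one common preprocessing choice
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print(pca.explained_variance_ratio_)   # spectral variance captured by each PC
print(scores[:3], scores[-3:])          # the two groups separate along PC1
```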

  13. Defining drug response for stratified medicine.

    PubMed

    Lonergan, Mike; Senn, Stephen J; McNamee, Christine; Daly, Ann K; Sutton, Robert; Hattersley, Andrew; Pearson, Ewan; Pirmohamed, Munir

    2017-01-01

    The premise for stratified medicine is that drug efficacy, drug safety, or both, vary between groups of patients, and biomarkers can be used to facilitate more targeted prescribing, with the aim of improving the benefit:risk ratio of treatment. However, many factors can contribute to the variability in response to drug treatment. Inadequate characterisation of the nature and degree of variability can lead to the identification of biomarkers that have limited utility in clinical settings. Here, we discuss the complexities associated with the investigation of variability in drug efficacy and drug safety, and how consideration of these issues a priori, together with standardisation of phenotypes, can increase both the efficiency of stratification procedures and identification of biomarkers with the potential for clinical impact. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Using Electroencephalography for Treatment Guidance in Major Depressive Disorder.

    PubMed

    Wade, Elizabeth C; Iosifescu, Dan V

    2016-09-01

    Given the high prevalence of treatment-resistant depression and the long delays in finding effective treatments via trial and error, valid biomarkers of treatment outcome with the ability to guide treatment selection represent one of the most important unmet needs in mood disorders. A large body of research has investigated, for this purpose, biomarkers derived from electroencephalography (EEG), using resting state EEG or evoked potentials. Most studies have focused on specific EEG features (or combinations thereof), whereas more recently machine-learning approaches have been used to define the EEG features with the best predictive abilities without a priori hypotheses. While reviewing these different approaches, we have focused on the predictor characteristics and the quality of the supporting evidence. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  15. Hopping in the Crowd to Unveil Network Topology.

    PubMed

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-13

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  16. Hopping in the Crowd to Unveil Network Topology

    NASA Astrophysics Data System (ADS)

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-01

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  17. Current Practices of Measuring and Reference Range Reporting of Free and Total Testosterone in the United States.

    PubMed

    Le, Margaret; Flores, David; May, Danica; Gourley, Eric; Nangia, Ajay K

    2016-05-01

    The evaluation and management of male hypogonadism should be based on symptoms and on serum testosterone levels. Diagnostically this relies on accurate testing and reference values. Our objective was to define the distribution of reference values and assays for free and total testosterone by clinical laboratories in the United States. Upper and lower reference values, assay methodology and source of published reference ranges were obtained from laboratories across the country. A standardized survey was reviewed with laboratory staff via telephone. Descriptive statistics were used to tabulate results. We surveyed a total of 120 laboratories in 47 states. Total testosterone was measured in house at 73% of laboratories. At the remaining laboratories studies were sent to larger centralized reference facilities. The mean ± SD lower reference value of total testosterone was 231 ± 46 ng/dl (range 160 to 300) and the mean upper limit was 850 ± 141 ng/dl (range 726 to 1,130). Only 9% of laboratories where in-house total testosterone testing was performed created a reference range unique to their region. Others validated the instrument recommended reference values in a small number of internal test samples. For free testosterone 82% of laboratories sent testing to larger centralized reference laboratories where equilibrium dialysis and/or liquid chromatography with mass spectrometry was done. The remaining laboratories used published algorithms to calculate serum free testosterone. Reference ranges for testosterone assays vary significantly among laboratories. The ranges are predominantly defined by limited population studies of men with unknown medical and reproductive histories. These poorly defined and variable reference values, especially the lower limit, affect how clinicians determine treatment. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  18. The benefits of steroids versus steroids plus antivirals for treatment of Bell's palsy: a meta-analysis.

    PubMed

    Quant, Eudocia C; Jeste, Shafali S; Muni, Rajeev H; Cape, Alison V; Bhussar, Manveen K; Peleg, Anton Y

    2009-09-07

    To determine whether steroids plus antivirals provide a better degree of facial muscle recovery in patients with Bell's palsy than steroids alone. Meta-analysis. PubMed, Embase, Web of Science, and the Cochrane Central Register of Controlled Trials were searched for studies published in all languages from 1984 to January 2009. Additional studies were identified from cited references. Selection criteria: Randomised controlled trials that compared steroids with the combination of steroids and antivirals for the treatment of Bell's palsy were included in this study. At least one month of follow-up and a primary end point of at least partial facial muscle recovery, as defined by a House-Brackmann grade of at least 2 (complete palsy is designated a grade of 6) or an equivalent score on an alternative recognised scoring system, were required. Review methods: Two authors independently reviewed studies for methodological quality, treatment regimens, duration of symptoms before treatment, length of follow-up, and outcomes. Odds ratios with 95% confidence intervals were calculated and pooled using a random effects model. Six trials were included, totalling 1145 patients; 574 patients received steroids alone and 571 patients received steroids and antivirals. The pooled odds ratio for facial muscle recovery showed no benefit of steroids plus antivirals compared with steroids alone (odds ratio 1.50, 95% confidence interval 0.83 to 2.69; P=0.18). A one-study-removed analysis showed that the highest-quality studies had the greatest effect on the lack of difference between study arms shown by the odds ratio. Subgroup analyses assessing causes of heterogeneity defined a priori (time from symptom onset to treatment, length of follow-up, and type of antiviral studied) showed no benefit of antivirals in addition to that provided by steroids. Antivirals did not provide an added benefit in achieving at least partial facial muscle recovery compared with steroids alone in patients with Bell's palsy. This study does not, therefore, support the routine use of antivirals in Bell's palsy. Future studies should use improved herpes virus diagnostics and newer antivirals to assess whether combination therapy benefits patients with more severe facial paralysis at study entry.
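
    As a concrete illustration of the pooling step described above, the following minimal Python sketch implements DerSimonian-Laird random-effects pooling of per-study log odds ratios; the study values shown are hypothetical, not the trial data from this meta-analysis.

```python
import numpy as np

def pool_random_effects(log_or, var):
    """DerSimonian-Laird random-effects pooling of per-study log odds ratios."""
    log_or, var = np.asarray(log_or, float), np.asarray(var, float)
    w = 1.0 / var                                   # fixed-effect weights
    fe = np.sum(w * log_or) / np.sum(w)             # fixed-effect pooled log OR
    q = np.sum(w * (log_or - fe) ** 2)              # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)    # between-study variance
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return np.exp(pooled), (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))

# Hypothetical per-study log odds ratios and variances, not the trial data.
or_pooled, (lo, hi) = pool_random_effects([0.1, 0.6, -0.2], [0.05, 0.08, 0.04])
print(f"pooled OR {or_pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```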

  19. District-level hospital trauma care audit filters: Delphi technique for defining context-appropriate indicators for quality improvement initiative evaluation in developing countries

    PubMed Central

    Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles

    2015-01-01

    Introduction: Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Methods: Consensus on trauma care audit filters was built among twenty panelists using a Delphi technique with four anonymous, iterative surveys designed to elicit: i) trauma care processes to be measured; ii) important features of audit filters for the district-level hospital setting; and iii) potentially useful filters. Filters were ranked on a scale from 0 – 10 (10 being very useful). Consensus was measured with average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as: a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Results: Panelists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, unassuming of resource capacity). APMO cut-off rate increased successively: Round 1 - 0.58; Round 2 - 0.66; Round 3 - 0.76; and Round 4 - 0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage - vital signs are recorded within 15 minutes of arrival (must include breathing assessment, heart rate, blood pressure, oxygen saturation if available); circulation - a large bore IV was placed within 15 minutes of patient arrival; referral - if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. Conclusion: This study proposes trauma care audit filters appropriate for LMIC district-level hospitals. Given the successes of similar filters in HICs and obstetric care filters in LMICs, the collection and reporting of prospective trauma care audit filters may be an important step toward improving care for the injured at district-level hospitals in LMICs. PMID:26492882
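
    A minimal Python sketch of the pre-specified consensus check (median rank ≥ 9 together with a high agreement proportion); the modal-agreement share below is a simplified stand-in for the APMO statistic, whose exact computation follows the Delphi literature cited by the authors.

```python
import numpy as np

def consensus_summary(ratings, median_cut=9.0, agree_cut=0.8):
    """ratings: panelists x filters matrix of 0-10 usefulness ranks.

    Returns, per candidate filter, the median rank and the share of panelists
    in the modal rating category (a simplified stand-in for APMO), plus a flag
    for filters meeting the a priori consensus targets.
    """
    ratings = np.asarray(ratings, float)
    medians = np.median(ratings, axis=0)
    agreement = np.array([np.unique(np.round(col), return_counts=True)[1].max() / len(col)
                          for col in ratings.T])
    return medians, agreement, (medians >= median_cut) & (agreement >= agree_cut)
```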

  20. Responsiveness and Minimally Important Differences for 4 Patient-Reported Outcomes Measurement Information System Short Forms: Physical Function, Pain Interference, Depression, and Anxiety in Knee Osteoarthritis.

    PubMed

    Lee, Augustine C; Driban, Jeffrey B; Price, Lori Lyn; Harvey, William F; Rodday, Angie Mae; Wang, Chenchen

    2017-09-01

    Patient-Reported Outcomes Measurement Information System (PROMIS) instruments can provide valid, interpretable measures of health status among adults with osteoarthritis (OA). However, their ability to detect meaningful change over time is unknown. We evaluated the responsiveness and minimally important differences (MIDs) for 4 PROMIS Short Forms: Physical Function, Pain Interference, Depression, and Anxiety. We analyzed adults with symptomatic knee OA from our randomized trial comparing Tai Chi and physical therapy. Using baseline and 12-week scores, responsiveness was evaluated according to consensus standards by testing 6 a priori hypotheses of the correlations between PROMIS and legacy change scores. Responsiveness was considered high if ≥5 hypotheses were confirmed, and moderate if 3 or 4 were confirmed. MIDs were evaluated according to prospective change for people achieving previously-established MID on legacy comparators. The lowest and highest MIDs meeting a priori quality criteria formed a MID range for each PROMIS Short Form. Among 165 predominantly female (70%) and white (57%) participants, mean age was 61 years and body mass index was 33. PROMIS Physical Function had 5 confirmed hypotheses and Pain Interference, Depression, and Anxiety had 3 or 4. MID ranges were: Depression = 3.0 to 3.1; Anxiety = 2.3 to 3.4; Physical Function = 1.9 to 2.2; and Pain Interference = 2.35 to 2.4. PROMIS Physical Function has high responsiveness, and Depression, Anxiety, and Pain Interference have moderate responsiveness among adults with knee OA. We established the first MIDs for PROMIS in this population, and provided an important standard of reference to better apply or interpret PROMIS in future trials or clinical practice. This study examined whether PROMIS Short Form instruments (Physical Function, Pain Interference, Depression, and Anxiety) were able to detect change over time among adults with knee OA, and provided minimally important change estimates for each measure. This standard of reference can help apply or interpret these instruments in the future. Copyright © 2017. Published by Elsevier Inc.
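
    The anchor-based MID logic described above can be sketched in a few lines of Python; the arrays and the legacy threshold are placeholders rather than study data.

```python
import numpy as np

def anchor_based_mid(promis_change, legacy_change, legacy_mid):
    """Mean prospective PROMIS change among participants who reached the
    previously established MID on the legacy comparator instrument."""
    promis_change = np.asarray(promis_change, float)
    legacy_change = np.asarray(legacy_change, float)
    return promis_change[legacy_change >= legacy_mid].mean()

# Illustrative values only; the legacy threshold is assumed, not taken from the study.
print(anchor_based_mid([2.5, 4.0, 1.0, 3.2], [12, 15, 2, 10], legacy_mid=9))
```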

  1. Knowledge Management and Reference Services

    ERIC Educational Resources Information Center

    Gandhi, Smiti

    2004-01-01

    Many corporations are embracing knowledge management (KM) to capture the intellectual capital of their employees. This article focuses on KM applications for reference work in libraries. It defines key concepts of KM, establishes a need for KM for reference services, and reviews various KM initiatives for reference services.

  2. Improved Pedagogy for Linear Differential Equations by Reconsidering How We Measure the Size of Solutions

    ERIC Educational Resources Information Center

    Tisdell, Christopher C.

    2017-01-01

    For over 50 years, the learning and teaching of "a priori" bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to "a priori" bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving…

  3. Reference Structures: Stagnation, Progress, and Future Challenges.

    ERIC Educational Resources Information Center

    Greenberg, Jane

    1997-01-01

    Assesses the current state of reference structures in online public access catalogs (OPACs) in a framework defined by stagnation, progress, and future challenges. Outlines six areas for reference structure development. Twenty figures provide illustrations. (AEF)

  4. 78 FR 41132 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... Exchange's principal office, and at the Public Reference Room of the Commission. \\3\\ ``Member'' is defined... the Exchange as that term is defined in Section 3(a)(3) of the Act.'' EDGX Rule 1.5(n). \\4\\ References..., then the shares are posted on the EDGX book unless the Member instructs otherwise. See Exchange Rule 11...

  5. GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters

    NASA Astrophysics Data System (ADS)

    Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.

    2003-12-01

    The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows the high variability of tropospheric water vapor to be defined precisely at different temporal and spatial scales. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, ...). First, IWV is estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and of the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it appears to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.

  6. THE SOURCE STRUCTURE OF 0642+449 DETECTED FROM THE CONT14 OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ming H.; Wang, Guang L.; Heinkelmann, Robert

    2016-11-01

    The CONT14 campaign with state-of-the-art very long baseline interferometry (VLBI) data has observed the source 0642+449 with about 1000 observables each day during a continuous observing period of 15 days, providing tens of thousands of closure delays—the sum of the delays around a closed loop of baselines. The closure delay is independent of the instrumental and propagation delays and provides valuable additional information about the source structure. We demonstrate the use of this new “observable” for the determination of the structure in the radio source 0642+449. This source, as one of the defining sources in the second realization of the International Celestial Reference Frame, is found to have two point-like components with a relative position offset of −426 microarcseconds (μas) in R.A. and −66 μas in decl. The two components are almost equally bright, with a flux-density ratio of 0.92. The standard deviation of closure delays for source 0642+449 was reduced from 139 to 90 ps by using this two-component model. Closure delays larger than 1 ns are found to be related to the source structure, demonstrating that structure effects for a source with this simple structure could be up to tens of nanoseconds. The method described in this paper does not rely on a priori source structure information, such as knowledge of source structure determined from direct (Fourier) imaging of the same observations or observations at other epochs. We anticipate our study to be a starting point for more effective determination of the structure effect in VLBI observations.
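
    A minimal Python sketch of how a closure delay is formed from baseline delays around a triangle of stations, assuming hypothetical station labels and delay values; station-based instrumental and propagation terms cancel in this sum, which is why the quantity isolates source structure.

```python
def closure_delay(tau, a, b, c):
    """Sum of observed group delays around the closed baseline loop a->b->c->a.

    tau[(x, y)] holds the delay on baseline (x, y); the delay on the reversed
    baseline is the negative of the stored value. Station-based instrumental
    and propagation terms cancel in this sum.
    """
    def t(x, y):
        return tau[(x, y)] if (x, y) in tau else -tau[(y, x)]
    return t(a, b) + t(b, c) + t(c, a)

# Hypothetical delays (picoseconds) for one scan on a triangle of stations.
tau = {("K", "W"): 120.0, ("W", "N"): -45.0, ("K", "N"): 80.0}
print(closure_delay(tau, "K", "W", "N"))
```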

  7. Obstetrical brachial plexus injury (OBPI): Canada's national clinical practice guideline

    PubMed Central

    Coroneos, Christopher J; Voineskos, Sophocles H; Christakis, Marie K; Thoma, Achilleas; Bain, James R; Brouwers, Melissa C

    2017-01-01

    Objective The objective of this study was to establish an evidence-based clinical practice guideline for the primary management of obstetrical brachial plexus injury (OBPI). This clinical practice guideline addresses 4 existing gaps: (1) historic poor use of evidence, (2) timing of referral to multidisciplinary care, (3) Indications and timing of operative nerve repair and (4) distribution of expertise. Setting The guideline is intended for all healthcare providers treating infants and children, and all specialists treating upper extremity injuries. Participants The evidence interpretation and recommendation consensus team (Canadian OBPI Working Group) was composed of clinicians representing each of Canada's 10 multidisciplinary centres. Outcome measures An electronic modified Delphi approach was used for consensus, with agreement criteria defined a priori. Quality indicators for referral to a multidisciplinary centre were established by consensus. An original meta-analysis of primary nerve repair and review of Canadian epidemiology and burden were previously completed. Results 7 recommendations address clinical gaps and guide identification, referral, treatment and outcome assessment: (1) physically examine for OBPI in newborns with arm asymmetry or risk factors; (2) refer newborns with OBPI to a multidisciplinary centre by 1 month; (3) provide pregnancy/birth history and physical examination findings at birth; (4) multidisciplinary centres should include a therapist and peripheral nerve surgeon experienced with OBPI; (5) physical therapy should be advised by a multidisciplinary team; (6) microsurgical nerve repair is indicated in root avulsion and other OBPI meeting centre operative criteria; (7) the common data set includes the Narakas classification, limb length, Active Movement Scale (AMS) and Brachial Plexus Outcome Measure (BPOM) 2 years after birth/surgery. Conclusions The process established a new network of opinion leaders and researchers for further guideline development and multicentre research. A structured referral form is available for primary care, including referral recommendations. PMID:28132014

  8. Strategies for successful trauma registry implementation in low- and middle-income countries-protocol for a systematic review.

    PubMed

    Paradis, Tiffany; St-Louis, Etienne; Landry, Tara; Poenaru, Dan

    2018-02-21

    The benefits of trauma registries have been well described. The crucial data they provide may guide injury prevention strategies, inform resource allocation, and support advocacy and policy. This has been shown to reduce trauma-related mortality in various settings. Trauma remains a leading cause of mortality in low- and middle-income countries (LMICs). However, the implementation of trauma registries in LMICs can be challenging due to lack of funding, specialized personnel, and infrastructure. This study explores strategies for successful trauma registry implementation in LMICs. The protocol was registered a priori (CRD42017058586). A peer-reviewed search strategy of multiple databases will be developed with a senior librarian. As per PRISMA guidelines, first screen of references based on abstract and title and subsequent full-text review will be conducted by two independent reviewers. Disagreements that cannot be resolved by discussion between reviewers shall be arbitrated by the principal investigator. Data extraction will be performed using a pre-defined data extraction sheet. Finally, bibliographies of included articles will be hand-searched. Studies of any design will be included if they describe or review development and implementation of a trauma registry in LMICs. No language or period restrictions will be applied. Summary statistics and qualitative meta-narrative analyses will be performed. The significant burden of trauma in LMIC environments presents unique challenges and limitations. Adapted strategies for deployment and maintenance of sustainable trauma registries are needed. Our methodology will systematically identify recommendations and strategies for successful trauma registry implementation in LMICs and describe threats and barriers to this endeavor. The protocol was registered on the PROSPERO international prospective register of systematic reviews ( CRD42017058586 ).

  9. "They Shouldn't Be Coming to the ED, Should They?": A Descriptive Service Evaluation of Why Patients With Palliative Care Needs Present to the Emergency Department.

    PubMed

    Green, Emilie; Ward, Sarah; Brierley, Will; Riley, Ben; Sattar, Henna; Harris, Tim

    2017-12-01

    Patients with palliative care needs frequently attend the emergency department (ED). There is no international agreement on which patients are best cared for in the ED, compared to the primary care setting or direct admission to the hospital. This article presents the quantitative phase of a mixed-methods service evaluation, exploring the reasons why patients with palliative care needs present to the ED. This is a single-center, observational study including all patients under the care of a specialist palliative care team who presented to the ED over a 10-week period. Demographic and clinical data were collected from electronic health records. A total of 105 patients made 112 presentations to the ED. The 2 most common presenting complaints were shortness of breath (35%) and pain (28%). Eighty-three percent of presentations required care in the ED according to a priori defined criteria. They either underwent urgent investigation or received immediate interventions that could not be delivered in another setting, were referred by a health-care professional, or were admitted. Findings challenge the misconception that patients known to a palliative care team should be cared for outside the ED. The importance and necessity of the ED for patients in their last years of life has been highlighted, specifically in terms of managing acute, unpredictable crises. Future service provision should not be based solely on a patient's presenting complaint. Further qualitative research exploring patient perspective is required in order to explore the decision-making process that leads patients with palliative care needs to the ED.

  10. Urine biomarkers of kidney injury among adolescents in Nicaragua, a region affected by an epidemic of chronic kidney disease of unknown aetiology

    PubMed Central

    Ramírez-Rubio, Oriana; Amador, Juan José; Kaufman, James S.; Weiner, Daniel E.; Parikh, Chirag R.; Khan, Usman; McClean, Michael D.; Laws, Rebecca L.; López-Pilarte, Damaris; Friedman, David J.; Kupferman, Joseph; Brooks, Daniel R.

    2016-01-01

    Background An epidemic of chronic kidney disease (CKD) of non-traditional aetiology has been recently recognized by health authorities as a public health priority in Central America. Previous studies have identified strenuous manual work, agricultural activities and residence at low altitude as potential risk factors; however, the aetiology remains unknown. Because individuals are frequently diagnosed with CKD in early adulthood, we measured biomarkers of kidney injury among adolescents in different regions of Nicaragua to assess whether kidney damage might be initiated during childhood. Methods Participants include 200 adolescents aged 12–18 years with no prior work history from four different schools in Nicaragua. The location of the school served as a proxy for environmental exposures and geographic locations were selected to represent a range of factors that have been associated with CKD in adults (e.g. altitude, primary industry and CKD mortality rates). Questionnaires, urine dipsticks and kidney injury biomarkers [interleukin-18, N-acetyl-d-glucosaminidase (NAG), neutrophil gelatinase-associated lipocalin (NGAL) and albumin–creatinine ratio] were assessed. Biomarker concentrations were compared by school using linear regression models. Results Protein (3.5%) and glucose (1%) in urine measured by dipstick were rare and did not differ by school. Urine biomarkers of tubular kidney damage, particularly NGAL and NAG, showed higher concentrations in those schools and regions within Nicaragua that were defined a priori as having increased CKD risk. Painful urination was a frequent self-reported symptom. Conclusions Although interpretation of these urine biomarkers is limited because of the lack of population reference values, results suggest the possibility of early kidney damage prior to occupational exposures in these adolescents. PMID:26311057

  11. Early detection of Alzheimer disease: methods, markers, and misgivings.

    PubMed

    Green, R C; Clarke, V C; Thompson, N J; Woodard, J L; Letz, R

    1997-01-01

    There is at present no reliable predictive test for most forms of Alzheimer disease (AD). Although some information about future risk for disease is available in theory through ApoE genotyping, it is of limited accuracy and utility. Once neuroprotective treatments are available for AD, reliable early detection will become a key component of the treatment strategy. We recently conducted a pilot survey eliciting attitudes and beliefs toward an unspecified and hypothetical predictive test for AD. The survey was completed by a convenience sample of 176 individuals, aged 22-77, which was 75% female, 30% African-American, and of which 33% had a family member with AD. The survey revealed that 69% of this sample would elect to obtain predictive testing for AD if the test were 100% accurate. Individuals were more likely to desire predictive testing if they had an a priori belief that they would develop AD (p = 0.0001), had a lower educational level (p = 0.003), were worried that they would develop AD (p = 0.02), had a self-defined history of depression (p = 0.04), and had a family member with AD (p = 0.04). However, the desire for predictive testing was not significantly associated with age, gender, ethnicity, or income. The desire to obtain predictive testing for AD decreased as the assumed accuracy of the hypothetical test decreased. A better short-term strategy for early detection of AD may be computer-based neuropsychological screening of at-risk (older aged) individuals to identify very early cognitive impairment. Individuals identified in this manner could be referred for diagnostic evaluation and early cases of AD could be identified and treated. A new self-administered, touch-screen, computer-based, neuropsychological screening instrument called Neurobehavioral Evaluation System-3 is described, which may facilitate this type of screening.

  12. Characteristics of Patients Referred to a Pediatric Infectious Diseases Clinic With Unexplained Fever.

    PubMed

    Statler, Victoria A; Marshall, Gary S

    2016-09-01

    Older case series established diagnostic considerations for children meeting a priori definitions of fever of unknown origin (FUO). No recent study has examined the final diagnoses of children referred for unexplained fever. This study was conducted with a retrospective chart review of patients referred to a pediatric infectious diseases clinic from 2008 to 2012 for unexplained fever. Sixty-nine of 221 patients were referred for "prolonged" unexplained fever. Ten of these were not actually having fever, and 11 had diagnoses that were readily apparent at the initial visit. The remaining 48 were classified as having FUO. The median duration of reported fever for these patients was 30 days; 15 had a diagnosis made, 5 of which were serious. None of the serious FUO diagnoses were infections. Of 152 patients with "recurrent" unexplained fever, 92 had an "intermittent" fever pattern, and most of these had sequential, self-limited viral illnesses or no definitive diagnosis made. Twenty of the 60 patients with a "periodic" fever pattern were diagnosed with periodic fever, aphthous stomatitis, pharyngitis, and adenitis syndrome. Overall, 166 patients either were not having fever, had self-limited illnesses, or ultimately had no cause of fever discovered. Only 12 had a serious illness, 2 of which were infections (malaria and typhoid fever). Most children referred with unexplained fever had either self-limited illnesses or no specific diagnosis established. Serious diagnoses were unusual, suggesting that these diagnoses rarely present with unexplained fever alone, or that, when they do, the diagnoses are made by primary care providers or other subspecialists. © The Author 2015. Published by Oxford University Press on behalf of the Pediatric Infectious Diseases Society. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. One Size Doesn't Fit All - RefEditor: Building Personalized Diploid Reference Genome to Improve Read Mapping and Genotype Calling in Next Generation Sequencing Studies

    PubMed Central

    Yuan, Shuai; Johnston, H. Richard; Zhang, Guosheng; Li, Yun; Hu, Yi-Juan; Qin, Zhaohui S.

    2015-01-01

    With the rapid decline of sequencing costs, researchers today rush to embrace the whole genome sequencing (WGS) or whole exome sequencing (WES) approach as the next powerful tool for relating genetic variants to human diseases and phenotypes. A fundamental step in analyzing WGS and WES data is mapping short sequencing reads back to the reference genome. This is an important issue because incorrectly mapped reads affect the downstream variant discovery, genotype calling and association analysis. Although many read mapping algorithms have been developed, the majority of them use the universal reference genome and do not take sequence variants into consideration. Given that genetic variants are ubiquitous, it is highly desirable if they can be factored into the read mapping procedure. In this work, we developed a novel strategy that utilizes genotypes obtained a priori to customize the universal haploid reference genome into a personalized diploid reference genome. The new strategy is implemented in a program named RefEditor. When applying RefEditor to real data, we achieved encouraging improvements in read mapping, variant discovery and genotype calling. Compared to standard approaches, RefEditor can significantly increase genotype calling consistency (from 43% to 61% at 4X coverage; from 82% to 92% at 20X coverage) and reduce Mendelian inconsistency across various sequencing depths. Because many WGS and WES studies are conducted on cohorts that have been genotyped using array-based genotyping platforms previously or concurrently, we believe the proposed strategy will be of high value in practice; it can also be applied to the scenario where multiple NGS experiments are conducted on the same cohort. The RefEditor sources are available at https://github.com/superyuan/refeditor. PMID:26267278
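
    The core idea, substituting a priori phased SNP genotypes into the universal reference to obtain two personal haplotype sequences, can be sketched in Python as below; this is only a conceptual illustration, not the RefEditor implementation, which also has to handle unphased calls, indels and coordinate bookkeeping.

```python
def personalize_reference(ref_seq, phased_snps):
    """Build a diploid (two-haplotype) personal reference from a haploid one.

    ref_seq: reference chromosome string.
    phased_snps: dict mapping 0-based position -> (allele_hap1, allele_hap2).
    """
    hap1, hap2 = list(ref_seq), list(ref_seq)
    for pos, (a1, a2) in phased_snps.items():
        hap1[pos], hap2[pos] = a1, a2
    return "".join(hap1), "".join(hap2)

# Toy example; reads would then be mapped against both personalized haplotypes.
h1, h2 = personalize_reference("ACGTACGT", {3: ("T", "C"), 6: ("G", "G")})
print(h1, h2)
```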

  14. A reference model for space data system interconnection services

    NASA Astrophysics Data System (ADS)

    Pietras, John; Theis, Gerhard

    1993-03-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSIRM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  15. A reference model for space data system interconnection services

    NASA Technical Reports Server (NTRS)

    Pietras, John; Theis, Gerhard

    1993-01-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSIRM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  16. BusyBee Web: metagenomic data analysis by bootstrapped supervised binning and annotation

    PubMed Central

    Kiefer, Christina; Fehlmann, Tobias; Backes, Christina

    2017-01-01

    Metagenomics-based studies of mixed microbial communities are impacting biotechnology, life sciences and medicine. Computational binning of metagenomic data is a powerful approach for the culture-independent recovery of population-resolved genomic sequences, i.e. from individual or closely related, constituent microorganisms. Existing binning solutions often require a priori characterized reference genomes and/or dedicated compute resources. Extending currently available reference-independent binning tools, we developed the BusyBee Web server for the automated deconvolution of metagenomic data into population-level genomic bins using assembled contigs (Illumina) or long reads (Pacific Biosciences, Oxford Nanopore Technologies). A reversible compression step as well as bootstrapped supervised binning enable quick turnaround times. The binning results are represented in interactive 2D scatterplots. Moreover, bin quality estimates, taxonomic annotations and annotations of antibiotic resistance genes are computed and visualized. Ground truth-based benchmarks of BusyBee Web demonstrate comparably high performance to state-of-the-art binning solutions for assembled contigs and markedly improved performance for long reads (median F1 scores: 70.02–95.21%). Furthermore, the applicability to real-world metagenomic datasets is shown. In conclusion, our reference-independent approach automatically bins assembled contigs or long reads, exhibits high sensitivity and precision, enables intuitive inspection of the results, and only requires FASTA-formatted input. The web-based application is freely accessible at: https://ccb-microbe.cs.uni-saarland.de/busybee. PMID:28472498
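
    For orientation, a generic reference-independent, composition-based binning sketch in Python (tetranucleotide profiles, a 2D embedding, then clustering); BusyBee's actual pipeline (dimensionality reduction plus bootstrapped supervised binning and annotation) is considerably more elaborate.

```python
from itertools import product
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def tetranucleotide_profile(seq):
    """Normalised 4-mer frequency vector for one contig (ambiguous k-mers skipped)."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:
            counts[kmer] += 1
    vec = np.array([counts[k] for k in KMERS], float)
    return vec / max(vec.sum(), 1.0)

def bin_contigs(contigs, n_bins):
    """Embed contigs in 2D by composition and cluster them into genomic bins."""
    profiles = np.array([tetranucleotide_profile(c) for c in contigs])
    coords = PCA(n_components=2).fit_transform(profiles)
    labels = KMeans(n_clusters=n_bins, n_init=10).fit_predict(coords)
    return coords, labels
```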

  17. Four dimensional studies in earth space

    NASA Technical Reports Server (NTRS)

    Mather, R. S.

    1972-01-01

    A system of reference which is directly related to observations, is proposed for four-dimensional studies in earth space. Global control network and polar wandering are defined. The determination of variations in the earth's gravitational field with time also forms part of such a system. Techniques are outlined for the unique definition of the motion of the geocenter, and the changes in the location of the axis of rotation of an instantaneous earth model, in relation to values at some epoch of reference. The instantaneous system referred to is directly related to a fundamental equation in geodynamics. The reference system defined would provide an unambiguous frame for long period studies in earth space, provided the scale of the space were specified.

  18. Methods for selecting fixed-effect models for heterogeneous codon evolution, with comments on their application to gene and genome data.

    PubMed

    Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P

    2007-02-08

    Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selection of models by using backward elimination rather than AIC or AICc, (ii) use a stringent cut-off, e.g., p = 0.0001, and (iii) conduct sensitivity analysis of results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.
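
    The selection criteria named above are straightforward to compute once each candidate model's log-likelihood and parameter count are known; a hedged Python sketch follows (the log-likelihood values are made up).

```python
from scipy.stats import chi2

def aic(lnL, k):
    """Akaike information criterion for a model with k parameters."""
    return 2 * k - 2 * lnL

def aicc(lnL, k, n):
    """Small-sample corrected AIC; n is the number of observations (e.g. sites)."""
    return aic(lnL, k) + 2 * k * (k + 1) / (n - k - 1)

def lrt_p(lnL_full, lnL_reduced, df):
    """P-value of a likelihood-ratio test between nested fixed-effect models."""
    return chi2.sf(2 * (lnL_full - lnL_reduced), df)

# Backward elimination: merge two partitions only if the reduced (merged) model
# is not significantly worse at a stringent cutoff such as p = 0.0001.
keep_partitions_separate = lrt_p(lnL_full=-10234.5, lnL_reduced=-10260.1, df=3) < 1e-4
```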

  19. Performance of third-trimester combined screening model for prediction of adverse perinatal outcome.

    PubMed

    Miranda, J; Triunfo, S; Rodriguez-Lopez, M; Sairanen, M; Kouru, H; Parra-Saavedra, M; Crovetto, F; Figueras, F; Crispi, F; Gratacós, E

    2017-09-01

    To explore the potential value of third-trimester combined screening for the prediction of adverse perinatal outcome (APO) in the general population and among small-for-gestational-age (SGA) fetuses. This was a nested case-control study within a prospective cohort of 1590 singleton gestations undergoing third-trimester evaluation (32 + 0 to 36 + 6 weeks' gestation). Maternal baseline characteristics, mean arterial blood pressure, fetoplacental ultrasound and circulating biochemical markers (placental growth factor (PlGF), lipocalin-2, unconjugated estriol and inhibin A) were assessed in all women who subsequently had an APO (n = 148) and in a control group without perinatal complications (n = 902). APO was defined as the occurrence of stillbirth, umbilical artery cord blood pH < 7.15, 5-min Apgar score < 7 or emergency operative delivery for fetal distress. Logistic regression models were developed for the prediction of APO in the general population and among SGA cases (defined as customized birth weight < 10 th centile). The prevalence of APO was 9.3% in the general population and 27.4% among SGA cases. In the general population, a combined screening model including a-priori risk (maternal characteristics), estimated fetal weight (EFW) centile, umbilical artery pulsatility index (UA-PI), estriol and PlGF achieved a detection rate for APO of 26% (area under receiver-operating characteristics curve (AUC), 0.59 (95% CI, 0.54-0.65)), at a 10% false-positive rate (FPR). Among SGA cases, a model including a-priori risk, EFW centile, UA-PI, cerebroplacental ratio, estriol and PlGF predicted 62% of APO (AUC, 0.86 (95% CI, 0.80-0.92)) at a FPR of 10%. The use of fetal ultrasound and maternal biochemical markers at 32-36 weeks provides a poor prediction of APO in the general population. Although it remains limited, the performance of the screening model is improved when applied to fetuses with suboptimal fetal growth. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
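
    Reporting a detection rate at a fixed 10% false-positive rate from a combined logistic model can be sketched in Python as follows; the feature matrix and outcomes below are random placeholders, not the study cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

def detection_rate_at_fpr(y_true, risk_score, target_fpr=0.10):
    """Sensitivity (detection rate) of the screening model at a fixed false-positive rate."""
    fpr, tpr, _ = roc_curve(y_true, risk_score)
    return np.interp(target_fpr, fpr, tpr)

# Placeholder data standing in for a-priori risk, EFW centile, UA-PI, estriol and PlGF
# (columns) and adverse perinatal outcome (labels).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 5)), rng.integers(0, 2, size=500)
score = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
print(f"AUC={roc_auc_score(y, score):.2f}, DR@10%FPR={detection_rate_at_fpr(y, score):.2f}")
```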

  20. Synergy of stereo cloud top height and ORAC optimal estimation cloud retrieval: evaluation and application to AATSR

    NASA Astrophysics Data System (ADS)

    Fisher, Daniel; Poulsen, Caroline A.; Thomas, Gareth E.; Muller, Jan-Peter

    2016-03-01

    In this paper we evaluate the impact on the cloud parameter retrievals of the ORAC (Optimal Retrieval of Aerosol and Cloud) algorithm following the inclusion of stereo-derived cloud top heights as a priori information. This is performed in a mathematically rigorous way using the ORAC optimal estimation retrieval framework, which includes the facility to use such independent a priori information. Key to the use of a priori information is a characterisation of its associated uncertainty. This paper demonstrates the improvements that are possible using this approach and also considers their impact on the microphysical cloud parameters retrieved. The Along-Track Scanning Radiometer (AATSR) instrument has two views and three thermal channels, so it is well placed to demonstrate the synergy of the two techniques. The stereo retrieval is able to improve the accuracy of the retrieved cloud top height when compared to collocated Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), particularly in the presence of boundary layer inversions and high clouds. The impact of the stereo a priori information on the microphysical cloud properties of cloud optical thickness (COT) and effective radius (RE) was evaluated and generally found to be very small for single-layer cloud conditions over open water (mean RE differences of 2.2 (±5.9) microns and mean COT differences of 0.5 (±1.8) for single-layer ice clouds over open water at elevations above 9 km, which are most strongly affected by the inclusion of the a priori).
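
    For reference, the cost function minimised by an optimal-estimation retrieval of this kind is shown below in its generic (Rodgers-style) form, with the stereo cloud-top height entering through the a priori state vector and its covariance; this is not a transcription of ORAC's internals.

```latex
J(\mathbf{x}) \;=\;
\bigl[\mathbf{y} - F(\mathbf{x})\bigr]^{\mathsf{T}} \mathbf{S}_{y}^{-1} \bigl[\mathbf{y} - F(\mathbf{x})\bigr]
\;+\;
\bigl(\mathbf{x} - \mathbf{x}_{a}\bigr)^{\mathsf{T}} \mathbf{S}_{a}^{-1} \bigl(\mathbf{x} - \mathbf{x}_{a}\bigr)
```

    Here x is the retrieved state (e.g. cloud-top pressure/height, COT and RE), y the measured radiances, F the forward model, x_a the a priori state (including the stereo height) and S_a, S_y the a priori and measurement covariances.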

  1. Realism, functions, and the a priori: Ernst Cassirer's philosophy of science.

    PubMed

    Heis, Jeremy

    2014-12-01

    This paper presents the main ideas of Cassirer's general philosophy of science, focusing on the two aspects of his thought that--in addition to being the most central ideas in his philosophy of science--have received the most attention from contemporary philosophers of science: his theory of the a priori aspects of physical theory, and his relation to scientific realism.

  2. Body Composition Assessment in Axial CT Images Using FEM-Based Automatic Segmentation of Skeletal Muscle.

    PubMed

    Popuri, Karteek; Cobzas, Dana; Esfandiari, Nina; Baracos, Vickie; Jägersand, Martin

    2016-02-01

    The proportions of muscle and fat tissues in the human body, referred to as body composition is a vital measurement for cancer patients. Body composition has been recently linked to patient survival and the onset/recurrence of several types of cancers in numerous cancer research studies. This paper introduces a fully automatic framework for the segmentation of muscle and fat tissues from CT images to estimate body composition. We developed a novel finite element method (FEM) deformable model that incorporates a priori shape information via a statistical deformation model (SDM) within the template-based segmentation framework. The proposed method was validated on 1000 abdominal and 530 thoracic CT images and we obtained very good segmentation results with Jaccard scores in excess of 90% for both the muscle and fat regions.

  3. When students can choose easy, medium, or hard homework problems

    NASA Astrophysics Data System (ADS)

    Teodorescu, Raluca E.; Seaton, Daniel T.; Cardamone, Caroline N.; Rayyan, Saif; Abbott, Jonathan E.; Barrantes, Analia; Pawl, Andrew; Pritchard, David E.

    2012-02-01

    We investigate student-chosen, multi-level homework in our Integrated Learning Environment for Mechanics [1] built using the LON-CAPA [2] open-source learning system. Multi-level refers to problems categorized as easy, medium, and hard. Problem levels were determined a priori based on the knowledge needed to solve them [3]. We analyze these problems using three measures: time-per-problem, LON-CAPA difficulty, and item difficulty measured by item response theory. Our analysis of student behavior in this environment suggests that time-per-problem is strongly dependent on problem category, unlike either score-based measures. We also found trends in student choice of problems, overall effort, and efficiency across the student population. Allowing students choice in problem solving seems to improve their motivation; 70% of students worked additional problems for which no credit was given.

  4. The diabolo classifier

    PubMed

    Schwenk

    1998-11-15

    We present a new classification architecture based on autoassociative neural networks that are used to learn discriminant models of each class. The proposed architecture has several interesting properties with respect to other model-based classifiers like nearest-neighbors or radial basis functions: it has a low computational complexity and uses a compact distributed representation of the models. The classifier is also well suited for the incorporation of a priori knowledge by means of a problem-specific distance measure. In particular, we will show that tangent distance (Simard, Le Cun, & Denker, 1993) can be used to achieve transformation invariance during learning and recognition. We demonstrate the application of this classifier to optical character recognition, where it has achieved state-of-the-art results on several reference databases. Relations to other models, in particular those based on principal component analysis, are also discussed.
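
    A minimal Python sketch of the classification rule described (one autoassociative network per class; assign a sample to the class whose network reconstructs it with the smallest error), using a plain multilayer-perceptron autoencoder and omitting the tangent-distance refinement:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class MinReconstructionClassifier:
    """One autoencoder per class; predict the class with the lowest reconstruction error."""

    def fit(self, X, y):
        self.models_ = {}
        for label in np.unique(y):
            ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
            self.models_[label] = ae.fit(X[y == label], X[y == label])
        return self

    def predict(self, X):
        errors = {label: ((m.predict(X) - X) ** 2).mean(axis=1)
                  for label, m in self.models_.items()}
        labels = list(errors)
        stacked = np.vstack([errors[l] for l in labels])
        return np.array(labels)[stacked.argmin(axis=0)]

# Usage: clf = MinReconstructionClassifier().fit(X_train, y_train); clf.predict(X_test)
```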

  5. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
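
    The factorization at the heart of the Bierman-Thornton approach writes the covariance as P = U D Uᵀ with U unit upper triangular and D diagonal; a hedged Python sketch of that factorization alone (not the full measurement- and time-update recursions) is given below.

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite covariance as P = U @ diag(d) @ U.T,
    with U unit upper triangular. Propagating U and d instead of P is what gives
    the filter its improved numerical behaviour."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    U, d = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j] - np.sum(d[j + 1:] * U[j, j + 1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j + 1:] * U[i, j + 1:] * U[j, j + 1:])) / d[j]
    return U, d

P = np.array([[4.0, 1.0], [1.0, 2.0]])
U, d = ud_factorize(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)
```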

  6. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely, motor imagery and emotion recognition.

  7. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information

    PubMed Central

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely, motor imagery and emotion recognition. PMID:26380294
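
    A minimal Python sketch of the identification-and-removal step (ICA decomposition, automatic flagging of components that resemble a previously acquired artifact template, reconstruction without them); the wavelet stage of wavelet-ICA is omitted for brevity and the variable names are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifacts(eeg, artifact_template, corr_threshold=0.7):
    """eeg: channels x samples array; artifact_template: a priori artifact time
    course (length = samples), e.g. recorded in a calibration session."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)               # samples x components
    flags = np.array([abs(np.corrcoef(sources[:, i], artifact_template)[0, 1]) > corr_threshold
                      for i in range(sources.shape[1])])
    sources[:, flags] = 0.0                          # drop components matching the template
    return ica.inverse_transform(sources).T          # cleaned channels x samples
```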

  8. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square-integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L^2-norm for the scalar unknown u and a priori error estimates in the (L^2)^2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H^1-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153

  9. Multispectral guided fluorescence diffuse optical tomography using upconverting nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svenmarker, Pontus, E-mail: pontus.svenmarker@physics.umu.se; Department of Physics, Umeå University, SE-901 87 Umeå; Centre for Microbial Research

    2014-02-17

    We report on improved image detectability for fluorescence diffuse optical tomography using upconverting nanoparticles doped with rare-earth elements. Core-shell NaYF₄:Yb³⁺/Er³⁺@NaYF₄ upconverting nanoparticles were synthesized through a stoichiometric method. The Yb³⁺/Er³⁺ sensitizer-activator pair yielded two anti-Stokes shifted fluorescence emission bands at 540 nm and 660 nm, here used to a priori estimate the fluorescence source depth with sub-millimeter precision. A spatially varying regularization incorporated the a priori fluorescence source depth estimation into the tomography reconstruction scheme. Tissue phantom experiments showed both an improved resolution and contrast in the reconstructed images as compared to not using any a priori information.

  10. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated based on the region of interest (ROI), which is set 20 mm inside the outer contour, defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI for the non-specific binding concentration (the reference region) and could not calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by two methods: the threshold defining the reference region was either fixed at specific values (the fixing method) or visually optimized by an examiner at every examination (the visual optimization method). First, we assessed the reference region of each method visually, and afterward, we quantitatively compared the SBR calculated based on each method. In the visual assessment, the scores of the fixing method at 30% and the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on consensus of three radiological technologists, was used as a baseline (the standard method). The values of SBR showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
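
    A minimal Python sketch of the quantities involved (the SBR itself and a threshold-defined reference mask); the 20 mm inward shrinkage of the outer contour used in the published method is noted but not implemented here.

```python
import numpy as np

def specific_binding_ratio(striatal_mean, reference_mean):
    """SBR: specific striatal binding relative to non-specific whole-brain binding."""
    return (striatal_mean - reference_mean) / reference_mean

def reference_region(brain_counts, striatal_mask, threshold_frac=0.30):
    """Voxels above a fraction of the image maximum, excluding the striatum.

    The published method additionally shrinks the outer contour 20 mm inward
    before averaging; that erosion step is omitted in this sketch.
    """
    contour = brain_counts >= threshold_frac * brain_counts.max()
    return contour & ~striatal_mask
```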

  11. Alleviating the reference standard dilemma using a systematic exact mass suspect screening approach with liquid chromatography-high resolution mass spectrometry.

    PubMed

    Moschet, Christoph; Piazzoli, Alessandro; Singer, Heinz; Hollender, Juliane

    2013-11-05

    In this study, the efficiency of a suspect screening strategy using liquid chromatography-high resolution mass spectrometry (LC-HRMS) without the prior purchase of reference standards was systematically optimized and evaluated for assessing the exposure of rarely investigated pesticides and their transformation products (TPs) in 76 surface water samples. Water-soluble and readily ionizable (electrospray ionization) substances, 185 in total, were selected from a list of all insecticides and fungicides registered in Switzerland and their major TPs. Initially, a solid phase extraction-LC-HRMS method was established using 45 known, persistent, and high sales volume pesticides. Seventy percent of these target substances had limit of quantitation (LOQ) < 5 ng L⁻¹. This compound set was then used to develop and optimize an HRMS suspect screening method using only the exact mass as a priori information. Thresholds for blank subtraction, peak area, peak shape, signal-to-noise, and isotopic pattern were applied to automatically filter the initially picked peaks. The success rate was 70%; false negatives mainly resulted from low-intensity peaks. The optimized approach was applied to the remaining 140 substances. Nineteen additional substances were detected in environmental samples, two TPs for the first time in the environment. Sixteen substances were confirmed with reference standards purchased subsequently, while three TP standards could be obtained from industry or other laboratories. Overall, this screening approach was fast and very successful and can easily be expanded to other micropollutant classes for which reference standards are not readily accessible such as TPs of household chemicals.
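
    The exact-mass matching at the heart of such suspect screening can be sketched in a few lines of Python: predict the protonated ion m/z from the monoisotopic mass and accept peaks within a ppm tolerance. The masses below are illustrative, not entries from the study's suspect list.

```python
PROTON = 1.007276  # Da

def mz_protonated(monoisotopic_mass):
    """m/z of the [M+H]+ ion for a singly charged suspect."""
    return monoisotopic_mass + PROTON

def match_suspects(peak_mzs, suspects, tol_ppm=5.0):
    """Return suspects whose predicted [M+H]+ falls within tol_ppm of a detected peak."""
    hits = []
    for name, mass in suspects.items():
        target = mz_protonated(mass)
        for mz in peak_mzs:
            if abs(mz - target) / target * 1e6 <= tol_ppm:
                hits.append((name, mz))
    return hits

# Illustrative masses only (not the study's suspect list).
print(match_suspects([202.0868], {"example_pesticide": 201.0790}, tol_ppm=5.0))
```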

  12. Validating a pragmatic definition of shock in adult patients presenting to the ED.

    PubMed

    Li, Yan-ling; Chan, Cangel Pui-yee; Sin, King-keung; Chan, Stewart S W; Lin, Pei-yi; Chen, Xiao-hui; Smith, Brendan E; Joynt, Gavin M; Graham, Colin A; Rainer, Timothy H

    2014-11-01

    The importance of the early recognition of shock in patients presenting to emergency departments is well recognized, but at present, there is no agreed practical definition for undifferentiated shock. The main aim of this study was to validate an a priori clinical definition of shock against 28-day mortality. This prospective, observational, cross-sectional, single-center study was conducted in Hong Kong, China. Data were collected between July 1, 2012, and January 31, 2013. An a priori definition of shock was designed, whereby patients admitted to the resuscitation room or high dependency area of the emergency department were divided into 1 of 3 groups-no shock, possible shock, and shock. The primary outcome was 28-day mortality. Secondary outcomes were in-hospital mortality or admission to the intensive or coronary care unit. A total of 111 patients (mean age, 67.2 ± 17.1 years; male = 69 [62%]) were recruited, of which 22 were classified as no shock, 54 as possible shock, and 35 as shock. Systolic blood pressure, mean arterial pressure, lactate, and base deficit correlated well with shock classifications (P < .05). Patients who had 3 or more positively defined shock variables had a 100% poor composite outcome rate (5 of 5). Patients with 2 shock variables had a 66.7% (4 of 6) poor composite outcome rate. A simple, practical definition of undifferentiated shock has been proposed and validated in a group of patients presenting to an emergency department in Hong Kong. This definition needs further validation in a larger population and other settings. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to overcome this difficulty consists of regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a-priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
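
    Schematically, the contrast between the two penalties can be written as follows (a generic form in the spirit of the computed-imaging literature cited; the specific edge-preserving function used by the authors may differ):

```latex
J_{\mathrm{Tikhonov}}(\Omega) = \bigl\|W\,(d - R\,\Omega)\bigr\|^{2}
  + \lambda \int \bigl(\partial_r \Omega\bigr)^{2}\, dr ,
\qquad
J_{\mathrm{edge}}(\Omega) = \bigl\|W\,(d - R\,\Omega)\bigr\|^{2}
  + \lambda \int \varphi\!\left(\frac{|\partial_r \Omega|}{\delta}\right) dr
```

    Here Ω(r) is the rotation rate, R the forward operator relating it to the splittings d, W a weighting by the observational errors, and φ a function that grows sub-quadratically (for example φ(t) = √(1+t²) − 1), so that steep gradients such as the tachocline are penalised less than under the quadratic (Tikhonov) norm.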

  14. Genotype-based association models of complex diseases to detect gene-gene and gene-environment interactions.

    PubMed

    Lobach, Iryna; Fan, Ruzong; Manga, Prashiela

    A central problem in genetic epidemiology is to identify and rank genetic markers involved in a disease. Complex diseases, such as cancer, hypertension, and diabetes, are thought to be caused by the interaction of a panel of genetic factors, identifiable by markers, with environmental factors. Moreover, the effect of each genetic marker may be small. Hence, the association signal may be missed unless a large sample is considered or a priori biomedical data are used. Recent advances have generated a vast variety of a priori information, including linkage maps and information about gene regulatory dependence assembled into curated pathway databases. We propose a genotype-based approach that takes into account linkage disequilibrium (LD) information between genetic markers that are in moderate LD while modeling gene-gene and gene-environment interactions. A major advantage of our method is that the observed genetic information enters the model directly, thus eliminating the need to estimate haplotype phase. Our approach results in an algorithm that is computationally inexpensive and does not suffer from bias induced by haplotype-phase ambiguity. We investigated our model in a series of simulation experiments and demonstrated that the proposed approach results in estimates that are nearly unbiased and have small variability. We applied our method to the analysis of data from a melanoma case-control study and investigated the interaction between a set of pigmentation genes and environmental factors defined by age and gender. A further application of our method is demonstrated using a study of alcohol dependence.

  15. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.

    PubMed

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-05-31

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. For most monitoring applications, the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. Firstly, data from an IGS tracking station were processed using both the traditional and the new PPP algorithms; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.
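
    A minimal sketch of how a stationarity prior could be exploited over a sliding window of epoch-wise coordinates is given below; the window length, weighting, and data are hypothetical, and the sketch is not the paper's actual PPP processing chain.

```python
import numpy as np

def sliding_window_constraint(coords, window=30):
    """Smooth epoch-wise PPP coordinates under the a priori assumption that the
    station is static within each sliding window.

    coords : (n_epochs, 3) array of E/N/U positions from an epoch-wise solution.
    Returns the constrained (window-averaged) coordinate series.
    """
    coords = np.asarray(coords, dtype=float)
    out = np.empty_like(coords)
    for i in range(len(coords)):
        lo = max(0, i - window + 1)
        out[i] = coords[lo:i + 1].mean(axis=0)   # window mean as constrained estimate
    return out

# Synthetic example: a static station with noise, then a sudden co-seismic offset.
rng = np.random.default_rng(1)
series = rng.normal(scale=0.05, size=(200, 3))
series[120:, 2] += 0.3                     # simulated vertical displacement (m)
smoothed = sliding_window_constraint(series)
print(smoothed[119, 2], smoothed[150, 2])  # the offset emerges after epoch 120
```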

  16. Measuring Greenland Ice Mass Variation With Gravity Recovery and Climate Experiment Gravity and GPS

    NASA Technical Reports Server (NTRS)

    Wu, Xiao-Ping

    1999-01-01

    The response of the Greenland ice sheet to climate change could significantly alter sea level. The ice sheet was much thicker at the last glacial maximum. To gain insight into the global change process and the future trend, it is important to evaluate the ice mass variation as a function of time and space. The Gravity Recovery and Climate Experiment (GRACE) mission, to fly in 2001 for 5 years, will measure gravity changes associated with the current ice variation and the solid earth's response to past variations. Our objective is to assess the separability of the different change sources, as well as the accuracy and resolution of the mass variation determination from the new gravity data and possible Global Positioning System (GPS) bedrock uplift measurements. We use a reference parameter state that follows a dynamic ice model for current mass variation and a variant of the Tushingham and Peltier ICE-3G deglaciation model for historical deglaciation. The current linear trend is also assumed to have started 5 kyr ago. The Earth model is fixed as the preliminary reference Earth model (PREM) with four viscoelastic layers. A discrete Bayesian inverse algorithm is developed employing an isotropic Gaussian a priori covariance function over the ice sheet and time. We use data noise predicted by the University of Texas and JPL for major GRACE error sources. A 2 mm/yr uplift uncertainty is assumed for a GPS occupation time of 5 years. We then carry out covariance analysis and inverse simulation using GRACE geoid coefficients up to degree 180 in conjunction with a number of GPS uplift rates. Present-day ice mass variation and historical deglaciation are solved simultaneously over 146 grids of roughly 110 km x 110 km and with 6 time increments of 3 kyr each, along with a common starting epoch of the current trend. For present-day ice thickness change, the covariance analysis using GRACE geoid data alone results in a root mean square (RMS) posterior root variance of 2.6 cm/yr, with fairly large a priori uncertainties in the parameters and a Gaussian correlation length of 350 km. The simulated inversion can successfully recover most features of the reference present-day change. The RMS difference between them over the grids is 2.8 cm/yr. The RMS difference becomes 1.1 cm/yr when both are averaged with a half Gaussian wavelength of 150 km. With a fixed Earth model, GRACE alone can separate the geoid signals due to past and current load fairly well. Shown are the reference geoid signatures of direct and elastic effects of the current trend, the viscoelastic effect of the same trend starting from 5 kyr ago, the Post Glacial Rebound (PGR), and the predicted GRACE geoid error. The difference between the reference and inverse-modeled total viscoelastic signatures is also shown. Although past and current ice mass variations are allowed the same spatial scale, their geoid signals have different spatial patterns. GPS data can contribute to the ice mass determination as well. Additional information is contained in the original.
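
    The covariance analysis described above follows the standard Gaussian Bayesian linear-inversion formulas. The numpy sketch below shows those generic formulas on a synthetic system; the design matrix, covariances, and noise are placeholders, not the GRACE/GPS observation model.

```python
import numpy as np

# Generic Gaussian Bayesian linear inversion: data y = A m + e, with data
# covariance Cd and a priori model covariance Cm. All matrices are synthetic
# placeholders, not the GRACE/GPS design.
rng = np.random.default_rng(2)
n_data, n_model = 60, 20
A = rng.normal(size=(n_data, n_model))
Cd = 0.1 * np.eye(n_data)                       # data noise covariance
Cm = np.eye(n_model)                            # isotropic a priori covariance
m_true = rng.normal(size=n_model)
y = A @ m_true + rng.multivariate_normal(np.zeros(n_data), Cd)

Cd_inv, Cm_inv = np.linalg.inv(Cd), np.linalg.inv(Cm)
C_post = np.linalg.inv(A.T @ Cd_inv @ A + Cm_inv)   # posterior covariance
m_post = C_post @ (A.T @ Cd_inv @ y)                # posterior mean (zero prior mean)

# RMS posterior root variance, analogous in spirit to the figures quoted above.
rms_posterior_sigma = np.sqrt(np.mean(np.diag(C_post)))
print(rms_posterior_sigma)
```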

  17. Device for monitoring cell voltage

    DOEpatents

    Doepke, Matthias [Garbsen, DE; Eisermann, Henning [Edermissen, DE

    2012-08-21

    A device for monitoring a rechargeable battery having a number of electrically connected cells includes at least one current interruption switch for interrupting current flowing through at least one associated cell and a plurality of monitoring units for detecting cell voltage. Each monitoring unit is associated with a single cell and includes a reference voltage unit for producing a defined reference threshold voltage and a voltage comparison unit for comparing the reference threshold voltage with a partial cell voltage of the associated cell. The reference voltage unit is electrically supplied from the cell voltage of the associated cell. The voltage comparison unit is coupled to the at least one current interruption switch for interrupting at least the current flowing through the associated cell when there is a defined minimum difference between the reference threshold voltage and the partial cell voltage.

  18. [Pediatric reference intervals: retrospective study on thyroid hormone levels].

    PubMed

    Ladang, A; Vranken, L; Luyckx, F; Lebrethon, M-C; Cavalier, E

    2017-01-01

    Defining reference ranges is an essential diagnostic tool. The influences of age and sex on thyroid hormone levels have already been discussed. In this study, we define new pediatric reference ranges for TSH, FT3, and FT4 on the Cobas C6000 analyzer. To do so, we took into account outpatients aged 0 to 18 years. During the first year of life, thyroid hormone levels change dramatically before stabilizing at around 3 years of age. We also compared our results to those obtained in a large-scale Canadian prospective study (the CALIPER initiative).
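
    Pediatric reference intervals of this kind are usually derived as age-partitioned central percentiles. The pandas sketch below illustrates that generic procedure on simulated TSH values; the age bins and distribution are assumptions, not the study's data or its exact statistical method.

```python
import numpy as np
import pandas as pd

# Age-partitioned reference interval (2.5th-97.5th percentiles) on simulated
# TSH values. Age bins, sample size, and distribution are illustrative only.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "age_years": rng.uniform(0, 18, 5000),
    "tsh_mIU_L": rng.lognormal(mean=0.6, sigma=0.4, size=5000),
})
bins = [0, 1, 3, 6, 12, 18]                      # hypothetical age partition
df["age_group"] = pd.cut(df["age_years"], bins, include_lowest=True)

reference = (df.groupby("age_group", observed=True)["tsh_mIU_L"]
               .quantile([0.025, 0.975])
               .unstack())
reference.columns = ["lower_limit", "upper_limit"]
print(reference)
```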

  19. A-Priori Rupture Models for Northern California Type-A Faults

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Field, Edward H.

    2008-01-01

    This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least-squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley) be modeled as Type B faults to be consistent with similarly poorly known faults statewide. Consequently, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters, and earthquake probabilities in the WGCEP-2002 report.
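
    Finding the model "as close as possible to the a-priori model, in the least-squares sense" while satisfying slip-rate data is a linearly constrained least-squares problem. The numpy sketch below shows the generic closed-form projection onto equality constraints for a toy example; the rupture set, slip values, and slip rates are hypothetical, and a real implementation would also enforce non-negative rates and the WGCEP's additional data constraints.

```python
import numpy as np

def closest_constrained_model(f0, G, v):
    """Find rupture rates f minimizing ||f - f0||^2 subject to G f = v.

    f0 : a priori rupture-rate vector.
    G  : matrix mapping rupture rates to segment slip rates.
    v  : target (observed) segment slip rates.
    The closed-form solution projects f0 onto the constraint set.
    """
    lagrange = np.linalg.solve(G @ G.T, v - G @ f0)
    return f0 + G.T @ lagrange

# Synthetic example: 4 possible ruptures over 2 fault segments (values hypothetical).
f0 = np.array([0.010, 0.004, 0.002, 0.001])         # a priori rupture rates (1/yr)
slip_per_rupture = np.array([[2.0, 0.0, 1.5, 3.0],   # avg slip (m) of each rupture, segment 1
                             [0.0, 1.0, 1.5, 3.0]])  # ... and segment 2
v = np.array([0.030, 0.012])                         # segment slip rates (m/yr)
f = closest_constrained_model(f0, slip_per_rupture, v)
print(f, slip_per_rupture @ f)                       # slip-rate constraints met exactly
```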

  20. Determination of ReQuest-based symptom thresholds to define symptom relief in GERD clinical studies.

    PubMed

    Stanghellini, Vincenzo; Armstrong, David; Mönnikes, Hubert; Berghöfer, Peter; Gatz, Gudrun; Bardhan, Karna Dev

    2007-01-01

    The growing importance of symptom assessment is evident from the numerous clinical studies on gastroesophageal reflux disease (GERD) assessing treatment-induced symptom relief. However, to date, the a priori selection of criteria defining symptom relief has been arbitrary. The present study was designed to prospectively identify GERD symptom thresholds for the broad spectrum of GERD-related symptoms assessed by the validated reflux questionnaire (ReQuest) and its subscales, ReQuest-GI (gastrointestinal symptoms) and ReQuest-WSO (general well-being, sleep disturbances, other complaints), in individuals without evidence of GERD. In this 4-day evaluation in Germany, 385 individuals without evidence of GERD were included. On the first day, participants completed the ReQuest, the Gastrointestinal Symptom Rating Scale, and the Psychological General Well-Being scale. On the other days, participants filled in the ReQuest only. GERD symptom thresholds were calculated for ReQuest and its subscales, based on the respective 90th percentiles. GERD symptom thresholds were 3.37 for ReQuest, 0.95 for ReQuest-GI, and 2.46 for ReQuest-WSO. Even individuals without evidence of GERD may experience some mild symptoms that are commonly ascribed to GERD. GERD symptom thresholds derived in this study can be used to define the global symptom relief in patients with GERD. Copyright 2007 S. Karger AG, Basel.

  1. Defining care products to finance health care in the Netherlands.

    PubMed

    Westerdijk, Machiel; Zuurbier, Joost; Ludwig, Martijn; Prins, Sarah

    2012-04-01

    A case-mix project started in the Netherlands with the primary goal of defining a complete set of health care products for hospitals. The definition of the product structure was completed 4 years later. The results are currently being used for billing purposes. This paper focuses on the methodology and techniques that were developed and applied in order to define the case-mix product structure. The central research question was how to develop a manageable product structure, i.e., a limited set of hospital products with acceptable cost homogeneity. For this purpose, a data warehouse with approximately 1.5 million patient records from 27 hospitals was built up over a period of 3 years. The data associated with each patient consist of a large number of a priori independent parameters describing the resource utilization in different stages of the treatment process, e.g., activities in the operating theatre, the lab, and the radiology department. Because of the complexity of the database, it was necessary to apply advanced data analysis techniques. The full analysis process, which starts from the database and ends with a product definition, consists of four basic analysis steps. Each of these steps has revealed interesting insights. This paper describes each step in some detail and presents the major results of each step. The result consists of 687 product groups for 24 medical specialties used for billing purposes.

  2. A Methodological Framework for Model Selection in Interrupted Time Series Studies.

    PubMed

    Lopez Bernal, J; Soumerai, S; Gasparrini, A

    2018-06-06

    Interrupted time series is a powerful and increasingly popular design for evaluating public health and health service interventions. The design involves analysing trends in the outcome of interest and estimating the change in trend following an intervention relative to the counterfactual (the expected ongoing trend if the intervention had not occurred). There are two key components to modelling this effect: first, defining the counterfactual; second, defining the type of effect that the intervention is expected to have on the outcome, known as the impact model. The counterfactual is defined by extrapolating the underlying trends observed before the intervention to the post-intervention period. In doing this, authors must consider the pre-intervention period that will be included, any time varying confounders, whether trends may vary within different subgroups of the population and whether trends are linear or non-linear. Defining the impact model involves specifying the parameters that model the intervention, including for instance whether to allow for an abrupt level change or a gradual slope change, whether to allow for a lag before any effect on the outcome, whether to allow a transition period during which the intervention is being implemented and whether a ceiling or floor effect might be expected. Inappropriate model specification can bias the results of an interrupted time series analysis and using a model that is not closely tailored to the intervention or testing multiple models increases the risk of false positives being detected. It is important that authors use substantive knowledge to customise their interrupted time series model a priori to the intervention and outcome under study. Where there is uncertainty in model specification, authors should consider using separate data sources to define the intervention, running limited sensitivity analyses or undertaking initial exploratory studies. Copyright © 2018. Published by Elsevier Inc.
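
    A common concrete implementation of the counterfactual plus impact model described above is a segmented regression with a level-change indicator and a slope-change term. The statsmodels sketch below fits that simple abrupt-change model to simulated monthly data; it is one possible impact model among the many discussed, not a prescription, and the data and intervention date are invented.

```python
import numpy as np
import statsmodels.api as sm

# Segmented-regression sketch of an interrupted time series with an abrupt level
# change and a slope change at the intervention. All data are simulated.
rng = np.random.default_rng(4)
n, t0 = 60, 36                                  # 60 months, intervention at month 36
t = np.arange(n)
post = (t >= t0).astype(float)                  # indicator for the post-intervention period
time_since = np.where(t >= t0, t - t0, 0.0)     # time since intervention (slope change)

y = 50 + 0.2 * t - 5 * post - 0.4 * time_since + rng.normal(scale=2, size=n)

X = sm.add_constant(np.column_stack([t, post, time_since]))
fit = sm.OLS(y, X).fit()
print(fit.params)        # [baseline level, pre-existing trend, level change, slope change]

# Counterfactual: extrapolate the pre-intervention trend over the post period.
counterfactual = fit.params[0] + fit.params[1] * t[t >= t0]
print(counterfactual[:3])
```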

  3. Coastal habitats as surrogates for taxonomic, functional and trophic structures of benthic faunal communities.

    PubMed

    Törnroos, Anna; Nordström, Marie C; Bonsdorff, Erik

    2013-01-01

    Due to human impact, there is extensive degradation and loss of marine habitats, which calls for measures that incorporate taxonomic as well as functional and trophic aspects of biodiversity. Since such data are less easily quantifiable in nature, the use of habitats as surrogates or proxies for biodiversity is on the rise in marine conservation and management. However, there is a critical gap in knowledge of whether pre-defined habitat units adequately represent the functional and trophic structure of communities. We also lack comparisons of different measures of community structure in terms of both between- (β) and within-habitat (α) variability when accounting for species densities. Thus, we evaluated a priori defined coastal habitats as surrogates for traditional taxonomic, functional and trophic zoobenthic community structure. We focused on four habitats (bare sand, canopy-forming algae, seagrass above- and belowground), all easily delineated in nature and defined through classification systems. We analyzed uni- and multivariate data on species and trait diversity as well as stable isotope ratios of benthic macrofauna. A good fit between habitat types and taxonomic and functional structure was found, although habitats were more similar functionally. This was attributed to within-habitat heterogeneity, so that although habitat divisions matched the taxonomic structure, only bare sand was functionally distinct. The pre-defined habitats did not capture the variability of trophic structure, which also proved to differentiate at a smaller spatial scale. The quantification of trophic structure using species density only identified an epifaunal and an infaunal unit. To summarize the results, we present a conceptual model illustrating the match between pre-defined habitat types and the taxonomic, functional and trophic community structure. Our results show the importance of including functional and trophic aspects more comprehensively in marine management and spatial planning.

  4. Defining Optimal Head-Tilt Position of Resuscitation in Neonates and Young Infants Using Magnetic Resonance Imaging Data

    PubMed Central

    Bhalala, Utpal S.; Hemani, Malvi; Shah, Meehir; Kim, Barbara; Gu, Brian; Cruz, Angelo; Arunachalam, Priya; Tian, Elli; Yu, Christine; Punnoose, Joshua; Chen, Steven; Petrillo, Christopher; Brown, Alisa; Munoz, Karina; Kitchen, Grant; Lam, Taylor; Bosemani, Thangamadhan; Huisman, Thierry A. G. M.; Allen, Robert H.; Acharya, Soumyadipta

    2016-01-01

    Head-tilt maneuver assists with achieving airway patency during resuscitation. However, the relationship between angle of head-tilt and airway patency has not been defined. Our objective was to define an optimal head-tilt position for airway patency in neonates (age: 0–28 days) and young infants (age: 29 days–4 months). We performed a retrospective study of head and neck magnetic resonance imaging (MRI) of neonates and infants to define the angle of head-tilt for airway patency. We excluded those with an artificial airway or an airway malformation. We defined head-tilt angle a priori as the angle between occipito-ophisthion line and ophisthion-C7 spinous process line on the sagittal MR images. We evaluated medical records for Hypoxic Ischemic Encephalopathy (HIE) and exposure to sedation during MRI. We analyzed MRI of head and neck regions of 63 children (53 neonates and 10 young infants). Of these 63 children, 17 had evidence of airway obstruction and 46 had a patent airway on MRI. Also, 16/63 had underlying HIE and 47/63 newborn infants had exposure to sedative medications during MRI. In spontaneously breathing and neurologically depressed newborn infants, the head-tilt angle (median ± SD) associated with patent airway (125.3° ± 11.9°) was significantly different from that of blocked airway (108.2° ± 17.1°) (Mann Whitney U-test, p = 0.0045). The logistic regression analysis showed that the proportion of patent airways progressively increased with an increasing head-tilt angle, with > 95% probability of a patent airway at head-tilt angle 144–150°. PMID:27003759

  5. Defining Optimal Head-Tilt Position of Resuscitation in Neonates and Young Infants Using Magnetic Resonance Imaging Data.

    PubMed

    Bhalala, Utpal S; Hemani, Malvi; Shah, Meehir; Kim, Barbara; Gu, Brian; Cruz, Angelo; Arunachalam, Priya; Tian, Elli; Yu, Christine; Punnoose, Joshua; Chen, Steven; Petrillo, Christopher; Brown, Alisa; Munoz, Karina; Kitchen, Grant; Lam, Taylor; Bosemani, Thangamadhan; Huisman, Thierry A G M; Allen, Robert H; Acharya, Soumyadipta

    2016-01-01

    Head-tilt maneuver assists with achieving airway patency during resuscitation. However, the relationship between angle of head-tilt and airway patency has not been defined. Our objective was to define an optimal head-tilt position for airway patency in neonates (age: 0-28 days) and young infants (age: 29 days-4 months). We performed a retrospective study of head and neck magnetic resonance imaging (MRI) of neonates and infants to define the angle of head-tilt for airway patency. We excluded those with an artificial airway or an airway malformation. We defined head-tilt angle a priori as the angle between occipito-ophisthion line and ophisthion-C7 spinous process line on the sagittal MR images. We evaluated medical records for Hypoxic Ischemic Encephalopathy (HIE) and exposure to sedation during MRI. We analyzed MRI of head and neck regions of 63 children (53 neonates and 10 young infants). Of these 63 children, 17 had evidence of airway obstruction and 46 had a patent airway on MRI. Also, 16/63 had underlying HIE and 47/63 newborn infants had exposure to sedative medications during MRI. In spontaneously breathing and neurologically depressed newborn infants, the head-tilt angle (median ± SD) associated with patent airway (125.3° ± 11.9°) was significantly different from that of blocked airway (108.2° ± 17.1°) (Mann Whitney U-test, p = 0.0045). The logistic regression analysis showed that the proportion of patent airways progressively increased with an increasing head-tilt angle, with > 95% probability of a patent airway at head-tilt angle 144-150°.
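
    The logistic-regression step relating head-tilt angle to the probability of a patent airway can be sketched as below; the angles and outcomes are simulated, so the fitted probabilities only illustrate the shape of such a model, not the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic regression of airway patency on head-tilt angle, on simulated data.
# The simulated relationship is an assumption chosen only for illustration.
rng = np.random.default_rng(5)
angle = rng.uniform(90, 160, size=200).reshape(-1, 1)     # degrees
p_true = 1.0 / (1.0 + np.exp(-(angle.ravel() - 118.0) / 6.0))
patent = (rng.uniform(size=200) < p_true).astype(int)

model = LogisticRegression().fit(angle, patent)
for a in (110, 125, 147):
    print(a, model.predict_proba([[a]])[0, 1])            # P(patent airway | angle)
```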

  6. Musical Probabilities, Abductive Reasoning, and Brain Mechanisms: Extended Perspective of "A Priori" Listening to Music within the Creative Cognition Approach

    ERIC Educational Resources Information Center

    Schmidt, Sebastian; Troge, Thomas A.; Lorrain, Denis

    2013-01-01

    A theory of listening to music is proposed. It suggests that, for listeners, the process of prediction is the starting point to experiencing music. This implies that perception of music starts through both a predisposed and an experience-based extrapolation into the future (this is labeled "a priori" listening). Indications for this…

  7. Phase estimation without a priori phase knowledge in the presence of loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolodynski, Jan; Demkowicz-Dobrzanski, Rafal

    2010-11-15

    We find the optimal scheme for quantum phase estimation in the presence of loss when no a priori knowledge on the estimated phase is available. We prove analytically an explicit lower bound on estimation uncertainty, which shows that, as a function of the number of probes, quantum precision enhancement amounts at most to a constant factor improvement over classical strategies.

  8. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops

    USGS Publications Warehouse

    Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.

  9. Impact of a priori information on IASI ozone retrievals and trends

    NASA Astrophysics Data System (ADS)

    Barret, B.; Peiro, H.; Emili, E.; Le Flocgmoën, E.

    2017-12-01

    The IASI sensor has documented atmospheric water vapor, temperature, and composition since 2007. The Software for a Fast Retrieval of IASI Data (SOFRID) has been developed to retrieve O3 and CO profiles from IASI in near-real time on a global scale. Information content analyses have shown that IASI enables the quantification of O3 independently in the troposphere, the UTLS, and the stratosphere. Validation studies have demonstrated that the daily to seasonal variability of tropospheric and UTLS O3 is well captured by IASI, especially in the tropics. IASI-SOFRID retrievals have also been used to document the tropospheric composition during the Asian monsoon and contributed to determining the O3 evolution during the 2008-2016 period in the framework of the TOAR project. Nevertheless, IASI-SOFRID O3 is biased high in the UTLS and in the tropical troposphere, and the 8-year O3 trends from the different IASI products are significantly different from the O3 trends from UV-Vis satellite sensors (e.g. OMI). SOFRID is based on the Optimal Estimation Method, which requires a priori information to complete the information provided by the measured thermal infrared radiances. In SOFRID-O3 v1.5, used in TOAR, the a priori consists of a single O3 profile and associated covariance matrix based on global O3 radiosoundings. Such a global a priori is characterized by a very large variability and does not represent our best knowledge of the O3 profile at a given time and location. Furthermore, it is biased towards northern-hemisphere middle latitudes. We have therefore implemented the possibility of using dynamical a priori data in SOFRID and performed experiments using O3 climatological data and MLS O3 analyses. We will present O3 distributions and comparisons with O3 radiosoundings from the different SOFRID-O3 retrievals. In particular, we will assess the impact of the use of different a priori data upon the O3 biases and trends during the IASI period.
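
    The role of the a priori profile and covariance in an Optimal Estimation retrieval can be seen from the standard linear update equations, sketched below with a synthetic Jacobian and covariances; none of the numbers correspond to SOFRID itself.

```python
import numpy as np

# Generic linear optimal-estimation retrieval, showing where the a priori
# profile x_a and covariance S_a enter. K, S_e, and the profiles are synthetic
# placeholders, not SOFRID quantities.
rng = np.random.default_rng(6)
n_levels, n_channels = 15, 40
K = rng.normal(size=(n_channels, n_levels))          # Jacobian of radiances w.r.t. ozone
S_e = 0.05 * np.eye(n_channels)                       # measurement-noise covariance
S_a = 0.5 * np.eye(n_levels)                          # a priori covariance
x_a = np.full(n_levels, 1.0)                          # a priori ozone profile
x_true = x_a + rng.normal(scale=0.3, size=n_levels)
y = K @ x_true + rng.multivariate_normal(np.zeros(n_channels), S_e)

S_e_inv, S_a_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)           # retrieval covariance
x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)          # retrieved profile
A = S_hat @ K.T @ S_e_inv @ K                                 # averaging kernel matrix
print(np.trace(A))   # degrees of freedom for signal; a biased x_a pulls x_hat toward it
```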

  10. Optimal information networks: Application for data-driven integrated health in populations

    PubMed Central

    Servadio, Joseph L.; Convertino, Matteo

    2018-01-01

    Development of composite indicators for integrated health in populations typically relies on a priori assumptions rather than model-free, data-driven evidence. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, instead considering only individual correlations. In addition, a unified method for assessing integrated health statuses of populations is lacking, making systematic comparison among populations impossible. We propose the use of maximum entropy networks (MENets) that use transfer entropy to assess interrelatedness among selected variables considered for inclusion in a composite indicator. We also define optimal information networks (OINs) that are scale-invariant MENets, which use the information in constructed networks for optimal decision-making. Health outcome data from multiple cities in the United States are applied to this method to create a systemic health indicator, representing integrated health in a city. PMID:29423440

  11. Discrete mathematics for spatial data classification and understanding

    NASA Astrophysics Data System (ADS)

    Mussio, Luigi; Nocera, Rossella; Poli, Daniela

    1998-12-01

    Data processing, in the field of information technology, requires new tools involving discrete mathematics, such as data compression, signal enhancement, data classification and understanding, hypertexts and multimedia (considering educational aspects too), because the mass of data demands automatic data management and does not permit any a priori knowledge. The methodologies and procedures used in this class of problems concern different kinds of segmentation techniques and relational strategies, such as clustering, parsing, vectorization, formalization, fitting and matching. On the other hand, the complexity of this approach requires performing optimal sampling and outlier detection at the very beginning, in order to define the set of data to be processed: rough data supply very poor information. For these reasons, no hypotheses about the distribution behavior of the data can generally be made, and a judgment should be obtained by distribution-free inference only.

  12. Restoration of Static JPEG Images and RGB Video Frames by Means of Nonlinear Filtering in Conditions of Gaussian and Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Sokolov, R. I.; Abdullin, R. R.

    2017-11-01

    The use of nonlinear Markov process filtering makes it possible to restore both video stream frames and static photos at the preprocessing stage. The present paper reports the results of research comparing the filtering quality for these two types of images by means of a special algorithm under Gaussian and non-Gaussian noise. Examples of filter operation at different values of the signal-to-noise ratio are presented. A comparative analysis has been performed, and the kind of noise that is filtered best has been identified. It has been shown that the quality of the developed algorithm is much better than that of an adaptive filter for RGB signal filtering given the same a priori information about the signal. The algorithm also has an advantage over a median filter when both fluctuation and pulse noise are being filtered.

  13. Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control.

    PubMed

    Ciofolo, Cybèle; Barillot, Christian

    2009-06-01

    We propose a novel approach for the simultaneous segmentation of multiple structures with competitive level sets driven by fuzzy control. To this end, several contours evolve simultaneously toward previously defined anatomical targets. A fuzzy decision system combines the a priori knowledge provided by an anatomical atlas with the intensity distribution of the image and the relative position of the contours. This combination automatically determines the directional term of the evolution equation of each level set. This leads to a local expansion or contraction of the contours, in order to match the boundaries of their respective targets. Two applications are presented: the segmentation of the brain hemispheres and the cerebellum, and the segmentation of deep internal structures. Experimental results on real magnetic resonance (MR) images are presented, quantitatively assessed and discussed.

  14. Management of Patients With Diverticulosis and Diverticular Disease: Consensus Statements From the 2nd International Symposium on Diverticular Disease.

    PubMed

    Tursi, Antonio; Picchio, Marcello; Elisei, Walter; Di Mario, Francesco; Scarpignato, Carmelo; Brandimarte, Giovanni

    2016-10-01

    The statements produced by the Chairmen of the 2nd International Symposium on Diverticular Disease, held in Rome on April 8th to 9th, 2016, are reported. Topics such as epidemiology, risk factors, diagnosis, medical and surgical treatment of diverticular disease in patients with uncomplicated and complicated diverticular disease were reviewed by the Chairmen who proposed 41 statements graded according to level of evidence and strength of recommendation. Each topic was explored focusing on the more relevant clinical questions. The vote was conducted on a 6-point scale and consensus was defined a priori as 67% agreement of the participants. The voting group consisted of 80 physicians from 6 countries, and agreement with all statements was provided. Comments were added explaining some controversial areas.

  15. Relations between water physico-chemistry and benthic algal communities in a northern Canadian watershed: defining reference conditions using multiple descriptors of community structure.

    PubMed

    Thomas, Kathryn E; Hall, Roland I; Scrimgeour, Garry J

    2015-09-01

    Defining reference conditions is central to identifying environmental effects of anthropogenic activities. Using a watershed approach, we quantified reference conditions for benthic algal communities and their relations to physico-chemical conditions in rivers in the South Nahanni River watershed, NWT, Canada, in 2008 and 2009. We also compared the ability of three descriptors that vary in terms of analytical costs to define algal community structure based on relative abundances of (i) all algal taxa, (ii) only diatom taxa, and (iii) photosynthetic pigments. Ordination analyses showed that variance in algal community structure was strongly related to gradients in environmental variables describing water physico-chemistry, stream habitats, and sub-watershed structure. Water physico-chemistry and local watershed-scale descriptors differed significantly between algal communities from sites in the Selwyn Mountain ecoregion compared to sites in the Nahanni-Hyland ecoregions. Distinct differences in algal community types between ecoregions were apparent irrespective of whether algal community structure was defined using all algal taxa, diatom taxa, or photosynthetic pigments. Two algal community types were highly predictable using environmental variables, a core consideration in the development of Reference Condition Approach (RCA) models. These results suggest that assessments of environmental impacts could be completed using RCA models for each ecoregion. We suggest that use of algal pigments, a high through-put analysis, is a promising alternative compared to more labor-intensive and costly taxonomic approaches for defining algal community structure.

  16. Tomographic inversion of time-domain resistivity and chargeability data for the investigation of landfills using a priori information.

    PubMed

    De Donno, Giorgio; Cardarelli, Ettore

    2017-01-01

    In this paper, we present a new code for the modelling and inversion of resistivity and chargeability data using a priori information to improve the accuracy of the reconstructed model for landfills. When a priori information is available for the study area, it can be incorporated by means of inequality constraints on the whole model or on a single layer, or by assigning weighting factors that enhance anomalies elongated in the horizontal or vertical directions. However, when we have to face a multilayered scenario with numerous resistive-to-conductive transitions (the case of controlled landfills), the effective thickness of the layers can be biased. The presented code includes a model-tuning scheme, which is applied after the inversion of field data, where the inversion of the synthetic data is performed based on an initial guess and the absolute difference between the field and synthetic inverted models is minimized. The reliability of the proposed approach has been demonstrated in two real-world examples; we were able to identify an unauthorized landfill and to reconstruct the geometrical and physical layout of an old waste dump. The combined analysis of the resistivity and (normalised) chargeability models helps us to remove ambiguity due to the presence of the waste mass. Nevertheless, the presence of certain layers can remain hidden without using a priori information, as demonstrated by a comparison of the constrained inversion with a standard inversion. The robustness of the above-cited method (using a priori information in combination with model tuning) has been validated against the cross-section from the construction plans, where the reconstructed model is in agreement with the original design. Copyright © 2016 Elsevier Ltd. All rights reserved.
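
    Inequality constraints of the kind described can be prototyped with a bounded linear least-squares solver. The scipy sketch below bounds a subset of model cells inside an a priori range for a synthetic sensitivity matrix; it is not the paper's resistivity/chargeability forward model or inversion code.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy illustration of a priori inequality constraints in a linear inversion
# step: bound selected model parameters (e.g. log-resistivity of known layers)
# inside ranges taken from construction plans. J is a synthetic sensitivity
# matrix, not a real resistivity forward model.
rng = np.random.default_rng(7)
n_data, n_cells = 80, 30
J = rng.normal(size=(n_data, n_cells))
m_true = rng.uniform(0.5, 3.0, size=n_cells)
d = J @ m_true + 0.05 * rng.normal(size=n_data)

lower = np.full(n_cells, -np.inf)
upper = np.full(n_cells, np.inf)
lower[:10], upper[:10] = 1.0, 2.0        # a priori bounds on the first 10 cells

res = lsq_linear(J, d, bounds=(lower, upper))
print(res.x[:10].min(), res.x[:10].max())  # constrained cells stay inside [1, 2]
```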

  17. Defining And Employing Reference Conditions For Ecological Restoration Of The Lower Missouri River, USA

    NASA Astrophysics Data System (ADS)

    Jacobson, R. B.; Elliott, C. M.; Reuter, J. M.

    2008-12-01

    Ecological reference conditions are especially challenging for large, intensively managed rivers like the Lower Missouri. Historical information provides broad understanding of how the river has changed, but translating historical information into quantitative reference conditions remains a challenge. Historical information is less available for biological and chemical conditions than for physical conditions. For physical conditions, much of the early historical condition is documented in date-specific measurements or maps, and it is difficult to determine how representative these conditions are for a river system that was characterized historically by large floods and high channel migration rates. As an alternative to a historically defined least-disturbed condition, spatial variation within the Missouri River basin provides potential for defining a best-attainable reference condition. A possibility for the best-attainable condition for channel morphology is an unchannelized segment downstream of the lowermost dam (rkm 1298 - 1203). This segment retains multiple channels and abundant sandbars although it has a highly altered flow regime and a greatly diminished sediment supply. Conversely, downstream river segments have more natural flow regimes, but have been narrowed and simplified for navigation and bank stability. We use two computational tools to compensate for the lack of ideal reference conditions. The first is a hydrologic model that synthesizes natural and altered flow regimes based on 100 years of daily inputs to the river (daily routing model, DRM, US Army Corps of Engineers, 1998); the second tool is hydrodynamic modeling of habitat availability. The flow-regime and hydrodynamic outputs are integrated to define habitat-duration curves as the basis for reference conditions (least-disturbed flow regime and least-disturbed channel morphology). Lacking robust biological response models, we use mean residence time of water and a habitat diversity index as generic ecosystem indicators.

  18. Global identifiability of linear compartmental models--a computer algebra algorithm.

    PubMed

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

    A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and in number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of the use of GLOBI to analyze the a priori global identifiability of some complex biological compartmental models are provided.

  19. Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo

    2015-11-01

    Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images with good quality using sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving the image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions where the resolution needs to be preserved and where streak or noise needs to be suppressed. We then alternately updated each part of the image by solving two sub-minimization problems iteratively, where one minimization focused on preserving the edge information of the first part while the other concentrated on removing noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising the image resolution. For both the phantom and the patient cases, p-MGIR is able to achieve a clinically reasonable image with 60 projections. Therefore, a clinically viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than the quality obtained using other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can potentially be used as a CBCT reconstruction algorithm for low-dose scan requests.

  20. Instantaneous progression reference frame for calculating pelvis rotations: Reliable and anatomically-meaningful results independent of the direction of movement.

    PubMed

    Kainz, Hans; Lloyd, David G; Walsh, Henry P J; Carty, Christopher P

    2016-05-01

    In motion analysis, pelvis angles are conventionally calculated as the rotations between the pelvis and the laboratory reference frame. This approach assumes that the participant's motion is along the anterior-posterior axis of the laboratory reference frame. When this assumption is violated, interpretation of pelvis angles becomes problematic. In this paper, a new approach for calculating pelvis angles based on the rotations between the pelvis and an instantaneous progression reference frame is introduced. At every time-point, the tangent to the trajectory of the midpoint of the pelvis, projected into the horizontal plane of the laboratory reference frame, was used to define the anterior-posterior axis of the instantaneous progression reference frame. This new approach, combined with the rotation-obliquity-tilt rotation sequence, was compared to the conventional approach using the rotation-obliquity-tilt and tilt-obliquity-rotation sequences. Four different movement tasks performed by eight healthy adults were analysed. The instantaneous progression reference frame approach was the only approach that showed reliable and anatomically meaningful results for all analysed movement tasks (mean root-mean-square differences below 5°, differences in pelvis angles at pre-defined gait events below 10°). Both rotation sequences combined with the conventional approach led to unreliable results as soon as the participant's motion was not along the anterior-posterior laboratory axis (mean root-mean-square differences up to 30°, differences in pelvis angles at pre-defined gait events up to 45°). The instantaneous progression reference frame approach enables the gait analysis community to analyse pelvis angles for movements that do not follow the anterior-posterior axis of the laboratory reference frame. Copyright © 2016 Elsevier B.V. All rights reserved.
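
    The core construction, taking the horizontal-plane tangent of the pelvis-midpoint trajectory as the anterior-posterior axis at every sample, can be sketched in a few lines of numpy, as below. The marker trajectory is simulated and the subsequent rotation-sequence decomposition is omitted, so this is only an illustration of the frame definition, not the authors' full pipeline.

```python
import numpy as np

def instantaneous_progression_frames(pelvis_mid):
    """Build an instantaneous progression reference frame per time sample.

    pelvis_mid : (n, 3) trajectory of the pelvis midpoint in the lab frame
                 (x anterior, y left, z up).
    Returns an (n, 3, 3) array of rotation matrices whose first row is the
    tangent of the horizontal-plane trajectory (anterior-posterior axis).
    """
    tangent = np.gradient(pelvis_mid[:, :2], axis=0)           # horizontal-plane velocity
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    frames = np.zeros((len(pelvis_mid), 3, 3))
    frames[:, 0, :2] = tangent                                   # anterior-posterior axis
    frames[:, 2, 2] = 1.0                                        # vertical axis = lab vertical
    frames[:, 1] = np.cross(frames[:, 2], frames[:, 0])          # medio-lateral axis
    return frames

# Simulated curved walking path: the progression frame follows the turn.
t = np.linspace(0, np.pi / 2, 100)
path = np.column_stack([np.sin(t) * 5, (1 - np.cos(t)) * 5, np.full_like(t, 0.95)])
R = instantaneous_progression_frames(path)
print(R[0, 0], R[-1, 0])   # anterior axis rotates by ~90 degrees over the turn
```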

  1. Invariant polarimetric contrast parameters of coherent light.

    PubMed

    Réfrégier, Philippe; Goudail, François

    2002-06-01

    Many applications use an active coherent illumination and analyze the variation of the polarization state of optical signals. However, as a result of the use of coherent light, these signals are generally strongly perturbed with speckle noise. This is the case, for example, for active polarimetric imaging systems that are useful for enhancing contrast between different elements in a scene. We propose a rigorous definition of the minimal set of parameters that characterize the difference between two coherent and partially polarized states. Indeed, two states of partially polarized light are a priori defined by eight parameters, for example, their two Stokes vectors. We demonstrate that the processing performance for such signal processing tasks as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by two scalar functions of these eight parameters. These two scalar functions are the invariant parameters that define the polarimetric contrast between two polarized states of coherent light. Different polarization configurations with the same invariant contrast parameters will necessarily lead to the same performance for a given task, which is a desirable quality for a rigorous contrast measure. The definition of these polarimetric contrast parameters simplifies the analysis and the specification of processing techniques for coherent polarimetric signals.

  2. Design of partially supervised classifiers for multispectral image data

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David

    1993-01-01

    A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.

  3. Defining a Bobath clinical framework - A modified e-Delphi study.

    PubMed

    Vaughan-Graham, Julie; Cott, Cheryl

    2016-11-01

    To gain consensus within the expert International Bobath Instructors Training Association (IBITA) on a Bobath clinical framework on which future efficacy studies can be based. A three-round modified e-Delphi approach was used with 204 full members of the IBITA. Twenty-one initial statements were generated from the literature. Consensus was defined a priori as at least 80% of the respondents with a level of agreement on a Likert scale of 4 or 5. The Delphi questionnaire for each round was available online for two weeks. Summary reports and subsequent questionnaires were posted within four weeks. Ninety-four IBITA members responded, forming the Delphi panel, of which 68 and 66 responded to Rounds Two and Three, respectively. The 21 initial statements were revised to 17 statements and five new statements in Round Two in which eight statements were accepted and two statements were eliminated. Round Three presented 12 revised statements, all reaching consensus. The Delphi was successful in gaining consensus on a Bobath clinical framework in a geographically diverse expert association, identifying the unique components of Bobath clinical practice. Discussion throughout all three Rounds revolved primarily around the terminology of atypical and compensatory motor behavior and balance.

  4. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
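
    A much-simplified sketch of the pipeline, a mutual k-NN graph, the largest connected components as background ROIs, then minimum-distance-to-the-mean classification, is given below on synthetic spectra; it omits TAD's actual graph-construction details and the GML variant, and all data and parameter choices are assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

# Simplified semi-supervised sketch: mutual k-NN graph over pixel spectra,
# largest connected components as background ROIs, then minimum distance to
# each ROI mean. Synthetic spectra; not the TAD algorithm itself.
rng = np.random.default_rng(8)
spectra = np.vstack([rng.normal(0.2, 0.02, (300, 10)),     # two background materials
                     rng.normal(0.6, 0.02, (300, 10)),
                     rng.normal(1.0, 0.02, (5, 10))])       # a few anomalous pixels

k = 8
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(spectra)
idx = nbrs.kneighbors(spectra, return_distance=False)[:, 1:]   # drop self-neighbour
n = len(spectra)
adj = np.zeros((n, n), dtype=bool)
adj[np.repeat(np.arange(n), k), idx.ravel()] = True
mutual = adj & adj.T                                           # keep mutual k-NN edges only

n_comp, labels = connected_components(csr_matrix(mutual), directed=False)
sizes = np.bincount(labels)
roi_ids = np.argsort(sizes)[-2:]                               # two largest components as ROIs
means = np.array([spectra[labels == c].mean(axis=0) for c in roi_ids])

# Minimum-distance-to-the-mean classification of every pixel against the ROI means.
dists = np.linalg.norm(spectra[:, None, :] - means[None, :, :], axis=2)
classes = dists.argmin(axis=1)
print(np.bincount(classes))
```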

  5. Practical Considerations about Expected A Posteriori Estimation in Adaptive Testing: Adaptive A Priori, Adaptive Correction for Bias, and Adaptive Integration Interval.

    ERIC Educational Resources Information Center

    Raiche, Gilles; Blais, Jean-Guy

    In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…

  6. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  7. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.

  8. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    NASA Astrophysics Data System (ADS)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
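
    The reconstruction step of an ADM can be summarized as a truncated van Cittert iteration, u* ≈ Σ_{k=0..N} (I − G)^k ū, where G is the filter. The Python sketch below illustrates this in one dimension with a box filter; the filter width, deconvolution order, and function names are assumptions for illustration and do not reproduce the authors' two-fluid closure.

      import numpy as np

      def box_filter(u, width=3):
          """Periodic 1-D top-hat filter standing in for the test filter G."""
          kernel = np.ones(width) / width
          padded = np.concatenate([u[-width:], u, u[:width]])
          return np.convolve(padded, kernel, mode="same")[width:-width]

      def adm_reconstruct(u_filtered, order=3, width=3):
          """Truncated van Cittert iteration: u* = sum_k (I - G)^k applied to the
          filtered field, giving an approximation of the unfiltered solution."""
          u_star = np.zeros_like(u_filtered, dtype=float)
          term = np.asarray(u_filtered, dtype=float).copy()
          for _ in range(order + 1):
              u_star = u_star + term
              term = term - box_filter(term, width)
          return u_star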

  9. Technostress and the Reference Librarian.

    ERIC Educational Resources Information Center

    Kupersmith, John

    1992-01-01

    Defines "technostress" as the stress experienced by reference librarians who must constantly deal with the demands of new information technology and the changes they produce in the work place. Discussion includes suggested ways in which both organizations and individuals can work to reduce stress. (27 references) (LAE)

  10. Changing Roles for Reference Librarians.

    ERIC Educational Resources Information Center

    Kelly, Julia; Robbins, Kathryn

    1996-01-01

    Discusses the future outlook for reference librarians, with topics including: "Technology as the Source of Change"; "Impact of the Internet"; "Defining the Virtual Library"; "Rethinking Reference"; "Out of the Library and into the Streets"; "Asking Users About Their Needs"; "Standardization and Artificial Intelligence"; "The Financial Future"; and…

  11. Low energy stage study. Volume 2: Requirements and candidate propulsion modes. [orbital launching of shuttle payloads

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A payload mission model covering 129 launches was examined and compared against the space transportation system shuttle standard orbit inclinations and a shuttle launch site implementation schedule. Based on this examination and comparison, a set of six reference missions was defined in terms of spacecraft weight and velocity requirements to deliver the payload from a 296 km circular Shuttle standard orbit to the spacecraft's planned orbit. Payload characteristics and requirements representative of the model payloads included in the regime bounded by each of the six reference missions were determined. A set of launch cost envelopes was developed and defined based on the characteristics of existing/planned Shuttle upper stages and expendable launch systems, in terms of launch cost and velocity delivered. These six reference missions were used to define the requirements for the candidate propulsion modes, which were developed and screened to determine the propulsion approaches for conceptual design.

  12. Long-Term Variations of the EOP and ICRF2

    NASA Technical Reports Server (NTRS)

    Zharov, Vladimir; Sazhin, Mikhail; Sementsov, Valerian; Sazhina, Olga

    2010-01-01

    We analyzed the time series of the coordinates of the ICRF radio sources. We show that some of the radio sources, including defining sources, exhibit significant apparent motion. The stability of the celestial reference frame is provided by a no-net-rotation condition applied to the defining sources. In our case this condition leads to a rotation of the frame axes with time. We calculated the effect of this rotation on the Earth orientation parameters (EOP). In order to improve the stability of the celestial reference frame we suggest a new method for the selection of the defining sources. The method consists of two criteria, the first of which we call cosmological and the second kinematical. It is shown that a subset of the ICRF sources selected according to cosmological criteria provides the most stable reference frame for the next decade.

  13. Mars rover/sample return mission requirements affecting space station

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The possible interfaces between the Space Station and the Mars Rover/Sample Return (MRSR) mission are defined. In order to constrain the scope of the report, a series of seven design reference missions, divided into three major types, was assumed. These missions were defined to span the probable range of Space Station-MRSR interactions. After the options were reduced, the MRSR sample handling requirements, baseline assumptions about the MRSR hardware, and the key design features and requirements of the Space Station are summarized. Only the aspects of the design reference missions necessary to define the interfaces, hooks and scars, and other provisions on the Space Station are considered. An analysis of each of the three major design reference missions is reported, presenting conceptual designs of key hardware to be mounted on the Space Station, a definition of weights, interfaces, and required hooks and scars.

  14. Thermocouple, multiple junction reference oven

    NASA Technical Reports Server (NTRS)

    Leblanc, L. P. (Inventor)

    1981-01-01

    An improved oven for maintaining the junctions of a plurality of reference thermocouples at a common and constant temperature is described. The oven is characterized by a cylindrical body defining a heat sink with an axially extended cylindrical cavity, and by a unitary cylindrical heating element consisting of a resistance heating coil wound about the surface of a metallic spool with an axial bore, seated in the cavity. Other features of the oven include an annular array of radially extended bores defined in the cylindrical body, a plurality of reference thermocouple junctions seated in the bores in uniformly spaced relation with the heating element, and a temperature sensing device seated in the axial bore for detecting temperature changes as they occur in the spool, together with a circuit that applies a voltage across the coil in response to detected drops in the temperature of the spool.

  15. Problems in Defining the Field of Distance Education.

    ERIC Educational Resources Information Center

    Keegan, Desmond

    1988-01-01

    This discussion of definitions of distance education responds to previous articles attempting to define the field. Topics discussed include distance education versus conventional education; group-based distance education; differences between open learning and distance education; and criteria to define distance education. (13 references) (LRW)

  16. On the determination of certain astronomical, selenodesic, and gravitational parameters of the moon

    NASA Technical Reports Server (NTRS)

    Aleksashin, Y. P.; Ziman, Y. L.; Isavnina, I. V.; Krasikov, V. A.; Nepoklonov, B. V.; Rodionov, B. N.; Tischenko, A. P.

    1974-01-01

    A method was examined for joint construction of a selenocentric fundamental system which can be realized by a coordinate catalog of reference contour points uniformly positioned over the entire lunar surface, and determination of the parameters characterizing the gravitational field, rotation, and orbital motion of the moon. Characteristic of the problem formulation is the introduction of a new complex of iconometric measurements which can be made using pictures obtained from an artificial lunar satellite. The proposed method can be used to solve similar problems on any other planet for which surface images can be obtained from a spacecraft. Characteristic of the proposed technique for solving the problem is the joint statistical analysis of all forms of measurements: orbital iconometric, earth-based trajectory, and also a priori information on the parameters in question which is known from earth-based astronomical studies.

  17. Short-term climate variability and atmospheric teleconnections from satellite-observed outgoing longwave radiation. I Simultaneous relationships. II - Lagged correlations

    NASA Technical Reports Server (NTRS)

    Lau, K.-M.; Chan, P. H.

    1983-01-01

    Attention is given to the low-frequency variability of outgoing longwave radiation (OLR) fluctuations, their possible correlations over different parts of the globe, and their relationships with teleconnections obtained from other meteorological parameters, for example, geopotential and temperature fields. Simultaneous relationships with respect to the Southern Oscillation (Namias, 1978; Barnett, 1981) signal and the reference OLR fluctuation over the equatorial central Pacific are investigated. Emphasis is placed on the relative importance of the Southern Oscillation (SO) signal over preferred regions. Using lag cross-correlation statistics, possible lagged relationships between the tropics and midlatitudes and their relationships with the SO are then investigated. Only features that are consistent with present knowledge of the dynamics of the system are emphasized. Certain features which may not meet rigorous statistical significance tests but are either expected a priori from independent observations or are predicted from dynamical theories are also explored.

  18. Six-hourly time series of horizontal troposphere gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Hofmeister, Armin; Mayer, David; Böhm, Johannes

    2016-04-01

    Consideration of horizontal gradients is indispensable for high-precision VLBI and GNSS analysis. As a rule of thumb, all observations below 15 degrees elevation need to be corrected for the influence of azimuthal asymmetry on the delay times, which is mainly a product of the non-spherical shape of the atmosphere and ever-changing weather conditions. Based on the well-known gradient estimation model by Chen and Herring (1997), we developed an augmented gradient model with additional parameters which are determined from ray-traced delays for the complete history of VLBI observations. As input to the ray-tracer, we used operational and re-analysis data from the European Centre for Medium-Range Weather Forecasts. Finally, we applied those a priori gradient parameters to VLBI analysis along with other empirical gradient models and assessed their impact on baseline length repeatabilities as well as on celestial and terrestrial reference frames.
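
    For reference, the underlying Chen and Herring (1997) form adds an azimuth-dependent term m_g(e)·[G_n cos A + G_e sin A] to the slant delay, with m_g(e) = 1/(sin e · tan e + C). The sketch below evaluates that standard form in Python; the constant C = 0.0032 is a commonly used value and an assumption here, and the augmented parameters estimated from ray-traced delays in this work are not reproduced.

      import numpy as np

      def gradient_delay(elev_deg, azim_deg, grad_north, grad_east, C=0.0032):
          """Azimuthally asymmetric delay after Chen & Herring (1997):
          m_g(e) * (G_n*cos(A) + G_e*sin(A)); gradients and result share one unit."""
          e = np.radians(elev_deg)
          A = np.radians(azim_deg)
          m_g = 1.0 / (np.sin(e) * np.tan(e) + C)
          return m_g * (grad_north * np.cos(A) + grad_east * np.sin(A))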

  19. Adaptive control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The author presents a novel approach to adaptive control of manipulators to achieve trajectory tracking by the joint angles. The central concept in this approach is the utilization of the manipulator inverse as a feedforward controller. The desired trajectory is applied as an input to the feedforward controller which behaves as the inverse of the manipulator at any operating point; the controller output is used as the driving torque for the manipulator. The controller gains are then updated by an adaptation algorithm derived from MRAC (model reference adaptive control) theory to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal are also used to enhance closed-loop stability and to achieve faster adaptation. The proposed control scheme is computationally fast and does not require a priori knowledge of the complex dynamic model or the parameter values of the manipulator or the payload.
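
    The flavor of such a model reference adaptive gain update can be conveyed with a toy first-order example: an MIT-rule adjustment drives adaptive feedforward and feedback gains from the error between the plant and a reference model. The Python sketch below is purely illustrative (plant, gains, and adaptation rate are invented values) and is not the manipulator controller of the paper.

      import numpy as np

      def simulate_mrac(a_p=2.0, b_p=1.5, a_m=4.0, b_m=4.0,
                        gamma=2.0, dt=1e-3, t_end=10.0):
          """Toy MIT-rule MRAC: plant x' = -a_p*x + b_p*u tracks the reference
          model x_m' = -a_m*x_m + b_m*r; the gains k_r, k_x adapt on-line."""
          x, x_m, k_r, k_x = 0.0, 0.0, 0.0, 0.0
          for i in range(int(t_end / dt)):
              r = 1.0 if (i * dt) % 4.0 < 2.0 else -1.0      # square-wave command
              u = k_r * r + k_x * x                          # adaptive control law
              e = x - x_m                                    # model-following error
              k_r -= gamma * e * x_m * dt                    # approximate MIT-rule updates
              k_x -= gamma * e * x * dt
              x += dt * (-a_p * x + b_p * u)
              x_m += dt * (-a_m * x_m + b_m * r)
          return k_r, k_x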

  20. On the Theory and Numerical Simulation of Cohesive Crack Propagation with Application to Fiber-Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Rudraraju, Siva Shankar; Garikipati, Krishna; Waas, Anthony M.; Bednarcyk, Brett A.

    2013-01-01

    The phenomenon of crack propagation is among the predominant modes of failure in many natural and engineering structures, often leading to severe loss of structural integrity and catastrophic failure. Thus, the ability to understand and a priori simulate the evolution of this failure mode has been one of the cornerstones of applied mechanics and structural engineering and is broadly referred to as "fracture mechanics." The work reported herein focuses on extending this understanding, in the context of through-thickness crack propagation in cohesive materials, through the development of a continuum-level multiscale numerical framework, which represents cracks as displacement discontinuities across a surface of zero measure. This report presents the relevant theory, mathematical framework, numerical modeling, and experimental investigations of through-thickness crack propagation in fiber-reinforced composites using the Variational Multiscale Cohesive Method (VMCM) developed by the authors.

  1. Systematic effects in LOD from SLR observations

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Gerstl, Michael; Hugentobler, Urs; Angermann, Detlef; Müller, Horst

    2014-09-01

    Besides the estimation of station coordinates and the Earth's gravity field, laser ranging observations to near-Earth satellites can be used to determine the rotation of the Earth. One parameter of this rotation is ΔLOD (excess Length Of Day), which describes the excess revolution time of the Earth w.r.t. 86,400 s. Due to correlations among the different parameter groups, it is difficult to obtain reliable estimates for all parameters. In the official ΔLOD products of the International Earth Rotation and Reference Systems Service (IERS), the ΔLOD information determined from laser ranging observations is excluded from the processing. In this paper, we study in detail the correlations between ΔLOD, the orbital node Ω, the even zonal gravity field coefficients, cross-track empirical accelerations and relativistic accelerations caused by the Lense-Thirring and de Sitter effects, using first order Gaussian perturbation equations. We found discrepancies of up to 1.0 ms for polar orbits at an altitude of 500 km when different a priori gravity field models are used, and of up to 40.0 ms if the gravity field coefficients are estimated using only observations to LAGEOS 1. If observations to LAGEOS 2 are included, reliable ΔLOD estimates can be achieved. Nevertheless, an impact of the a priori gravity field even on the multi-satellite ΔLOD estimates can be clearly identified. Furthermore, we investigate the effect of empirical cross-track accelerations and the effect of relativistic accelerations of near-Earth satellites on ΔLOD. A total effect of 0.0088 ms is caused by unmodeled Lense-Thirring and de Sitter terms. The partial derivatives of these accelerations w.r.t. the position and velocity of the satellite cause very small variations (0.1 μs) in ΔLOD.

  2. Filtering observations without the initial guess

    NASA Astrophysics Data System (ADS)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in their place. It is therefore desirable to be able to exploit efficient (time sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. These approximation approaches are also briefly described in the presentation.
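
    A minimal sketch of the measurement update in information form, which is what allows the filter to start from "zero information" rather than from an initial mean and covariance: the information matrix Y = P⁻¹ and vector y = Y·x accumulate Hᵀ R⁻¹ H and Hᵀ R⁻¹ z, and the state estimate x̂ = Y⁻¹ y is only formed once Y becomes invertible. Function and variable names are illustrative, not the authors' TRF implementation.

      import numpy as np

      def information_update(Y, y, H, R, z):
          """Information-filter measurement update:
          Y <- Y + H^T R^-1 H,   y <- y + H^T R^-1 z.
          Starting from Y = 0, y = 0 encodes the absence of a prior."""
          Rinv = np.linalg.inv(R)
          return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

      # two scalar observations of a 1-D state make Y invertible
      Y, y = np.zeros((1, 1)), np.zeros(1)
      for obs in (1.02, 0.98):
          Y, y = information_update(Y, y, H=np.array([[1.0]]), R=np.array([[0.1]]),
                                    z=np.array([obs]))
      x_hat = np.linalg.solve(Y, y)                  # ~1.0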

  3. Trial-by-Trial Changes in a Priori Informational Value of External Cues and Subjective Expectancies in Human Auditory Attention

    PubMed Central

    Arjona, Antonio; Gómez, Carlos M.

    2011-01-01

    Background Preparatory activity based on a priori probabilities generated in previous trials and on subjective expectancies would produce an attentional bias. However, preparation can be correct (valid) or incorrect (invalid) depending on the actual target stimulus. The alternation effect refers to the subjective expectancy that a target will not be repeated in the same position, causing RTs to increase if the target location is repeated. The present experiment, using Posner's central cue paradigm, tries to demonstrate that not only the credibility of the cue but also the expectancy about the next position of the target are changed on a trial-by-trial basis. Sequences of trials were analyzed. Results The results indicated an increase in RT benefits when sequences of two and three valid trials occurred. The analysis of errors indicated an increase in anticipatory behavior which grows as the number of valid trials is increased. On the other hand, there was also an RT benefit when a trial was preceded by trials in which the position of the target changed with respect to the current trial (alternation effect). Sequences of two alternations or two repetitions were faster than sequences of trials in which a pattern of repetition or alternation is broken. Conclusions Taken together, these results suggest that in Posner's central cue paradigm, and with regard to the anticipatory activity, the credibility of the external cue and of the endogenously anticipated patterns of target location are constantly updated. The results suggest that Bayesian rules are operating in the generation of anticipatory activity as a function of the previous trial's outcome, but also on biases or prior beliefs like the "gambler's fallacy". PMID:21698164

  4. Implementation and testing of the gridded Vienna Mapping Function 1 (VMF1)

    NASA Astrophysics Data System (ADS)

    Kouba, J.

    2008-04-01

    The new gridded Vienna Mapping Function (VMF1) was implemented and compared to the well-established site-dependent VMF1, directly and by using precise point positioning (PPP) with International GNSS Service (IGS) Final orbits/clocks for a 1.5-year GPS data set of 11 globally distributed IGS stations. The gridded VMF1 data can be interpolated for any location and for any time after 1994, whereas the site-dependent VMF1 data are only available at selected IGS stations and only after 2004. Both gridded and site-dependent VMF1 PPP solutions agree within 1 and 2 mm for the horizontal and vertical position components, respectively, provided that respective VMF1 hydrostatic zenith path delays (ZPD) are used for hydrostatic ZPD mapping to slant delays. The total ZPD of the gridded and site-dependent VMF1 data agree with PPP ZPD solutions with RMS of 1.5 and 1.8 cm, respectively. Such precise total ZPDs could provide useful initial a priori ZPD estimates for kinematic PPP and regional static GPS solutions. The hydrostatic ZPDs of the gridded VMF1 compare with the site-dependent VMF1 ZPDs with RMS of 0.3 cm, subject to some biases and discontinuities of up to 4 cm, which are likely due to different strategies used in the generation of the site-dependent VMF1 data. The precision of gridded hydrostatic ZPD should be sufficient for accurate a priori hydrostatic ZPD mapping in all precise GPS and very long baseline interferometry (VLBI) solutions. Conversely, precise and globally distributed geodetic solutions of total ZPDs, which need to be linked to VLBI to control biases and stability, should also provide a consistent and stable reference frame for long-term and state-of-the-art numerical weather modeling.
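
    VMF1-type mapping functions share the Marini/Herring continued-fraction form m(e) = (1 + a/(1 + b/(1 + c))) / (sin e + a/(sin e + b/(sin e + c))); the gridded product supplies coefficients on a global grid to be interpolated to the station. The sketch below simply evaluates that form; the numerical coefficient values are placeholders, not VMF1 data.

      import numpy as np

      def continued_fraction_mf(elev_deg, a, b, c):
          """Marini/Herring continued fraction used by VMF1-type mapping functions;
          a, b, c are assumed to be already interpolated to the station and epoch."""
          se = np.sin(np.radians(elev_deg))
          top = 1.0 + a / (1.0 + b / (1.0 + c))
          bot = se + a / (se + b / (se + c))
          return top / bot

      # slant hydrostatic delay from an (illustrative) zenith hydrostatic delay
      zhd_m = 2.3
      slant_m = zhd_m * continued_fraction_mf(10.0, a=1.2e-3, b=2.9e-3, c=6.2e-2)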

  5. Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials

    NASA Astrophysics Data System (ADS)

    Cameron, Stephen; Silvestre, Luis; Snelson, Stanley

    2018-05-01

    We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.

  6. Optimism as a Prior Belief about the Probability of Future Reward

    PubMed Central

    Kalra, Aditi; Seriès, Peggy

    2014-01-01

    Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences choices between visual targets as learning about their association with reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e. its influence diminishes with increasing experience. Contrary to findings in the literature related to unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098

  7. Quantum information and the problem of mechanisms of biological evolution.

    PubMed

    Melkikh, Alexey V

    2014-01-01

    One of the most important conditions for replication in early evolution is the de facto elimination of the conformational degrees of freedom of the replicators, the mechanisms of which remain unclear. In addition, realistic evolutionary timescales can be established based only on partially directed evolution, further complicating this issue. A division of the various evolutionary theories into two classes has been proposed based on the presence or absence of a priori information about the evolving system. A priori information plays a key role in solving problems in evolution. Here, a model of partially directed evolution, based on the learning automata theory, which includes a priori information about the fitness space, is proposed. A potential repository of such prior information is the states of biologically important molecules. Thus, the need for extended evolutionary synthesis is discussed. Experiments to test the hypothesis of partially directed evolution are proposed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. A gantry-based tri-modality system for bioluminescence tomography

    PubMed Central

    Yan, Han; Lin, Yuting; Barber, William C.; Unlu, Mehmet Burcin; Gulsen, Gultekin

    2012-01-01

    A gantry-based tri-modality system that combines bioluminescence (BLT), diffuse optical (DOT), and x-ray computed tomography (XCT) into the same setting is presented here. The purpose of this system is to perform bioluminescence tomography using a multi-modality imaging approach. As parts of this hybrid system, XCT and DOT provide anatomical information and background optical property maps. This structural and functional a priori information is used to guide and constrain the bioluminescence reconstruction algorithm and ultimately improve the BLT results. The performance of the combined system is evaluated using multi-modality phantoms. In particular, a cylindrical heterogeneous multi-modality phantom that contains regions with higher optical absorption and x-ray attenuation is constructed. We showed that a 1.5 mm diameter bioluminescence inclusion can be localized accurately with the functional a priori information, while its source strength can be recovered more accurately using both the structural and functional a priori information. PMID:22559540

  9. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  10. 49 CFR 385.321 - What failures of safety management practices disclosed by the safety audit will result in a...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... disqualified by a State, has lost the right to operate a CMV in a State or who is disqualified to operate a... violation refers to a driver operating a CMV as defined under § 383.5. 9. § 387.7(a)—Operating a motor... Single occurrence. This violation refers to a driver operating a CMV as defined under § 390.5. 13. § 395...

  11. 49 CFR 385.321 - What failures of safety management practices disclosed by the safety audit will result in a...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... disqualified by a State, has lost the right to operate a CMV in a State or who is disqualified to operate a... violation refers to a driver operating a CMV as defined under § 383.5. 9. § 387.7(a)—Operating a motor... Single occurrence. This violation refers to a driver operating a CMV as defined under § 390.5. 13. § 395...

  12. 49 CFR 385.321 - What failures of safety management practices disclosed by the safety audit will result in a...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... disqualified by a State, has lost the right to operate a CMV in a State or who is disqualified to operate a... violation refers to a driver operating a CMV as defined under § 383.5. 9. § 387.7(a)—Operating a motor... Single occurrence. This violation refers to a driver operating a CMV as defined under § 390.5. 13. § 395...

  13. Building a 3D faulted a priori model for stratigraphic inversion: Illustration of a new methodology applied on a North Sea field case study

    NASA Astrophysics Data System (ADS)

    Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel

    2018-07-01

    Accurate reservoir characterization is needed throughout the development of an oil and gas field. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behaviors. At a later stage of field development, reservoir characterization can also help decide which recovery techniques should be used for fluid extraction. In complex media, such as faulted reservoirs, flow behavior predictions within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults or which potential barriers exist for fluid flows. Solving these issues rests on accurate fault characterization. In most cases, however, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic data is not kept during seismic inversion and further interpretation of the results. The goal of our study is first to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes that are extrapolated from well log data following the deposition mode; usual a priori model building methods, however, respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method to each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been used directly to model synthetic seismic data for our case study. Synthetic seismic data obtained from our 3D fault network model give much lower residuals than those from a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.

  14. Evaluating a Priori Ozone Profile Information Used in TEMPO (Tropospheric Emissions: Monitoring of Pollution) Tropospheric Ozone Retrievals

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew Stephen

    2017-01-01

    A primary objective for TOLNet is the evaluation and validation of space-based tropospheric O3 retrievals from future systems such as the Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite. This study is designed to evaluate the tropopause-based O3 climatology (TB-Clim) dataset which will be used as the a priori profile information in TEMPO O3 retrievals. This study also evaluates model simulated O3 profiles, which could potentially serve as a priori O3 profile information in TEMPO retrievals, from near-real-time (NRT) data assimilation model products (NASA Global Modeling and Assimilation Office (GMAO) Goddard Earth Observing System (GEOS-5) Forward Processing (FP) and Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2)) and full chemical transport model (CTM), GEOS-Chem, simulations. The TB-Clim dataset and model products are evaluated with surface (0-2 km) and tropospheric (0-10 km) TOLNet observations to demonstrate the accuracy of the suggested a priori dataset and information which could potentially be used in TEMPO O3 algorithms. This study also presents the impact of individual a priori profile sources on the accuracy of theoretical TEMPO O3 retrievals in the troposphere and at the surface. Preliminary results indicate that while the TB-Clim climatological dataset can replicate seasonally-averaged tropospheric O3 profiles observed by TOLNet, model-simulated profiles from a full CTM (GEOS-Chem is used as a proxy for CTM O3 predictions) resulted in more accurate tropospheric and surface-level O3 retrievals from TEMPO when compared to hourly (diurnal cycle evaluation) and daily-averaged (daily variability evaluation) TOLNet observations. Furthermore, it was determined that when large daily-averaged surface O3 mixing ratios are observed (65 ppb), which are important for air quality purposes, TEMPO retrieval values at the surface display higher correlations and less bias when applying CTM a priori profile information compared to all other data products. The primary reason for this is that CTM predictions better capture the spatio-temporal variability of the vertical profiles of observed tropospheric O3 compared to the TB-Clim dataset and other NRT data assimilation models evaluated during this study.

  15. From near to eternity: Spin-glass planting, tiling puzzles, and constraint-satisfaction problems

    NASA Astrophysics Data System (ADS)

    Hamze, Firas; Jacob, Darryl C.; Ochoa, Andrew J.; Perera, Dilina; Wang, Wenlong; Katzgraber, Helmut G.

    2018-04-01

    We present a methodology for generating Ising Hamiltonians of tunable complexity and with a priori known ground states based on a decomposition of the model graph into edge-disjoint subgraphs. The idea is illustrated with a spin-glass model defined on a cubic lattice, where subproblems, whose couplers are restricted to the two values {-1, +1}, are specified on unit cubes and are parametrized by their local degeneracy. The construction is shown to be equivalent to a type of three-dimensional constraint-satisfaction problem known as the tiling puzzle. By varying the proportions of subproblem types, the Hamiltonian can span a dramatic range of typical computational complexity, from fairly easy to many orders of magnitude more difficult than prototypical bimodal and Gaussian spin glasses in three space dimensions. We corroborate this behavior via experiments with different algorithms and discuss generalizations and extensions to different types of graphs.

  16. Localization of self-potential sources in volcano-electric effect with complex continuous wavelet transform and electrical tomography methods for an active volcano

    NASA Astrophysics Data System (ADS)

    Saracco, Ginette; Labazuy, Philippe; Moreau, Frédérique

    2004-06-01

    This study concerns the fluid flow circulation associated with magmatic intrusion during volcanic eruptions, investigated through electrical tomography. The objective is to localize and characterize the sources responsible for electrical disturbances during a time-evolution survey, between 1993 and 1999, of an active volcano, the Piton de la Fournaise. We have applied a dipolar probability tomography and a multi-scale analysis to synthetic and experimental SP data. We show the advantage of the complex continuous wavelet transform, which allows directional information to be obtained from the phase without a priori information on the sources. In both cases, we point out a translation of potential sources through the upper depths, around specific faults or structural features, during periods preceding a volcanic eruption. The set of parameters obtained (vertical and horizontal localization, multipolar degree and inclination) could be taken into account as criteria to define volcanic precursors.

  17. Interdependency in Multimodel Climate Projections: Component Replication and Result Similarity

    NASA Astrophysics Data System (ADS)

    Boé, Julien

    2018-03-01

    Multimodel ensembles are the main way to deal with model uncertainties in climate projections. However, the interdependencies between models that often share entire components make it difficult to combine their results in a satisfactory way. In this study, how the replication of components (atmosphere, ocean, land, and sea ice) between climate models impacts the proximity of their results is quantified precisely, in terms of climatological means and future changes. A clear relationship exists between the number of components shared by climate models and the proximity of their results. Even the impact of a single shared component is generally visible. These conclusions are true at both the global and regional scales. Given available data, it cannot be robustly concluded that some components are more important than others. Those results provide ways to estimate model interdependencies a priori rather than a posteriori based on their results, in order to define independence weights.

  18. Navigation, behaviors, and control modes in an autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Byler, Eric A.

    1995-01-01

    An Intelligent Mobile Sensing System (IMSS) has been developed for the automated inspection of radioactive and hazardous waste storage containers in warehouse facilities at Department of Energy sites. A 2D space of control modes was used that provides a combined view of reactive and planning approaches, wherein a 2D situation space is defined by dimensions representing the predictability of the agent's task environment and the constraint imposed by its goals. In this sense, the selection of appropriate systems for planning, navigation, and control depends on the problem at hand. The IMSS vehicle navigation system is based on a combination of feature-based motion, landmark sightings, and an a priori logical map of the mockup storage facility. Motion for the inspection activities is composed of different interactions of several available control modes, several obstacle avoidance modes, and several feature identification modes. Features used to drive these behaviors are both visual and acoustic.

  19. Controlled nanostructure formation by ultrafast laser pulses for color marking.

    PubMed

    Dusser, B; Sagan, Z; Soder, H; Faure, N; Colombier, J P; Jourlin, M; Audouard, E

    2010-02-01

    Precise nanostructuring of surfaces, and the resulting upgrades in material properties, is a notable outcome of ultrafast laser irradiation. Material characteristics can be designed on mesoscopic scales, carrying new optical properties. We demonstrate in this work the possibility of achieving material modifications with ultrashort pulses via polarization-dependent structure generation, which can produce specific color patterns. These oriented nanostructures created on the metal surface, called ripples, are typically smaller than the laser wavelength and lie in the range of the visible spectrum. In this way, a complex colorization process of the material, involving imprinting, calibration and reading, has been performed to associate a priori defined colors. This new method, based on control of the laser-driven nanostructure orientation, allows a large quantity of information to be accumulated on a minimal surface area, suggesting new applications for laser marking and new types of identifying codes.

  20. A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto

    2016-05-01

    This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.

  1. Control of a flexible link by shaping the closed loop frequency response function through optimised feedback filters

    NASA Astrophysics Data System (ADS)

    Del Vescovo, D.; D'Ambrogio, W.

    1995-01-01

    A frequency domain method is presented to design a closed-loop control for vibration reduction in flexible mechanisms. The procedure is developed on a single-link flexible arm, driven by a rotary single-degree-of-freedom servomotor, although the same technique may be applied to similar systems such as supports for aerospace antennae or solar panels. The method uses the structural frequency response functions (FRFs), thus avoiding system identification, which introduces modeling uncertainties. Two closed loops are implemented: the inner loop uses acceleration feedback with the aim of making the FRF similar to that of an equivalent rigid link; the outer loop feeds back displacements to achieve a fast positioning response and zero steady-state error. In both cases, the controller type is established a priori, while the actual characteristics are defined by an optimisation procedure in which the relevant FRF is constrained within prescribed bounds and stability is taken into account.

  2. Local-Scale Simulations of Nucleate Boiling on Micrometer Featured Surfaces: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Moreno, Gilberto; Narumanchi, Sreekant V

    2017-08-03

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a featured-surface-resolved grid. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate with higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device-scale continuum models where local feature representation is not possible.

  3. Local-Scale Simulations of Nucleate Boiling on Micrometer-Featured Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Moreno, Gilberto; Narumanchi, Sreekant V

    2017-07-12

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a featured-surface-resolved grid. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate with higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device-scale continuum models where local feature representation is not possible.

  4. A new simplex chemometric approach to identify olive oil blends with potentially high traceability.

    PubMed

    Semmar, N; Laroussi-Mezghani, S; Grati-Kamoun, N; Hammami, M; Artaud, J

    2016-10-01

    Olive oil blends (OOBs) are complex matrices combining different cultivars in variable proportions. Although qualitative determination of OOBs has been the subject of several chemometric studies, quantitative evaluation of their contents remains poorly developed because of traceability difficulties concerning co-occurring cultivars. On this question, we recently published an original simplex approach for developing predictive models of the proportions of co-occurring cultivars from the chemical profiles of the resulting blends (Semmar & Artaud, 2015). Beyond predictive model construction and validation, this paper presents an extension based on prediction error analysis to statistically define the blends with the highest predictability among all those that can be made by mixing cultivars in different proportions. This provides an interesting way to identify a priori labeled commercial products with potentially high traceability, taking into account the natural chemical variability of the constituent cultivars. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Numerical stability in problems of linear algebra.

    NASA Technical Reports Server (NTRS)

    Babuska, I.

    1972-01-01

    Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. A numerical process is then defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how a priori information, for instance knowledge of the row sums of the matrix, can be utilized. Information of this type is frequently available when the system arises in connection with the numerical solution of differential equations.
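
    As one concrete instance of the kind of process analyzed (solving a tridiagonal linear system by a fixed recurrence of elementary operations), the sketch below implements the standard Thomas algorithm in Python; it is given only to make the setting tangible and is not one of the specific old or new processes examined in the report.

      import numpy as np

      def thomas_solve(lower, diag, upper, rhs):
          """Thomas algorithm for a tridiagonal system: O(n) forward elimination
          and back substitution.  Stable without pivoting for diagonally dominant
          matrices, e.g. many systems arising from differential equations."""
          n = len(diag)
          c = np.array(upper, dtype=float)          # superdiagonal (length n-1)
          d = np.array(diag, dtype=float)           # main diagonal (length n)
          b = np.array(rhs, dtype=float)            # right-hand side (length n)
          for i in range(1, n):                     # forward elimination
              w = lower[i - 1] / d[i - 1]
              d[i] -= w * c[i - 1]
              b[i] -= w * b[i - 1]
          x = np.empty(n)
          x[-1] = b[-1] / d[-1]                     # back substitution
          for i in range(n - 2, -1, -1):
              x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
          return x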

  6. A theory of biological relativity: no privileged level of causation.

    PubMed

    Noble, Denis

    2012-02-06

    Must higher level biological processes always be derivable from lower level data and mechanisms, as assumed by the idea that an organism is completely defined by its genome? Or are higher level properties necessarily also causes of lower level behaviour, involving actions and interactions both ways? This article uses modelling of the heart, and its experimental basis, to show that downward causation is necessary and that this form of causation can be represented as the influences of initial and boundary conditions on the solutions of the differential equations used to represent the lower level processes. These insights are then generalized. A priori, there is no privileged level of causation. The relations between this form of 'biological relativity' and forms of relativity in physics are discussed. Biological relativity can be seen as an extension of the relativity principle by avoiding the assumption that there is a privileged scale at which biological functions are determined.

  7. A theory of biological relativity: no privileged level of causation

    PubMed Central

    Noble, Denis

    2012-01-01

    Must higher level biological processes always be derivable from lower level data and mechanisms, as assumed by the idea that an organism is completely defined by its genome? Or are higher level properties necessarily also causes of lower level behaviour, involving actions and interactions both ways? This article uses modelling of the heart, and its experimental basis, to show that downward causation is necessary and that this form of causation can be represented as the influences of initial and boundary conditions on the solutions of the differential equations used to represent the lower level processes. These insights are then generalized. A priori, there is no privileged level of causation. The relations between this form of ‘biological relativity’ and forms of relativity in physics are discussed. Biological relativity can be seen as an extension of the relativity principle by avoiding the assumption that there is a privileged scale at which biological functions are determined. PMID:23386960

  8. A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.

    PubMed

    Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng

    2016-05-01

    In this paper, an extreme learning control (ELC) framework using a single-hidden-layer feedforward network (SLFN) with random hidden nodes is proposed for tracking control of an unmanned surface vehicle suffering from unknown dynamics and external disturbances. By combining tracking errors with their derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of the transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator, which requires neither a priori system knowledge nor tuning of the input weights. Only the output weights of the SLFN need to be updated, by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
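
    The "extreme learning" ingredient, random and fixed hidden-layer weights with only the output weights solved for, can be sketched in a few lines for the batch (offline) case. The paper's online, projection-based adaptive update and the control loop are not reproduced here, and all names below are illustrative.

      import numpy as np

      def elm_fit(X, Y, n_hidden=50, seed=0):
          """Extreme-learning-machine regression: input weights and biases are
          random and fixed; only the output weights beta are solved for."""
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed)
          b = rng.normal(size=n_hidden)                  # random hidden biases (fixed)
          H = np.tanh(X @ W + b)                         # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # least-squares output weights
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta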

  9. An Interpersonal Analysis of Pathological Personality Traits in DSM-5

    PubMed Central

    Wright, Aidan G.C.; Pincus, Aaron L.; Hopwood, Christopher J.; Thomas, Katherine M.; Markon, Kristian E.; Krueger, Robert F.

    2012-01-01

    The proposed changes to the personality disorder section of the DSM-5 place an increased focus on interpersonal impairment as one of the defining features of personality psychopathology. In addition, a proposed trait model has been offered to provide a means of capturing phenotypic variation in the expression of personality disorder. In this study, we subject the proposed DSM-5 traits to interpersonal analysis using the Inventory of Interpersonal Problems – Circumplex scales via the structural summary method for circumplex data. DSM-5 traits were consistently associated with generalized interpersonal dysfunction, suggesting that they are maladaptive in nature; the majority of traits demonstrated discriminant validity, with prototypical and differentiated interpersonal problem profiles, and conformed well to a priori hypothesized associations. These results are discussed in the context of the DSM-5 proposal and contemporary interpersonal theory, with a particular focus on potential areas for expansion of the DSM-5 trait model. PMID:22589411

  10. An information theory approach for evaluating earth radiation budget (ERB) measurements - Nonuniform sampling of diurnal longwave flux variations

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Direskeneli, Haldun; Barkstrom, Bruce R.

    1991-01-01

    Satellite measurements are subject to a wide range of uncertainties due to their temporal, spatial, and directional sampling characteristics. An information-theory approach is suggested to examine the nonuniform temporal sampling of ERB measurements. The information (i.e., its entropy or uncertainty) before and after the measurements is determined, and information gain (IG) is defined as a reduction in the uncertainties involved. A stochastic model for the diurnal outgoing flux variations that affect the ERB is developed. Using Gaussian distributions for the a priori and measured radiant exitance fields, the IG is obtained by computing the a posteriori covariance. The IG for the monthly outgoing flux measurements is examined for different orbital parameters and orbital tracks, using the Earth Observing System orbital parameters as specific examples. Variations in IG due to changes in the orbit's inclination angle and the initial ascending node local time are investigated.
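
    For Gaussian a priori and a posteriori fields, the information gain reduces to half the log-ratio of the covariance determinants, IG = ½ ln(det P_prior / det P_post). A minimal numerical sketch, with made-up covariances rather than ERB values:

      import numpy as np

      def gaussian_information_gain(P_prior, P_post):
          """Entropy reduction (in nats) between Gaussian a priori and a posteriori
          fields:  IG = 0.5 * ln( det(P_prior) / det(P_post) )."""
          _, logdet_prior = np.linalg.slogdet(P_prior)
          _, logdet_post = np.linalg.slogdet(P_post)
          return 0.5 * (logdet_prior - logdet_post)

      # e.g. a measurement that halves the variance of one flux component
      print(gaussian_information_gain(np.diag([4.0, 1.0]), np.diag([2.0, 1.0])))  # ~0.35 nats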

  11. Bond Ellipticity Alternation: An Accurate Descriptor of the Nonlinear Optical Properties of π-Conjugated Chromophores.

    PubMed

    Lopes, Thiago O; Machado, Daniel F Scalabrini; Risko, Chad; Brédas, Jean-Luc; de Oliveira, Heibbe C B

    2018-03-15

    Well-defined structure-property relationships offer a conceptual basis to afford a priori design principles to develop novel π-conjugated molecular and polymer materials for nonlinear optical (NLO) applications. Here, we introduce the bond ellipticity alternation (BEA) as a robust parameter to assess the NLO characteristics of organic chromophores and illustrate its effectiveness in the case of streptocyanines. BEA is based on the symmetry of the electron density, a physical observable that can be determined from experimental X-ray electron densities or from quantum-chemical calculations. Through comparisons to the well-established bond-length alternation and π-bond order alternation parameters, we demonstrate the generality of BEA to foreshadow NLO characteristics and underline that, in the case of large electric fields, BEA is a more reliable descriptor. Hence, this study introduces BEA as a prominent descriptor of organic chromophores of interest for NLO applications.

  12. Equilibrium statistical mechanics on correlated random graphs

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Agliari, Elena

    2011-02-01

    Biological and social networks have recently attracted great attention from physicists. Among several aspects, two main ones may be stressed: a non-trivial topology of the graph describing the mutual interactions between agents and, typically, imitative, weighted, interactions. Despite such aspects being widely accepted and empirically confirmed, the schemes currently exploited in order to generate the expected topology are based on a priori assumptions and, in most cases, implement constant intensities for links. Here we propose a simple shift [-1,+1] → [0,+1] in the definition of patterns in a Hopfield model: a straightforward effect is the conversion of frustration into dilution. In fact, we show that by varying the bias of the pattern distribution, the network topology (generated by the reciprocal affinities among agents, i.e. the Hebbian rule) crosses various well-known regimes, ranging from fully connected, to an extreme dilution scenario, then to completely disconnected. These features, as well as small-world properties, are, in this context, emergent and no longer imposed a priori. The model is also investigated thoroughly from a thermodynamics perspective: the Ising model defined on the resulting graph is analytically solved (at a replica symmetric level) by extending the double stochastic stability technique, and presented together with its fluctuation theory for a picture of criticality. Overall, our findings show that, at least at equilibrium, dilution (of whatever kind) simply decreases the strength of the coupling felt by the spins, but leaves the paramagnetic/ferromagnetic flavors unchanged. The main difference with respect to previous investigations is that, within our approach, replicas do not appear: instead of (multi-)overlaps as order parameters, we introduce a class of magnetizations on all the possible subgraphs belonging to the main one investigated; as a consequence, for these objects a closure of a self-consistent relation is achieved.

  13. A priori motion models for four-dimensional reconstruction in gated cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalush, D.S.; Tsui, B.M.W.; Cui, Lin

    1996-12-31

    We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
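
    The general idea of a motion-informed Gibbs prior can be sketched as follows: each voxel in each gate is assigned a displacement pointing to the location its activity is assumed to occupy in the next time frame, and the prior penalizes differences along these motion-compensated temporal cliques. This is only a schematic quadratic-energy illustration under assumed integer displacements, not the clique-weighting scheme used in the paper.

```python
import numpy as np

def motion_compensated_prior_energy(frames, motion, beta=1.0):
    """Quadratic Gibbs-prior energy over temporal cliques that follow assumed motion.

    frames : array (T, X, Y) of activity estimates for each gate
    motion : integer displacement field (T, X, Y, 2) giving, for each voxel,
             the assumed location of that activity in the next frame
    """
    T, X, Y = frames.shape
    energy = 0.0
    for t in range(T - 1):
        for x in range(X):
            for y in range(Y):
                dx, dy = motion[t, x, y]
                xn, yn = np.clip(x + dx, 0, X - 1), np.clip(y + dy, 0, Y - 1)
                diff = frames[t, x, y] - frames[t + 1, xn, yn]
                energy += beta * diff ** 2
    return energy

# Tiny example: 4 gates of an 8 x 8 slice with zero assumed motion
frames = np.random.default_rng(7).random((4, 8, 8))
motion = np.zeros((4, 8, 8, 2), dtype=int)
print(motion_compensated_prior_energy(frames, motion))
```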

  14. SDM - A geodetic inversion code incorporating with layered crust structure and curved fault geometry

    NASA Astrophysics Data System (ADS)

    Wang, Rongjiang; Diao, Faqi; Hoechner, Andreas

    2013-04-01

    Currently, inversion of geodetic data for earthquake fault ruptures is mostly based on a uniform half-space earth model because of its closed-form Green's functions. However, the layered structure of the crust can significantly affect the inversion results. The other effect, which is often neglected, is related to the curved fault geometry. In particular, the fault planes of most megathrust earthquakes vary in dip angle with depth from a few to several tens of degrees. The strike directions of many large earthquakes are also variable. For simplicity, such curved fault geometry is usually approximated by several connected rectangular segments, leading to an artificial loss of slip resolution and data fit. In this presentation, we introduce a free FORTRAN code that incorporates the layered crust structure and curved fault geometry in a user-friendly way. The name SDM stands for Steepest Descent Method, an iterative algorithm used for the constrained least-squares optimization. The new code can be used for joint inversion of different datasets, which may include systematic offsets, as most geodetic data are obtained from relative measurements. These offsets are treated as unknowns to be determined simultaneously with the slip unknowns. In addition, a-priori and physical constraints are considered. The a-priori constraint includes the upper limit of the slip amplitude and the variation range of the slip direction (rake angle) defined by the user. The physical constraint is needed to obtain a smooth slip model, which is realized through a smoothing term minimized together with the misfit to the data. In contrast to most previous inversion codes, the smoothing can optionally be applied to slip or to stress-drop. The code works with an input file, a well-documented example of which is provided with the source code. Application examples are demonstrated.
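
    A minimal sketch of the kind of constrained, smoothed least-squares problem such a code solves: a projected steepest-descent iteration on the data misfit plus a smoothing term, with the a priori upper limit on slip amplitude enforced by clipping. The matrices G, d, and L and all parameter values are hypothetical placeholders; this is not the SDM implementation itself.

```python
import numpy as np

def sdm_like_inversion(G, d, L, alpha=1.0, slip_max=10.0, n_iter=5000, step=None):
    """Projected steepest-descent solution of
        min_m ||G m - d||^2 + alpha^2 ||L m||^2   subject to 0 <= m <= slip_max.

    G : Green's function matrix (data x slip patches), d : observations,
    L : smoothing operator (e.g. a difference/Laplacian matrix) over the patches.
    """
    A = G.T @ G + alpha**2 * (L.T @ L)
    b = G.T @ d
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)             # safe step size
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = A @ m - b
        m = np.clip(m - step * grad, 0.0, slip_max)    # enforce a priori bounds
    return m

# Tiny synthetic example: 40 observations, 25 fault patches, first-difference smoothing
rng = np.random.default_rng(8)
G = rng.normal(size=(40, 25))
L = np.eye(25) - np.eye(25, k=1)
d = G @ np.clip(rng.normal(3.0, 1.0, 25), 0, None) + 0.1 * rng.normal(size=40)
print(sdm_like_inversion(G, d, L).round(2))
```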

  15. Non-targeted, high resolution mass spectrometry strategy for simultaneous monitoring of xenobiotics and endogenous compounds in green sea turtles on the Great Barrier Reef.

    PubMed

    Heffernan, Amy L; Gómez-Ramos, Maria M; Gaus, Caroline; Vijayasarathy, Soumini; Bell, Ian; Hof, Christine; Mueller, Jochen F; Gómez-Ramos, Maria J

    2017-12-01

    Chemical contamination poses a threat to ecosystem, biota and human health, and identifying these hazards is a complex challenge. Traditional hazard identification relies on a priori-defined targets of limited chemical scope, and is generally inappropriate for exploratory studies such as explaining toxicological effects in environmental systems. Here we present a non-target high resolution mass spectrometry environmental monitoring study with multivariate statistical analysis to simultaneously detect biomarkers of exposure (e.g. xenobiotics) and biomarkers of effect in whole turtle blood. Borrowing the concept from clinical chemistry, a case-control sampling approach was used to investigate the potential influence of xenobiotics of anthropogenic origin on free-ranging green sea turtles (Chelonia mydas) from a remote, offshore 'control' site; and two coastal 'case' sites influenced by urban/industrial and agricultural activities, respectively, on the Great Barrier Reef in North Queensland, Australia. Multiple biomarkers of exposure, including sulfonic acids (n=9), a carbamate insecticide metabolite, and other industrial chemicals; and five biomarkers of effect (lipid peroxidation products), were detected in case sites. Additionally, two endogenous biomarkers of neuroinflammation and oxidative stress were identified, and showed moderate-to-strong correlations with clinical measures of inflammation and liver dysfunction. Our data filtering strategy overcomes limitations of traditional a priori selection of target compounds, and adds to the limited environmental xenobiotic metabolomics literature. To our knowledge this is the first case-control study of xenobiotics in marine megafauna, and demonstrates the utility of green sea turtles to link internal and external exposure, to explain potential toxicological effects in environmental systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. A glycosylated recombinant human granulocyte colony stimulating factor produced in a novel protein production system (AVI-014) in healthy subjects: a first-in human, single dose, controlled study

    PubMed Central

    Varki, Roslyn; Pequignot, Ed; Leavitt, Mark C; Ferber, Andres; Kraft, Walter K

    2009-01-01

    Background AVI-014 is an egg white-derived, recombinant, human granulocyte colony-stimulating factor (G-CSF). This healthy volunteer study is the first human investigation of AVI-014. Methods 24 male and female subjects received a single subcutaneous injection of AVI-014 at 4 or 8 mcg/kg. 16 control subjects received 4 or 8 mcg/kg of filgrastim (Neupogen, Amgen) in a partially blinded, parallel fashion. Results The Geometric Mean Ratio (GMR) (90% CI) of 4 mcg/kg AVI-014/filgrastim AUC(0–72 hr) was 1.00 (0.76, 1.31) and Cmax was 0.86 (0.66, 1.13). At the 8 mcg/kg dose, the AUC(0–72) GMR was 0.89 (0.69, 1.14) and Cmax was 0.76 (0.58, 0.98). A priori pharmacokinetic bioequivalence was defined as the 90% CI of the GMR bounded by 0.8–1.25. Both the white blood cell and absolute neutrophil count area under the % increase curve AUC(0–9 days) and Cmax (maximal % increase from baseline) GMR at 4 and 8 mcg/kg fell within the 0.5–2.0 a priori bound set for pharmacodynamic bioequivalence. The CD34+ % increase curve AUC(0–9 days) and Cmax GMR for both doses was ~1, but 90% confidence intervals were large due to inherent variance, and this measure did not meet pharmacodynamic bioequivalence. AVI-014 demonstrated a side effect profile similar to that of filgrastim. Conclusion AVI-014 has safety, pharmacokinetic, and pharmacodynamic properties comparable to filgrastim at an equal dose in healthy volunteers. These findings support further investigation of AVI-014. PMID:19175929
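
    A brief sketch of how a geometric mean ratio and its 90% confidence interval are typically computed and checked against the 0.8-1.25 bioequivalence bounds: the analysis is done on the log scale and back-transformed. The data below are randomly generated for illustration and are not the study values; the conservative degrees-of-freedom choice is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

def gmr_with_ci(test_values, ref_values, alpha=0.10):
    """Geometric mean ratio and (1 - alpha) CI for two independent groups,
    computed on the log scale and then back-transformed."""
    lt, lr = np.log(test_values), np.log(ref_values)
    diff = lt.mean() - lr.mean()
    se = np.sqrt(lt.var(ddof=1) / lt.size + lr.var(ddof=1) / lr.size)
    df = min(lt.size, lr.size) - 1                    # conservative df choice
    t = stats.t.ppf(1 - alpha / 2, df)
    return np.exp(diff), (np.exp(diff - t * se), np.exp(diff + t * se))

# Illustrative (not study) data: AUC(0-72 h) for test and reference products
rng = np.random.default_rng(1)
test = rng.lognormal(mean=5.0, sigma=0.3, size=24)
ref = rng.lognormal(mean=5.0, sigma=0.3, size=16)
gmr, (lo, hi) = gmr_with_ci(test, ref)
print(f"GMR={gmr:.2f}, 90% CI=({lo:.2f}, {hi:.2f}), bioequivalent={0.8 <= lo and hi <= 1.25}")
```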

  17. A novel method to infer the origin of polyploids from Amplified Fragment Length Polymorphism data reveals that the alpine polyploid complex of Senecio carniolicus (Asteraceae) evolved mainly via autopolyploidy.

    PubMed

    Winkler, Manuela; Escobar García, Pedro; Gattringer, Andreas; Sonnleitner, Michaela; Hülber, Karl; Schönswetter, Peter; Schneeweiss, Gerald M

    2017-09-01

    Despite its evolutionary and ecological relevance, the mode of polyploid origin has been notoriously difficult to reconstruct from molecular data. Here, we present a method to identify the putative parents of polyploids and thus to infer the mode of their origin (auto- vs. allopolyploidy) from Amplified Fragment Length Polymorphism (AFLP) data. To this end, we use Cohen's d of distances between in silico polyploids, generated within a priori defined scenarios of origin from a priori delimited putative parental entities (e.g. taxa, genetic lineages), and natural polyploids. Simulations show that the discriminatory power of the proposed method increases mainly with increasing divergence between the lower-ploid putative ancestors and less so with increasing delay of polyploidization relative to the time of divergence. We apply the new method to the Senecio carniolicus aggregate, distributed in the European Alps and comprising two diploid, one tetraploid and one hexaploid species. In the eastern part of its distribution, the S. carniolicus aggregate was inferred to comprise an autopolyploid series, whereas for western populations of the tetraploid species, an allopolyploid origin involving the two diploid species was the most likely scenario. Although this suggests that the tetraploid species has two independent origins, other evidence (ribotype distribution, morphology) is consistent with the hypothesis of an autopolyploid origin with subsequent introgression by the second diploid species. Altogether, identifying the best among alternative scenarios using Cohen's d can be straightforward, but particular scenarios, such as allopolyploid origin vs. autopolyploid origin with subsequent introgression, remain difficult to distinguish. © 2016 John Wiley & Sons Ltd.
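
    A minimal sketch of the Cohen's d comparison described above, computed between hypothetical distributions of distances from natural polyploids to in silico polyploids generated under two alternative scenarios of origin; the distance values are simulated placeholders, not AFLP data from the study.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical distances from natural polyploids to in silico polyploids
# generated under two alternative scenarios of origin.
rng = np.random.default_rng(2)
dist_to_auto_simulations = rng.normal(0.18, 0.03, size=500)
dist_to_allo_simulations = rng.normal(0.25, 0.04, size=500)

# A strongly negative or positive d indicates which simulated scenario lies
# closer to the natural polyploids; |d| near zero means low discriminatory power.
print(cohens_d(dist_to_auto_simulations, dist_to_allo_simulations))
```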

  18. Measurement Properties of the Modified Spinal Function Sort (M-SFS): Is It Reliable and Valid in Workers with Chronic Musculoskeletal Pain?

    PubMed

    Trippolini, Maurizio Alen; Janssen, Svenja; Hilfiker, Roger; Oesch, Peter

    2018-06-01

    Purpose To analyze the reliability and validity of a picture-based questionnaire, the Modified Spinal Function Sort (M-SFS). Methods Sixty-two injured workers with chronic musculoskeletal disorders (MSD) were recruited from two work rehabilitation centers. Internal consistency was assessed by Cronbach's alpha. Construct validity was tested based on four a priori hypotheses. Structural validity was measured with principal component analysis (PCA). Test-retest reliability and agreement was evaluated using the intraclass correlation coefficient (ICC) and measurement error with the limits of agreement (LoA). Results Total score of the M-SFS was 54.4 (SD 16.4) and 56.1 (16.4) for test and retest, respectively. Item distribution showed no ceiling effects. Cronbach's alpha was 0.94 and 0.95 for test and retest, respectively. PCA showed the presence of four components explaining a total of 74% of the variance. Item communalities were >0.6 in 17 out of 20 items. ICC was 0.90, LoA was ±12.6/16.2 points. The correlations between the M-SFS were 0.89 with the original SFS, 0.49 with the Pain Disability Index, -0.37 and -0.33 with the Numeric Rating Scale for actual pain, -0.52 for self-reported disability due to chronic low back pain, and 0.50, 0.56-0.59 with three distinct lifting tests. No a priori defined hypothesis for construct validity was rejected. Conclusions The M-SFS allows reliable and valid assessment of perceived self-efficacy for work-related tasks and can be recommended for use in patients with chronic MSD. Further research should investigate the proposed M-SFS score of <56 for its predictive validity for non-return to work.
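
    For illustration, a short sketch of two of the reported reliability statistics, Cronbach's alpha and Bland-Altman limits of agreement for test-retest total scores, computed on simulated item scores; the data, the item scaling, and the simple latent-trait generator are assumptions of this sketch, not the study data.

```python
import numpy as np

def cronbachs_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def limits_of_agreement(test, retest):
    """Bland-Altman 95% limits of agreement for test-retest total scores."""
    diff = retest - test
    return diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1)

# Hypothetical data: 62 workers, 20 items, scores driven by one latent trait
rng = np.random.default_rng(3)
ability = rng.normal(0, 1, size=(62, 1))
test_items = ability + rng.normal(0, 0.7, size=(62, 20))
retest_items = ability + rng.normal(0, 0.7, size=(62, 20))

print(cronbachs_alpha(test_items))
print(limits_of_agreement(test_items.sum(axis=1), retest_items.sum(axis=1)))
```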

  19. Evaluation of flamelet/progress variable model for laminar pulverized coal combustion

    NASA Astrophysics Data System (ADS)

    Wen, Xu; Wang, Haiou; Luo, Yujuan; Luo, Kun; Fan, Jianren

    2017-08-01

    In the present work, the flamelet/progress variable (FPV) approach based on two mixture fractions is formulated for pulverized coal combustion and then evaluated in laminar counterflow coal flames under different operating conditions through both a priori and a posteriori analyses. Two mixture fractions, Zvol and Zchar, are defined to characterize the mixing between the oxidizer and the volatile matter/char reaction products. A coordinate transformation is conducted to map the flamelet solutions from a unit triangle space (Zvol, Zchar) to a unit square space (Z, X) so that a more stable solution can be achieved. To consider the heat transfers between the coal particle phase and the gas phase, the total enthalpy is introduced as an additional manifold. As a result, the thermo-chemical quantities are parameterized as a function of the mixture fraction Z, the mixing parameter X, the normalized total enthalpy Hnorm, and the reaction progress variable YPV. The validity of the flamelet chemtable and the selected trajectory variables is first evaluated in a priori tests by comparing the tabulated quantities with the results obtained from numerical simulations with detailed chemistry. The comparisons show that the major species mass fractions can be predicted by the FPV approach in all combustion regions for all operating conditions, while the CO and H2 mass fractions are over-predicted in the premixed flame reaction zone. The a posteriori study shows that overall good agreement between the FPV results and those obtained from detailed chemistry simulations can be achieved, although the coal particle ignition is predicted to be slightly earlier. Overall, the validity of the FPV approach for laminar pulverized coal combustion is confirmed and its performance in turbulent pulverized coal combustion will be tested in future work.
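
    One common way to realize such a triangle-to-square coordinate transformation (an assumption here; the paper's exact definition may differ) is to take Z as the total mixture fraction and X as the relative contribution of the volatile matter, which makes the mapping invertible on the unit square:

```python
import numpy as np

def triangle_to_square(z_vol, z_char, eps=1e-12):
    """Map (Zvol, Zchar) from the unit triangle (Zvol + Zchar <= 1) to a unit
    square (Z, X). Assumed convention: Z is the total mixture fraction and X
    the relative contribution of the volatiles."""
    z = z_vol + z_char
    x = z_vol / np.maximum(z, eps)
    return z, x

def square_to_triangle(z, x):
    """Inverse mapping back to the two mixture fractions."""
    return z * x, z * (1.0 - x)

z, x = triangle_to_square(0.12, 0.04)
print(z, x, square_to_triangle(z, x))   # round-trips to (0.12, 0.04)
```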

  20. Development of a Vitality Scan related to workers' sustainable employability: a study assessing its internal consistency and construct validity.

    PubMed

    Brouwers, Livia A M; Engels, Josephine A; Heerkens, Yvonne F; van der Beek, Allard J

    2015-06-16

    Most validated sustainable employability questionnaires are extensive and difficult to obtain. Our objective was to develop a usable and valid tool, a Vitality Scan, to determine possible signs of stagnation in one's functioning related to sustainable employability and to establish the instrument's internal consistency and construct validity. A literature review was performed and expert input was obtained to develop an online survey of 31 items. A sample of 1722 Dutch employees was recruited. Internal consistency was assessed by Cronbach's alpha. The underlying theoretical concepts were extracted by factor analysis using a principal component method. For construct validity, a priori hypotheses were defined for expected differences between known subgroups: 1) older workers would report more stagnation than younger workers, and 2) less educated workers would report more problems than the highly educated ones. Both hypotheses were statistically tested using ANOVA. Internal consistency measures and factor analysis resulted in five subscales with acceptable to good reliability (Cronbach's alpha 0.72-0.87). These subscales included: balance and competence, motivation and involvement, resilience, mental and physical health, and social support at work. Three items were removed following these analyses. In accordance with our a priori hypothesis 1, the ANOVA showed that older workers reported the most problems, while younger workers reported the least problems. However, hypothesis 2 was not confirmed: no significant differences were found for education level. The developed Vitality Scan - with the 28 remaining items - showed good measurement properties. It is applicable as a user-friendly, evaluative instrument for workers' sustainable employability. The scan's value for determining whether or not the employee is at risk for a decrease in functioning during present and future work should be further tested.

  1. Occupational risk factors for low grade and high grade glioma: results from an international case control study of adult brain tumours.

    PubMed

    Schlehofer, Brigitte; Hettinger, Iris; Ryan, Philip; Blettner, Maria; Preston-Martin, Susan; Little, Julian; Arslan, Annie; Ahlbom, Anders; Giles, Graham G; Howe, Geoffrey R; Ménégoz, Francoise; Rodvall, Ylva; Choi, Won N; Wahrendorf, Jürgen

    2005-01-01

    The majority of suspected occupational risk factors for adult brain tumours have yet to be confirmed as etiologically relevant. Within an international case-control study on brain tumours, lifelong occupational histories and information on exposures to specific substances were obtained by direct interviews to further investigate occupational risk factors for glioma. This is one of the largest studies of brain tumours in adults, including 1,178 cases and 1,987 population controls from 8 collaborating study centres matched for age, gender and centre. All occupational information was aggregated into 16 occupational categories. In a pooled analysis, odds ratios (OR), adjusted for education, were estimated separately for men and women and for high-grade glioma (HGG) and low-grade glioma (LGG), focusing especially on 6 categories defined a priori: agricultural, chemical, construction, metal, electrical/electronic and transport. For men, an elevated OR of glioma associated with the category "metal" (OR = 1.24, 95% CI 0.96-1.62) was seen, which appeared to be largely accounted for by LGG (OR = 1.59, 95% CI 1.00-2.52). For the other 5 occupational categories, no elevated risks for glioma were observed. For women the only noteworthy observation for the 6 a priori categories was an inverse association with the "agriculture" category (OR = 0.60, 95% CI 0.36-0.99). Apart from the 6 major categories, women working in food production or food processing (category "food") showed an increased OR of 1.95 (95% CI 1.04-3.68). None of the 20 substance groups was positively associated with glioma risk. Although some other point estimates were elevated, they lacked statistical significance. The results do not provide evidence of a strong association between occupational exposures and glioma development.

  2. Prototypic Development and Evaluation of a Medium Format Metric Camera

    NASA Astrophysics Data System (ADS)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, for large-volume applications the availability of a metric camera would offer several advantages: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. With this article, the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, are presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for the deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses show a highly stable interior orientation of the camera and indicate its suitability for a priori camera calibration in subsequent 3D measurements.

  3. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
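
    A minimal sketch of parameterization (2), the discrete cosine transform representation: the saturation field is described by a small block of low-order DCT coefficients, which sharply reduces the number of degrees of freedom sampled by the MCMC. The field size, number of retained coefficients, and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def low_order_dct_model(coeffs, shape, n_keep):
    """Reconstruct a 2-D saturation field from its first n_keep x n_keep
    DCT coefficients; all higher-order coefficients are fixed at zero."""
    full = np.zeros(shape)
    full[:n_keep, :n_keep] = coeffs.reshape(n_keep, n_keep)
    field = idctn(full, norm="ortho")
    return np.clip(field, 0.0, 1.0)          # saturations live in [0, 1]

# Hypothetical example: compress a 64 x 64 synthetic gas-saturation field
rng = np.random.default_rng(4)
true_field = np.clip(rng.normal(0.2, 0.1, size=(64, 64)), 0, 1)
coeffs = dctn(true_field, norm="ortho")[:8, :8]     # 64 parameters instead of 4096
approx = low_order_dct_model(coeffs, true_field.shape, n_keep=8)
print(np.abs(approx - true_field).mean())
```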

  4. Deciding on race: a diffusion model analysis of race-categorisation.

    PubMed

    Benton, Christopher P; Skinner, Andrew L

    2015-06-01

    It has long been known that a person's race can affect their decisions about people of another race; an observation that clearly taps into some deep societal issues. However, in order to behave differently in response to someone else's race, you must first categorise that person as other-race. The current study investigates the process of race-categorisation. Two groups of participants, Asian and Caucasian, rapidly classified facial images that varied from strongly Asian, through racially intermediate, to strongly Caucasian. In agreement with previous findings, there was a difference in category boundary between the two groups. Asian participants more frequently judged intermediate images as Caucasian and vice versa. We fitted a decision model, the Ratcliff diffusion model, to our two choice reaction time data. This model provides an account of the processes thought to underlie binary choice decisions. Within its architecture it has two components that could reasonably lead to a difference in race category boundary, these being evidence accumulation rate and a priori bias. The latter is the expectation or prior belief that a participant brings to the task, whilst the former indexes sensitivity to race-dependent perceptual cues. Whilst we find no good evidence for a difference in a priori bias between our two groups, we do find evidence for a difference in evidence accumulation rate. Our Asian participants were more sensitive to Caucasian cues within the images than were our Caucasian participants (and vice versa). These results support the idea that differences in perceptual sensitivity to race-defining visual characteristics drive differences in race categorisation. We propose that our findings fit with a wider view in which perceptual adaptation plays a central role in the visual processing of own and other race. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
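
    The distinction between the two diffusion-model components can be made concrete with a simple random-walk simulation: a change in drift rate alters how strongly the accumulated evidence favours one category, whereas a change in starting point shifts the response proportions even with uninformative evidence. The parameter values below are illustrative assumptions, not fitted estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_diffusion(drift, start_bias=0.5, boundary=1.0, dt=0.001, noise=1.0, n_trials=1000):
    """Proportion of upper-boundary decisions for a simple two-boundary
    diffusion process. `start_bias` is the relative starting point
    (0.5 = unbiased); `drift` is the evidence accumulation rate."""
    upper = 0
    for _ in range(n_trials):
        x = start_bias * boundary
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        upper += x >= boundary
    return upper / n_trials

# For a racially intermediate face (near-zero drift), a drift-rate change and a
# starting-point change both shift the category boundary:
print(simulate_diffusion(drift=0.3, start_bias=0.5))   # higher sensitivity to one set of cues
print(simulate_diffusion(drift=0.0, start_bias=0.6))   # a priori bias toward one response
```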

  5. Accuracy Investigation of Creating Orthophotomaps Based on Images Obtained by Applying Trimble-UX5 UAV

    NASA Astrophysics Data System (ADS)

    Hlotov, Volodymyr; Hunina, Alla; Siejka, Zbigniew

    2017-06-01

    The main purpose of this work is to confirm the possibility of producing large-scale orthophotomaps using the Trimble UX5 unmanned aerial vehicle (UAV). Before the aerial survey, a planned-altitude reference network was established over the study area. The study area was marked with distinctive reference points in the form of triangles (0.5 × 0.5 × 0.2 m), and checkpoints used to verify the accuracy of the orthophotomap were marked with similar triangles. The coordinates of the marked reference points and checkpoints were determined by GNSS measurements in real-time kinematic (RTK) mode. The aerial survey was planned with the Trimble Access Aerial Imaging software, which was also used to operate the UX5. The survey was carried out from the Trimble UX5 UAV with a SONY NEX-5R digital camera from altitudes of 200 m and 300 m. The survey data were processed with the photogrammetric software Pix 4D, which was used to produce the orthophotomap of the surveyed objects. To assess the accuracy of the results, the checkpoint coordinates were extracted from the orthophotomap and the root mean square errors were computed against the coordinates obtained from the GNSS measurements. The a priori accuracy estimates of the spatial coordinates derived from the aerial survey data were mx = 0.11 m, my = 0.15 m, mz = 0.23 m in the village of Remeniv and mx = 0.26 m, my = 0.38 m, mz = 0.43 m in the town of Vynnyky. The accuracy of the checkpoint coordinates determined from the UAV images was compared with the root mean square errors of the reference points. A comparative analysis of the accuracy of the produced orthophotomap shows that the root mean square errors do not exceed the a priori accuracy estimates. The feasibility of using the Trimble UX5 UAV for producing large-scale orthophotomaps has thus been demonstrated. The survey data obtained with a UAV can be used for monitoring objects that are potentially dangerous to people, for state border control, and for checking land parcels in settlements; in such applications it is important to control the accuracy of the results. Based on the analysis and the experimental work, it can be concluded that a UAV provides data more efficiently than land surveying methods. As a result, the Trimble UX5 UAV makes it possible to survey built-up territories with the accuracy required for producing orthophotomaps at scales of 1:2000, 1:1000, and 1:500.

  6. Experimental Testing of a Generic Submarine Model in the DSTO Low Speed Wind Tunnel. Phase 2

    DTIC Science & Technology

    2014-03-01

    ... axis, z-axis (Nm); l = model reference length (1.35 m); L = lift force (N); MRP = moment reference point; q = dynamic pressure, ½ρU² (Pa) ... moment reference point (MRP). The moment reference point was defined as the mid-length position on the centre-line of the model. Figure 5 presents the

  7. OWL references in ORM conceptual modelling

    NASA Astrophysics Data System (ADS)

    Matula, Jiri; Belunek, Roman; Hunka, Frantisek

    2017-07-01

    Object Role Modelling (ORM) is a fact-based methodology for conceptual modelling. The aim of the paper is to emphasize its close connection to OWL documents and the possibility of mutual cooperation between the two. The definition of entities or domain values is an indispensable part of the conceptual schema design procedure defined by the ORM methodology. Many of these entities are already defined in OWL documents. Therefore, it is not necessary to declare such entities again; instead, references to OWL documents can be utilized when modelling information systems.

  8. The Study and Measurement of Values and Attitudes.

    ERIC Educational Resources Information Center

    Kerlinger, Fred N.

    The author defines values, attitudes, and beliefs according to their relation to referents. A referent is a construct standing for a set or category of social objects, ideas, or behaviors that is the focus of an attitude. Attitudes and values are belief systems. Beliefs are enduring cognitions about referents; beliefs reflect the value and…

  9. Opportunities and challenges in conducting systematic reviews to support development of nutrient reference values: vitamin A as an example

    USDA-ARS?s Scientific Manuscript database

    Nutrient reference values have significant public health and policy implications. Given the importance of defining reliable nutrient reference values, there is a need for an explicit, objective, and transparent process to set these values. The Tufts Medical Center Evidence-based Practice Center asse...

  10. Spatial Reasoning in Tenejapan Mayans

    PubMed Central

    Li, Peggy; Abarbanell, Linda; Gleitman, Lila; Papafragou, Anna

    2011-01-01

    Language communities differ in their stock of reference frames (coordinate systems for specifying locations and directions). English typically uses egocentrically defined axes (e.g., “left-right”), especially when describing small-scale relationships. Other languages such as Tseltal Mayan prefer to use geocentrically-defined axes (e.g., “north-south”) and do not use any type of projective body-defined axes. It has been argued that the availability of specific frames of reference in language determines the availability or salience of the corresponding spatial concepts. In four experiments, we explored this hypothesis by testing Tseltal speakers’ spatial reasoning skills. Whereas most prior tasks in this domain were open-ended (allowing several correct solutions), the present tasks required a unique solution that favored adopting a frame of reference that was either congruent or incongruent with what is habitually lexicalized in the participants’ language. In these tasks, Tseltal speakers easily solved the language-incongruent problems, and performance was generally more robust for these than for the language-congruent problems that favored geocentrically-defined coordinates. We suggest that listeners’ probabilistic inferences when instruction is open to more than one interpretation account for why there are greater cross-linguistic differences in the solutions to open-ended spatial problems than to less ambiguous ones. PMID:21481854

  11. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
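
    A minimal sketch of an a priori strategy comparison in the spirit of the wrapper approach: candidate logistic regression strategies are scored by their mean held-out log-likelihood on the development data before one is selected. The data set, the particular shrinkage strategies, and the cross-validation settings are illustrative assumptions, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical development data standing in for a clinical data set
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Candidate modelling strategies (a simplified stand-in for the five in the paper)
strategies = {
    "essentially unpenalized": LogisticRegression(penalty="l2", C=1e6, max_iter=5000),
    "ridge (mild shrinkage)": LogisticRegression(penalty="l2", C=1.0, max_iter=5000),
    "ridge (strong shrinkage)": LogisticRegression(penalty="l2", C=0.1, max_iter=5000),
}

# A priori comparison: mean out-of-sample log-likelihood per strategy
for name, model in strategies.items():
    ll = cross_val_score(model, X, y, cv=10, scoring="neg_log_loss").mean()
    print(f"{name:25s} mean held-out log-likelihood: {ll:.3f}")
```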

  12. A Compositional Relevance Model for Adaptive Information Retrieval

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.

  13. Assessment of brain reference genes for RT-qPCR studies in neurodegenerative diseases

    PubMed Central

    Rydbirk, Rasmus; Folke, Jonas; Winge, Kristian; Aznar, Susana; Pakkenberg, Bente; Brudek, Tomasz

    2016-01-01

    Evaluation of gene expression levels by reverse transcription quantitative real-time PCR (RT-qPCR) has for many years been the favourite approach for discovering disease-associated alterations. Normalization of results to stably expressed reference genes (RGs) is pivotal to obtain reliable results. This is especially important in relation to neurodegenerative diseases where disease-related structural changes may affect the most commonly used RGs. We analysed 15 candidate RGs in 98 brain samples from two brain regions from Alzheimer’s disease (AD), Parkinson’s disease (PD), Multiple System Atrophy, and Progressive Supranuclear Palsy patients. Using RefFinder, a web-based tool for evaluating RG stability, we identified the most stable RGs to be UBE2D2, CYC1, and RPL13 which we recommend for future RT-qPCR studies on human brain tissue from these patients. None of the investigated genes were affected by experimental variables such as RIN, PMI, or age. Findings were further validated by expression analyses of a target gene GSK3B, known to be affected by AD and PD. We obtained high variations in GSK3B levels when contrasting the results using different sets of common RG underlining the importance of a priori validation of RGs for RT-qPCR studies. PMID:27853238

  14. Assessment of brain reference genes for RT-qPCR studies in neurodegenerative diseases.

    PubMed

    Rydbirk, Rasmus; Folke, Jonas; Winge, Kristian; Aznar, Susana; Pakkenberg, Bente; Brudek, Tomasz

    2016-11-17

    Evaluation of gene expression levels by reverse transcription quantitative real-time PCR (RT-qPCR) has for many years been the favourite approach for discovering disease-associated alterations. Normalization of results to stably expressed reference genes (RGs) is pivotal to obtain reliable results. This is especially important in relation to neurodegenerative diseases where disease-related structural changes may affect the most commonly used RGs. We analysed 15 candidate RGs in 98 brain samples from two brain regions from Alzheimer's disease (AD), Parkinson's disease (PD), Multiple System Atrophy, and Progressive Supranuclear Palsy patients. Using RefFinder, a web-based tool for evaluating RG stability, we identified the most stable RGs to be UBE2D2, CYC1, and RPL13 which we recommend for future RT-qPCR studies on human brain tissue from these patients. None of the investigated genes were affected by experimental variables such as RIN, PMI, or age. Findings were further validated by expression analyses of a target gene GSK3B, known to be affected by AD and PD. We obtained high variations in GSK3B levels when contrasting the results using different sets of common RG underlining the importance of a priori validation of RGs for RT-qPCR studies.

  15. Statistical analysis of regulatory ecotoxicity tests.

    PubMed

    Isnard, P; Flammarion, P; Roman, G; Babut, M; Bastien, P; Bintein, S; Esserméant, L; Férard, J F; Gallotti-Schmitt, S; Saouter, E; Saroli, M; Thiébaud, H; Tomassone, R; Vindimian, E

    2001-11-01

    ANOVA-type data analysis, i.e., determination of lowest-observed-effect concentrations (LOECs) and no-observed-effect concentrations (NOECs), has been widely used for statistical analysis of chronic ecotoxicity data. However, it is increasingly criticised for several reasons, among which the most important is probably the fact that the NOEC depends on the choice of test concentrations and number of replications and rewards poor experiments, i.e., high variability, with high NOEC values. Thus, a recent OECD workshop concluded that the use of the NOEC should be phased out and that a regression-based estimation procedure should be used. Following this workshop, a working group was established at the French level between government, academia and industry representatives. Twenty-seven sets of chronic data (algae, daphnia, fish) were collected and analysed by ANOVA and regression procedures. Several regression models were compared and relations between NOECs and ECx, for different values of x, were established in order to find an alternative summary parameter to the NOEC. Biological arguments are scarce to help in defining a negligible level of effect x for the ECx. With regard to their use in the risk assessment procedures, a convenient methodology would be to choose x so that ECx are on average similar to the present NOEC. This would lead to no major change in the risk assessment procedure. However, experimental data show that the ECx depend on the regression models and that their accuracy decreases in the low effect zone. This disadvantage could probably be reduced by adapting existing experimental protocols but it could mean more experimental effort and higher cost. ECx (derived with existing test guidelines, e.g., regarding the number of replicates) whose lowest bounds of the confidence interval are on average similar to present NOEC would improve this approach by a priori encouraging more precise experiments. However, narrow confidence intervals are not only linked to good experimental practices, but also depend on the distance between the best model fit and experimental data. Nevertheless, these approaches still use the NOEC as a reference, although this reference is statistically not correct. On the contrary, EC50 are the most precise values to estimate on a concentration response curve, but they are clearly different from the NOEC and their use would require a modification of existing assessment factors.
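
    A short sketch of the regression-based alternative discussed above: a log-logistic concentration-response model is fitted and an ECx is read off for a chosen effect level x. The model form, starting values, and test data are illustrative assumptions, not a prescribed protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, top, ec50, slope):
    """Three-parameter log-logistic concentration-response model."""
    return top / (1.0 + (conc / ec50) ** slope)

def ecx(x_percent, top, ec50, slope):
    """Concentration producing an x% reduction from the fitted control response."""
    frac = 1.0 - x_percent / 100.0
    return ec50 * (1.0 / frac - 1.0) ** (1.0 / slope)

# Hypothetical chronic test data: response (e.g. growth) vs concentration
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
resp = np.array([100.0, 98.0, 93.0, 80.0, 55.0, 30.0, 12.0])
popt, _ = curve_fit(log_logistic, conc, resp, p0=[100.0, 8.0, 2.0])
print("EC10 =", ecx(10, *popt), "EC50 =", ecx(50, *popt))
```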

  16. Reconstruction of the experimentally supported human protein interactome: what can we learn?

    PubMed Central

    2013-01-01

    Background Understanding the topology and dynamics of the human protein-protein interaction (PPI) network will significantly contribute to biomedical research, therefore its systematic reconstruction is required. Several meta-databases integrate source PPI datasets, but the protein node sets of their networks vary depending on the PPI data combined. Due to this inherent heterogeneity, the way in which the human PPI network expands via multiple dataset integration has not been comprehensively analyzed. We aim at assembling the human interactome in a global structured way and exploring it to gain insights of biological relevance. Results First, we defined the UniProtKB manually reviewed human “complete” proteome as the reference protein-node set and then we mined five major source PPI datasets for direct PPIs exclusively between the reference proteins. We updated the protein and publication identifiers and normalized all PPIs to the UniProt identifier level. The reconstructed interactome covers approximately 60% of the human proteome and has a scale-free structure. No apparent differentiating gene functional classification characteristics were identified for the unrepresented proteins. The source dataset integration augments the network mainly in PPIs. Polyubiquitin emerged as the highest-degree node, but the inclusion of most of its identified PPIs may be reconsidered. The high number (>300) of connections of the subsequent fifteen proteins correlates well with their essential biological role. According to the power-law network structure, the unrepresented proteins should mainly have up to four connections with equally poorly-connected interactors. Conclusions Reconstructing the human interactome based on the a priori definition of the protein nodes enabled us to identify the currently included part of the human “complete” proteome, and discuss the role of the proteins within the network topology with respect to their function. As the network expansion has to comply with the scale-free theory, we suggest that the core of the human interactome has essentially emerged. Thus, it could be employed in systems biology and biomedical research, despite the considerable number of currently unrepresented proteins. The latter are probably involved in specialized physiological conditions, justifying the scarcity of related PPI information, and their identification can assist in designing relevant functional experiments and targeted text mining algorithms. PMID:24088582

  17. Intra- and extra-articular planes of reference for use in total hip arthroplasty: a preliminary study.

    PubMed

    Hausselle, Jerome; Moreau, Pierre Etienne; Wessely, Loic; de Thomasson, Emmanuel; Assi, Ayman; Parratte, Sebastien; Essig, Jerome; Skalli, Wafa

    2012-08-01

    Acetabular component malalignment in total hip arthroplasty can lead to potential complications such as dislocation, component impingement and excessive wear. Computer-assisted orthopaedic surgery systems generally use the anterior pelvic plane (APP). Our aim was to investigate the reliability of anatomical landmarks accessible during surgery and to define new potential planes of reference. Three types of palpations were performed: virtual, on dry bones and on two cadaveric specimens. Four landmarks were selected, the reproducibility of their positioning ranging from 0.9 to 2.3 mm. We then defined five planes and tested them during palpations on two cadaveric specimens. Two planes produced a mean orientation error of 5.0° [standard deviation (SD 3.3°)] and 5.6° (SD 2.7°). Even if further studies are needed to test the reliability of such planes on a larger scale in vivo during surgery, these results demonstrated the feasibility of defining a new plane of reference as an alternative to the APP.

  18. Prediction and explanation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garriga, J.; Vilenkin, A.

    2008-02-15

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the 'ideal' reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses who learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated to the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.

  19. Assessing Granger Causality in Electrophysiological Data: Removing the Adverse Effects of Common Signals via Bipolar Derivations.

    PubMed

    Trongnetrpunya, Amy; Nandi, Bijurika; Kang, Daesung; Kocsis, Bernat; Schroeder, Charles E; Ding, Mingzhou

    2015-01-01

    Multielectrode voltage data are usually recorded against a common reference. Such data are frequently used without further treatment to assess patterns of functional connectivity between neuronal populations and between brain areas. It is important to note from the outset that such an approach is valid only when the reference electrode is nearly electrically silent. In practice, however, the reference electrode is generally not electrically silent, thereby adding a common signal to the recorded data. Volume conduction further complicates the problem. In this study we demonstrate the adverse effects of common signals on the estimation of Granger causality, which is a statistical measure used to infer synaptic transmission and information flow in neural circuits from multielectrode data. We further test the hypothesis that the problem can be overcome by utilizing bipolar derivations where the difference between two nearby electrodes is taken and treated as a representation of local neural activity. Simulated data generated by a neuronal network model where the connectivity pattern is known were considered first. This was followed by analyzing data from three experimental preparations where a priori predictions regarding the patterns of causal interactions can be made: (1) laminar recordings from the hippocampus of an anesthetized rat during theta rhythm, (2) laminar recordings from V4 of an awake-behaving macaque monkey during alpha rhythm, and (3) ECoG recordings from electrode arrays implanted in the middle temporal lobe and prefrontal cortex of an epilepsy patient during fixation. For both simulation and experimental analysis the results show that bipolar derivations yield the expected connectivity patterns whereas the untreated data (referred to as unipolar signals) do not. In addition, current source density signals, where applicable, yield results that are close to the expected connectivity patterns, whereas the commonly practiced average re-reference method leads to erroneous results.
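
    A minimal sketch of the approach, assuming nothing about the recording systems used in the paper: adjacent channels are differenced to remove the common reference signal, and a simple time-domain Granger causality is computed as the log ratio of residual variances of restricted and full autoregressive models. The two-channel toy data, model order, and estimator details are illustrative assumptions.

```python
import numpy as np

def bipolar(data):
    """Bipolar derivation: difference of neighbouring channels (channels x time).
    Subtracting adjacent contacts removes any signal common to both, such as
    the contribution of a non-silent reference electrode."""
    return np.diff(data, axis=0)

def granger_causality(x, y, order=5):
    """Pairwise, time-domain Granger causality from y to x: the log ratio of the
    residual variance of an autoregressive model of x to that of a model that
    also includes lagged y (larger value = stronger directed influence)."""
    t_idx = np.arange(order, len(x))
    own = np.array([x[t - order:t][::-1] for t in t_idx])
    joint = np.hstack([own, np.array([y[t - order:t][::-1] for t in t_idx])])
    target = x[order:]

    def residual_variance(design):
        design = np.column_stack([design, np.ones(len(target))])   # add intercept
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)

    return np.log(residual_variance(own) / residual_variance(joint))

# Hypothetical two-channel example with a known direction of influence
rng = np.random.default_rng(6)
n = 5000
source = rng.normal(size=n)
sink = np.zeros(n)
for t in range(1, n):
    sink[t] = 0.6 * sink[t - 1] + 0.5 * source[t - 1] + 0.3 * rng.normal()
print(granger_causality(sink, source))   # large (source drives sink)
print(granger_causality(source, sink))   # near zero
```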

  20. A priori error estimates for an hp-version of the discontinuous Galerkin method for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. Tinsley

    1993-01-01

    A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh dependent norm in which the coefficients depend upon both the local mesh size h_K and a number p_K which can be identified with the spectral order of the local approximations over each element.
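
    Schematically, hp a priori estimates of this type bound the error in the mesh-dependent norm elementwise in terms of h_K and p_K. A generic form, shown only to illustrate the structure and not as the paper's exact statement, assuming u has H^{s_K} regularity on each element K and with C a constant independent of h_K and p_K, is

```latex
\| u - u_{hp} \|_{\mathrm{DG}}
  \;\le\;
  C \left( \sum_{K \in \mathcal{T}_h}
    \frac{h_K^{\,2\mu_K - 1}}{p_K^{\,2 s_K - 1}}\,
    \| u \|_{H^{s_K}(K)}^{2} \right)^{1/2},
  \qquad
  \mu_K = \min(p_K + 1,\, s_K).
```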

  1. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    ... iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC ... inaccurate. Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC ... Contents (excerpt): Theorem; 3. A Priori Estimate of Number of MC Iterations; 4. MC Result Accuracy; 5. Using Percentage Error of the Mean to Estimate Number of MC

  2. The benefits of steroids versus steroids plus antivirals for treatment of Bell’s palsy: a meta-analysis

    PubMed Central

    Quant, Eudocia C; Jeste, Shafali S; Muni, Rajeev H; Cape, Alison V; Bhussar, Manveen K

    2009-01-01

    Objective To determine whether steroids plus antivirals provide a better degree of facial muscle recovery in patients with Bell’s palsy than steroids alone. Design Meta-analysis. Data sources PubMed, Embase, Web of Science, and the Cochrane Central Register of Controlled Trials were searched for studies published in all languages from 1984 to January 2009. Additional studies were identified from cited references. Selection criteria Randomised controlled trials that compared steroids with the combination of steroids and antivirals for the treatment of Bell’s palsy were included in this study. At least one month of follow-up and a primary end point of at least partial facial muscle recovery, as defined by a House-Brackmann grade of at least 2 (complete palsy is designated a grade of 6) or an equivalent score on an alternative recognised scoring system, were required. Review methods Two authors independently reviewed studies for methodological quality, treatment regimens, duration of symptoms before treatment, length of follow-up, and outcomes. Odds ratios with 95% confidence intervals were calculated and pooled using a random effects model. Results Six trials, with a total of 1145 patients, were included; 574 patients received steroids alone and 571 patients received steroids and antivirals. The pooled odds ratio for facial muscle recovery showed no benefit of steroids plus antivirals compared with steroids alone (odds ratio 1.50, 95% confidence interval 0.83 to 2.69; P=0.18). A one-study-removed analysis showed that the highest quality studies had the greatest effect on the lack of difference between study arms shown by the odds ratio. Subgroup analyses assessing causes of heterogeneity defined a priori (time from symptom onset to treatment, length of follow-up, and type of antiviral studied) showed no benefit of antivirals in addition to that provided by steroids. Conclusions Antivirals did not provide an added benefit in achieving at least partial facial muscle recovery compared with steroids alone in patients with Bell’s palsy. This study does not, therefore, support the routine use of antivirals in Bell’s palsy. Future studies should use improved herpes virus diagnostics and newer antivirals to assess whether combination therapy benefits patients with more severe facial paralysis at study entry. PMID:19736282
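
    A brief sketch of the random-effects pooling used for such a meta-analysis (DerSimonian-Laird), computing a pooled odds ratio and 95% confidence interval from per-trial log odds ratios and standard errors; the numbers below are illustrative placeholders, not the trial data.

```python
import numpy as np

def pooled_or_random_effects(log_or, se):
    """DerSimonian-Laird random-effects pooled odds ratio with 95% CI."""
    w = 1.0 / se**2                                       # fixed-effect weights
    fixed_mean = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed_mean)**2)              # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se_pooled = 1.0 / np.sqrt(np.sum(w_star))
    return (np.exp(pooled),
            (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)))

# Illustrative per-trial log odds ratios and standard errors (not the trial data)
log_or = np.log(np.array([1.2, 0.9, 2.1, 1.4, 0.8, 1.6]))
se = np.array([0.35, 0.30, 0.45, 0.40, 0.25, 0.50])
print(pooled_or_random_effects(log_or, se))
```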

  3. District-level hospital trauma care audit filters: Delphi technique for defining context-appropriate indicators for quality improvement initiative evaluation in developing countries.

    PubMed

    Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles

    2016-01-01

    Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as for other LMICs more broadly. Consensus on trauma care audit filters was built among twenty panellists using a Delphi technique with four anonymous, iterative surveys designed to elicit: (i) trauma care processes to be measured; (ii) important features of audit filters for the district-level hospital setting; and (iii) potentially useful filters. Filters were ranked on a scale from 0 to 10 (10 being very useful). Consensus was measured with average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as: a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Panellists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, unassuming of resource capacity). APMO cut-off rate increased successively: Round 1--0.58; Round 2--0.66; Round 3--0.76; and Round 4--0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage--vital signs are recorded within 15 min of arrival (must include breathing assessment, heart rate, blood pressure, oxygen saturation if available); circulation--a large bore IV was placed within 15 min of patient arrival; referral--if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. This study proposes trauma care audit filters appropriate for LMIC district-level hospitals. Given the successes of similar filters in HICs and obstetric care filters in LMICs, the collection and reporting of prospective trauma care audit filters may be an important step towards improving care for the injured at district-level hospitals in LMICs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Introduction to stream network habitat analysis

    USGS Publications Warehouse

    Bartholow, John M.; Waddle, Terry J.

    1986-01-01

    Increasing demands on stream resources by a variety of users have resulted in an increased emphasis on studies that evaluate the cumulative effects of basinwide water management programs. Network habitat analysis refers to the evaluation of an entire river basin (or network) by predicting its habitat response to alternative management regimes. The analysis principally focuses on the biological and hydrological components of the river basin, which include both micro- and macrohabitat. (The terms micro- and macrohabitat are further defined and discussed later in this document.) Both conceptual and analytic models are frequently used for simplifying and integrating the various components of the basin. The model predictions can be used in developing management recommendations to preserve, restore, or enhance instream fish habitat. A network habitat analysis should begin with a clear and concise statement of the study objectives and a thorough understanding of the institutional setting in which the study results will be applied. This includes the legal, social, and political considerations inherent in any water management setting. The institutional environment may dictate the focus and level of detail required of the study to a far greater extent than the technical considerations. After the study objectives, including species of interest, and institutional setting are collectively defined, the technical aspects should be scoped to determine the spatial and temporal requirements of the analysis. A macro level approach should be taken first to identify critical biological elements and requirements. Next, habitat availability is quantified much as in a "standard" river segment analysis, with the likely incorporation of some macrohabitat components, such as stream temperature. Individual river segments may be aggregated to represent the networkwide habitat response of alternative water management schemes. Things learned about problems caused or opportunities generated may be fed back to the design of new alternatives, which themselves may be similarly tested. One may get as sophisticated an analysis as the decisionmaking process demands. Figure 1 shows a decision point that asks whether the results from the micro- or macrohabitat models display cumulative or synergistic effects. If they do, then network habitat analysis is the appropriate tool. We are left, however, in a difficult bind. We may not know a priori whether the effects are cumulative or synergistic unless some network-type questions are investigated as part of the scoping process. The next several sections raise issues designed to alert the modeler to relevant questions necessary to address this paradox.

  5. Pointing to potential reference areas to assess soil mutagenicity.

    PubMed

    Meyer, D D; Da Silva, F M R; Souza, J W M; Pohren, R S; Rocha, J A V; Vargas, V M F

    2015-04-01

    Several studies have been performed to evaluate the mutagenicity of soil samples in urban and industrial areas. The use of uncontaminated reference areas has been an obstacle to the study of environmental mutagenesis. The study aimed to indicate a methodology to define reference areas in studies of environmental contamination based on the "Ambient Background Concentration" of metallic elements associated with the Salmonella/microsome assay. We looked at three potential reference areas, two of them close to industrial sources of contamination (São Jerônimo reference, near the coal-fired power plant, and Triunfo reference, near the wood preservative plant) but not directly influenced by them, and one located inside a protected area (Itapuã reference). We also carried out chemical analyses of some metals to plot the metal profile of these potential reference areas and define basal levels of these metals in the soils. After examining the mutagenicity of the inorganic extracts using strains TA98, TA97a, and TA100, in the presence and absence of S9 mix, we indicated the São Jerônimo reference and the Itapuã reference as two sites that could be used in future studies of mutagenicity of soils in southern Brazil. The association between a mutagenicity bioassay and the "Ambient Background Concentration" seems to be a useful method to indicate reference areas in studies of contamination by environmental mutagens, and these results were corroborated by canonical correspondence analysis.

  6. Reference in human and non-human primate communication: What does it take to refer?

    PubMed

    Sievers, Christine; Gruber, Thibaud

    2016-07-01

    The concept of functional reference has been used to isolate potentially referential vocal signals in animal communication. However, its relatedness to the phenomenon of reference in human language has recently been brought into question. While some researchers have suggested abandoning the concept of functional reference altogether, others advocate a revision of its definition to include contextual cues that play a role in signal production and perception. Empirical and theoretical work on functional reference has also put much emphasis on how the receiver understands the referential signal. However, reference, as defined in the linguistic literature, is an action of the producer, and therefore, any definition describing reference in non-human animals must also focus on the producer. To successfully determine whether a signal is used to refer, we suggest an approach from the field of pragmatics, taking a closer look at specific situations of signal production, specifically at the factors that influence the production of a signal by an individual. We define the concept of signaller's reference to identify intentional acts of reference produced by a signaller independently of the communicative modality, and illustrate it with a case study of the hoo vocalizations produced by wild chimpanzees during travel. This novel framework introduces an intentional approach to referentiality. It may therefore permit a closer comparison of human and non-human animal referential behaviour and underlying cognitive processes, allowing us to identify what may have emerged solely in the human lineage.

  7. SUPPORT FOR REFERENCE AND EQUIVALENCY PROGRAM

    EPA Science Inventory

    Federal Reference Methods (FRMs) and Federal Equivalent Methods (FEMs) form the backbone of the EPA's national monitoring strategy. They are the measurement methodologies that define attainment of a National Ambient Air Quality Standard (NAAQS). As knowledge and technology adva...

  8. International Geomagnetic Reference Field: the third generation.

    USGS Publications Warehouse

    Peddie, N.W.

    1982-01-01

    In August 1981 the International Association of Geomagnetism and Aeronomy revised the International Geomagnetic Reference Field (IGRF). It is the second revision since the inception of the IGRF in 1968. The revision extends the earlier series of IGRF models from 1980 to 1985, introduces a new series of definitive models for 1965-1976, and defines a provisional reference field for 1975-1980. The revision consists of: 1) a model of the main geomagnetic field at 1980.0, not continuous with the earlier series of IGRF models, together with a forecast model of the secular variation of the main field during 1980-1985; 2) definitive models of the main field at 1965.0, 1970.0, and 1975.0, with linear interpolation of the model coefficients specified for intervening dates; and 3) a provisional reference field for 1975-1980, defined as the linear interpolation of the 1975 and 1980 main-field models.-from Author

  9. 14 CFR Appendix J to Part 36 - Alternative Noise Certification Procedure for Helicopters Under Subpart H Having a Maximum...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... produce the same advancing blade tip Mach number as associated with the reference conditions; (i) Advancing blade tip Mach number (MAT) is defined as the ratio of the arithmetic sum of blade tip rotational... the reference advancing blade tip Mach number. The adjusted reference airspeed shall be maintained...

  10. 14 CFR Appendix J to Part 36 - Alternative Noise Certification Procedure for Helicopters Under Subpart H Having a Maximum...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... produce the same advancing blade tip Mach number as associated with the reference conditions; (i) Advancing blade tip Mach number (MAT) is defined as the ratio of the arithmetic sum of blade tip rotational... the reference advancing blade tip Mach number. The adjusted reference airspeed shall be maintained...

  11. 14 CFR Appendix J to Part 36 - Alternative Noise Certification Procedure for Helicopters Under Subpart H Having a Maximum...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... produce the same advancing blade tip Mach number as associated with the reference conditions; (i) Advancing blade tip Mach number (MAT) is defined as the ratio of the arithmetic sum of blade tip rotational... the reference advancing blade tip Mach number. The adjusted reference airspeed shall be maintained...

  12. 14 CFR Appendix J to Part 36 - Alternative Noise Certification Procedure for Helicopters Under Subpart H Having a Maximum...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... produce the same advancing blade tip Mach number as associated with the reference conditions; (i) Advancing blade tip Mach number (MAT) is defined as the ratio of the arithmetic sum of blade tip rotational... the reference advancing blade tip Mach number. The adjusted reference airspeed shall be maintained...

  13. 14 CFR Appendix J to Part 36 - Alternative Noise Certification Procedure for Helicopters Under Subpart H Having a Maximum...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... produce the same advancing blade tip Mach number as associated with the reference conditions; (i) Advancing blade tip Mach number (MAT) is defined as the ratio of the arithmetic sum of blade tip rotational... the reference advancing blade tip Mach number. The adjusted reference airspeed shall be maintained...

  14. Guidelines for Marine Biological Reference Collections. Unesco Reports in Marine Sciences, No. 22.

    ERIC Educational Resources Information Center

    Hureau, J. C.; Rice, A. L.

    This manual provides practical advice on the appropriation, conservation, and documentation of a marine biological reference collection, in response to needs expressed by Mediterranean Arab countries. A reference collection is defined as a working museum containing a series of specimens with which biologists are able to compare their own material.…

  15. Spatial-Heterodyne Interferometry For Reflection And Transm Ission (Shirt) Measurements

    DOEpatents

    Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN; Tobin, Ken W [Harriman, TN

    2006-02-14

    Systems and methods are described for spatial-heterodyne interferometry for reflection and transmission (SHIRT) measurements. A method includes digitally recording a first spatially-heterodyned hologram using a first reference beam and a first object beam; digitally recording a second spatially-heterodyned hologram using a second reference beam and a second object beam; Fourier analyzing the digitally recorded first spatially-heterodyned hologram to define a first analyzed image; Fourier analyzing the digitally recorded second spatially-heterodyned hologram to define a second analyzed image; digitally filtering the first analyzed image to define a first result; and digitally filtering the second analyzed image to define a second result; performing a first inverse Fourier transform on the first result, and performing a second inverse Fourier transform on the second result. The first object beam is transmitted through an object that is at least partially translucent, and the second object beam is reflected from the object.

  16. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes. PMID:23864831

  17. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = -Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes.

  18. The Next-generation Berkeley High Resolution NO2 (BEHR NO2) Retrieval: Design and Preliminary Emissions Constraints

    NASA Astrophysics Data System (ADS)

    Laughner, J.; Cohen, R. C.

    2017-12-01

    Recent work has identified a number of assumptions made in NO2 retrievals that lead to biases in the retrieved NO2 column density. These include the treatment of the surface as an isotropic reflector, the absence of lightning NO2 in high resolution a priori profiles, and the use of monthly averaged a priori profiles. We present a new release of the Berkeley High Resolution (BEHR) OMI NO2 retrieval based on the new NASA Standard Product (version 3) that addresses these assumptions by accounting for surface anisotropy with a BRDF albedo product, using an updated method of regridding NO2 data, and using revised NO2 a priori profiles that better account for lightning NO2 and daily variation in the profile shape. We quantify the effect these changes have on the retrieved NO2 column densities and the resultant impact these updates have on constraints of urban NOx emissions for select cities throughout the United States.

  19. Application of ray-traced tropospheric slant delays to geodetic VLBI analysis

    NASA Astrophysics Data System (ADS)

    Hofmeister, Armin; Böhm, Johannes

    2017-08-01

    The correction of tropospheric influences via so-called path delays is critical for the analysis of observations from space geodetic techniques like the very long baseline interferometry (VLBI). In standard VLBI analysis, the a priori slant path delays are determined using the concept of zenith delays, mapping functions and gradients. The a priori use of ray-traced delays, i.e., tropospheric slant path delays determined with the technique of ray-tracing through the meteorological data of numerical weather models (NWM), serves as an alternative way of correcting the influences of the troposphere on the VLBI observations within the analysis. In the presented research, the application of ray-traced delays to the VLBI analysis of sessions in a time span of 16.5 years is investigated. Ray-traced delays have been determined with program RADIATE (see Hofmeister in Ph.D. thesis, Department of Geodesy and Geophysics, Faculty of Mathematics and Geoinformation, Technische Universität Wien. http://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-3444, 2016) utilizing meteorological data provided by NWM of the European Centre for Medium-Range Weather Forecasts (ECMWF). In comparison with a standard VLBI analysis, which includes the tropospheric gradient estimation, the application of the ray-traced delays to an analysis, which uses the same parameterization except for the a priori slant path delay handling and the used wet mapping factors for the zenith wet delay (ZWD) estimation, improves the baseline length repeatability (BLR) at 55.9% of the baselines at sub-mm level. If no tropospheric gradients are estimated within the compared analyses, 90.6% of all baselines benefit from the application of the ray-traced delays, which leads to an average improvement of the BLR of 1 mm. The effects of the ray-traced delays on the terrestrial reference frame are also investigated. A separate assessment of the RADIATE ray-traced delays is carried out by comparison to the ray-traced delays from the National Aeronautics and Space Administration Goddard Space Flight Center (NASA GSFC) (Eriksson and MacMillan in http://lacerta.gsfc.nasa.gov/tropodelays, 2016) with respect to the analysis performances in terms of BLR results. If tropospheric gradient estimation is included in the analysis, 51.3% of the baselines benefit from the RADIATE ray-traced delays at sub-mm difference level. If no tropospheric gradients are estimated within the analysis, the RADIATE ray-traced delays deliver a better BLR at 63% of the baselines compared to the NASA GSFC ray-traced delays.

  20. Spatial-heterodyne interferometry for transmission (SHIFT) measurements

    DOEpatents

    Bingham, Philip R.; Hanson, Gregory R.; Tobin, Ken W.

    2006-10-10

    Systems and methods are described for spatial-heterodyne interferometry for transmission (SHIFT) measurements. A method includes digitally recording a spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis using a reference beam, and an object beam that is transmitted through an object that is at least partially translucent; Fourier analyzing the digitally recorded spatially-heterodyned hologram, by shifting an original origin of the digitally recorded spatially-heterodyned hologram to sit on top of a spatial-heterodyne carrier frequency defined by an angle between the reference beam and the object beam, to define an analyzed image; digitally filtering the analyzed image to cut off signals around the original origin to define a result; and performing an inverse Fourier transform on the result.
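    The Fourier-side steps described in this record can be sketched with standard FFT operations; the synthetic hologram, carrier frequency, and filter radius below are hypothetical stand-ins, and in practice the carrier would be set by the measured angle between the reference and object beams.

      # Sketch of the Fourier-analysis steps for a spatially-heterodyned hologram:
      # recentre the spectrum on the heterodyne carrier, low-pass filter away the
      # terms around the original origin, and inverse-transform.  The hologram,
      # carrier frequency, and filter radius below are synthetic/hypothetical.
      import numpy as np

      def demodulate_hologram(hologram, carrier, keep_radius):
          ny, nx = hologram.shape
          fy, fx = carrier                          # carrier frequency in cycles per frame
          y, x = np.mgrid[0:ny, 0:nx]
          # multiplying by a complex exponential shifts the spectrum by (-fy, -fx),
          # i.e. the carrier peak is moved onto the origin of Fourier space
          shifted = hologram * np.exp(-2j * np.pi * (fx * x / nx + fy * y / ny))
          spectrum = np.fft.fftshift(np.fft.fft2(shifted))
          ky, kx = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
          spectrum *= (kx ** 2 + ky ** 2) <= keep_radius ** 2   # digital low-pass filter
          return np.fft.ifft2(np.fft.ifftshift(spectrum))       # complex-valued image

      holo = np.random.rand(256, 256)               # stand-in for a recorded hologram
      image = demodulate_hologram(holo, carrier=(32, 32), keep_radius=20)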

  1. 47 CFR 64.619 - VRS Access Technology Reference Platform and administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Access Technology Reference Platform shall be a software product that performs consistently with the...) Compensation. The TRS Fund, as defined by § 64.604(a)(5)(iii) of this subpart, may be used to compensate the...

  2. 47 CFR 64.619 - VRS Access Technology Reference Platform and administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Access Technology Reference Platform shall be a software product that performs consistently with the...) Compensation. The TRS Fund, as defined by § 64.604(a)(5)(iii) of this subpart, may be used to compensate the...

  3. Chemical-agnostic hazard prediction: statistical inference of in ...

    EPA Pesticide Factsheets

    Toxicity pathways have been defined as normal cellular pathways that, when sufficiently perturbed as a consequence of chemical exposure, lead to an adverse outcome. If an exposure alters one or more normal biological pathways to an extent that leads to an adverse toxicity outcome, a significant correlation must exist between the exposure, the extent of pathway alteration, and the degree of adverse outcome. Biological pathways are regulated at multiple levels, including transcriptional, post-transcriptional, post-translational, and targeted degradation, each of which can affect the levels and extents of modification of proteins involved in the pathways. Significant alterations of toxicity pathways resulting from changes in regulation at any of these levels therefore are likely to be detectable as alterations in the proteome. We hypothesize that significant correlations between exposures, adverse outcomes, and changes in the proteome have the potential to identify putative toxicity pathways, facilitating selection of candidate targets for high throughput screening, even in the absence of a priori knowledge of either the specific pathways involved or the specific agents inducing the pathway alterations. We explored this hypothesis in vitro in BEAS-2B human airway epithelial cells exposed to different concentrations of Ni2+, Cd2+, and Cr6+, alone and in defined mixtures. Levels and phosphorylation status of a variety of signaling pathway proteins and cytokines were

  4. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.

    2011-01-01

    This software accepts the EOS MLS calibrated measurements of microwave radiances and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.

  5. Traceability validation of a high speed short-pulse testing method used in LED production

    NASA Astrophysics Data System (ADS)

    Revtova, Elena; Vuelban, Edgar Moreno; Zhao, Dongsheng; Brenkman, Jacques; Ulden, Henk

    2017-12-01

    Industrial processes of LED (light-emitting diode) production include LED light output performance testing. Most of this testing is monitored and controlled by measuring LEDs optically, electrically and thermally with high speed short-pulse measurement methods. However, these methods are not standardized, and so much of the information is proprietary that it is impossible for third parties, such as NMIs, to trace and validate them. These techniques are known to have traceability issues and metrological inadequacies. Often, as a result, the claimed performance specifications of LEDs are overstated, which leads to manufacturers experiencing customer dissatisfaction and a large percentage of failures in daily use of LEDs. In this research a traceable setup is developed to validate one of the high speed testing techniques, investigate inadequacies and work out the traceability issues. A well-characterised short square pulse of 25 ms is applied to chip-on-board (CoB) LED modules to investigate the light output and colour content. We conclude that the short-pulse method is very efficient provided that a well-defined electrical current pulse is applied and the stabilization time of the device is accurately determined a priori. No colour shift is observed. The largest contributors to the measurement uncertainty include a badly defined current pulse and an inaccurate calibration factor.

  6. Navigating a Mobile Robot Across Terrain Using Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Howard, Ayanna; Bon, Bruce

    2003-01-01

    A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
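    A minimal illustration of the fuzzy IF-THEN machinery described above (triangular membership functions, linguistic rules, weighted-average defuzzification); the traversability scale, membership parameters, and two-rule base are hypothetical, not those of the deployed navigation strategy.

      # Minimal fuzzy IF-THEN evaluation with triangular membership functions.
      # The traversability scale (0 = impassable, 1 = easily traversable), the
      # membership parameters, and the two-rule base are hypothetical.
      def tri(x, a, b, c):
          """Triangular membership function rising from a, peaking at b, falling to c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def speed_command(traversability):
          low = tri(traversability, -0.01, 0.0, 0.8)    # "traversability is LOW"
          high = tri(traversability, 0.2, 1.0, 1.01)    # "traversability is HIGH"
          # IF traversability is LOW THEN speed is SLOW (0.2); IF HIGH THEN FAST (1.0)
          rules = [(low, 0.2), (high, 1.0)]
          total = sum(w for w, _ in rules)
          return sum(w * out for w, out in rules) / total if total else 0.0

      print(speed_command(0.7))   # moderately traversable terrain -> intermediate speed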

  7. Forecasting an invasive species’ distribution with global distribution data, local data, and physiological information

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Young, Nicholas E.; Talbert, Marian; Talbert, Colin

    2018-01-01

    Understanding invasive species distributions and potential invasions often requires broad‐scale information on the environmental tolerances of the species. Further, resource managers are often faced with knowing these broad‐scale relationships as well as nuanced environmental factors related to their landscape that influence where an invasive species occurs and potentially could occur. Using invasive buffelgrass (Cenchrus ciliaris), we developed global models and local models for Saguaro National Park, Arizona, USA, based on location records and literature on physiological tolerances to environmental factors to investigate whether environmental relationships of a species at a global scale are also important at local scales. In addition to correlative models with five commonly used algorithms, we also developed a model using a priori user‐defined relationships between occurrence and environmental characteristics based on a literature review. All correlative models at both scales performed well based on statistical evaluations. The user‐defined curves closely matched those produced by the correlative models, indicating that the correlative models may be capturing mechanisms driving the distribution of buffelgrass. Given climate projections for the region, both global and local models indicate that conditions at Saguaro National Park may become more suitable for buffelgrass. Combining global and local data with correlative models and physiological information provided a holistic approach to forecasting invasive species distributions.
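    A minimal sketch of an a priori, user-defined suitability model of the kind described: literature-style physiological tolerance limits are encoded as simple response curves and combined into a habitat suitability index. The tolerance limits below are hypothetical placeholders, not the buffelgrass values used in the study.

      # A priori, user-defined suitability curves: tolerance limits in the style of a
      # literature review are encoded as trapezoidal responses and combined into a
      # habitat suitability index.  All limits below are hypothetical placeholders.
      def trapezoid(x, lo, opt_lo, opt_hi, hi):
          """0 outside [lo, hi], 1 on [opt_lo, opt_hi], linear in between."""
          if x <= lo or x >= hi:
              return 0.0
          if x < opt_lo:
              return (x - lo) / (opt_lo - lo)
          if x > opt_hi:
              return (hi - x) / (hi - opt_hi)
          return 1.0

      def suitability(mean_temp_c, annual_precip_mm):
          temp = trapezoid(mean_temp_c, 5, 20, 35, 45)                # hypothetical thermal tolerance
          precip = trapezoid(annual_precip_mm, 100, 300, 600, 1000)   # hypothetical moisture tolerance
          return min(temp, precip)   # suitability limited by the least favourable factor

      print(suitability(mean_temp_c=28, annual_precip_mm=250))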

  8. An assessment of patient navigator activities in breast cancer patient navigation programs using a nine-principle framework.

    PubMed

    Gunn, Christine M; Clark, Jack A; Battaglia, Tracy A; Freund, Karen M; Parker, Victoria A

    2014-10-01

    To determine how closely a published model of navigation reflects the practice of navigation in breast cancer patient navigation programs. Observational field notes describing patient navigator activities collected from 10 purposefully sampled, foundation-funded breast cancer navigation programs in 2008-2009. An exploratory study evaluated a model framework for patient navigation published by Harold Freeman by using an a priori coding scheme based on model domains. Field notes were compiled and coded. Inductive codes were added during analysis to characterize activities not included in the original model. Programs were consistent with individual-level principles representing tasks focused on individual patients. There was variation with respect to program-level principles that related to program organization and structure. Program characteristics such as the use of volunteer or clinical navigators were identified as contributors to patterns of model concordance. This research provides a framework for defining the navigator role as focused on eliminating barriers through the provision of individual-level interventions. The diversity observed at the program level in these programs was a reflection of implementation according to target population. Further guidance may be required to assist patient navigation programs to define and tailor goals and measurement to community needs. © Health Research and Educational Trust.

  9. An Assessment of Patient Navigator Activities in Breast Cancer Patient Navigation Programs Using a Nine-Principle Framework

    PubMed Central

    Gunn, Christine M; Clark, Jack A; Battaglia, Tracy A; Freund, Karen M; Parker, Victoria A

    2014-01-01

    Objective: To determine how closely a published model of navigation reflects the practice of navigation in breast cancer patient navigation programs. Data Source: Observational field notes describing patient navigator activities collected from 10 purposefully sampled, foundation-funded breast cancer navigation programs in 2008–2009. Study Design: An exploratory study evaluated a model framework for patient navigation published by Harold Freeman by using an a priori coding scheme based on model domains. Data Collection: Field notes were compiled and coded. Inductive codes were added during analysis to characterize activities not included in the original model. Principal Findings: Programs were consistent with individual-level principles representing tasks focused on individual patients. There was variation with respect to program-level principles that related to program organization and structure. Program characteristics such as the use of volunteer or clinical navigators were identified as contributors to patterns of model concordance. Conclusions: This research provides a framework for defining the navigator role as focused on eliminating barriers through the provision of individual-level interventions. The diversity observed at the program level in these programs was a reflection of implementation according to target population. Further guidance may be required to assist patient navigation programs to define and tailor goals and measurement to community needs. PMID:24820445

  10. The impact of individual-level heterogeneity on estimated infectious disease burden: a simulation study.

    PubMed

    McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco

    2016-12-08

    Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
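    A minimal individual-based sketch of the comparison described above: the same cohort is simulated once with population-averaged transition probabilities and once with an individual frailty multiplier applied to the transitions, and DALYs (here YLD plus YLL) are summed. All parameter values and the gamma frailty distribution are hypothetical; the direction and size of the difference depend on how frailty enters the model, so this only illustrates the mechanics of the comparison, not the study's results.

      # Individual-based comparison of DALYs under population-averaged progression
      # versus individual frailty.  All parameter values and the gamma frailty
      # distribution are hypothetical; the direction and size of the difference
      # depend on how frailty enters the transitions and on its distribution.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      p_severe, p_death = 0.10, 0.05        # population-averaged transition probabilities
      dw_mild, dw_severe = 0.05, 0.40       # disability weights
      dur_mild, dur_severe = 0.5, 2.0       # durations in years
      yll_per_death = 30.0                  # years of life lost per death

      def total_dalys(frailty):
          severe = rng.random(n) < np.clip(p_severe * frailty, 0, 1)
          dead = severe & (rng.random(n) < np.clip(p_death * frailty, 0, 1))
          yld = dw_mild * dur_mild + severe * dw_severe * dur_severe
          return float(yld.sum() + dead.sum() * yll_per_death)

      homogeneous = total_dalys(np.ones(n))
      heterogeneous = total_dalys(rng.gamma(shape=0.5, scale=2.0, size=n))  # mean frailty = 1
      print(f"population-averaged: {homogeneous:,.0f} DALYs")
      print(f"with frailty:        {heterogeneous:,.0f} DALYs")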

  11. Smoldering of porous media: numerical model and comparison of calculations with experiment

    NASA Astrophysics Data System (ADS)

    Lutsenko, N. A.; Levin, V. A.

    2017-10-01

    Numerical modelling of smoldering in porous media under natural convection is considered. Smoldering can be defined as a flameless exothermic surface reaction; it is a type of heterogeneous combustion which can propagate in porous media. Peatbogs, landfills and other natural or man-made porous objects can sustain smoldering under natural (or free) convection, when the flow rate of gas passed through the porous object is unknown a priori. In the present work a numerical model is proposed for investigating smoldering in porous media under natural convection. The model is based on the assumption of interacting interpenetrating continua using classical approaches of the theory of filtration combustion and includes equations of state, continuity, momentum conservation and energy for solid and gas phases. Computational results obtained by means of the numerical model in the one-dimensional case are compared with the experimental data on smoldering combustion in polyurethane foam under free convection in the gravity field, which were described in the literature. Calculations show that when simulating both co-current combustion (when the smoldering wave moves upward) and counter-current combustion (when the smoldering wave moves downward), the numerical model can provide a good quantitative agreement with experiment if the parameters of the model are well defined.

  12. 29 CFR 553.224 - “Work period” defined.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false “Work period” defined. 553.224 Section 553.224 Labor... Enforcement Employees of Public Agencies Tour of Duty and Compensable Hours of Work Rules § 553.224 “Work period” defined. (a) As used in section 7(k), the term “work period” refers to any established and...

  13. Constructivist Learning Environments and Defining the Online Learning Community

    ERIC Educational Resources Information Center

    Brown, Loren

    2014-01-01

    The online learning community is frequently referred to, but ill defined. The constructivist philosophy and approach to teaching and learning is both an effective means of constructing an online learning community and a tool by which to define key elements of the learning community. In order to build a nurturing, self-sustaining online…

  14. 45 CFR 506.10 - “Vietnam conflict” defined.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 3 2014-10-01 2014-10-01 false “Vietnam conflict” defined. 506.10 Section 506.10... § 506.10 “Vietnam conflict” defined. Vietnam conflict refers to the period beginning February 28, 1961... proclaimed the date of May 7, 1975, to be the ending date of the “Vietnam era” (Presidential Proclamation No...

  15. 45 CFR 506.10 - “Vietnam conflict” defined.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 3 2013-10-01 2013-10-01 false “Vietnam conflict” defined. 506.10 Section 506.10... § 506.10 “Vietnam conflict” defined. Vietnam conflict refers to the period beginning February 28, 1961... proclaimed the date of May 7, 1975, to be the ending date of the “Vietnam era” (Presidential Proclamation No...

  16. 45 CFR 506.10 - “Vietnam conflict” defined.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 3 2012-10-01 2012-10-01 false “Vietnam conflict” defined. 506.10 Section 506.10... § 506.10 “Vietnam conflict” defined. Vietnam conflict refers to the period beginning February 28, 1961... proclaimed the date of May 7, 1975, to be the ending date of the “Vietnam era” (Presidential Proclamation No...

  17. 45 CFR 506.10 - “Vietnam conflict” defined.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false “Vietnam conflict” defined. 506.10 Section 506.10... § 506.10 “Vietnam conflict” defined. Vietnam conflict refers to the period beginning February 28, 1961... proclaimed the date of May 7, 1975, to be the ending date of the “Vietnam era” (Presidential Proclamation No...

  18. Reporting and methodological quality of meta-analyses in urological literature.

    PubMed

    Xia, Leilei; Xu, Jing; Guzzo, Thomas J

    2017-01-01

    To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5, protocol and registration, items 15 and 22, risk of bias across studies, and items 16 and 23, additional analysis, had less than 50% adherence. AMSTAR item 1, "a priori" design, item 5, list of studies, and item 10, publication bias, had less than 50% adherence. Logistic regression analyses showed that funding support and "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and "a priori" design were associated with superior methodological quality. Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.

  19. Kant and the Conservation of Matter

    NASA Astrophysics Data System (ADS)

    Morris, Joel

    This dissertation is an examination of Kant's rather notorious claim that natural science, or physics, has a priori principles, understood as the claim that physics is constrained by rules warranted by the essential nature of thought. The overall direction of this study is towards examining Kant's claim by close study of a particular principle of physics, the principle of the conservation of matter. If indeed this is a principle of physics, and Kant can successfully show that it is a priori, then it will be reasonable to conclude, in company with Kant, that physics has a priori principles. Although Kant's proof of the principle of the conservation of matter has been traditionally regarded as a reasonably straightforward consequence of his First Analogy of Experience, a careful reading of his proof reveals that this is not really the case. Rather, Kant's proof of the conservation of matter is a consequence of (i) his schematisation of the category of substance in terms of permanence, and (ii) his identification of matter as substance, by appeal to what he thinks is the empirical criterion of substance, activity. Careful examination of Kant's argument in defence of the principle of the conservation of matter, however, reveals a number of deficiencies, and it is concluded that Kant cannot be said to have satisfactorily demonstrated the principle of the conservation of matter or to have convincingly illustrated his claim that physics has a priori principles by appeal to this instance.

  20. Sterilization of tumor-positive lymph nodes of esophageal cancer by neo-adjuvant treatment is associated with worse survival compared to tumor-negative lymph nodes treated with surgery first.

    PubMed

    Mantziari, Styliani; Allemann, Pierre; Winiker, Michael; Sempoux, Christine; Demartines, Nicolas; Schäfer, Markus

    2017-09-01

    Lymph node (LN) involvement by esophageal cancer is associated with compromised long-term prognosis. This study assessed whether LN downstaging by neoadjuvant treatment (NAT) might offer a survival benefit compared to patients with a priori negative LN. Patients undergoing esophagectomy for cancer between 2005 and 2014 were screened for inclusion. Group 1 included cN0 patients confirmed as pN0 who were treated with surgery first, whereas group 2 included patients initially cN+ and down-staged to ypN0 after NAT. Survival analysis was performed with the Kaplan-Meier and Cox regression methods. Fifty-seven patients were included in our study, 24 in group 1 and 33 in group 2. Group 2 patients had more locally advanced lesions compared to a priori negative patients, and despite complete LN sterilization by NAT they still had worse long-term survival. Overall 3-year survival was 86.8% for a priori LN negative versus 63.3% for downstaged patients (P = 0.013), while disease-free survival was 79.6% and 57.9%, respectively (P = 0.021). Tumor recurrence was also earlier and more disseminated for the down-staged group. Downstaged LN, despite the systemic effect of NAT, still inherit an increased risk for early tumor recurrence and worse long-term survival compared to a priori negative LN. © 2017 Wiley Periodicals, Inc.

  1. Reference coordinate systems: An update. Supplement 11

    NASA Technical Reports Server (NTRS)

    Mueller, Ivan I.

    1988-01-01

    A common requirement for all geodetic investigations is a well-defined coordinate system attached to the earth in some prescribed way, as well as a well-defined inertial coordinate system in which the motions of the terrestrial frame can be monitored. The paper deals with the problems encountered when establishing such coordinate systems and the transformations between them. In addition, problems related to the modeling of the deformable earth are discussed. This paper is an updated version of the earlier work, Reference Coordinate Systems for Earth Dynamics: A Preview, by the author.

  2. Automatic left-atrial segmentation from cardiac 3D ultrasound: a dual-chamber model-based approach

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno; Sarvari, Sebastian I.; Orderud, Fredrik; Gérard, Olivier; D'hooge, Jan; Samset, Eigil

    2016-04-01

    In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03+/-0.6 mm). The AV plane was detected with an accuracy of -0.6+/-1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean +/-1.96 SD): 0.4+/-5.3 ml, 2.1+/-12.6 ml, and 1.5+/-7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.
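    As a toy illustration of the measurement-update idea behind the Kalman-filter fitting step, the sketch below corrects a single scalar model parameter from noisy edge detections; the paper's actual framework tracks a full parametric dual-chamber surface, which this does not attempt to reproduce, and all numbers are hypothetical.

      # Toy scalar Kalman-filter measurement update: a single model parameter (a
      # radius, in mm) is corrected frame by frame from noisy edge detections.
      # This only illustrates the update step, not the paper's surface model.
      def kalman_update(x, P, z, R):
          K = P / (P + R)                   # Kalman gain
          return x + K * (z - x), (1 - K) * P

      x, P = 20.0, 25.0                     # initial estimate and its variance
      R = 4.0                               # edge-detection noise variance
      for z in [22.1, 21.7, 22.4, 21.9]:    # hypothetical detected edge radii per frame
          x, P = kalman_update(x, P, z, R)
      print(round(x, 2), round(P, 3))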

  3. Appropriate complexity for the prediction of coastal and estuarine geomorphic behaviour at decadal to centennial scales

    NASA Astrophysics Data System (ADS)

    French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter

    2016-03-01

    Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed 'appropriate complexity modelling' of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.

  4. Comparison of oral amoxicillin with placebo for the treatment of world health organization-defined nonsevere pneumonia in children aged 2-59 months: a multicenter, double-blind, randomized, placebo-controlled trial in pakistan.

    PubMed

    Hazir, Tabish; Nisar, Yasir Bin; Abbasi, Saleem; Ashraf, Yusra Pervaiz; Khurshid, Joza; Tariq, Perveen; Asghar, Rai; Murtaza, Asifa; Masood, Tahir; Maqbool, Sajid

    2011-02-01

    World Health Organization (WHO) acute respiratory illness case management guidelines classify children with fast breathing as having pneumonia and recommend treatment with an antibiotic. There is concern that many of these children may not have pneumonia and are receiving antibiotics unnecessarily. This could increase antibiotic resistance in the community. The aim was to compare the clinical outcome at 72 h in children with WHO-defined nonsevere pneumonia when treated with amoxicillin, compared with placebo. We performed a double-blind, randomized, equivalence trial in 4 tertiary hospitals in Pakistan. Nine hundred children aged 2-59 months with WHO-defined nonsevere pneumonia were randomized to receive either 3 days of oral amoxicillin (45mg/kg/day) or placebo; 873 children completed the study. All children were followed up on days 3, 5, and 14. The primary outcome was therapy failure defined a priori at 72 h. In per-protocol analysis at day 3, 31 (7.2%) of the 431 children in the amoxicillin arm and 37 (8.3%) of the 442 in the placebo group had therapy failure. This difference was not statistically significant (odds ratio [OR], .85; 95%CI, .50-1.43; P = .60). The multivariate analysis identified history of difficult breathing (OR, 2.86; 95% CI, 1.29-7.23; P = .027) and temperature >37.5°C (100°F) at presentation (OR, 1.99; 95% CI, 1.37-2.90; P = .0001) as risk factors for treatment failure by day 5. Clinical outcome in children aged 2-59 months with WHO-defined nonsevere pneumonia is not different when treated with an antibiotic or placebo. Similar trials are needed in countries with a high burden of pneumonia to rationalize the use of antibiotics in these communities.

  5. Defining health-related quality of life for young wheelchair users: A qualitative health economics study

    PubMed Central

    2017-01-01

    Background: Wheelchairs for children with impaired mobility provide health, developmental and psychosocial benefits, however there is limited understanding of how mobility aids affect the health-related quality of life of children with impaired mobility. Preference-based health-related quality of life outcome measures are used to calculate quality-adjusted life years; an important concept in health economics. The aim of this research was to understand how young wheelchair users and their parents define health-related quality of life in relation to mobility impairment and wheelchair use. Methods: The sampling frame was children with impaired mobility (≤18 years) who use a wheelchair and their parents. Data were collected through semi-structured face-to-face interviews conducted in participants’ homes. Qualitative framework analysis was used to analyse the interview transcripts. An a priori thematic coding framework was developed. Emerging codes were grouped into categories, and refined into analytical themes. The data were used to build an understanding of how children with impaired mobility define health-related quality of life in relation to mobility impairment, and to assess the applicability of two standard measures of health-related quality of life. Results: Eleven children with impaired mobility and 24 parents were interviewed across 27 interviews. Participants defined mobility-related quality of life through three distinct but interrelated concepts: 1) participation and positive experiences; 2) self-worth and feeling fulfilled; 3) health and functioning. A good degree of consensus was found between child and parent responses, although there was some evidence to suggest a shift in perception of mobility-related quality of life with child age. Conclusions: Young wheelchair users define health-related quality of life in a distinct way as a result of their mobility impairment and adaptation use. Generic, preference-based measures of health-related quality of life lack sensitivity in this population. Development of a mobility-related quality of life outcome measure for children is recommended. PMID:28617820

  6. Population-Based Pediatric Reference Intervals in General Clinical Chemistry: A Swedish Survey.

    PubMed

    Ridefelt, Peter

    2015-01-01

    Very few high quality studies on pediatric reference intervals for general clinical chemistry and hematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The Swedish survey included 701 healthy children. Reference intervals for general clinical chemistry and hematology were defined.

  7. The role of internal reference prices in consumers' willingness to pay judgments: Thaler's Beer Pricing Task revisited.

    PubMed

    Ranyard, R; Charlton, J P; Williamson, J

    2001-02-01

    Alternative reference prices, either displayed in the environment (external) or recalled from memory (internal) are known to influence consumer judgments and decisions. In one line of previous research, internal reference prices have been defined in terms of general price expectations. However, Thaler (Marketing Science 4 (1985) 199; Journal of Behavioral Decision Making 12 (1999) 183) defined them as fair prices expected from specific types of seller. Using a Beer Pricing Task, he found that seller context had a substantial effect on willingness to pay, and concluded that this was due to specific internal reference prices evoked by specific contexts. In a think aloud study using the same task (N = 48), we found only a marginal effect of seller context. In a second study using the Beer Pricing Task and seven analogous ones (N = 144), general internal reference prices were estimated by asking people what they normally paid for various commodities. Both general internal reference prices and seller context influenced willingness to pay, although the effect of the latter was again rather small. We conclude that general internal reference prices have a greater impact in these scenarios than specific ones, because of the lower cognitive load involved in their storage and retrieval.

  8. An empirical approach to symmetry and probability

    NASA Astrophysics Data System (ADS)

    North, Jill

    We often rely on symmetries to infer outcomes' probabilities, as when we infer that each side of a fair coin is equally likely to come up on a given toss. Why are these inferences successful? I argue against answering this question with an a priori indifference principle. Reasons to reject such a principle are familiar, yet instructive. They point to a new, empirical explanation for the success of our probabilistic predictions. This has implications for indifference reasoning generally. I argue that a priori symmetries need never constrain our probability attributions, even for initial credences.

  9. GPS (Global Positioning System) Error Budgets, Accuracy and Applications Considerations for Test and Training Ranges.

    DTIC Science & Technology

    1982-12-01

    Figure captions (OCR-garbled in the source record): relationship of PDOP and HDOP with a priori altitude uncertainty in 3-dimensional navigation, and relationship of HDOP with a priori altitude uncertainty in 2-dimensional navigation, for various satellite azimuth/elevation (AZEL) configurations.

  10. Intelligent Control Systems Research

    NASA Technical Reports Server (NTRS)

    Loparo, Kenneth A.

    1994-01-01

    Results of a three phase research program into intelligent control systems are presented. The first phase looked at implementing the lowest or direct level of a hierarchical control scheme using a reinforcement learning approach assuming no a priori information about the system under control. The second phase involved the design of an adaptive/optimizing level of the hierarchy and its interaction with the direct control level. The third and final phase of the research was aimed at combining the results of the previous phases with some a priori information about the controlled system.
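    A minimal tabular Q-learning sketch of a "direct level" controller learned with no a priori plant model, on a toy setpoint-tracking task; the discretisation, reward, and plant dynamics below are hypothetical and are not taken from the report.

      # Tabular Q-learning on a toy setpoint-tracking task: the "plant" below is a
      # stand-in with noisy dynamics unknown to the learner, and the controller is
      # learned purely from reward, with no a priori model.  Discretisation,
      # reward, and dynamics are hypothetical.
      import random

      random.seed(0)
      n_states, actions = 11, (-1, 0, 1)        # discretised tracking error, control moves
      Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
      alpha, gamma, eps = 0.2, 0.9, 0.1
      target = n_states // 2                    # state 5 corresponds to zero error

      def plant(state, action):                 # unknown to the learner
          nxt = min(max(state + action + random.choice((-1, 0, 1)), 0), n_states - 1)
          return nxt, -abs(nxt - target)        # reward: negative absolute error

      for _ in range(5000):
          s = random.randrange(n_states)
          for _ in range(20):
              a = random.choice(actions) if random.random() < eps else \
                  max(actions, key=lambda a_: Q[(s, a_)])
              s2, r = plant(s, a)
              Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a_)] for a_ in actions) - Q[(s, a)])
              s = s2

      print([max(actions, key=lambda a_: Q[(st, a_)]) for st in range(n_states)])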

  11. How the Government Defines "Rural" Has Implications for Education Policies and Practices. Issues & Answers. REL 2007-010

    ERIC Educational Resources Information Center

    Arnold, Michael L.; Biscoe, Belinda; Farmer, Thomas W.; Robertson, Dylan L.; Shapley, Kathy L.

    2007-01-01

    Clearly defining what rural means has tangible implications for public policies and practices in education, from establishing resource needs to achieving the goals of No Child Left Behind in rural areas. The word "rural" has many meanings. It has been defined in reference to population density, geographic features, and level of economic…

  12. Satellite power system concept development and evaluation program. Volume 1: Technical assessment summary report

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Candidate satellite power system (SPS) concepts were identified and evaluated in terms of technical and cost factors. A number of alternative technically feasible approaches and system concepts were investigated. A reference system was defined to facilitate economic, environmental, and societal assessments by the Department of Energy. All elements of the reference system were defined including the satellite and all its subsystems, the orbital construction and maintenance bases, all elements of the space transportation system, the ground receiving station, and the associated industrial facilities for manufacturing the required hardware. The reference conclusions and remaining issues are stated for the following topical areas: system definition; energy conversion and power management; power transmission and reception; structures, controls, and materials; construction and operations; and space transportation.

  13. Parametric and non-parametric masking of randomness in sequence alignments can be improved and leads to better resolved trees.

    PubMed

    Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard

    2010-03-31

    Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the most decrease in conflict. Alignment masking improves signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood based models of sequence evolution, which opens the possibility of further improvements.
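    The sliding-window, Monte Carlo resampling idea can be illustrated in a highly simplified form: the observed match score in each window is compared against scores from shuffled sequence windows, and windows no better than random similarity are flagged for masking. This sketches the general idea only and is not the ALISCORE scoring model; the sequences and thresholds are hypothetical.

      # Highly simplified sliding-window profiling of two aligned sequences: the
      # observed match score per window is compared with a Monte Carlo null from
      # shuffling one window; windows that do not beat random similarity are
      # candidates for masking.  Not the ALISCORE model; inputs are hypothetical.
      import random

      def window_score(a, b):
          return sum(x == y and x != "-" for x, y in zip(a, b))

      def mask_profile(seq1, seq2, win=6, n_resamples=200, alpha=0.05, seed=0):
          rng = random.Random(seed)
          flags = []
          for i in range(len(seq1) - win + 1):
              w1, w2 = seq1[i:i + win], seq2[i:i + win]
              observed = window_score(w1, w2)
              null = [window_score(w1, rng.sample(w2, len(w2))) for _ in range(n_resamples)]
              p = sum(s >= observed for s in null) / n_resamples
              flags.append(p < alpha)           # True: similarity exceeds random expectation
          return flags

      s1, s2 = list("ACGTACGTAAGG--TTACGA"), list("ACGTACGTCCGG--AATGCA")
      print(mask_profile(s1, s2))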

  14. Effects of a standardised extract of Trifolium pratense (Promensil) at a dosage of 80mg in the treatment of menopausal hot flushes: A systematic review and meta-analysis.

    PubMed

    Myers, S P; Vigar, V

    2017-01-15

    To critically assess the evidence for a specific standardised extract of Trifolium pratense isoflavones (Promensil) at a dosage of 80mg/day in the treatment of menopausal hot flushes. Systematic literature searches were performed in Medline, Scopus, CINAHL Plus, Cochrane, AMED and InforRMIT and citations obtained from 1996 to March 2016. Reference lists were checked; corresponding authors contacted and the grey literature searched for additional publications. Studies were selected according to predefined inclusion and exclusion criteria. All randomised clinical trials of a specific standardised extract of Trifolium pratense isoflavones (Promensil) used as a mono-component at 80mg/day and measuring vasomotor symptoms were included. The data extraction and quality assessment were performed independently by one reviewer and validated by a second with any disagreements being settled by discussion. Weighted mean differences and 95% confidence intervals were calculated for continuous data using the fixed-effects model. Twenty potentially relevant papers were identified, with only five studies meeting the inclusion criteria. The meta-analysis demonstrated a statistically and clinically relevant reduction in hot flush frequency in the active treatment group compared to placebo (weighted mean difference 3.63 hot flushes per day; 95% CI 2.70-4.56; p<0.00001). Due to a lack of homogeneity, a priori defined sub-group analyses were performed demonstrating a substantive difference between cross-over and parallel-arm clinical trial designs. There is evidence for a statistically and clinically significant benefit for using a specific standardised extract of red clover isoflavones (Promensil) at 80mg/day for treating hot flushes in menopausal women across the 3 studies included in the meta-analysis. The preparation was safe over the short-term duration of the studies (3 months). Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.
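    As a hedged illustration of the pooling step described above (inverse-variance weighting under a fixed-effect model), the following Python sketch combines per-study mean differences; the numbers are placeholders, not the trial data.

    import math

    def fixed_effect_pool(mean_diffs, std_errors):
        """Pooled mean difference and 95% CI using inverse-variance weights."""
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

    # Hypothetical per-study reductions in daily hot flushes and their standard errors.
    print(fixed_effect_pool([3.2, 4.1, 3.8], [0.9, 1.2, 0.7]))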

  15. BIOMARKERS S100B AND NSE PREDICT OUTCOME IN HYPOTHERMIA-TREATED ENCEPHALOPATHIC NEWBORNS

    PubMed Central

    Massaro, An N.; Chang, Taeun; Baumgart, Stephen; McCarter, Robert; Nelson, Karin B.; Glass, Penny

    2014-01-01

    Objective To evaluate if serum S100B protein and neuron specific enolase (NSE) measured during therapeutic hypothermia are predictive of neurodevelopmental outcome at 15 months in children with neonatal encephalopathy (NE). Design Prospective longitudinal cohort study. Setting A level IV neonatal intensive care unit in a free-standing children’s hospital. Patients Term newborns with moderate to severe NE referred for therapeutic hypothermia during the study period. Interventions Serum NSE and S100B were measured at 0, 12, 24 and 72 hrs of hypothermia. Measurements and Main Results Of the 83 infants enrolled, 15 (18%) died in the newborn period. Survivors were evaluated by the Bayley Scales of Infant Development (BSID-II) at 15 months of age. Outcomes were assessed in 49/68 (72%) survivors at a mean age of 15.2±2.7 months. Neurodevelopmental outcome was classified by BSID-II Mental (MDI) and Psychomotor (PDI) Developmental Index scores, reflecting cognitive and motor outcomes, respectively. Four-level outcome classifications were defined a priori: normal= MDI/PDI within 1SD (>85), mild= MDI/PDI <1SD (70–85), moderate/severe= MDI/PDI <2SD (<70), or died. Elevated serum S100B and NSE levels measured during hypothermia were associated with increasing outcome severity after controlling for baseline and socioeconomic characteristics in ordinal regression models. Adjusted odds ratios for cognitive outcome were: S100B 2.5 (95% CI 1.3–4.8) and NSE 2.1 (1.2–3.6); for motor outcome: S100B 2.6 (1.2–5.6) and NSE 2.1 (1.2–3.6). Conclusions Serum S100B and NSE levels in babies with NE are associated with neurodevelopmental outcome at 15 months. These putative biomarkers of brain injury may help direct care during therapeutic hypothermia. PMID:24777302

  16. Biomarkers S100B and neuron-specific enolase predict outcome in hypothermia-treated encephalopathic newborns*.

    PubMed

    Massaro, An N; Chang, Taeun; Baumgart, Stephen; McCarter, Robert; Nelson, Karin B; Glass, Penny

    2014-09-01

    To evaluate if serum S100B protein and neuron-specific enolase measured during therapeutic hypothermia are predictive of neurodevelopmental outcome at 15 months in children with neonatal encephalopathy. Prospective longitudinal cohort study. A level IV neonatal ICU in a freestanding children's hospital. Term newborns with moderate to severe neonatal encephalopathy referred for therapeutic hypothermia during the study period. Serum neuron-specific enolase and S100B were measured at 0, 12, 24, and 72 hours of hypothermia. Of the 83 infants enrolled, 15 (18%) died in the newborn period. Survivors were evaluated by the Bayley Scales of Infant Development-II at 15 months. Outcomes were assessed in 49 of 68 survivors (72%) at a mean age of 15.2 ± 2.7 months. Neurodevelopmental outcome was classified by Bayley Scales of Infant Development-II Mental Developmental Index and Psychomotor Developmental Index scores, reflecting cognitive and motor outcomes, respectively. Four-level outcome classifications were defined a priori: normal = Mental Developmental Index/Psychomotor Developmental Index within 1 SD (> 85), mild = Mental Developmental Index/Psychomotor Developmental Index less than 1 SD (70-85), moderate/severe = Mental Developmental Index/Psychomotor Developmental Index less than 2 SD (< 70), or died. Elevated serum S100B and neuron-specific enolase levels measured during hypothermia were associated with increasing outcome severity after controlling for baseline and socioeconomic characteristics in ordinal regression models. Adjusted odds ratios for cognitive outcome were 2.5 (95% CI, 1.3-4.8) for S100B and 2.1 (95% CI, 1.2-3.6) for neuron-specific enolase, and for motor outcome, 2.6 (95% CI, 1.2-5.6) for S100B and 2.1 (95% CI, 1.2-3.6) for neuron-specific enolase. Serum S100B and neuron-specific enolase levels in babies with neonatal encephalopathy are associated with neurodevelopmental outcome at 15 months. These putative biomarkers of brain injury may help direct care during therapeutic hypothermia.
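    The ordinal regression behind the adjusted odds ratios can be sketched as a proportional-odds (ordinal logistic) model; the snippet below assumes statsmodels' experimental OrderedModel is available and uses synthetic data, not the study cohort.

    import numpy as np
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 200
    biomarker = rng.normal(size=n)                      # stand-in for a log-transformed biomarker
    latent = biomarker + rng.logistic(size=n)           # latent severity driven by the biomarker
    severity = np.digitize(latent, [-1.0, 0.5, 2.0])    # 0=normal, 1=mild, 2=moderate/severe, 3=died

    res = OrderedModel(severity, biomarker[:, None], distr="logit").fit(method="bfgs", disp=False)
    print("odds ratio per unit biomarker:", round(float(np.exp(np.asarray(res.params)[0])), 2))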

  17. [A lower adherence to Mediterranean diet is associated with a poorer self-rated health in university population].

    PubMed

    Barrios-Vicedo, Ricardo; Navarrete-Muñoz, Eva Maria; García de la Hera, Manuela; González-Palacios, Sandra; Valera-Gran, Desirée; Checa-Sevilla, José Francisco; Gimenez-Monzo, Daniel; Vioque, Jesús

    2014-09-15

    A higher adherence to Mediterranean diet is considered as a protective factor against the large number of deaths attributable to the main chronic degenerative diseases in developed countries. Self-rated health is established as a good indicator of population health status and as a predictor of mortality. Studies exploring the relationship between the adherence to Mediterranean diet and self-rated health are scarce, especially in young adults. Our aim was to explore the factors related to self-rated health, especially adherence to an a priori-defined Mediterranean diet, in a cohort of Spanish university students. We analyzed data from 1110 participants of the Spanish DiSA-UMH (Dieta, Salud y Antropometría en universitarios de la Universidad Miguel Hernández) study. Diet was assessed using a validated food frequency questionnaire and the adherence to Mediterranean diet was calculated using the relative Mediterranean Diet Score (rMED; score range: 0-18) according to the consumption of 9 dietary components. Self-rated health was gathered from the question: "In general, how do you consider your health to be? (Excellent, good, fair, poor, very poor)". Information on sociodemographic and lifestyle characteristics was also collected. Multinomial logistic regression (using relative risk ratio, RRR) was used to analyze the association between the adherence to Mediterranean diet (low rMED: 0-6 points; medium: 7-10 points; high: 11-18 points) and self-rated health (excellent (reference), good, and fair/poor/very poor). Low, medium and high adherence to the Mediterranean diet was found in 26.8%, 58.7% and 14.4% of participants, respectively; overall, 23.1% reported excellent, 65.1% good and the remainder fair/poor/very poor health. In multivariate analysis, a lower adherence to the Mediterranean diet was significantly associated with poorer self-rated health (p. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
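    The rMED-style adherence index can be sketched as a tertile-based score; the components, cut-offs and data below are hypothetical placeholders meant only to show the scoring logic (beneficial components score 0/1/2 by intake tertile, detrimental components score in reverse, and moderate alcohol scores 2).

    import numpy as np

    BENEFICIAL = ["fruit", "vegetables", "legumes", "cereals", "fish", "olive_oil"]
    DETRIMENTAL = ["meat", "dairy"]

    def tertile_points(values):
        """Assign 0/1/2 points according to the sample tertile of each intake value."""
        t1, t2 = np.percentile(values, [33.3, 66.7])
        return np.where(values <= t1, 0, np.where(values <= t2, 1, 2))

    def rmed_like_score(intakes, moderate_alcohol):
        """intakes: dict component -> array of energy-adjusted intakes; returns per-subject score (0-18)."""
        score = np.zeros(len(moderate_alcohol), dtype=int)
        for comp in BENEFICIAL:
            score += tertile_points(intakes[comp])
        for comp in DETRIMENTAL:
            score += 2 - tertile_points(intakes[comp])
        return score + np.where(moderate_alcohol, 2, 0)

    rng = np.random.default_rng(1)
    intakes = {c: rng.gamma(2.0, 50.0, size=10) for c in BENEFICIAL + DETRIMENTAL}
    print(rmed_like_score(intakes, rng.random(10) < 0.3))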

  18. Establishing best practices for the validation of atmospheric composition measurements from satellites

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Christopher

    As a contribution to the implementation of the Global Earth Observation System of Systems (GEOSS), the Committee on Earth Observation Satellites (CEOS) is developing a data quality strategy for satellite measurements. To achieve GEOSS requirements of consistency and interoperability (e.g. for comparison and for integrated interpretation) of the measurements and their derived data products, proper uncertainty assessment is essential and needs to be continuously monitored and traceable to standards. Therefore, CEOS has undertaken the task to establish a set of best practices and guidelines for satellite validation, starting with current practices that could be improved with time. Best practices are not intended to be imposed as firm requirements, but rather to be suggested as a baseline for comparing against, which could be used by the widest community and provide guidance to newcomers. The present paper reviews the current development of best practices and guidelines for the validation of atmospheric composition satellites. Terminologies and general principles of validation are recalled. Going beyond elementary definitions of validation like the assessment of uncertainties, the specific GEOSS context calls also for validation of individual service components and against user requirements. This paper emphasises two important aspects. The first is the question of "collocation". Validation generally involves comparisons with "reference" measurements of the same quantities, and the question of what constitutes a valid comparison is not the least of the challenges faced. We present a tentative scheme for defining the validity of a comparison and of the necessary "collocation" criteria. The second focus of this paper is the information content of the data product. Validation against user requirements, or the verification of the "fitness for purpose" of both the data products and their validation, needs to identify what information, in the final product, is really contributed by the measurement, as opposed to what is contributed by a priori constraints imposed by the retrieval.
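    A minimal sketch of what a collocation criterion can look like in practice is given below; the distance and time thresholds are arbitrary placeholders, not CEOS-endorsed values, and serve only to show how a satellite pixel and a ground-based reference measurement might be declared a valid comparison pair.

    import math
    from datetime import datetime, timedelta

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi, dlam = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def is_collocated(sat_obs, ground_obs, max_km=50.0, max_dt=timedelta(hours=1)):
        """True only if the pair satisfies both the spatial and the temporal criterion."""
        close_in_space = haversine_km(sat_obs["lat"], sat_obs["lon"],
                                      ground_obs["lat"], ground_obs["lon"]) <= max_km
        close_in_time = abs(sat_obs["time"] - ground_obs["time"]) <= max_dt
        return close_in_space and close_in_time

    sat = {"lat": 50.80, "lon": 4.35, "time": datetime(2024, 6, 1, 10, 30)}
    gnd = {"lat": 50.90, "lon": 4.50, "time": datetime(2024, 6, 1, 10, 5)}
    print(is_collocated(sat, gnd))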

  19. Uav Surveying for a Complete Mapping and Documentation of Archaeological Findings. The Early Neolithic Site of Portonovo

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Conati Barbaro, C.; Pierdicca, R.; Bozzi, C. A.; Tassetti, A. N.

    2016-06-01

    The huge potential of 3D digital acquisition techniques for the documentation of archaeological sites, as well as the related findings, is by now well established. In spite of the variety of available techniques, a sole documentation pipeline cannot be defined a priori because of the diversity of archaeological settings. Stratigraphic archaeological excavations, for example, require a systematic, quick and low cost 3D single-surface documentation because the nature of stratigraphic archaeology compels providing documentary evidence of any excavation phase. Because excavation is a destructive process, each single excavation phase can be identified, documented and interpreted only once, and a later re-examination of the work in the field is impossible without adequate documentation. In this context, this paper describes the methodology, carried out during the last years, to document in 3D the Early Neolithic site of Portonovo (Ancona, Italy) and, in particular, its latest step consisting of a photogrammetric aerial survey by means of a UAV platform. It completes the previous research carried out at the same site by means of terrestrial laser scanning and close range techniques and sets out different options for further reflection in terms of site coverage, resolution and campaign cost. With the support of a topographic network and a unique reference system, the full documentation of the site is managed in order to detail each excavation phase; moreover, the final output shows how the 3D digital methodology can be completely integrated with reasonable costs during the excavation and used to interpret the archaeological context. A further contribution of this work is the comparison between several acquisition techniques (i.e. terrestrial and aerial), which could be useful as a decision support system for different archaeological scenarios. The main objectives of the comparison are: i) the evaluation of 3D mapping accuracy from different data sources, ii) the definition of a standard pipeline for different archaeological needs and iii) the provision of different levels of detail according to user needs.

  20. Rapid Computation of Thermodynamic Properties over Multidimensional Nonbonded Parameter Spaces Using Adaptive Multistate Reweighting.

    PubMed

    Naden, Levi N; Shirts, Michael R

    2016-04-12

    We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
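    The linear basis-function idea can be sketched as follows: if the potential can be written as U(x; λ) = Σ_i λ_i f_i(x), then storing f_i(x) for each sampled configuration lets the reduced potential be re-evaluated at any unsampled parameter combination with a single matrix product, and the resulting energy matrix can then be handed to a multistate reweighting estimator such as MBAR. The basis functions and parameter grid below are placeholders, not the paper's force-field model.

    import numpy as np

    rng = np.random.default_rng(0)
    n_configs, n_basis = 1000, 3
    f = rng.normal(size=(n_configs, n_basis))      # stored basis-function values f_i(x_n)

    def reduced_potential(lambdas):
        """u_n(lambda) = sum_i lambda_i * f_i(x_n) for every stored configuration."""
        return f @ np.asarray(lambdas)

    # Energies for a grid of unsampled parameter combinations, at the cost of a matrix product each.
    grid = [(s, e) for s in np.linspace(0.8, 1.2, 5) for e in np.linspace(0.1, 0.5, 5)]
    u_kn = np.array([reduced_potential([s, e, 1.0]) for s, e in grid])
    print(u_kn.shape)                              # (25 parameter combinations, 1000 configurations)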

  1. Biomarker Reference Sets for Cancers in Women — EDRN Public Portal

    Cancer.gov

    The purpose of this study is to develop a standard reference set of specimens for use by investigators participating in the National Cancer Institute's Early Detection Research Network (EDRN) in defining false positive rates for new cancer biomarkers in women.

  2. Fabrication and Testing of Binary-Phase Fourier Gratings for Nonuniform Array Generation

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Crow, Robert W.; Ashley, Paul R.; Nelson, Tom R., Jr.; Parker, Jack H.; Beecher, Elizabeth A.

    2004-01-01

    This effort describes the fabrication and testing of binary-phase Fourier gratings designed to generate an incoherent array of output source points with nonuniform user-defined intensities, symmetric about the zeroth order. Like Dammann fanout gratings, these binary-phase Fourier gratings employ only two phase levels to generate a defined output array. Unlike Dammann fanout gratings, these gratings generate an array of nonuniform, user-defined intensities when projected into the far-field regime. The paper describes the process of design, fabrication, and testing for two different versions of the binary-phase grating: one designed for a 12 micron wavelength, referred to as the Long-Wavelength Infrared (LWIR) grating, and one designed for a 5 micron wavelength, referred to as the Mid-Wavelength Infrared (MWIR) grating.
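    Under the Fraunhofer approximation the far-field pattern of such a grating is the Fourier transform of the complex aperture function, so the relative intensities of the output orders can be explored numerically; the 0/pi transition points below are an arbitrary example, not the LWIR or MWIR designs described above.

    import numpy as np

    period_samples = 256
    transitions = [0.23, 0.58, 0.77]                  # normalised positions of 0 <-> pi phase flips
    x = np.linspace(0.0, 1.0, period_samples, endpoint=False)
    phase = np.zeros(period_samples)
    for t in transitions:
        phase[x >= t] = np.pi - phase[x >= t]         # toggle between 0 and pi past each transition

    aperture = np.exp(1j * phase)
    orders = np.fft.fftshift(np.fft.fft(aperture)) / period_samples
    intensity = np.abs(orders) ** 2                   # relative power in each diffraction order
    print(intensity[period_samples // 2 - 3: period_samples // 2 + 4].round(4))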

  3. A variational approach to dynamics of flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Wu, Shih-Chin; Haug, Edward J.; Kim, Sung-Soo

    1989-01-01

    This paper presents a variational formulation of constrained dynamics of flexible multibody systems, using a vector-variational calculus approach. Body reference frames are used to define the global position and orientation of individual bodies in the system; each body is located and oriented by the position of its frame origin and by Euler parameters, respectively. Small strain linear elastic deformation of individual components, relative to their body reference frames, is defined by linear combinations of deformation modes that are induced by constraint reaction forces and normal modes of vibration. A library of kinematic couplings between flexible and/or rigid bodies is defined and analyzed. Variational equations of motion for multibody systems are obtained and reduced to mixed differential-algebraic equations of motion. A space structure that must deform during deployment is analyzed, to illustrate use of the methods developed.
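    The body-reference-frame bookkeeping mentioned above can be illustrated with a small sketch: Euler parameters (a unit quaternion e0..e3) define the rotation matrix that maps body-frame vectors into the global frame, so a point fixed in a body is located by r = r_origin + A(e) s_body. The numbers are arbitrary.

    import numpy as np

    def rotation_from_euler_parameters(e):
        """Direction cosine matrix A(e) for the unit quaternion e = (e0, e1, e2, e3)."""
        e0, e1, e2, e3 = e / np.linalg.norm(e)
        return np.array([
            [1 - 2 * (e2**2 + e3**2), 2 * (e1*e2 - e0*e3),     2 * (e1*e3 + e0*e2)],
            [2 * (e1*e2 + e0*e3),     1 - 2 * (e1**2 + e3**2), 2 * (e2*e3 - e0*e1)],
            [2 * (e1*e3 - e0*e2),     2 * (e2*e3 + e0*e1),     1 - 2 * (e1**2 + e2**2)],
        ])

    r_origin = np.array([1.0, 0.0, 0.5])                            # body origin in the global frame
    s_body = np.array([0.2, 0.1, 0.0])                              # point fixed in the body frame
    e = np.array([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)])  # 45 degrees about the z axis
    print(r_origin + rotation_from_euler_parameters(e) @ s_body)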

  4. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
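    The NORMINV-style calculation can be sketched in Python, where scipy's norm.ppf and norm.cdf play the roles of Excel's NORMINV and NORMDIST; this is a hedged illustration of the principle (the fraction of a Gaussian reference population pushed outside the original reference limits by a given bias and imprecision), not the authors' exact spreadsheet.

    from scipy.stats import norm

    def fraction_outside(bias, imprecision):
        """bias and imprecision are expressed in units of the reference-population SD."""
        lower, upper = norm.ppf(0.025), norm.ppf(0.975)      # original 95% reference limits
        total_sd = (1.0 + imprecision ** 2) ** 0.5           # combined biological + analytical SD
        below = norm.cdf(lower, loc=bias, scale=total_sd)
        above = 1.0 - norm.cdf(upper, loc=bias, scale=total_sd)
        return below + above

    print(f"{fraction_outside(bias=0.2, imprecision=0.3):.3%}")  # compare against an allowed maximum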

  5. Body mass index and childhood obesity classification systems: A comparison of the French, International Obesity Task Force (IOTF) and World Health Organization (WHO) references.

    PubMed

    Kêkê, L M; Samouda, H; Jacobs, J; di Pompeo, C; Lemdani, M; Hubert, H; Zitouni, D; Guinhouya, B C

    2015-06-01

    This study aims to compare three body mass index (BMI)-based classification systems of childhood obesity: the French, the International Obesity Task Force (IOTF) and the World Health Organization (WHO) references. The study involved 1382 schoolchildren, recruited from the Lille Academic District in France in May 2009, aged 8.4±1.7 years (4.0-12.0 years). Their mean height and body mass were 131.5±10.9cm and 30.7±9.2kg, respectively, resulting in a BMI of 17.4±3.2kg/m(2). The weight status was defined according to the three systems considered in this study. The agreement between these references was tested using the Cohen's kappa coefficient. The prevalence of overweight was higher with the WHO references (20.0%) in comparison with the French references (13.8%; P<0.0001) and the IOTF (16.2%; P≤0.01). A similar result was found with obesity (WHO: 11.6% vs. IOTF: 6.7%; or French references: 6.7%; P<0.0001). Agreement between the three references ranged from "moderate" to "perfect" (0.43≤κ≤1.00; P<0.0001). Kappa coefficients were higher when the three references were used to classify children as obese (0.63≤κ≤1.00; P<0.0001) as compared to classification in the overweight (obesity excluded) category (0.43≤κ≤0.94; P<0.0001). When sex and age categories (4-6 years vs. 7-12 years) were considered to define the overweight status, the lowest kappa coefficient was found between the French and WHO references in boys aged 7-12 years (κ=0.28; P<0.0001), and the highest one in girls aged 7-12 years between the French references and IOTF (κ=0.97; P<0.0001). As for obesity, agreement between the three references ranged from 0.60 to 1.00 (P<0.0001), with the lowest values obtained in the comparison of the WHO references against French references or IOTF among boys aged 7-12 years (κ=0.60; P<0.0001). Overall, the WHO references yield an overestimation in overweight and/or obesity within this sample of schoolchildren as compared to the French references and the IOTF. The magnitude of agreement coefficients between the three references depends on both sex and age categories. The French references seem to be in rather close agreement with the IOTF in defining overweight, especially in 7-12-year-old children. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
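    The agreement statistic used above, Cohen's kappa, compares observed to chance-expected agreement between two categorical classifications; the short sketch below uses made-up labels rather than the study data.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa between two raters/classification systems over the same subjects."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    iotf = ["normal", "overweight", "normal", "obese", "overweight", "normal"]
    who  = ["normal", "overweight", "overweight", "obese", "overweight", "normal"]
    print(round(cohens_kappa(iotf, who), 3))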

  6. The subject-fixated coaxially sighted corneal light reflex: a clinical marker for centration of refractive treatments and devices.

    PubMed

    Chang, Daniel H; Waring, George O

    2014-11-01

    To describe the inconsistencies in definition, application, and usage of the ocular reference axes (optical axis, visual axis, line of sight, pupillary axis, and topographic axis) and angles (angle kappa, lambda, and alpha) and to propose a precise, reproducible, clinically defined reference marker and axis for centration of refractive treatments and devices. Perspective. Literature review of papers dealing with ocular reference axes, angles, and centration. The inconsistent definitions and usage of the current ocular axes, as derived from eye models, limit their clinical utility. With a clear understanding of Purkinje images and a defined alignment of the observer, light source/fixation target, and subject eye, the subject-fixated coaxially sighted corneal light reflex can be a clinically useful reference marker. The axis formed by connecting the subject-fixated coaxially sighted corneal light reflex and the fixation point, the subject-fixated coaxially sighted corneal light reflex axis, is independent of pupillary dilation and phakic status of the eye. The relationship of the subject-fixated coaxially sighted corneal light reflex axis to a refined definition of the visual axis without reference to nodal points, the foveal-fixation axis, is discussed. The displacement between the subject-fixated coaxially sighted corneal light reflex and pupil center is described not by an angle, but by a chord, here termed chord mu. The application of the subject-fixated coaxially sighted corneal light reflex to the surgical centration of refractive treatments and devices is discussed. As a clinically defined reference marker, the subject-fixated coaxially sighted corneal light reflex avoids the shortcomings of current ocular axes for clinical application and may contribute to better consensus in the literature and improved patient outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Deep brain stimulation for Parkinson's disease: defining the optimal location within the subthalamic nucleus.

    PubMed

    Bot, Maarten; Schuurman, P Richard; Odekerken, Vincent J J; Verhagen, Rens; Contarino, Fiorella Maria; De Bie, Rob M A; van den Munckhof, Pepijn

    2018-05-01

    Individual motor improvement after deep brain stimulation (DBS) of the subthalamic nucleus (STN) for Parkinson's disease (PD) varies considerably. Stereotactic targeting of the dorsolateral sensorimotor part of the STN is considered paramount for maximising effectiveness, but studies employing the midcommissural point (MCP) as anatomical reference failed to show correlation between DBS location and motor improvement. The medial border of the STN as reference may provide better insight in the relationship between DBS location and clinical outcome. Motor improvement after 12 months of 65 STN DBS electrodes was categorised into non-responding, responding and optimally responding body-sides. Stereotactic coordinates of optimal electrode contacts relative to both medial STN border and MCP served to define theoretic DBS 'hotspots'. Using the medial STN border as reference, significant negative correlation (Pearson's correlation -0.52, P<0.01) was found between the Euclidean distance from the centre of stimulation to this DBS hotspot and motor improvement. This hotspot was located at 2.8 mm lateral, 1.7 mm anterior and 2.5 mm superior relative to the medial STN border. Using MCP as reference, no correlation was found. The medial STN border proved superior compared with MCP as anatomical reference for correlation of DBS location and motor improvement, and enabled defining an optimal DBS location within the nucleus. We therefore propose the medial STN border as a better individual reference point than the currently used MCP on preoperative stereotactic imaging, in order to obtain optimal and thus less variable motor improvement for individual patients with PD following STN DBS. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
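    The distance-versus-improvement analysis can be sketched as follows, with synthetic contact coordinates standing in for the 65 electrodes: coordinates are expressed relative to the medial STN border, the Euclidean distance to the reported hotspot (2.8 mm lateral, 1.7 mm anterior, 2.5 mm superior) is computed, and that distance is correlated with motor improvement.

    import numpy as np

    hotspot = np.array([2.8, 1.7, 2.5])                     # (lateral, anterior, superior) in mm
    rng = np.random.default_rng(3)
    contacts = hotspot + rng.normal(scale=1.5, size=(65, 3))
    distance = np.linalg.norm(contacts - hotspot, axis=1)
    improvement = 70 - 8 * distance + rng.normal(scale=10, size=65)   # synthetic % improvement

    r = np.corrcoef(distance, improvement)[0, 1]            # a negative value, as reported above
    print(round(r, 2))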

  8. 40 CFR 53.2 - General requirements for a reference method determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... part. Further, FRM samplers must be manufactured in an ISO 9001-registered facility, as defined in § 53... manufactured in an ISO 9001-registered facility, as defined in § 53.1 and as set forth in § 53.51. (b...

  9. What Is Cyberspace?

    ERIC Educational Resources Information Center

    Bauwens, Michel

    1994-01-01

    Discusses the concept of cyberspace, defines three different levels that have been or will be attained, and compares it with mediaspace; examines the concept of virtualization, particularly the virtual library, and defines three different levels; and describes the concepts of cybrarians, cyberocracy, and cyberology. (Contains seven references.)…

  10. Parameter transferability within homogeneous regions and comparisons with predictions from a priori parameters in the eastern United States

    NASA Astrophysics Data System (ADS)

    Chouaib, Wafa; Alila, Younes; Caldwell, Peter V.

    2018-05-01

    The need for predictions of flow time-series persists at ungauged catchments, motivating the research goals of our study. By means of the Sacramento model, this paper explores the use of parameter transfer within homogeneous regions of similar climate and flow characteristics and makes comparisons with predictions from a priori parameters. We assessed performance using the Nash-Sutcliffe efficiency (NS), bias, mean monthly hydrograph and flow duration curve (FDC). The study was conducted on a large dataset of 73 catchments within the eastern US. Two approaches to parameter transferability were developed and evaluated: (i) parameter transfer within homogeneous regions, using one donor catchment specific to each region; (ii) parameter transfer disregarding the geographical limits of homogeneous regions, with one donor catchment common to all regions. Comparisons between both parameter transfers made it possible to assess the gain in performance from the parameter regionalization and its respective constraints and limitations. The parameter transfer within homogeneous regions outperformed the a priori parameters and led to a decrease in bias and increase in efficiency, reaching a median NS of 0.77 and a NS of 0.85 at individual catchments. The use of FDC revealed the effect of bias on the inaccuracy of prediction from parameter transfer. In one specific region of mountainous and forested catchments, the prediction accuracy of the parameter transfer was less satisfactory and equivalent to a priori parameters. In this region, the parameter transfer from the outsider catchment provided the best performance: less biased, with smaller uncertainty in medium flow percentiles (40%-60%). The large disparity of energy conditions explained the lack of performance from parameter transfer in this region. In addition, subsurface stormflow is predominant and lateral preferential flow is likely; the specific properties of such flow further explain the reduced efficiency. Testing parameter transferability at ungauged catchments using criteria of similar climate and flow characteristics, and comparing it with predictions from a priori parameters, is a novelty of this study. The ultimate limitations of both approaches are recognized and recommendations are made for future research.
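    The efficiency measure quoted above, the Nash-Sutcliffe efficiency (NS), can be written in a few lines; 1 indicates a perfect fit and 0 means the simulation is no better than the observed mean. The flows below are placeholders.

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])
    sim = np.array([1.0, 3.0, 3.1, 4.8, 4.4])
    print(round(nash_sutcliffe(obs, sim), 2))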

  11. Adaptive critic neural network-based object grasping control using a three-finger gripper.

    PubMed

    Jagannathan, S; Galan, Gustavo

    2004-03-01

    Grasping of objects has been a challenging task for robots. The complex grasping task can be defined as object contact control and manipulation subtasks. In this paper, object contact control subtask is defined as the ability to follow a trajectory accurately by the fingers of a gripper. The object manipulation subtask is defined in terms of maintaining a predefined applied force by the fingers on the object. A sophisticated controller is necessary since the process of grasping an object without a priori knowledge of the object's size, texture, softness, gripper, and contact dynamics is rather difficult. Moreover, the object has to be secured accurately and considerably fast without damaging it. Since the gripper, contact dynamics, and the object properties are not typically known beforehand, an adaptive critic neural network (NN)-based hybrid position/force control scheme is introduced. The feedforward action generating NN in the adaptive critic NN controller compensates the nonlinear gripper and contact dynamics. The learning of the action generating NN is performed on-line based on a critic NN output signal. The controller ensures that a three-finger gripper tracks a desired trajectory while applying desired forces on the object for manipulation. Novel NN weight tuning updates are derived for the action generating and critic NNs so that Lyapunov-based stability analysis can be shown. Simulation results demonstrate that the proposed scheme successfully allows fingers of a gripper to secure objects without the knowledge of the underlying gripper and contact dynamics of the object compared to conventional schemes.

  12. From "Where" to "What": Distributed Representations of Brand Associations in the Human Brain.

    PubMed

    Chen, Yu-Ping; Nelson, Leif D; Hsu, Ming

    2015-08-01

    Considerable attention has been given to the notion that there exists a set of human-like characteristics associated with brands, referred to as brand personality. Here we combine newly available machine learning techniques with functional neuroimaging data to characterize the set of processes that give rise to these associations. We show that brand personality traits can be captured by the weighted activity across a widely distributed set of brain regions previously implicated in reasoning, imagery, and affective processing. That is, as opposed to being constructed via reflective processes, brand personality traits appear to exist a priori inside the minds of consumers, such that we were able to predict what brand a person is thinking about based solely on the relationship between brand personality associations and brain activity. These findings represent an important advance in the application of neuroscientific methods to consumer research, moving from work focused on cataloguing brain regions associated with marketing stimuli to testing and refining mental constructs central to theories of consumer behavior.

  13. Near-field Light Scattering Techniques for Measuring Nanoparticle-Surface Interaction Energies and Forces.

    PubMed

    Schein, Perry; Ashcroft, Colby K; O'Dell, Dakota; Adam, Ian S; DiPaolo, Brian; Sabharwal, Manit; Shi, Ce; Hart, Robert; Earhart, Christopher; Erickson, David

    2015-08-15

    Nanoparticles are quickly becoming commonplace in many commercial and industrial products, ranging from cosmetics to pharmaceuticals to medical diagnostics. Predicting the stability of the engineered nanoparticles within these products a priori remains an important and difficult challenge. Here we describe our techniques for measuring the mechanical interactions between nanoparticles and surfaces using near-field light scattering. Particle-surface interfacial forces are measured by optically "pushing" a particle against a reference surface and observing its motion using scattered near-field light. Unlike atomic force microscopy, this technique is not limited by thermal noise, but instead takes advantage of it. The integrated waveguide and microfluidic architecture allow for high-throughput measurements of about 1000 particles per hour. We characterize the reproducibility of and experimental uncertainty in the measurements made using the NanoTweezer surface instrument. We report surface interaction studies on gold nanoparticles with 50 nm diameters, smaller than previously reported in the literature using similar techniques.

  14. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on "a priori" information. It accesses out-the-window "snapshots" from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a "clear-day" video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.

  15. Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu

    This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.

  16. Design of adaptive control systems by means of self-adjusting transversal filters

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.

    1986-01-01

    The design of closed-loop adaptive control systems based on nonparametric identification was addressed. Implementation is by self-adjusting Least Mean Square (LMS) transversal filters. The design concept is Model Reference Adaptive Control (MRAC). Major issues are to preserve the linearity of the error equations of each LMS filter, and to prevent estimation bias that is due to process or measurement noise, thus providing necessary conditions for the convergence and stability of the control system. The controlled element is assumed to be asymptotically stable and minimum phase. Because of the nonparametric Finite Impulse Response (FIR) estimates provided by the LMS filters, a-priori information on the plant model is needed only in broad terms. Following a survey of control system configurations and filter design considerations, system implementation is shown here in Single Input Single Output (SISO) format which is readily extendable to multivariable forms. In extensive computer simulation studies the controlled element is represented by a second-order system with widely varying damping, natural frequency, and relative degree.
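    The core of a self-adjusting transversal filter is the LMS update, in which the FIR weights are nudged along the instantaneous gradient of the squared error between the desired (reference) signal and the filter output; the sketch below identifies a synthetic plant and is only a generic illustration, not the MRAC architecture of the paper.

    import numpy as np

    def lms_identify(x, d, n_taps=8, mu=0.05):
        """Estimate an FIR model of an unknown plant from input x and desired output d."""
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]      # x[n], x[n-1], ..., x[n-n_taps+1]
            e = d[n] - w @ u                       # estimation error
            w += mu * e * u                        # LMS weight update
        return w

    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    true_plant = np.array([0.6, -0.3, 0.1])
    d = np.convolve(x, true_plant)[:len(x)]
    print(lms_identify(x, d).round(2))             # leading taps approach the plant response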

  17. Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers

    NASA Astrophysics Data System (ADS)

    Jiang, Chufan; Li, Beiwen; Zhang, Song

    2017-04-01

    This paper presents a method that can recover absolute phase pixel by pixel without embedding markers on three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware component(s). The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough a priori knowledge of surface geometry; (2) artificially create phase maps at different z planes using geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create absolute phase map pixel by pixel even for large depth range objects. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
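    Step (3) of the pipeline can be illustrated with a minimal sketch of reference-based, pixel-wise unwrapping: the fringe order k is chosen so that the unwrapped phase lands closest to an approximate reference phase (for example one created artificially from the geometric constraints), with no spatial unwrapping needed. The arrays below are synthetic.

    import numpy as np

    def unwrap_against_reference(phi_wrapped, phi_reference):
        """phi_wrapped in (-pi, pi]; phi_reference is an approximate absolute phase map."""
        k = np.round((phi_reference - phi_wrapped) / (2 * np.pi))
        return phi_wrapped + 2 * np.pi * k

    truth = np.linspace(0, 30, 6)                            # synthetic absolute phase
    wrapped = (truth + np.pi) % (2 * np.pi) - np.pi
    reference = truth + np.random.default_rng(0).normal(scale=0.5, size=truth.size)
    print(np.allclose(unwrap_against_reference(wrapped, reference), truth))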

  18. Modeling aspects of the surface reconstruction problem

    NASA Astrophysics Data System (ADS)

    Toth, Charles K.; Melykuti, Gabor

    1994-08-01

    The ultimate goal of digital photogrammetry is to automatically produce digital maps which may in turn form the basis of GIS. Virtually all work in surface reconstruction deals with various kinds of approximations and constraints that are applied. In this paper we extend these concepts in various ways. For one, matching is performed in object space. Thus, matching and densification (modeling) is performed in the same reference system. Another extension concerns the solution of the second sub-problem. Rather than simply densifying (interpolating) the surface, we propose to model it. This combined top-down and bottom-up approach is performed in scale space, whereby the model is refined until compatibility between the data and expectations is reached. The paper focuses on the modeling aspects of the surface reconstruction problem. Obviously, the top-down and bottom-up model descriptions ought to be in a form which allows the generation and verification of hypotheses. Another crucial question is the degree of a priori scene knowledge necessary to constrain the solution space.

  19. From “Where” to “What”: Distributed Representations of Brand Associations in the Human Brain

    PubMed Central

    Chen, Yu-Ping; Nelson, Leif D.; Hsu, Ming

    2015-01-01

    Considerable attention has been given to the notion that there exists a set of human-like characteristics associated with brands, referred to as brand personality. Here we combine newly available machine learning techniques with functional neuroimaging data to characterize the set of processes that give rise to these associations. We show that brand personality traits can be captured by the weighted activity across a widely distributed set of brain regions previously implicated in reasoning, imagery, and affective processing. That is, as opposed to being constructed via reflective processes, brand personality traits appear to exist a priori inside the minds of consumers, such that we were able to predict what brand a person is thinking about based solely on the relationship between brand personality associations and brain activity. These findings represent an important advance in the application of neuroscientific methods to consumer research, moving from work focused on cataloguing brain regions associated with marketing stimuli to testing and refining mental constructs central to theories of consumer behavior. PMID:27065490

  20. Implications of genome wide association studies for addiction: are our a priori assumptions all wrong?

    PubMed

    Hall, F Scott; Drgonova, Jana; Jain, Siddharth; Uhl, George R

    2013-12-01

    Substantial genetic contributions to addiction vulnerability are supported by data from twin studies, linkage studies, candidate gene association studies and, more recently, Genome Wide Association Studies (GWAS). Parallel to this work, animal studies have attempted to identify the genes that may contribute to responses to addictive drugs and addiction liability, initially focusing upon genes for the targets of the major drugs of abuse. These studies identified genes/proteins that affect responses to drugs of abuse; however, this does not necessarily mean that variation in these genes contributes to the genetic component of addiction liability. One of the major problems with initial linkage and candidate gene studies was an a priori focus on the genes thought to be involved in addiction based upon the known contributions of those proteins to drug actions, making the identification of novel genes unlikely. The GWAS approach is systematic and agnostic to such a priori assumptions. From the numerous GWAS now completed, several conclusions may be drawn: (1) addiction is highly polygenic, with each allelic variant contributing in a small, additive fashion to addiction vulnerability; (2) classes of genes that are unexpected, compared to our a priori assumptions, are the most important in explaining addiction vulnerability; (3) although substantial genetic heterogeneity exists, there is substantial convergence of GWAS signals on particular genes. This review traces the history of this research, from initial transgenic mouse models based upon candidate gene and linkage studies, through the progression of GWAS for addiction and nicotine cessation, to the current human and transgenic mouse studies post-GWAS. © 2013.

  1. 40 CFR 53.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...

  2. Intelligence Defined and Undefined: A Relativistic Appraisal

    ERIC Educational Resources Information Center

    Wechsler, David

    1975-01-01

    Major reasons for the continuing divergency of opinion as regards the nature and meaning of intelligence are examined. An appraisal of intelligence as a relative concept is proposed which advocates the necessity of specifying the reference systems to which a statement about intelligence refers. (EH)

  3. Analysis of space systems study for the space disposal of nuclear waste study report. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Reasonable space systems concepts were systematically identified and defined, and a total system was evaluated for the space disposal of nuclear wastes. Areas studied include space destinations, space transportation options, launch site options, payload protection approaches, and payload rescue techniques. Systems level cost and performance trades defined four alternative space systems which deliver payloads to the selected 0.85 AU heliocentric orbit destination at least as economically as the reference system without requiring removal of the protective radiation shield container. No concepts significantly less costly than the reference concept were identified.

  4. The a priori SDR Estimation Techniques with Reduced Speech Distortion for Acoustic Echo and Noise Suppression

    NASA Astrophysics Data System (ADS)

    Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon

    This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is modified based upon the Two-Step Noise Reduction (TSNR) algorithm to suppress the background noise while preserving speech spectral components. In addition, a practical approach to accurately determine the Echo Spectrum Variance (ESV) is presented, based upon the assumption of a linear relationship between the power spectra of the far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals confirm that our improved AENS system is able to efficiently mitigate the problem of acoustic echo and background noise, while preserving speech quality and intelligibility.
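    A generic two-step a priori SNR/SDR estimate of this flavour can be sketched as below; it follows the spirit of decision-directed and TSNR-style estimation but is only an illustration, not the authors' AENS algorithm, and the spectra are synthetic.

    import numpy as np

    def two_step_gain(noisy_power, noise_power, prev_clean_power, alpha=0.98):
        """Per-frequency-bin gain for one frame from a two-step a priori SNR estimate."""
        post_snr = noisy_power / noise_power
        # Step 1: decision-directed a priori SNR (mix of previous clean estimate and current frame).
        prio_1 = alpha * prev_clean_power / noise_power + (1 - alpha) * np.maximum(post_snr - 1, 0)
        gain_1 = prio_1 / (1 + prio_1)                         # Wiener gain from the first estimate
        # Step 2: re-estimate the a priori SNR from the first-pass clean-speech estimate.
        prio_2 = (gain_1 ** 2) * post_snr
        return prio_2 / (1 + prio_2)                           # refined gain applied to the frame

    rng = np.random.default_rng(0)
    noise = np.full(8, 0.1)
    speech = rng.gamma(2.0, 0.2, size=8)
    print(two_step_gain(speech + noise, noise, prev_clean_power=speech).round(2))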

  5. Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Rayner, P. J.

    2017-12-01

    The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and a priori information to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that have assumed homogeneity. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the "forward" mode, but at the sacrifice of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis that perturbs our observations and a priori information within their probability distributions, to estimate the posterior uncertainty of thermo-chemical parameters in the crust.

  6. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

    In the following narrative, metadata required to locate a file on tape or collection of tapes will be referred to as file-level metadata. This paper describes the rationale for and the history of the effort to define a standard for this metadata.

  7. Alternative Fuels Data Center

    Science.gov Websites

    reasonably available. Practicability and measures of compliance are defined in rules adopted by the Department. Standards, measures, targets, and tools support agencies in reducing greenhouse gas emissions. (Reference Executive Order

  8. A New Global Geodetic Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, C.; Blewitt, G.; Klein, E. C.; Shen, Z.; Wang, M.; Estey, L.; Wier, S.

    2013-12-01

    As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement on the previous model from 2004 (v. 1.2). The model is still based on a finite-element type approach and has deforming cells in between the assumed rigid plates. The new model contains ~144,700 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006). We made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested either the presence of deforming areas or a rigid block where those previous studies did not. GSRM v.2 includes 50 plates and blocks, including many not considered by Bird (2003). The new GSRM model is based on over 20,700 horizontal geodetic velocities at over 17,000 unique locations. The GPS velocity field consists of: 1) over 6,500 velocities derived by the University of Nevada, Reno, for CGPS stations for which >2.5 years of RINEX data are available until April 2013; 2) ~1,200 velocities for China from a new analysis of all data from the Crustal Movement Network of China (CMONOC); and 3) about 13,000 velocities from 212 studies published in the literature or made otherwise available to us. Velocities from all studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. We model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time-series that reflect magmatic and anthropogenic activity. GPS velocities were used to estimate angular velocities for 36 of the 50 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary conditions in the strain rate calculations. For the strain rate calculations we used the method of Haines and Holt. In order to fit the data equally well in slowly and rapidly deforming areas, we first calculated a very smooth model by setting the a priori variances of the strain rate components very low. We then used this model as a proxy for the a priori standard deviations of the final model, at least for the areas that are well constrained by the GPS data. We will show examples of the strain rate and velocity field results. We will also highlight how and where the results can be viewed and accessed through a dedicated webportal (gsrm2.unavco.org). New GPS velocities (in any reference frame) can be uploaded to a new tool and displayed together with velocities used in GSRM v.2 in 53 reference frames (http://facility.unavco.org/data/maps/GPSVelocityViewer/GSRMViewer.html).

  9. Large eddy simulations of compressible magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Grete, Philipp

    2017-02-01

    Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the subsonic (sonic Mach number M s ≈ 0.2) to the highly supersonic (M s ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M s ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
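    The a priori test methodology referred to above can be illustrated in one dimension: an explicit (here top-hat) filter splits a high-resolution field into resolved and subgrid parts, the exact SGS term is computed from the unfiltered data, and a closure evaluated from the filtered field is correlated against it. The field and filter width below are synthetic stand-ins, not MHD turbulence data.

    import numpy as np

    def top_hat_filter(field, width=8):
        kernel = np.ones(width) / width
        return np.convolve(field, kernel, mode="same")

    rng = np.random.default_rng(0)
    u = np.cumsum(rng.normal(size=512))            # stand-in for one velocity component
    u_bar = top_hat_filter(u)

    # Exact SGS "stress" for the quadratic term u*u: tau = bar(u u) - bar(u) bar(u)
    tau_exact = top_hat_filter(u * u) - u_bar * u_bar
    # Scale-similarity closure evaluated from the resolved field only.
    tau_model = top_hat_filter(u_bar * u_bar) - top_hat_filter(u_bar) * top_hat_filter(u_bar)
    print(round(np.corrcoef(tau_exact, tau_model)[0, 1], 2))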

  10. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when its model parameter is optimized, and superior to that strategy when the model parameter is not optimized. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
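    The central idea, drawing sampling locations from a density given by the power spectrum of a reference data set rather than from a hand-tuned polynomial model, can be sketched as follows with a synthetic reference image.

    import numpy as np

    rng = np.random.default_rng(0)
    ny, nx = 128, 128
    yy, xx = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    reference = np.exp(-4 * (xx ** 2 + yy ** 2)) + 0.05 * rng.normal(size=(ny, nx))

    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
    pdf = spectrum / spectrum.sum()                            # sampling density over k-space

    acceleration = 4
    n_samples = (ny * nx) // acceleration
    chosen = rng.choice(ny * nx, size=n_samples, replace=False, p=pdf.ravel())
    mask = np.zeros(ny * nx, dtype=bool)
    mask[chosen] = True
    mask = mask.reshape(ny, nx)                                # k-space sampling pattern
    print(mask.mean())                                         # about 1/acceleration of k-space kept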

  11. How To Refer to People with Disabilities: A Primer for Laypeople.

    ERIC Educational Resources Information Center

    Beadles, Robert J., Jr.

    2001-01-01

    This article discusses the movement toward focusing on the individual rather than the disabling condition when referring to people with disabilities and contrasts acceptable and unacceptable terminology for people with different types of disabilities. The terms "impairment," "disability," and "handicap" are defined. (CR)

  12. Determination of the extragalactic-planetary frame tie from joint analysis of radio interferometric and lunar laser ranging measurements

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Charlot, P.; Finger, M. H.; Williams, J. G.; Sovers, O. J.; Newhall, XX; Standish, E. M., Jr.

    1994-01-01

    Very Long Baseline Interferometry (VLBI) observations of extragalactic radio sources provide the basis for defining an accurate non-rotating reference frame in terms of angular positions of the sources. Measurements of the distance from the Earth to the Moon and to the inner planets provide the basis for defining an inertial planetary ephemeris reference frame. The relative orientation, or frame tie, between these two reference frames is of interest for combining Earth orientation measurements, for comparing Earth orientation results with theories referred to the mean equator and equinox, and for determining the positions of the planets with respect to the extragalactic reference frame. This work presents an indirect determination of the extragalactic-planetary frame tie from a combined reduction of VLBI and Lunar Laser Ranging (LLR) observations. For this determination, data acquired by LLR tracking stations since 1969 have been analyzed and combined with 14 years of VLBI data acquired by NASA's Deep Space Network since 1978. The frame tie derived from this joint analysis, with an accuracy of 0.003 sec, is the most accurate determination obtained so far. This result, combined with a determination of the mean ecliptic (defined in the rotating sense), shows that the mean equinox of epoch J2000 is offset from the x-axis of the extragalactic frame adopted by the International Earth Rotation Service for astrometric and geodetic applications by 0.078 sec +/- 0.010 sec along the y-direction and by 0.019 sec +/- 0.001 sec along the z-direction.
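
    For illustration, a frame tie between two nearly aligned frames can be applied as a first-order small-angle rotation of coordinate axes. The sketch below uses placeholder angles and a generic sign convention, not the adopted values or conventions of this analysis.

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # radians per arcsecond


def small_angle_frame_rotation(eps_x, eps_y, eps_z):
    # First-order (passive) rotation matrix for small frame-tie angles, in radians;
    # the sign convention depends on how the offsets are defined in a given analysis.
    return np.array([[1.0,    eps_z, -eps_y],
                     [-eps_z, 1.0,    eps_x],
                     [eps_y, -eps_x,  1.0]])


# Placeholder angles only: rotate a unit direction vector expressed in one
# frame (e.g. the planetary ephemeris frame) into the other (e.g. extragalactic).
R = small_angle_frame_rotation(0.0, 0.010 * ARCSEC, 0.020 * ARCSEC)
v_frame_a = np.array([1.0, 0.0, 0.0])
v_frame_b = R @ v_frame_a
print(v_frame_b)
```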

  13. Optimal minimal measurements of mixed states

    NASA Astrophysics Data System (ADS)

    Vidal, G.; Latorre, J. I.; Pascual, P.; Tarrach, R.

    1999-07-01

    The optimal and minimal measuring strategy is obtained for a two-state system prepared in a mixed state with a probability given by any isotropic a priori distribution. We explicitly construct the specific optimal and minimal generalized measurements, which turn out to be independent of the a priori probability distribution, obtaining the best guesses for the unknown state as well as a closed expression for the maximal mean-average fidelity. We do this for up to three copies of the unknown state in a way that leads to the generalization to any number of copies, which we then present and prove.
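
    For context, the figure of merit in such estimation problems is usually a mean fidelity of the guess. In standard notation (this is the generic definition, not the paper's closed-form result), for a generalized measurement {M_k} acting on N copies of the unknown state and a guess rho_k associated with each outcome k,

    \[
    \bar{F} \;=\; \int \! d\rho \, p(\rho) \sum_k \operatorname{Tr}\!\left[ M_k \, \rho^{\otimes N} \right] F(\rho, \rho_k),
    \qquad
    F(\rho,\sigma) \;=\; \left( \operatorname{Tr} \sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}} \right)^{2},
    \]

    where p(rho) is the isotropic a priori distribution and F is the Uhlmann fidelity between the true state and the guess.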

  14. Precipitation from the GPM Microwave Imager and Constellation Radiometers

    NASA Astrophysics Data System (ADS)

    Kummerow, Christian; Randel, David; Kirstetter, Pierre-Emmanuel; Kulie, Mark; Wang, Nai-Yu

    2014-05-01

    Satellite precipitation retrievals from microwave sensors are fundamentally underconstrained, requiring either implicit or explicit a-priori information to constrain solutions. The radiometer algorithm designed for the GPM core and constellation satellites makes this a-priori information explicit in the form of a database of possible rain structures from the GPM core satellite and a Bayesian retrieval scheme. The a-priori database will eventually come from the GPM core satellite's combined radar/radiometer retrieval algorithm. That product is physically constrained to ensure radiometric consistency between the radars and radiometers and is thus ideally suited to create the a-priori databases for all radiometers in the GPM constellation. Until a robust product exists, however, the a-priori databases are being generated from the combination of existing sources over land and oceans. Over oceans, the Day-1 GPM radiometer algorithm uses the TRMM PR/TMI physically derived hydrometeor profiles that are available from the tropics through sea surface temperatures of approximately 285 K. For colder sea surface temperatures, the existing profiles are used with lower hydrometeor layers removed to correspond to colder conditions. While not ideal, the results appear to be reasonable placeholders until the full GPM database can be constructed. It is more difficult to construct physically consistent profiles over land due to ambiguities in surface emissivities as well as details of the ice scattering that dominates brightness temperature signatures over land. Over land, the a-priori databases have therefore been constructed by matching satellite overpasses to surface radar data derived from the WSR-88D network over the continental United States through the National Mosaic and Multi-Sensor QPE (NMQ) initiative. Databases are generated as a function of land type (4 categories of increasing vegetation cover as well as 4 categories of increasing snow depth), land surface temperature and total precipitable water. One year of coincident observations, generating between 20 and 80 million database entries depending upon the sensor, is used in the retrieval algorithm. The remaining areas such as sea ice and high-latitude coastal zones are filled with a combination of CloudSat and AMSR-E plus MHS observations together with a model to create the equivalent databases for other radiometers in the constellation. The most noteworthy result from the Day-1 algorithm is the quality of the land products when compared to existing products. Unlike previous versions of land algorithms that depended upon complex screening routines to decide if pixels were precipitating or not, the current scheme is free of conditional rain statements and appears to produce rain rates with much greater fidelity than previous schemes. These results will be shown.
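
    The Bayesian step can be sketched as a database-weighting scheme: each a priori profile is weighted by the misfit between its simulated brightness temperatures and the observation, and the retrieved rain rate is the weighted mean. The sketch below is a simplified illustration only; the operational algorithm also stratifies the database by surface class, temperature and total precipitable water, and uses full error covariances rather than a single scalar uncertainty.

```python
import numpy as np


def bayesian_rain_retrieval(tb_obs, tb_database, rain_database, sigma=2.0):
    """Weight every a priori database profile by its brightness-temperature
    mismatch to the observation and return the weighted-mean rain rate."""
    d2 = np.sum((tb_database - tb_obs) ** 2, axis=1)   # squared misfit over all channels
    weights = np.exp(-0.5 * d2 / sigma ** 2)
    weights /= weights.sum()
    return np.sum(weights * rain_database)


rng = np.random.default_rng(1)
tb_db = 150.0 + 100.0 * rng.random((10000, 9))         # hypothetical 9-channel database
rain_db = rng.gamma(0.5, 2.0, size=10000)              # hypothetical associated rain rates
tb_obs = tb_db[42] + rng.normal(0.0, 1.0, size=9)      # "observation" near one database entry
print("retrieved rain rate:", bayesian_rain_retrieval(tb_obs, tb_db, rain_db))
```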

  15. Non a Priori Automatic Discovery of 3D Chemical Patterns: Application to Mutagenicity.

    PubMed

    Rabatel, Julien; Fannes, Thomas; Lepailleur, Alban; Le Goff, Jérémie; Crémilleux, Bruno; Ramon, Jan; Bureau, Ronan; Cuissart, Bertrand

    2017-10-01

    This article introduces a new type of structural fragment called a geometrical pattern. Such geometrical patterns are defined as molecular graphs that include a labelling of atoms together with constraints on interatomic distances. The discovery of geometrical patterns in a chemical dataset relies on the induction of multiple decision trees combined in random forests. Each computational step corresponds to a refinement of a preceding set of constraints, extending a previous geometrical pattern. This paper focuses on the mutagenicity of chemicals via the definition of structural alerts in relation to these geometrical patterns. An experimental assessment of the main geometrical patterns follows, showing how they can efficiently originate the definition of a chemical feature related to a chemical function or a chemical property. Geometrical patterns have provided a valuable and innovative approach to bring new pieces of information for discovering and assessing structural characteristics in relation to a particular biological phenotype. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
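
    A toy representation of such a pattern might pair labelled atoms with lower and upper bounds on selected interatomic distances, as sketched below; the atom labels and distance bounds are hypothetical placeholders chosen purely for illustration, not patterns from the paper.

```python
from dataclasses import dataclass, field
from math import dist


@dataclass
class GeometricalPattern:
    """Toy geometrical pattern: labelled atoms plus bounds on interatomic distances."""
    atom_labels: list                                      # e.g. ["N", "O", "O"]
    distance_bounds: dict = field(default_factory=dict)    # (i, j) -> (min_angstrom, max_angstrom)

    def matches(self, labels, coords):
        """Check one candidate atom assignment of a 3D conformer against the pattern."""
        if list(labels) != self.atom_labels:
            return False
        for (i, j), (lo, hi) in self.distance_bounds.items():
            if not lo <= dist(coords[i], coords[j]) <= hi:
                return False
        return True


# Hypothetical nitro-like triad with constrained N-O and O-O distances (placeholder values)
pattern = GeometricalPattern(
    atom_labels=["N", "O", "O"],
    distance_bounds={(0, 1): (1.1, 1.4), (0, 2): (1.1, 1.4), (1, 2): (2.0, 2.4)},
)
print(pattern.matches(["N", "O", "O"],
                      [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (-0.6, 1.05, 0.0)]))
```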

  16. Early Warning Signals of Financial Crises with Multi-Scale Quantile Regressions of Log-Periodic Power Law Singularities.

    PubMed

    Zhang, Qun; Zhang, Qunzhi; Sornette, Didier

    2016-01-01

    We augment the existing literature using the Log-Periodic Power Law Singular (LPPLS) structures in the log-price dynamics to diagnose financial bubbles by providing three main innovations. First, we introduce the quantile regression to the LPPLS detection problem. This allows us to disentangle (at least partially) the genuine LPPLS signal and the a priori unknown complicated residuals. Second, we propose to combine the many quantile regressions with a multi-scale analysis, which aggregates and consolidates the obtained ensembles of scenarios. Third, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators that enrich considerably the diagnostic of bubbles. Using a detailed study of the "S&P 500 1987" bubble and presenting analyses of 16 historical bubbles, we show that the quantile regression of LPPLS signals contributes useful early warning signals. The comparison between the constructed signals and the price development in these 16 historical bubbles demonstrates their significant predictive ability around the real critical time when the burst/rally occurs.
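
    For context, the LPPLS structure referred to here is commonly written as follows (this is the standard form used in this literature; the paper's notation may differ slightly):

    \[
    \ln \mathbb{E}[p(t)] \;=\; A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\!\big(\omega \ln (t_c - t) - \phi\big),
    \]

    where t_c is the critical time, 0 < m < 1, omega is the log-periodic angular frequency, and A, B, C, phi are fit parameters; the quantile regressions then fit this structure at several quantile levels and time scales rather than only at the conditional mean.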

  17. Optimization of Multicomponent Behavioral and Biobehavioral Interventions for the Prevention and Treatment of HIV/AIDS

    PubMed Central

    Collins, Linda M.; Kugler, Kari C.; Gwadz, Marya Viorst

    2015-01-01

    To move society toward an AIDS-free generation, behavioral interventions for prevention and treatment of HIV/AIDS must be not only effective, but also cost-effective, efficient, and readily scalable. The purpose of this article is to introduce to the HIV/AIDS research community the multiphase optimization strategy (MOST), a new methodological framework inspired by engineering principles and designed to develop behavioral interventions that have these important characteristics. Many behavioral interventions comprise multiple components. In MOST, randomized experimentation is conducted to assess the individual performance of each intervention component, and whether its presence/absence/setting has an impact on the performance of other components. This information is used to engineer an intervention that meets a specific optimization criterion, defined a priori in terms of effectiveness, cost, cost-effectiveness, and/or scalability. MOST will enable intervention science to develop a coherent knowledge base about what works and does not work. Ultimately this will improve behavioral interventions systematically and incrementally. PMID:26238037
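
    The component-screening experiment in MOST is typically a factorial design, which crosses every on/off setting of the candidate components so that each component's individual effect (and its interactions) can be estimated from a single experiment. The sketch below enumerates such a design; the component names are hypothetical placeholders, not from this article.

```python
from itertools import product

# Hypothetical intervention components to be screened in an optimization trial
components = ["text_reminders", "peer_navigation", "counseling", "incentives"]

# Full factorial design: one experimental condition per combination of on/off settings
factorial_design = [dict(zip(components, settings))
                    for settings in product([0, 1], repeat=len(components))]

print(f"{len(factorial_design)} experimental conditions")   # 2**4 = 16
print(factorial_design[0], "...", factorial_design[-1])
```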

  18. How can we study reasoning in the brain?

    PubMed Central

    Papo, David

    2015-01-01

    The brain did not develop a dedicated device for reasoning. This fact bears dramatic consequences. While for perceptuo-motor functions neural activity is shaped by the input's statistical properties, and processing is carried out at high speed in hardwired spatially segregated modules, in reasoning, neural activity is driven by internal dynamics and processing times, stages, and functional brain geometry are largely unconstrained a priori. Here, it is shown that the complex properties of spontaneous activity, which can be ignored in a short-lived event-related world, become prominent at the long time scales of certain forms of reasoning. It is argued that the neural correlates of reasoning should in fact be defined in terms of non-trivial generic properties of spontaneous brain activity, and that this implies resorting to concepts, analytical tools, and ways of designing experiments that are as yet non-standard in cognitive neuroscience. The implications in terms of models of brain activity, shape of the neural correlates, methods of data analysis, observability of the phenomenon, and experimental designs are discussed. PMID:25964755

  19. How can we study reasoning in the brain?

    PubMed

    Papo, David

    2015-01-01

    The brain did not develop a dedicated device for reasoning. This fact bears dramatic consequences. While for perceptuo-motor functions neural activity is shaped by the input's statistical properties, and processing is carried out at high speed in hardwired spatially segregated modules, in reasoning, neural activity is driven by internal dynamics and processing times, stages, and functional brain geometry are largely unconstrained a priori. Here, it is shown that the complex properties of spontaneous activity, which can be ignored in a short-lived event-related world, become prominent at the long time scales of certain forms of reasoning. It is argued that the neural correlates of reasoning should in fact be defined in terms of non-trivial generic properties of spontaneous brain activity, and that this implies resorting to concepts, analytical tools, and ways of designing experiments that are as yet non-standard in cognitive neuroscience. The implications in terms of models of brain activity, shape of the neural correlates, methods of data analysis, observability of the phenomenon, and experimental designs are discussed.

  20. Neuroimaging in psychiatric pharmacogenetics research: the promise and pitfalls.

    PubMed

    Falcone, Mary; Smith, Ryan M; Chenoweth, Meghan J; Bhattacharjee, Abesh Kumar; Kelsoe, John R; Tyndale, Rachel F; Lerman, Caryn

    2013-11-01

    The integration of research on neuroimaging and pharmacogenetics holds promise for improving treatment for neuropsychiatric conditions. Neuroimaging may provide a more sensitive early measure of treatment response in genetically defined patient groups, and could facilitate development of novel therapies based on an improved understanding of pathogenic mechanisms underlying pharmacogenetic associations. This review summarizes progress in efforts to incorporate neuroimaging into genetics and treatment research on major psychiatric disorders, such as schizophrenia, major depressive disorder, bipolar disorder, attention-deficit/hyperactivity disorder, and addiction. Methodological challenges include the small study populations used in imaging studies for genetic analyses; the inclusion of patients with psychiatric comorbidities; and the extensive variability across studies in neuroimaging protocols, neurobehavioral task probes, and analytic strategies. Moreover, few studies use pharmacogenetic designs that permit testing of genotype × drug effects. As a result of these limitations, few findings have been fully replicated. Future studies that pre-screen participants for genetic variants selected a priori based on drug metabolism and targets have the greatest potential to advance the science and practice of psychiatric treatment.
