Sample records for maximum compatibility estimates

  1. [Compatible biomass models of natural spruce (Picea asperata)].

    PubMed

    Wang, Jin Chi; Deng, Hua Feng; Huang, Guo Sheng; Wang, Xue Jun; Zhang, Lu

    2017-10-01

    Using the nonlinear measurement error method, compatible tree volume and aboveground biomass equations were established from the volume and biomass data of 150 sample trees of natural spruce (Picea asperata). Two approaches, direct control under total aboveground biomass and joint control from level to level, were used to design the compatible system for the total aboveground biomass and the biomass of four components (stem, bark, branch and foliage); the total aboveground biomass could be estimated either independently or simultaneously within the system. The results showed that the R² values of the one-variable and bivariate compatible tree volume and aboveground biomass equations were all above 0.85, with a maximum of 0.99. Including tree height as a predictor significantly improved the volume equations, but not the biomass estimation. For the compatible biomass systems, the one-variable model based on joint control from level to level was better than the model based on direct control under total aboveground biomass, but the bivariate models of the two methods were similar. Comparing the fits of the one-variable and bivariate compatible biomass models showed that adding explanatory variables significantly improved the fit for branch and foliage biomass but had little effect on the other components. In addition, the comparison showed almost no difference between the two estimation methods.
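
    A minimal sketch of the additivity idea described above, using hypothetical allometric forms and simulated data (not the paper's fitted equations): fit the total aboveground biomass equation, then allocate the fitted total among the component equations in proportion to their predictions, so the components sum to the total by construction.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    # Hypothetical sample: diameter D (cm) and component biomass (kg).
    D = rng.uniform(5, 40, 150)
    stem = 0.08 * D**2.4 * rng.lognormal(0, 0.10, D.size)
    branch = 0.02 * D**2.1 * rng.lognormal(0, 0.15, D.size)
    foliage = 0.01 * D**1.9 * rng.lognormal(0, 0.15, D.size)
    total = stem + branch + foliage

    def power(d, a, b):
        return a * d**b

    # Step 1: fit the total aboveground biomass equation.
    p_tot, _ = curve_fit(power, D, total, p0=[0.1, 2.0])

    # Step 2: fit each component, then rescale the component predictions
    # so they sum exactly to the fitted total (compatibility by design).
    params = [curve_fit(power, D, y, p0=[0.05, 2.0])[0]
              for y in (stem, branch, foliage)]
    comp_hat = np.array([power(D, *p) for p in params])
    compatible = comp_hat / comp_hat.sum(axis=0) * power(D, *p_tot)

    assert np.allclose(compatible.sum(axis=0), power(D, *p_tot))
    print("fitted total-biomass parameters:", p_tot)
    ```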

  2. Additivity and maximum likelihood estimation of nonlinear component biomass models

    Treesearch

    David L.R. Affleck

    2015-01-01

    Since Parresol's (2001) seminal paper on the subject, it has become common practice to develop nonlinear tree biomass equations so as to ensure compatibility among total and component predictions and to fit equations jointly using multi-step least squares (MSLS) methods. In particular, many researchers have specified total tree biomass models by aggregating the...

  3. Benefit-cost estimation for alternative drinking water maximum contaminant levels

    NASA Astrophysics Data System (ADS)

    Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.

    2001-08-01

    A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
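
    The compliance-simulation logic lends itself to a compact Monte Carlo sketch. Everything below (distributions, the candidate MCL, costs) is illustrative, not the paper's calibrated occurrence or cost model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical population of water systems.
    n_systems = 10_000
    arsenic = rng.lognormal(mean=0.5, sigma=1.0, size=n_systems)  # ug/L
    mcl = 10.0                                    # candidate MCL, ug/L
    removal = rng.uniform(0.80, 0.99, n_systems)  # treatment removal efficiency
    cost = rng.uniform(5e3, 5e5, n_systems)       # annualized compliance cost, $

    # Systems above the MCL adopt their (here random) least-cost option.
    must_treat = arsenic > mcl
    national_cost = cost[must_treat].sum()
    exposure_cut = (arsenic * removal)[must_treat].sum()

    print(f"non-compliant systems: {must_treat.sum()}")
    print(f"national annualized cost: ${national_cost:,.0f}")
    print(f"aggregate exposure reduction (arbitrary units): {exposure_cut:,.0f}")
    ```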

  4. A low-cost MR-compatible ergometer to assess post-exercise phosphocreatine recovery kinetics.

    PubMed

    Naimon, Niels D; Walczyk, Jerzy; Babb, James S; Khegai, Oleksandr; Che, Xuejiao; Alon, Leeor; Regatte, Ravinder R; Brown, Ryan; Parasoglou, Prodromos

    2017-06-01

    To develop a low-cost pedal ergometer compatible with ultrahigh (7 T) field MR systems to reliably quantify metabolic parameters in human lower leg muscle using phosphorus magnetic resonance spectroscopy. We constructed an MR-compatible ergometer using commercially available materials and elastic bands that provide resistance to movement. We recruited ten healthy subjects (eight men and two women, mean age ± standard deviation: 32.8 ± 6.0 years, BMI: 24.1 ± 3.9 kg/m²). All subjects were scanned on a 7 T whole-body magnet. Each subject was scanned on two visits and performed a 90 s plantar flexion exercise at 40% maximum voluntary contraction during each scan. During the first visit, each subject performed the exercise twice in order for us to estimate the intra-exam repeatability, and once during the second visit in order to estimate the inter-exam repeatability of the time constant of phosphocreatine recovery kinetics. We assessed the intra- and inter-exam reliability in terms of the within-subject coefficient of variation (CV). We acquired reliable measurements of PCr recovery kinetics with intra- and inter-exam CVs of 7.9% and 5.7%, respectively. We constructed a low-cost pedal ergometer compatible with ultrahigh (7 T) field MR systems, which allowed us to reliably quantify PCr recovery kinetics in lower leg muscle using ³¹P-MRS.
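
    The repeatability metric used here, the within-subject coefficient of variation, is easy to reproduce; a sketch with hypothetical paired time constants (not the study's data):

    ```python
    import numpy as np

    # Hypothetical paired PCr recovery time constants (s) from two exams.
    tau1 = np.array([32.1, 28.5, 41.0, 36.2, 30.8, 27.9, 35.4, 39.7, 29.3, 33.6])
    tau2 = np.array([30.4, 29.9, 38.6, 37.8, 32.5, 26.7, 33.9, 41.2, 30.8, 31.9])

    # Within-subject CV: RMS over subjects of (per-subject SD / per-subject mean).
    pairs = np.stack([tau1, tau2])
    cv = np.sqrt(np.mean((pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)) ** 2))
    print(f"within-subject CV: {100 * cv:.1f}%")
    ```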

  5. SimArray: a user-friendly and user-configurable microarray design tool

    PubMed Central

    Auburn, Richard P; Russell, Roslin R; Fischer, Bettina; Meadows, Lisa A; Sevillano Matilla, Santiago; Russell, Steven

    2006-01-01

    Background Microarrays were first developed to assess gene expression but are now also used to map protein-binding sites and to assess allelic variation between individuals. Regardless of the intended application, efficient production and appropriate array design are key determinants of experimental success. Inefficient production can make larger-scale studies prohibitively expensive, whereas poor array design makes normalisation and data analysis problematic. Results We have developed a user-friendly tool, SimArray, which generates a randomised spot layout, computes a maximum meta-grid area, and estimates the print time, in response to user-specified design decisions. Selected parameters include: the number of probes to be printed; the microtitre plate format; the printing pin configuration, and the achievable spot density. SimArray is compatible with all current robotic spotters that employ 96-, 384- or 1536-well microtitre plates, and can be configured to reflect most production environments. Print time and maximum meta-grid area estimates facilitate evaluation of each array design for its suitability. Randomisation of the spot layout facilitates correction of systematic biases by normalisation. Conclusion SimArray is intended to help both established researchers and those new to the microarray field to develop microarray designs with randomised spot layouts that are compatible with their specific production environment. SimArray is an open-source program and is available from . PMID:16509966

  6. Asteroid models from photometry and complementary data sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaasalainen, Mikko

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.
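
    One way to picture the maximum compatibility estimate is as a weight choice that balances how much each data mode's fit degrades in the joint solution relative to its single-mode optimum. The toy below is schematic only (a scalar parameter and two Gaussian data modes), not Kaasalainen's full criterion:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    x_true = 3.0
    d1 = x_true + rng.normal(0, 0.2, 50)       # mode 1, e.g. photometry
    d2 = 2 * x_true + rng.normal(0, 1.0, 20)   # mode 2, e.g. occultation timings

    chi2_1 = lambda x: np.sum((d1 - x) ** 2) / 0.2**2
    chi2_2 = lambda x: np.sum((d2 - 2 * x) ** 2) / 1.0**2

    def joint_fit(lam):
        return minimize_scalar(lambda x: chi2_1(x) + lam * chi2_2(x),
                               bounds=(0, 10), method="bounded").x

    grid = np.linspace(0, 10, 2001)
    best1 = min(chi2_1(x) for x in grid)       # single-mode optima
    best2 = min(chi2_2(x) for x in grid)

    def imbalance(lam):                        # relative-misfit mismatch
        x = joint_fit(lam)
        return abs(chi2_1(x) / best1 - chi2_2(x) / best2)

    lams = np.logspace(-3, 3, 61)
    lam_star = lams[np.argmin([imbalance(l) for l in lams])]
    print(f"balanced weight ~{lam_star:.3g}, estimate x = {joint_fit(lam_star):.3f}")
    ```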

  7. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL

    PubMed Central

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-01-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091

  8. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    PubMed

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.
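
    The compatibility factor itself can be probed numerically for a given design matrix. A toy sketch (random Gaussian design, an assumed active set, and random search over the cone, which only upper-bounds the true minimum):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 100, 200
    X = rng.normal(size=(n, p))
    Sigma = X.T @ X / n                    # Gram (Hessian-like) matrix
    S = np.arange(5)                       # assumed active set, |S| = 5
    Sc = np.setdiff1d(np.arange(p), S)

    def cone_sample():
        """Random b with ||b_Sc||_1 <= 3 * ||b_S||_1 (the usual cone)."""
        b = rng.normal(size=p)
        limit = 3 * np.abs(b[S]).sum()
        l1 = np.abs(b[Sc]).sum()
        if l1 > limit:
            b[Sc] *= limit / l1
        return b

    vals = []
    for _ in range(5000):
        b = cone_sample()
        vals.append(len(S) * b @ Sigma @ b / np.abs(b[S]).sum() ** 2)
    print(f"random-search bound on compatibility factor^2: {min(vals):.3f}")
    ```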

  9. Compatible Basal Area and Number of Trees Estimators from Remeasured Horizontal Point Samples

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1989-01-01

    Compatible groups of estimators for total value at time 1 (V1), survivor growth (S), and ingrowth (I) for use with permanent horizontal point samples are evaluated for the special cases of estimating the change in both the number of trees and basal area. Caveats which should be observed before any one compatible grouping of estimators is chosen...

  10. Nonlinear stress waves in a perfectly flexible string. [for aerodynamic decelerating system

    NASA Technical Reports Server (NTRS)

    Fan, D.-N.; Mcgarvey, J. F.

    1977-01-01

    This paper discusses nonlinear stress-wave propagation in a perfectly flexible string obeying a quasilinear (rate-dependent) constitutive equation. Wave speeds and compatibility relations valid along various families of characteristics were determined. It was shown that the compatibility relations associated with the transverse as well as the longitudinal waves readily yield a physical interpretation when they are expressed in suitable variables and in vector form. Coding based on the present information was completed for the machine solution of a class of mixed initial- and boundary-value problems of practical interest. Computer simulation of the stress-wave interaction in the 40-foot lanyard in the Arcas 'Rocoz' system during deployment was carried out using a stress-strain relation for nylon at the strain rate of 30/second. A method for estimating the maximum tension and strain in a string during the initial loading phase is proposed.

  11. Dose commitments due to radioactive releases from nuclear power plant sites: Methodology and data base. Supplement 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, D.A.

    1996-06-01

    This manual describes a dose assessment system used to estimate the population or collective dose commitments received via both airborne and waterborne pathways by persons living within a 2- to 80-kilometer region of a commercial operating power reactor for a specific year of effluent releases. Computer programs, data files, and utility routines are included which can be used in conjunction with an IBM or compatible personal computer to produce the required dose commitments and their statistical distributions. In addition, maximum individual airborne and waterborne dose commitments are estimated and compared to 10 CFR Part 50, Appendix I, design objectives. This supplement is the last report in the NUREG/CR-2850 series.

  12. Using astrophysical jets for establishing an upper limit for the photon mass

    NASA Astrophysics Data System (ADS)

    Ryutov, D. D.

    2004-11-01

    Finite photon mass is compatible with general principles of the relativity theory; how small it (the mass) actually is has to be established experimentally. The presently accepted upper limit [1], 10⁻²² of the electron mass, is established [2] based on observations of the solar wind. This estimate corresponds to a photon Compton length of L = 3×10⁶ km. I discuss possible ways of improving this estimate based on the properties of those astrophysical jets where the pinch force is important for establishing the jet structure. It turns out that, if the jet radius is much greater than L, both pinch equilibrium and stability become very different from the massless photon case. In particular, the equilibrium pressure maximum coincides with the maximum of the current density. These new features are often incompatible with the observations, providing a way for improving the estimate of the photon mass by orders of magnitude. Work performed for the U.S. DOE by UC LLNL under contract W-7405-Eng-48. [1] S. Eidelman et al. (Particle Data Group), "Review of Particle Physics," Phys. Lett. B592, p. 1, 2004; [2] D.D. Ryutov, Plasma Phys. Contr. Fus., 39, p. A73, 1997.

  13. Re-injection feasibility study of fracturing flow-back fluid in shale gas mining

    NASA Astrophysics Data System (ADS)

    Kang, Dingyu; Xue, Chen; Chen, Xinjian; Du, Jiajia; Shi, Shengwei; Qu, Chengtun; Yu, Tao

    2018-02-01

    Fracturing flow-back fluid from shale gas mining is usually treated and then re-injected into the formation. To ensure that re-injection does not cause excessive damage to the formation, feasibility evaluations were conducted for two fracturing fluids of different salinity. The compatibility of mixed water samples was studied experimentally using the static simulation method. Through analysis of ion concentration, the amount of scale buildup, and the clay swelling rate, the feasibility of re-injecting the different fracturing fluids was studied. The results show that the clay expansion rate of the treated fracturing fluid is lower than that of a mixture of treated fracturing fluid and distilled water, indicating that, in terms of clay expansion, the treated flow-back fluid performs better on re-injection than injected water. In the compatibility test, the maximum amount of fouling in the Yangzhou oilfield is 12 mg/L, and the maximum calcium loss rate is 1.47%, indicating good compatibility. For the fracturing fluid with high salinity in the Yanchang oilfield, the maximum amount of scaling is 72 mg/L, and the maximum calcium loss rate is 3.50%, indicating that the compatibility is still acceptable.
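
    A worked arithmetic check of the compatibility figures, under our assumption (not stated in the abstract) that the calcium loss rate is the fractional drop in Ca²⁺ concentration of the mixed sample and that the precipitated calcium is what builds scale:

    ```python
    # Assumed initial Ca2+ concentration chosen so the numbers reproduce
    # the reported Yanchang values (72 mg/L scaling, 3.50% loss rate).
    ca_initial = 2057.0   # mg/L, assumed
    ca_after = 1985.0     # mg/L, assumed after the static simulation test

    scale = ca_initial - ca_after          # mg/L of calcium precipitated
    loss_rate = 100 * scale / ca_initial   # percent
    print(f"scale buildup: {scale:.0f} mg/L, calcium loss rate: {loss_rate:.2f}%")
    ```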

  14. Compatibility: drugs and parenteral nutrition

    PubMed Central

    Miranda, Talita Muniz Maloni; Ferraresi, Andressa de Abreu

    2016-01-01

    ABSTRACT Objective Standardization and systematization of data to provide quick access to compatibility of leading injectable drugs used in hospitals for parenteral nutrition. Methods We selected 55 injectable drugs analyzed individually with two types of parenteral nutrition: 2-in-1 and 3-in-1. The following variables were considered: active ingredient, compatibility of drugs with the parenteral nutrition with or without lipids, and maximum drug concentration after dilution for the drugs compatible with parenteral nutrition. Drugs were classified as compatible, incompatible and untested. Results After analysis, relevant information on each product's compatibility with parenteral nutrition was summarized in a table. Conclusion Systematization of compatibility data provided quick and easy access, and enabled standardizing pharmacists' work. PMID:27074235

  15. Effect of Background Pressure on the Plasma Oscillation Characteristics of the HiVHAc Hall Thruster

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Kamhawi, Hani; Lobbia, Robert B.; Brown, Daniel L.

    2014-01-01

    During a component compatibility test of the NASA HiVHAc Hall thruster, a high-speed camera and a set of high-speed Langmuir probes were implemented to study the effect of varying facility background pressure on thruster operation. The results show a rise in the oscillation frequency of the breathing mode with rising background pressure, which is hypothesized to be due to a shortening acceleration/ionization zone. An attempt is made to apply a simplified ingestion model to the data. The combined results are used to estimate the maximum acceptable background pressure for performance and wear testing.

  16. Cutaneous malignant melanoma and familial dysplastic nevi: evidence for autosomal dominance and pleiotropy.

    PubMed Central

    Bale, S J; Chakravarti, A; Greene, M H

    1986-01-01

    Segregation of familial cutaneous melanoma has been shown to be compatible with autosomal dominant transmission with incomplete penetrance. However, the combined phenotype of melanoma and a known melanoma-precursor lesion, the dysplastic nevus (DN), has not previously been found to fit a Mendelian model of inheritance using complex segregation analysis. Employing a life-table and disease-free survival analysis approach, we estimated the lifetime incidence of melanoma in the sibs and offspring of DN-affected individuals to be 46%, consistent with a highly penetrant, autosomal dominant mode of inheritance. To further elucidate the relationship between the two traits, we conducted a linkage analysis between the melanoma locus and a hypothetical DN locus, and obtained a maximum lod score of 3.857 at theta = .08. Furthermore, all families giving evidence for linkage were in the coupling phase and the maximum likelihood estimate of theta was not significantly different from 0 (P = .1). This provides evidence that the DN and melanoma traits may represent pleiotropic effects of a single, highly penetrant gene behaving in an autosomal dominant manner. PMID:3456198

  17. An estimation of global solar p-mode frequencies from IRIS network data: 1989-1996

    NASA Astrophysics Data System (ADS)

    Serebryanskiy, A.; Ehgamberdiev, Sh.; Kholikov, Sh.; Fossat, E.; Gelly, B.; Schmider, F. X.; Grec, G.; Cacciani, A.; Palle, P. L.; Lazrek, M.; Hoeksema, J. T.

    2001-06-01

    The IRIS network has accumulated full disk helioseismological data since July 1989, i.e. a complete 11-year solar cycle. Since the last paper publishing a frequency list [A&A 317 (1997) L71], not only has the network acquired new data, but has also developed new co-operative programs with compatible instruments [Abstr. SOHO 6/GONG 98 Workshop (1998) 51], so that merging IRIS files with these co-operative program data sets has made possible the improvement of the overall duty cycle. This paper presents new estimations of low degree p-mode frequencies obtained from this IRIS++ data bank covering the period 1989-1996, as well as the variation of their main parameters along the total range of magnetic activity, from before the last maximum to the very minimum. A preliminary estimation of the peak profile asymmetries is also included.

  18. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in C computer language based on the high-level user-interface Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
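
    The fitting pipeline described (stripping-based starting values, then Levenberg-Marquardt least squares) is straightforward to emulate with modern tools; a sketch with a hypothetical biexponential disposition model and made-up concentration data, not PharmK itself:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Two-compartment disposition: C(t) = A*exp(-alpha*t) + B*exp(-beta*t).
    def biexp(t, A, alpha, B, beta):
        return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

    t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24.0])                   # h
    conc = np.array([8.1, 6.7, 4.9, 3.1, 2.15, 1.75, 1.46, 1.02, 0.35])  # mg/L

    # Starting values in the spirit of exponential stripping.
    p0 = [6.0, 1.0, 3.0, 0.1]
    popt, _ = curve_fit(biexp, t, conc, p0=p0, method="lm")
    A, alpha, B, beta = popt
    print(f"A={A:.2f}, alpha={alpha:.3f}/h, B={B:.2f}, beta={beta:.3f}/h")
    print(f"terminal half-life: {np.log(2) / min(alpha, beta):.1f} h")
    ```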

  19. UNITED STATES DEPARTMENT OF TRANSPORTATION GLOBAL POSITIONING SYSTEM (GPS) ADJACENT BAND COMPATIBILITY ASSESSMENT

    DOT National Transportation Integrated Search

    2018-04-01

    The goal of the U.S. Department of Transportation (DOT) Global Positioning System (GPS) Adjacent Band Compatibility Assessment is to evaluate the maximum transmitted power levels of adjacent band radiofrequency (RF) systems that can be tolerated by G...

  20. Site specific seismic hazard analysis and determination of response spectra of Kolkata for maximum considered earthquake

    NASA Astrophysics Data System (ADS)

    Shiuly, Amit; Sahu, R. B.; Mandal, Saroj

    2017-06-01

    This paper presents site specific seismic hazard analysis of Kolkata city, former capital of India and present capital of state West Bengal, situated on the world’s largest delta island, Bengal basin. For this purpose, peak ground acceleration (PGA) for a maximum considered earthquake (MCE) at bedrock level has been estimated using an artificial neural network (ANN) based attenuation relationship developed on the basis of synthetic ground motion data for the region. Using the PGA corresponding to the MCE, a spectrum compatible acceleration time history at bedrock level has been generated by using a wavelet based computer program, WAVEGEN. This spectrum compatible time history at bedrock level has been converted to the same at surface level using SHAKE2000 for 144 borehole locations in the study region. Using the predicted values of PGA and PGV at the surface, corresponding contours for the region have been drawn. For the MCE, the PGA at bedrock level of Kolkata city has been obtained as 0.184 g, while that at the surface level varies from 0.22 g to 0.37 g. Finally, Kolkata has been subdivided into eight seismic subzones, and for each subzone a response spectrum equation has been derived using polynomial regression analysis. This will be very helpful for structural and geotechnical engineers to design safe and economical earthquake resistant structures.
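
    The final step, turning each subzone's spectral values into a response spectrum equation by polynomial regression, looks like the sketch below (hypothetical spectral ordinates, not the paper's Kolkata data):

    ```python
    import numpy as np

    # Hypothetical subzone spectrum: spectral acceleration Sa (g) vs period T (s).
    T = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.75, 1.0, 1.5, 2.0])
    Sa = np.array([0.45, 0.78, 0.92, 0.88, 0.70, 0.52, 0.40, 0.27, 0.20])

    coef = np.polyfit(T, Sa, deg=4)     # least-squares polynomial regression
    spectrum = np.poly1d(coef)          # closed-form Sa(T) for the subzone
    print(f"Sa(T = 0.4 s) ~ {float(spectrum(0.4)):.3f} g")
    ```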

  1. Compatible estimators of the components of change for a rotating panel forest inventory design

    Treesearch

    Francis A. Roesch

    2007-01-01

    This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...

  2. Compatibility of Segments of Thermoelectric Generators

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated with this method by evaluating equations using only a hand-held calculator. In addition, the method provides for determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of the reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature. The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of the compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
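
    The key closed forms are compact enough to evaluate in a few lines: the reduced efficiency as a function of u, and the compatibility factor s = (sqrt(1+zT) - 1)/(S*T) that maximizes it. Material values below are illustrative, and the rule of thumb in the comment (segments compatible when their s values are within roughly a factor of two) is the often-quoted guideline from the segmentation literature:

    ```python
    import numpy as np

    def compatibility_factor(S, rho, kappa, T):
        """The u = J/(kappa*gradT) that maximizes local reduced efficiency."""
        zT = S**2 / (rho * kappa) * T
        return (np.sqrt(1 + zT) - 1) / (S * T)

    def reduced_efficiency(u, S, rho, kappa, T):
        """Local reduced efficiency for relative current density u."""
        return u * (S - u * rho * kappa) / (u * S + 1 / T)

    # Illustrative segment properties: S [V/K], rho [ohm*m], kappa [W/(m*K)], T [K].
    segments = {"hot":  (200e-6, 1.0e-5, 1.5, 600.0),
                "cold": (150e-6, 0.8e-5, 1.2, 400.0)}
    for name, (S, rho, kappa, T) in segments.items():
        s = compatibility_factor(S, rho, kappa, T)
        # Rule of thumb: segments are compatible when s values differ by < ~2x.
        print(f"{name}: s = {s:.2f} A/W, max reduced efficiency = "
              f"{reduced_efficiency(s, S, rho, kappa, T):.3f}")
    ```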

  3. Transitions between self-compatibility and self-incompatibility and the evolution of reproductive isolation in the large and diverse tropical genus Dendrobium (Orchidaceae)

    PubMed Central

    Pinheiro, Fabio; Cafasso, Donata; Cozzolino, Salvatore; Scopece, Giovanni

    2015-01-01

    Background and Aims The evolution of interspecific reproductive barriers is crucial to understanding species evolution. This study examines the contribution of transitions between self-compatibility (SC) and self-incompatibility (SI) and genetic divergence in the evolution of reproductive barriers in Dendrobium, one of the largest orchid genera. Specifically, it investigates the evolution of pre- and postzygotic isolation and the effects of transitions between compatibility states on interspecific reproductive isolation within the genus. Methods The role of SC and SI changes in reproductive compatibility among species was examined using fruit set and seed viability data available in the literature from 86 species and ∼2500 hand pollinations. The evolution of SC and SI in Dendrobium species was investigated within a phylogenetic framework using internal transcribed spacer sequences available in GenBank. Key Results Based on data from crossing experiments, estimations of genetic distance and the results of a literature survey, it was found that changes in SC and SI significantly influenced the compatibility between species in interspecific crosses. The number of fruits produced was significantly higher in crosses in which self-incompatible species acted as pollen donor for self-compatible species, following the SI × SC rule. Maximum likelihood and Bayesian tests did not reject transitions from SI to SC and from SC to SI across the Dendrobium phylogeny. In addition, postzygotic isolation (embryo mortality) was found to evolve gradually with genetic divergence, in agreement with previous results observed for other plant species, including orchids. Conclusions Transitions between SC and SI and the gradual accumulation of genetic incompatibilities affecting postzygotic isolation are important mechanisms preventing gene flow among Dendrobium species, and may constitute important evolutionary processes contributing to the high levels of species diversity in this tropical orchid group. PMID:25953040

  4. Transitions between self-compatibility and self-incompatibility and the evolution of reproductive isolation in the large and diverse tropical genus Dendrobium (Orchidaceae).

    PubMed

    Pinheiro, Fabio; Cafasso, Donata; Cozzolino, Salvatore; Scopece, Giovanni

    2015-09-01

    The evolution of interspecific reproductive barriers is crucial to understanding species evolution. This study examines the contribution of transitions between self-compatibility (SC) and self-incompatibility (SI) and genetic divergence in the evolution of reproductive barriers in Dendrobium, one of the largest orchid genera. Specifically, it investigates the evolution of pre- and postzygotic isolation and the effects of transitions between compatibility states on interspecific reproductive isolation within the genus. The role of SC and SI changes in reproductive compatibility among species was examined using fruit set and seed viability data available in the literature from 86 species and ∼2500 hand pollinations. The evolution of SC and SI in Dendrobium species was investigated within a phylogenetic framework using internal transcribed spacer sequences available in GenBank. Based on data from crossing experiments, estimations of genetic distance and the results of a literature survey, it was found that changes in SC and SI significantly influenced the compatibility between species in interspecific crosses. The number of fruits produced was significantly higher in crosses in which self-incompatible species acted as pollen donor for self-compatible species, following the SI × SC rule. Maximum likelihood and Bayesian tests did not reject transitions from SI to SC and from SC to SI across the Dendrobium phylogeny. In addition, postzygotic isolation (embryo mortality) was found to evolve gradually with genetic divergence, in agreement with previous results observed for other plant species, including orchids. Transitions between SC and SI and the gradual accumulation of genetic incompatibilities affecting postzygotic isolation are important mechanisms preventing gene flow among Dendrobium species, and may constitute important evolutionary processes contributing to the high levels of species diversity in this tropical orchid group.

  5. The FIA Panel Design and Compatible Estimators for the Components of Change

    Treesearch

    Francis A. Roesch

    2006-01-01

    The FIA annual panel design and its relation to compatible estimation systems for the components of change are discussed. Estimation for the traditional components of growth, as presented by Meyer (1953, Forest Mensuration) is bypassed in favor of a focus on estimation for the discrete analogs to Eriksson’s (1995, For. Sci. 41(4):796-822) time invariant redefinitions...

  6. Modeling and Validation of the Three Dimensional Deflection of an MRI-Compatible Magnetically-Actuated Steerable Catheter

    PubMed Central

    Liu, Taoming; Poirot, Nate Lombard; Franson, Dominique; Seiberlich, Nicole; Griswold, Mark A.; Çavuşoğlu, M. Cenk

    2016-01-01

    Objective This paper presents the three dimensional kinematic modeling of a novel steerable robotic ablation catheter system. The catheter, embedded with a set of current-carrying micro-coils, is actuated by the magnetic forces generated by the magnetic field of the magnetic resonance imaging (MRI) scanner. Methods This paper develops a 3D model of the MRI actuated steerable catheter system by using finite differences approach. For each finite segment, a quasi-static torque-deflection equilibrium equation is calculated using beam theory. By using the deflection displacements and torsion angles, the kinematic model of the catheter system is derived. Results The proposed models are validated by comparing the simulation results of the proposed model with the experimental results of a hardware prototype of the catheter design. The maximum tip deflection error is 4.70 mm and the maximum root-mean-square (RMS) error of the shape estimation is 3.48 mm. Conclusion The results demonstrate that the proposed model can successfully estimate the deflection motion of the catheter. Significance The presented three dimensional deflection model of the magnetically controlled catheter design paves the way to efficient control of the robotic catheter for treatment of atrial fibrillation. PMID:26731519

  7. Magnetic resonance safety and compatibility of tantalum markers used in proton beam therapy for intraocular tumors: A 7.0 Tesla study.

    PubMed

    Oberacker, Eva; Paul, Katharina; Huelnhagen, Till; Oezerdem, Celal; Winter, Lukas; Pohlmann, Andreas; Boehmert, Laura; Stachs, Oliver; Heufelder, Jens; Weber, Andreas; Rehak, Matus; Seibel, Ira; Niendorf, Thoralf

    2017-10-01

    Proton radiation therapy (PRT) is a standard treatment of uveal melanoma. PRT patients undergo implantation of ocular tantalum markers (OTMs) for treatment planning. Ultra-high-field MRI is a promising technique for 3D tumor visualization and PRT planning. This work examines MR safety and compatibility of OTMs at 7.0 Tesla. MR safety assessment included deflection angle measurements (DAMs), electromagnetic field (EMF) simulations for specific absorption rate (SAR) estimation, and temperature simulations for examining radiofrequency heating using a bow-tie dipole antenna for transmission. MR compatibility was assessed by susceptibility artifacts in agarose, ex vivo pig eyes, and in an ex vivo tumor eye using gradient echo and fast spin-echo imaging. DAM (α < 1°) demonstrated no risk attributed to magnetically induced OTM deflection. EMF simulations showed that an OTM can be approximated by a disk, demonstrated the need for averaging masses of m_ave = 0.01 g to accommodate the OTM, and provided SAR_0.01g,maximum = 2.64 W/kg (P_in = 1 W) in OTM presence. A transfer function was derived, enabling SAR_0.01g estimation for individual patient scenarios without the OTM being integrated. Thermal simulations revealed minor OTM-related temperature increase (δT < 15 mK). Susceptibility artifact size (<8 mm) and location suggest no restrictions for MRI of the nervus opticus. OTMs are not a per se contraindication for MRI. Magn Reson Med 78:1533-1546, 2017.

  8. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    PubMed

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
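
    Majority-rule consensus is simple to express over clade frequencies; a minimal sketch on a toy posterior sample of four trees, with trees represented as sets of clades rather than via a phylogenetics library:

    ```python
    from collections import Counter

    # Toy posterior sample: each tree is a set of clades on taxa A-E.
    trees = [
        {frozenset("AB"), frozenset("ABC"), frozenset("DE")},
        {frozenset("AB"), frozenset("ABC"), frozenset("DE")},
        {frozenset("AC"), frozenset("ABC"), frozenset("DE")},
        {frozenset("AB"), frozenset("ABD"), frozenset("CE")},
    ]
    counts = Counter(clade for tree in trees for clade in tree)

    # MRC keeps clades appearing in >50% of sampled trees; such clades are
    # mutually compatible, so they define a (possibly unresolved) tree.
    mrc = {c for c, n in counts.items() if n / len(trees) > 0.5}
    for clade in sorted(mrc, key=len):
        print(sorted(clade), f"posterior support {counts[clade] / len(trees):.2f}")
    ```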

  9. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data

    PubMed Central

    O’Reilly, Joseph E; Donoghue, Philip C J

    2018-01-01

    Abstract Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675

  10. Combining Earthquake Focal Mechanism Inversion and Coulomb Friction Law to Yield Tectonic Stress Magnitudes in Strike-slip Faulting Regime

    NASA Astrophysics Data System (ADS)

    Soh, I.; Chang, C.

    2017-12-01

    The techniques for estimating present-day stress states by inverting multiple earthquake focal mechanism solutions (FMS) provide orientations of the three principal stresses and their relative magnitudes. In order to estimate absolute magnitudes of the stresses, which are generally required to analyze faulting mechanics, we combine the relative stress magnitude parameter (R-value) derived from the inversion process and the concept of frictional equilibrium of the stress state defined by the Coulomb friction law. The stress inversion in the Korean Peninsula using 152 FMS data (magnitude ≥ 2.5) conducted at regularly spaced grid points yields a consistent strike-slip faulting regime in which the maximum (S1) and the minimum (S3) principal stresses act in horizontal planes (with an S1 azimuth in ENE-WSW) and the intermediate principal stress (S2) is close to vertical. However, the R-value varies from 0.28 to 0.75 depending on location, systematically increasing eastward. Based on the assumptions that the vertical stress is lithostatic, pore pressure is hydrostatic, and the maximum differential stress (S1-S3) is limited by Byerlee's friction on optimally oriented faults for slip, we estimate absolute magnitudes of the two horizontal principal stresses using the R-value. As the R-value increases, so do the magnitudes of the horizontal stresses. Our estimation of the stress magnitudes shows that the maximum horizontal principal stress (S1) normalized by the vertical stress tends to increase from 1.3 in the west to 1.8 in the east. The estimated variation of stress magnitudes is compatible with distinct clustering of faulting types in different regions. Normal faulting events are densely populated in the west region where the horizontal stress is relatively low, whereas numerous reverse faulting events prevail in the east offshore where the horizontal stress is relatively high. Such a characteristic distribution of distinct faulting types in different regions can only be explained in terms of stress magnitude variation.
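
    Under the stated assumptions, the absolute magnitudes follow from two equations: the inversion's R = (S1-S2)/(S1-S3) with S2 = Sv, and the frictional limit (S1-Pp)/(S3-Pp) = (sqrt(mu^2+1)+mu)^2. A sketch with illustrative depth and densities (not the study's calibration):

    ```python
    import math

    def stress_magnitudes(R, depth_m, mu=0.6, rho_rock=2600.0, rho_w=1000.0, g=9.8):
        Sv = rho_rock * g * depth_m / 1e6     # lithostatic vertical stress (= S2), MPa
        Pp = rho_w * g * depth_m / 1e6        # hydrostatic pore pressure, MPa
        k = (math.sqrt(mu**2 + 1) + mu) ** 2  # frictional limit (S1-Pp)/(S3-Pp)
        # Solve R = (S1 - Sv)/(S1 - S3) together with S1 - Pp = k*(S3 - Pp):
        S1 = (Pp * (1 - k) + (k / R) * Sv) / (1 - k + k / R)
        S3 = S1 - (S1 - Sv) / R
        return S1, Sv, S3

    for R in (0.28, 0.50, 0.75):              # the reported R-value range
        S1, Sv, S3 = stress_magnitudes(R, depth_m=5000)
        print(f"R={R:.2f}: S1={S1:.0f}, Sv={Sv:.0f}, S3={S3:.0f} MPa, "
              f"S1/Sv={S1 / Sv:.2f}")
    ```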

  11. Scalable Video Transmission Over Multi-Rate Multiple Access Channels

    DTIC Science & Technology

    2007-06-01

    Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE...source encoded using the MPEG-4 video codec. The source encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC...Clark, and J. M. Geist, “Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding,” IEEE Transactions on

  12. Nuclear counting filter based on a centered Skellam test and a double exponential smoothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan

    2015-07-01

    Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the intended application. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST) giving a local maximum likelihood estimation of the signal based on a Poisson distribution assumption. This nonlinear approach allows smoothing of the counting signal while maintaining a fast response when abrupt activity changes occur. The filter has been improved by the implementation of Brown's double exponential smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement compared to all tested smoothing filters.
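
    The two ingredients, a Skellam test on consecutive Poisson counts and Brown's double exponential smoothing, combine into a short filter loop. This sketch follows the idea rather than the authors' exact implementation; the threshold and smoothing constant are assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulated counting signal: constant rate, then an abrupt activity change.
    rate = np.r_[np.full(300, 50.0), np.full(300, 120.0)]
    counts = rng.poisson(rate)

    alpha, z_crit = 0.05, 4.0   # smoothing constant and test threshold (assumed)
    s1 = s2 = float(counts[0])
    est = []
    for k_prev, k in zip(counts[:-1], counts[1:]):
        lam = 2 * s1 - s2       # current smoothed rate estimate
        # Centered Skellam test: for a constant Poisson rate lam, the
        # difference of consecutive counts has mean 0 and variance ~ 2*lam.
        if abs(k - k_prev) > z_crit * np.sqrt(max(2 * lam, 1.0)):
            s1 = s2 = float(k)                     # reset on abrupt change
        else:
            s1 = alpha * k + (1 - alpha) * s1      # Brown's DES, stage 1
            s2 = alpha * s1 + (1 - alpha) * s2     # Brown's DES, stage 2
        est.append(2 * s1 - s2)

    print(f"estimate before step: {est[290]:.1f}, after step: {est[-1]:.1f}")
    ```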

  13. Cost and Savings Estimates the Air Force Used to Decide Against Relocating the Electromagnetic Compatibility Analysis Center from Annapolis, Maryland, to Duluth, Minnesota.

    DTIC Science & Technology

    1983-03-09

    that maximize electromagnetic compatibility potential. -- Providing direct assistance on a reimbursable basis to DOD and other Government agencies on...value, we estimated that reimbursable real estate expenses would average about $6,458 rather than $4,260 included in the Air Force estimate. When the...of estimated reimbursement was assumed to be necessary to encourage the relocation of more professional employees and increase their estimated

  14. Safety in the Preschool.

    ERIC Educational Resources Information Center

    Settles, Mimi

    Guidelines for safety in the cooperative preschool are outlined, emphasizing control of the physical environment to ensure maximum freedom for the children compatible with maximum safety. Building standards are set for stairways, rooms, lavatories, parking lots, harmful supplies, and wading pools. Orientation for safety is discussed in regard to…

  15. Design and preliminary assessment of a smart textile for respiratory monitoring based on an array of Fiber Bragg Gratings.

    PubMed

    Massaroni, C; Ciocchetti, M; Di Tomaso, G; Saccomandi, P; Caponero, M A; Polimadei, A; Formica, D; Schena, E

    2016-08-01

    Comfortable and easy to wear smart textiles have gained popularity for continuous respiratory monitoring. Among different emerging technologies, smart textiles based on fiber optic sensors (FOSs) have several advantages, such as magnetic resonance (MR) compatibility and good metrological properties. In this paper we report on the development and assessment of an MR-compatible smart textile based on FOSs for respiratory monitoring. The system consists of six fiber Bragg grating (FBG) sensors glued onto the textile to monitor six compartments of the chest wall (i.e., right and left upper thorax, right and left abdominal rib cage, and right and left abdomen). This solution allows monitoring both global respiratory parameters and each compartment's volume change. The system converts thoracic movements into strain measured by the FBGs. The positioning of the FBGs was optimized through experiments performed using an optoelectronic system. The feasibility of the smart textile was assessed on 6 healthy volunteers. Experimental data were compared to those estimated by optoelectronic plethysmography, used as the reference. Promising results were obtained for the breathing period (maximum percentage error 1.14%) and the inspiratory and expiratory periods, as well as for total volume change (mean percentage difference between the two systems ~14%). The Bland-Altman analysis shows satisfactory accuracy for the parameters under investigation. The proposed system is safe and non-invasive, MR-compatible, and allows monitoring compartmental volumes.

  16. Determination of the maximum-depth to potential field sources by a maximum structural index method

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
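
    The scaling of Euler depth estimates with the assumed structural index is easy to demonstrate: on exact synthetic data from a point mass (N = 2, the gravity Nmax in 3-D), re-solving Euler's equation with smaller N gives shallower depths, so evaluating at Nmax bounds the depth from above. The synthetic field and analytic derivatives below are ours, for illustration only:

    ```python
    import numpy as np

    # Gravity profile of a buried point mass at depth z0 (structural index N=2):
    # f(x) = k*z0/(x^2 + z0^2)^1.5, observed on the line z = 0.
    x = np.linspace(-2000.0, 2000.0, 401)
    z0, k = 500.0, 1.0e8
    r2 = x**2 + z0**2
    f = k * z0 / r2**1.5
    dfdx = -3 * k * z0 * x / r2**2.5
    dfdz = k * (2 * z0**2 - x**2) / r2**2.5    # vertical derivative at z = 0

    def euler_depth(N):
        """Least-squares Euler solution of x*f_x + N*f = x0*f_x + z0*f_z."""
        A = np.column_stack([dfdx, dfdz])
        sol, *_ = np.linalg.lstsq(A, x * dfdx + N * f, rcond=None)
        return sol[1]

    for N in (0.5, 1.0, 1.5, 2.0):             # Nmax = 2 for this field
        print(f"N={N}: estimated depth {euler_depth(N):.0f} m")
    ```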

  17. Shape reconstruction of irregular bodies with multiple complementary data sources

    NASA Astrophysics Data System (ADS)

    Kaasalainen, M.; Viikinkoski, M.

    2012-07-01

    We discuss inversion methods for shape reconstruction with complementary data sources. The current main sources are photometry, adaptive optics or other images, occultation timings, and interferometry, and the procedure can readily be extended to include range-Doppler radar and thermal infrared data as well. We introduce the octantoid, a generally applicable shape support that can be automatically used for surface types encountered in planetary research, including strongly nonconvex or non-starlike shapes. We present models of Kleopatra and Hermione from multimodal data as examples of this approach. An important concept in this approach is the optimal weighting of the various data modes. We define the maximum compatibility estimate, a multimodal generalization of the maximum likelihood estimate, for this purpose. We also present a specific version of the procedure for asteroid flyby missions, with which one can reconstruct the complete shape of the target by using the flyby-based map of a part of the surface together with other available data. Finally, we show that the relative volume error of a shape solution is usually approximately equal to the relative shape error rather than its multiple. Our algorithms are trivially parallelizable, so running the code on a CUDA-enabled graphics processing unit is some two orders of magnitude faster than the usual single-processor mode.

  18. Carbon Budgets as a Guide to Deep Decarbonisation

    NASA Astrophysics Data System (ADS)

    Rogelj, J.

    2017-12-01

    Halting global mean temperature rise requires a limit on the cumulative amount of net CO2 disposed of in the atmosphere. Remaining within the limits of such carbon budgets over the 21st century will require a profound transformation of how our societies use and produce energy, crops, and materials. To understand the options available to stay within stringent carbon budget constraints, global transformation pathways are being devised with integrated models of the energy-economy-land system. This presentation will look at how the latest insights from such pathways affect carbon budgets. Estimates of carbon budgets compatible with a given temperature limit depend on the anticipated temperature contribution of non-CO2 forcers at peak warming. Integrated transformation pathways allow us to understand the projected extent of these contributions, as well as to estimate the maximum conceivable rate of emissions reductions over the coming decades. The latter directly informs the lower end of future cumulative CO2 emissions and can thus provide an estimate of minimum peak warming over the 21st century, a measure which can be compared to the ambitious long-term temperature goal of the UNFCCC Paris Agreement.

  19. The fabrication of a programmable via using phase-change material in CMOS-compatible technology.

    PubMed

    Chen, Kuan-Neng; Krusin-Elbaum, Lia

    2010-04-02

    We demonstrate an energy-efficient programmable via concept using indirectly heated phase-change material. This via structure has maximum phase-change volume to achieve a minimum on resistance for high performance logic applications. Process development and material investigations for this device structure are reported. The device concept is successfully demonstrated in a standard CMOS-compatible technology capable of multiple cycles between on/off states for reconfigurable applications.

  20. InSAR analysis of surface deformation over permafrost to estimate active layer thickness based on one-dimensional heat transfer model of soils

    PubMed Central

    Li, Zhiwei; Zhao, Rong; Hu, Jun; Wen, Lianxing; Feng, Guangcai; Zhang, Zeyu; Wang, Qijie

    2015-01-01

    This paper presents a novel method to estimate active layer thickness (ALT) over permafrost based on InSAR (Interferometric Synthetic Aperture Radar) observations and the heat transfer model of soils. The time lags between the periodic feature of InSAR-observed surface deformation over permafrost and the meteorologically recorded temperatures are assumed to be the time intervals required for the temperature maximum to diffuse from the ground surface down to the bottom of the active layer. By exploiting the time lags and the one-dimensional heat transfer model of soils, we estimate the ALTs. Using the frozen soil region in the southern Qinghai-Tibet Plateau (QTP) as an example, we provide a conceptual demonstration of the estimation of InSAR pixel-wise ALTs. In the case study, the ALTs range from 1.02 to 3.14 m, with an average of 1.95 m. The results are compatible with the sparse ALT observations/estimations obtained by traditional methods, but with extraordinarily high spatial resolution at the pixel level (~40 m). The presented method is simple, and can potentially be used for deriving high-resolution ALTs in other remote areas similar to the QTP, where only sparse observations are available now. PMID:26480892
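
    The depth-from-lag step follows from the classical periodic solution of the 1-D heat equation, T(z,t) ∝ exp(-z/d)·sin(ωt - z/d) with d = sqrt(2κ/ω): a phase lag z/d corresponds to a time lag t_lag = z/(ωd), so z = t_lag·sqrt(2κω). A sketch with an assumed soil diffusivity and hypothetical lags:

    ```python
    import numpy as np

    kappa = 8.0e-7                              # m^2/s, assumed soil diffusivity
    omega = 2 * np.pi / (365.25 * 86400.0)      # annual angular frequency, 1/s

    def alt_from_lag(lag_days):
        """Depth whose temperature maximum lags the surface by lag_days."""
        return lag_days * 86400.0 * np.sqrt(2 * kappa * omega)

    for lag in (30, 60, 90):                    # hypothetical InSAR time lags
        print(f"time lag {lag:3d} d -> ALT ~ {alt_from_lag(lag):.2f} m")
    ```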

  1. InSAR analysis of surface deformation over permafrost to estimate active layer thickness based on one-dimensional heat transfer model of soils.

    PubMed

    Li, Zhiwei; Zhao, Rong; Hu, Jun; Wen, Lianxing; Feng, Guangcai; Zhang, Zeyu; Wang, Qijie

    2015-10-20

    This paper presents a novel method to estimate active layer thickness (ALT) over permafrost based on InSAR (Interferometric Synthetic Aperture Radar) observations and the heat transfer model of soils. The time lags between the periodic feature of InSAR-observed surface deformation over permafrost and the meteorologically recorded temperatures are assumed to be the time intervals required for the temperature maximum to diffuse from the ground surface down to the bottom of the active layer. By exploiting the time lags and the one-dimensional heat transfer model of soils, we estimate the ALTs. Using the frozen soil region in the southern Qinghai-Tibet Plateau (QTP) as an example, we provide a conceptual demonstration of the estimation of InSAR pixel-wise ALTs. In the case study, the ALTs range from 1.02 to 3.14 m, with an average of 1.95 m. The results are compatible with the sparse ALT observations/estimations obtained by traditional methods, but with extraordinarily high spatial resolution at the pixel level (~40 m). The presented method is simple, and can potentially be used for deriving high-resolution ALTs in other remote areas similar to the QTP, where only sparse observations are available now.

  2. The optimal fiber volume fraction and fiber-matrix property compatibility in fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Pan, Ning

    1992-01-01

    Although the question of the minimum or critical fiber volume fraction beyond which a composite is strengthened by the addition of fibers has been dealt with by several investigators for both continuous and short fiber composites, a study of the maximum or optimal fiber volume fraction at which the composite reaches its highest strength has not yet been reported. The present analysis investigates this issue for the short-fiber case, based on the well-known shear lag (elastic stress transfer) theory, as a first step. Using the relationships obtained, the minimum spacing between fibers is determined, from which the maximum fiber volume fraction can be calculated depending on the fiber packing forms within the composite. The effects on this maximum fiber volume fraction of such factors as fiber and matrix properties, fiber aspect ratio, and fiber packing forms are discussed. Furthermore, combined with the previous analysis of the minimum fiber volume fraction, this maximum fiber volume fraction can be used to examine the property compatibility of fiber and matrix in forming a composite, which is deemed useful for composite design. Finally, some examples are provided to illustrate the results.
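
    Once the shear-lag analysis fixes a minimum inter-fiber spacing δ, the maximum fiber volume fraction follows from packing geometry: Vf = c·(d/(d+δ))², with c = π/(2√3) for hexagonal arrays and π/4 for square arrays. A small sketch with hypothetical values:

    ```python
    import math

    def vf_max(d, delta, packing="hex"):
        """Max fiber volume fraction for fiber diameter d, min spacing delta."""
        c = math.pi / (2 * math.sqrt(3)) if packing == "hex" else math.pi / 4
        return c * (d / (d + delta)) ** 2

    d = 10.0                          # fiber diameter, um (hypothetical)
    for delta in (0.5, 1.0, 2.0):     # min matrix spacing from shear-lag analysis
        print(f"delta = {delta} um: hex {vf_max(d, delta):.3f}, "
              f"square {vf_max(d, delta, 'square'):.3f}")
    ```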

  3. A Compatibility Assessment and Comparison of the Flame Resistant Chemical-Biological (CB) Overgarment with the Standard ’A’ CB Overgarment.

    DTIC Science & Technology

    1982-01-01

    [Extraction fragment: table-of-contents entries and Table 1 (Basic Anthropometry: mean, SD, maximum; recommended overgarment sizes) from the report. The evaluation considered ease of donning and doffing and compatibility with prescribed clothing and field equipment for the test participants (TPs), whose basic anthropometry is given in Table 1.]

  4. Super-spinning compact objects and models of high-frequency quasi-periodic oscillations observed in Galactic microquasars. II. Forced resonances

    NASA Astrophysics Data System (ADS)

    Kotrlová, A.; Šrámková, E.; Török, G.; Stuchlík, Z.; Goluchová, K.

    2017-11-01

    In our previous work (Paper I) we applied several models of high-frequency quasi-periodic oscillations (HF QPOs) to estimate the spin of the central compact object in three Galactic microquasars, assuming that the central compact body may be a super-spinning object (or a naked singularity) whose external spacetime is described by Kerr geometry with a dimensionless spin parameter a ≡ cJ/GM² > 1. Here we extend our consideration and consistently investigate the implications of a set of ten resonance models so far discussed only in the context of a < 1. The same physical arguments as in Paper I are applied to these models, i.e. only a small deviation of the spin estimate from a = 1, a ≳ 1, is assumed for a favoured model. For five of these models, which involve Keplerian and radial epicyclic oscillations, we find the existence of a unique specific QPO excitation radius. Consequently, the dimensionless frequency M × νU(a) has a simple behaviour, represented by a single continuous function with a single maximum close to a ≳ 1. Only one of these models is compatible with the expectation of a ≳ 1. The other five models, which involve the radial and vertical epicyclic oscillations, imply the existence of multiple resonant radii. This signifies a more complicated behaviour of M × νU(a) that cannot be represented by single functions. Each of these five models is compatible with the expectation of a ≳ 1.
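    The ingredients of such models are the orbital and epicyclic frequencies of circular equatorial geodesics in Kerr spacetime, which can be evaluated directly from the textbook expressions (formally valid also for a > 1). This sketch does not reproduce the paper's resonance-model fits, and the radius, spin, and mass values are illustrative:

    ```python
    import math

    G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

    def kerr_frequencies(r, a, mass_msun):
        """Keplerian (nu_phi), radial (nu_r) and vertical (nu_theta) epicyclic
        frequencies for circular equatorial geodesics in Kerr spacetime.
        r is in units of GM/c^2; the standard formulas remain formally
        valid for a > 1 (the super-spinning case considered here)."""
        M = mass_msun * M_SUN
        nu_phi = c**3 / (2.0 * math.pi * G * M) / (r**1.5 + a)
        nu_r = nu_phi * math.sqrt(1.0 - 6.0 / r + 8.0 * a / r**1.5 - 3.0 * a**2 / r**2)
        nu_th = nu_phi * math.sqrt(1.0 - 4.0 * a / r**1.5 + 3.0 * a**2 / r**2)
        return nu_phi, nu_r, nu_th

    # Illustrative point: r = 6 GM/c^2, a = 1.05, a 10 solar-mass object.
    print([f"{f:.1f} Hz" for f in kerr_frequencies(6.0, 1.05, 10.0)])
    ```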

  5. New Compatible Estimators for Survivor Growth and Ingrowth from Remeasured Horizontal Point Samples

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1989-01-01

    Forest volume growth between two measurements is often decomposed into the components of survivor growth (S), ingrowth (I), mortality (M), and cut (C) (for example, Beers 1962 or Van Deusen et al. 1986). Net change between the volumes at times 1 and 2 is then represented by the equation V2 − V1 = S + I − M − C. Two new compatible pairs of estimators for S and I in this...
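    The accounting identity itself is trivially checked; the numbers below are illustrative only (cubic meters per hectare, hypothetical):

    ```python
    # Toy check of the growth-accounting identity V2 - V1 = S + I - M - C.
    V1, S, I, M, C = 120.0, 14.0, 3.0, 5.0, 6.0
    V2 = V1 + S + I - M - C
    assert abs((V2 - V1) - (S + I - M - C)) < 1e-9
    print(V2)  # 126.0
    ```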

  6. A method for safety testing of radiofrequency/microwave-emitting devices using MRI.

    PubMed

    Alon, Leeor; Cho, Gene Y; Yang, Xing; Sodickson, Daniel K; Deniz, Cem M

    2015-11-01

    Strict regulations are imposed on the amount of radiofrequency (RF) energy that devices can emit to prevent excessive deposition of RF energy into the body. In this study, we investigated the application of MR temperature mapping and 10-g average specific absorption rate (SAR) computation for safety evaluation of RF-emitting devices. Quantification of the RF power deposition was shown for an MRI-compatible dipole antenna and a non-MRI-compatible mobile phone via phantom temperature change measurements. Validation of the MR temperature mapping method was demonstrated by comparison with physical temperature measurements and electromagnetic field simulations. MR temperature measurements alongside physical property measurements were used to reconstruct 10-g average SAR. The maximum temperature change for a dipole antenna and the maximum 10-g average SAR were 1.83°C and 12.4 W/kg, respectively, for simulations and 1.73°C and 11.9 W/kg, respectively, for experiments. The difference between MR and probe thermometry was <0.15°C. The maximum temperature change and the maximum 10-g average SAR for a cell phone radiating at maximum output for 15 min were 1.7°C and 0.54 W/kg, respectively. Information acquired using MR temperature mapping and thermal property measurements can assess RF/microwave safety with high resolution and fidelity.
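    Reconstructing power deposition from MR thermometry rests on the short-time calorimetric relation SAR ≈ c_p·dT/dt, valid before conduction and perfusion redistribute heat. A minimal sketch (the specific heat and exposure values are assumptions; applying the relation over a long interval, as below, gives only a crude upper-bound illustration, not the paper's 10-g averaged result):

    ```python
    def sar_from_heating(delta_t_celsius, duration_s, specific_heat=4186.0):
        """Estimate local SAR (W/kg) from an MR-thermometry temperature rise,
        using the short-time calorimetric relation SAR ~ c_p * dT/dt.
        specific_heat defaults to a water-like tissue/phantom value, J/(kg*K)."""
        return specific_heat * delta_t_celsius / duration_s

    # Hypothetical: a 1.7 C rise over a 15-minute exposure. In practice the
    # initial slope of T(t) is used rather than the full interval.
    print(round(sar_from_heating(1.7, 15 * 60), 2), "W/kg")
    ```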

  7. A Method for Safety Testing of Radiofrequency/Microwave-Emitting Devices Using MRI

    PubMed Central

    Alon, Leeor; Cho, Gene Y.; Yang, Xing; Sodickson, Daniel K.; Deniz, Cem M.

    2015-01-01

    Purpose Strict regulations are imposed on the amount of radiofrequency (RF) energy that devices can emit to prevent excessive deposition of RF energy into the body. In this study, we investigated the application of MR temperature mapping and 10-g average specific absorption rate (SAR) computation for safety evaluation of RF-emitting devices. Methods Quantification of the RF power deposition was shown for an MRI-compatible dipole antenna and a non-MRI-compatible mobile phone via phantom temperature change measurements. Validation of the MR temperature mapping method was demonstrated by comparison with physical temperature measurements and electromagnetic field simulations. MR temperature measurements alongside physical property measurements were used to reconstruct 10-g average SAR. Results The maximum temperature change for a dipole antenna and the maximum 10-g average SAR were 1.83°C and 12.4 W/kg, respectively, for simulations and 1.73°C and 11.9 W/kg, respectively, for experiments. The difference between MR and probe thermometry was <0.15°C. The maximum temperature change and the maximum 10-g average SAR for a cell phone radiating at maximum output for 15 min were 1.7°C and 0.54 W/kg, respectively. Conclusion Information acquired using MR temperature mapping and thermal property measurements can assess RF/microwave safety with high resolution and fidelity. PMID:25424724

  8. Alternatives to Weight Tolerance Permits

    DOT National Transportation Integrated Search

    2000-10-01

    A complex web of government regulations in the United States establishes maximum weights for vehicles on public roads. The primary purpose is to ensure compatibility of roadway design and operations with vehicle weight and dimensions. Of particular c...

  9. Rates of spontaneous mutation among RNA viruses.

    PubMed Central

    Drake, J W

    1993-01-01

    Simple methods are presented to estimate rates of spontaneous mutation from mutant frequencies and population parameters in RNA viruses. Published mutant frequencies yield a wide range of mutation rates per genome per replication, mainly because mutational targets have usually been small and, thus, poor samples of the mutability of the average base. Nevertheless, there is a clear central tendency for lytic RNA viruses (bacteriophage Q beta, poliomyelitis, vesicular stomatitis, and influenza A) to display rates of spontaneous mutation of approximately 1 per genome per replication. This rate is some 300-fold higher than previously reported for DNA-based microbes. Lytic RNA viruses thus mutate at a rate close to the maximum value compatible with viability. Retroviruses (spleen necrosis, murine leukemia, Rous sarcoma), however, mutate at an average rate about an order of magnitude lower than lytic RNA viruses. PMID:8387212
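    The basic scaling behind such estimates can be sketched as follows; this ignores the corrections for replication mode and selection that the paper's methods include, so it is only an order-of-magnitude illustration with hypothetical numbers:

    ```python
    def genomic_rate(mutant_frequency, target_size_bases, genome_size_bases):
        """Naive scaling from a measured mutant frequency to a per-genome,
        per-replication mutation rate: rate per base ~ frequency / target size,
        multiplied up to the whole genome. Small targets sample the mutability
        of the average base poorly, which is why published frequencies scatter."""
        return mutant_frequency / target_size_bases * genome_size_bases

    # Hypothetical: frequency 1e-4 at a 100-base target, 10 kb RNA genome.
    print(round(genomic_rate(1e-4, 100, 10_000), 3))  # 0.01 per genome per replication
    ```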

  10. Correlation of the CT Compatible Stereotaxic Craniotomy with MRI Scans of the Patients for Removing Cranial Lesions Located Eloquent Areas and Deep Sites of Brain.

    PubMed

    Gulsen, Salih

    2015-03-15

    The first goal in neurosurgery is to preserve neural function as far as possible. At the same time, the neurosurgeon should remove the maximum amount of tumoral tissue from the tumour region of the brain; neurosurgery and technological advancement therefore go hand in hand toward this goal. The use of CT-compatible stereotaxy for removing cranial tumours stands as a cornerstone of these technological advancements. Following the application of CT-compatible stereotaxic systems in neurosurgery, further techniques have entered neurosurgical practice: magnetic resonance imaging (MRI), MRI-compatible stereotaxy, frameless stereotaxy, volumetric stereotaxy, functional MRI, diffusion tensor (DT) imaging (tractography of the white matter), intraoperative MRI, and neuronavigation systems. However, acquiring all of the equipment embodying these technologies is often impossible for economic reasons. When CT-compatible stereotaxy scans are correlated with the patients' MRI scans, it is nevertheless possible to achieve gross total resection while protecting and improving patients' neural function.

  11. Neutral atmospheric models compatible with satellite orbital decay and incoherent scatter measurements

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, J. L.

    1972-01-01

    A correlation study was made of the variations of the exospheric temperature extrema with various combinations of the monthly mean and daily values of the 2800 MHz and Ca II solar indices. The phase and amplitude of the semi-annual component and the term dependent on Kp were found to remain almost the same for the maximum and minimum temperature. The term dependent on the 27-day component of the solar activity was found to be about four times as large for the diurnal maximum as for the minimum. Measurements at Arecibo have shown that temperature gradient changes at 125 km are consistent with the phase difference between the neutral temperature and density maxima. This is used to develop an empirical model which is compatible with both the satellite measurements and the available incoherent scatter measurements. A main feature of this model is that day length is included as a major model parameter.

  12. Piezoelectric actuator design for MR elastography: implementation and vibration issues.

    PubMed

    Tse, Zion Tsz Ho; Chan, Yum Ji; Janssen, Henning; Hamed, Abbi; Young, Ian; Lamperth, Michael

    2011-09-01

    MR elastography (MRE) is an emerging technique for tumor diagnosis. MRE actuation devices require precise mechanical design and radiofrequency engineering to achieve the required mechanical vibration performance and MR compatibility. A method of designing a general-purpose, compact and inexpensive MRE actuator is presented. It comprises piezoelectric bimorphs arranged in a resonant structure designed to operate at its resonant frequency for maximum vibration amplitude. An analytical model was established to understand the device's vibration characteristics. The model-predicted performance was validated in experiments, predicting the actuator resonant frequency with an error of <4%. The device was shown to be MR compatible, causing minimal interference to a 1.5 Tesla MRI scanner, with a maximum signal-to-noise ratio reduction of 7.8% and a generated artefact of 7.9 mm in MR images. A piezoelectric MRE actuator is proposed, and its implementation, vibration issues and future work are discussed.

  13. Compatibility of segmented thermoelectric generators

    NASA Technical Reports Server (NTRS)

    Snyder, J.; Ursell, T.

    2002-01-01

    It is well known that power generation efficiency improves when materials with appropriate properties are combined either in a cascaded or segmented fashion across a temperature gradient. Past methods for selecting materials for segmentation were mainly concerned with materials that have the highest figure of merit in the temperature range. However, the example of SiGe segmented with Bi2Te3 and/or various skutterudites shows a marked decline in device efficiency even though SiGe has the highest figure of merit in its temperature range. The origin of the incompatibility of SiGe with other thermoelectric materials leads to a general definition of compatibility and intrinsic efficiency. The compatibility factor, derived as s = (√(1+zT) − 1)/(αT), is a function only of intrinsic material properties and temperature, and represents a ratio of electric current to conducted heat. For maximum efficiency the compatibility factor should not change with temperature, both within a single material and in the segmented leg as a whole. This leads to a measure of compatibility not only between segments, but also within a segment. General temperature trends show that materials are more self-compatible at higher temperatures, and segmentation is more difficult across a larger ΔT. The compatibility factor can be used as a quantitative guide for deciding whether a material is better suited for segmentation or cascading. Analyses of compatibility factors and intrinsic efficiency for optimal segmentation are discussed, with the intent to predict optimal material properties, temperature interfaces, and/or current-to-heat ratios.
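    A minimal sketch of the compatibility factor; the zT and Seebeck values below are illustrative order-of-magnitude inputs, not measured material data:

    ```python
    import math

    def compatibility_factor(zT, seebeck_V_per_K, T_kelvin):
        """Thermoelectric compatibility factor s = (sqrt(1 + zT) - 1) / (alpha*T):
        the relative current density at which a segment runs at its maximum
        (intrinsic) efficiency. Segments whose s values differ by much more
        than a factor of about two segment poorly."""
        return (math.sqrt(1.0 + zT) - 1.0) / (seebeck_V_per_K * T_kelvin)

    # Illustrative inputs: a Bi2Te3-like segment near 400 K versus a
    # SiGe-like segment near 900 K (hypothetical zT and alpha values).
    print(f"{compatibility_factor(0.9, 200e-6, 400):.1f} 1/V")
    print(f"{compatibility_factor(0.8, 250e-6, 900):.1f} 1/V")
    ```

    The roughly threefold difference between the two s values illustrates why such a pair segments poorly even though each material has a high figure of merit in its own temperature range.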

  14. Surface acoustic wave devices as passive buried sensors

    NASA Astrophysics Data System (ADS)

    Friedt, J.-M.; Rétornaz, T.; Alzuaga, S.; Baron, T.; Martin, G.; Laroche, T.; Ballandras, S.; Griselin, M.; Simonnet, J.-P.

    2011-02-01

    Surface acoustic wave (SAW) devices are currently used as passive remote-controlled sensors for measuring various physical quantities through a wireless link. Among the two main classes of designs—resonator and delay line—the former has the advantage of providing narrow-band spectral information and hence appears compatible with an interrogation strategy complying with Industry-Scientific-Medical regulations in radio-frequency (rf) bands centered around 434, 866, or 915 MHz. Delay-line based sensors require larger bandwidths, as they consist of a few interdigitated electrodes excited by short rf pulses with large instantaneous energy and short response delays, but are compatible with existing equipment such as ground penetrating radar (GPR). We here demonstrate the measurement of temperature using the two configurations, particularly for long term monitoring using sensors buried in soil. Although we have demonstrated long term stability and robustness of packaged resonators and a signal to noise ratio compatible with the expected application, the interrogation range (maximum 80 cm) is insufficient for most geology or geophysical purposes. We then focus on the use of delay lines, as the corresponding interrogation method is similar to the one used by GPR, which allows for rf penetration distances ranging from a few meters to tens of meters in the lower rf range, depending on soil water content, permittivity, and conductivity. Assuming propagation losses in a pure dielectric medium with negligible conductivity (snow or ice), an interrogation distance of about 40 m is predicted, which overcomes the observed limits met when using interrogation methods specifically developed for wireless SAW sensors, and could partly comply with the above-mentioned applications. Although quite optimistic, this estimate is consistent with the signal to noise ratio observed during an experimental demonstration of the interrogation of a delay line buried at a depth of 5 m in snow.
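    The quoted interrogation distances follow from a two-way link budget. The sketch below assumes free-space propagation in a lossless dielectric and hypothetical antenna gains, transmit power, and delay-line insertion loss:

    ```python
    import math

    def echo_power_dbm(pt_dbm, gain_reader_dbi, gain_sensor_dbi,
                       distance_m, freq_hz, insertion_loss_db):
        """Two-way link budget for a passive delay-line sensor: the reader
        signal propagates to the sensor and the acoustic echo propagates back,
        so the free-space path loss FSPL = 20*log10(4*pi*d*f/c) counts twice.
        Dielectric absorption in soil/ice is neglected (lossless medium)."""
        fspl = 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / 3.0e8)
        return (pt_dbm + 2.0 * gain_reader_dbi + 2.0 * gain_sensor_dbi
                - 2.0 * fspl - insertion_loss_db)

    # Hypothetical GPR-like setup: 10 dBm at 100 MHz, modest antennas,
    # 35 dB delay-line insertion loss, 40 m range.
    print(round(echo_power_dbm(10, 5, 0, 40.0, 100e6, 35.0), 1), "dBm")
    ```

    The resulting echo level (about −104 dBm here) must stay above the receiver sensitivity, which is what ultimately bounds the interrogation distance.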

  15. Comparative analysis of microarray data in Arabidopsis transcriptome during compatible interactions with plant viruses

    USDA-ARS?s Scientific Manuscript database

    To analyze transcriptome response to virus infection, we have assembled currently available microarray data on changes in gene expression levels in compatible Arabidopsis-virus interactions. We used the mean r (Pearson’s correlation coefficient) for neighboring pairs to estimate pairwise local simil...

  16. Calcium Chloride in Neonatal Parenteral Nutrition Solutions with and without Added Cysteine: Compatibility Studies Using Laser and Micro-Flow Imaging Methodology.

    PubMed

    Huston, Robert K; Christensen, J Mark; Alshahrani, Sultan M; Mohamed, Sumeia M; Clark, Sara M; Nason, Jeffrey A; Wu, Ying Xing

    2015-01-01

    Previous studies of the compatibility of calcium chloride (CaCl2) and phosphates have not included particle counts in the range specified by the United States Pharmacopeia. Micro-flow imaging techniques have been shown to be comparable to light obscuration when determining particle count and size in pharmaceutical solutions. The purpose of this study was to perform compatibility testing for parenteral nutrition (PN) solutions containing CaCl2 using dynamic light scattering and micro-flow imaging techniques. Solutions containing TrophAmine (Braun Medical Inc, Irvine, CA), CaCl2, and sodium phosphate (NaPhos) were compounded with and without cysteine. All solutions contained standard additives to neonatal PN solutions including dextrose, trace metals, and electrolytes. Control solutions contained no calcium or phosphate. Solutions were analyzed for particle size and particle count. Means of Z-average particle size and particle counts of controls were determined. Study solutions were compared to controls and United States Pharmacopeia (USP) Chapter 788 guidelines. The maximum amount of Phos that was compatible in solutions containing at least 10 mmol/L of Ca in 2.5% amino acids (AA) was determined. Compatibility of these solutions was verified by performing analyses of five repeats of these solutions, along with microscopic analyses of the repeats. Amounts of CaCl2 and NaPhos that were compatible in solutions containing 1.5%, 2%, 2.5%, and 3% AA were determined. The maximum amount of NaPhos that could be added to TrophAmine solutions of ≥2.5% AA containing at least 10 mmol/L of CaCl2 was 7.5 mmol/L. Adding 50 mg/dL of cysteine increased the amount of NaPhos that could be added to solutions containing 10 mmol/L of CaCl2 to 10 mmol/L. Calcium chloride can be added to neonatal PN solutions containing NaPhos in concentrations that can potentially provide an intravenous intake of adequate amounts of calcium and phosphorus.

  17. Photothermal tomography for the functional and structural evaluation, and early mineral loss monitoring in bones.

    PubMed

    Kaiplavil, Sreekumar; Mandelis, Andreas; Wang, Xueding; Feng, Ting

    2014-08-01

    Salient features of a new non-ionizing bone diagnostics technique, truncated-correlation photothermal coherence tomography (TC-PCT), exhibiting optical-grade contrast and capable of resolving the trabecular network in three dimensions through the cortical region with and without a soft-tissue overlayer are presented. The absolute nature and early demineralization-detection capability of a marker called thermal wave occupation index, estimated using the proposed modality, have been established. Selective imaging of regions of a specific mineral density range has been demonstrated in a mouse femur. The method is maximum-permissible-exposure compatible. In a matrix of bone and soft-tissue a depth range of ~3.8 mm has been achieved, which can be increased through instrumental and modulation waveform optimization. Furthermore, photoacoustic microscopy, a comparable modality with TC-PCT, has been used to resolve the trabecular structure and for comparison with the photothermal tomography.

  18. Photothermal tomography for the functional and structural evaluation, and early mineral loss monitoring in bones

    PubMed Central

    Kaiplavil, Sreekumar; Mandelis, Andreas; Wang, Xueding; Feng, Ting

    2014-01-01

    Salient features of a new non-ionizing bone diagnostics technique, truncated-correlation photothermal coherence tomography (TC-PCT), exhibiting optical-grade contrast and capable of resolving the trabecular network in three dimensions through the cortical region with and without a soft-tissue overlayer are presented. The absolute nature and early demineralization-detection capability of a marker called thermal wave occupation index, estimated using the proposed modality, have been established. Selective imaging of regions of a specific mineral density range has been demonstrated in a mouse femur. The method is maximum-permissible-exposure compatible. In a matrix of bone and soft-tissue a depth range of ~3.8 mm has been achieved, which can be increased through instrumental and modulation waveform optimization. Furthermore, photoacoustic microscopy, a comparable modality with TC-PCT, has been used to resolve the trabecular structure and for comparison with the photothermal tomography. PMID:25136480

  19. Effect of Background Pressure on the Plasma Oscillation Characteristics of the HiVHAc Hall Thruster

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Kamhawi, Hani; Lobbia, Robert B.; Brown, Daniel L.

    2014-01-01

    During a component compatibility test of the NASA HiVHAc Hall thruster, a number of plasma diagnostics were implemented to study the effect of varying facility background pressure on thruster operation. These diagnostics characterized the thruster performance, the plume, and the plasma oscillations in the thruster. Thruster performance and plume characteristics as functions of background pressure were previously published. This paper focuses on changes in the plasma oscillation characteristics with changing background pressure. The diagnostics used to study plasma oscillations include a high-speed camera and a set of high-speed Langmuir probes. The results show a rise in the oscillation frequency of the "breathing" mode with rising background pressure, which is hypothesized to be due to a shortening acceleration/ionization zone. An attempt is made to apply a simplified ingestion model to the data. The combined results are used to estimate the maximum acceptable background pressure for performance and wear testing.
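    A first-order ingestion estimate of the kind referred to can be sketched as the free-molecular thermal flux of background gas through the thruster exit plane; the pressure, temperature, and area values are hypothetical, and the paper's actual model may differ:

    ```python
    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def ingested_mass_flow_mg_s(pressure_torr, T_kelvin, area_m2, m_amu):
        """Simplified background-ingestion estimate: free-molecular thermal
        flux through the exit plane, mdot = (n * v_bar / 4) * m * A, with
        mean thermal speed v_bar = sqrt(8*k*T / (pi*m))."""
        n = pressure_torr * 133.322 / (K_B * T_kelvin)      # number density, 1/m^3
        m = m_amu * 1.66054e-27                              # molecular mass, kg
        v_bar = math.sqrt(8.0 * K_B * T_kelvin / (math.pi * m))
        return 0.25 * n * v_bar * m * area_m2 * 1e6          # mg/s

    # Hypothetical: 1e-5 Torr xenon background, 300 K, 0.01 m^2 exit area.
    print(round(ingested_mass_flow_mg_s(1e-5, 300.0, 0.01, 131.29), 4), "mg/s")
    ```

    At a few-mg/s anode flow rate, the ~0.04 mg/s computed for these assumed conditions would correspond to roughly a one-percent ingestion fraction, which is why background pressure matters for performance and wear testing.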

  20. A mild alkali treated jute fibre controlling the hydration behaviour of greener cement paste

    PubMed Central

    Jo, Byung-Wan; Chakraborty, Sumit

    2015-01-01

    To reduce the antagonistic effect of jute fibre on the setting and hydration of jute reinforced cement, modified jute fibre reinforcement would be a unique approach. The present investigation deals with the effectiveness of mild alkali treated (0.5%) jute fibre on the setting and hydration behaviour of cement. Setting time measurement, hydration test and analytical characterizations of the hardened samples (viz., FTIR, XRD, DSC, TGA, and free lime estimation) were used to evaluate the effect of alkali treated jute fibre. From the hydration test, the time (t) required to reach maximum temperature for the hydration of control cement sample is estimated to be 860 min, whilst the time (t) is measured to be 1040 min for the hydration of a raw jute reinforced cement sample. However, the time (t) is estimated to be 1020 min for the hydration of an alkali treated jute reinforced cement sample. Additionally, from the analytical characterizations, it is determined that fibre-cement compatibility is increased and hydration delaying effect is minimized by using alkali treated jute fibre as fibre reinforcement. Based on the analyses, a model has been proposed to explain the setting and hydration behaviour of alkali treated jute fibre reinforced cement composite. PMID:25592665

  1. The Influence of Compatibility of Rhubarb and Radix Scutellariae on the Pharmacokinetics of Anthraquinones and Flavonoids in Rat Plasma.

    PubMed

    Zhang, Yaqing; Zhang, Zunjian; Song, Rui

    2018-06-01

    Rhubarb-Radix scutellariae is a classic herb pair, which is commonly used to clear away heat and toxin in the clinic. The aim of this study was to investigate the influence of the compatibility of Rhubarb and Radix scutellariae on the pharmacokinetic behaviors of anthraquinones and flavonoids in rat plasma. Eighteen rats were randomly divided into three groups and were orally administered Rhubarb and/or Radix scutellariae extracts. A sensitive and rapid UPLC-MS/MS method was developed and validated to determine the concentrations of baicalin, baicalein, wogonoside, wogonin, rhein, and emodin in rat plasma. The concentrations of phase II conjugates of flavonoid aglycones and anthraquinone aglycones were also determined after hydrolyzing the plasma with sulfatase. Compared with administration of Radix scutellariae alone, co-administration of Rhubarb significantly decreased the first maximum plasma concentration (Cmax1) of baicalin, wogonoside, and the phase II conjugates of baicalein and wogonin to 46.40, 61.27, 41.49, and 20.50%, respectively. The area under the plasma concentration-time curve from time zero to infinity (AUC0-∞) was significantly decreased from 82.60 ± 20.22 to 51.91 ± 7.46 μM·h for rhein and from 276.83 ± 98.02 to 175.42 ± 86.82 μM·h for the phase II conjugates of wogonin after compatibility. The time to reach the first maximum plasma concentration (Tmax1) of anthraquinones was shortened and the second peak of anthraquinones disappeared after compatibility. Compatibility of Rhubarb and Radix scutellariae can significantly affect the pharmacokinetic behaviors of characteristic constituents of the two herbs. The causes of these pharmacokinetic differences are further discussed in combination with the in vivo ADME (absorption, distribution, metabolism, and excretion) processes of anthraquinones and flavonoids.
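    The reported metrics are standard noncompartmental quantities; a minimal sketch of their computation (the concentration profile below is synthetic, chosen only to show a double peak of the kind described):

    ```python
    import numpy as np

    def pk_summary(times_h, conc_uM):
        """Basic noncompartmental metrics from a plasma concentration-time
        profile: Cmax, Tmax, and AUC(0-t) by the linear trapezoidal rule.
        (AUC(0-inf) would additionally extrapolate the terminal phase.)"""
        t, c = np.asarray(times_h, float), np.asarray(conc_uM, float)
        i = int(np.argmax(c))
        return c[i], t[i], np.trapz(c, t)

    # Hypothetical profile with a double peak, as seen for some anthraquinones.
    t = [0, 0.5, 1, 2, 4, 8, 12, 24]
    c = [0, 3.1, 5.2, 2.8, 3.6, 2.0, 1.1, 0.2]
    cmax, tmax, auc = pk_summary(t, c)
    print(f"Cmax={cmax} uM at Tmax={tmax} h, AUC(0-24h)={auc:.1f} uM*h")
    ```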

  2. Assessing the MR compatibility of dental retainer wires at 7 Tesla.

    PubMed

    Wezel, Joep; Kooij, Bert Jan; Webb, Andrew G

    2014-10-01

    To determine the MR compatibility of common dental retainer wires at 7 Tesla in terms of potential RF heating and magnetic susceptibility effects. Electromagnetic simulations and experimental results were compared for dental retainer wires placed in tissue-mimicking phantoms. Simulations were then performed for a human model with the wire in place. Finally, image quality was assessed for different scanning protocols and wires. Simulations and experimental data in phantoms agreed well; the wire length producing maximum heating in phantoms was approximately 47 mm. Even in this case, no substantial heating occurs when scanning within the specific absorption rate (SAR) guidelines for the head. Image distortions from the most ferromagnetic dental wire were not significant for any brain region. Dental retainer wires appear to be MR compatible at 7 Tesla.

  3. Simpler and equitable allocation of kidneys from postmortem donors primarily based on full HLA-DR compatibility.

    PubMed

    Doxiadis, Ilias I N; de Fijter, Johan W; Mallat, Marko J K; Haasnoot, Geert W; Ringers, Jan; Persijn, Guido G; Claas, Frans H J

    2007-05-15

    The introduction of human leukocyte antigen (HLA) matching in nonliving kidney transplantation has resulted in better graft outcomes, but also in an increase of waiting time, especially for patients with rare HLA phenotypes. We addressed the question of the differential influence of HLA-DR matching versus HLA-A,B matching in clinical kidney transplantation. We used the Kaplan-Meier product-limit method to estimate survival rates, and Cox proportional hazards regression for the estimation of relative risks (hazard ratios) for different variables. A single center study (n=456 transplants, performed between 1985 and 1999) showed that full HLA-DR compatibility leads to a lower incidence of biopsy-confirmed acute rejections in the first 180 posttransplantation days. These results were substantiated using the Eurotransplant database (n=39,205 transplants performed between 1985 and 2005), where graft survival in the full HLA-DR compatible group was significantly better than in the incompatible group. An additional positive effect of HLA-A,B matching was only found in the full HLA-DR compatible group. In both studies, the introduction of a single HLA-DR incompatibility eliminates the HLA-A,B matching effect. We propose to allocate postmortem kidneys only to patients with full HLA-DR compatibility, and to use HLA-A,B compatibility as an additional selection criterion. All patients, irrespective of their ethnic origin, will profit, since the polymorphism of HLA-DR is by far lower than that of HLA-A,B. Excessive kidney travel and cold ischemia time will be significantly reduced.
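    The product-limit estimator used in the study is straightforward to compute; a self-contained sketch with toy data (not the Eurotransplant cohort):

    ```python
    def kaplan_meier(times, events):
        """Kaplan-Meier product-limit estimate of graft survival. times are
        follow-up times; events are 1 for graft failure, 0 for censoring."""
        data = sorted(zip(times, events))
        at_risk, s, curve = len(data), 1.0, []
        i = 0
        while i < len(data):
            t = data[i][0]
            d = n_t = 0
            while i < len(data) and data[i][0] == t:   # group ties at time t
                d += data[i][1]
                n_t += 1
                i += 1
            if d:                                       # failures at t
                s *= 1.0 - d / at_risk
                curve.append((t, s))
            at_risk -= n_t
        return curve

    # Toy data (months of follow-up, failure flag) -- illustrative only.
    print(kaplan_meier([3, 5, 5, 8, 12, 12, 20], [1, 0, 1, 1, 0, 1, 0]))
    ```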

  4. Implementation of the Rauch-Tung-Striebel Smoother for Sensor Compatibility Correction of a Fixed-Wing Unmanned Air Vehicle

    PubMed Central

    Chan, Woei-Leong; Hsiao, Fei-Bin

    2011-01-01

    This paper presents a complete procedure for sensor compatibility correction of a fixed-wing Unmanned Air Vehicle (UAV). The sensors consist of a differential air pressure transducer for airspeed measurement, two airdata vanes installed on an airdata probe for angle of attack (AoA) and angle of sideslip (AoS) measurement, and an Attitude and Heading Reference System (AHRS) that provides attitude angles, angular rates, and acceleration. The procedure is mainly based on a two pass algorithm called the Rauch-Tung-Striebel (RTS) smoother, which consists of a forward pass Extended Kalman Filter (EKF) and a backward recursion smoother. On top of that, this paper proposes the implementation of the Wiener Type Filter prior to the RTS in order to avoid the complicated process noise covariance matrix estimation. Furthermore, an easy to implement airdata measurement noise variance estimation method is introduced. The method estimates the airdata and subsequently the noise variances using the ground speed and ascent rate provided by the Global Positioning System (GPS). It incorporates the idea of data regionality by assuming that some sort of statistical relation exists between nearby data points. Root mean square deviation (RMSD) is being employed to justify the sensor compatibility. The result shows that the presented procedure is easy to implement and it improves the UAV sensor data compatibility significantly. PMID:22163819

  5. Implementation of the Rauch-Tung-Striebel smoother for sensor compatibility correction of a fixed-wing unmanned air vehicle.

    PubMed

    Chan, Woei-Leong; Hsiao, Fei-Bin

    2011-01-01

    This paper presents a complete procedure for sensor compatibility correction of a fixed-wing Unmanned Air Vehicle (UAV). The sensors consist of a differential air pressure transducer for airspeed measurement, two airdata vanes installed on an airdata probe for angle of attack (AoA) and angle of sideslip (AoS) measurement, and an Attitude and Heading Reference System (AHRS) that provides attitude angles, angular rates, and acceleration. The procedure is mainly based on a two pass algorithm called the Rauch-Tung-Striebel (RTS) smoother, which consists of a forward pass Extended Kalman Filter (EKF) and a backward recursion smoother. On top of that, this paper proposes the implementation of the Wiener Type Filter prior to the RTS in order to avoid the complicated process noise covariance matrix estimation. Furthermore, an easy to implement airdata measurement noise variance estimation method is introduced. The method estimates the airdata and subsequently the noise variances using the ground speed and ascent rate provided by the Global Positioning System (GPS). It incorporates the idea of data regionality by assuming that some sort of statistical relation exists between nearby data points. Root mean square deviation (RMSD) is being employed to justify the sensor compatibility. The result shows that the presented procedure is easy to implement and it improves the UAV sensor data compatibility significantly.
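    The forward-filter/backward-smoother structure is easiest to see in the linear-Gaussian case. This sketch implements a standard Kalman filter plus the RTS backward recursion on a toy constant-velocity model; the paper's application uses an EKF forward pass and real flight data, neither of which is reproduced here:

    ```python
    import numpy as np

    def rts_smoother(y, F, H, Q, R, x0, P0):
        """Forward Kalman filter followed by the backward Rauch-Tung-Striebel
        recursion for the linear-Gaussian model x[k+1] = F x[k] + w,
        y[k] = H x[k] + v."""
        n, m = len(y), x0.shape[0]
        xf, Pf = np.zeros((n, m)), np.zeros((n, m, m))   # filtered
        xp, Pp = np.zeros((n, m)), np.zeros((n, m, m))   # one-step predicted
        x, P = x0, P0
        for k in range(n):
            xp[k], Pp[k] = F @ x, F @ P @ F.T + Q         # predict
            S = H @ Pp[k] @ H.T + R
            K = Pp[k] @ H.T @ np.linalg.inv(S)            # Kalman gain
            x = xp[k] + K @ (y[k] - H @ xp[k])            # measurement update
            P = (np.eye(m) - K @ H) @ Pp[k]
            xf[k], Pf[k] = x, P
        xs, Ps = xf.copy(), Pf.copy()
        for k in range(n - 2, -1, -1):                    # backward pass
            C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
            xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
            Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
        return xs, Ps

    # Toy 1-D constant-velocity example with noisy position measurements.
    F = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
    Q = 1e-4 * np.eye(2); R = np.array([[0.25]])
    rng = np.random.default_rng(0)
    truth = np.array([[0.1 * k, 1.0] for k in range(50)])
    y = truth[:, :1] + rng.normal(0, 0.5, (50, 1))
    xs, _ = rts_smoother(y, F, H, Q, R, np.zeros(2), np.eye(2))
    print(np.round(xs[-1], 2))
    ```

    The backward pass is what distinguishes the smoother from the filter: each state estimate is revised using all measurements, not just those up to its own time, which is why the procedure suits post-flight sensor compatibility correction.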

  6. Nonadditive entropy maximization is inconsistent with Bayesian updating

    NASA Astrophysics Data System (ADS)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  7. Nonadditive entropy maximization is inconsistent with Bayesian updating.

    PubMed

    Pressé, Steve

    2014-11-01

    The maximum entropy method-used to infer probabilistic models from data-is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  8. Compatibility Flight Profile and Internal Environment Characterization for the RASCAL Pod. Project: Senior RASCAL

    DTIC Science & Technology

    2008-12-01

    [Extraction fragment: the pod was tested at increasing angles of attack; an overall vertical acceleration maximum of 7.5 g RMS occurred during a transonic wind-up turn at 15,000 ft. Flight conditions included takeoffs, landings, level accelerations, and specific maneuver blocks of varying sideslip, load factor, and angle of attack (AOA); Tables A1 and A2 give the data bands, tolerances, and conditions for the vibroacoustic tests.]

  9. Pneumatic Control Device for the Pershing 2 Adaption Kit

    DTIC Science & Technology

    1979-03-14

    [Extraction fragment, partly illegible and duplicated: a gas generator compatibility test report for the Pershing II, Phase I (Raymond Engineering Inc., contract 2960635). The stem must maintain a forward force to preserve a pressure seal, versus a maximum reverse force component (apparently 16 to 25 pounds) due to pressure.]

  10. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    DOE PAGES

    Gelmini, Graciela B.; Georgescu, Andreea; Gondolo, Paolo; ...

    2015-11-24

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio fn/fp = -0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with fn/fp = -0.8.
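    The change of variable from recoil energy to minimum speed is the standard kinematic relation; a sketch with hypothetical target and WIMP masses (delta = 0 recovers the elastic case used for the CDMS-II-Si comparison):

    ```python
    import math

    def v_min_km_s(recoil_keV, m_nucleus_GeV, m_wimp_GeV, delta_keV=0.0):
        """Minimum WIMP speed (km/s) able to impart nuclear recoil energy E_R:
        v_min = (m_N*E_R/mu + delta) / sqrt(2*m_N*E_R), with mu the
        WIMP-nucleus reduced mass; delta = 0 is elastic scattering,
        delta != 0 the inelastic (endothermic/exothermic) case."""
        mu = m_nucleus_GeV * m_wimp_GeV / (m_nucleus_GeV + m_wimp_GeV)
        E = recoil_keV * 1e-6                                  # GeV
        v_over_c = ((m_nucleus_GeV * E / mu + delta_keV * 1e-6)
                    / math.sqrt(2.0 * m_nucleus_GeV * E))
        return v_over_c * 2.998e5

    # Hypothetical: a 3 keV recoil on silicon (m_N ~ 26.1 GeV), 9 GeV WIMP.
    print(round(v_min_km_s(3.0, 26.1, 9.0), 0), "km/s")
    ```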

  11. Anomalous maximum and minimum for the dissociation of a geminate pair in energetically disordered media

    NASA Astrophysics Data System (ADS)

    Govatski, J. A.; da Luz, M. G. E.; Koehler, M.

    2015-01-01

    We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy that reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.

  12. A Compatible Stem Taper-Volume-Weight System For Intensively Managed Fast Growing Loblolly Pine

    Treesearch

    Yugia Zhang; Bruce E. Borders; Robert L Bailey

    2002-01-01

    A geometry-oriented methodology yielded a compatible taper-volume-weight system of models whose parameters were estimated using data from intensively managed loblolly pine (Pinus taeda L.) plantations in the lower coastal plain of Georgia. Data analysis showed that fertilization significantly reduced taper (inside and outside bark) on the upper...

  13. Compatible above-ground biomass equations and carbon stock estimation for small diameter Turkish pine (Pinus brutia Ten.).

    PubMed

    Sakici, Oytun Emre; Kucuk, Omer; Ashraf, Muhammad Irfan

    2018-04-15

    Small trees and saplings are important for forest management, carbon stock estimation, ecological modeling, and fire management planning. Turkish pine (Pinus brutia Ten.) is a common coniferous species and comprises 25.1% of total forest area of Turkey. Turkish pine is also important due to its flammable fuel characteristics. In this study, compatible above-ground biomass equations were developed to predict needle, branch, stem wood, and above-ground total biomass, and carbon stock assessment was also described for Turkish pine which is smaller than 8 cm diameter at breast height or shorter than breast height. Compatible biomass equations are useful for biomass prediction of small diameter individuals of Turkish pine. These equations will also be helpful in determining fire behavior characteristics and calculating their carbon stock. Overall, present study will be useful for developing ecological models, forest management plans, silvicultural plans, and fire management plans.
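    Additivity of component equations can be enforced in several ways. The sketch below uses a simple proportional-allocation scheme with hypothetical allometric coefficients; it is not the joint-estimation procedure of the paper, but it shows what "compatible" means operationally (components summing exactly to the total):

    ```python
    import numpy as np

    def compatible_biomass(D, params_total, params_components):
        """Predict each component from its own allometry b = a*D^c, then
        rescale the components so they sum exactly to the total predicted by
        the total-biomass equation. An illustrative allocation scheme, not
        the error-in-variables joint fit used in the literature."""
        a_t, c_t = params_total
        total = a_t * D ** c_t
        raw = np.array([a * D ** c for a, c in params_components])
        return total, raw * (total / raw.sum())

    # Hypothetical coefficients for needle, branch, stem wood (D in cm, kg).
    total, comps = compatible_biomass(6.0, (0.12, 2.3),
                                      [(0.02, 1.9), (0.03, 2.1), (0.06, 2.4)])
    print(round(total, 2), np.round(comps, 2), round(comps.sum() - total, 10))
    ```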

  14. Constrained map-based inventory estimation

    Treesearch

    Paul C. Van Deusen; Francis A. Roesch

    2007-01-01

    A region can conceptually be tessellated into polygons at different scales or resolutions. Likewise, samples can be taken from the region to determine the value of a polygon variable for each scale. Sampled polygons can be used to estimate values for other polygons at the same scale. However, estimates should be compatible across the different scales. Estimates are...

  15. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  16. A model to assess the Mars Telecommunications Network relay robustness

    NASA Technical Reports Server (NTRS)

    Girerd, Andre R.; Meshkat, Leila; Edwards, Charles D., Jr.; Lee, Charles H.

    2005-01-01

    The relatively long mission durations and compatible radio protocols of current and projected Mars orbiters have enabled the gradual development of a heterogeneous constellation providing proximity communication services for surface assets. The current and forecasted capability of this evolving network has reached the point that designers of future surface missions consider complete dependence on it. Such designers, along with those architecting network requirements, have a need to understand the robustness of projected communication service. A model has been created to identify the robustness of the Mars Network as a function of surface location and time. Due to the decade-plus time horizon considered, the network will evolve, with emerging productive nodes and nodes that cease or fail to contribute. The model is a flexible framework to holistically process node information into measures of capability robustness that can be visualized for maximum understanding. Outputs from JPL's Telecom Orbit Analysis Simulation Tool (TOAST) provide global telecom performance parameters for current and projected orbiters. Probabilistic estimates of orbiter fuel life are derived from orbit keeping burn rates, forecasted maneuver tasking, and anomaly resolution budgets. Orbiter reliability is estimated probabilistically. A flexible scheduling framework accommodates the projected mission queue as well as potential alterations.

  17. Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gondolo, Paolo; Scopel, Stefano, E-mail: paolo.gondolo@utah.edu, E-mail: scopel@sogang.ac.kr

    2017-09-01

    We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame, from which the unmodulated signal descends. Then we describe a mathematically-sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.

  18. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
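    A generic sandwich-style covariance correction conveys the flavor of accounting for colored residuals, though it is not the paper's exact expression; the regression design and correlated noise below are synthetic:

    ```python
    import numpy as np

    def corrected_param_covariance(X, residuals):
        """Sandwich covariance for least-squares/ML parameter estimates when
        residuals are colored: Cov = (X'X)^-1 X' R X (X'X)^-1, with R built
        from the empirical residual autocovariance. With white residuals this
        reduces to the usual sigma^2 (X'X)^-1."""
        n, p = X.shape
        r = residuals - residuals.mean()
        acov = np.array([r[: n - k] @ r[k:] / n for k in range(n)])
        R = np.array([[acov[abs(i - j)] for j in range(n)] for i in range(n)])
        A = np.linalg.inv(X.T @ X)
        return A @ X.T @ R @ X @ A

    # Synthetic line fit with moving-average (colored) noise.
    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(200), np.linspace(0, 1, 200)])
    e = np.convolve(rng.normal(0, 1, 220), np.ones(20) / 20, mode="valid")[:200]
    y = X @ np.array([1.0, 2.0]) + e
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(np.sqrt(np.diag(corrected_param_covariance(X, y - X @ beta))))
    ```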

  19. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from plantations in Heilongjiang Province, P.R. China. The sample trees were measured and the biomass and carbon stocks of the tree components (i.e., stem, branch, foliage and root) were calculated. Compatible biomass and carbon stock models were developed with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of tree components. The first method predicted carbon stocks directly with the compatible carbon stock models (Method 1). The other three methods predicted the carbon stocks indirectly in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors (i.e., the generic carbon conversion factor 0.5 (Method 2), the average carbon concentration of the sample trees (Method 3), and the average carbon concentration of each tree component (Method 4)). The prediction errors of the carbon stock estimates were compared and tested for differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, so that Method 1 was the best method for predicting the carbon stocks of tree components and total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for stem and total. Methods 3 and 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees is sufficient to obtain accurate carbon stock estimates if carbon stock models are not available. PMID:26659257
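    The three indirect methods differ only in the carbon conversion factor applied to the predicted biomass. A minimal sketch with hypothetical numbers (the direct Method 1 requires its own fitted carbon equation and is omitted here):

    ```python
    def carbon_stock(biomass_kg, method, mean_tree_cc=0.47, component_cc=None):
        """Indirect carbon-stock estimate for one component: biomass times a
        carbon conversion factor. Method 2 uses the generic 0.5, Method 3 the
        average carbon concentration of the sampled trees, Method 4 a
        component-specific concentration."""
        factor = {2: 0.5, 3: mean_tree_cc, 4: component_cc}[method]
        return biomass_kg * factor

    stem_biomass = 85.0  # kg, hypothetical stem biomass prediction
    for m, cc in [(2, None), (3, None), (4, 0.49)]:
        print(m, round(carbon_stock(stem_biomass, m, component_cc=cc), 1), "kg C")
    ```

    The spread between the three outputs mirrors the paper's finding that the generic factor 0.5 produces the largest error while measured concentrations track the direct model closely.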

  20. A new method for evaluating impacts of data assimilation with respect to tropical cyclone intensity forecast problem

    NASA Astrophysics Data System (ADS)

    Vukicevic, T.; Uhlhorn, E.; Reasor, P.; Klotz, B.

    2012-12-01

    A significant potential for improving numerical model forecast skill of tropical cyclone (TC) intensity by assimilation of airborne inner-core observations in high-resolution models has been demonstrated in recent studies. Although encouraging, the results so far have not provided clear guidance on the critical information added by the inner-core data assimilation with respect to the intensity forecast skill. Better understanding of the relationship between the intensity forecast and the value added by the assimilation is required to further the progress, including the assimilation of satellite observations. One of the major difficulties in evaluating such a relationship is the forecast verification metric of TC intensity: the maximum one-minute sustained wind speed at 10 m above the surface. The difficulty results from two issues: (1) the metric refers to a practically unobservable quantity, since it is an extreme value in a highly turbulent and spatially extensive wind field, and (2) model- and observation-based estimates of this measure are not compatible in terms of spatial and temporal scales, even in high-resolution models. Although the need for predicting the extreme value of near-surface wind is well justified, and the observation-based estimates used in practice are well thought of, a revised intensity metric is proposed for the purpose of numerical forecast evaluation; the metric should enable a robust, observation- and model-resolvable, phenomenologically based evaluation of the impacts. It is shown that the maximum intensity can be represented by a decomposition of the wind field into deterministic and stochastic components. Using the vortex-centric cylindrical reference frame, the deterministic component is defined as the sum of the amplitudes of azimuthal wavenumbers 0 and 1 at the radius of maximum wind, whereas the stochastic component is represented by a non-Gaussian PDF. This decomposition is exact and fully independent of individual TC properties. The decomposition of the maximum wind intensity was first evaluated using several sources of data, including Stepped Frequency Microwave Radiometer surface wind speeds from NOAA and Air Force reconnaissance flights, NOAA P-3 Tail Doppler Radar measurements, and best-track maximum intensity estimates, as well as simulations from Hurricane WRF Ensemble Data Assimilation System (HEDAS) experiments for 83 real-data cases. The results confirmed the validity of the method: the stochastic component of the maximum exhibited a non-Gaussian PDF with small mean amplitude and a variance comparable to the known best-track error estimates. The results of the decomposition were then used to evaluate the impact of the improved initial conditions on the forecast. It was shown that the errors in the deterministic component of the intensity had the dominant effect on the forecast skill for the studied cases. This result suggests that the data assimilation of inner-core observations could focus primarily on improving the analysis of the wavenumber 0 and 1 initial structure and on the mechanisms responsible for forcing the evolution of this low-wavenumber structure. For the latter analysis, the assimilation of airborne and satellite remote sensing observations could play a significant role.
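    The deterministic component, the sum of the azimuthal wavenumber-0 and wavenumber-1 amplitudes at the radius of maximum wind, is natural to compute with an FFT over azimuth; the wind ring below is synthetic:

    ```python
    import numpy as np

    def deterministic_max_wind(wind_on_ring):
        """Deterministic part of the maximum wind from winds sampled at
        equally spaced azimuths on the circle at the radius of maximum wind:
        the azimuthal mean (wavenumber 0) plus the amplitude of wavenumber 1.
        The residual plays the role of the stochastic component."""
        v = np.asarray(wind_on_ring, float)
        spec = np.fft.rfft(v) / v.size
        wn0 = spec[0].real             # azimuthal mean
        wn1 = 2.0 * np.abs(spec[1])    # one-sided amplitude of wavenumber 1
        return wn0 + wn1

    # Synthetic ring: 40 m/s mean, 6 m/s wavenumber-1 asymmetry, plus noise.
    theta = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    v = 40 + 6 * np.cos(theta - 0.8) + np.random.default_rng(1).normal(0, 2, 72)
    print(round(deterministic_max_wind(v), 1), "m/s  (true: 46.0)")
    ```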

  1. [Invasive fungal infections in children with cancer, neutropenia and fever, in Chile].

    PubMed

    Lucero, Yalda; Brücher, Roberto; Alvarez, Ana María; Becker, Ana; Cofré, José; Enríquez, Nancy; Payá, Ernesto; Salgado, Carmen; Santolaya, María Elena; Tordecilla, Juan; Varas, Mónica; Villarroel, Milena; Viviani, Tamara; Zubieta, Marcela; O'Ryan, Miguel

    2002-10-01

    Invasive fungal infections (IFI) cause prolonged hospitalizations and increase the possibility of death among patients with cancer and febrile neutropenia (FN). Up to 10% of febrile neutropenic episodes may be caused by IFI. To estimate the incidence of IFI among a large group of Chilean children with cancer and FN, clinical and laboratory information was collected from a database provided by the "Programa Infantil Nacional de Drogas Antineoplásicas" (PINDA) that included 445 FN episodes occurring in five hospitals in Santiago, Chile. This information was used to identify children who presented with signs and symptoms compatible with an IFI. According to predefined criteria based on a literature review, IFI episodes were categorized as "proven", "probable" or "possible". A total of 41/445 episodes (9.2%) were compatible with an IFI, of which 4 (0.9%) were proven, 23 (5.2%) probable, and 14 (3.1%) possible. Hospitalization was longer (27 vs 8 days, p < .01), new infectious foci appeared with higher frequency (71 vs 38%, p < .01), and mortality was higher (10 vs 1.6%, p < .001) in children with IFI-compatible episodes compared to children who did not have an IFI. The estimated incidence of IFI in Chilean children with cancer and FN ranged between 6% and 9%, depending on the stringency of the classification criteria. This estimate is similar to that reported by other studies. The low detection yield of clinically compatible IFI underscores the need for improved diagnosis of fungal infections in this population.

  2. Advanced subsonic long-haul transport terminal area compatibility study. Volume 2: Research and technology recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The Terminal Area Compatibility (TAC) study is briefly summarized for background information. The most important research items for the areas of noise, congestion, and emissions are identified. Other key research areas are also discussed. The 50 recommended research items are categorized by flight phase, technology, and compatibility benefits. The relationship of the TAC recommendations to the previous ATT recommendations is discussed. The bulk of the document contains the 50 recommended research items. For each item, the potential payoff, state of readiness, recommended action, and estimated cost and schedule are given.

  3. Adhoc electromagnetic compatibility testing of non-implantable medical devices and radio frequency identification

    PubMed Central

    2013-01-01

    Background The use of radiofrequency identification (RFID) in healthcare is increasing and concerns for electromagnetic compatibility (EMC) pose one of the biggest obstacles for widespread adoption. Numerous studies have documented that RFID can interfere with medical devices. The majority of past studies have concentrated on implantable medical devices such as implantable pacemakers and implantable cardioverter defibrillators (ICDs). This study examined EMC between RFID systems and non-implantable medical devices. Methods Medical devices were exposed to 19 different RFID readers and one RFID active tag. The RFID systems used covered 5 different frequency bands: 125–134 kHz (low frequency (LF)); 13.56 MHz (high frequency (HF)); 433 MHz; 915 MHz (ultra high frequency (UHF)) and 2.4 GHz. We tested three syringe pumps, three infusion pumps, four automatic external defibrillators (AEDs), and one ventilator. The testing procedure is modified from American National Standards Institute (ANSI) C63.18, Recommended Practice for an On-Site, Ad Hoc Test Method for Estimating Radiated Electromagnetic Immunity of Medical Devices to Specific Radio-Frequency Transmitters. Results For syringe pumps, we observed electromagnetic interference (EMI) during 13 of 60 experiments (22%) at a maximum distance of 59 cm. For infusion pumps, we observed EMI during 10 of 60 experiments (17%) at a maximum distance of 136 cm. For AEDs, we observed EMI during 18 of 75 experiments (24%) at a maximum distance of 51 cm. The majority of the EMI observed was classified as probably clinically significant or left the device inoperable. No EMI was observed for all medical devices tested during exposure to 433 MHz (two readers, one active tag) or 2.4 GHz RFID (two readers). Conclusion Testing confirms that RFID has the ability to interfere with critical medical equipment. Hospital staff should be aware of the potential for medical device EMI caused by RFID systems and should be encouraged to perform on-site RF immunity tests prior to RFID system deployment or prior to placing new medical devices in an RFID environment. The methods presented in this paper are time-consuming and burdensome and suggest the need for standard test methods for assessing the immunity of medical devices to RFID systems. PMID:23845013

  4. Adhoc electromagnetic compatibility testing of non-implantable medical devices and radio frequency identification.

    PubMed

    Seidman, Seth J; Guag, Joshua W

    2013-07-11

    The use of radiofrequency identification (RFID) in healthcare is increasing and concerns for electromagnetic compatibility (EMC) pose one of the biggest obstacles for widespread adoption. Numerous studies have documented that RFID can interfere with medical devices. The majority of past studies have concentrated on implantable medical devices such as implantable pacemakers and implantable cardioverter defibrillators (ICDs). This study examined EMC between RFID systems and non-implantable medical devices. Medical devices were exposed to 19 different RFID readers and one RFID active tag. The RFID systems used covered 5 different frequency bands: 125-134 kHz (low frequency (LF)); 13.56 MHz (high frequency (HF)); 433 MHz; 915 MHz (ultra high frequency (UHF)) and 2.4 GHz. We tested three syringe pumps, three infusion pumps, four automatic external defibrillators (AEDs), and one ventilator. The testing procedure is modified from American National Standards Institute (ANSI) C63.18, Recommended Practice for an On-Site, Ad Hoc Test Method for Estimating Radiated Electromagnetic Immunity of Medical Devices to Specific Radio-Frequency Transmitters. For syringe pumps, we observed electromagnetic interference (EMI) during 13 of 60 experiments (22%) at a maximum distance of 59 cm. For infusion pumps, we observed EMI during 10 of 60 experiments (17%) at a maximum distance of 136 cm. For AEDs, we observed EMI during 18 of 75 experiments (24%) at a maximum distance of 51 cm. The majority of the EMI observed was classified as probably clinically significant or left the device inoperable. No EMI was observed for all medical devices tested during exposure to 433 MHz (two readers, one active tag) or 2.4 GHz RFID (two readers). Testing confirms that RFID has the ability to interfere with critical medical equipment. Hospital staff should be aware of the potential for medical device EMI caused by RFID systems and should be encouraged to perform on-site RF immunity tests prior to RFID system deployment or prior to placing new medical devices in an RFID environment. The methods presented in this paper are time-consuming and burdensome and suggest the need for standard test methods for assessing the immunity of medical devices to RFID systems.

  5. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, which makes maximum likelihood estimation asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance among standard statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
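
    As a concrete illustration of the estimation technique described above, the sketch below fits a two-component Gaussian mixture to synthetic one-dimensional data by maximum likelihood via the EM algorithm. The data, starting values, and iteration count are all illustrative; the paper's bivariate price/exchange-rate analysis would need a multivariate extension.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(1.5, 0.8, 200)])

    # EM for a two-component Gaussian mixture: each iteration raises the likelihood.
    w, mu, sd = np.array([0.5, 0.5]), np.array([-0.5, 0.5]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form maximum likelihood updates given the responsibilities
        n_k = resp.sum(axis=0)
        w, mu = n_k / len(x), (resp * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

    print("weights:", w, "means:", mu, "std devs:", sd)
    ```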

  6. MRI dynamic range and its compatibility with signal transmission media

    PubMed Central

    Gabr, Refaat E.; Schär, Michael; Edelstein, Arthur D.; Kraitchman, Dara L.; Bottomley, Paul A.; Edelstein, William A.

    2010-01-01

    As the number of MRI phased array coil elements grows, interactions among cables connecting them to the system receiver become increasingly problematic. Fiber optic or wireless links would reduce electromagnetic interference, but their dynamic range (DR) is generally less than that of coaxial cables. Raw MRI signals, however, have a large DR because of the high signal amplitude near the center of k-space. Here, we study DR in MRI in order to determine the compatibility of MRI multicoil imaging with non-coaxial cable signal transmission. Since raw signal data are routinely discarded, we have developed an improved method for estimating the DR of MRI signals from conventional magnitude images. Our results indicate that the DR of typical surface coil signals at 3 T for human subjects is less than 88 dB, even for three-dimensional acquisition protocols. Cardiac and spine coil arrays had a maximum DR of less than 75 dB and head coil arrays less than 88 dB. The DR derived from magnitude images is in good agreement with that measured from raw data. The results suggest that current analog fiber optic links, with a spurious-free DR of 60–70 dB at 500 kHz bandwidth, are not by themselves adequate for transmitting MRI data from volume or array coils with DR ~90 dB. However, combining analog links with signal compression might make non-coaxial cable signal transmission viable. PMID:19251444
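
    A toy sketch of the central idea: the raw-signal dynamic range is set by the ratio of the k-space peak (which concentrates at the center of k-space) to the noise floor, and a synthetic k-space can be regenerated from a magnitude image by Fourier transform. The function name, phantom image, and noise level are illustrative; the authors' actual estimation method includes corrections not shown here.

    ```python
    import numpy as np

    def dynamic_range_db(magnitude_image, noise_std):
        """DR (dB) between the synthetic k-space peak and the noise level."""
        kspace = np.fft.fftshift(np.fft.fft2(magnitude_image))
        peak = np.abs(kspace).max()          # dominated by the center of k-space
        return 20.0 * np.log10(peak / noise_std)

    # Toy phantom: a smooth Gaussian 'object' plus measurement noise.
    y, x = np.mgrid[-64:64, -64:64]
    img = np.exp(-(x**2 + y**2) / (2 * 20.0**2))
    noise_std = 1e-3
    img += np.random.default_rng(1).normal(0, noise_std, img.shape)
    print(f"estimated DR ~ {dynamic_range_db(img, noise_std):.1f} dB")
    ```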

  7. MRI dynamic range and its compatibility with signal transmission media.

    PubMed

    Gabr, Refaat E; Schär, Michael; Edelstein, Arthur D; Kraitchman, Dara L; Bottomley, Paul A; Edelstein, William A

    2009-06-01

    As the number of MRI phased array coil elements grows, interactions among cables connecting them to the system receiver become increasingly problematic. Fiber optic or wireless links would reduce electromagnetic interference, but their dynamic range (DR) is generally less than that of coaxial cables. Raw MRI signals, however, have a large DR because of the high signal amplitude near the center of k-space. Here, we study DR in MRI in order to determine the compatibility of MRI multicoil imaging with non-coaxial cable signal transmission. Since raw signal data are routinely discarded, we have developed an improved method for estimating the DR of MRI signals from conventional magnitude images. Our results indicate that the DR of typical surface coil signals at 3T for human subjects is less than 88 dB, even for three-dimensional acquisition protocols. Cardiac and spine coil arrays had a maximum DR of less than 75 dB and head coil arrays less than 88 dB. The DR derived from magnitude images is in good agreement with that measured from raw data. The results suggest that current analog fiber optic links, with a spurious-free DR of 60-70 dB at 500 kHz bandwidth, are not by themselves adequate for transmitting MRI data from volume or array coils with DR approximately 90 dB. However, combining analog links with signal compression might make non-coaxial cable signal transmission viable.

  8. Depth of Ultra High Energy Cosmic Ray Induced Air Shower Maxima Measured by the Telescope Array Black Rock and Long Ridge FADC Fluorescence Detectors and Surface Array in Hybrid Mode

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abe, M.; Abu-Zayyad, T.; Allen, M.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Cheon, B. G.; Chiba, J.; Chikawa, M.; di Matteo, A.; Fujii, T.; Fujita, K.; Fukushima, M.; Furlich, G.; Goto, T.; Hanlon, W.; Hayashi, M.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Honda, K.; Ikeda, D.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Jeong, H. M.; Jeong, S. M.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kawata, K.; Kido, E.; Kim, H. B.; Kim, J. H.; Kim, J. H.; Kishigami, S.; Kitamura, S.; Kitamura, Y.; Kuzmin, V.; Kuznetsov, M.; Kwon, Y. J.; Lee, K. H.; Lubsandorzhiev, B.; Lundquist, J. P.; Machida, K.; Martens, K.; Matsuyama, T.; Matthews, J. N.; Mayta, R.; Minamino, M.; Mukai, K.; Myers, I.; Nagasawa, K.; Nagataki, S.; Nakamura, R.; Nakamura, T.; Nonaka, T.; Oda, H.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Okuda, T.; Omura, Y.; Ono, M.; Onogi, R.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sahara, R.; Saito, K.; Saito, Y.; Sakaki, N.; Sakurai, N.; Scott, L. M.; Seki, T.; Sekino, K.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Shin, H. S.; Smith, J. D.; Sokolsky, P.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzawa, T.; Takagi, Y.; Takahashi, Y.; Takamura, M.; Takeda, M.; Takeishi, R.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Uchihori, Y.; Udo, S.; Urban, F.; Wong, T.; Yamamoto, M.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yashiro, K.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zhezher, Y.; Zundel, Z.; Telescope Array Collaboration

    2018-05-01

    The Telescope Array (TA) observatory utilizes fluorescence detectors and surface detectors (SDs) to observe air showers produced by ultra high energy cosmic rays in Earth's atmosphere. Cosmic-ray events observed in this way are termed hybrid data. The depth of air shower maximum is related to the mass of the primary particle that generates the shower. This paper reports on shower maxima data collected over 8.5 yr using the Black Rock Mesa and Long Ridge fluorescence detectors in conjunction with the array of SDs. We compare the means and standard deviations of the observed X_max distributions with Monte Carlo X_max distributions of unmixed protons, helium, nitrogen, and iron, all generated using the QGSJet II-04 hadronic model. We also perform an unbinned maximum likelihood test of the observed data, which is subjected to variable systematic shifting of the data X_max distributions to allow us to test the full distributions, and compare them to the Monte Carlo to see which elements are not compatible with the observed data. For all energy bins, QGSJet II-04 protons are found to be compatible with TA hybrid data at the 95% confidence level after some systematic X_max shifting of the data. Three other QGSJet II-04 elements are found to be compatible using the same test procedure in an energy range limited to the highest energies where data statistics are sparse.

  9. Enhancing Life Satisfaction by Government Accountability in China

    ERIC Educational Resources Information Center

    Cheung, Chau-kiu; Leung, Kwan-kwok

    2007-01-01

    Finding the rationale for democracy requires not merely a conceptual task but also an empirical study. One rationale is that democracy maximizes people's happiness by satisfying everyone. A further qualification of this is that democracy minimizes the maximum regret of the disadvantaged. This is compatible with the protection theory of government,…

  10. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
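
    A minimal sketch of the two kinds of Weibull fits described above, using scipy rather than PC-CARES: maximum likelihood with the location fixed at zero, and least squares on the linearized Weibull plot. The synthetic strength data and the median-rank plotting positions (Benard's approximation) are assumptions, not taken from the manual.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    strengths = stats.weibull_min.rvs(c=8.0, scale=350.0, size=50, random_state=rng)

    # Maximum likelihood: shape (Weibull modulus) and scale, location pinned to 0.
    m_ml, _, s_ml = stats.weibull_min.fit(strengths, floc=0)

    # Least squares on ln(-ln(1-F)) = m*ln(x) - m*ln(scale), median-rank F values.
    x = np.sort(strengths)
    F = (np.arange(1, len(x) + 1) - 0.3) / (len(x) + 0.4)   # Benard's approximation
    m_ls, b = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    print(f"ML: m = {m_ml:.2f}, scale = {s_ml:.1f}; "
          f"LS: m = {m_ls:.2f}, scale = {np.exp(-b / m_ls):.1f}")
    ```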

  11. The envelope of the power spectra of over a thousand δ Scuti stars. The T̅eff - νmax scaling relation

    NASA Astrophysics Data System (ADS)

    Barceló Forteza, S.; Roca Cortés, T.; García, R. A.

    2018-06-01

    CoRoT and Kepler high-precision photometric data allowed the detection and characterization of the oscillation parameters in stars other than the Sun. Moreover, thanks to the scaling relations, it is possible to estimate masses and radii for thousands of solar-type oscillating stars. Recently, a Δν - ρ relation has been found for δ Scuti stars. Now, analysing several hundred stars of this kind observed with CoRoT and Kepler, we present an empirical relation between the frequency at maximum power of their oscillation spectra and their effective temperature. Such a relation can be explained with the help of the κ-mechanism, and the observed dispersion of the residuals is compatible with their being caused by the gravity-darkening effect. Table A.1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A46

  12. Performance of high-recovery recycling reverse osmosis with wash water

    NASA Technical Reports Server (NTRS)

    Herrmann, Cal C.

    1993-01-01

    Inclusion of a recycling loop for partially-desalted water from second-stage reverse-osmosis permeate has been shown useful for achieving high recovery at moderate applied pressures. This approach has now been applied to simulated wash waters to obtain data on retention by the membranes of solutes in a mixture comparable to anticipated spacecraft hygiene wastewaters, and to generate an estimate of the maximum concentration that can be expected without causing membrane fouling. A first experiment set provides selectivity information from a single membrane and an Igepon detergent, as a function of final concentration. A reject concentration of 3.1% Total Organic Carbon has been reached, at a pressure of 1.4 megapascals, without membrane fouling. Further experiments have generated selectivity values for the recycle configuration from two washwater simulations, as a function of applied pump pressure. Reverse osmosis removal has also been tested for washwater containing detergent formulated for plant growth compatibility (containing nitrogen, phosphorus, and potassium functional groups).

  13. Permanganate gel (PG) for groundwater remediation: compatibility, gelation, and release characteristics.

    PubMed

    Lee, Eung Seok; Olson, Pamela R; Gupta, Neha; Solpuker, Utku; Schwartz, Franklin W; Kim, Yongje

    2014-02-01

    Permanganate (MnO4(-)) is a strong oxidant that is widely used for treating chlorinated ethylenes in groundwater. This study aims to develop a hyper-saline MnO4(-) solution (MnO4(-) gel; PG) that can be injected into aquifers via wells, slowly gelates over time, and slowly releases MnO4(-) to flowing water. In this study, the compatibility and miscibility of gels such as chitosan, aluminosilicate, silicate, and colloidal silica gels with MnO4(-) were tested. Of these gels, chitosan was reactive with MnO4(-). Aluminosilicates were compatible but not readily miscible with MnO4(-). Silicates and colloidal silica were both compatible and miscible with MnO4(-), and gelated with the addition of KMnO4 granules. Colloidal silica has a low initial viscosity (<15 cP) and exhibited delayed gelation characteristics, with lag times ranging from 0 to 200 min. Release of MnO4(-) from the colloidal silica-based PG gel occurred in a delayed fashion, with a maximum duration of 24 h. These results suggest that colloidal silica can be used to create PG, or delayed-gelling forms containing other oxidants, for groundwater remediation.

  14. Chemically induced graft copolymerization of 2-hydroxyethyl methacrylate onto polyurethane surface for improving blood compatibility

    NASA Astrophysics Data System (ADS)

    He, Chunli; Wang, Miao; Cai, Xianmei; Huang, Xiaobo; Li, Li; Zhu, Haomiao; Shen, Jian; Yuan, Jiang

    2011-11-01

    To improve the hydrophilicity and blood compatibility of polyurethane (PU) film, we chemically induced graft copolymerization of 2-hydroxyethyl methacrylate (HEMA) onto the surface of polyurethane film using benzoyl peroxide as an initiator. The effects of grafting temperature, grafting time, and monomer and initiator concentrations on the grafting yields were studied. The maximum grafting yield obtained was 0.0275 g/cm2 for HEMA. Characterization of the films was carried out by attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR) and water contact angle measurements. ATR-FTIR data showed that HEMA was successfully grafted onto the PU film surface. Water contact angle measurements demonstrated that the grafted films possessed a relatively hydrophilic surface. The blood compatibility of the grafted films was preliminarily evaluated by a platelet-rich plasma adhesion test and a hemolysis test. The platelet adhesion experiment showed that polyurethane graft-polymerized with the 2-hydroxyethyl methacrylate monomer had good blood compatibility, featuring low platelet adhesion. The hemolysis rate of the PU-g-PHEMA films was dramatically lower than that of the ungrafted PU films. This kind of new biomaterial grafted with HEMA monomers might have potential usage in biomedical applications.

  15. Development and Implementation of Environmentally Compatible Solid Film Lubricants

    NASA Technical Reports Server (NTRS)

    Novak, Howard L.; Hall, Phillip B.

    1999-01-01

    Multi-body launch vehicles require the use of Solid Film Lubricants (SFLs) to allow for unrestricted relative motion between structural assemblies and components during lift-off and ascent into orbit. The Space Shuttle Solid Rocket Booster (SRB) uses a dual-coat, ceramic-bonded high temperature SFL in several locations, such as the restraint hardware between the SRB aft skirt and the Mobile Launch Platform (MLP), the aft SRB/External Tank (ET) attach struts, and the forward skirt SRB/ET attach ball assembly. Future launch systems may require similar applications of SFLs for attachment and restraint hardware. A family of environmentally compatible, non-lead/antimony-bearing alternative SFLs has been developed, including a compatible repair material. In addition, commercial applications for SFLs on transportation equipment, all types of lubricated fasteners, and energy-related equipment allow for wide usage of these new lubricants. The new SFLs, trade named BOOSTERLUBE, are a family of single-layer thin film (0.001 inch maximum) coatings that are a unique mixture of non-hazardous pigments in a compatible resin system that allows for low temperature curing (450 F). Significant savings in energy and processing time, as well as elimination of hazardous material usage and disposal, result from the non-toxic one-step SFL application. Compatible air-dry field repair lubricants will help eliminate disassembly of launch vehicle restraint hardware during critical, time-sensitive assembly operations.

  16. Towards improving searches for optimal phylogenies.

    PubMed

    Ford, Eric; St John, Katherine; Wheeler, Ward C

    2015-01-01

    Finding the optimal evolutionary history for a set of taxa is a challenging computational problem, even when restricting possible solutions to be "tree-like" and focusing on the maximum-parsimony optimality criterion. This has led to much work on using heuristic tree searches to find approximate solutions. We present an approach for finding exact optimal solutions that employs and complements the current heuristic methods for finding optimal trees. Given a set of taxa and a set of aligned sequences of characters, there may be subsets of characters that are compatible, and for each such subset there is an associated (possibly partially resolved) phylogeny with edges corresponding to each character state change. These perfect phylogenies serve as anchor trees for our constrained search space. We show that, for sequences with compatible sites, the parsimony score of any tree T is at least the parsimony score of the anchor trees plus the number of inferred changes between T and the anchor trees. As the maximum-parsimony optimality score is additive, the sum of the lower bounds on compatible character partitions provides a lower bound on the complete alignment of characters. This yields a region in the space of trees within which the best tree is guaranteed to be found; limiting the search for the optimal tree to this region can significantly reduce the number of trees that must be examined in a search of the space of trees. We analyze this method empirically using four different biological data sets as well as surveying 400 data sets from the TreeBASE repository, demonstrating the effectiveness of our technique in reducing the number of steps in exact heuristic searches for trees under the maximum-parsimony optimality criterion.
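
    One building block of the approach, sketched under simplifying assumptions: for binary characters, pairwise compatibility can be checked with the classic four-gamete test, and compatible subsets of characters define the perfect-phylogeny anchor trees used in the lower bound. The toy alignment is hypothetical.

    ```python
    from itertools import combinations

    def compatible(col_a, col_b):
        """Two binary characters are compatible iff fewer than all four state
        pairs (0,0), (0,1), (1,0), (1,1) occur among the taxa (four-gamete test)."""
        return len(set(zip(col_a, col_b))) < 4

    # Rows = taxa, columns = binary characters (hypothetical alignment).
    alignment = [
        (0, 0, 1),
        (0, 1, 1),
        (1, 0, 0),
        (1, 1, 0),
    ]
    cols = list(zip(*alignment))
    for i, j in combinations(range(len(cols)), 2):
        print(f"characters {i},{j}: compatible = {compatible(cols[i], cols[j])}")
    ```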

  17. A Maximum Radius for Habitable Planets.

    PubMed

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  18. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software-development estimation tool. It yields a significant reduction in the risk of cost overruns and failed projects. It accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.

  19. Computerized estimation of compatibility of stressors at work and worker's health characteristics.

    PubMed

    Susnik, J; Bizjak, B; Cestnik, B

    1996-09-01

    A system for computerized estimation of the compatibility of stressors at work with a worker's health characteristics is presented. Each characteristic is defined and scored on a specific scale. Workplace characteristics that are incompatible with the worker's characteristics are singled out and offered to the user for an ergonomic solution. Work on the system started in 1987. This paper deals with the system's further development, which involves a larger number of topics, changes to the algorithm, and the presentation of an application case. Comparison of the system's results with those of medical experts shows that use of the system tends to improve the thoroughness and consistency of incompatibility evaluations and consequently to make working ability assessment more objective.
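
    A minimal sketch of the kind of rule such a system applies, with invented scales and scores: each workplace stressor and the worker's corresponding tolerance are scored on a shared scale, and any stressor whose demand exceeds the tolerance is flagged for an ergonomic solution.

    ```python
    # Invented scales: higher numbers mean a heavier demand / greater tolerance.
    workplace = {"noise": 4, "lifting": 5, "shift_work": 2}
    worker_tolerance = {"noise": 5, "lifting": 3, "shift_work": 2}

    incompatible = {k: (workplace[k], worker_tolerance[k])
                    for k in workplace if workplace[k] > worker_tolerance[k]}
    print("flag for ergonomic intervention:", incompatible)   # {'lifting': (5, 3)}
    ```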

  20. A Business Case Analysis (BCA) of the One Box - One Wire (OB1) Joint Combined Technology Demonstration (JCTD)

    DTIC Science & Technology

    2009-03-01

    Framework (IATF) as their guidance for information assurance. The IATF defines what high... OB1 is compatible with and intends to use legacy... standardized levels of security. The OB1 team, using the IATF as guidance, defines high robustness as proving to the maximum extent possible, that

  1. Seakeeping considerations in the employment of V/STOL on Naval ships

    NASA Technical Reports Server (NTRS)

    Olson, S. R.

    1977-01-01

    Compatibility of Naval ships as V/STOL support platforms and the ship motions that V/STOL aircraft must endure are discussed. A methodology which evaluates the impact of motion criteria such as the maximum ship motion allowable during V/STOL landing/launch is presented. Emphasis is given to design alternatives that reduce ship motion.

  2. Stimulus-Response Compatibility in Spatial Precuing and Symbolic Identification: Effects of Coding Practice, Retention and Transfer

    DTIC Science & Technology

    1989-05-31

    for the effects. Most of the psychophysiological research has examined event-related potentials (ERPs). Bashore (Chapter 7), Ragot (Chapter 8), and... flanking noise letters signaled a response that was incongruent with the response indicated by a target letter. Bashore (Chapter 7) describes a... forms. First, compatibility effects have been used as estimates of interhemispheric transmission time (Bashore, Chapter 7). The logic, articulated

  3. Messages Do Diffuse Faster than Messengers: Reconciling Disparate Estimates of the Morphogen Bicoid Diffusion Coefficient

    PubMed Central

    Sigaut, Lorena; Pearson, John E.; Colman-Lerner, Alejandro; Ponce Dawson, Silvina

    2014-01-01

    The gradient of Bicoid (Bcd) is key for the establishment of the anterior-posterior axis in Drosophila embryos. The gradient properties are compatible with the SDD model in which Bcd is synthesized at the anterior pole and then diffuses into the embryo and is degraded with a characteristic time. Within this model, the Bcd diffusion coefficient is critical to set the timescale of gradient formation. This coefficient has been measured using two optical techniques, Fluorescence Recovery After Photobleaching (FRAP) and Fluorescence Correlation Spectroscopy (FCS), obtaining estimates in which the FCS value is an order of magnitude larger than the FRAP one. This discrepancy raises the following questions: which estimate is "correct"; what is the reason for the disparity; and can the SDD model explain Bcd gradient formation within the experimentally observed times? In this paper, we use a simple biophysical model in which Bcd diffuses and interacts with binding sites to show that both the FRAP and the FCS estimates may be correct and compatible with the observed timescale of gradient formation. The discrepancy arises from the fact that FCS and FRAP report on different effective (concentration dependent) diffusion coefficients, one of which describes the spreading rate of the individual Bcd molecules (the messengers) and the other one that of their concentration (the message). The latter is the one that is more relevant for the gradient establishment and is compatible with its formation within the experimentally observed times. PMID:24901638
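
    A worked toy calculation of why two probes can disagree, using a standard fast-binding reduction rather than the authors' full model: if Bcd binds immobile sites with an equilibrium bound-to-free ratio K, the long-time single-molecule diffusion coefficient is reduced to D_free/(1 + K), so a probe sensitive to free-molecule motion and one sensitive to bound-averaged motion can differ by an order of magnitude. Both numbers below are illustrative.

    ```python
    # Fast-binding approximation (an assumption, not the paper's full model):
    D_free = 7.0   # um^2/s, a free-diffusion-scale value (illustrative)
    K = 9.0        # equilibrium bound-to-free ratio (illustrative)

    D_bound_avg = D_free / (1.0 + K)   # long-time single-molecule diffusion
    print(f"free-motion scale: {D_free} um^2/s; bound-averaged: {D_bound_avg} um^2/s")
    ```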

  4. Messages do diffuse faster than messengers: reconciling disparate estimates of the morphogen bicoid diffusion coefficient.

    PubMed

    Sigaut, Lorena; Pearson, John E; Colman-Lerner, Alejandro; Ponce Dawson, Silvina

    2014-06-01

    The gradient of Bicoid (Bcd) is key for the establishment of the anterior-posterior axis in Drosophila embryos. The gradient properties are compatible with the SDD model in which Bcd is synthesized at the anterior pole and then diffuses into the embryo and is degraded with a characteristic time. Within this model, the Bcd diffusion coefficient is critical to set the timescale of gradient formation. This coefficient has been measured using two optical techniques, Fluorescence Recovery After Photobleaching (FRAP) and Fluorescence Correlation Spectroscopy (FCS), obtaining estimates in which the FCS value is an order of magnitude larger than the FRAP one. This discrepancy raises the following questions: which estimate is "correct"; what is the reason for the disparity; and can the SDD model explain Bcd gradient formation within the experimentally observed times? In this paper, we use a simple biophysical model in which Bcd diffuses and interacts with binding sites to show that both the FRAP and the FCS estimates may be correct and compatible with the observed timescale of gradient formation. The discrepancy arises from the fact that FCS and FRAP report on different effective (concentration dependent) diffusion coefficients, one of which describes the spreading rate of the individual Bcd molecules (the messengers) and the other one that of their concentration (the message). The latter is the one that is more relevant for the gradient establishment and is compatible with its formation within the experimentally observed times.

  5. Compatible Spatial Discretizations for Partial Differential Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Douglas, N, ed.

    From May 11-15, 2004, the Institute for Mathematics and its Applications held a hot topics workshop on Compatible Spatial Discretizations for Partial Differential Equations. The numerical solution of partial differential equations (PDE) is a fundamental task in science and engineering. The goal of the workshop was to bring together a spectrum of scientists at the forefront of the research in the numerical solution of PDEs to discuss compatible spatial discretizations. We define compatible spatial discretizations as those that inherit or mimic fundamental properties of the PDE such as topology, conservation, symmetries, and positivity structures and maximum principles. A wide variety of discretization methods applied across a wide range of scientific and engineering applications have been designed to or found to inherit or mimic intrinsic spatial structure and reproduce fundamental properties of the solution of the continuous PDE model at the finite dimensional level. A profusion of such methods and concepts relevant to understanding them have been developed and explored: mixed finite element methods, mimetic finite differences, support operator methods, control volume methods, discrete differential forms, Whitney forms, conservative differencing, discrete Hodge operators, discrete Helmholtz decomposition, finite integration techniques, staggered grid and dual grid methods, etc. The workshop sought to foster communication among the diverse groups of researchers designing, applying, and studying such methods, as well as researchers involved in the practical solution of large scale problems that may benefit from advancements in such discretizations; to help elucidate the relations between the different methods and concepts; and to generally advance our understanding in the area of compatible spatial discretization methods for PDE. Particular points of emphasis included: + Identification of intrinsic properties of PDE models that are critical for the fidelity of numerical simulations. + Identification and design of compatible spatial discretizations of PDEs, their classification, analysis, and relations. + Relationships between different compatible spatial discretization methods and concepts which have been developed. + Impact of compatible spatial discretizations upon physical fidelity, verification and validation of simulations, especially in large-scale, multiphysics settings. + How solvers address the demands placed upon them by compatible spatial discretizations. This report provides information about the program and abstracts of all the presentations.

  6. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
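
    A generic illustration of MLE non-existence, using logistic regression rather than the presence-only point-process model of the paper: when a covariate completely separates presences from background points, the likelihood keeps improving as the coefficient grows, so no finite maximizer exists.

    ```python
    import numpy as np

    x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
    y = np.array([0, 0, 0, 1, 1, 1])            # perfectly separated at x = 0

    def neg_loglik(beta0, beta1):
        eta = beta0 + beta1 * x
        # numerically stable softplus: log(1 + exp(eta))
        softplus = np.maximum(eta, 0) + np.log1p(np.exp(-np.abs(eta)))
        return softplus.sum() - (y * eta).sum()

    for slope in (1.0, 10.0, 100.0):
        print(f"slope {slope:6.1f}: neg. log-likelihood = {neg_loglik(0.0, slope):.6f}")
    # The objective keeps falling toward 0 as the slope grows: no finite MLE.
    ```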

  7. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    NASA Astrophysics Data System (ADS)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structural concept that consists of three major techniques: shallow grooved isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with an 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  8. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  9. Sweep Width Estimation for Ground Search and Rescue

    DTIC Science & Technology

    2004-12-30

    Develop data compatible with search planning and POD estimation methods that are designed to use sweep width data. An experimental... important for Park Rangers and man-trackers. Search experience was expected to be a significant correction factor. However, the results indicate...

  10. Absolute plate motions relative to deep mantle plumes

    NASA Astrophysics Data System (ADS)

    Wang, Shimin; Yu, Hongzheng; Zhang, Qiong; Zhao, Yonghong

    2018-05-01

    Advances in whole waveform seismic tomography have revealed the presence of broad mantle plumes rooted at the base of the Earth's mantle beneath major hotspots. Hotspot tracks associated with these deep mantle plumes provide ideal constraints for inverting absolute plate motions as well as testing the fixed hotspot hypothesis. In this paper, 27 observed hotspot trends associated with 24 deep mantle plumes are used together with the MORVEL model for relative plate motions to determine an absolute plate motion model, in terms of a maximum likelihood optimization for angular data fitting, combined with an outlier data detection procedure based on statistical tests. The obtained T25M model fits 25 observed trends of globally distributed hotspot tracks to the statistically required level, while the other two hotspot trend data (Comores on Somalia and Iceland on Eurasia) are identified as outliers, which are significantly incompatible with other data. For most hotspots with rate data available, T25M predicts plate velocities significantly lower than the observed rates of hotspot volcanic migration, which cannot be fully explained by biased errors in observed rate data. Instead, the apparent hotspot motions derived by subtracting the observed hotspot migration velocities from the T25M plate velocities exhibit a combined pattern of being opposite to plate velocities and moving towards mid-ocean ridges. The newly estimated net rotation of the lithosphere is statistically compatible with three recent estimates, but differs significantly from 30 of 33 prior estimates.

  11. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-10-18

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p ≥ 10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p < 10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  12. Recycling of plastic wastes with poly (ethylene-co-methacrylic acid) copolymer as compatibilizer and their conversion into high-end product.

    PubMed

    Rajasekaran, Divya; Maji, Pradip K

    2018-04-01

    This paper deals with the utilization of plastic wastes in a useful product. The major plastic pollutants considered to be in greatest use, i.e. PET bottles and PE bags, were selected for recycling. As these two plastics are not compatible, poly (ethylene-co-methacrylic acid) copolymer was used as a compatibilizer to process the two waste streams. The effect of the dose of poly (ethylene-co-methacrylic acid) copolymer as compatibilizer was studied. It is shown that only 3 wt% of poly (ethylene-co-methacrylic acid) copolymer is sufficient to compatibilize a 3:1 mass ratio blend of PET bottles and polyethylene bags. Compatibility was examined through mechanical testing and thermal and morphological analysis. The recyclates showed improved mechanical and thermal properties: the tensile properties improved by almost 500% upon addition of 3 wt% of poly (ethylene-co-methacrylic acid) copolymer to the 3:1 mass ratio blend of PET bottles and PE bags, compared with the pristine blend. Morphological analysis by FESEM and AFM also confirmed the compatibility of the blend. The experimental data showed better performance than available recycling processes.

  13. Comparison of online reporting systems and their compatibility check with respective adverse drug reaction reporting forms.

    PubMed

    Maharshi, Vikas; Nagar, Pravesh

    2017-01-01

    Different forms and online tools are available in different countries for spontaneous reporting, one of the most widely used methods of pharmacovigilance. Capturing sufficient information and adequate compatibility of online systems with the respective reporting forms are highly desirable for appropriate reporting of adverse drug reactions (ADRs). This study aimed to compare three major online reporting systems (US, UK, and WHO) and to check their compatibility with the respective ADR reporting forms. A total of 89 data elements providing relevant information were identified across the three online reporting systems. All three online systems were compared on the magnitude of information captured, with a score of "1" given to each element. Compatibility of the ADR reporting forms of India (Red form), the US (Form 3500), and the UK (Yellow Card form) was assessed by comparing the information gathered by them with the information that can be entered into their respective online reporting systems, namely "VigiFlow," "US online reporting," and "Yellow Card online reporting." Each unmatched item was given a score of "-1". VigiFlow scored "74" points, whereas the online reporting systems of the US and UK scored "56" and "49," respectively, for magnitude of information gathered. The compatibility score was found to be "0," "-9," and "-26" for the ADR reporting systems of the US, UK, and India, respectively. Our study reveals that "VigiFlow" captures the largest amount of information, but "Form 3500" and the "US online reporting system" are the most compatible with each other among the ADR reporting systems of the three countries.

  14. Magnetic resonance imaging-compatible circular mapping catheter: an in vivo feasibility and safety study.

    PubMed

    Elbes, Delphine; Magat, Julie; Govari, Assaf; Ephrath, Yaron; Vieillot, Delphine; Beeckler, Christopher; Weerasooriya, Rukshen; Jais, Pierre; Quesson, Bruno

    2017-03-01

    Interventional cardiac catheter mapping is routinely guided by X-ray fluoroscopy, although radiation exposure remains a significant concern. Feasibility of catheter ablation for common flutter has recently been demonstrated under magnetic resonance imaging (MRI) guidance. The benefit of catheter ablation under MRI could be significant for complex arrhythmias such as atrial fibrillation (AF), but MRI-compatible multi-electrode catheters such as Lasso have not yet been developed. This study aimed at demonstrating the feasibility and safety of using a multi-electrode catheter [magnetic resonance (MR)-compatible Lasso] during MRI for cardiac mapping. We also aimed at measuring the level of interference between MR and electrophysiological (EP) systems. Experiments were performed in vivo in sheep (N = 5) using a multi-electrode, circular, steerable, MR-compatible diagnostic catheter. The most common MRI sequences (1.5 T) relevant for cardiac examination were run with the catheter positioned in the right atrium. High-quality electrograms were recorded while imaging, with a maximum signal-to-noise ratio (peak-to-peak signal amplitude/peak-to-peak noise amplitude) ranging from 5.8 to 165. Importantly, MRI image quality was unchanged. Artefacts induced by MRI sequences during mapping were demonstrated to be compatible with clinical use. Phantom data demonstrated that this 10-pole circular catheter can be used safely, with a maximum temperature increase of 4°C. This new MR-compatible 10-pole catheter appears to be safe and effective. Combining MR and multipolar EP in a single session offers the possibility to correlate substrate information (scar, fibrosis) and EP mapping as well as online monitoring of lesion formation and electrical endpoint.

  15. 47 CFR 68.317 - Hearing aid compatibility volume control: technical standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... headset of the telephone, 12 dB of gain minimum and up to 18 dB of gain maximum, when measured in terms of... Instruments With Loop Signaling) . The 12 dB of gain minimum must be achieved without significant clipping of... change in ROLR as a function of the volume control setting that are relevant to the specification of...

  16. 47 CFR 68.317 - Hearing aid compatibility volume control: technical standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... headset of the telephone, 12 dB of gain minimum and up to 18 dB of gain maximum, when measured in terms of... Instruments With Loop Signaling) . The 12 dB of gain minimum must be achieved without significant clipping of... change in ROLR as a function of the volume control setting that are relevant to the specification of...

  17. 47 CFR 68.317 - Hearing aid compatibility volume control: technical standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... headset of the telephone, 12 dB of gain minimum and up to 18 dB of gain maximum, when measured in terms of... Instruments With Loop Signaling) . The 12 dB of gain minimum must be achieved without significant clipping of... change in ROLR as a function of the volume control setting that are relevant to the specification of...

  18. 47 CFR 68.317 - Hearing aid compatibility volume control: technical standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... headset of the telephone, 12 dB of gain minimum and up to 18 dB of gain maximum, when measured in terms of... Instruments With Loop Signaling) . The 12 dB of gain minimum must be achieved without significant clipping of... change in ROLR as a function of the volume control setting that are relevant to the specification of...

  19. 47 CFR 68.317 - Hearing aid compatibility volume control: technical standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... headset of the telephone, 12 dB of gain minimum and up to 18 dB of gain maximum, when measured in terms of... Instruments With Loop Signaling) . The 12 dB of gain minimum must be achieved without significant clipping of... change in ROLR as a function of the volume control setting that are relevant to the specification of...

  20. Representation of bidirectional ground motions for design spectra in building codes

    USGS Publications Warehouse

    Stewart, Jonathan P.; Abrahamson, Norman A.; Atkinson, Gail M.; Baker, Jack W.; Boore, David M.; Bozorgnia, Yousef; Campbell, Kenneth W.; Comartin, Craig D.; Idriss, I.M.; Lew, Marshall; Mehrain, Michael; Moehle, Jack P.; Naeim, Farzad; Sabol, Thomas A.

    2011-01-01

    The 2009 NEHRP Provisions modified the definition of horizontal ground motion from the geometric mean of spectral accelerations for two components to the peak response of a single lumped mass oscillator regardless of direction. These maximum-direction (MD) ground motions operate under the assumption that the dynamic properties of the structure (e.g., stiffness, strength) are identical in all directions. This assumption may be true for some in-plan symmetric structures; however, the response of most structures is dominated by modes of vibration along specific axes (e.g., longitudinal and transverse axes in a building), and often the dynamic properties (especially stiffness) along those axes are distinct. In order to achieve structural designs consistent with the collapse risk level given in the NEHRP documents, we argue that design spectra should be compatible with expected levels of ground motion along those principal response axes. The use of MD ground motions effectively assumes that the azimuth of maximum ground motion coincides with the directions of principal structural response. Because this is unlikely, design ground motions have a lower probability of occurrence than intended, with significant societal costs. We recommend adjustments to make design ground motions compatible with target risk levels.
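
    The sketch below illustrates the geometric-mean versus maximum-direction (MD) distinction for peak amplitude on synthetic two-component records; a full RotD100-style spectral calculation would rotate oscillator responses period by period rather than the raw traces.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    ax = rng.normal(0.0, 1.0, 2000)          # synthetic horizontal component 1
    ay = 0.6 * rng.normal(0.0, 1.0, 2000)    # weaker orthogonal component 2

    geo_mean = np.sqrt(np.abs(ax).max() * np.abs(ay).max())

    # Maximum-direction peak: rotate the pair through 180 degrees of azimuth.
    angles = np.radians(np.arange(180))
    md = max(np.abs(ax * np.cos(t) + ay * np.sin(t)).max() for t in angles)

    print(f"geometric-mean peak ~ {geo_mean:.2f}, maximum-direction peak ~ {md:.2f}")
    ```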

  1. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  2. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
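
    A minimal numerical sketch of a maximum entropy density estimate on a grid, assuming constraints on the first two sample moments (under which the maxent solution takes the form exp(λ1 x + λ2 x²)/Z): the convex dual is minimized over the Lagrange multipliers. The grid, data, and starting point are illustrative, and the Bayesian field theory refinement discussed above is not implemented here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    data = np.random.default_rng(4).normal(1.0, 2.0, 500)
    grid = np.linspace(-12.0, 14.0, 2001)
    dx = grid[1] - grid[0]
    feats = np.vstack([grid, grid**2])                # constraint functions f_i(x)
    targets = np.array([data.mean(), (data**2).mean()])

    def dual(lam):
        """Convex dual of the maxent problem: log Z(lam) - lam . targets."""
        logp = lam @ feats
        m = logp.max()                                # stabilize the exponential
        logZ = m + np.log(np.exp(logp - m).sum() * dx)
        return logZ - lam @ targets

    lam = minimize(dual, x0=np.array([0.0, -0.1])).x
    p = np.exp(lam @ feats - (lam @ feats).max())
    p /= p.sum() * dx                                 # normalized density on the grid
    print("matched moments:", (grid * p).sum() * dx, (grid**2 * p).sum() * dx)
    ```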

  3. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  4. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).

  5. Intracranial EEG potentials estimated from MEG sources: A new approach to correlate MEG and iEEG data in epilepsy.

    PubMed

    Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane

    2016-05-01

    Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm2. We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating the iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate the iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. Estimating iEEG potentials from MEG sources is a promising way to evaluate which MEG sources can be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations.
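
    A schematic sketch of the core comparison step, with random stand-ins for every quantity: a precomputed iEEG lead-field matrix G (contacts × sources) is applied to the MEG source amplitudes to predict intracranial potentials, which are then correlated with the measured iEEG.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_contacts, n_sources = 32, 8000
    G = rng.normal(0.0, 1.0, (n_contacts, n_sources))  # iEEG lead-field (stand-in)
    s_meg = rng.normal(0.0, 1.0, n_sources)            # MEG source estimate (stand-in)

    v_est = G @ s_meg                                  # MEG-estimated iEEG potentials
    v_meas = v_est + rng.normal(0.0, 0.5 * v_est.std(), n_contacts)  # mock recording

    r = np.corrcoef(v_est, v_meas)[0, 1]
    print(f"correlation between estimated and measured iEEG: r = {r:.2f}")
    ```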

  6. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
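
    A hedged sketch of the idea behind a maximum likelihood proportion estimator, not the LARS implementation itself: given known class-conditional likelihoods for each observation, the iteration below converges to the maximum likelihood mixing proportions. The two-class Gaussian 'signatures' are synthetic stand-ins for LANDSAT pixel data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    z = rng.choice(2, size=5000, p=[0.7, 0.3])              # hidden class labels
    x = rng.normal(np.where(z == 0, 0.0, 2.0), 1.0)         # synthetic "signatures"

    means = np.array([0.0, 2.0])                            # known class densities
    L = np.exp(-0.5 * (x[:, None] - means) ** 2) / np.sqrt(2 * np.pi)

    pi = np.array([0.5, 0.5])
    for _ in range(100):
        post = pi * L                                       # unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)
        pi = post.mean(axis=0)                              # ML proportion update
    print("estimated proportions:", pi)                     # close to [0.7, 0.3]
    ```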

  7. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...

  8. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  9. A variable-temperature nanostencil compatible with a low-temperature scanning tunneling microscope/atomic force microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steurer, Wolfram, E-mail: wst@zurich.ibm.com; Gross, Leo; Schlittler, Reto R.

    2014-02-15

    We describe a nanostencil lithography tool capable of operating at variable temperatures down to 30 K. The setup is compatible with a combined low-temperature scanning tunneling microscope/atomic force microscope located within the same ultra-high-vacuum apparatus. The lateral movement capability of the mask allows the patterning of complex structures. To demonstrate operational functionality of the tool and estimate temperature drift and blurring, we fabricated LiF and NaCl nanostructures on Cu(111) at 77 K.

  10. A variable-temperature nanostencil compatible with a low-temperature scanning tunneling microscope/atomic force microscope.

    PubMed

    Steurer, Wolfram; Gross, Leo; Schlittler, Reto R; Meyer, Gerhard

    2014-02-01

    We describe a nanostencil lithography tool capable of operating at variable temperatures down to 30 K. The setup is compatible with a combined low-temperature scanning tunneling microscope/atomic force microscope located within the same ultra-high-vacuum apparatus. The lateral movement capability of the mask allows the patterning of complex structures. To demonstrate operational functionality of the tool and estimate temperature drift and blurring, we fabricated LiF and NaCl nanostructures on Cu(111) at 77 K.

  11. Continuous inventories and the components of change

    Treesearch

    Francis A. Roesch

    2004-01-01

    The consequences of conducting a continuous inventory that utilizes measurements on overlapping temporal intervals of varying length on compatible estimation systems for the components of growth are explored. The time interpenetrating sample design of the USDA Forest Service Forest Inventory and Analysis Program is used as an example. I show why estimation of the...

  12. Estimation of maximum transdermal flux of nonionized xenobiotics from basic physicochemical determinants

    PubMed Central

    Milewski, Mikolaj; Stinchcomb, Audra L.

    2012-01-01

    An ability to estimate the maximum flux of a xenobiotic across skin is desirable both from the perspective of drug delivery and toxicology. While there is an abundance of mathematical models describing the estimation of drug permeability coefficients, there are relatively few that focus on the maximum flux. This article reports and evaluates a simple and easy-to-use predictive model for the estimation of maximum transdermal flux of xenobiotics based on three common molecular descriptors: logarithm of octanol-water partition coefficient, molecular weight and melting point. The use of all three can be justified on the theoretical basis of their influence on the solute aqueous solubility and the partitioning into the stratum corneum lipid domain. The model explains 81% of the variability in the permeation dataset comprised of 208 entries and can be used to obtain a quick estimate of maximum transdermal flux when experimental data is not readily available. PMID:22702370
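
    The sketch below shows the general form such a three-descriptor model can take; the coefficients are placeholders, not the fitted values from the paper, and the melting-point term is linearized for simplicity.

    ```python
    def log_jmax(log_p, mw, mp_celsius, coef=(-1.0, 0.5, -0.01, -0.005)):
        """Hypothetical log10 maximum flux from logP, MW, and melting point.

        The coefficients are illustrative placeholders, not fitted values."""
        c0, c1, c2, c3 = coef
        return c0 + c1 * log_p + c2 * mw + c3 * max(mp_celsius - 25.0, 0.0)

    # Example: a moderately lipophilic, high-melting solute.
    print(f"log Jmax ~ {log_jmax(log_p=2.5, mw=250.0, mp_celsius=120.0):.2f}")
    ```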

  13. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
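
    The cost-minimization step described here can be sketched in a generic output-error form: pick the parameter vector that minimizes the weighted sum of squared residuals between measured and model responses. In the sketch below, `simulate` is a hypothetical stand-in for integrating the aircraft equations of motion, and the diagonal noise weighting is an assumption, not the report's formulation.

    ```python
    # Toy output-error sketch: choose parameters minimizing the weighted
    # squared residuals between measured and simulated responses.
    import numpy as np
    from scipy.optimize import minimize

    def ml_cost(theta, t, u, y_meas, simulate, r_inv):
        y_model = simulate(theta, t, u)              # predicted response
        resid = y_meas - y_model
        return float(np.sum(resid * r_inv * resid))  # diagonal noise weighting

    def estimate(theta0, t, u, y_meas, simulate, r_inv):
        res = minimize(ml_cost, theta0, args=(t, u, y_meas, simulate, r_inv),
                       method="Nelder-Mead")
        return res.x
    ```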

  14. On the integration of ultrananocrystalline diamond (UNCD) with CMOS chip

    DOE PAGES

    Mi, Hongyi; Yuan, Hao -Chih; Seo, Jung -Hun; ...

    2017-03-27

A low temperature deposition of high quality ultrananocrystalline diamond (UNCD) film onto a finished Si-based CMOS chip was performed to investigate the compatibility of the UNCD deposition process with CMOS devices for monolithic integration of MEMS on a Si CMOS platform. DC and radio-frequency performances of the individual PMOS and NMOS devices on the CMOS chip before and after the UNCD deposition were characterized. Electrical characteristics of CMOS after deposition of the UNCD film remained within acceptable ranges, showing only small variations in threshold voltage V_th, transconductance g_m, cut-off frequency f_T, and maximum oscillation frequency f_max. Finally, the results suggest that low temperature UNCD deposition is compatible with CMOS to realize monolithically integrated CMOS-driven MEMS/NEMS based on UNCD.

  15. On the integration of ultrananocrystalline diamond (UNCD) with CMOS chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mi, Hongyi; Yuan, Hao -Chih; Seo, Jung -Hun

A low temperature deposition of high quality ultrananocrystalline diamond (UNCD) film onto a finished Si-based CMOS chip was performed to investigate the compatibility of the UNCD deposition process with CMOS devices for monolithic integration of MEMS on a Si CMOS platform. DC and radio-frequency performances of the individual PMOS and NMOS devices on the CMOS chip before and after the UNCD deposition were characterized. Electrical characteristics of CMOS after deposition of the UNCD film remained within acceptable ranges, showing only small variations in threshold voltage V_th, transconductance g_m, cut-off frequency f_T, and maximum oscillation frequency f_max. Finally, the results suggest that low temperature UNCD deposition is compatible with CMOS to realize monolithically integrated CMOS-driven MEMS/NEMS based on UNCD.

  16. Effective population size and the genetic consequences of commercial whaling on the humpback whales (Megaptera novaeangliae) from Southwestern Atlantic Ocean

    PubMed Central

    Cypriano-Souza, Ana Lúcia; da Silva, Tiago Ferraz; Engel, Márcia H.; Bonatto, Sandro L.

    2018-01-01

Genotypes of 10 microsatellite loci from 420 humpback whales of the Southwestern Atlantic Ocean population were used to estimate, for the first time, its contemporary effective (Ne) and census (Nc) population sizes and to test the genetic effect of commercial whaling. The results are in agreement with our previous studies that found high genetic diversity for this breeding population. Using an approximate Bayesian computation approach, the scenario of constant Ne was significantly supported over scenarios with moderate to strong size changes during the commercial whaling period. The previous-generation Nc (Ne multiplied by 3.6), which should correspond to the years between around 1980 and 1990, was estimated at between ~2,600 and 6,800 whales (point estimate ~4,000), and is broadly compatible with the recent abundance surveys extrapolated to the past using a growth rate of 7.4% per annum. The long-term Nc in the constant scenario (point estimate ~15,000) was broadly compatible (considering the confidence interval) with pre-whaling catch-record estimates (point estimate ~25,000). Overall, our results show that the Southwestern Atlantic Ocean humpback whale population is genetically very diverse and withstood the strong population reduction during commercial whaling. PMID:29668011

  17. Effective population size and the genetic consequences of commercial whaling on the humpback whales (Megaptera novaeangliae) from Southwestern Atlantic Ocean.

    PubMed

    Cypriano-Souza, Ana Lúcia; da Silva, Tiago Ferraz; Engel, Márcia H; Bonatto, Sandro L

    2018-01-01

Genotypes of 10 microsatellite loci from 420 humpback whales of the Southwestern Atlantic Ocean population were used to estimate, for the first time, its contemporary effective (Ne) and census (Nc) population sizes and to test the genetic effect of commercial whaling. The results are in agreement with our previous studies that found high genetic diversity for this breeding population. Using an approximate Bayesian computation approach, the scenario of constant Ne was significantly supported over scenarios with moderate to strong size changes during the commercial whaling period. The previous-generation Nc (Ne multiplied by 3.6), which should correspond to the years between around 1980 and 1990, was estimated at between ~2,600 and 6,800 whales (point estimate ~4,000), and is broadly compatible with the recent abundance surveys extrapolated to the past using a growth rate of 7.4% per annum. The long-term Nc in the constant scenario (point estimate ~15,000) was broadly compatible (considering the confidence interval) with pre-whaling catch-record estimates (point estimate ~25,000). Overall, our results show that the Southwestern Atlantic Ocean humpback whale population is genetically very diverse and withstood the strong population reduction during commercial whaling.

  18. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.

  19. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

20. An evaluation of percentile and maximum likelihood estimators of Weibull parameters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
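
    For orientation, a minimal sketch of the maximum likelihood side of such a comparison, using SciPy's generic three-parameter Weibull fit on simulated data rather than the FITTER routine or the Zanakis percentile estimators named above:

    ```python
    # Three-parameter Weibull fit by maximum likelihood on simulated data.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(0)
    data = weibull_min.rvs(c=2.5, loc=1.0, scale=4.0, size=500, random_state=rng)

    # MLE of shape (c), location, and scale; compare to the true (2.5, 1.0, 4.0)
    c_mle, loc_mle, scale_mle = weibull_min.fit(data)
    print(c_mle, loc_mle, scale_mle)
    ```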

  1. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  2. Weak lensing magnification in the Dark Energy Survey Science Verification Data

    DOE PAGES

    Garcia-Fernandez, M.; et al.

    2018-02-02

In this paper the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using data from the Dark Energy Survey Science Verification dataset. The analysis is carried out for two photometrically-selected galaxy samples, with mean photometric redshifts in the $0.2 < z < 0.4$ and $0.7 < z < 1.0$ ranges, in the riz bands. A signal is detected with a $3.5\sigma$ significance level in each of the bands tested, and is compatible with the magnification predicted by the $\Lambda$CDM model. After an extensive analysis, it cannot be attributed to any known systematic effect. The detection of the magnification signal is robust to estimated uncertainties in the outlier rate of the photometric redshifts, but this will be an important issue for the use of photometric redshifts in magnification measurements from larger samples. In addition to the detection of the magnification signal, a method to select the sample with the maximum signal-to-noise is proposed and validated with data.

  3. Weak lensing magnification in the Dark Energy Survey Science Verification Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Fernandez, M.; et al.

In this paper the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using data from the Dark Energy Survey Science Verification dataset. The analysis is carried out for two photometrically-selected galaxy samples, with mean photometric redshifts in the $0.2 < z < 0.4$ and $0.7 < z < 1.0$ ranges, in the riz bands. A signal is detected with a $3.5\sigma$ significance level in each of the bands tested, and is compatible with the magnification predicted by the $\Lambda$CDM model. After an extensive analysis, it cannot be attributed to any known systematic effect. The detection of the magnification signal is robust to estimated uncertainties in the outlier rate of the photometric redshifts, but this will be an important issue for the use of photometric redshifts in magnification measurements from larger samples. In addition to the detection of the magnification signal, a method to select the sample with the maximum signal-to-noise is proposed and validated with data.

  4. Weak lensing magnification in the Dark Energy Survey Science Verification Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Fernandez, M.; et al.

    2016-11-30

In this paper the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using data from the Dark Energy Survey Science Verification dataset. The analysis is carried out for two photometrically-selected galaxy samples, with mean photometric redshifts in the $0.2 < z < 0.4$ and $0.7 < z < 1.0$ ranges, in the riz bands. A signal is detected with a $3.5\sigma$ significance level in each of the bands tested, and is compatible with the magnification predicted by the $\Lambda$CDM model. After an extensive analysis, it cannot be attributed to any known systematic effect. The detection of the magnification signal is robust to estimated uncertainties in the outlier rate of the photometric redshifts, but this will be an important issue for the use of photometric redshifts in magnification measurements from larger samples. In addition to the detection of the magnification signal, a method to select the sample with the maximum signal-to-noise is proposed and validated with data.

  5. Evaporation of Accretion Disks around Black Holes: The Disk-Corona Transition and the Connection to the Advection-dominated Accretion Flow.

    PubMed

    Liu; Yuan; Meyer; Meyer-Hofmeister; Xie

    1999-12-10

We apply the disk-corona evaporation model (Meyer & Meyer-Hofmeister) originally derived for dwarf novae to black hole systems. This model describes the transition of a thin cool outer disk to a hot coronal flow. The mass accretion rate determines the location of this transition. For a number of well-studied black hole binaries, we take the mass flow rates derived from a fit of the advection-dominated accretion flow (ADAF) model to the observed spectra (for a review, see Narayan, Mahadevan, & Quataert) and determine where the transition of accretion via a cool disk to a coronal flow/ADAF would be located for these rates. We compare this with the observed location of the inner disk edge, as estimated from the maximum velocity of the Hα emission line. We find that the transition caused by evaporation agrees with this determination in stellar disks. We also show that the ADAF and the "thin outer disk + corona" are compatible in terms of the physics in the transition region.

  6. Estimation of sensing characteristics for refractory nitrides based gain assisted core-shell plasmonic nanoparticles

    NASA Astrophysics Data System (ADS)

    Shishodia, Manmohan Singh; Pathania, Pankaj

    2018-04-01

Refractory transition metal nitrides such as zirconium nitride (ZrN), hafnium nitride (HfN) and titanium nitride (TiN) have emerged as viable alternatives to coinage-metal-based plasmonic materials, e.g., gold (Au) and silver (Ag). The present work assesses the suitability of gain assisted ZrN-, HfN- and TiN-based conventional core-shell nanoparticles (CCSNPs) and multilayered core-shell nanoparticles (MCSNPs) for refractive index sensing. We report that the optical gain incorporation in the dielectric layer leads to multifold enhancement of the scattering efficiency (Qsca), substantial reduction of the spectral full width at half maximum, and a higher figure of merit (FOM). In comparison with CCSNPs, the MCSNP system exhibits superior sensing characteristics such as higher FOM, ~45% reduction in the critical optical gain, response shift towards the biological window, and higher degree of tunability. Inherent biocompatibility, growth compatibility, chemical stability and flexible spectral tuning of refractory nitrides, augmented by the superior sensing properties shown in the present work, may pave the way for refractory-nitride-based low-cost sensing.

  7. Genomic Model with Correlation Between Additive and Dominance Effects.

    PubMed

    Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres

    2018-05-09

Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has often been suggested that the magnitudes of functional additive and dominance effects at quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent allele as the reference allele orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominance deviations performed similarly for prediction. Copyright © 2018, Genetics.

  8. Assessment of macroseismic intensity in the Nile basin, Egypt

    NASA Astrophysics Data System (ADS)

    Fergany, Elsayed

    2018-01-01

This work assesses deterministic seismic hazard and risk in terms of the maximum expected intensity map of the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on an updated compatible earthquake catalog (2015), focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then derived from the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk were discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. Moderate expected magnitudes pose a high risk to the Cairo and Aswan regions. The results of this study can serve as recommendations for the planners in charge of mitigating seismic risk in these strategic zones of Egypt.

  9. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
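
    A hedged sketch of the underlying idea, in a simplified decision-aided form: it assumes the symbols are known (e.g., a pilot sequence), which the record does not state; the paper's algorithm operates before decoding, so this is an illustrative stand-in rather than the published procedure.

    ```python
    # Joint ML signal/noise estimates for matched-filter outputs
    # x[k] = a*d[k] + n[k] with known BPSK symbols d[k] in {-1, +1}.
    import numpy as np

    def ml_snr(x, d):
        a_hat = np.mean(x * d)                       # ML amplitude estimate
        sigma2_hat = np.mean((x - a_hat * d) ** 2)   # ML noise power
        return a_hat ** 2 / sigma2_hat               # SNR estimate

    def combiner_weight(x, d):
        # A maximal-ratio combining weight is proportional to
        # amplitude / noise power for each channel.
        a_hat = np.mean(x * d)
        sigma2_hat = np.mean((x - a_hat * d) ** 2)
        return a_hat / sigma2_hat
    ```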

10. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  11. A Growth and Yield Model for Thinned Stands of Yellow-Poplar

    Treesearch

    Bruce R. Knoebel; Harold E. Burkhart; Donald E. Beck

    1986-01-01

    Simultaneous growth and yield equations were developed for predicting basal area growth and cubic-foot volume growth and yield in thinned stands of yellow-poplar. A joint loss function involving both volume and basal area was used to estimate the coefficients in the system of equations. The estimates obtained were analytically compatible, invariant for projection...

  12. Global Sea Surface Temperature: A Harmonized Multi-sensor Time-series from Satellite Observations

    NASA Astrophysics Data System (ADS)

    Merchant, C. J.

    2017-12-01

This paper presents the methods used to obtain a new global sea surface temperature (SST) dataset spanning the early 1980s to the present, intended for use as a climate data record (CDR). The dataset provides skin SST (the fundamental measurement) and an estimate of the daily mean SST at depths compatible with drifting buoys (adjusting for skin and diurnal variability). The depth SST provided enables the CDR to be used with in situ records and centennial-scale SST reconstructions. The new SST time series is as independent as possible from in situ observations, and from 1995 onwards is harmonized to an independent satellite reference (namely, SSTs from the Advanced Along-Track Scanning Radiometer (AATSR)). This maximizes the utility of our new estimates of variability and long-term trends in interrogating previous datasets tied to in situ observations. The new SSTs include full resolution (swath, level 2) data, single-sensor gridded data (level 3, 0.05 degree latitude-longitude grid) and a multi-sensor optimal analysis (level 4, same grid). All product levels are consistent. All SSTs have validated uncertainty estimates attached. The sensors used include all Advanced Very High Resolution Radiometers from NOAA-6 onwards and the ATSR series. AVHRR brightness temperatures (BTs) are calculated from counts using a new in-flight re-calibration for each sensor, ultimately linked through to the AATSR BT calibration by a new harmonization technique. Artefacts in AVHRR BTs linked to varying instrument temperature, orbital regime and solar contamination are significantly reduced. These improvements in the AVHRR BTs (level 1) translate into improved cloud detection and SST (level 2). For cloud detection, we use a Bayesian approach for all sensors. For the ATSRs, SSTs are derived with sufficient accuracy and sensitivity using dual-view coefficients. This is not the case for single-view AVHRR observations, for which a physically based retrieval is employed, using a hybrid maximum a posteriori / maximum likelihood retrieval, which optimises retrieval uncertainty and SST sensitivity for climate applications. Validation results will be presented along with examples of the variability and trends in SST evident in the dataset.

  13. Seismic Vulnerability Assessment for Montreal -An Application of HAZUS-MH4

    NASA Astrophysics Data System (ADS)

    Yu, Keyan

    2011-12-01

Seismic loss estimation for Montreal, Canada is performed for a 2% in 50 years seismic hazard using the HAZUS-MH4 tool developed by the US Federal Emergency Management Agency (FEMA). The software is adapted to accept a Canadian setting for the Montreal study region, which includes 522 census tracts. The accuracy of loss estimations using HAZUS is dependent on the quality and quantity of data collection and preparation. The data collected for the Montreal study region comprise: (1) the building inventory, (2) hazard maps regarding soil amplification, liquefaction, and landslides, (3) population distribution at three different times of the day, (4) census demographic information, and (5) synthetic ground motion contour maps using three different ground motion prediction equations. All these data are prepared and assembled into geodatabases that are compatible with the HAZUS software. The study estimated that roughly 5% of the building stock would be damaged, with direct economic losses evaluated at 1.4 billion dollars, for the 2% in 50 years scenario. The maximum number of casualties associated with this scenario corresponds to a time of occurrence of 2 pm and would result in approximately 500 people being injured. Epistemic uncertainty was considered by obtaining damage estimates for three attenuation functions that were developed for Eastern North America. The results indicate that loss estimates are highly sensitive to the choice of the attenuation function and suggest that epistemic uncertainty should be considered both in the definition of the hazard function and in loss estimation methodologies. The next steps in the study should be to increase the size of the survey area to Greater Montreal, which includes more than 3 million inhabitants, and to perform more targeted studies for critical areas such as downtown Montreal and the south-eastern tip of Montreal. The current study was performed mainly for the built environment; the next phase will need to include more information relative to lifelines and their impact on risks.

  14. Hydrogen at the Lunar Terminator

    NASA Astrophysics Data System (ADS)

    Livengood, T. A.; Chin, G.; Sagdeev, R. Z.; Mitrofanov, I. G.; Boynton, W. V.; Evans, L. G.; Litvak, M. L.; McClanahan, T. P.; Sanin, A. B.; Starr, R. D.; Su, J. J.

    2015-10-01

Suppression of the Moon's naturally occurring epithermal neutron leakage flux near the equatorial dawn terminator is consistent with the presence of diurnally varying quantities of hydrogen in the regolith, with maximum concentration on the day side of the dawn terminator. This flux suppression has been observed using the Lunar Exploration Neutron Detector (LEND) on the polar-orbiting Lunar Reconnaissance Orbiter (LRO). The chemical form of hydrogen is not determined, but other remote sensing methods and elemental availability suggest water. The observed variability is interpreted as frost collecting in or on the cold nightside surface, thermally desorbing in sunlight during the lunar morning, and migrating away from the warm subsolar region across the nearby terminator to return to the lunar surface. The maximum concentration, averaged over the upper ~1 m of regolith to which neutron detection is sensitive, is estimated to be 0.0125±0.0022 weight-percent water-equivalent hydrogen (wt% WEH), yielding an accumulation of 190±30 ml recoverable water per square meter of regolith at each dawn. The source of hydrogen (water) must be in equilibrium with losses due to solar photolysis and escape. A chemical recycling process or self-shielding from solar UV must be assumed in order to bring the loss rate down to compatibility with possible sources, including solar wind or micrometeoroid delivery of hydrogen, which require near-complete retention of hydrogen, or outgassing of primordial volatiles, for which a plausible supply rate requires significantly less retention efficiency.

  15. A Methodology for the Derivation of Unloaded Abdominal Aortic Aneurysm Geometry With Experimental Validation

    PubMed Central

    Chandra, Santanu; Gnanaruban, Vimalatharmaiyah; Riveros, Fabian; Rodriguez, Jose F.; Finol, Ender A.

    2016-01-01

    In this work, we present a novel method for the derivation of the unloaded geometry of an abdominal aortic aneurysm (AAA) from a pressurized geometry in turn obtained by 3D reconstruction of computed tomography (CT) images. The approach was experimentally validated with an aneurysm phantom loaded with gauge pressures of 80, 120, and 140 mm Hg. The unloaded phantom geometries estimated from these pressurized states were compared to the actual unloaded phantom geometry, resulting in mean nodal surface distances of up to 3.9% of the maximum aneurysm diameter. An in-silico verification was also performed using a patient-specific AAA mesh, resulting in maximum nodal surface distances of 8 μm after running the algorithm for eight iterations. The methodology was then applied to 12 patient-specific AAA for which their corresponding unloaded geometries were generated in 5–8 iterations. The wall mechanics resulting from finite element analysis of the pressurized (CT image-based) and unloaded geometries were compared to quantify the relative importance of using an unloaded geometry for AAA biomechanics. The pressurized AAA models underestimate peak wall stress (quantified by the first principal stress component) on average by 15% compared to the unloaded AAA models. The validation and application of the method, readily compatible with any finite element solver, underscores the importance of generating the unloaded AAA volume mesh prior to using wall stress as a biomechanical marker for rupture risk assessment. PMID:27538124
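
    The iteration described can be pictured as a generic backward-displacement fixed point, sketched below. `forward_solve` is a placeholder for a finite element pressurization solve at the imaging pressure; this is a conceptual reconstruction of that family of methods, not the authors' algorithm or code.

    ```python
    # Backward-displacement style recovery of an unloaded geometry:
    # pressurize the current guess and subtract the residual displacement.
    import numpy as np

    def recover_unloaded(x_image, forward_solve, pressure, tol=1e-6, max_iter=20):
        x_unloaded = x_image.copy()           # initial guess: imaged geometry
        for _ in range(max_iter):
            x_pressurized = forward_solve(x_unloaded, pressure)
            residual = x_pressurized - x_image
            if np.max(np.linalg.norm(residual, axis=1)) < tol:
                break
            x_unloaded -= residual            # backward-displacement update
        return x_unloaded
    ```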

  16. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation and maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  17. Why Hasn't Earth Warmed as Much as Expected?

    NASA Technical Reports Server (NTRS)

    Schwartz, Stephen E.; Charlson, Robert J.; Kahn, Ralph A.; Ogren, John A.; Rodhe, Henning

    2010-01-01

    The observed increase in global mean surface temperature (GMST) over the industrial era is less than 40% of that expected from observed increases in long-lived greenhouse gases together with the best-estimate equilibrium climate sensitivity given by the 2007 Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Possible reasons for this warming discrepancy are systematically examined here. The warming discrepancy is found to be due mainly to some combination of two factors: the IPCC best estimate of climate sensitivity being too high and/or the greenhouse gas forcing being partially offset by forcing by increased concentrations of atmospheric aerosols; the increase in global heat content due to thermal disequilibrium accounts for less than 25% of the discrepancy, and cooling by natural temperature variation can account for only about 15 %. Current uncertainty in climate sensitivity is shown to preclude determining the amount of future fossil fuel CO2 emissions that would be compatible with any chosen maximum allowable increase in GMST; even the sign of such allowable future emissions is unconstrained. Resolving this situation, by empirical determination of the earth's climate sensitivity from the historical record over the industrial period or through use of climate models whose accuracy is evaluated by their performance over this period, is shown to require substantial reduction in the uncertainty of aerosol forcing over this period.

  18. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  19. Tectonic stress orientations and magnitudes, and friction of faults, deduced from earthquake focal mechanism inversions over the Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Soh, Inho; Chang, Chandong; Lee, Junhyung; Hong, Tae-Kyung; Park, Eui-Seob

    2018-05-01

    We characterize the present-day stress state in and around the Korean Peninsula using formal inversions of earthquake focal mechanisms. Two different methods are used to select preferred fault planes in the double-couple focal mechanism solutions: one that minimizes average misfit angle and the other choosing faults with higher instability. We invert selected sets of fault planes for estimating the principal stresses at regularly spaced grid points, using a circular-area data-binning method, where the bin radius is optimized to yield the best possible stress inversion results based on the World Stress Map quality ranking scheme. The inversions using the two methods yield well constrained and fairly comparable results, which indicate that the prevailing stress regime is strike-slip, and the maximum horizontal principal stress (SHmax) is oriented ENE-WSW throughout the study region. Although the orientation of the stresses is consistent across the peninsula, the relative stress magnitude parameter (R-value) varies significantly, from 0.22 in the northwest to 0.89 in the southeast. Based on our knowledge of the R-values and stress regime, and using a value for vertical stress (Sv) estimated from the overburden weight of rock, together with a value for the maximum differential stress (based on the Coulomb friction of faults optimally oriented for slip), we estimate the magnitudes of the two horizontal principal stresses. The horizontal stress magnitudes increase from west to east such that SHmax/Sv ratio rises from 1.5 to 2.4, and the Shmin/Sv ratio from 0.6 to 0.8. The variation in the magnitudes of the tectonic stresses appears to be related to differences in the rigidity of crustal rocks. Using the complete stress tensors, including both orientations and magnitudes, we assess the possible ranges of frictional coefficients for different types of faults. We show that normal and reverse faults have lower frictional coefficients than strike-slip faults, suggesting that the former types of faults can be activated under a strike-slip stress regime. Our observations of the seismicity, with normal faulting concentrated offshore to the northwest and reverse faulting focused offshore to the east, are compatible with the results of our estimates of stress magnitudes.
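
    The stress-magnitude reasoning above can be sketched for a strike-slip regime (SHmax = S1, Sv = S2, Shmin = S3), assuming the differential stress is capped by Coulomb friction on optimally oriented faults, R = (S1 − S2)/(S1 − S3), and the textbook frictional limit S1 − Pp = k(S3 − Pp) with k = ((mu² + 1)^0.5 + mu)². The friction coefficient and pore pressure below are placeholders, and this is a reconstruction of the standard derivation rather than the authors' calculation.

    ```python
    # Horizontal stresses from Sv, the R-value, and Coulomb friction
    # in a strike-slip regime (frictional equilibrium assumed).
    import math

    def horizontal_stresses(sv, r_value, mu=0.6, pp=0.0):
        k = (math.sqrt(mu * mu + 1.0) + mu) ** 2   # frictional stress ratio
        # Solve S1 - Pp = k*(S3 - Pp) together with R = (S1 - Sv)/(S1 - S3)
        s3 = (sv - (1.0 - r_value) * (1.0 - k) * pp) / (k * (1.0 - r_value) + r_value)
        s1 = pp + k * (s3 - pp)
        return s1, s3                              # SHmax, Shmin

    print(horizontal_stresses(sv=25.0, r_value=0.5))  # e.g. MPa near ~1 km depth
    ```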

  20. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
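
    A two-component normal mixture of this kind is typically fitted by the EM algorithm; the minimal sketch below is a generic implementation of that standard approach, not the authors' code, and the crude initialization is an assumption.

    ```python
    # EM for a two-component normal mixture fitted by maximum likelihood.
    import numpy as np
    from scipy.stats import norm

    def em_two_normals(x, n_iter=200):
        w, mu1, mu2 = 0.5, x.min(), x.max()   # crude initialization
        s1 = s2 = x.std()
        for _ in range(n_iter):
            # E-step: posterior responsibility of component 1 for each point
            p1 = w * norm.pdf(x, mu1, s1)
            p2 = (1 - w) * norm.pdf(x, mu2, s2)
            g = p1 / (p1 + p2)
            # M-step: weighted ML updates of weight, means, and std devs
            w = g.mean()
            mu1 = np.average(x, weights=g)
            mu2 = np.average(x, weights=1 - g)
            s1 = np.sqrt(np.average((x - mu1) ** 2, weights=g))
            s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - g))
        return w, (mu1, s1), (mu2, s2)
    ```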

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  2. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06 cm^-1. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06 cm^-1. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45 cm^-1 region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
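
    For reference, Burg's recursion (the algorithm named above) can be implemented compactly; the MEM spectrum is then the residual power divided by the squared magnitude of the AR polynomial's frequency response. This is a generic textbook implementation, not the code used in the study.

    ```python
    import numpy as np

    def burg(x, order):
        """Burg's method: AR coefficients a (a[0] = 1) and residual power e."""
        x = np.asarray(x, dtype=float)
        f, b = x[1:].copy(), x[:-1].copy()   # forward/backward prediction errors
        a = np.array([1.0])
        e = np.dot(x, x) / len(x)
        for _ in range(order):
            # reflection coefficient minimizing fwd+bwd error power
            k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
            # Levinson update of the AR polynomial
            a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
            f, b = (f + k * b)[1:], (b + k * f)[:-1]
            e *= 1.0 - k * k
        return a, e

    def mem_psd(a, e, nfft=4096):
        """MEM PSD from the AR model (up to a sampling-interval scale factor)."""
        return e / np.abs(np.fft.rfft(a, nfft)) ** 2
    ```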

  3. T-111 Rankine system corrosion test loop, volume 1

    NASA Technical Reports Server (NTRS)

    Harrison, R. W.; Hoffman, E. E.; Smith, J. P.

    1975-01-01

    Results are given of a program whose objective was to determine the performance of refractory metal alloys in a two loop Rankine test system. The test system consisted of a circulating lithium circuit heated to 1230 C maximum transferring heat to a boiling potassium circuit with a 1170 C superheated vapor temperature. The results demonstrate the suitability of the selected refractory alloys to perform from a chemical compatibility standpoint.

  4. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with an R program, and the results are displayed in tables to facilitate the comparison.
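
    Two of the point estimates being compared have simple closed forms, sketched below: the Rayleigh scale MLE and a Bayes estimate of sigma^2 under Jeffreys' prior with squared-error loss. The precautionary, entropy, and L1 losses studied in the record lead to different Bayes estimators and are omitted here.

    ```python
    # Closed-form estimates of the Rayleigh scale parameter sigma^2,
    # where f(x; sigma) = (x/sigma^2) * exp(-x^2 / (2 sigma^2)).
    import numpy as np

    def rayleigh_mle_sigma2(x):
        return np.sum(x ** 2) / (2 * len(x))

    def rayleigh_bayes_sigma2(x):
        # Under Jeffreys' prior, the posterior of sigma^2 is inverse-gamma
        # with shape n and scale sum(x^2)/2; its mean is scale / (n - 1).
        n = len(x)
        s = np.sum(x ** 2) / 2
        return s / (n - 1)
    ```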

  5. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  6. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Well-known mortality tables include the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed under different settings such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE), but they do not use the complete mortality data; maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation, and an extension to double decrements is introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
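
    A hedged sketch of single-decrement estimation under the two assumptions mentioned above: the constant-force MLE of the death probability, and a moment-style (actuarial) estimate consistent with uniform deaths. The exposure figures in the example are illustrative.

    ```python
    # deaths = observed deaths in the age interval;
    # central_exposure = central exposure-to-risk in person-years.
    import math

    def q_constant_force(deaths, central_exposure):
        # MLE of the force of mortality mu, then q = 1 - exp(-mu)
        mu = deaths / central_exposure
        return 1.0 - math.exp(-mu)

    def q_actuarial_udd(deaths, central_exposure):
        # Moment-style (actuarial) estimate under uniform deaths
        return deaths / (central_exposure + 0.5 * deaths)

    print(q_constant_force(30, 1000.0), q_actuarial_udd(30, 1000.0))
    ```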

  7. Analysis of visual quality improvements provided by known tools for HDR content

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo

    2016-09-01

In this paper, the visual quality of different solutions for high dynamic range (HDR) compression is analyzed using MPEG test content. We also simulate a method for efficient HDR compression based on the statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with alternative methods that might require HEVC specification changes. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed on a Samsung JS9500 SUHD TV with maximum luminance up to 1000 nits. The solution based on statistical properties shows improvements in both objective performance and visual quality compared to other HDR solutions, while remaining compatible with the HEVC specification.

  8. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  9. Complex compatible taper and volume estimation systems for red and loblolly pine

    Treesearch

    John C. Byrne; David D. Reed

    1986-01-01

Five equation systems are described which can be used to estimate upper stem diameter, total individual tree cubic-foot volume, and merchantable cubic-foot volumes to any merchantability limit (expressed in terms of diameter or height), both inside and outside bark. The equations provide consistent results since they are mathematically related and are fit using stem...

  10. A site model for Pyrenean oak (Quercus pyrenaica) stands using a dynamic algebraic difference equation

    Treesearch

    Joao P. Carvalho; Bernard R. Parresol

    2005-01-01

    This paper presents a growth model for dominant-height and site-quality estimations for Pyrenean oak (Quercus pyrenaica Willd.) stands. The Bertalanffy–Richards function is used with the generalized algebraic difference approach to derive a dynamic site equation. This allows dominant-height and site-index estimations in a compatible way, using any...

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

A general iterative procedure is given for determining consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.

  12. Comparison of online reporting systems and their compatibility check with respective adverse drug reaction reporting forms

    PubMed Central

    Maharshi, Vikas; Nagar, Pravesh

    2017-01-01

AIM: Different forms and online tools are available in different countries for spontaneous reporting, one of the most widely used methods of pharmacovigilance. Capturing sufficient information and adequate compatibility of online systems with the respective reporting forms are highly desirable for appropriate reporting of adverse drug reactions (ADRs). This study aimed to compare three major online reporting systems (US, UK, and WHO) and to check their compatibility with the respective ADR reporting forms. MATERIALS AND METHODS: A total of 89 data elements providing relevant information were identified across the three online reporting systems. All three online systems were compared with regard to the amount of information captured by each, with a score of “1” given to each element. Compatibility of the ADR reporting forms of India (Red form), the US (Form 3500), and the UK (Yellow Card form) was assessed by comparing the information gathered by them with the information that can be entered into their respective online reporting systems, namely “VigiFlow,” “US online reporting,” and “Yellow Card online reporting.” Each unmatched item was given a score of “−1”. RESULTS: VigiFlow scored “74” points, whereas the online reporting systems of the US and UK scored “56” and “49,” respectively, with regard to the amount of information gathered. Compatibility scores were “0,” “−9,” and “−26” for the ADR reporting systems of the US, UK, and India, respectively. CONCLUSION: Our study reveals that “VigiFlow” captures the largest amount of information, but “Form 3500” and the “US online reporting system” are the most compatible with each other among the ADR reporting systems of the three countries. PMID:29515278

  13. Estimating missing daily temperature extremes in Jaffna, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Thevakaran, A.; Sonnadara, D. U. J.

    2018-04-01

The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature than for daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values; for daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, the station nearest to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
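
    The standard-departure method can be sketched as follows: convert each neighbour's observation to a z-score using that station's own climatology, average the z-scores, and rescale with the target station's climatology. The equal weighting and the numbers in the example are illustrative assumptions, not values from the study.

    ```python
    # Estimate a missing daily extreme at a target station from the
    # standard departures of the same variable at neighbouring stations.
    import numpy as np

    def estimate_target_temp(neighbour_obs, neighbour_means, neighbour_stds,
                             target_mean, target_std):
        z = (np.asarray(neighbour_obs) - np.asarray(neighbour_means)) \
            / np.asarray(neighbour_stds)
        return target_mean + target_std * z.mean()

    # e.g. daily maxima at Mannar, Anuradhapura, Puttalam, Trincomalee
    print(estimate_target_temp([33.1, 34.0, 33.5, 32.8],
                               [32.5, 33.4, 33.0, 32.2],
                               [1.1, 1.2, 1.0, 1.3],
                               target_mean=31.9, target_std=1.2))
    ```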

  14. Photovoltaic-Model-Based Solar Irradiance Estimators: Performance Comparison and Application to Maximum Power Forecasting

    NASA Astrophysics Data System (ADS)

    Scolari, Enrica; Sossan, Fabrizio; Paolone, Mario

    2018-01-01

Due to the increasing proportion of distributed photovoltaic (PV) production in the generation mix, knowledge of the PV generation capacity has become a key factor. In this work, we propose to compute the PV plant maximum power starting from the indirectly-estimated irradiance. Three estimators are compared in terms of i) ability to compute the PV plant maximum power, ii) bandwidth, and iii) robustness against measurement noise. The approaches rely on measurements of the DC voltage, current, and cell temperature and on a model of the PV array. We show that the considered methods can accurately reconstruct the PV maximum generation even during curtailment periods, i.e., when the measured PV power is not representative of the maximum potential of the PV array. Performance evaluation is carried out using a dedicated experimental setup on a 14.3 kWp rooftop PV installation. Results also show that the analyzed methods can outperform pyranometer-based estimations with a less complex sensing system. We show how the obtained PV maximum power values can be applied to train time-series-based solar maximum power forecasting techniques. This is beneficial when the measured power values, commonly used for training, are not representative of the maximum PV potential.
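
    The overall idea can be illustrated with a deliberately simplified sketch: invert a basic PV array model for irradiance from measured DC current and cell temperature, then evaluate the model's maximum power at that irradiance. All coefficients below are placeholders (only the 14.3 kWp rating comes from the record), and the single-diode models used in such studies are considerably richer than this.

    ```python
    # Crude model-based irradiance and maximum-power estimate.
    def estimate_irradiance(i_dc, t_cell, i_sc_stc=20.0, alpha_i=0.0005):
        # Photocurrent scales ~linearly with irradiance, with a small
        # temperature coefficient (per kelvin around 25 C).
        return 1000.0 * i_dc / (i_sc_stc * (1.0 + alpha_i * (t_cell - 25.0)))

    def max_power(g_est, t_cell, p_stc=14300.0, gamma_p=-0.004):
        # Power at the maximum power point, derated linearly with temperature.
        return p_stc * (g_est / 1000.0) * (1.0 + gamma_p * (t_cell - 25.0))

    g = estimate_irradiance(i_dc=12.0, t_cell=40.0)
    print(max_power(g, t_cell=40.0))  # available power, even if curtailed
    ```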

  15. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  16. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  17. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  18. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  19. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
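
    For context, the conventional estimator that MLE-BD is compared against can be sketched as follows: maximize the likelihood under the classic Luria-Delbrück (Lea-Coulson) model, with the mutant-count pmf computed by the standard compound-Poisson recursion, p_0 = exp(-m) and p_n = (m/n) * sum_{j=1..n} p_{n-j}/(j+1). The O(n^2) recursion is exactly what makes this slow for large counts. The bounds and data below are illustrative.

    ```python
    # ML estimate of the expected number of mutations per culture, m,
    # under the Lea-Coulson form of the Luria-Delbruck distribution.
    # (The mutation rate is then m divided by the final cell number.)
    import numpy as np
    from scipy.optimize import minimize_scalar

    def ld_pmf(m, n_max):
        p = np.zeros(n_max + 1)
        p[0] = np.exp(-m)
        for n in range(1, n_max + 1):
            j = np.arange(1, n + 1)
            p[n] = (m / n) * np.sum(p[n - j] / (j + 1))
        return p

    def mle_mutation_number(counts):
        counts = np.asarray(counts)
        def nll(m):
            p = ld_pmf(m, counts.max())
            return -np.sum(np.log(p[counts] + 1e-300))
        return minimize_scalar(nll, bounds=(1e-6, 100.0), method="bounded").x

    print(mle_mutation_number([0, 1, 0, 3, 12, 2, 0, 5, 1, 0]))
    ```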

  20. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity, manipulated at the within and between levels of a two-level confirmatory factor analysis, by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation was robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  1. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
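
    The over-relaxed fixed-point scheme analyzed here can be sketched generically: take the usual successive-approximation update map M(theta) and step theta <- theta + omega * (M(theta) - theta), with the cited result being local convergence for step sizes 0 < omega < 2. The sketch below leaves `update_map` (the EM-style map for the mixture problem) as a placeholder.

    ```python
    # Relaxed successive-approximation iteration with step size omega.
    import numpy as np

    def relaxed_fixed_point(update_map, theta0, omega=1.5, tol=1e-10, max_iter=500):
        theta = np.asarray(theta0, dtype=float)
        for _ in range(max_iter):
            step = update_map(theta) - theta   # direction of the plain update
            theta = theta + omega * step       # over- (or under-) relaxed step
            if np.linalg.norm(step) < tol:
                break
        return theta
    ```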

  2. Electrical Power Subsystem Integration and Test for the NPS Solar Cell Array Tester CubeSat

    DTIC Science & Technology

    2010-12-01

    Earth’s Gravitational Constant MCU Microcontroller Unit MPPT Maximum Power Point Tracker NiCr Nickel Chromium NPS Naval Postgraduate School P...new testing platform was designed, built, and used to conduct integrated testing on CubeSat Kit (CSK) compatible devices. The power budgets and...acceptance test results obtained from the testing platform were used with a solar array power generation simulation, and a battery state of charge

  3. A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.

    ERIC Educational Resources Information Center

    Wood, Terry M.; Safrit, Margaret J.

    1987-01-01

    A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…

  4. Assessment of mammal reproduction for hunting sustainability through community-based sampling of species in the wild.

    PubMed

    Mayor, Pedro; El Bizri, Hani; Bodmer, Richard E; Bowler, Mark

    2017-08-01

    Wildlife subsistence hunting is a major source of protein for tropical rural populations and a prominent conservation issue. The intrinsic rate of natural increase (r_max) of populations is a key reproductive parameter in the most widely used assessments of hunting sustainability. However, researchers face severe difficulties in obtaining reproductive data in the wild, so these assessments often rely on classic reproductive rates calculated mostly from studies of captive animals conducted 30 years ago. The result is a flaw in almost 50% of studies, which hampers management decision making. We conducted a 15-year study in the Amazon in which we used reproductive data from the genitalia of 950 hunted female mammals. Genitalia were collected by local hunters. We examined tissue from these samples to estimate birthrates for wild populations of the 10 most hunted mammals. We compared our estimates with classic measures and considered the utility of the use of r_max in sustainability assessments. For woolly monkey (Lagothrix poeppigii) and tapir (Tapirus terrestris), wild birthrates were similar to those from captive populations, whereas birthrates for other ungulates and lowland paca (Cuniculus paca) were significantly lower than previous estimates. Conversely, for capuchin monkeys (Sapajus macrocephalus), agoutis (Dasyprocta sp.), and coatis (Nasua nasua), our calculated reproductive rates greatly exceeded often-used values. Researchers could keep applying classic measures compatible with our estimates, but for other species previous estimates of r_max may not be appropriate. We suggest that data from local studies be used to set hunting quotas. Our maximum rates of population growth in the wild correlated with body weight, which suggests that our method is consistent and reliable. Integration of this method into community-based wildlife management and the training of local hunters to record pregnancies in hunted animals could efficiently generate useful information on the life histories of wild species and thus improve management of natural resources. © 2016 Society for Conservation Biology.

  5. The rotational dynamics of Titan from Cassini RADAR images

    NASA Astrophysics Data System (ADS)

    Meriggiola, Rachele; Iess, Luciano; Stiles, Bryan. W.; Lunine, Jonathan. I.; Mitri, Giuseppe

    2016-09-01

    Between 2004 and 2009 the RADAR instrument of the Cassini mission provided 31 SAR images of Titan. We tracked the position of 160 surface landmarks as a function of time in order to monitor the rotational dynamics of Titan. We generated and processed RADAR observables using a least squares fit to determine the updated values of the rotational parameters. We provide a new rotational model of Titan, which includes updated values for spin pole location, spin rate, precession and nutation terms. The estimated pole location is compatible with the occupancy of a Cassini state 1. We found a synchronous value of the spin rate (22.57693 deg/day), compatible at a 3-σ level with IAU predictions. The estimated obliquity is equal to 0.31°, incompatible with the assumption of a rigid body with fully-damped pole and a moment of inertia factor of 0.34, as determined by gravity measurements.

  6. The use of LANDSAT-1 imagery in mapping and managing soil and range resources in the Sand Hills region of Nebraska

    NASA Technical Reports Server (NTRS)

    Seevers, P. M. (Principal Investigator); Drew, J. V.

    1976-01-01

    The author has identified the following significant results. Evaluation of ERTS-1 imagery for the Sand Hills region of Nebraska has shown that the data can be used to effectively measure several parameters of inventory needs. (1) Vegetative biomass can be estimated with a high degree of confidence using computer-compatible tape data. (2) Soils can be mapped to the subgroup level with high-altitude aircraft color infrared photography and to the association level with multitemporal ERTS-1 imagery. (3) Water quality in Sand Hills lakes can be estimated utilizing computer-compatible tape data. (4) Center pivot irrigation can be inventoried from satellite data and can be monitored regarding site selection and relative success of establishment from high-altitude aircraft color infrared photography. (5) ERTS-1 data is of exceptional value in wide-area inventory of natural resource data in the Sand Hills region of Nebraska.

  7. A reverse KAM method to estimate unknown mutual inclinations in exoplanetary systems

    NASA Astrophysics Data System (ADS)

    Volpi, Mara; Locatelli, Ugo; Sansottera, Marco

    2018-05-01

    The inclinations of exoplanets detected via the radial velocity method are essentially unknown. We aim to provide estimates of the ranges of mutual inclinations that are compatible with the long-term stability of the system. Focusing on the skeleton of an extrasolar system, i.e., considering only the two most massive planets, we study the Hamiltonian of the three-body problem after the reduction of the angular momentum. This Hamiltonian is expanded both in Poincaré canonical variables and in the small parameter D_2, which represents the normalised angular momentum deficit. The value of the mutual inclination is deduced from D_2 and, thanks to the use of interval arithmetic, we are able to consider open sets of initial conditions instead of single values. Looking at the convergence radius of the Kolmogorov normal form, we develop a reverse KAM approach in order to estimate the ranges of mutual inclinations that are compatible with long-term stability in a KAM sense. Our method is successfully applied to the extrasolar systems HD 141399, HD 143761 and HD 40307.

  8. Improvement of blood compatibility on polysulfone-polyvinylpyrrolidone blend films as a model membrane of dialyzer by physical adsorption of recombinant soluble human thrombomodulin (ART-123).

    PubMed

    Omichi, Masaaki; Matsusaki, Michiya; Maruyama, Ikuro; Akashi, Mitsuru

    2012-01-01

    ART-123 is a recombinant soluble human thrombomodulin (hTM) with potent anticoagulant activity, and is available for developing antithrombogenic surfaces by immobilization. We focused on improving blood compatibility of the dialyzer surface by the physical adsorption of ART-123 as a safe yet simple method that requires no chemical reagents. The physical adsorption mechanism and anticoagulant activities of adsorbed hTM on the surface of a polysulfone (PSF) membrane containing polyvinylpyrrolidone (PVP), as a model dialyzer, were investigated in detail. The PVP content of the PSF-PVP films was saturated at 20 wt% after immersion in Tris-HCl buffer, even with the addition of over 20 wt% PVP. The surface morphology of the PSF-PVP films was strongly influenced by the PVP content, because PVP covered the outermost surface of the PSF-PVP films. The adsorption speed of hTM slowed dramatically with increasing PVP content up to 10 wt%, but the maximum adsorption amount of hTM onto the PSF-PVP film surface was almost the same regardless of the PVP content. The PSF-PVP film with the physically adsorbed hTM showed higher protein C activity than the PSF film, and it showed excellent blood compatibility due to the protein C activity and the inhibition of platelet adhesion. The physical adsorption of hTM can be useful as a safe yet simple method to improve the blood compatibility of a dialyzer surface.

  9. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  10. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  11. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  12. Ball-morph: definition, implementation, and comparative evaluation.

    PubMed

    Whited, Brian; Rossignac, Jaroslaw Jarek

    2011-06-01

    We define b-compatibility for planar curves and propose three ball-morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves at the same angle, which is a right angle for the circular and parabolic variants. We provide simple constructions for these ball-morphs and compare them to each other and to other simple morphs (linear interpolation, closest projection, curvature interpolation, Laplace blending, and heat propagation) using six cost measures (travel distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph consistently has the shortest travel distance and the circular ball-morph the least distortion.

  13. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to the limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. Many parameter estimation methods exist; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman Filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data in parameter estimation. With these two methods, parameters can be estimated from hard data (certain) and soft data (uncertain) at the same time. We use Python and QGIS with the MODFLOW groundwater model and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothesized architecture of the MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
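
    A minimal scalar extended Kalman filter illustrates the linearize-predict-update cycle the abstract builds on; the observation model h and all numbers here are hypothetical stand-ins, not the study's MODFLOW setup.

    ```python
    import numpy as np

    def h(theta):            # hypothetical nonlinear observation model
        return 10.0 * np.exp(-theta)

    def h_prime(theta):      # its derivative, used for linearization
        return -10.0 * np.exp(-theta)

    theta_est, P = 0.0, 1.0          # initial parameter estimate and variance
    Q, R = 1e-4, 0.05                # process and measurement noise variances
    rng = np.random.default_rng(1)
    theta_true = 0.7

    for _ in range(50):
        z = h(theta_true) + rng.normal(0, np.sqrt(R))   # synthetic measurement
        P = P + Q                                        # predict (random-walk parameter)
        H = h_prime(theta_est)                           # linearize observation model
        K = P * H / (H * P * H + R)                      # Kalman gain
        theta_est = theta_est + K * (z - h(theta_est))   # measurement update
        P = (1 - K * H) * P

    print("estimated theta:", theta_est)
    ```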

  14. In Vivo potassium-39 NMR spectra by the Burg maximum-entropy method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takanori; Minamitani, Haruyuki

    The Burg maximum-entropy method was applied to estimate ³⁹K NMR spectra of mung bean root tips. The maximum-entropy spectra show as good a linearity between peak areas and potassium concentrations as spectra obtained by fast Fourier transform, and give a better estimation of intracellular potassium concentrations. Potassium uptake and loss processes of mung bean root tips are therefore shown to be traced more clearly by the maximum-entropy method.
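
    For reference, Burg's recursion is compact enough to sketch: it estimates reflection coefficients that minimize the combined forward and backward prediction error power, then evaluates the maximum-entropy (autoregressive) spectrum. This is a generic illustration on a synthetic signal, not the study's NMR processing pipeline.

    ```python
    import numpy as np

    def burg_ar(x, p):
        """Burg's method: AR(p) coefficients and residual power for signal x."""
        x = np.asarray(x, float)
        f, b = x[1:].copy(), x[:-1].copy()      # forward / backward prediction errors
        coeffs = np.zeros(0)
        E = np.dot(x, x) / len(x)               # zero-order prediction error power
        for _ in range(p):
            k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff.
            coeffs = np.concatenate([coeffs + k * coeffs[::-1], [k]])
            E *= 1.0 - k * k
            f, b = (f + k * b)[1:], (b + k * f)[:-1]
        return coeffs, E

    def me_spectrum(coeffs, E, nfreq=512):
        """Maximum-entropy PSD estimate from the fitted AR model."""
        w = np.linspace(0, 0.5, nfreq)          # normalized frequency
        m = np.arange(1, len(coeffs) + 1)
        A = 1 + np.exp(-2j * np.pi * np.outer(w, m)) @ coeffs
        return w, E / np.abs(A) ** 2

    t = np.arange(1024)
    sig = np.sin(2 * np.pi * 0.12 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    coeffs, E = burg_ar(sig, 12)
    w, psd = me_spectrum(coeffs, E)
    print("spectral peak near:", w[np.argmax(psd)])   # should sit near 0.12
    ```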

  15. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML)...accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non

  16. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life and livelihood in financial, environmental and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that estimates parameters from the posterior distribution via Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the plain Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
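
    A minimal random-walk Metropolis-Hastings sampler for the three GEV parameters might look like the following sketch (flat priors, synthetic data; note that scipy's genextreme uses the shape convention c = -ξ):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    data = genextreme.rvs(c=-0.1, loc=100, scale=20, size=40, random_state=rng)

    def log_post(theta):
        mu, log_sig, xi = theta              # flat priors, so posterior = likelihood
        ll = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sig)).sum()
        return ll if np.isfinite(ll) else -np.inf

    theta = np.array([data.mean(), np.log(data.std()), 0.0])
    lp = log_post(theta)
    samples = []
    for i in range(20000):
        prop = theta + rng.normal(0, [2.0, 0.1, 0.05])   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis-Hastings accept step
            theta, lp = prop, lp_prop
        if i > 5000:                                     # discard burn-in
            samples.append(theta)
    print("posterior mean of (mu, log sigma, xi):", np.mean(samples, axis=0))
    ```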

  17. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  18. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present paper introduces an alternative method to compute the NLSE using principles of multivariate calculus, and is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for the nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus, and the linear pseudo-model of Edmond Malinvaud [4] is explained in a different way. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of nonlinear regression functions. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
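
    The matrix-calculus flavor of the NLSE computation can be illustrated with a plain Gauss-Newton iteration, where the normal equations JᵀJ δ = Jᵀr drive the update; the exponential model and data below are hypothetical:

    ```python
    import numpy as np

    # model y = a * exp(b * t); residuals r(p) = y - f(p) and Jacobian J = dr/dp
    def residuals(p, t, y):
        a, b = p
        return y - a * np.exp(b * t)

    def jacobian(p, t):
        a, b = p
        return np.column_stack([-np.exp(b * t), -a * t * np.exp(b * t)])

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 50)
    y = 2.0 * np.exp(0.8 * t) + rng.normal(0, 0.05, t.size)   # synthetic data

    p = np.array([1.0, 0.1])                 # starting values
    for _ in range(20):
        r, J = residuals(p, t, y), jacobian(p, t)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton normal-equations step
    print("estimated (a, b):", p)
    ```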

  19. Productivity correlated to photobiochemical performance of Chlorella mass cultures grown outdoors in thin-layer cascades.

    PubMed

    Masojídek, Jiří; Kopecký, Jiří; Giannelli, Luca; Torzillo, Giuseppe

    2011-02-01

    This work aims to: (1) correlate photochemical activity and productivity, (2) characterize the flow pattern of culture layers and (3) determine a range of biomass densities for high productivity of the freshwater microalga Chlorella spp., grown outdoors in thin-layer cascade units. Biomass density, irradiance inside the culture, pigment content and productivity were measured in the microalgal cultures. Chlorophyll-fluorescence quenching was monitored in situ (using the saturation-pulse method) to estimate photochemical activities. Photobiochemical activities and growth parameters were studied in cultures of biomass density between 1 and 47 g L⁻¹. Fluorescence measurements showed that diluted cultures (1-2 g DW L⁻¹) experienced significant photostress due to inhibition of electron transport in the PSII complex. The highest photochemical activities were achieved in cultures of 6.5-12.5 g DW L⁻¹, which gave a maximum daylight productivity of up to 55 g dry biomass m⁻² day⁻¹. A midday depression of the maximum PSII photochemical yield (F_v/F_m) of 20-30% compared with morning values in these cultures proved to be compatible with well-performing cultures. Lower or higher depression of F_v/F_m indicated low-light-acclimated or photo-inhibited cultures, respectively. A hydrodynamic model of the culture demonstrated highly turbulent flow allowing rapid light/dark cycles (with a frequency of 0.5 s⁻¹) which possibly match the turnover of the photosynthetic apparatus. These results are important from a biotechnological point of view for optimisation of growth of outdoor microalgae mass cultures under various climatic conditions.

  20. Compatibility check of measured aircraft responses using kinematic equations and extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Klein, V.; Schiess, J. R.

    1977-01-01

    An extended Kalman filter smoother and a fixed point smoother were used for estimation of the state variables in the six degree of freedom kinematic equations relating measured aircraft responses and for estimation of unknown constant bias and scale factor errors in measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of measured data was obtained when the data were corrected for estimated bias errors.

  1. The use of LANDSAT data to monitor the urban growth of Sao Paulo Metropolitan area

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Niero, M.; Lombardo, M. A.; Foresti, C.

    1982-01-01

    Urban growth from 1977 to 1979 of the region between Billings and the Guarapiranga reservoir was mapped and the problematic urban areas identified using several LANDSAT products. Visual and automatic interpretation techniques were applied to the data. Computer compatible tapes of LANDSAT multispectral scanner data were analyzed through the maximum likelihood Gaussian algorithm. The feasibility of monitoring fast urban growth by remote sensing techniques for efficient urban planning and control is demonstrated.

  2. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  3. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.

  4. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    PubMed

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state of failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  5. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization scores should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study of surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
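
    The contrast between the two estimators is easy to reproduce for a Gaussian predictive distribution, for which the CRPS has a closed form. The sketch below fits constant distribution parameters to synthetic data by both criteria; real post-processing would instead regress the parameters on ensemble statistics.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    y = rng.normal(2.0, 1.5, 500)    # synthetic observations

    def crps_gauss(params):
        # closed-form CRPS of N(mu, sigma^2) at observation y (Gneiting et al.)
        mu, log_sig = params
        sig = np.exp(log_sig)
        z = (y - mu) / sig
        return np.mean(sig * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                              - 1 / np.sqrt(np.pi)))

    def nll_gauss(params):
        mu, log_sig = params
        return -np.mean(norm.logpdf(y, mu, np.exp(log_sig)))

    print("min-CRPS estimate:", minimize(crps_gauss, [0.0, 0.0]).x)
    print("max-likelihood estimate:", minimize(nll_gauss, [0.0, 0.0]).x)
    ```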

  6. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  7. Compatible Models of Carbon Content of Individual Trees on a Cunninghamia lanceolata Plantation in Fujian Province, China

    PubMed Central

    Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu

    2016-01-01

    We tried to establish compatible carbon content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation from Fujian province in southeast China. In general, compatibility requires that the sum of the components equal the whole tree, meaning that the sum of percentages calculated from the component equations should equal 100%. Thus, we used multiple approaches to simulate carbon content in boles, branches, foliage leaves, roots and whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables relating diameter at breast height (D) and tree height (H), such as D, D²H, DH and D&H (where D&H means two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least squares regression models were employed to eliminate heteroscedasticity. Model performance was evaluated using mean residuals, residual variance, mean square error and the coefficient of determination. The results indicated that models with two-dimensional variables (DH, D²H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of all the approaches, SOF could establish a single optimal model separately, but there were deviations in the estimated results due to the resulting incompatibilities, while NAP and NSUR ensured compatible predictions. At the same time, we found that the new general model had better accuracy than the others. In conclusion, we recommend that the new general model be used to estimate carbon content for Chinese fir and be considered for other vegetation types as well. PMID:26982054
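
    The compatibility constraint itself is simple to illustrate. The following sketch (synthetic data; a NAP-style proportional reconciliation rather than the paper's fitted models or its full NSUR system) fits separate power-form equations and rescales the component predictions so that they sum to the total prediction:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    power = lambda X, b0, b1: b0 * X ** b1   # generic power-form allometric model

    rng = np.random.default_rng(0)
    D = rng.uniform(8, 40, 100)                                    # DBH (cm), synthetic
    total = power(D, 0.08, 2.4) * rng.lognormal(0, 0.1, D.size)    # synthetic carbon data
    stem, branch = 0.7 * total, 0.3 * total                        # synthetic components

    fits = {name: curve_fit(power, D, y, p0=[0.1, 2.0])[0]
            for name, y in {"total": total, "stem": stem, "branch": branch}.items()}

    # NAP-style reconciliation: scale component predictions to sum to the total prediction
    D_new = 25.0
    comp = {k: power(D_new, *v) for k, v in fits.items() if k != "total"}
    scale = power(D_new, *fits["total"]) / sum(comp.values())
    compatible = {k: v * scale for k, v in comp.items()}
    print(compatible, "sum =", sum(compatible.values()))
    ```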

  8. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.

  9. An "ASYMPTOTIC FRACTAL" Approach to the Morphology of Malignant Cell Nuclei

    NASA Astrophysics Data System (ADS)

    Landini, Gabriel; Rippin, John W.

    To investigate quantitatively nuclear membrane irregularity, 672 nuclei from 10 cases of oral cancer (squamous cell carcinoma) and normal cells from oral mucosa were studied in transmission electron micrographs. The nuclei were photographed at ×1400 magnification and transferred to computer memory (1 pixel = 35 nm). The perimeter of the profiles was analysed using the "yardstick method" of fractal dimension estimation, and the log-log plot of ruler size vs. boundary length demonstrated that there exists a significant effect of resolution on length measurement. However, this effect seems to disappear at higher resolutions. As this observation is compatible with the concept of an asymptotic fractal, we estimated the parameters c, L and B_m from the asymptotic fractal formula B_r = B_m [1 + (r/L)^c]^(-1), where B_r is the boundary length measured with a ruler of size r, B_m is the maximum boundary length for r → 0, L is a constant, and c is the asymptotic fractal dimension minus the topological dimension (D - D_t) for r → ∞. Analyses of variance showed c to be significantly higher in the normal than the malignant cases (P < 0.001), but log(L) and B_m to be significantly higher in the malignant cases (P < 0.001). A multivariate linear discriminant analysis on c, log(L) and B_m re-classified 76.6% of the cells correctly (84.8% of the normal and 67.5% of the tumor cells). Furthermore, this shows that asymptotic fractal analysis applied to nuclear profiles has great potential for shape quantification in the diagnosis of oral cancer.
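
    Fitting the asymptotic fractal formula to yardstick measurements is a standard nonlinear least-squares problem; a sketch with hypothetical ruler sizes and boundary lengths:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def asymptotic_fractal(r, Bm, L, c):
        # B_r = B_m * (1 + (r/L)**c)**(-1)
        return Bm * (1 + (r / L) ** c) ** (-1)

    r = np.array([35, 70, 140, 280, 560, 1120.0])   # ruler sizes (nm), hypothetical
    Br = np.array([98, 95, 88, 74, 55, 36.0])       # boundary lengths (arb. units), hypothetical

    popt, _ = curve_fit(asymptotic_fractal, r, Br, p0=[100, 500, 1.0])
    Bm, L, c = popt
    print("Bm=%.1f  L=%.0f  D=%.3f" % (Bm, L, 1 + c))  # D = D_t + c, with D_t = 1 for curves
    ```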

  10. Accuracy of AHOF400 with a moment-measuring load cell barrier.

    DOT National Transportation Integrated Search

    2011-06-13

    Several performance measures derived from rigid : barrier crash testing have been proposed to assess : vehicle-to-vehicle crash compatibility. One such : measure, the Average Height of Force 400 (AHOF400) : [1], has been proposed to estimate the heig...

  11. Nonparametric evaluation of quantitative traits in population-based association studies when the genetic model is unknown.

    PubMed

    Konietschke, Frank; Libiger, Ondrej; Hothorn, Ludwig A

    2012-01-01

    Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equidistant genotype scores, the Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise-comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches. We provide a publicly available R library, nparcomp, that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.

  12. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocessing step that finds first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves by methods of the equivalence-point category, such as those of Gran or Fortuin, are also compared with the new method.
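
    The analytical end-point step can be sketched as inverse parabolic interpolation: given three derivative estimates bracketing the extremum, the vertex abscissa follows in closed form. The volumes and derivative values below are hypothetical:

    ```python
    import numpy as np

    def parabolic_vertex(x, y):
        """Abscissa of the extremum of the parabola through three points (x_i, y_i)."""
        x0, x1, x2 = x
        y0, y1, y2 = y
        num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
        den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
        return x1 - 0.5 * num / den

    # three first-derivative estimates bracketing the inflection of a titration curve
    v = np.array([9.8, 10.0, 10.2])          # titrant volume (mL), hypothetical
    dE = np.array([150.0, 420.0, 260.0])     # dE/dV values, hypothetical
    print("end point at V =", parabolic_vertex(v, dE), "mL")
    ```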

  13. Extremely Rare Interbreeding Events Can Explain Neanderthal DNA in Living Humans

    PubMed Central

    Neves, Armando G. M.; Serva, Maurizio

    2012-01-01

    Considering the recent experimental discovery of Green et al. that present-day non-Africans have 1 to 4% of their nuclear DNA of Neanderthal origin, we propose here a model which is able to quantify the genetic interbreeding between two subpopulations with equal fitness, living in the same geographic region. The model consists of a solvable system of deterministic ordinary differential equations containing as a stochastic ingredient a realization of the neutral Wright-Fisher process. By simulating the stochastic part of the model we are able to apply it to the interbreeding of the African ancestors of Eurasians and Middle Eastern Neanderthal subpopulations and estimate the only parameter of the model, which is the number of individuals per generation exchanged between subpopulations. Our results indicate that the amount of Neanderthal DNA in living non-Africans can be explained with maximum probability by the exchange of a single pair of individuals between the subpopulations every 77 generations, but larger exchange frequencies are also allowed with sizeable probability. The results are compatible with a long coexistence time of 130,000 years, a total interbreeding population of order individuals, and with all living humans being descendants of Africans both for mitochondrial DNA and Y chromosome. PMID:23112810

  14. Plasma Chamber and First Wall of the Ignitor Experiment^*

    NASA Astrophysics Data System (ADS)

    Cucchiaro, A.; Coppi, B.; Bianchi, A.; Lucca, F.

    2005-10-01

    The new designs of the Plasma Chamber (PC) and of the First Wall (FW) system are based on updated scenarios for vertical plasma disruption events (VDEs) as well as estimates of the maximum thermal wall loading at ignition. The PC wall thickness has been optimized to reduce the deformation during the worst disruption event without sacrificing the dimensions of the plasma column. A nonlinear dynamic analysis of the PC has been performed on a 360° model of it, taking into account possible toroidal asymmetries of the halo current. Radial EM loads obtained by scaling JET measurements have also been considered. The low-cycle fatigue analysis confirms that the PC is able to meet a lifetime of a few thousand cycles for the most extreme combinations of magnetic fields and plasma currents. The FW, made of molybdenum (TZM) tiles covering the entire inner surface of the PC, has been designed to withstand thermal and EM loads, both under normal operating conditions and in case of disruption. Detailed elasto-plastic structural analyses of the most (EM-)loaded tile carriers show that these are compatible with the adopted fabrication requirements. ^*Sponsored in part by ENEA of Italy and by the U.S. DOE.

  15. A polychromatic adaption of the Beer-Lambert model for spectral decomposition

    NASA Astrophysics Data System (ADS)

    Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.

    2017-03-01

    We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimal calibration effort to make the method applicable in routine clinical set-ups with the need for periodic re-calibration. In this work we present an experimental verification of our proposed method. The proposed method uses an adapted Beer-Lambert model, describing the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model demonstrates an accurate prediction of the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward model. The experimental data also show that the model is capable of handling possible spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model provide a viable forward model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
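
    The adapted Beer-Lambert idea (counts modeled as a sum of exponential terms in the material thickness) can be sketched as a small calibration fit; the spectrum parameters and counts below are hypothetical stand-ins, not the paper's CdTe measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Adapted Beer-Lambert: registered counts behind thickness A of one basis material,
    # modeled as a sum of exponentials (each term acts like an effective mono-energetic channel).
    def forward(A, c1, m1, c2, m2):
        return c1 * np.exp(-m1 * A) + c2 * np.exp(-m2 * A)

    A = np.linspace(0, 5, 20)                       # calibration thicknesses (cm)
    rng = np.random.default_rng(0)
    true = forward(A, 6e4, 0.35, 2e4, 0.15)         # hypothetical calibration counts
    counts = rng.poisson(true).astype(float)        # Poisson counting noise

    popt, _ = curve_fit(forward, A, counts, p0=[5e4, 0.3, 3e4, 0.1])
    print(popt)   # semi-empirical parameters usable inside an ML estimator
    ```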

  16. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  17. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.

    PubMed

    Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne

    2016-04-01

    We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT scans varied from <1 Gy to >8 Gy, and maximum dose from 5 to 50 Gy for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT scans. For right-sided breast cancer patients, mean heart dose based on individual CT scans was always <1 Gy and maximum dose always <5 Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT scans. Copyright © 2016. Published by Elsevier Ireland Ltd.

  18. Interactions between the Tetrasodium Salts of EDTA and 1-Hydroxyethane 1,1-Diphosphonic Acid with Sodium Hypochlorite Irrigants.

    PubMed

    Biel, Philippe; Mohn, Dirk; Attin, Thomas; Zehnder, Matthias

    2017-04-01

    A clinically useful all-in-one endodontic irrigant with combined proteolytic and decalcifying properties is still elusive. In this study, the chemical effects of dissolving the tetrasodium salts of 1-hydroxyethane 1,1-diphosphonic acid (Na₄HEDP) or Na₄EDTA directly in sodium hypochlorite (NaOCl) irrigants in polypropylene syringes were assessed during the course of 1 hour. The solubility of the salts in water was determined. Their compatibility with 1% and 5% NaOCl was measured by iodometric titration and in a calcium complexation experiment using a Ca²⁺-selective electrode. The salts dissolved within 1 minute. The dissolution maximum of Na₄HEDP in water (wt/total wt) was 44.6% ± 1.6%. The corresponding dissolution maximum of Na₄EDTA was 38.2% ± 0.8%. Na₄HEDP at 18% in 5% NaOCl caused a mere 16% loss of the initially available chlorine during 1 hour. In contrast, a corresponding mixture of NaOCl and the Na₄EDTA salt caused a 95% reduction in available chlorine after 1 minute. Mixtures of 3% Na₄EDTA with 1% NaOCl were more stable, but only for 30 minutes. Na₄HEDP lost 24% of its calcium complexation capacity after 60 minutes. The corresponding loss for Na₄EDTA was 34%. The compatibility and solubility of particulate Na₄HEDP with/in NaOCl solutions are such that these components can be mixed and used for up to 1 hour. In contrast, the short-term compatibility of the Na₄EDTA salt with NaOCl solutions was considerably lower, decreasing at higher concentrations of either compound. Especially for Na₄HEDP but also for Na₄EDTA, the NaOCl had little effect on calcium complexation. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  19. Plant-like mating in an animal: sexual compatibility and allocation trade-offs in a simultaneous hermaphrodite with remote transfer of sperm.

    PubMed

    Pemberton, A J; Sommerfeldt, A D; Wood, C A; Flint, H C; Noble, L R; Clarke, K R; Bishop, J D D

    2004-05-01

    The importance of sexual compatibility between mates has only recently been realized in zoological research into sexual selection, yet its study has been central to botanical research for many decades. The reproductive characteristics of remote mating, an absence of precopulatory mate screening, internal fertilization and embryonic brooding are shared between passively pollinated plants and a phylogenetically diverse group of sessile aquatic invertebrates. Here, we further characterize the sexual compatibility system of one such invertebrate, the colonial ascidian Diplosoma listerianum. All 66 reciprocal pairings of 12 genetic individuals were carried out. Fecundities of crosses varied widely and suggested a continuous scale of sexual compatibility. Of the 11 animals from the same population c. 40% of crosses were completely incompatible with a further c. 20% having obvious partial compatibility (reduced fecundity). We are unaware of other studies documenting such high levels of sexual incompatibility in unrelated individuals. RAPD fingerprinting was used to estimate relatedness among the 12 individuals after a known pedigree was successfully reconstructed to validate the technique. In contrast to previous results, no correlation between genetic similarity and sexual compatibility was detected. The blocking of many genotypes of sperm is expected to severely modify realized paternity away from 'fair raffle' expectations and probably reduce levels of intra-brood genetic diversity in this obligatorily promiscuous mating system. One adaptive benefit may be to reduce the bombardment of the female reproductive system by outcrossed sperm with conflicting evolutionary interests, so as to maintain female control of somatic : gametic investment.

  20. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  1. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum- likelihood method gets entangled in

  2. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  3. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect the presence of monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of trend; thus, non-stationary models are also fitted. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better suited for convergence to the GEV distribution, especially if longer records are available. The return level estimate, i.e. the level (here, the return amount) that is expected to be exceeded on average once every T time periods, starts to appear in the confidence interval of T = 50 for the quarterly, half-yearly and yearly maxima.
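
    For the stationary case, the basic GEV fit and return-level calculation are short in Python (synthetic annual maxima; scipy's genextreme uses the shape convention c = -ξ, and the T-period return level is the upper 1/T quantile):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    annual_max = genextreme.rvs(c=-0.1, loc=5.0, scale=1.2, size=30, random_state=rng)

    c, loc, scale = genextreme.fit(annual_max)               # maximum likelihood fit
    T = 50
    return_level = genextreme.isf(1.0 / T, c, loc, scale)    # exceeded on average once per T periods
    print("50-period return level:", return_level)
    ```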

  4. Comparing fishers' and scientific estimates of size at maturity and maximum body size as indicators for overfishing.

    PubMed

    Mclean, Elizabeth L; Forrester, Graham E

    2018-04-01

    We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size-at-maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species), scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more negative majority view. Although fishers' and scientific estimates of size at maturity and maximum size parameters sometimes differed, the fact that fishers make routine quantitative assessments of maturity and body size suggests potential for future collaborative monitoring efforts to generate estimates usable by scientists and meaningful to fishers. © 2017 by the Ecological Society of America.

  5. Electron microprobe analysis program for biological specimens: BIOMAP

    NASA Technical Reports Server (NTRS)

    Edwards, B. F.

    1972-01-01

    BIOMAP is a Univac 1108 compatible program which facilitates the electron probe microanalysis of biological specimens. Input data are X-ray intensity data from biological samples, the X-ray intensity and composition data from a standard sample and the electron probe operating parameters. Outputs are estimates of the weight percentages of the analyzed elements, the distribution of these estimates for sets of red blood cells and the probabilities for correlation between elemental concentrations. An optional feature statistically estimates the X-ray intensity and residual background of a principal standard relative to a series of standards.

  6. Estimating aquifer transmissivity from specific capacity using MATLAB.

    PubMed

    McLin, Stephen G

    2005-01-01

    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
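
    The core of such a calculation is a fixed-point iteration on the Cooper-Jacob approximation, since transmissivity appears inside the logarithm. The sketch below omits the paper's corrections for partial penetration and well efficiency, and all numbers are hypothetical:

    ```python
    import numpy as np

    def transmissivity_from_sc(Q_over_s, t, r, S, T0=100.0, iters=50):
        """Fixed-point iteration on the Cooper-Jacob relation
        T = (Q/s) / (4*pi) * ln(2.25*T*t / (r**2 * S))   (consistent units assumed)."""
        T = T0
        for _ in range(iters):
            T = Q_over_s / (4 * np.pi) * np.log(2.25 * T * t / (r ** 2 * S))
        return T

    # hypothetical values: Q/s in m^2/d, t in days, r in m, storativity S dimensionless
    print(transmissivity_from_sc(Q_over_s=150.0, t=1.0, r=0.15, S=1e-4))
    ```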

  7. Alternative Zoning Scenarios for Regional Sustainable Land Use Controls in China: A Knowledge-Based Multiobjective Optimisation Model

    PubMed Central

    Xia, Yin; Liu, Dianfeng; Liu, Yaolin; He, Jianhua; Hong, Xiaofeng

    2014-01-01

    Alternative land use zoning scenarios provide guidance for sustainable land use controls. This study focused on an ecologically vulnerable catchment on the Loess Plateau in China, proposed a novel land use zoning model, and generated alternative zoning solutions to satisfy the various requirements of land use stakeholders and managers. The model combined multiple zoning objectives, i.e., maximum zoning suitability, maximum planning compatibility and maximum spatial compactness, with land use constraints using a goal programming technique, and employed a modified simulated annealing algorithm to search for optimal zoning solutions. Land use zoning knowledge was incorporated into the initialisation operator and the neighbourhood selection strategy of the simulated annealing algorithm to improve its efficiency. The case study indicates that the model is both effective and robust. Five optimal zoning scenarios of the study area helped satisfy the requirements of land use controls in loess hilly regions, e.g., land use intensification, agricultural protection and environmental conservation. PMID:25170679
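
    A stripped-down version of such a simulated annealing search (suitability term only, random single-cell moves, geometric cooling; the paper's compactness and compatibility terms are omitted) might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_cells, n_zones = 200, 3
    suit = rng.random((n_cells, n_zones))        # hypothetical per-cell zoning suitability

    def objective(z):
        # only the suitability term; compatibility/compactness terms would be added here
        return suit[np.arange(n_cells), z].sum()

    z = rng.integers(0, n_zones, n_cells)        # initial random zoning
    current, T = objective(z), 1.0
    for _ in range(20000):
        i, new = rng.integers(n_cells), rng.integers(n_zones)
        old = z[i]
        z[i] = new                               # propose reassigning one cell
        cand = objective(z)
        # accept improvements always, worse solutions with Boltzmann probability
        if cand >= current or rng.random() < np.exp((cand - current) / T):
            current = cand
        else:
            z[i] = old                           # reject and revert
        T *= 0.9997                              # geometric cooling schedule
    print(current / n_cells)
    ```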

  8. LONG DISTANCE POLLEN-MEDIATED GENE FLOW FROM CREEPING BENTGRASS

    EPA Science Inventory

    Researchers from USEPA WED have measured gene flow from experimental fields of Roundup herbicide-resistant genetically modified (GM) creeping bentgrass, a grass used primarily on golf courses, to compatible non-crop relatives. Using a sampling design based on the estimated time ...

  9. Gap junctions favor normal rat kidney epithelial cell adaptation to chronic hypertonicity.

    PubMed

    Desforges, Bénédicte; Savarin, Philippe; Bounedjah, Ouissame; Delga, Stéphanie; Hamon, Loïc; Curmi, Patrick A; Pastré, David

    2011-09-01

    Upon hypertonic stress most often resulting from high salinity, cells need to balance their osmotic pressure by accumulating neutral osmolytes called compatible osmolytes like betaine, myo-inositol, and taurine. However, the massive uptake of compatible osmolytes is a slow process compared with other defense mechanisms related to oxidative or heat stress. This is especially critical for cycling cells as they have to double their volume while keeping a hospitable intracellular environment for the molecular machineries. Here we propose that clustered cells can accelerate the supply of compatible osmolytes to cycling cells via the transit, mediated by gap junctions, of compatible osmolytes from arrested to cycling cells. Both experimental results in epithelial normal rat kidney cells and theoretical estimations show that gap junctions indeed play a key role in cell adaptation to chronic hypertonicity. These results can provide basis for a better understanding of the functions of gap junctions in osmoregulation not only for the kidney but also for many other epithelia. In addition to this, we suggest that cancer cells that do not communicate via gap junctions poorly cope with hypertonic environments thus explaining the rare occurrence of cancer coming from the kidney medulla.

  10. Soil-Bacterium Compatibility Model as a Decision-Making Tool for Soil Bioremediation.

    PubMed

    Horemans, Benjamin; Breugelmans, Philip; Saeys, Wouter; Springael, Dirk

    2017-02-07

    Bioremediation of organic-pollutant-contaminated soil involving bioaugmentation with dedicated bacteria specialized in degrading the pollutant is suggested as a green and economically sound alternative to physico-chemical treatment. However, intrinsic soil characteristics impact the success of bioaugmentation. The feasibility of using partial least-squares regression (PLSR) to predict the success of bioaugmentation in contaminated soil based on the intrinsic physico-chemical soil characteristics, and hence to improve the success of bioaugmentation, was examined. As a proof of principle, PLSR was used to build soil-bacterium compatibility models to predict the bioaugmentation success of the phenanthrene-degrading Novosphingobium sp. LH128. The survival and biodegradation activity of strain LH128 were measured in 20 soils and correlated with the soil characteristics. PLSR was able to predict the strain's survival using 12 variables or fewer, while the PAH-degrading activity of strain LH128 in soils that showed survival was predicted using 9 variables. A three-step approach using the developed soil-bacterium compatibility models is proposed as a decision-making tool and first estimation to select compatible soils and organisms and increase the chance of success of bioaugmentation.
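
    A minimal sketch of the PLSR workflow described above, using scikit-learn's PLSRegression. The soil variables and response are simulated stand-ins (the abstract does not publish the data), and the choice of three latent components and the cross-validated check are illustrative only.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X = rng.normal(size=(20, 12))   # 20 soils x 12 physico-chemical variables
      y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=20)  # "survival"

      pls = PLSRegression(n_components=3)        # few latent components
      y_cv = cross_val_predict(pls, X, y, cv=5)  # out-of-fold predictions
      # squared correlation between observed and cross-validated predictions
      print(np.corrcoef(y, y_cv.ravel())[0, 1] ** 2)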

  11. Experimental Interactions of Components of Hemodialysis Units with Human Blood

    PubMed Central

    Zucker, W. H.; Shinoda, B. A.; Mason, R. G.

    1974-01-01

    An in vitro model test system for estimation of the blood compatibility of hemodialysis membranes and tubing is described. The model test system consists of a modified hemodialysis unit and blood pump through which fresh citrated human blood is circulated. The effects of the use of different pump and tubing types upon hematologic and blood coagulation parameters are described. Preexposure of test surfaces to albumin appeared to enhance blood compatibility characteristics of the model test system, whereas preexposure to a high density lipoprotein preparation or a proteinpolysaccharide preparation was without appreciable benefit. Use of blood from subjects receiving aspirin resulted in enhanced blood compatibility in the test system as did use of heparin. Use of Warfarin or dextran did not appear to enhance blood compatibility of test surfaces under the conditions of this test system. Dialysis membranes and tubing which formed parts of the test system were examined by scanning and transmission electron microscopy in control tests and in tests for effects of proteins and antithrombotic agents. PMID:4825611

  12. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  13. Hierarchical clustering method for improved prostate cancer imaging in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Kavuri, Venkaiah C.; Liu, Hanli

    2013-03-01

    We investigate the feasibility of trans-rectal near infrared (NIR) based diffuse optical tomography (DOT) for early detection of prostate cancer using a transrectal ultrasound (TRUS) compatible imaging probe. For this purpose, we designed a TRUS-compatible, NIR-based imaging system (780 nm), in which the photodiodes were placed on the trans-rectal probe. DC signals were recorded and used for estimating the absorption coefficient. We validated the system using laboratory phantoms. For further improvement, we also developed a hierarchical clustering method (HCM) to improve the accuracy of image reconstruction with limited prior information. We demonstrated the method using computer simulations and laboratory phantom experiments.

  14. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed, as well as potential advantages and caveats of the Bayesian approach.

  15. Expression of the 12-oxophytodienoic acid 10,11-reductase gene in the compatible interaction between pea and fungal pathogen.

    PubMed

    Ishiga, Yasuhiro; Funato, Akiko; Tachiki, Tomoyuki; Toyoda, Kazuhiro; Shiraishi, Tomonori; Yamada, Tetsuji; Ichinose, Yuki

    2002-10-01

    Suppressors produced by Mycosphaerella pinodes are glycopeptides that block pea defense responses induced by elicitors. A clone, S64, was isolated as a cDNA for a suppressor-inducible gene from pea epicotyls. Treatment of pea epicotyls with the suppressor alone induced an increase of S64 mRNA within 1 h, which reached a maximum level 3 h after treatment. The induction was not affected by application of the elicitor, indicating that the suppressor has a dominant action in regulating S64 gene expression. S64 was also induced by inoculation with the virulent pathogen M. pinodes, but not by inoculation with a non-pathogen, Ascochyta rabiei, nor by treatment with fungal elicitor. The deduced structure of S64 showed high homology to 12-oxophytodienoic acid reductase (OPR) in Arabidopsis thaliana. A recombinant protein derived from S64 had OPR activity, suggesting compatibility-specific activation of the octadecanoid pathway in plants. Treatment with jasmonic acid (JA) or methyl jasmonate, end products of the octadecanoid pathway, inhibited the elicitor-induced accumulation of PAL mRNA in pea. These results indicate that suppressor-induced S64 gene expression leads to the production of JA or related compounds, which might contribute to the establishment of compatibility by inhibiting the phenylpropanoid biosynthetic pathway.

  16. Compatibility and stability of tramadol and dexamethasone in solution and its use in terminally ill patients.

    PubMed

    Negro, S; Salama, A; Sánchez, Y; Azuara, M L; Barcia, E

    2007-10-01

    Delivery of drug admixtures by continuous subcutaneous infusion is common practice in palliative medicine, but analytical confirmation of their compatibility and stability is not always available. To study the compatibility and stability of tramadol hydrochloride and dexamethasone sodium phosphate combined in solution and to report on its use in terminally ill patients. Twelve different solutions containing tramadol hydrochloride (8.33-33.33 mg/mL) and dexamethasone sodium phosphate (0.33-3.33 mg/mL) were prepared in saline and stored in polypropylene syringes for 5 days (25 degrees C). Analysis was performed on days 1, 3 and 5 by HPLC, with simultaneous determination of both drugs. pH was measured at 0 and 5 days. Clinical performance was assessed retrospectively in six terminally ill oncology patients. Maximum losses of 7% and 6% were observed for tramadol and dexamethasone, respectively. Pain was completely controlled in four patients. Assessment of local tolerance revealed haematoma in three patients, which resolved on switching to a butterfly insertion site. Tramadol hydrochloride (100-400 mg/day) and dexamethasone sodium phosphate (4-40 mg/day) are stable for at least 5 days when combined in saline and stored at 25 degrees C. These results are only valid for the type of syringes and the specific commercial preparations tested.

  17. Incentive Compatible Online Scheduling of Malleable Parallel Jobs with Individual Deadlines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, Thomas E.; Grosu, Daniel

    2010-09-13

    We consider the online scheduling of malleable jobs on parallel systems, such as clusters, symmetric multiprocessing computers, and multi-core processor computers. Malleable jobs are a model of parallel processing in which jobs adapt to the number of processors assigned to them. This model permits the scheduler and resource manager to make more efficient use of the available resources. Each malleable job is characterized by arrival time, deadline, and value. If the job completes by its deadline, the user earns the payoff indicated by the value; otherwise, she earns a payoff of zero. The scheduling objective is to maximize the sum of the values of the jobs that complete by their associated deadlines. Complicating the matter is that users in the real world are rational and will attempt to manipulate the scheduler by misreporting their jobs’ parameters if it benefits them to do so. To mitigate this behavior, we design an incentive compatible online scheduling mechanism. Incentive compatibility assures us that the users will obtain the maximum payoff only if they truthfully report their jobs’ parameters to the scheduler. Finally, we simulate and study the mechanism to show the effects of misreports on the cheaters and on the system.

  18. A review of the generalized uncertainty principle.

    PubMed

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-12-01

    Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, a minimal distance and/or a maximum momentum have been proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, space noncommutativity, Lorentz invariance violation, and quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, concerns about its compatibility with the equivalence principle, the universality of gravitational redshift and free fall, and the law of reciprocal action are addressed.
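
    For orientation, one commonly quoted quadratic form of the GUP (the review surveys several variants, so this is representative rather than definitive) is

      \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left[ 1 + \beta (\Delta p)^2 \right],
      \qquad
      \Delta x_{\min} \;=\; \hbar \sqrt{\beta},

    where beta is the GUP parameter whose bounds the abstract refers to; minimising the right-hand side over Delta p yields the minimal measurable length.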

  19. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  20. Genetic diversity and structure in two species of Leavenworthia with self-incompatible and self-compatible populations

    PubMed Central

    Koelling, V A; Hamrick, J L; Mauricio, R

    2011-01-01

    Self-fertilization is a common mating system in plants and is known to reduce genetic diversity, increase genetic structure and potentially put populations at greater risk of extinction. In this study, we measured the genetic diversity and structure of two cedar glade endemic species, Leavenworthia alabamica and L. crassa. These species have self-incompatible (SI) and self-compatible (SC) populations and are therefore ideal for understanding how the mating system affects genetic diversity and structure. We found that L. alabamica and L. crassa had high species-level genetic diversity (He=0.229 and 0.183, respectively) and high genetic structure among their populations (FST=0.45 and 0.36, respectively), but that mean genetic diversity was significantly lower in SC compared with SI populations (SC vs SI, He for L. alabamica was 0.065 vs 0.206 and for L. crassa was 0.084 vs 0.189). We also found significant genetic structure using maximum-likelihood clustering methods. These data indicate that the loss of SI leads to the loss of genetic diversity within populations. In addition, we examined genetic distance relationships between SI and SC populations to analyze possible population history and origins of self-compatibility. We find there may have been multiple origins of self-compatibility in L. alabamica and L. crassa. However, further work is required to test this hypothesis. Finally, given their high genetic structure and that individual populations harbor unique alleles, conservation strategies seeking to maximize species-level genetic diversity for these or similar species should protect multiple populations. PMID:20485327

  1. Navy Multiband Terminal (NMT)

    DTIC Science & Technology

    2015-12-01

    AEHF satellites and MILSTAR satellites in the backwards-compatible mode. Mission requirements specific to Navy operations, including threat levels and... Center for Cost Analysis (NCCA) Component Cost Position (CCP) memo dated December 18, 2015... Confidence Level of cost estimate for... [fragment of the SAR cost-variance table (Econ/Qty/Sch/Eng/Est/Oth/Spt totals, Current SAR Baseline to Current Estimate, TY $M, PAUC) omitted]

  2. pytc: Open-Source Python Software for Global Analyses of Isothermal Titration Calorimetry Data.

    PubMed

    Duvvuri, Hiranmayi; Wheeler, Lucas C; Harms, Michael J

    2018-05-08

    Here we describe pytc, an open-source Python package for global fits of thermodynamic models to multiple isothermal titration calorimetry experiments. Key features include simplicity, the ability to implement new thermodynamic models, a robust maximum likelihood fitter, a fast Bayesian Markov-Chain Monte Carlo sampler, rigorous implementation, extensive documentation, and full cross-platform compatibility. pytc fitting can be done using an application program interface or via a graphical user interface. It is available for download at https://github.com/harmslab/pytc .

  3. Transition and separation process in brine channels formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berti, Alessia, E-mail: alessia.berti@unibs.it; Bochicchio, Ivana, E-mail: ibochicchio@unisa.it; Fabrizio, Mauro, E-mail: mauro.fabrizio@unibo.it

    2016-02-15

    In this paper, we discuss the formation of brine channels in sea ice. The model includes a time-dependent Ginzburg-Landau equation for the solid-liquid phase change, a diffusion equation of the Cahn-Hilliard kind for the solute dynamics, and the heat equation for the temperature change. The macroscopic motion of the fluid is also considered, so the resulting differential system couples with the Navier-Stokes equation. The compatibility of this system with the thermodynamic laws and a maximum theorem is proved.

  4. Ultrahigh temperature vapor core reactor-MHD system for space nuclear electric power

    NASA Technical Reports Server (NTRS)

    Maya, Isaac; Anghaie, Samim; Diaz, Nils J.; Dugan, Edward T.

    1991-01-01

    The conceptual design of a nuclear space power system based on the ultrahigh temperature vapor core reactor with MHD energy conversion is presented. This UF4-fueled gas core cavity reactor operates at 4000 K maximum core temperature and 40 atm. Materials experiments, conducted with UF4 up to 2200 K, demonstrate acceptable compatibility with tungsten-, molybdenum-, and carbon-based materials. The supporting nuclear, heat transfer, fluid flow and MHD analyses, and fissioning plasma physics experiments are also discussed.

  5. Optically stimulated luminescence dating of sediments

    NASA Astrophysics Data System (ADS)

    Troja, S. O.; Amore, C.; Barbagallo, G.; Burrafato, G.; Forzese, R.; Geremia, F.; Gueli, A. M.; Marzo, F.; Pirnaci, D.; Russo, M.; Turrisi, E.

    2000-04-01

    Optically stimulated luminescence (OSL) dating methodology was applied to the coarse grain fraction (100-500 μm) of quartz crystals (green light stimulated luminescence, GLSL) and feldspar crystals (infrared stimulated luminescence, IRSL) taken from sections at different depths of cores bored in various coastal lagoons (Longarini, Cuba, Bruno) on the south-east coast of Sicily. The results obtained give a sequence of congruent relative ages and maximum absolute ages compatible with the sedimentary structure, thus confirming the excellent potential of the methodology.

  6. Estimating tree crown widths for the primary Acadian species in Maine

    Treesearch

    Matthew B. Russell; Aaron R. Weiskittel

    2012-01-01

    In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...
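
    A hedged sketch of the quantile-regression step, using statsmodels' QuantReg on simulated dbh/crown-width pairs; the 95th percentile stands in for the maximum-crown-width quantile, since the snippet does not state the exact quantile or model form used.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(1)
      dbh = rng.uniform(10, 60, 300)                            # diameter at breast height (cm)
      cw = 1.0 + 0.25 * dbh + rng.normal(scale=1.2, size=300)   # crown width (m)

      X = sm.add_constant(dbh)
      # A high quantile traces the upper envelope of crown width against dbh,
      # which is the spirit of a 'maximum crown width' equation.
      fit = QuantReg(cw, X).fit(q=0.95)
      print(fit.params)     # intercept and slope of the 95th-percentile line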

  7. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  8. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
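
    The modified Gauss-Newton iteration at the heart of the Parameter-Estimation Process can be illustrated compactly. The Python sketch below fits an invented exponential toy model in place of a groundwater simulation, solving the normal equations (J^T W J) dp = J^T W r for each parameter update; MODFLOW-2000's damping, convergence tests and sensitivity-equation machinery are far richer.

      import numpy as np

      def gauss_newton(f, jac, y, w, p0, max_iter=20, damp=1.0):
          """Minimise S(p) = sum_i w_i * (y_i - f_i(p))**2 by damped
          Gauss-Newton steps (damp < 1 mimics a 'modified' method)."""
          p = np.asarray(p0, dtype=float)
          W = np.diag(w)
          for _ in range(max_iter):
              r = y - f(p)                       # residual vector
              J = jac(p)                         # sensitivity (Jacobian) matrix
              dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
              p += damp * dp
              if np.linalg.norm(dp) < 1e-8:
                  break
          return p

      # toy 'simulated heads': h_i = p0 * exp(-p1 * x_i)
      x = np.linspace(0.0, 2.0, 9)
      f = lambda p: p[0] * np.exp(-p[1] * x)
      jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                       -p[0] * x * np.exp(-p[1] * x)])
      y = f([10.0, 1.5]) + np.random.default_rng(2).normal(scale=0.05, size=x.size)
      print(gauss_newton(f, jac, y, np.ones_like(x), p0=[5.0, 1.0]))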

  9. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087

  10. A Maximum Muscle Strength Prediction Formula Using Theoretical Grade 3 Muscle Strength Value in Daniels et al.'s Manual Muscle Test, in Consideration of Age: An Investigation of Hip and Knee Joint Flexion and Extension.

    PubMed

    Usa, Hideyuki; Matsumura, Masashi; Ichikawa, Kazuna; Takei, Hitoshi

    2017-01-01

    This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf)-the static muscular moment to support a limb segment against gravity-from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and the maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, except for knee flexion in the elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable to young adults only.

  11. A Maximum Muscle Strength Prediction Formula Using Theoretical Grade 3 Muscle Strength Value in Daniels et al.'s Manual Muscle Test, in Consideration of Age: An Investigation of Hip and Knee Joint Flexion and Extension

    PubMed Central

    Matsumura, Masashi; Ichikawa, Kazuna; Takei, Hitoshi

    2017-01-01

    This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf)—the static muscular moment to support a limb segment against gravity—from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and the maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, except for knee flexion in the elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable to young adults only. PMID:28133549

  12. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  13. Studying Maximum Plantar Stress per Insole Design Using Foot CT-Scan Images of Hyperelastic Soft Tissues

    PubMed Central

    Sarikhani, Ali; Motalebizadeh, Abbas; Kamali Doost Azad, Babak

    2016-01-01

    The insole shape and the resulting plantar stress distribution have a pivotal impact on overall health. In this paper, the maximum plantar stress value and the stress distribution were studied by the Finite Element Method for different insole designs, namely a flat surface and a custom-molded (conformal) surface. Moreover, insole thickness, heel height, and different materials were examined to minimize the maximum stress and achieve the most uniform stress distribution. The foot shape and its details used in this paper were imported from online CT-scan images. Results show that the custom-molded insole reduced maximum stress 40% more than the flat-surface insole. Upon increase of thickness in both insole types, the stress distribution becomes more uniform and the maximum stress value decreases by up to 10%; however, increasing the thickness becomes ineffective above a threshold of 1 cm. By increasing heel height (degree of insole inclination), the maximum stress moves from the heel to the toes and the distribution becomes more uniform; inclinations of 0.2° to 0.4° for the custom-molded insole and over 1° for the flat insole are therefore helpful for stress control. Changing the insole material leaves the value of maximum stress nearly constant. The custom-molded (conformal) insole with 0.5 to 1 cm thickness and 0.2° to 0.4° inclination is found to be the most compatible form for the foot. PMID:27843284

  14. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
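
    The Newton-Raphson strategy the report describes can be illustrated on a simpler, self-contained likelihood. The sketch below solves the gamma-shape likelihood equation rather than the NDMMF equations themselves (those are given in the report); the data are simulated.

      import numpy as np
      from scipy.special import digamma, polygamma

      def gamma_shape_mle(x, k0=1.0, tol=1e-10, max_iter=50):
          """Newton-Raphson solution of ln(k) - digamma(k) = s, the
          likelihood equation for the gamma shape (scale profiled out)."""
          s = np.log(np.mean(x)) - np.mean(np.log(x))
          k = k0
          for _ in range(max_iter):
              g = np.log(k) - digamma(k) - s        # equation residual
              g_prime = 1.0 / k - polygamma(1, k)   # its derivative
              step = g / g_prime
              k -= step                             # Newton-Raphson update
              if abs(step) < tol:
                  break
          return k, np.mean(x) / k                  # (shape, scale) MLEs

      x = np.random.default_rng(3).gamma(shape=2.5, scale=1.2, size=5000)
      print(gamma_shape_mle(x))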

  15. CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

    NASA Technical Reports Server (NTRS)

    Szatmary, S. A.

    1994-01-01

    The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided when the maximum likelihood technique is used. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler using the VAX FORTRAN extensions and dynamic array allocation supported by this compiler for the IBM/MS-DOS or OS/2 operating systems. The dynamic array allocation routines allow the user to match the number of fracture sets and test specimens to the memory available. Machine requirements include IBM PC compatibles with optional math coprocessor. Program output is designed to fit 80-column format printers. Executables for both DOS and OS/2 are provided. CARES/PC is distributed on one 5.25 inch 360K MS-DOS format diskette in compressed format. The expansion tool PKUNZIP.EXE is supplied on the diskette. CARES/PC was developed in 1990. IBM PC and OS/2 are trademarks of International Business Machines. MS-DOS and MS OS/2 are trademarks of Microsoft Corporation. VAX is a trademark of Digital Equipment Corporation.
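
    For flavor, a maximum-likelihood Weibull fit of the kind CARES/PC performs can be reproduced with modern tools. The sketch below uses scipy on invented fracture stresses and adds the Kolmogorov-Smirnov check mentioned above; the censored-sample handling, bias corrections, and confidence bounds of CARES/PC are not reproduced.

      import numpy as np
      from scipy import stats

      # invented fracture stresses (MPa) from 30 nominally identical specimens
      stress = np.random.default_rng(4).weibull(10.0, size=30) * 450.0

      # two-parameter Weibull MLE, location fixed at zero as in fast-fracture work
      shape, loc, scale = stats.weibull_min.fit(stress, floc=0)
      print(f"Weibull modulus m = {shape:.2f}, characteristic strength = {scale:.1f} MPa")

      # Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution
      print(stats.kstest(stress, "weibull_min", args=(shape, loc, scale)))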

  16. Estimating Last Glacial Maximum Ice Thickness Using Porosity and Depth Relationships: Examples from AND-1B and AND-2A Cores, McMurdo Sound, Antarctica

    NASA Astrophysics Data System (ADS)

    Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.

    2009-12-01

    We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides a constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use the simple relationship Xover * (ρsed/ρice) = Xice, where Xover is the overburden thickness, ρsed is the sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm3), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the two lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. These lithologies both produce best estimates of approximately 1,000 meters of ice during Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during Last Glacial Maximum, due to the overprinting effect of Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.

  17. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  18. Sexual satisfaction, sexual compatibility, and relationship adjustment in couples: the role of sexual behaviors, orgasm, and men's discernment of women's intercourse orgasm.

    PubMed

    Klapilová, Kateřina; Brody, Stuart; Krejčová, Lucie; Husárová, Barbara; Binter, Jakub

    2015-03-01

    Research indicated that (i) vaginal orgasm consistency is associated with indices of psychological, intimate relationship, and physiological functioning, and (ii) masturbation is adversely associated with some such measures. The aim of this study was to examine the association of various dyadic and masturbation behavior frequencies and percentage of female orgasms during these activities with: (i) measures of dyadic adjustment; (ii) sexual satisfaction; and (iii) compatibility perceived by both partners. In a sample of 85 Czech long-term couples (aged 20-40; mean relationship length 5.4 years), both partners provided details of recent sexual behaviors and completed sexual satisfaction, Spanier dyadic adjustment, and Hurlbert sexual compatibility measures. Multiple regression analyses were used. The association of sexual behaviors with dyadic adjustment, sexual compatibility, and satisfaction was analyzed. In multivariate analyses, women's dyadic adjustment is independently predicted by greater vaginal orgasm consistency and lower frequency of women's masturbation. For both sexes, sexual compatibility was independently predicted by higher frequency of penile-vaginal intercourse and greater vaginal orgasm consistency. Women's sexual satisfaction score was significantly predicted by greater vaginal orgasm consistency and frequency of partner genital stimulation, and negatively by masturbation frequency. Men's sexual satisfaction score was significantly predicted by greater intercourse frequency and any vaginal orgasm of their female partners. Concordance of partner vaginal orgasm consistency estimates was associated with greater dyadic adjustment. The findings suggest that specifically penile-vaginal intercourse frequency and vaginal orgasm consistency are associated with indices of greater intimate relationship adjustment, satisfaction, and compatibility of both partners, and that women's masturbation is independently inversely associated with measures of dyadic and personal function. Results are discussed in light of previous research and an evolutionary theory of vaginal orgasm. © 2014 International Society for Sexual Medicine.

  19. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  20. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
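
    Taken together, the two abstracts above describe the classical EM fixed point for normal mixtures, recovered at step-size 1, with over-relaxed steps allowed for step-sizes between 0 and 2. A minimal sketch under those assumptions (data, component count and defaults invented; step-sizes near 2 may need safeguarding of the variances):

      import numpy as np

      def em_mixture(x, K=2, omega=1.0, iters=200, seed=0):
          """Successive approximations for a K-component normal mixture.
          omega = 1 is ordinary EM; 0 < omega < 2 is the relaxed step."""
          rng = np.random.default_rng(seed)
          pi = np.full(K, 1.0 / K)
          mu = rng.choice(x, K, replace=False)
          var = np.full(K, np.var(x))
          for _ in range(iters):
              # E-step: posterior membership probabilities
              dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                     / np.sqrt(2 * np.pi * var)
              w = pi * dens
              w /= w.sum(axis=1, keepdims=True)
              # M-step targets (the classical EM update)
              nk = w.sum(axis=0)
              pi_t = nk / x.size
              mu_t = (w * x[:, None]).sum(axis=0) / nk
              var_t = (w * (x[:, None] - mu_t) ** 2).sum(axis=0) / nk
              # relaxed step toward the EM target
              pi += omega * (pi_t - pi)
              pi /= pi.sum()
              mu += omega * (mu_t - mu)
              var += omega * (var_t - var)
          return pi, mu, var

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.5, 500)])
      print(em_mixture(x))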

  1. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
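
    A much-simplified sketch of a LOADEST-style calibration: an ordinary least-squares fit of a log-linear rating curve with a lognormal retransformation correction. AMLE's censored-data machinery, the seasonal and time terms, and LOADEST's diagnostics are all omitted, and the data are invented.

      import numpy as np

      rng = np.random.default_rng(5)
      Q = np.exp(rng.normal(2.0, 0.6, 200))                    # streamflow
      C = np.exp(0.3 + 0.4 * np.log(Q) + rng.normal(0, 0.2, 200))
      load = C * Q                                             # constituent load

      # ln(load) = b0 + b1*lnQ + b2*lnQ^2, with lnQ centred as in LOADEST
      lnQ = np.log(Q) - np.log(Q).mean()
      X = np.column_stack([np.ones_like(lnQ), lnQ, lnQ ** 2])
      b, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)

      resid = np.log(load) - X @ b
      s2 = resid.var(ddof=X.shape[1])
      # naive lognormal bias correction on retransformation
      est_load = np.exp(X @ b + 0.5 * s2)
      print(b, est_load[:3])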

  2. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background: Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings: Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance: Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
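
    A small sketch of the doubly-bounded idea applied to the Chao1 point estimate (the paper also truncates the confidence interval; only the point-estimate truncation is shown here, and the counts are invented):

      def chao1_doubly_bounded(counts, max_classes):
          """Chao1 richness estimate truncated at a known maximum
          number of classes."""
          s_obs = sum(1 for c in counts if c > 0)
          f1 = sum(1 for c in counts if c == 1)   # singletons
          f2 = sum(1 for c in counts if c == 2)   # doubletons
          if f2 > 0:
              s_hat = s_obs + f1 * f1 / (2.0 * f2)
          else:                                   # bias-corrected variant
              s_hat = s_obs + f1 * (f1 - 1) / 2.0
          return min(s_hat, max_classes)          # enforce the upper bound

      # e.g. 8 observed tool classes out of a fixed repertoire of 12
      print(chao1_doubly_bounded([5, 3, 1, 1, 2, 1, 7, 4], max_classes=12))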

  3. Maximum angular accuracy of pulsed laser radar in photocounting limit.

    PubMed

    Elbaum, M; Diament, P; King, M; Edelson, W

    1977-07-01

    To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.

  4. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  5. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and hence heavy-tailed marginal distributions of log-returns. We consider two routes to choose the regularization and we compare our MAP estimate to a realized volatility measure for three exchange rates.
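
    Under the stated assumptions (returns conditionally Gaussian given the log-volatility, a double-exponential prior on log-volatility increments), a plausible rendering of the MAP problem is

      \hat{h}_{1:T} \;=\; \arg\min_{h_{1:T}} \;
      \frac{1}{2} \sum_{t=1}^{T} \left( h_t + r_t^2 \, e^{-h_t} \right)
      \;+\; \lambda \sum_{t=2}^{T} \left| h_t - h_{t-1} \right|,

    where r_t are the log-returns, h_t the log-volatility and lambda the regularization; the L1 penalty is what permits the sharp jumps mentioned above. The paper's exact specification may differ.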

  6. Process-based soil erodibility estimation for empirical water erosion models

    USDA-ARS?s Scientific Manuscript database

    A variety of modeling technologies exist for water erosion prediction, each with specific parameters. It is of interest to scrutinize the parameters of a particular model from the standpoint of their compatibility with the datasets of other models. In this research, functional relationships between soil erodibilit...

  7. Environmental compatibility of closed landfills - assessing future pollution hazards.

    PubMed

    Laner, David; Fellner, Johann; Brunner, Paul H

    2011-01-01

    Municipal solid waste landfills need to be managed after closure. This so-called aftercare comprises the treatment and monitoring of residual emissions as well as the maintenance and control of landfill elements. The measures can be terminated when a landfill no longer poses a threat to the environment. Consequently, the evaluation of landfill environmental compatibility includes an estimation of future pollution hazards as well as an assessment of the vulnerability of the affected environment. An approach to assess future emission rates is presented and discussed in view of long-term environmental compatibility. The suggested method consists of (a) a continuous model to predict emissions under the assumption of constant landfill conditions, and (b) different scenarios to evaluate the effects of changing conditions within and around the landfill. The model takes into account the actual status of the landfill, hence different methods to gain information about landfill characteristics have to be applied. Finally, assumptions, uncertainties, and limitations of the methodology are discussed, and the need for future research is outlined.

  8. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of these estimates in identifying mislabeled patterns are presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  9. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  10. Enhanced Telecom Emission from Single Group-IV Quantum Dots by Precise CMOS-Compatible Positioning in Photonic Crystal Cavities.

    PubMed

    Schatzl, Magdalena; Hackl, Florian; Glaser, Martin; Rauter, Patrick; Brehm, Moritz; Spindlberger, Lukas; Simbula, Angelica; Galli, Matteo; Fromherz, Thomas; Schäffler, Friedrich

    2017-03-15

    Efficient coupling to integrated high-quality-factor cavities is crucial for the employment of germanium quantum dot (QD) emitters in future monolithic silicon-based optoelectronic platforms. We report on strongly enhanced emission from single Ge QDs into L3 photonic crystal resonator (PCR) modes based on precise positioning of these dots at the maximum of the respective mode field energy density. Perfect site control of Ge QDs grown on prepatterned silicon-on-insulator substrates was exploited to fabricate in one processing run almost 300 PCRs containing single QDs in systematically varying positions within the cavities. Extensive photoluminescence studies on this cavity chip enable a direct evaluation of the position-dependent coupling efficiency between single dots and selected cavity modes. The experimental results demonstrate the great potential of the approach allowing CMOS-compatible parallel fabrication of arrays of spatially matched dot/cavity systems for group-IV-based data transfer or quantum optical systems in the telecom regime.

  11. Enhanced Telecom Emission from Single Group-IV Quantum Dots by Precise CMOS-Compatible Positioning in Photonic Crystal Cavities

    PubMed Central

    2017-01-01

    Efficient coupling to integrated high-quality-factor cavities is crucial for the employment of germanium quantum dot (QD) emitters in future monolithic silicon-based optoelectronic platforms. We report on strongly enhanced emission from single Ge QDs into L3 photonic crystal resonator (PCR) modes based on precise positioning of these dots at the maximum of the respective mode field energy density. Perfect site control of Ge QDs grown on prepatterned silicon-on-insulator substrates was exploited to fabricate in one processing run almost 300 PCRs containing single QDs in systematically varying positions within the cavities. Extensive photoluminescence studies on this cavity chip enable a direct evaluation of the position-dependent coupling efficiency between single dots and selected cavity modes. The experimental results demonstrate the great potential of the approach allowing CMOS-compatible parallel fabrication of arrays of spatially matched dot/cavity systems for group-IV-based data transfer or quantum optical systems in the telecom regime. PMID:28345012

  12. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia for 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
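
    A brief sketch of the two fitting routes with scipy (rainfall data invented; the threshold here is a fixed quantile rather than the MSE-based Adapted Hill selection described above):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      daily = rng.gamma(0.4, 12.0, size=(31, 365))   # 31 years of daily rainfall (mm)

      # Annual Maximum series -> GEV by maximum likelihood
      ann_max = daily.max(axis=1)
      c, loc, scale = stats.genextreme.fit(ann_max)

      # Partial Duration series -> Generalized Pareto on threshold excesses
      u = np.quantile(daily, 0.995)                  # a candidate threshold
      excess = daily[daily > u] - u
      c_p, _, scale_p = stats.genpareto.fit(excess, floc=0)

      # e.g. the 50-year return value from the fitted GEV
      print(stats.genextreme.ppf(1 - 1.0 / 50, c, loc, scale))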

  13. A maximum power point prediction method for group control of photovoltaic water pumping systems based on parameter identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Su, J. H.; Guo, L.; Chen, J.

    2017-06-01

    This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve the optimization problems in group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) to estimate and identify the model parameters from multiple P-V characteristic curves of the PVA model, and then corrects the identification results through the least squares method. On this basis, the irradiation level and operating temperature can be estimated under any condition, so an accurate PVA model is established and disturbance-free MPP estimation is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
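
    The following sketch illustrates the general idea of GA-based parameter identification followed by MPP prediction. It is not the paper's implementation: the simplified single-diode model (series and shunt resistances neglected), the truncation-selection GA, and all numeric values are illustrative assumptions.

```python
import numpy as np

VT = 0.0258 * 36          # thermal voltage times an assumed series cell count
def pv_current(V, Iph, I0, n):
    """Simplified single-diode PV model (Rs, Rsh neglected)."""
    return Iph - I0 * (np.exp(V / (n * VT)) - 1.0)

rng = np.random.default_rng(0)
V = np.linspace(0, 21, 50)
I_meas = pv_current(V, 8.2, 2e-7, 1.3) + rng.normal(0, 0.02, V.size)  # synthetic I-V data

lo = np.array([1.0, 1e-9, 0.8]); hi = np.array([12.0, 1e-5, 2.0])    # parameter bounds
pop = lo + rng.random((200, 3)) * (hi - lo)

def fitness(p):
    return -np.mean((pv_current(V, *p) - I_meas) ** 2)  # negative MSE against the curve

for _ in range(150):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[-50:]]                  # keep the 50 fittest
    children = parents[rng.integers(0, 50, 150)] * rng.normal(1.0, 0.05, (150, 3))
    pop = np.vstack([parents, np.clip(children, lo, hi)])

best = max(pop, key=fitness)
P = V * pv_current(V, *best)                            # power curve of the identified model
print("identified (Iph, I0, n):", best)
print(f"predicted MPP: {P.max():.1f} W at {V[P.argmax()]:.1f} V")
```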

  14. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  15. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  16. Do we know how to reconcile preservation of landscapes with adaptation of agriculture to climate change? A case-study in a hilly area in Southern Italy

    NASA Astrophysics Data System (ADS)

    Menenti, Massimo; Alfieri, Silvia; Basile, Angelo; Bonfante, Antonello; Monaco, Eugenia; Riccardi, Maria; De Lorenzi, Francesca

    2013-04-01

    Limited impacts of climate change on agricultural yields are unlikely to induce significant changes in current landscapes. Larger impacts, unacceptable on economic or social grounds, are likely to trigger interventions towards adaptation of agricultural production systems by reducing or removing vulnerabilities to climate variability and change. Such interventions may require a transition to a different production system, i.e. complete substitution of current crops, or displacement of current crops from their current locations towards other locations, e.g. at higher elevations within the landscape. We have assessed the impacts of climate change and evaluated options for adaptation in a valley in Southern Italy dominated by vine and olive orchards with a significant presence of wheat. We first estimated the climatic requirements of several varieties of each dominant species. Next, to identify options for adaptation, we evaluated the compatibility of these requirements with indicators of a reference (current) climate and of a future climate. This climate-compatibility assessment was done for each soil unit within the valley, leading to maps of locations where each crop is expected to be compatible with climate. This identifies both potential crop substitutions within the entire valley and crop displacements from one location to another within it. Two climate scenarios were considered: reference (1961-90) and future (2021-2050) climate, the former from climatic statistics and the latter from statistical downscaling of general circulation models (AOGCM). Climatic data consist of daily time series of maximum and minimum temperature and daily rainfall on a grid with a spatial resolution of 35 km. We evaluated the adaptive capacity of the "Valle Telesina" (Campania Region, Southern Italy). A mechanistic model of water flow in the soil-plant-atmosphere system (SWAP) was used to describe the hydrological conditions in response to climate for each soil unit. Crop-specific input data and model parameters were estimated on the basis of local experiments and of the scientific literature, and assumed to be generically representative of the species. Time series of MODIS TIR data were used to downscale gridded climate data on air temperature for both the reference and the future climate. The results indicate that no complete crop substitution will be required within this time frame, i.e. the Valle Telesina will preserve its typical landscape features of a vine and olive orchard dominated production system, typical of many regions in Mediterranean Europe. On the other hand, very significant crop displacements will be necessary to grow each variety under optimal hydrothermal conditions, from the point of view of both quantity and quality of yield. The work was carried out within the Italian national project AGROSCENARI funded by the Ministry for Agricultural, Food and Forest Policies (MIPAAF, D.M. 8608/7303/2008).

  17. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane, and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  18. Influence of alkyl chain length compatibility on microemulsion structure and solubilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bansal, V.K.; O'Connell, J.P.; Shah, D.O.

    1980-06-01

    The water solubilization capacity of water/oil microemulsions is studied as a function of the alkyl chain length of the oil (C₈ to C₁₆), surfactant (C₁₄ and C₁₈ fatty acid soaps), and alcohol (C₄ to C₇). Sodium stearate and sodium myristate were used as surfactants. For n-butanol microemulsions the maximum amount of water solubilized in the microemulsion decreased continuously with increasing oil chain length; for n-heptanol it increased continuously. For n-pentanol and n-hexanol systems, water solubilization reached a maximum when the oil chain length plus alcohol chain length was equal to that of the surfactant. The electric resistance and dielectric constant of the microemulsions are also measured as a function of the alkyl chain length of the oil. 48 references.

  19. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
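
    A minimal sketch of the first estimator discussed above: a Gaussian kernel density estimate whose scaling factor is chosen automatically from the sample alone. The rule-of-thumb bandwidth (Silverman's) stands in for the paper's automatic algorithm and is an assumption.

```python
import numpy as np

def kde(x_grid, sample, h=None):
    """Gaussian kernel density estimate evaluated on x_grid."""
    if h is None:  # automatic scaling factor: Silverman's rule of thumb
        h = 1.06 * sample.std(ddof=1) * sample.size ** (-1 / 5)
    u = (x_grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])  # bimodal test data
grid = np.linspace(-6, 6, 400)
density = kde(grid, sample)
print(f"integral of estimate ≈ {np.trapz(density, grid):.3f}")  # should be close to 1
```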

  20. Predicting what helminth parasites a fish species should have using Parasite Co-occurrence Modeler (PaCo)

    USGS Publications Warehouse

    Strona, Giovanni; Lafferty, Kevin D.

    2013-01-01

    Fish pathologists are often interested in which parasites would likely be present in a particular host. Parasite Co-occurrence Modeler (PaCo) is a tool for identifying a list of parasites known from fish species that are similar ecologically, phylogenetically, and geographically to the host of interest. PaCo uses data from FishBase (maximum length, growth rate, life span, age at maturity, trophic level, phylogeny, and biogeography) to estimate compatibility between a target host and parasite species–genera from the major helminth groups (Acanthocephala, Cestoda, Monogenea, Nematoda, and Trematoda). Users can include any combination of host attributes in a model. These unique features make PaCo an innovative tool for addressing both theoretical and applied questions in parasitology. In addition to predicting the occurrence of parasites, PaCo can be used to investigate how host characteristics shape parasite communities. To test the performance of the PaCo algorithm, we created 12,400 parasite lists by applying any possible combination of model parameters (248) to 50 fish hosts. We then measured the relative importance of each parameter by assessing their frequency in the best models for each host. Host phylogeny and host geography were identified as the most important factors, with both present in 88% of the best models. Habitat (64%) was identified in more than half of the best models. Among ecological parameters, trophic level (41%) was the most relevant while life span (34%), growth rate (32%), maximum length (28%), and age at maturity (20%) were less commonly linked to best models. PaCo is free to use at www.purl.oclc.org/fishpest.

  1. Investigation of Structural, Compositional and Anti-Microbial Properties of Copper Thin Film Using Direct Current Magnetron Sputtering for Surgical Instruments

    NASA Astrophysics Data System (ADS)

    Kalaiselvam, S.; Sandhya, J.; Krishnan, K. V. Hari; Kedharnath, A.; Arulkumar, G.; Roseline, A. Ameelia

    Surgical instruments and other bioimplant devices, owing to their importance in the biomedical industry, require high biocompatibility for use in the human body. Nevertheless, issues of compatibility and bacterial infection are quite common in such devices. Hence, the development of surface coatings on various substrates is a promising technique to combat the issues arising in these implant materials. The present investigation aims at coating copper on a stainless steel substrate using DC magnetron sputtering to achieve films of the required thickness (0.5-8 μm). The deposition pressure, substrate temperature, power supply, and distance between the specimen and target are optimized and held constant, while the sputtering time (30-110 min) is varied. The sputtered copper thin film's morphology and composition are characterized by SEM and EDAX. X-ray diffraction analysis shows copper oriented on the (111) and (002) planes and copper oxide on the (111) plane. The contact angle of the copper thin film is 92°, while AISI 316L shows 73°. Antimicrobial studies carried out on Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae and Candida albicans show maximum reductions of up to 35, 26, 54 and 39 CFU/mL, respectively, after 24 h. Cell viability, studied by the MTT assay on a Vero cell line at 24 h, 48 h and 72 h, averages 43.85%. The maximum copper release from the thin film into the culture medium, estimated from AAS studies, is 6691 μg/L. The copper-coated substrate does not show much reaction with living Vero cells, whereas the bacteria and fungi are found to be destroyed.

  2. Predicting what helminth parasites a fish species should have using Parasite Co-occurrence Modeler (PaCo).

    PubMed

    Strona, Giovanni; Lafferty, Kevin D

    2013-02-01

    Fish pathologists are often interested in which parasites would likely be present in a particular host. Parasite Co-occurrence Modeler (PaCo) is a tool for identifying a list of parasites known from fish species that are similar ecologically, phylogenetically, and geographically to the host of interest. PaCo uses data from FishBase (maximum length, growth rate, life span, age at maturity, trophic level, phylogeny, and biogeography) to estimate compatibility between a target host and parasite species-genera from the major helminth groups (Acanthocephala, Cestoda, Monogenea, Nematoda, and Trematoda). Users can include any combination of host attributes in a model. These unique features make PaCo an innovative tool for addressing both theoretical and applied questions in parasitology. In addition to predicting the occurrence of parasites, PaCo can be used to investigate how host characteristics shape parasite communities. To test the performance of the PaCo algorithm, we created 12,400 parasite lists by applying any possible combination of model parameters (248) to 50 fish hosts. We then measured the relative importance of each parameter by assessing their frequency in the best models for each host. Host phylogeny and host geography were identified as the most important factors, with both present in 88% of the best models. Habitat (64%) was identified in more than half of the best models. Among ecological parameters, trophic level (41%) was the most relevant while life span (34%), growth rate (32%), maximum length (28%), and age at maturity (20%) were less commonly linked to best models. PaCo is free to use at www.purl.oclc.org/fishpest.

  3. Decadal Changes in Global Ocean Annual Primary Production

    NASA Technical Reports Server (NTRS)

    Gregg, Watson; Conkright, Margarita E.; Behrenfeld, Michael J.; Ginoux, Paul; Casey, Nancy W.; Koblinsky, Chester J. (Technical Monitor)

    2002-01-01

    The Sea-viewing Wide Field-of-View Sensor (SeaWiFS) has produced the first multi-year time series of global ocean chlorophyll observations since the demise of the Coastal Zone Color Scanner (CZCS) in 1986. Global observations from 1997 to the present from SeaWiFS, combined with observations from 1979-1986 from the CZCS, should in principle provide an opportunity to observe decadal changes in global ocean annual primary production, since chlorophyll is the primary driver for estimates of primary production. However, incompatibilities between algorithms have so far precluded quantitative analysis. We have developed and applied compatible processing methods for the CZCS, using modern advances in atmospheric correction and consistent bio-optical algorithms to advance the CZCS archive to comparable quality with SeaWiFS. We applied blending methodologies, in which in situ observations are incorporated into the CZCS and SeaWiFS data records, to improve the residuals. These re-analyzed, blended data records provide maximum compatibility and permit, for the first time, a quantitative analysis of the changes in global ocean primary production between the early-to-mid 1980s and the present, using synoptic satellite observations. An intercomparison of the global and regional primary production from these blended satellite observations is important for understanding global climate change and its effects on ocean biota. Photosynthesis by chlorophyll-containing phytoplankton is responsible for biotic uptake of carbon in the oceans and, potentially, ultimately from the atmosphere. Global ocean annual primary production decreased by nearly 6% from the CZCS record to SeaWiFS, i.e. from the early 1980s to the present. Annual primary production in the high latitudes was responsible for most of the decadal change. Conversely, primary production in the low latitudes generally increased, with the exception of the tropical Pacific. The differences and similarities of the two data records provide evidence of how the Earth's climate may be changing and how ocean biota respond. Furthermore, the results have implications for the ocean carbon cycle.

  4. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  5. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  6. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  7. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  8. 75 FR 77781 - Amendment of the Commission's Rules Governing Hearing Aid-Compatible Mobile Handsets...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 20 [FCC 10-145; WT Docket No. 07-250] Amendment [email protected]fcc.gov . SUPPLEMENTARY INFORMATION: The Federal Communications Commission (FCC) has received...). Form No.: FCC Form 655--electronic only. Estimated Annual Burden: 925 respondents; 925 responses; 12...

  9. REST: a computer system for estimating logging residue by using the line-intersect method

    Treesearch

    A. Jeff Martin

    1975-01-01

    A computer program was designed to accept logging-residue measurements obtained by line-intersect sampling and transform them into summaries useful for the land manager. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.

  10. User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.

    1982-01-01

    PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.

  11. INDOOR AIR QUALITY MODEL VERSION 1.0 DOCUMENTATION

    EPA Science Inventory

    The report presents a multiroom model for estimating the impact of various sources on indoor air quality (IAQ). The model is written for use on IBM-PC and compatible microcomputers. It is easy to use with a menu-driven user interface. Data are entered using a fill-in-a-form inter...

  12. Statistical field estimators for multiscale simulations.

    PubMed

    Eapen, Jacob; Li, Ju; Yip, Sidney

    2005-11-01

    We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.

  13. Physical Compatibility of Magnesium Sulfate and Sodium Bicarbonate in a Pharmacy-compounded Bicarbonate-buffered Hemofiltration Solution

    PubMed Central

    Moriyama, Brad; Henning, Stacey A.; Jin, Haksong; Kolf, Mike; Rehak, Nadja N.; Danner, Robert L.; Walsh, Thomas J.; Grimes, George J.

    2011-01-01

    PURPOSE To assess the physical compatibility of magnesium sulfate and sodium bicarbonate in a pharmacy-compounded bicarbonate-buffered hemofiltration solution used at the National Institutes of Health Clinical Center (http://www.cc.nih.gov). METHODS Two hemofiltration fluid formulations with a bicarbonate of 50 mEq/L and a magnesium of 1.5 mEq/L or 15 mEq/L were prepared in triplicate with an automated compounding device. The hemofiltration solution with a bicarbonate of 50 mEq/L and a magnesium of 1.5 mEq/L contains the maximum concentration of additives that we use in clinical practice. The hemofiltration solution of 15 mEq/L of magnesium and 50 mEq/L of bicarbonate was used to study the physicochemical properties of this interaction. The solutions were stored without light protection at 22 to 25 °C for 48 hours. Physical compatibility was assessed by visual inspection and microscopy. The pH of the solutions was assayed at 3 to 4 hours and 52 to 53 hours after compounding. In addition, electrolyte and glucose concentrations in the solutions were assayed at two time points after preparation: 3 to 4 hours and 50 to 51 hours. RESULTS No particulate matter was observed by visual and microscopic inspection in the compounded hemofiltration solutions at 48 hours. Electrolyte and glucose concentrations and pH were similar at both time points after solution preparation. CONCLUSION Magnesium sulfate (1.5 mEq/L) and sodium bicarbonate (50 mEq/L) were physically compatible in a pharmacy-compounded bicarbonate-buffered hemofiltration solution at room temperature without light protection at 48 hours. PMID:20237384

  14. A basin-scale approach to estimating stream temperatures of tributaries to the lower Klamath River, California

    USGS Publications Warehouse

    Flint, L.E.; Flint, A.L.

    2008-01-01

    Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6 °C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
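
    A minimal sketch of the regression approach just described (seasonality plus air temperature as predictors of maximum stream temperature), fitted by ordinary least squares. The synthetic data, the harmonic predictors, and all coefficients are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)
doy = np.arange(1, 366)  # day of year
t_air_max = 15 + 10 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2, doy.size)
t_stream_max = (3 + 0.6 * t_air_max
                + 4 * np.sin(2 * np.pi * (doy - 110) / 365)
                + rng.normal(0, 0.9, doy.size))  # synthetic observed stream maxima, °C

# Design matrix: intercept, seasonal harmonics (the "seasonality" term), air temperature
X = np.column_stack([np.ones_like(doy, dtype=float),
                     np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365),
                     t_air_max])
coef, *_ = np.linalg.lstsq(X, t_stream_max, rcond=None)
resid = t_stream_max - X @ coef
print(f"fitted coefficients: {np.round(coef, 2)}; residual SE: {resid.std(ddof=4):.2f} °C")
```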

  15. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems of the maximum entropy method of moments: one obtains posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
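
    The following sketch shows the maximum entropy method of moments in its textbook form: the density p(x) ∝ exp(-Σ λk x^k) whose power moments match given values is found by minimizing the convex dual over the Lagrange multipliers. The bounded support and the target moment values are assumptions for illustration, not data from the paper.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-1, 1, 2001)                 # assumed bounded support
target = np.array([0.0, 0.3, 0.0, 0.15])     # moments of orders 1..4 (illustrative, feasible)
powers = np.vstack([x ** k for k in range(1, 5)])

def dual(lam):
    """Convex dual: log partition function plus lam . target moments."""
    logZ = np.log(np.trapz(np.exp(-lam @ powers), x))
    return logZ + lam @ target

res = minimize(dual, np.zeros(4), method="BFGS")  # solve for the Lagrange multipliers
p = np.exp(-res.x @ powers)
p /= np.trapz(p, x)                               # normalize the maxent density
print("recovered moments:", [round(np.trapz(p * x ** k, x), 3) for k in range(1, 5)])
```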

  16. Global chain properties of an all l-α-eicosapeptide with a secondary α-helix and its all retro d-inverso-α-eicosapeptide estimated through the modeling of their CZE-determined electrophoretic mobilities.

    PubMed

    Deiber, Julio A; Piaggio, Maria V; Peirotti, Marta B

    2014-03-01

    Several global chain properties of relatively long peptides composed of 20 amino acid residues are estimated through the modeling of their experimental effective electrophoretic mobilities determined by CZE for 2 < pH < 6. In this regard, an all l-α-eicosapeptide including a secondary α-helix (Peptide 1) and its all retro d-inverso-α-eicosapeptide (Peptide 2) are considered. Although Peptides 1 and 2 are isomeric chains, they do not present similar global conformations in the whole range of pH studied. These peptides may also differ in the quality of interactions between BGE components and the chain, depending on the pH value. Three Peptide 1 fragments (Peptides 3, 4, and 5) are also analyzed in this framework with the following purposes: (i) visualization of the effects of the initial and final strands at each side of the α-helix on the global chain conformations of Peptide 1 at different pHs, and (ii) analysis of the global chain conformations of Peptides 1 and 2, and of the Peptide 1 fragments, in relation to their pI values. The peptide maximum and minimum hydrations predicted by the model, compatible with experimental effective electrophoretic mobilities at different pHs, are also quantified and discussed, and needs for further research concerning chain hydration are proposed. It is shown that CZE is a useful analytical tool for peptidomimetic designs and purposes. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  18. Estimation of eye lens doses received by pediatric interventional cardiologists.

    PubMed

    Alejo, L; Koren, C; Ferrer, C; Corredoira, E; Serrada, A

    2015-09-01

    Maximum Hp(0.07) dose to the eye lens received in a year by the pediatric interventional cardiologists has been estimated. Optically stimulated luminescence dosimeters were placed on the eyes of an anthropomorphic phantom, whose position in the room simulates the most common irradiation conditions. Maximum workload was considered with data collected from procedures performed in the Hospital. None of the maximum values obtained exceed the dose limit of 20 mSv recommended by ICRP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value", the slope or "b-value", and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
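
    One common way to write a modified Gutenberg-Richter distribution with the three parameters named above is the tapered form below, expressed in seismic moment M; whether this exact parameterization is the one used in the study is an assumption.

```latex
% Tapered Gutenberg-Richter law (one common parameterization; an assumption here).
% \Phi(M) is the cumulative rate of events with seismic moment at least M.
\[
  \Phi(M) \;=\; \Phi(M_t)\,\left(\frac{M_t}{M}\right)^{\beta}
            \exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t ,
\]
% M_t is the threshold moment (setting the multiplicative "a-value"),
% \beta \approx 2b/3 is the moment-space analogue of the "b-value", and the
% exponential taper at the corner moment M_c produces the strong decrease
% of earthquake rate at large magnitudes.
```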

  20. Northeast Artificial Intelligence Consortium Annual Report - 1988. Volume 4. Distributed AI for Communications Network Management

    DTIC Science & Technology

    1989-10-01

    ... to replan their lost actions in the reconstruction of the plan. This last piece of information is provided through the use of ... maximum compatible sets and delete subsets otherwise; for every plan fragment pf_i for g_i, the first goal in goals, if pf_i does not exceed resource ... deletes a non-default assumption. 4.5.3.2 Data Structures: The MATMS is a frame-based system in which there are five basic types of objects: beliefs

  1. Measurement of a 200 MeV proton beam using a polyurethane dosimeter

    NASA Astrophysics Data System (ADS)

    Heard, Malcolm; Adamovics, John; Ibbott, Geoffrey

    2006-12-01

    PRESAGE™ (Heuris Pharma LLC, Skillman, NJ) is a three-dimensional polyurethane dosimeter containing a leuco dye that generates a color change when irradiated. The dosimeter is solid and does not require a container to maintain its shape. The dosimeter is transparent before irradiation and the maximum absorbance of the leuco dye occurs at 633 nm, which is compatible with the OCT-OPUS™ laser CT scanner (MGS Research, Inc., Madison, CT). The purpose of this study was to investigate the response of PRESAGE™ to proton beam radiotherapy.

  2. Quick setting water-compatible furfuryl alcohol polymer concretes

    DOEpatents

    Sugama, Toshifumi; Kukacka, Lawrence E.; Horn, William H.

    1982-11-30

    A novel quick setting polymer concrete composite comprising a furfuryl alcohol monomer, an aggregate containing a maximum of 8% by weight water, and about 1-10% trichlorotoluene initiator and about 20-80% powdered metal salt promoter, such as zinc chloride, based on the weight of said monomer, to initiate and promote polymerization of said monomer in the presence of said aggregate, within 1 hour after mixing at a temperature of -20 °C to 40 °C, to produce a polymer concrete having a 1 hour compressive strength greater than 2000 psi.

  3. FDA-sunlamp recommended Maximum Timer Interval And Exposure Schedule: consensus ISO/CIE dose equivalence.

    PubMed

    Dowdy, John C; Czako, Eugene A; Stepp, Michael E; Schlitt, Steven C; Bender, Gregory R; Khan, Lateef U; Shinneman, Kenneth D; Karos, Manuel G; Shepherd, James G; Sayre, Robert M

    2011-09-01

    The authors compared calculations of sunlamp maximum exposure times following current USFDA Guidance Policy on the Maximum Timer Interval and Exposure Schedule with USFDA/CDRH proposals revising these to equivalent erythemal exposures of the ISO/CIE Standard Erythema Dose (SED). In 2003, USFDA/CDRH proposed replacing their unique CDRH/Lytle erythema action spectrum with the ISO/CIE erythema action spectrum and revising the sunlamp maximum exposure timer to 600 J m⁻² ISO/CIE effective dose, presented as being biologically equivalent. Preliminary analysis failed to confirm said equivalence, indicating instead ∼38% increased exposure when applying these proposed revisions. To confirm and refine this finding, a collaboration of tanning bed and UV lamp manufacturers compiled 89 UV spectra representing a broad sampling of U.S. indoor tanning equipment. USFDA maximum recommended exposure time (Te) per current sunlamp guidance and CIE erythemal effectiveness per ISO/CIE standard were calculated. The CIE effective dose delivered per Te averaged 456 J(CIE) m⁻² (SD = 0.17) or ∼4.5 SED. The authors found that CDRH's proposed 600 J(CIE) m⁻² recommended maximum sunlamp exposure exceeds the current Te erythemal dose by ∼33%. The current USFDA 0.75 MED initial exposure was ∼0.9 SED, consistent with the 1.0 SED initial dose in existing international sunlamp standards. As no sunlamps analyzed exceeded 5 SED, a revised maximum exposure of 500 J(CIE) m⁻² (∼80% of CDRH's proposal) should be compatible with existing tanning equipment. A tanning acclimatization schedule is proposed beginning at 1 SED thrice-weekly, increasing uniformly stepwise over 4 wk to a 5 SED maximum exposure, in conjunction with a tan maintenance schedule of twice-weekly 5 SED sessions, as biologically equivalent to current USFDA sunlamp policy.
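
    A minimal sketch of the ISO/CIE dose arithmetic involved: erythemally weighted irradiance is the lamp spectrum integrated against the CIE erythema action spectrum, doses are expressed in SED (1 SED = 100 J m⁻² erythemally weighted), and exposure times follow by division. The Gaussian lamp spectrum is invented for illustration; the 456, 600, and 500 J m⁻² dose levels are those quoted in the abstract.

```python
import numpy as np

def cie_erythema_weight(lam):
    """ISO/CIE erythema action spectrum s_er(lambda), lam in nm (250-400)."""
    lam = np.asarray(lam, dtype=float)
    return np.where(lam <= 298, 1.0,
           np.where(lam <= 328, 10 ** (0.094 * (298 - lam)),
                                10 ** (0.015 * (139 - lam))))

lam = np.arange(250, 401)                                   # nm
spectrum = 2.0 * np.exp(-0.5 * ((lam - 350) / 25) ** 2)     # hypothetical lamp, W m^-2 nm^-1
E_er = np.trapz(spectrum * cie_erythema_weight(lam), lam)   # erythemal irradiance, W m^-2

SED = 100.0  # 1 SED = 100 J m^-2 of erythemally weighted exposure
for dose_J, label in [(456, "mean dose per current Te"),
                      (600, "CDRH proposal"),
                      (500, "authors' suggested maximum")]:
    print(f"{label}: {dose_J / E_er / 60:.1f} min, {dose_J / SED:.1f} SED")
```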

  4. What controls the maximum magnitude of injection-induced earthquakes?

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
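
    A minimal sketch of the first (deterministic) approach described above: the McGarr (2014) bound caps seismic moment at shear modulus times net injected volume, converted here to moment magnitude with the standard Hanks-Kanamori relation. The 30 GPa shear modulus and the example volumes are assumed illustrative values.

```python
import math

def mcgarr_max_magnitude(delta_v_m3, shear_modulus_pa=3.0e10):
    """Upper-bound moment magnitude for a net injected volume (m^3)."""
    m0_max = shear_modulus_pa * delta_v_m3        # seismic moment bound, N*m
    return (2.0 / 3.0) * math.log10(m0_max) - 6.07  # Hanks-Kanamori Mw

for v in (1e4, 1e5, 1e6):                         # net injected volumes in m^3
    print(f"ΔV = {v:.0e} m^3  ->  Mw_max ≈ {mcgarr_max_magnitude(v):.2f}")
```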

  5. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  6. Statistical analysis of dynamic fibrils observed from NST/BBSO observations

    NASA Astrophysics Data System (ADS)

    Gopalan Priya, Thambaje; Su, Jiang-Tao; Chen, Jie; Deng, Yuan-Yong; Prasad Choudhury, Debi

    2018-02-01

    We present the results obtained from the analysis of dynamic fibrils in NOAA active region (AR) 12132, using high resolution Hα observations from the New Solar Telescope operating at Big Bear Solar Observatory. The dynamic fibrils are seen to be moving up and down, and most of them are periodic and have a jet-like appearance. We found from our observations that the fibrils follow almost perfect parabolic paths in many cases. A statistical analysis of the parabolic-path properties (deceleration, maximum velocity, duration and kinetic energy of these fibrils) is presented here. We found the average maximum velocity to be around 15 km s⁻¹ and the mean deceleration to be around 100 m s⁻². The observed deceleration is a fraction of the solar surface gravity and is not compatible with purely ballistic motion under solar gravity. We found a positive correlation between deceleration and maximum velocity. This correlation is consistent with earlier simulations of magnetoacoustic shock waves propagating upward.
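
    A minimal sketch of the parabolic-path analysis: fit h(t) = c0 + c1 t + c2 t² to a fibril-top trajectory and read deceleration and maximum velocity off the coefficients. The synthetic trajectory below simply encodes the mean values reported above (15 km s⁻¹, 100 m s⁻²); it is not the observational data.

```python
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0, 300, 61)                                      # s
h = 15e3 * t - 0.5 * 100 * t ** 2 + rng.normal(0, 2e4, t.size)   # m, noisy parabola

c2, c1, c0 = np.polyfit(t, h, 2)       # quadratic fit, highest degree first
deceleration = -2 * c2                 # m s^-2
v_max = abs(c1 + 2 * c2 * t[0])        # velocity at launch, m s^-1
print(f"deceleration ≈ {deceleration:.0f} m s^-2, "
      f"max velocity ≈ {v_max / 1e3:.1f} km s^-1, duration {t[-1] - t[0]:.0f} s")
```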

  7. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples to both flight and simulated data cases.

  8. The biogeography of the yeti crabs (Kiwaidae) with notes on the phylogeny of the Chirostyloidea (Decapoda: Anomura)

    PubMed Central

    Roterman, C. N.; Copley, J. T.; Linse, K. T.; Tyler, P. A.; Rogers, A. D.

    2013-01-01

    The phylogeny of the superfamily Chirostyloidea (Decapoda: Anomura) has been poorly understood owing to limited taxon sampling and discordance between different genes. We present a nine-gene dataset across 15 chirostyloids, including all known yeti crabs (Kiwaidae), to improve the resolution of phylogenetic affinities within and between the different families, and to date key divergences using fossil calibrations. This study supports the monophyly of Chirostyloidea and, within this, a basal split between Eumunididae and a Kiwaidae–Chirostylidae clade. All three families originated in the Mid-Cretaceous, but extant kiwaids and most chirostylids radiated from the Eocene onwards. Within Kiwaidae, the basal split between the seep-endemic Kiwa puravida and a vent clade comprising Kiwa hirsuta and Kiwa spp. found on the East Scotia and Southwest Indian ridges is compatible with a hypothesized seep-to-vent evolutionary trajectory. A divergence date estimate of 13.4–25.9 Ma between the Pacific and non-Pacific lineages is consistent with Kiwaidae spreading into the Atlantic sector of the Southern Ocean via the newly opened Drake Passage. The recent radiation of Kiwaidae adds to the list of chemosynthetic fauna that appear to have diversified after the Palaeocene/Eocene Thermal Maximum, a period of possibly widespread anoxia/dysoxia in deep-sea basins. PMID:23782878

  9. Effects of Beryllium and Compaction Pressure on the Thermal Diffusivity of Uranium Dioxide Fuel Pellets

    NASA Astrophysics Data System (ADS)

    Camarano, D. M.; Mansur, F. A.; Santos, A. M. M.; Ferraz, W. B.; Ferreira, R. A. N.

    2017-09-01

    In nuclear reactors, the performance of uranium dioxide (UO2) fuel depends strongly on its thermal conductivity, which directly affects the fuel pellet temperature, fission gas release and fuel rod mechanical behavior during reactor operation. The use of additives to improve UO2 fuel performance has been investigated, and beryllium oxide (BeO) appears to be a suitable additive because of its high thermal conductivity and excellent chemical compatibility with UO2. In this paper, UO2-BeO pellets were manufactured by mechanical mixing, pressing and sintering, varying the BeO content and compaction pressure. Pellets with BeO contents of 2 wt%, 3 wt%, 5 wt% and 7 wt% were pressed at 400 MPa, 500 MPa and 600 MPa. The laser flash method was applied to determine the thermal diffusivity, and the results showed that the thermal diffusivity tends to increase with BeO content. Comparing the thermal diffusivity of UO2 with that of UO2-BeO pellets, an increase of at least 18% was observed for the UO2-2 wt% BeO pellet pressed at 400 MPa. The maximum relative expanded uncertainty (coverage factor k = 2) of the thermal diffusivity measurements was estimated to be 9%.

  10. High energy x-ray phase contrast CT using glancing-angle grating interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarapata, A., E-mail: adrian.sarapata@tum.de; Stayman, J. W.; Siewerdsen, J. H.

    Purpose: The authors present initial progress toward a clinically compatible x-ray phase contrast CT system, using glancing-angle x-ray grating interferometry to provide high-contrast soft tissue images at dose levels (estimated by computer simulation) comparable to conventional absorption-based CT. Methods: DPC-CT scans of a joint phantom and of soft tissues were performed in order to answer several important questions about a clinical setup. A comparison between high and low fringe visibility systems is presented. The standard phase stepping method was compared with sliding window interlaced scanning. Using estimated dose values obtained with a Monte-Carlo code, the authors studied the dependence of the phase image contrast on exposure time and dose. Results: Using a glancing-angle interferometer at high x-ray energy (∼45 keV mean value) in combination with a conventional x-ray tube, the authors achieved fringe visibility values of nearly 50%, higher than previously reported. High fringe visibility is shown to be an indispensable parameter for a potential clinical scanner. Sliding window interlaced scanning proved to have higher SNRs and CNRs in a region of interest and to be a crucial part of a low dose CT system. DPC-CT images of a soft tissue phantom at exposures in the range typical for absorption-based CT of musculoskeletal extremities were obtained. Assuming a human knee as the CT target, good soft tissue phase contrast could be obtained at an estimated absorbed dose level around 8 mGy, similar to conventional CT. Conclusions: DPC-CT with glancing-angle interferometers provides improved soft tissue contrast over absorption CT even at clinically compatible dose levels (estimated by Monte-Carlo computer simulation). Further steps in image processing, data reconstruction, and spectral matching could make the technique fully clinically compatible. Nevertheless, due to its increased scan time and complexity, the technique should be thought of not as replacing conventional CT but as complementary to it, to be used in specific applications.

  11. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  12. Application of the quantum spin glass theory to image restoration.

    PubMed

    Inoue, J I

    2001-04-01

    Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate how the quality of black-and-white image restoration depends on the quantum fluctuation by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal-fluctuation-based MPM estimate.

  13. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data, allows a good fit of the data and therefore a better estimation of the kinetic parameters, and in the end permits a more reliable use of DCE-MRI.

  14. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), as well as the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. The new estimator, which incorporates a stochastic model of the ESNR, shows improved estimation compared with the old estimator; both are based on the Jones matrix noise model. A comparison with the mean estimator is also made. Numerical simulation validates the superiority of the new estimator, whose superior performance was also shown by in vivo measurement of the optic nerve head. PMID:28270974

  15. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on it are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with auxiliary data is required. One method often used for this is Small Area Estimation (SAE), and among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduces the MSE in small area estimation.
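
    A minimal sketch of an area-level EBLUP with a REML variance estimate, in the spirit of the method described. The Fay-Herriot form, the single synthetic covariate, and all numbers are assumptions, not the Banjar Regency data: the REML criterion is minimized over the area-effect variance, and each small-area estimate shrinks the direct estimate toward the regression prediction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
m = 40                                      # number of small areas
X = np.column_stack([np.ones(m), rng.normal(0, 1, m)])
D = rng.uniform(0.3, 1.2, m)                # known sampling variances of direct estimates
beta_true, sv2_true = np.array([2.0, 0.8]), 0.5
y = X @ beta_true + rng.normal(0, np.sqrt(sv2_true), m) + rng.normal(0, np.sqrt(D))

def neg_reml(sv2):
    """Negative restricted log-likelihood of the Fay-Herriot model (constants dropped)."""
    V = sv2 + D                             # diagonal total variance
    XtVi = X.T / V
    A = XtVi @ X
    beta = np.linalg.solve(A, XtVi @ y)     # GLS regression coefficients
    r = y - X @ beta
    return 0.5 * (np.log(V).sum() + np.linalg.slogdet(A)[1] + (r ** 2 / V).sum())

sv2 = minimize_scalar(neg_reml, bounds=(1e-6, 10), method="bounded").x
V = sv2 + D
XtVi = X.T / V
beta = np.linalg.solve(XtVi @ X, XtVi @ y)
gamma = sv2 / (sv2 + D)                     # shrinkage weights
eblup = gamma * y + (1 - gamma) * (X @ beta)
print(f"REML variance estimate: {sv2:.3f}; mean shrinkage weight: {gamma.mean():.2f}")
```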

  16. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  17. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
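    A minimal sketch of the transform-domain correlation at the core of such a technique: under additive white Gaussian noise, the shift maximizing the cross-correlation between the received frame and the reference image is the ML estimate of a pure translation. This pixel-resolution, rotation-free toy omits the paper's closed-loop update.

```python
# FFT-based cross-correlation shift estimate (ML for translation under AWGN).
import numpy as np

def ml_shift_estimate(received, reference):
    """Return the (row, col) shift that best aligns reference to received."""
    F = np.fft.fft2(received) * np.conj(np.fft.fft2(reference))
    corr = np.fft.ifft2(F).real                  # circular cross-correlation
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map indices above the midpoint to negative shifts
    return tuple(i if i <= s // 2 else i - s for i, s in zip(idx, corr.shape))

ref = np.zeros((64, 64)); ref[30:34, 30:34] = 1.0
rec = np.roll(ref, (3, -2), axis=(0, 1)) + 0.05 * np.random.randn(64, 64)
print(ml_shift_estimate(rec, ref))               # approximately (3, -2)
```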

  18. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  19. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the lithological composition of the subsurface. Because of practical constraints, only a limited amount of data can be acquired. To find the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. The limited hard data from cores and the soft data generated from geological dating data and virtual wells were applied to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  20. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points is then investigated on the performance of the logistic ridge regression estimator through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
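    A minimal sketch contrasting plain maximum likelihood with a ridge-penalized logistic fit on nearly collinear synthetic predictors; scikit-learn's LogisticRegression stands in for the paper's estimator, and penalty=None requires scikit-learn 1.2 or later.

```python
# ML vs ridge logistic regression under multicollinearity (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)         # nearly collinear with x1
X = np.column_stack([x1, x2])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(x1 + x2)))).astype(int)

mle = LogisticRegression(penalty=None, max_iter=5000).fit(X, y)
ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=5000).fit(X, y)
print("ML coefficients:   ", mle.coef_)          # typically unstable and large
print("ridge coefficients:", ridge.coef_)        # shrunk and stable
```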

  1. A generalized system of models forecasting Central States tree growth.

    Treesearch

    Stephen R. Shifley

    1987-01-01

    Describes the development and testing of a system of individual tree-based growth projection models applicable to species in Indiana, Missouri, and Ohio. Annual tree basal area growth is estimated as a function of tree size, crown ratio, stand density, and site index. Models are compatible with the STEMS and TWIGS Projection System.

  2. The Integration of Psycholinguistic and Discourse Processing Theories of Reading Comprehension.

    ERIC Educational Resources Information Center

    Beebe, Mona J.

    To assess the compatibility of miscue analysis and recall analysis as independent elements in a theory of reading comprehension, a study was performed that operationalized each theory and separated its components into measurable units to allow empirical testing. A cueing strategy model was estimated, but the discourse processing model was broken…

  3. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    PubMed

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
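    A heavily simplified sketch of the likelihood idea behind ERSA, under assumed textbook approximations rather than ERSA's fitted model: for a hypothesized number of meioses a separating two relatives, IBD segment lengths are roughly exponential with mean 100/a cM and the segment count roughly Poisson; maximizing over a estimates the degree of relationship. All constants are illustrative assumptions.

```python
# Toy degree-of-relationship likelihood from IBD segments (illustrative).
import numpy as np
from scipy.stats import poisson, expon

def loglik(a, seg_lengths_cM, genome_cM=3500.0, n_ancestors=2):
    mean_len = 100.0 / a                         # assumed mean segment length
    # assumed expected segment count; not ERSA's fitted expression
    exp_count = n_ancestors * (a / 100.0) * genome_cM * 2.0 ** (1 - a)
    return (poisson.logpmf(len(seg_lengths_cM), exp_count)
            + expon.logpdf(seg_lengths_cM, scale=mean_len).sum())

segments = np.array([42.0, 17.5, 9.3])           # observed IBD lengths in cM
best_a = max(range(2, 13), key=lambda a: loglik(a, segments))
print("estimated number of meioses:", best_a)
```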

  4. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  5. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
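    A minimal sketch of Firth's penalized likelihood, shown for ordinary logistic regression rather than the conditional Poisson model of the SCCS design: the log-likelihood is augmented with 0.5·log|I(β)|, which removes first-order bias and keeps estimates finite with sparse data. Names and data are illustrative.

```python
# Firth-penalized logistic regression via direct optimization (illustrative).
import numpy as np
from scipy.optimize import minimize

def firth_neg_loglik(beta, X, y):
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = p * (1.0 - p)
    I = X.T @ (X * W[:, None])                   # Fisher information matrix
    loglik = y @ eta - np.logaddexp(0.0, eta).sum()
    return -(loglik + 0.5 * np.linalg.slogdet(I)[1])   # Firth penalty term

def firth_fit(X, y):
    beta0 = np.zeros(X.shape[1])
    return minimize(firth_neg_loglik, beta0, args=(X, y), method="BFGS").x

X = np.column_stack([np.ones(8), [0, 0, 0, 0, 1, 1, 1, 1]])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
print(firth_fit(X, y))                           # finite even in sparse cells
```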

  6. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
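    A minimal sketch of the minimum-variance (ML/GLS) combination of correlated estimates underlying such an analysis: given eigenvalue estimates y with covariance S, the combined value weights them by the inverse covariance. Numbers are illustrative, not from the report.

```python
# GLS / minimum-variance combination of correlated estimates (illustrative).
import numpy as np

y = np.array([1.002, 0.998, 1.005])              # correlated k_eff estimates
S = np.array([[4.0, 1.5, 1.0],
              [1.5, 3.0, 0.8],
              [1.0, 0.8, 5.0]]) * 1e-6           # covariance of the estimates
ones = np.ones_like(y)
w = np.linalg.solve(S, ones)                     # inverse-covariance weights
k_hat = w @ y / (w @ ones)                       # combined ML/GLS estimate
var_hat = 1.0 / (w @ ones)                       # variance of the combination
print(k_hat, np.sqrt(var_hat))
```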

  7. Retention Severity in the Navy: A Composite Index.

    DTIC Science & Technology

    1983-06-01

    Unfortunately, their estimates of optimum SRB award levels are applicable only to recruits with four year obligations (4YO) and six year obligations (6YO). ... of a maximum bonus award level of 6. Their estimates would put the maximum bonus level as high as 20 for 4YOs and 19 for 6YOs. However, the implica...

  8. 78 FR 20109 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ...Meeting (i.e., webinar) training session conducted by CDC staff. We estimate the burden of this training to be a maximum of 2 hours. Respondents will only have to take this training one time. Assuming a maximum number of outbreaks of 1,400, the estimated burden for this training is 2,800 hours. The total...

  9. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  10. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  11. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  12. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  13. V/STOL model fan stage rig design report

    NASA Technical Reports Server (NTRS)

    Cheatham, J. G.; Creason, T. L.

    1983-01-01

    A model single-stage fan with variable inlet guide vanes (VIGV) was designed to demonstrate efficient point operation while providing flow and pressure ratio modulation capability required for a V/STOL propulsion system. The fan stage incorporates a split-flap VIGV with an independently actuated ID flap to permit independent modulation of fan and core engine airstreams, a flow splitter integrally designed into the blade and vanes to completely segregate fan and core airstreams in order to maximize core stream supercharging for V/STOL operation, and an EGV with a variable leading edge fan flap for rig performance optimization. The stage was designed for a maximum flow size of 37.4 kg/s (82.3 lb/s) for compatibility with LeRC test facility requirements. Design values at maximum flow for blade tip velocity and stage pressure ratio are 472 m/s (1550 ft/s) and 1.68, respectively.

  14. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and estimation of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum depth of burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.

  15. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  16. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  17. CHARACTERIZATION OF A SAMPLE OF INTERMEDIATE-TYPE ACTIVE GALACTIC NUCLEI. II. HOST BULGE PROPERTIES AND BLACK HOLE MASS ESTIMATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitez, Erika; Cruz-Gonzalez, Irene; Martinez, Benoni

    2013-02-15

    We present a study of the host bulge properties and their relations with the black hole mass for a sample of 10 intermediate-type active galactic nuclei (AGNs). Our sample consists mainly of early-type spirals, four of them hosting a bar. For 70 (+10/-17)% of the galaxies, we have been able to determine the type of the bulge, and find that these objects probably harbor a pseudobulge or a combination of classical bulge/pseudobulge, suggesting that pseudobulges might be frequent in intermediate-type AGNs. In our sample, 50% ± 14% of the objects show double-peaked emission lines. Therefore, narrow double-peaked emission lines seem to be frequent in galaxies harboring a pseudobulge or a combination of classical bulge/pseudobulge. Depending on the bulge type, we estimated the black hole mass using the corresponding M_BH-σ* relation and found them within a range of 5.69 ± 0.21 < log M^σ*_BH < 8.09 ± 0.24. Comparing these M^σ*_BH values with masses derived from the FWHM of Hβ and the continuum luminosity at 5100 Å from their SDSS-DR7 spectra (M_BH), we find that 8 out of 10 (80 (+7/-17)%) galaxies have black hole masses that are compatible within a factor of 3. This result would support that M_BH and M^σ*_BH are the same for intermediate-type AGNs, as has been found for type 1 AGNs. However, when the type of the bulge is taken into account, only three out of the seven (43 (+18/-15)%) objects of the sample have their M^σ*_BH and M_BH compatible within 3σ errors. We also find that estimations based on the M_BH-σ* relation for pseudobulges are not compatible in 50% ± 20% of the objects.

  18. Compositional Evolution of Saturn's Ring: Ice, Tholin, and 'CHIRON'-Dust

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Estrada, P. R.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    We address compositional evolution in planetary ring systems subsequent to meteoroid bombardment. The huge surface area to mass ratio of planetary rings ensures the importance of this process, given currently expected values of meteoroid flux. We developed a model which includes both direct deposition of extrinsic meteoritic 'pollutants', and ballistic transport of the increasingly polluted ring material as impact ejecta. Certain aspects of the observed regional variations in ring color and albedo can be understood in terms of such a process. We conclude that the regional scale color and albedo differences between the C ring and B ring can be understood if all ring material began with the same composition (primarily water ice, based on other data, but colored by tiny amounts of non-icy, reddish absorber) and then evolved entirely by addition and mixing of extrinsic, neutrally colored, highly absorbing material. This conclusion is readily extended to the Cassini Division and its surroundings as well. Typical silicates are unable to satisfy the ring color, spectroscopic, and microwave absorption constraints either as intrinsic or extrinsic non-icy constituents. However, 'Titan Tholin' provides a satisfactory match for the inferred refractive indices of the 'pre-pollution' nonicy ring material. The extrinsic bombarding material is compatible with the properties of Halley or Chiron, but not with the properties of other 'red' primitive objects such as Pholus. We further demonstrate that the detailed radial profile of color across the abrupt B ring - C ring boundary is quite compatible with such a 'pollution transport' process, and that the shape of the profile can constrain key parameters in the model. We use the model to estimate the 'exposure age' of Saturn's rings to extrinsic meteoroid flux. We obtain a geologically young 'age' which is compatible with timescales estimated independently based on the evolution of ring structure due to ballistic transport, and also with other 'short timescales' estimated on the grounds of gravitational torques.

  19. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.

    DTIC Science & Technology

    1981-06-29

    Tennessee Univ Knoxville, Dept of Psychology. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns. Fumiko Samejima, Department of Psychology, University of Tennessee, Knoxville, Tenn. 37916, June 1981. Prepared under contract number N00014-77-C-360 with the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research.

  20. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  1. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
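    A minimal sketch of the pareto-dominance test at the heart of such a multi-objective inversion: a model dominates another if it is no worse in every misfit and strictly better in at least one, and the non-dominated set traces the trade-off curve analysed above. The genetic-algorithm machinery itself is omitted, and all names are illustrative.

```python
# Non-dominated (pareto) filtering of multi-objective misfits (illustrative).
import numpy as np

def pareto_front(misfits):
    """misfits: (n_models, n_objectives); returns mask of non-dominated rows."""
    n = misfits.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominates = (np.all(misfits <= misfits[i], axis=1)
                     & np.any(misfits < misfits[i], axis=1))
        if dominates.any():                      # some model beats model i
            keep[i] = False
    return keep

m = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_front(m))                           # last model is dominated
```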

  2. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  3. ESTIMATE OF SOLAR MAXIMUM USING THE 1-8 Å GEOSTATIONARY OPERATIONAL ENVIRONMENTAL SATELLITES X-RAY MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, L. M.; Balasubramaniam, K. S., E-mail: lwinter@aer.com

    We present an alternate method of determining the progression of the solar cycle through an analysis of the solar X-ray background. Our results are based on the NOAA Geostationary Operational Environmental Satellites (GOES) X-ray data in the 1-8 Å band from 1986 to the present, covering solar cycles 22, 23, and 24. The X-ray background level tracks the progression of the solar cycle through its maximum and minimum. Using the X-ray data, we can therefore make estimates of the solar cycle progression and the date of solar maximum. Based upon our analysis, we conclude that the Sun reached its hemisphere-averaged maximum in solar cycle 24 in late 2013. This is within six months of the NOAA prediction of a maximum in spring 2013.

  4. Development of a Strategy Based on the Surface Plasmon Resonance Technology for Platelet Compatibility Testing.

    PubMed

    Wu, Chang-Lin; He, Jian-An; Gu, Da-Yong; Shao, Chao-Peng; Zhu, Yi; Dang, Xin-Tang

    2018-01-01

    This study aimed to establish a novel strategy based on surface plasmon resonance (SPR) technology for platelet compatibility testing. A novel surface matrix was prepared based on poly(OEGMA-co-HEMA) via surface-initiated polymerization as a biosensor surface platform. Type O universal platelets and donor platelets were immobilized on these novel matrices via an amine-coupling reaction and served as capturing ligands for binding platelet antibodies. Antibodies binding to platelets were monitored in real time by injecting the samples into a microfluidic channel. Clinical serum samples (n = 186) from patients with multiple platelet transfusions were assayed for platelet antibodies using the SPR technology and the monoclonal antibody-immobilized platelet antigen (MAIPA) assay. The novel biosensor surface achieved a nonfouling background and high immobilization capacity and showed good repeatability and stability after regeneration. The limit of detection of the SPR biosensor for platelet antibody was estimated to be 50 ng/mL. The sensitivity and specificity were 92% and 98.7%, respectively. It could detect platelet antibodies directly in serum samples, and the results were similar to the MAIPA assay. A novel strategy to facilitate sensitive and reliable detection of platelet compatibility with an SPR-based biosensor was established in this study. The SPR-based biosensor combined with novel surface chemistry is a promising method for platelet compatibility testing.

  5. The Sensitivity of Earth's Climate History To Changes In The Rates of Biological And Geological Evolution

    NASA Astrophysics Data System (ADS)

    Waltham, D.

    2014-12-01

    The faint young Sun paradox (early Earth had surface liquid water despite solar luminosity 70% of the modern value) implies that our planet's albedo has increased through time and/or greenhouse warming has fallen. The obvious explanation is that negative feedback processes stabilized temperatures. However, the limited temperature data available does not exhibit the expected residual temperature rise and, at least for the Phanerozoic, estimates of climate sensitivity exceed the Planck sensitivity (the zero net-feedback value). The alternate explanation is that biological and geological evolution have tended to cool Earth through time hence countering solar-driven warming. The coincidence that Earth-evolution has roughly cancelled Solar-evolution can then be explained as an emergent property of a complex system (the Gaia hypothesis) or the result of the unavoidable observational bias that Earth's climate history must be compatible with our existence (the anthropic principle). Here, I use a simple climate model to investigate the sensitivity of Earth's climate to changes in the rate of Earth-evolution. Earth-evolution is represented by an effective emissivity which has an intrinsic variation through time (due to continental growth, the evolution of cyanobacteria, orbital fluctuations etc) plus a linear feedback term which enhances emissivity variations. An important feature of this model is a predicted maximum in the radiated-flux versus temperature function. If the increasing solar flux through time had exceeded this value then runaway warming would have occurred. For the best-guess temperature history and climate sensitivity, the Earth has always been within a few percent of this maximum. There is no obvious Gaian explanation for this flux-coincidence but the anthropic principle naturally explains it: If the rate of biological/geological evolution is naturally slow then Earth is a fortunate outlier which evolved just fast enough to avoid solar-induced over-heating. However, there are large uncertainties concerning the temperature history of our planet and concerning climate sensitivity in the Archean and Proterozoic. When these are included, the solar-flux through time might have been as little as 70-90 % of the maximum thus reducing the significance of the flux-coincidence.

  6. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.

  7. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  8. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  9. Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels

    USGS Publications Warehouse

    Pruess, J.; Wohl, E.E.; Jarrett, R.D.

    1998-01-01

    Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m³ s⁻¹ km⁻² around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.

  10. Psychometric Properties of IRT Proficiency Estimates

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Tong, Ye

    2010-01-01

    Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…

  11. Shape reconstruction of irregular bodies with multiple complementary data sources

    NASA Astrophysics Data System (ADS)

    Kaasalainen, M.; Viikinkoski, M.; Carry, B.; Durech, J.; Lamy, P.; Jorda, L.; Marchis, F.; Hestroffer, D.

    2011-10-01

    Irregularly shaped bodies with at most partial in situ data are a particular challenge for shape reconstruction and mapping. We have created an inversion algorithm and software package for complementary data sources, with which it is possible to create shape and spin models with feature details even when only ground-based data are available. The procedure uses photometry, adaptive optics or other images, occultation timings, and interferometry as main data sources, and we are extending it to include range-Doppler radar and thermal infrared data as well. The data sources are described as generalized projections in various observable spaces [2], which allows their uniform handling with essentially the same techniques, making the addition of new data sources inexpensive in terms of computation time or software development. We present a generally applicable shape support that can be automatically used for all surface types, including strongly nonconvex or non-starlike shapes. New models of Kleopatra (from photometry, adaptive optics, and interferometry) and Hermione are examples of this approach. When using adaptive optics images, the main information from these is extracted from the limb and terminator contours that can be determined much more accurately than the image pixel brightnesses that inevitably contain large errors for most targets. We have shown that the contours yield a wealth of information independent of the scattering properties of the surface [3]. Their use also facilitates a very fast and robustly converging algorithm. An important concept in the inversion is the optimal weighting of the various data modes. We have developed a mathematically rigorous scheme for this purpose. The resulting maximum compatibility estimate [3], a multimodal generalization of the maximum likelihood estimate, ensures that the actual information content of each source is properly taken into account, and that the resolution scale of the ensuing model can be reliably estimated. We have applied our procedure to several asteroids, and the ground truth from the Rosetta/Lutetia flyby confirmed the ability of the approach to recover shape details [1] (see also Carry et al., this meeting). We have created a general flyby-version of the procedure to construct full models of planetary targets for which probe images are only available of a part of the surface (a typical setup for many planetary missions). We have successfully combined flyby images with photometry (Steins [4]) and adaptive optics images (Lutetia); the portion of the surface accurately determined by the flyby constrains the shape solution of the "dark side" efficiently.

  12. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T_a) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input for hydrology and land surface models. This is a preliminary study to evaluate the estimation of T_a from satellite remotely sensed land surface temperature (T_s) using MODIS-Terra data over two Eurasian regions: northern China and the former USSR. High correlations are observed in both regions between station-measured T_a and MODIS T_s. The relationships between the maximum T_a and daytime T_s depend significantly on land cover type, but the minimum T_a and nighttime T_s show little dependence on land cover type. The largest difference between maximum T_a and daytime T_s appears over barren and sparsely vegetated areas during summer. Using a linear regression method, the daily maximum T_a was estimated from 1 km resolution MODIS T_s under clear-sky conditions with coefficients calculated by land cover type, while the minimum T_a was estimated without considering land cover type. The uncertainty, mean absolute error (MAE), of the estimated maximum T_a varies from 2.4 °C over closed shrublands to 3.2 °C over grasslands, and the MAE of the estimated minimum T_a is about 3.0 °C.
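    A minimal sketch of the regression step described above, assuming matched station/MODIS samples: a separate linear map from daytime T_s to maximum T_a is fitted for each land cover class. Array and function names are illustrative.

```python
# Per-land-cover linear regression of air temperature on LST (illustrative).
import numpy as np

def fit_by_cover(ts_day, ta_max, cover_ids):
    """Return {cover_id: (slope, intercept)} for Ta_max ~ Ts_day."""
    coefs = {}
    for c in np.unique(cover_ids):
        m = cover_ids == c
        coefs[c] = tuple(np.polyfit(ts_day[m], ta_max[m], deg=1))
    return coefs

def predict(ts_day, cover_ids, coefs):
    slopes = np.array([coefs[c][0] for c in cover_ids])
    icepts = np.array([coefs[c][1] for c in cover_ids])
    return slopes * ts_day + icepts

ts = np.array([290.0, 295.0, 300.0, 305.0])      # daytime LST samples (K)
ta = np.array([286.0, 290.5, 296.0, 300.0])      # station max air temp (K)
cov = np.array([1, 1, 2, 2])                     # land cover class per sample
print(predict(ts, cov, fit_by_cover(ts, ta, cov)))
```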

  13. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 s for the maximum overall level and T_oi = 4.88 f_i^-0.2 s for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 s for the maximum overall level and T_oi = 7.10 f_i^-0.2 s for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
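    A minimal sketch of the step-wise time-averaged maximum level described above: square the signal, average over windows of length T_avg, and take the largest window as the maximum rms level in dB. The signal and reference level are illustrative.

```python
# Step-wise averaged maximum rms level of a nonstationary signal (illustrative).
import numpy as np

def max_rms_level(x, fs, t_avg):
    n = int(round(t_avg * fs))                   # samples per averaging window
    n_win = len(x) // n
    ms = (x[:n_win * n] ** 2).reshape(n_win, n).mean(axis=1)  # mean squares
    return 10.0 * np.log10(ms.max())             # max level in dB re 1

fs = 8192.0
t = np.arange(0, 10.0, 1.0 / fs)
x = np.random.randn(t.size) * np.exp(-0.5 * (t - 4.0) ** 2)   # nonstationary
print(max_rms_level(x, fs, t_avg=1.14))          # the optimum overall T_o above
```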

  14. Is HL Tauri an FU Orionis system in quiescence?

    NASA Technical Reports Server (NTRS)

    Lin, D. N. C.; Hayashi, M.; Bell, K. R.; Ohashi, N.

    1994-01-01

    A recent Nobeyama map of HL Tau reveals that gas is infalling in a flattened region approximately 1400 AU around the central star. The apparent motion of the gas provides the necessary condition for the formation of a Keplerian disk with a radius comparable to the size of the primordial solar nebula. The inferred mass infall rate onto the disk is approximately 5 × 10^-6 solar mass/yr, which greatly exceeds the maximum estimate of the accretion rate onto the central star (approximately 7 × 10^-7 solar mass/yr). Consequently, mass must currently be accumulating in the disk. The estimated age and disk mass of HL Tau suggest that the accumulated matter has been flushed repeatedly on a timescale less than 10^4 yr. Based on the similarities between their evolution patterns, we propose that HL Tau is an FU Orionis system in quiescence. In addition to HL Tau, 14 out of 86 pre-main-sequence stars in the Taurus-Auriga dark clouds have infrared luminosities much greater than their otherwise normal extinction-corrected stellar luminosities. These sources also tend to have flat spectra which may be due to the reprocessing of radiation by dusty, flattened, collapsing envelopes with infall rates of a few 10^-6 solar mass/yr. Such rates are much larger than estimated central accretion rates for these systems, which suggests that mass must also be accumulating in these disks. If these sources are FU Orionis stars in quiescence, similar to HL Tau, their age and relative abundance imply that the FU Orionis phase occurs over a timescale of approximately 10^5 yr, and the quiescent phase between each outburst lasts approximately 10^3-10^4 yr. These inferred properties are compatible with the scenario that FU Orionis outbursts are regulated by a thermal instability in the inner region of the disk.

  15. Local Stellar Kinematics from RAVE data - V. Kinematic Investigation of the Galaxy with Red Clump Stars

    NASA Astrophysics Data System (ADS)

    Karaali, S.; Bilir, S.; Ak, S.; Gökçe, E. Yaz; Önal, Ö.; Ak, T.

    2014-02-01

    We investigated the space velocity components of 6610 red clump (RC) stars in terms of vertical distance, Galactocentric radial distance and Galactic longitude. Stellar velocity vectors are corrected for differential rotation of the Galaxy, which is taken into account using photometric distances of RC stars. The space velocity components estimated for the sample stars above and below the Galactic plane are compatible only for the space velocity component in the direction of Galactic rotation of the thin disc stars. The space velocity component in the direction of Galactic rotation (V_LSR) shows a smooth variation relative to the mean Galactocentric radial distance (R_m), while it attains its maximum at the Galactic plane. The space velocity components in the direction of the Galactic centre (U_LSR) and in the vertical direction (W_LSR) show almost flat distributions relative to R_m, with small changes in their trends at R_m ~ 7.5 kpc. U_LSR values estimated for the RC stars in quadrant 180° < l ⩽ 270° are larger than the ones in quadrants 0° < l ⩽ 90° and 270° < l ⩽ 360°. The smooth distribution of the space velocity dispersions reveals that the thin and thick discs are kinematically continuous components of the Galaxy. Based on the W_LSR space velocity components estimated in the quadrants 0° < l ⩽ 90° and 270° < l ⩽ 360°, in the inward direction relative to the Sun, we showed that RC stars above the Galactic plane move towards the North Galactic Pole, whereas those below the Galactic plane move in the opposite direction. In the case of quadrant 180° < l ⩽ 270°, their behaviour is different, i.e. the RC stars above and below the Galactic plane move towards the Galactic plane. We stated that the Galactic long bar is the probable origin of many, but not all, of the detected features.

  16. Software Measurement Guidebook. Version 02.00.02

    DTIC Science & Technology

    1992-12-01

    Compatibility Testing Process (9-5); Figure 9-3. Development Effort Planning Curve (9-7); Figure 10-1 ... requirements, design, code, and test, and for analyzing this data. * Proposal Manager. The person responsible for describing and supporting the estimated ... designed; build/release ranges, variances, and comparisons; size growth; costs; completions; and content; units completing test; units with historical ...

  17. Estimating potency for the Emax-model without attaining maximal effects.

    PubMed

    Schoemaker, R C; van Gerven, J M; Cohen, A F

    1998-10-01

    The most widely applied model relating drug concentrations to effects is the Emax model. In practice, concentration-effect relationships often deviate from a simple linear relationship but without reaching a clear maximum because a further increase in concentration might be associated with unacceptable or distorting side effects. The parameters for the Emax model can only be estimated with reasonable precision if the curve shows sign of reaching a maximum, otherwise both EC50 and Emax estimates may be extremely imprecise. This paper provides a solution by introducing a new parameter (S0) equal to Emax/EC50 that can be used to characterize potency adequately even if there are no signs of a clear maximum. Simulations are presented to investigate the nature of the new parameter and published examples are used as illustration.
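    A minimal sketch of the point made above, with illustrative data that never approach a plateau: fitting E = Emax·C/(EC50 + C) leaves Emax and EC50 individually imprecise, while S0 = Emax/EC50 (the initial slope) remains well determined.

```python
# Emax model fit and the stable potency parameter S0 (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # far below any plateau
effect = np.array([4.8, 9.5, 18.0, 33.0, 55.0])  # still near-linear in C

(emax, ec50), _ = curve_fit(emax_model, conc, effect, p0=(100.0, 10.0))
print("Emax, EC50 (individually imprecise):", emax, ec50)
print("S0 = Emax/EC50 (stable):", emax / ec50)
```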

  18. NR/EPDM elastomeric rubber blend miscibility evaluation by two-level fractional factorial design of experiment

    NASA Astrophysics Data System (ADS)

    Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham

    2014-09-01

    A fractional 2⁵ two-level factorial design of experiment (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation correlating the mixer process input parameters with the output response of blend compatibility was developed. Model analysis of variance (ANOVA) and model fitting through curve evaluation yielded an R² of 99.60% with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio; B = 70°C mixing temperature; C = 70 rpm rotor speed; D = 5 minutes mixing period; and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with overall 0.966 desirability. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.

  19. An MR/MRI compatible core holder with the RF probe immersed in the confining fluid.

    PubMed

    Shakerian, M; Balcom, B J

    2018-01-01

    An open frame RF probe for high pressure and high temperature MR/MRI measurements was designed, fabricated, and tested. The open frame RF probe was installed inside an MR/MRI compatible metallic core holder, withstanding a maximum pressure and temperature of 5000 psi and 80 °C. The open frame RF probe was tunable for both 1H and 19F resonance frequencies with a 0.2 T static magnetic field. The open frame structure was based on simple pillars of PEEK polymer upon which the RF probe was wound. The RF probe was immersed in the high pressure confining fluid during operation. The open frame structure simplified fabrication of the RF probe and significantly reduced the amount of polymeric materials in the core holder. This minimized the MR background signal detected. Phase encoding MRI methods were employed to map the spin density of a sulfur hexafluoride gas saturating a Berea core plug in the core holder. The SF6 was imaged as a high pressure gas and as a supercritical fluid. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. An MR/MRI compatible core holder with the RF probe immersed in the confining fluid

    NASA Astrophysics Data System (ADS)

    Shakerian, M.; Balcom, B. J.

    2018-01-01

    An open frame RF probe for high pressure and high temperature MR/MRI measurements was designed, fabricated, and tested. The open frame RF probe was installed inside an MR/MRI compatible metallic core holder, withstanding a maximum pressure and temperature of 5000 psi and 80 °C. The open frame RF probe was tunable for both 1H and 19F resonance frequencies with a 0.2 T static magnetic field. The open frame structure was based on simple pillars of PEEK polymer upon which the RF probe was wound. The RF probe was immersed in the high pressure confining fluid during operation. The open frame structure simplified fabrication of the RF probe and significantly reduced the amount of polymeric materials in the core holder. This minimized the MR background signal detected. Phase encoding MRI methods were employed to map the spin density of a sulfur hexafluoride gas saturating a Berea core plug in the core holder. The SF6 was imaged as a high pressure gas and as a supercritical fluid.

  1. Control of Co content and SOFC cathode performance in Y1-ySr2+yCu3-xCoxO7+δ

    NASA Astrophysics Data System (ADS)

    Šimo, F.; Payne, J. L.; Demont, A.; Sayers, R.; Li, Ming; Collins, C. M.; Pitcher, M. J.; Claridge, J. B.; Rosseinsky, M. J.

    2014-11-01

    The electrochemical performance of the layered perovskite YSr2Cu3-xCoxO7+δ, a potential solid oxide fuel cell (SOFC) cathode, is improved by increasing the Co content from x = 1.00 to a maximum of x = 1.30. Single phase samples with x > 1.00 are obtained by tuning the Y/Sr ratio, yielding the composition Y1-ySr2+yCu3-xCoxO7+δ (where y ≤ 0.05). The high temperature structure of Y0.95Sr2.05Cu1.7Co1.3O7+δ at 740 °C is characterised by powder neutron diffraction and the potential of this Co-enriched material as a SOFC cathode is investigated by combining AC impedance spectroscopy, four-probe DC conductivity and powder XRD measurements to determine its electrochemical properties along with its thermal stability and compatibility with a range of commercially available electrolytes. The material is shown to be compatible with doped ceria electrolytes at 900 °C.

  2. An organic water-gated ambipolar transistor with a bulk heterojunction active layer for stable and tunable photodetection

    NASA Astrophysics Data System (ADS)

    Xu, Haihua; Zhu, Qingqing; Wu, Tongyuan; Chen, Wenwen; Zhou, Guodong; Li, Jun; Zhang, Huisheng; Zhao, Ni

    2016-11-01

    Organic water-gated transistors (OWGTs) have emerged as promising sensing architectures for biomedical applications and environmental monitoring due to their ability of in-situ detection of biological substances with high sensitivity and low operation voltage, as well as compatibility with various read-out circuits. Tremendous progress has been made in the development of p-type OWGTs. However, achieving stable n-type operation in OWGTs due to the presence of solvated oxygen in water is still challenging. Here, we report an ambipolar OWGT based on a bulk heterojunction active layer, which exhibits a stable hole and electron transport when exposed to aqueous environment. The device can be used as a photodetector both in the hole and electron accumulation regions to yield a maximum responsivity of 0.87 A W-1. More importantly, the device exhibited stable static and dynamic photodetection even when operated in the n-type mode. These findings bring possibilities for the device to be adopted for future biosensing platforms, which are fully compatible with low-cost and low-power organic complementary circuits.

  3. [Estimation of Maximum Entrance Skin Dose during Cerebral Angiography].

    PubMed

    Kawauchi, Satoru; Moritake, Takashi; Hayakawa, Mikito; Hamada, Yusuke; Sakuma, Hideyuki; Yoda, Shogo; Satoh, Masayuki; Sun, Lue; Koguchi, Yasuhiro; Akahane, Keiichi; Chida, Koichi; Matsumaru, Yuji

    2015-09-01

    Using radio-photoluminescence glass dosimeters, we measured the entrance skin dose (ESD) in 46 cases and analyzed the correlations between maximum ESD and angiographic parameters [total fluoroscopic time (TFT), number of digital subtraction angiography (DSA) frames, air kerma at the interventional reference point (AK), and dose-area product (DAP)] in order to estimate the maximum ESD in real time. Mean (± standard deviation) maximum ESD, right-lens dose, and left-lens dose were 431.2 ± 135.8 mGy, 33.6 ± 15.5 mGy, and 58.5 ± 35.0 mGy, respectively. Correlation coefficients (r) between maximum ESD and TFT, number of DSA frames, AK, and DAP were r=0.379 (P<0.01), r=0.702 (P<0.001), r=0.825 (P<0.001), and r=0.709 (P<0.001), respectively. AK was identified as the most useful parameter for real-time prediction of maximum ESD. This study should contribute to the development of new diagnostic reference levels in our country.
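
    The analysis above amounts to correlating each dose indicator with the measured maximum ESD and using the best one (AK) as a real-time predictor. A minimal sketch of that workflow on synthetic stand-in data (the study's 46 cases are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical per-case measurements: air kerma at the interventional
    # reference point (AK, mGy) and measured maximum entrance skin dose (mGy).
    rng = np.random.default_rng(0)
    ak = rng.uniform(200, 900, size=46)
    esd = 0.55 * ak + rng.normal(0, 40, size=46)

    # Pearson correlation between AK and maximum ESD, as in the study.
    r = np.corrcoef(ak, esd)[0, 1]

    # Least-squares line ESD ~ a*AK + b, usable for real-time prediction
    # once AK is displayed by the angiography system.
    a, b = np.polyfit(ak, esd, deg=1)
    print(f"r = {r:.3f}, predicted max ESD at AK=500 mGy: {a*500 + b:.0f} mGy")
    ```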

  4. The maximum economic depth of groundwater abstraction for irrigation

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge, and we see massive groundwater head declines in these areas. An important question then is: to what maximum depth can groundwater be pumped while remaining economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the largest depth at which revenues still exceed pumping costs, or the depth beyond which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
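
    A minimal sketch of the cost-revenue comparison described above, with all parameter values as illustrative assumptions rather than those of the study:

    ```python
    import numpy as np

    def max_economic_depth(revenue_per_year, drill_cost_per_m,
                           pump_cost_per_m3_per_m, demand_m3_per_year,
                           head_fraction=0.8, depths=np.arange(10, 1000, 10)):
        """Deepest well depth (m) at which irrigation revenues still exceed costs."""
        economic = []
        for d in depths:
            # Energy cost grows with lift (static head as a fraction of depth).
            energy_cost = pump_cost_per_m3_per_m * (head_fraction * d) * demand_m3_per_year
            # Drilling cost grows with depth; amortized over an assumed 20 years.
            capital_cost = drill_cost_per_m * d / 20.0
            if revenue_per_year > energy_cost + capital_cost:
                economic.append(d)
        return max(economic) if economic else None

    # Illustrative numbers for one irrigated grid cell.
    print(max_economic_depth(revenue_per_year=150_000.0,    # USD/yr crop revenue
                             drill_cost_per_m=400.0,        # USD per m of well
                             pump_cost_per_m3_per_m=0.002,  # USD per m3 per m lift
                             demand_m3_per_year=200_000.0))
    ```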

  5. Design of an fMRI-compatible optical touch stripe based on frustrated total internal reflection.

    PubMed

    Jarrahi, Behnaz; Wanek, Johann

    2014-01-01

    Previously we developed a low-cost, multi-configurable handheld response system, using a reflective-type intensity-modulated fiber-optic sensor (FOS) to accurately gather participants' behavioral responses during functional magnetic resonance imaging (fMRI). Inspired by the popularity and omnipresence of fingertip-based touch-sensing user interface devices, in this paper we present the design of a prototype fMRI-compatible optical touch stripe (OTS) as an alternative configuration. The prototype device takes advantage of the proven frustrated total internal reflection (FTIR) technique. By using a custom-built wedge-shaped optically transparent acrylic prism as an optical waveguide, and a plano-concave lens to provide the required light beam profile, the position of a fingertip touching the surface of the wedge prism can be determined from the deflected light beams that become trapped within the prism by total internal reflection. To achieve maximum sensitivity, the optical design of the wedge prism and lens was optimized through a series of light beam simulations using the WinLens 3D Basic software suite. Furthermore, OTS performance and MRI compatibility were assessed on a 3.0 Tesla MRI scanner running echo planar imaging (EPI) sequences. The results show that the OTS can detect a touch signal at high spatial resolution (about 0.5 cm), and is well suited for use within the MRI environment, with an average time-variant signal-to-noise ratio (tSNR) loss < 3%.

  6. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    PubMed

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which numerous methods of estimation have been proposed. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates that are consistent in the probabilistic sense. In this brief, we further investigate the properties of MPLE for fully visible BMs, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.

  7. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) that links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of mean squared error.

  8. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) that links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of mean squared error. PMID:25279263

  9. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  10. Super-spinning compact objects and models of high-frequency quasi-periodic oscillations observed in Galactic microquasars

    NASA Astrophysics Data System (ADS)

    Kotrlová, Andrea; Török, Gabriel; Šrámková, Eva; Stuchlík, Zdeněk

    2014-12-01

    We have previously applied several models of high-frequency quasi-periodic oscillations (HF QPOs) to estimate the spin of the central Kerr black hole in the three Galactic microquasars GRS 1915+105, GRO J1655-40, and XTE J1550-564. Here we explore the alternative possibility that the central compact body is a super-spinning object (or a naked singularity) with the external space-time described by Kerr geometry with a dimensionless spin parameter a ≡ cJ/GM² > 1. We calculate the relevant spin intervals for a subset of HF QPO models considered in the previous study. Our analysis indicates that for all but one of the considered models there exists at least one interval of a > 1 that is compatible with constraints given by the ranges of the central compact object mass independently estimated for the three sources. For most of the models, the inferred values of a are several times higher than the extreme Kerr black hole value a = 1. These values may be too high, since the spin of superspinars is often assumed to decrease rapidly due to accretion when a ≫ 1. In this context, we conclude that only the epicyclic and the Keplerian resonance models provide estimates that are compatible with the expectation of just a small deviation from a = 1.

  11. Climatic implications of glacial evolution in the Tröllaskagi peninsula (northern Iceland) since the Little Ice Age maximum. The cases of the Gljúfurárjökull and Tungnahryggsjökull glaciers

    NASA Astrophysics Data System (ADS)

    Fernández-Fernández, José M.; Andrés, Nuria; Brynjólfsson, Skafti; Sæmundsson, Þorsteinn; Palacios, David

    2017-04-01

    The Tröllaskagi peninsula is located in northern Iceland, between meridians 19°30'W and 18°10'W, jutting out into the North Atlantic to latitude 66°12'N and joining the central highlands to the south. About 150 glaciers on the Tröllaskagi peninsula reached their Holocene maximum extent during the Little Ice Age (LIA) maximum at the end of the 19th century. The sudden warming at the turn of the 20th century triggered a continuous retreat from the LIA maximum positions, interrupted by a reversal during the mid-seventies and eighties in response to a brief period of climate cooling. The aim of this paper is to analyze the relationships between glacial and climatic evolution since the LIA maximum. For this reason, we selected three small debris-free glaciers: Gljúfurárjökull, and western and eastern Tungnahryggsjökull, at the headwalls of Skíðadalur and Kolbeinsdalur, as their absence of debris cover makes them sensitive to climatic fluctuations. To achieve this purpose, we used ArcGIS to map the glacier extent during the LIA maximum and at several later dates, using four georeferenced aerial photos (1946, 1985, 1994 and 2000) as well as a 2005 SPOT satellite image. Then, the Equilibrium-Line Altitude (ELA) was calculated by applying the Accumulation Area Ratio (AAR) and Area Altitude Balance Ratio (AABR) approaches. Climatological data series from nearby weather stations were used to analyze climate development and to estimate precipitation at the ELA with different numerical models. Our results show considerable changes in the three debris-free glaciers and demonstrate their sensitivity to climatic fluctuations. As a result of the abrupt climatic transition of the 20th century, the following warm 25-year period and the warming that started in the late eighties, the three glaciers retreated by ca. 990-1330 m from the LIA maximum to 2005, accompanied by a 40-metre ELA rise and a reduction of their area and volume of 25% and 33% on average, respectively. The 1.5 °C warming recorded at the city of Akureyri from the late 19th century to 2005 does not agree with the 0.3 °C value obtained from the ELA rise and lapse rate; rather, it demonstrates that other factors, for example precipitation and wind, have been decisive in the evolution of the glaciers. All the models applied suggest a precipitation increase of 700 mm water equivalent at the mean ELA since the LIA maximum, with higher and lower values during warm and cold periods respectively. The overall increase in precipitation is compatible with the increase in the surface temperature of the North Atlantic and a possible negative-to-positive shift in the North Atlantic Oscillation (NAO) mode. However, the link between winter accumulation and the prevailing wind directions recorded at nearby weather stations remains unclear. Research funded by the Deglaciation project (CGL2015-65813-R), Government of Spain.
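
    Of the two ELA methods named above, the AAR approach is the simpler to illustrate: the ELA is the elevation above which a fixed fraction of the glacier area lies. A minimal sketch with an illustrative hypsometry and an assumed ratio of 0.67:

    ```python
    import numpy as np

    band_tops = np.arange(900, 1500, 50)        # elevation bands (m a.s.l.)
    band_area = np.array([2, 4, 7, 10, 14, 16,  # glacier area per band (ha),
                          15, 12, 9, 6, 4, 1],  # illustrative values only
                         dtype=float)

    def ela_from_aar(band_tops, band_area, aar=0.67):
        """Lowest band top where the area above first drops to `aar` of the total."""
        area_above = np.cumsum(band_area[::-1])[::-1] / band_area.sum()
        return band_tops[np.argmax(area_above <= aar)]

    print(ela_from_aar(band_tops, band_area))   # ELA estimate in metres
    ```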

  12. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    PubMed

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in exercise tests of apparently healthy individuals is more properly estimated by the 220-age formula (Åstrand) or by the Sheffield table. Retrospective analysis of the clinical histories and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. The maximum heart rates of individuals studied under the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, were compared with the values estimated by the 220-age formula versus the Sheffield table. The maximum heart rate was similar with both protocols, and in normal individuals this parameter was better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
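
    A minimal sketch of the rule the study favors (the Sheffield table itself is not reproduced here):

    ```python
    def hr_max_220_minus_age(age_years: int) -> int:
        """Theoretical maximum heart rate (beats/min) by the 220-age formula."""
        return 220 - age_years

    # Example: a 45-year-old's predicted maximum heart rate.
    print(hr_max_220_minus_age(45))  # -> 175
    ```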

  13. Pharmacokinetic profile of nifedipine GITS in hypertensive patients with chronic renal impairment.

    PubMed

    Schneider, R; Stolero, D; Griffel, L; Kobelt, R; Brendel, E; Iaina, A

    1994-01-01

    Twenty-five hypertensive patients with normal or impaired renal function underwent pharmacokinetic and safety studies after single- and multiple-dose administration of nifedipine GITS (Gastro-Intestinal Therapeutic System) 60 mg tablets. Complete pharmacokinetic data were obtained from 23 of these patients. Blood pressure and heart rate changes were compatible with the known properties of the drug. Impaired renal function did not affect the maximum plasma concentrations or bioavailability of nifedipine after single- or multiple-dose administration of nifedipine GITS, nor was there any evidence of excessive drug accumulation in the presence of renal impairment.

  14. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  15. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  16. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Treesearch

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter at breast height class, delivered product prices by species and product...

  17. Minimax estimation of qubit states with Bures risk

    NASA Astrophysics Data System (ADS)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity-based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/√n for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and on collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimated eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques, which allow us to derive upper and lower bounds on its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider the quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n⁻¹ log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the 'standard' rate n⁻¹.

  18. Estimating landscape carrying capacity through maximum clique analysis

    USGS Publications Warehouse

    Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.

    2012-01-01

    Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting, information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in a 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
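
    A minimal sketch of the graph construction and maximum clique step, assuming networkx and using toy points in place of the pseudo-home-range centres derived from an HS map:

    ```python
    import itertools
    import networkx as nx

    points = [(0, 0), (1, 0), (0, 1), (3, 3), (4, 3), (3, 4), (8, 8)]
    min_sep = 1.5  # two home ranges conflict if centres are closer than this

    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 >= min_sep:
            G.add_edge(i, j)  # compatible: both territories can coexist

    # The largest set of mutually compatible home ranges = carrying capacity.
    clique, size = nx.max_weight_clique(G, weight=None)
    print(size, clique)
    ```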

  19. Tin recycling in the United States in 1998

    USGS Publications Warehouse

    Carlin, James F.

    2001-01-01

    This materials flow study includes a description of tin supply and demand factors for the United States to illustrate the extent of tin recycling and to identify recycling trends. Understanding the flow of materials from source to ultimate disposition can assist in improving the management of the use of natural resources in a manner that is compatible with sound environmental practices. The quantity of tin recycled in 1998 as a percentage of apparent tin supply was estimated to be about 22%, and recycling efficiency was estimated to be 75%. Of the total tin consumed in products for the U.S. market in 1998, an estimated 12% was consumed in products where the tin was not recyclable (dissipative uses).

  20. Making Sense of Palaeoclimate Sensitivity

    NASA Technical Reports Server (NTRS)

    Rohling, E. J.; Sluijs, A.; DeConto, R.; Drijfhout, S. S.; Fedorov, A.; Foster, G. L.; Ganopolski, A.; Hansen, J.; Honisch, B.; Hooghiemstra, H.; hide

    2012-01-01

    Many palaeoclimate studies have quantified pre-anthropogenic climate change to calculate climate sensitivity (equilibrium temperature change in response to radiative forcing change), but a lack of consistent methodologies produces a wide range of estimates and hinders comparability of results. Here we present a stricter approach to improve the intercomparison of palaeoclimate sensitivity estimates in a manner compatible with equilibrium projections for future climate change. Over the past 65 million years, this reveals a climate sensitivity (in K W⁻¹ m²) of 0.3-1.9 or 0.6-1.3 at 95% or 68% probability, respectively. The latter implies a warming of 2.2-4.8 K per doubling of atmospheric CO2, which agrees with IPCC estimates.

  1. ABO, Rhesus, and Kell Antigens, Alleles, and Haplotypes in West Bengal, India

    PubMed Central

    Basu, Debapriya; Datta, Suvro Sankha; Montemayor, Celina; Bhattacharya, Prasun; Mukherjee, Krishnendu; Flegel, Willy A.

    2018-01-01

    Background Few studies have documented the blood group antigens in the population of eastern India, and the frequencies of some common alleles and haplotypes were unknown. We describe phenotype, allele, and haplotype frequencies in the state of West Bengal, India. Methods We tested 1,528 blood donors at the Medical College Hospital, Kolkata. The common antigens of the ABO, Rhesus, and Kell blood group systems were determined by standard serologic methods in tubes. Allele and haplotype frequencies were calculated with an iterative method that yielded maximum-likelihood estimates under the assumption of Hardy-Weinberg equilibrium. Results The prevalence of the ABO antigens was B (34%), O (32%), A (25%), and AB (9%), with ABO allele frequencies of O = 0.567, A = 0.189, and B = 0.244. The D antigen (RH1) was observed in 96.6% of the blood donors, with RH haplotype frequencies such as CDe = 0.688809, cde = 0.16983, and CdE = 0.000654. The K antigen (K1) was observed in 12 donors (0.79%), with KEL allele frequencies of K = 0.004 and k = 0.996. Conclusions For the Bengali population living in the south of West Bengal, we established the frequencies of the major clinically relevant antigens in the ABO, Rhesus, and Kell blood group systems and derived estimates for the underlying ABO and KEL alleles and RH haplotypes. Such blood donor screening will improve the availability of compatible red cell units for transfusion. Our approach, using widely available routine methods, can readily be applied in other regions where a sufficient supply of blood typed for the Rh and K antigens is lacking. PMID:29593462
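
    The iterative maximum-likelihood method referred to above is commonly implemented as gene counting, an EM algorithm. A minimal sketch for the ABO system, seeded with phenotype counts implied by the reported percentages (rounded):

    ```python
    # Approximate phenotype counts out of 1,528 donors (25% A, 34% B, 9% AB, 32% O).
    n_A, n_B, n_AB, n_O = 382, 520, 137, 489
    n = n_A + n_B + n_AB + n_O

    p, q, r = 1 / 3, 1 / 3, 1 / 3              # freq(A), freq(B), freq(O)
    for _ in range(100):
        # E-step: expected genotype counts hidden within the A and B phenotypes,
        # under Hardy-Weinberg proportions.
        nAA = n_A * p**2 / (p**2 + 2 * p * r)
        nAO = n_A - nAA
        nBB = n_B * q**2 / (q**2 + 2 * q * r)
        nBO = n_B - nBB
        # M-step: re-count alleles (2n alleles in total).
        p = (2 * nAA + nAO + n_AB) / (2 * n)
        q = (2 * nBB + nBO + n_AB) / (2 * n)
        r = (2 * n_O + nAO + nBO) / (2 * n)

    print(f"O={r:.3f} A={p:.3f} B={q:.3f}")    # close to the reported values
    ```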

  2. Modification of inertial oscillations by the mesoscale eddy field

    NASA Astrophysics Data System (ADS)

    Elipot, Shane; Lumpkin, Rick; Prieto, Germán

    2010-09-01

    The modification of near-surface near-inertial oscillations (NIOs) by the geostrophic vorticity is studied globally from an observational standpoint. Surface drifters are used to estimate NIO characteristics. Despite its limited spatial resolution, altimetry is used to estimate the geostrophic vorticity. Three characteristics of NIOs are considered: the relative frequency shift with respect to the local inertial frequency; the near-inertial variance; and the inverse excess bandwidth, which is interpreted as a decay time scale. The geostrophic mesoscale flow shifts the frequency of NIOs by approximately half its vorticity. Equatorward of 30°N and S, this effect is added to a global pattern of blue shift of NIOs. While the global pattern of near-inertial variance is interpretable in terms of wind forcing, it is also observed that the geostrophic vorticity organizes the near-inertial variance; it is maximum for near-zero values of the Laplacian of the vorticity and decreases for nonzero values, albeit less for positive than for negative values. Because the Laplacian of vorticity and vorticity are anticorrelated in the altimeter data set, overall, more near-inertial variance is found in anticyclonic vorticity regions than in cyclonic regions. While this is compatible with anticyclones trapping NIOs, the organization of near-inertial variance by the Laplacian of vorticity is also in very good agreement with previous theoretical and numerical predictions. The inverse bandwidth is a decreasing function of the gradient of vorticity, which acts like the gradient of planetary vorticity to increase the decay of NIOs from the ocean surface. Because the altimetry data set captures the largest vorticity gradients in energetic mesoscale regions, it is also observed that NIOs decay faster in regions of large geostrophic eddy kinetic energy.

  3. Assessing Potential Habitat and Carrying Capacity for Reintroduction of Plains Bison (Bison bison bison) in Banff National Park

    PubMed Central

    Steenweg, Robin; Hebblewhite, Mark; Gummer, David; Low, Brian; Hunt, Bill

    2016-01-01

    Interest in bison (Bison bison, B. bonasus) conservation and restoration continues to grow globally. In Canada, plains bison (B. b. bison) are threatened, occupying less than 0.5% of their former range. The largest threat to their recovery is the lack of habitat in which they are considered compatible with current land uses. Fences and direct management make range expansion by most bison impossible. Reintroduction of bison into previously occupied areas that remain suitable, therefore, is critical for bison recovery in North America. Banff National Park is recognized as historical range of plains bison and has been identified as a potential site for reintroduction of a wild population. To evaluate habitat quality and assess whether there is sufficient habitat for a breeding population, we developed a Habitat Suitability Index (HSI) model for the proposed reintroduction and surrounding areas in Banff National Park (Banff). We then synthesize previous studies on habitat relationships, forage availability, bison energetics and snowfall scenarios to estimate nutritional carrying capacity. Considering constraints on nutritional carrying capacity, the most realistic scenario that we evaluated resulted in an estimated maximum bison density of 0.48 bison/km². This corresponds to sufficient habitat to support at least 600 to 1000 plains bison, which could be one of the 10 largest plains bison populations in North America. Within Banff, there is spatial variation in predicted bison habitat suitability and population size that suggests one potential reintroduction site as the most likely to be successful from a habitat perspective. The successful reintroduction of bison into Banff would represent a significant global step towards conserving this iconic species, and our approach provides a useful template for evaluating potential habitat for other endangered species reintroductions into their former range. PMID:26910226

  4. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-09-20

    A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving nonlinear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filtered state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
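
    The robustness mechanism here is the correntropy cost itself. A minimal sketch of the maximum correntropy criterion for a simple location estimate (not the full MCUKF), showing how the Gaussian kernel down-weights impulsive outliers:

    ```python
    import numpy as np

    def mcc_location(x, sigma=1.0, iters=50):
        """Fixed-point iteration for argmax_theta sum_i exp(-(x_i-theta)^2 / (2 sigma^2))."""
        theta = np.median(x)                      # robust starting point
        for _ in range(iters):
            w = np.exp(-((x - theta) ** 2) / (2 * sigma**2))
            theta = np.sum(w * x) / np.sum(w)     # kernel-weighted mean
        return theta

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(5.0, 1.0, 200),   # nominal measurements
                        rng.normal(50.0, 5.0, 20)])  # impulsive outliers
    print(np.mean(x), mcc_location(x))  # the mean is dragged away; MCC stays near 5
    ```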

  5. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    PubMed Central

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-01-01

    A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving nonlinear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filtered state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm. PMID:27657069

  6. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer-month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
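
    A minimal sketch of the underlying idea, fitting a logistic model for the probability that summer flow falls below a drought threshold given winter flow; the data and coefficients are synthetic stand-ins for the report's per-basin equations:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    winter_flow = rng.lognormal(mean=3.0, sigma=0.5, size=300)  # Nov-Feb mean flow
    # Synthetic rule plus noise: dry winters tend to precede low summer flows.
    below_threshold = (winter_flow + rng.normal(0, 5, 300) < 20).astype(int)

    model = LogisticRegression().fit(np.log(winter_flow).reshape(-1, 1),
                                     below_threshold)
    p = model.predict_proba(np.log([[12.0]]))[0, 1]
    print(f"P(summer flow below drought threshold | winter flow 12) = {p:.2f}")
    ```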

  7. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of handwritten Chinese characters is at such a primitive stage that models include some assumptions about handwritten Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for handwritten Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of handwritten Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.

  8. Estimation of Compaction Parameters Based on Soil Classification

    NASA Astrophysics Data System (ADS)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These problems raise the question of how to estimate the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of the 30 samples was tested for its index properties and compaction behavior. All the laboratory test results were used to estimate the compaction parameter values by linear regression and by the Goswami model. The soils were classified as A-4, A-6, and A-7 according to AASHTO, and as SC, SC-SM, and CL according to USCS. Linear regression gave the estimates γdmax* = 1.862 - 0.005*FINES - 0.003*LL and wopt* = -0.607 + 0.362*FINES + 0.161*LL. The Goswami model (of the form Y = m*log G + k) gave m = -0.376 and k = 2.482 for the estimation of γdmax*, and m = 21.265 and k = -32.421 for the estimation of wopt*. For both sets of equations a 95% confidence interval was obtained.
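
    A direct transcription of the reported equations (FINES and LL in percent); the Goswami predictor G is not defined in the abstract, so the corresponding function below is a form-only placeholder:

    ```python
    import math

    def gamma_dmax_linear(fines: float, ll: float) -> float:
        """Linear-regression estimate of maximum dry unit weight."""
        return 1.862 - 0.005 * fines - 0.003 * ll

    def wopt_linear(fines: float, ll: float) -> float:
        """Linear-regression estimate of optimum water content (%)."""
        return -0.607 + 0.362 * fines + 0.161 * ll

    def goswami(g: float, m: float, k: float) -> float:
        """Goswami model Y = m*log10(G) + k; G is the study's grouping predictor."""
        return m * math.log10(g) + k

    # Example for a hypothetical soil with 45% fines and LL = 32.
    print(gamma_dmax_linear(45, 32), wopt_linear(45, 32))
    ```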

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G P; Logan, C M

    We have estimated interference from external background radiation for a computed tomography (CT) scanner. Our intention is to estimate the interference that would be expected for the high-resolution SkyScan 1072 desk-top x-ray microtomography system. The SkyScan system uses a Microfocus x-ray source capable of a 10-µm focal spot at a maximum current of 0.1 mA and a maximum energy of 130 kVp. All predictions made in this report assume operating the x-ray source at the smallest spot size, maximum energy, and maximum current. The basic system geometry used for these estimates is: (1) source-to-detector distance: 250 mm; (2) minimum object-to-detector distance: 40 mm; and (3) maximum object-to-detector distance: 230 mm. This is a first-order, rough estimate of the quantity of interference expected at the system detector caused by background radiation. The amount of interference is expressed as the ratio of exposures expected at the detector of the CT system. The exposure values for the SkyScan system are determined by scaling the measured values from an x-ray source and the background radiation, adjusting for the differences in source-to-detector distance and current. The x-ray source used for these measurements was not the SkyScan Microfocus x-ray tube; measurements were made using an x-ray source operated at the same applied voltage but higher current for better statistics.
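
    A minimal sketch of the scaling step described above, using the inverse-square law for distance and linear proportionality to tube current; the surrogate measurement values are illustrative assumptions:

    ```python
    def scale_exposure(measured_mR, d_measured_mm, d_target_mm,
                       i_measured_mA, i_target_mA):
        """Rescale an exposure reading to a new distance and tube current."""
        distance_factor = (d_measured_mm / d_target_mm) ** 2  # inverse square law
        current_factor = i_target_mA / i_measured_mA          # linear in current
        return measured_mR * distance_factor * current_factor

    # Surrogate measurement at 500 mm and 1.0 mA, rescaled to the SkyScan's
    # 250-mm source-to-detector distance and 0.1-mA maximum current.
    print(scale_exposure(2.0, 500.0, 250.0, 1.0, 0.1))  # -> 0.8 mR
    ```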

  10. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so that the ensuing spring PMF is a reasonable estimate. This is of particular importance in times of climate change (CC), since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization is the core concept of the proposed methodology, with precipitable water as the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ratio used to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time steps used (the so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool for estimating the relative change of the PMSA. Absolute results are of the same order of magnitude as those obtained with the traditional method and observed data, but are also found to depend strongly on the climate projection used and to show spatial variability.
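
    Moisture maximization, the core concept named above, scales a storm by the ratio of maximum to observed precipitable water. A minimal sketch with illustrative values:

    ```python
    def maximized_snowfall(storm_swe_mm, storm_pw_mm, max_pw_mm, cap=2.0):
        """Storm snow water equivalent scaled by the moisture ratio.

        The ratio is often capped (here at an assumed 2.0) so that a single
        storm is not maximized beyond physical plausibility.
        """
        ratio = min(max_pw_mm / storm_pw_mm, cap)
        return storm_swe_mm * ratio

    # A 60-mm SWE snowstorm that occurred with 8 mm of precipitable water,
    # maximized with a monthly maximum precipitable water of 12 mm.
    print(maximized_snowfall(60.0, 8.0, 12.0))  # -> 90.0 mm
    ```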

  11. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function towards systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as the precision parameter increases, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and its mean inclination estimates are the least biased towards shallow values.
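
    A sketch of the numerically stable evaluation described above, assuming the marginal Fisher density commonly used for inclination-only data (not necessarily the authors' exact expression); the exponentially scaled Bessel function i0e lets the growing exponential terms cancel analytically:

    ```python
    # Assumed marginal form:
    #   f(I) proportional to (kappa / (2 sinh kappa)) * cos(I)
    #                        * exp(kappa sin(I) sin(theta)) * I0(kappa cos(I) cos(theta))
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import i0e

    def neg_loglik(params, inc_rad):
        theta, kappa = params                   # mean inclination, precision
        x = kappa * np.cos(inc_rad) * np.cos(theta)
        # log(2 sinh k) = k + log(1 - exp(-2k)); the leading k cancels below.
        log_norm = np.log(kappa) - (kappa + np.log1p(-np.exp(-2 * kappa)))
        ll = (log_norm + np.log(np.cos(inc_rad))
              + kappa * np.sin(inc_rad) * np.sin(theta)
              + np.log(i0e(x)) + np.abs(x))     # log I0(x) without overflow
        return -np.sum(ll)

    inc = np.radians([62, 68, 71, 75, 78, 80, 83])   # example inclinations
    fit = minimize(neg_loglik, x0=[np.radians(70), 10.0], args=(inc,),
                   bounds=[(0.01, np.pi / 2 - 0.01), (0.1, 1e4)])
    print(np.degrees(fit.x[0]), fit.x[1])      # mean inclination (deg), kappa
    ```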

  12. Beyond SaGMRotI: Conversion to SaArb, SaSN, and SaMaxRot

    USGS Publications Warehouse

    Watson-Lamprey, J. A.; Boore, D.M.

    2007-01-01

    In the seismic design of structures, estimates of design forces are usually provided to the engineer in the form of elastic response spectra. Predictive equations for elastic response spectra are derived from empirical recordings of ground motion. The geometric mean of the two orthogonal horizontal components of motion is often used as the response value in these predictive equations, although it is not necessarily the most relevant estimate of the forces within the structure. For some applications it is desirable to estimate the response value on a randomly chosen single component of ground motion, and in other applications the maximum response in a single direction is required. We give adjustment factors that allow converting geometric-mean ground-motion predictions into either of these other two measures of seismic ground-motion intensity. In addition, we investigate the relation of the strike-normal component of ground motion to the maximum response values. We show that the strike-normal component of ground motion seldom corresponds to the maximum horizontal-component response value (in particular, at distances greater than about 3 km from faults), and that focusing on this case to the exclusion of others can result in underestimation of the maximum component. This research provides estimates of the maximum response value of a single component for all cases, not just near-fault strike-normal components. We provide modification factors that can be used to convert predictions of ground motions in terms of the geometric mean to the maximum spectral acceleration (SaMaxRot) and the random component of spectral acceleration (SaArb). Included are modification factors for both the mean and the aleatory standard deviation of the logarithm of the motions.

  13. Impact of air temperature on physically-based maximum precipitation estimation through change in moisture holding capacity of air

    NASA Astrophysics Data System (ADS)

    Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.

    2018-01-01

    The impact of air temperature on Maximum Precipitation (MP) estimation through changes in the moisture holding capacity of air was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of the MP estimation approach, which utilizes a physically-based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments for the two severest maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, a monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise assumed to occur under future climate change conditions. Air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% by the middle of the 21st century and by 27.3% by the end of the 21st century compared to the historical period.

  14. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local versus global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, the correlation curves of Doksum et al., and the local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
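
    A minimal sketch of the two-stage idea, with a Clayton copula and exponential margins as illustrative assumptions (not the models of the thesis): fit the margins first, transform to uniforms, then maximize the copula likelihood in the dependence parameter alone:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)
    # Synthetic bivariate failure times with Clayton dependence (gamma frailty).
    theta_true = 2.0
    g = rng.gamma(1 / theta_true, 1.0, 500)
    e1, e2 = rng.exponential(1.0, 500), rng.exponential(1.0, 500)
    u1 = (1 + e1 / g) ** (-1 / theta_true)      # Clayton-dependent uniforms
    u2 = (1 + e2 / g) ** (-1 / theta_true)
    t1, t2 = -np.log(u1) / 0.5, -np.log(u2) / 1.5  # exponential margins

    # Stage 1: marginal MLEs (exponential rate = 1/mean), then PIT to uniforms.
    u = 1 - np.exp(-t1 / t1.mean())
    v = 1 - np.exp(-t2 / t2.mean())

    def neg_copula_loglik(theta):
        # Clayton copula log-density, theta > 0.
        s = u ** (-theta) + v ** (-theta) - 1
        return -np.sum(np.log(1 + theta)
                       - (theta + 1) * (np.log(u) + np.log(v))
                       - (2 + 1 / theta) * np.log(s))

    # Stage 2: maximize over theta with the margins held fixed.
    fit = minimize_scalar(neg_copula_loglik, bounds=(0.05, 20), method="bounded")
    print(fit.x)  # close to theta_true = 2.0
    ```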

  15. Estimating Rhododendron maximum L. (Ericaceae) Canopy Cover Using GPS/GIS Technology

    Treesearch

    Tyler J. Tran; Katherine J. Elliott

    2012-01-01

    In the southern Appalachians, Rhododendron maximum L. (Ericaceae) is a key evergreen understory species, often forming a subcanopy in forest stands. Little is known about the significance of R. maximum cover in relation to other forest structural variables. Only recently have studies used Global Positioning System (GPS) technology...

  16. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…

  17. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all potential natural hazards, flooding is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is therefore being paid to determining flow estimates associated with pre-specified return periods, so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by fitting an appropriate probability density function to the observed annual maximum flow data, is frequently used to obtain these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood-prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. This inevitably leads to flow estimates that are subject to error, especially in the case of high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time series. These errors can then be used to better evaluate the return period flows in data-limited streams. The study findings therefore have important implications for hydrologists, water resources engineers and floodplain managers.
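
    A minimal sketch of the fitting step named above: a log-Pearson III distribution fitted by moments in log space to a synthetic annual-maximum series, then queried for a return-period quantile:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    annual_max = rng.lognormal(mean=6.0, sigma=0.4, size=40)  # 40 years, m3/s

    logq = np.log10(annual_max)
    skew = stats.skew(logq, bias=False)
    # Pearson III on the log flows; scipy's parameterization takes the mean
    # as loc and the standard deviation as scale.
    q100 = 10 ** stats.pearson3.ppf(1 - 1 / 100, skew,
                                    loc=logq.mean(), scale=logq.std(ddof=1))
    print(f"Estimated 100-year flow: {q100:.0f} m^3/s")
    ```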

  18. A double-gaussian, percentile-based method for estimating maximum blood flow velocity.

    PubMed

    Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D

    2013-11-01

    Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences, especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise distribution from the signal distribution. The time series of a given percentile of the latter then provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., the 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
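
    A minimal sketch of the double-gaussian, percentile-based idea on synthetic samples: fit a two-component mixture, take the higher-mean component as the signal, and report its upper percentiles as envelope candidates:

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    noise = rng.normal(20.0, 8.0, 3000)      # background/noise velocities (cm/s)
    signal = rng.normal(95.0, 12.0, 1000)    # true flow velocities (cm/s)
    samples = np.concatenate([noise, signal]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
    sig = int(np.argmax(gmm.means_.ravel()))          # signal = higher-mean component
    mu = gmm.means_.ravel()[sig]
    sd = np.sqrt(gmm.covariances_.ravel()[sig])

    for pct in (90, 95, 99):                          # envelope percentiles
        print(pct, round(norm.ppf(pct / 100, mu, sd), 1))
    ```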

  19. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; MariñO, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification, it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and the extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
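
    Of the estimators named above, two-stage least squares is the simplest to sketch. A generic hand-coded 2SLS on synthetic data (not the groundwater-specific design of the paper), where the regressor is correlated with the error term:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 500
    z = rng.normal(size=n)                       # instrument
    e = rng.normal(size=n)                       # structural error
    x = 0.9 * z + 0.5 * e + rng.normal(size=n)   # endogenous regressor
    y = 2.0 + 1.5 * x + e                        # structural equation, beta = 1.5

    Z = np.column_stack([np.ones(n), z])
    X = np.column_stack([np.ones(n), x])

    # Stage 1: project the endogenous regressor onto the instruments.
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: OLS of y on the fitted regressors.
    beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    print(beta_ols[1], beta_2sls[1])  # OLS is biased; 2SLS is near 1.5
    ```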

  20. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, signal activity is typically estimated by summing voxel values from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal's size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  1. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
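
    For context, a minimal sketch of the underlying ability MLE under a two-parameter logistic model, with item parameters treated as known constants, is given below; the record's point is precisely that those constants are themselves estimates, so the resulting ability estimator inherits extra bias. All item values and responses are invented.

      import numpy as np

      a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # item discriminations
      b = np.array([-0.5, 0.0, 0.3, 1.0, -1.2])  # item difficulties
      u = np.array([1, 1, 0, 0, 1])              # observed 0/1 responses

      theta = 0.0
      for _ in range(25):                        # Newton-Raphson on theta
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          score = np.sum(a * (u - p))            # d loglik / d theta
          info = np.sum(a**2 * p * (1 - p))      # Fisher information
          theta += score / info
          if abs(score / info) < 1e-8:
              break

      print(f"ability MLE: {theta:.3f}, SE ~ {1 / np.sqrt(info):.3f}")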

  2. Methodology and implications of maximum paleodischarge estimates for mountain channels, upper Animas River basin, Colorado, U.S.A.

    USGS Publications Warehouse

    Pruess, J.; Wohl, E.E.; Jarrett, R.D.

    1998-01-01

    Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m³ s⁻¹ km⁻² around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent.

  3. Lateral stability and control derivatives of a jet fighter airplane extracted from flight test data by utilizing maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1972-01-01

    A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.

  4. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.

  5. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  6. Sleep Estimates Using Microelectromechanical Systems (MEMS)

    PubMed Central

    te Lindert, Bart H. W.; Van Someren, Eus J. W.

    2013-01-01

    Study Objectives: Although currently more affordable than polysomnography, actigraphic sleep estimates have disadvantages. Brand-specific differences in data reduction impede pooling of data in large-scale cohorts and may not fully exploit movement information. Sleep estimate reliability might be improved by advanced analyses of three-axial, linear accelerometry data sampled at a high rate, which is now feasible using microelectromechanical systems (MEMS). However, it might take some time before these analyses become available. To provide ongoing studies with backward compatibility while already switching from actigraphy to MEMS accelerometry, we designed and validated a method to transform accelerometry data into the traditional actigraphic movement counts, thus allowing for the use of validated algorithms to estimate sleep parameters. Design: Simultaneous actigraphy and MEMS-accelerometry recording. Setting: Home, unrestrained. Participants: Fifteen healthy adults (23-36 y, 10 males, 5 females). Interventions: None. Measurements: Actigraphic movement counts/15-sec and 50-Hz digitized MEMS-accelerometry. Analyses: Passing-Bablok regression optimized transformation of MEMS-accelerometry signals to movement counts. Kappa statistics calculated agreement between individual epochs scored as wake or sleep. Bland-Altman plots evaluated reliability of common sleep variables both between and within actigraphs and MEMS-accelerometers. Results: Agreement between epochs was almost perfect at the low, medium, and high threshold (kappa = 0.87 ± 0.05, 0.85 ± 0.06, and 0.83 ± 0.07). Sleep parameter agreement was better between two MEMS-accelerometers or a MEMS-accelerometer and an actigraph than between two actigraphs. Conclusions: The algorithm allows for continuity of outcome parameters in ongoing actigraphy studies that consider switching to MEMS-accelerometers. Its implementation makes backward compatibility feasible, while collecting raw data that, in time, could provide better sleep estimates and promote cross-study data pooling. Citation: te Lindert BHW; Van Someren EJW. Sleep estimates using microelectromechanical systems (MEMS). SLEEP 2013;36(5):781-789. PMID:23633761

  7. Maximum Neutral Buoyancy Depth of Juvenile Chinook Salmon: Implications for Survival during Hydroturbine Passage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.

    This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimates: the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. The SBRT entailed removing the swim bladder and artificially increasing its volume through decompression. From these tests, we estimate that the maximum acclimation depth for juvenile Chinook salmon is a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and the survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.

  8. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
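
    The second step of the algorithm is easy to sketch generically: if the profile log-likelihood is quasi-concave in the antimode, a bisection-style search over that single variable finds the global maximizer. The golden-section variant below uses a toy profile function in place of the support-reduction inner maximization.

      import numpy as np

      def profile(antimode):
          return -(antimode - 2.3) ** 2          # toy quasi-concave profile

      lo, hi = 0.0, 10.0
      phi = (np.sqrt(5) - 1) / 2                 # golden ratio
      x1, x2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
      while hi - lo > 1e-8:
          if profile(x1) < profile(x2):          # maximum lies right of x1
              lo, x1 = x1, x2
              x2 = lo + phi * (hi - lo)
          else:                                  # maximum lies left of x2
              hi, x2 = x2, x1
              x1 = hi - phi * (hi - lo)

      print(f"estimated antimode: {(lo + hi) / 2:.4f}")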

  9. Heritable and non-heritable genetic effects on retained placenta in Meuse-Rhine-Yssel cattle.

    PubMed

    Benedictus, L; Koets, A P; Kuijpers, F H J; Joosten, I; van Eldik, P; Heuven, H C M

    2013-02-01

    Failure of the timely expulsion of the fetal membranes, called retained placenta, leads to reduced fertility, increased veterinary costs and reduced milk yields. The objectives of this study were to concurrently examine the heritable and non-heritable genetic effects on retained placenta and to test the hypothesis that a greater coefficient of relationship between dam and calf increases the risk of retained placenta in the dam. The average incidence of retained placenta in 43,661 calvings of Meuse-Rhine-Yssel cattle was 4.5%, ranging from 0% to 29.6% among half-sib groups. The average pedigree-based relationship between the sire and the maternal grandsire was 0.05 and ranged from 0 to 1.04. Using a sire-maternal grandsire model, the heritability was estimated at 0.22 (SEM = 0.07), which is comparable with estimates for other dual-purpose breeds. The coefficient of relationship between the sire and the maternal grandsire had an effect on retained placenta. The coefficient of relationship between the sire and the maternal grandsire was used as a proxy for the coefficient of relationship between dam and calf, which is correlated with the probability of major histocompatibility complex (MHC) class I compatibility between dam and calf. MHC class I compatibility is an important risk factor for retained placenta. Although the MHC class I haplotype is genetically determined, MHC class I compatibility is not heritable. This study shows that selection against retained placenta is possible and indicates that preventing the mating of related parents may play a role in the prevention of retained placenta. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    PubMed

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensure spatial compatibility, that is, that monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
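
    The resampling logic is easy to sketch on simulated data: fit the exposure model, predict exposures for subjects, fit the health model, and repeat both stages on each bootstrap draw. The linear exposure and health models below are deliberate simplifications of the paper's spatiotemporal model, and all numbers are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      n_mon, n_sub = 50, 2000
      mon_x = rng.uniform(0, 1, n_mon)            # covariate at monitors
      mon_pm = 10 + 4 * mon_x + rng.normal(0, 1, n_mon)
      sub_x = rng.uniform(0, 1, n_sub)            # covariate at subjects
      true_pm = 10 + 4 * sub_x + rng.normal(0, 1, n_sub)
      bw = 3300 - 2.5 * true_pm + rng.normal(0, 400, n_sub)  # birth weight, g

      def two_stage(mx, mpm, sx, y):
          b = np.polyfit(mx, mpm, 1)              # stage 1: exposure model
          pred = np.polyval(b, sx)                # predicted exposures
          return np.polyfit(pred, y, 1)[0]        # stage 2: health effect

      est = two_stage(mon_x, mon_pm, sub_x, bw)
      boot = []
      for _ in range(500):                        # re-fit both stages per draw
          i = rng.integers(0, n_mon, n_mon)       # resample monitors
          j = rng.integers(0, n_sub, n_sub)       # resample subjects
          boot.append(two_stage(mon_x[i], mon_pm[i], sub_x[j], bw[j]))

      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"effect {est:.2f} g per ug/m3, 95% CI ({lo:.2f}, {hi:.2f})")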

  11. Evaluation of the Performance of the Mars Environmental Compatibility Assessment Electrometer

    NASA Technical Reports Server (NTRS)

    Mantovani, James G.

    2002-01-01

    The Mars Environmental Compatibility Assessment (MECA) electrometer is an instrument that was designed jointly by researchers at the Jet Propulsion Laboratory and the Kennedy Space Center, and is intended to fly on a future space exploration mission of the surface of Mars. The electrometer was designed primarily to study (1) the electrostatic interaction between the Martian soil and five different types of insulators, which are attached to the electrometer, as the electrometer is rubbed over the Martian soil. The MECA/Electrometer is also capable of measuring (2) the presence of charged particles in the Martian atmosphere, (3) the local electric field strength, and (4) the local temperature. The goal of the research project described in this report was to test and evaluate the measurement capabilities of the MECA/Electrometer under simulated Martian surface conditions using facilities located in the Labs and Testbeds Division at the Kennedy Space Center. The results of this study indicate that the Martian soil simulant can triboelectrically charge up the insulator surface. However, the maximum charge buildup did not exceed 18% of the electrometer's full-range sensitivity when rubbed vigorously, and was more likely to be as low as 1% of the maximum range when rubbed through soil. This indicates that the overall gain of the MECA/Electrometer could be increased by a factor of 50 if measurements at the 50% level of full-range sensitivity are desired. The ion gauge, which detects the presence of charged particles, was also evaluated over a pressure range from 10 to 400 Torr (13 to 533 mbar). The electric field sensor was also evaluated. Although the temperature sensor was not evaluated due to project time constraints, it was previously reported to work properly.

  12. Evaluation of The Performance of The Mars Environmental Compatibility Assessment Electrometer

    NASA Technical Reports Server (NTRS)

    Mantovani, James G.

    2001-01-01

    The Mars Environmental Compatibility Assessment (MECA) electrometer is an instrument that was designed jointly by researchers at the Jet Propulsion Laboratory and the Kennedy Space Center, and is intended to fly on a future space exploration mission of the surface of Mars. The electrometer was designed primarily to study (1) the electrostatic interaction between the Martian soil and five different types of insulators, which are attached to the electrometer, as the electrometer is rubbed over the Martian soil. The MECA/Electrometer is also capable of measuring (2) the presence of charged particles in the Martian atmosphere, (3) the local electric field strength, and (4) the local temperature. The goal of the research project described in this report was to test and evaluate the measurement capabilities of the MECA/Electrometer under simulated Martian surface conditions using facilities located in the Labs and Testbeds Division at the Kennedy Space Center. The results of this study indicate that the Martian soil simulant can triboelectrically charge up the insulator surface. However, the maximum charge buildup did not exceed 18% of the electrometer's full-range sensitivity when rubbed vigorously, and was more likely to be as low as 1% of the maximum range when rubbed through soil. This indicates that the overall gain of the MECA/Electrometer could be increased by a factor of 50 if measurements at the 50% level of full-range sensitivity are desired. The ion gauge, which detects the presence of charged particles, was also evaluated over a pressure range from 10 to 400 Torr (13 to 533 mbar). The electric field sensor was also evaluated. Although the temperature sensor was not evaluated due to project time constraints, it was previously reported to work properly.

  13. Learning graph matching.

    PubMed

    Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J

    2009-06-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been the design of efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
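
    The baseline the learning scheme improves on, linear assignment over node-compatibility scores, is a few lines with scipy. The node features and their negative-distance compatibilities below are invented; in the learning setting the compatibility function would be parameterized and fit to human-provided matches.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(3)
      feat1 = rng.normal(size=(5, 4))             # node features, graph 1
      perm = rng.permutation(5)
      feat2 = feat1[perm] + rng.normal(0, 0.1, (5, 4))  # permuted noisy copy

      # cost[i, j] = dissimilarity of node i (graph 1) and node j (graph 2)
      cost = np.linalg.norm(feat1[:, None, :] - feat2[None, :, :], axis=2)
      row, col = linear_sum_assignment(cost)      # Hungarian-style solver

      print("recovered matching:", col)
      print("ground truth:      ", np.argsort(perm))  # inverse permutation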

  14. Integrating cobenefits produced with water quality BMPs into credits markets: Conceptualization and experimental illustration for EPRI's Ohio River Basin Trading

    NASA Astrophysics Data System (ADS)

    Liu, Pengfei; Swallow, Stephen K.

    2016-05-01

    This paper develops a method that incorporates the public value for environmental cobenefits when a conservation buyer can purchase water quality credits based on nonmarket valuation results. We demonstrate this approach through an experiment with adult students in a classroom laboratory environment. Our application contributes to the study of individual preference and willingness to pay for cobenefits associated with the production of water quality credits in relation to the Ohio River Basin Trading Project. We use three different methods to elicit individuals' willingness to pay (WTP): (1) a hypothetical referendum, (2) a real referendum lacking incentive compatibility, and (3) a real choice with incentive compatibility. Methodologically, our WTP estimates suggest individuals are more sensitive to cost changes and reveal the lowest value in the real choice with incentive compatibility. Practically, we find individuals value certain cobenefits and credits as public goods. Incorporating public value toward cobenefits may improve the overall efficiency of a water quality trading market. Based on our specification of a planner's welfare function, results suggest a substantial welfare improvement after identifying an optimal allocation of a buyer's budget across credits derived from agricultural management practices producing different portfolios of cobenefits.

  15. Compatibility of household budget and individual nutrition surveys: results of the preliminary analysis.

    PubMed

    Naska, A; Trichopoulou, A

    2001-08-01

    The EU-supported project entitled "Compatibility of household budget and individual nutrition surveys and disparities in food habits" aimed at comparing individualised household budget survey (HBS) data with food consumption values derived from individual nutrition surveys (INS). The present paper provides a brief description of the methodology applied for rendering the datasets comparable. Results of the preliminary evaluation of their compatibility are also presented. A nonparametric modelling approach was used for the age- and gender-specific individualisation of the food data collected at household level in the context of the national HBSs, and the bootstrap technique was used for the derivation of 95% confidence intervals. For each food group, INS- and HBS-derived mean values were calculated for twenty-four research units, jointly defined by country (four countries involved), gender (male, female) and age (younger, middle-aged and older). Pearson correlation coefficients were calculated. The results of this preliminary analysis show that there is considerable scope in the nutritional information derived from HBSs. Additional and more sophisticated work is, however, required, putting particular emphasis on addressing limitations present in both surveys and on deriving reliable individual consumption point and interval estimates on the basis of HBS data.

  16. Feature space trajectory for distorted-object classification and pose estimation in synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Shenoy, Rajesh

    1997-10-01

    Classification and pose estimation of distorted input objects are considered. The feature space trajectory representation of distorted views of an object is used with a new eigenfeature space. For a distorted input object, the closest trajectory denotes the class of the input and the closest line segment on it denotes its pose. If an input point is too far from a trajectory, it is rejected as clutter. New methods are presented for selecting Fukunaga-Koontz discriminant vectors, for choosing the number of dominant eigenvectors per class, and for determining training and test set compatibility.
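
    The decision rule itself can be sketched directly: each class is a polyline (trajectory) of feature vectors indexed by pose, an input is assigned to the class of the nearest trajectory, and pose is interpolated on the closest segment; thresholding the distance gives clutter rejection. The eight-dimensional random features below stand in for the paper's eigenfeature projections.

      import numpy as np

      def point_segment_dist(p, a, b):
          """Distance from point p to segment ab, and interpolation t."""
          ab = b - a
          t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
          return np.linalg.norm(p - (a + t * ab)), t

      rng = np.random.default_rng(4)
      trajectories = {c: rng.normal(size=(6, 8)) for c in ("tank", "truck")}
      # synthetic input lying between poses 2 and 3 of the "tank" trajectory
      x = 0.6 * trajectories["tank"][2] + 0.4 * trajectories["tank"][3]

      best = min(
          (point_segment_dist(x, T[i], T[i + 1]) + (c, i)
           for c, T in trajectories.items() for i in range(len(T) - 1)),
          key=lambda r: r[0],
      )
      dist, t, cls, seg = best
      print(f"class={cls}, pose between nodes {seg} and {seg + 1} (t={t:.2f})")
      # clutter rejection: reject if dist exceeds a chosen threshold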

  17. Gravitational Waves from Binary Mergers of Subsolar Mass Dark Black Holes

    NASA Astrophysics Data System (ADS)

    Shandera, Sarah; Jeong, Donghui; Gebhardt, Henry S. Grasshorn

    2018-06-01

    We explore the possible spectrum of binary mergers of subsolar mass black holes formed out of dark matter particles interacting via a dark electromagnetism. We estimate the properties of these dark black holes by assuming that their formation process is parallel to Population III star formation, except that dark molecular cooling can yield a smaller opacity limit. We estimate the binary coalescence rates for Advanced LIGO and the Einstein Telescope, and find that scenarios compatible with all current constraints could produce dark black holes at rates high enough for detection by Advanced LIGO.

  18. Development and application of the maximum entropy method and other spectral estimation techniques

    NASA Astrophysics Data System (ADS)

    King, W. R.

    1980-09-01

    This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique. It also describes two new, stable, high-resolution spectral estimation techniques in the final report section. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final report section are named the Wiener-King and Fourier spectral estimation techniques. They have a similar derivation based upon the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.
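
    MEM spectral estimation is usually computed with Burg's recursion, which fits an autoregressive model directly from forward and backward prediction errors; a compact sketch on synthetic data follows. The report's radar records and its new Wiener-King and Fourier techniques are not reproduced here.

      import numpy as np

      def burg(x, order):
          """AR coefficients and error power via Burg's method."""
          x = np.asarray(x, dtype=float)
          f, b = x[1:].copy(), x[:-1].copy()     # forward/backward errors
          a = np.array([1.0])                    # AR polynomial, a[0] = 1
          power = x.dot(x) / len(x)
          for _ in range(order):
              k = -2.0 * f.dot(b) / (f.dot(f) + b.dot(b))  # reflection coeff
              a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
              f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
              power *= 1.0 - k * k
          return a, power

      fs, n = 1000.0, 64                         # short record: 64 samples
      t = np.arange(n) / fs
      x = (np.sin(2 * np.pi * 180 * t) + np.sin(2 * np.pi * 220 * t)
           + 0.3 * np.random.default_rng(5).normal(size=n))
      a, power = burg(x, order=12)

      freqs = np.linspace(0, fs / 2, 512)
      z = np.exp(1j * 2 * np.pi * freqs / fs)
      psd = power / np.abs(np.polyval(a, z)) ** 2   # MEM spectrum
      peaks = freqs[1:-1][(psd[1:-1] > psd[:-2]) & (psd[1:-1] > psd[2:])]
      print("MEM spectral peaks (Hz):", np.round(peaks, 1))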

  19. A modified ATI technique for nowcasting convective rain volumes over areas. [area-time integrals

    NASA Technical Reports Server (NTRS)

    Makarau, Amos; Johnson, L. Ronald; Doneaud, Andre A.

    1988-01-01

    This paper explores the applicability of the area-time-integral (ATI) technique for estimating the growth portion only of a convective storm (while the rain volume is computed using the entire life history of the event) and for nowcasting the total rain volume of a convective system at the stage of its maximum development. For these purposes, the ATIs were computed from the digital radar data (for 1981-1982) from the North Dakota Cloud Modification Project, using the maximum echo area (ATIA) of no less than 25 dBZ, the maximum reflectivity, and the maximum echo height as the end of the growth portion of the convective event. Linear regression analysis demonstrated that the correlations of total rain volume and maximum rain volume with ATIA were the strongest. The uncertainties obtained were comparable to those which typically occur in rain volume estimates obtained from radar data employing Z-R conversion followed by space and time integration. This demonstrates that the total rain volume of a storm can be nowcasted at its maximum stage of development.

  20. Climatic significance of the ostracode fauna from the Pliocene Kap Kobenhavn Formation, north Greenland

    USGS Publications Warehouse

    Brouwers, E.M.; Jorgensen, N.O.; Cronin, T. M.

    1991-01-01

    The Kap Kobenhavn Formation crops out in Greenland at 80°N latitude and marks the most northerly onshore Pliocene locality known. The sands and silts that comprise the formation were deposited in marginal marine and shallow marine environments. An abundant and diverse vertebrate and invertebrate fauna and plant megafossil flora provide age and paleoclimatic constraints. The age estimated for the Kap Kobenhavn ranges from 2.0 to 3.0 million years old. Winter and summer bottom water paleotemperatures were estimated on the basis of the ostracode assemblages. The marine ostracode fauna in units B1 and B2 indicate a subfrigid to frigid marine climate, with estimated minimum sea bottom temperatures (SBT) of -2°C and estimated maximum SBT of 6-8°C. Sediments assigned to unit B2 at locality 72 contain a higher proportion of warm water genera, and the maximum SBT is estimated at 9-10°C. The marginal marine fauna in the uppermost unit B3 (locality 68) indicates a cold temperate to subfrigid marine climate, with an estimated minimum SBT of -2°C and an estimated maximum SBT ranging as high as 12-14°C. These temperatures indicated that, on the average, the Kap Kobenhavn winters in the late Pliocene were similar to or perhaps 1-2°C warmer than winters today and that summer temperatures were 7-8°C warmer than today. -from Authors

  1. Purification and characterization of Bacillus cereus protease suitable for detergent industry.

    PubMed

    Prakash, Monika; Banik, Rathindra Mohan; Koch-Brandt, Claudia

    2005-12-01

    An extracellular alkaline protease from an alkalophilic bacterium, Bacillus cereus, was produced in large amounts by extractive fermentation. The protease is thermostable, pH tolerant, and compatible with commercial laundry detergents. The protease purified and characterized in this study was found to be superior to the endogenous protease already present in commercial laundry detergents. The enzyme was purified to homogeneity by ammonium sulfate precipitation, concentration by ultrafiltration, anion-exchange chromatography, and gel filtration. The purified enzyme had a specific activity of 3256.05 U/mg and was found to be a monomeric protein with a molecular mass of 28 and 31 kDa, as estimated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and nondenaturing PAGE, respectively. Its maximum protease activity against casein was found at pH 10.5 and 50 degrees C. Proteolytic activity of the enzyme was detected by casein and gelatin zymography, which gave a very clear protease activity zone on the gel corresponding to the band obtained on SDS-PAGE and nondenaturing PAGE with a molecular mass of nearly 31 kDa. The purified enzyme was analyzed by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) and identified as a subtilisin-class protease. The enzyme was significantly inhibited by specific serine protease inhibitors, suggesting the presence of serine residues at the active site.

  2. A portable optical reader and wall projector towards enumeration of bio-conjugated beads or cells

    PubMed Central

    McArdle, Niamh A.; Kendlin, Jane L.; O’Connell, Triona M.; Ducrée, Jens

    2017-01-01

    Measurement of the height of a packed column of cells or beads, which can be directly related to the number of cells or beads present in a chamber, is an important step in a number of diagnostic assays. For example, haematocrit measurements may rapidly identify anemia or polycythemia. Recently, user-friendly and cost-efficient Lab-on-a-Chip devices have been developed towards isolating and counting cell sub-populations for diagnostic purposes. In this work, we present a low-cost optical module for estimating the filling level of packed magnetic beads within a Lab-on-a-Chip device. The module is compatible with a previously introduced, disposable microfluidic chip for rapid determination of CD4+ cell counts. The device is a simple optical microscope module manufactured by 3D printing. An objective lens directly interrogates the height of packed beads which are efficiently isolated on the finger-actuated chip. Optionally, an inexpensive, battery-powered Light Emitting Diode may project a shadow of the microfluidic chip at approximately 50-fold magnification onto a nearby surface. The reader is calibrated with the filling levels of known concentrations of paramagnetic beads within the finger-actuated chip. Results in direct and projector mode are compared to measurements from a conventional, inverted white-light microscope. All three read-out methods indicate a maximum variation of 6.5% between methods. PMID:29267367

  3. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  4. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
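
    For context, the classical Mayfield estimator that all of these methods refine is one line of arithmetic: daily survival is one minus deaths per exposure-day, and interval survival is that rate raised to the interval length. The counts below are invented.

      # Mayfield estimator with invented counts
      deaths = 14
      exposure_days = 620.0        # summed days each brood was observed at risk
      dsr = 1.0 - deaths / exposure_days      # constant daily survival rate
      period = 28                             # days from hatch to fledging
      print(f"daily survival {dsr:.4f}, {period}-day survival {dsr**period:.3f}")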

  5. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
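
    The Poisson special case is easy to demonstrate on simulated data: fit a main-terms Poisson working model in a randomized trial whose outcome is deliberately not Poisson given the covariates, and the treatment coefficient still targets the marginal log rate ratio. The sketch below assumes statsmodels is available; all data are simulated.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)

      n = 5000
      baseline = rng.normal(size=n)            # prognostic baseline variable
      treat = rng.integers(0, 2, n)            # randomized assignment
      rate = np.exp(0.5 * treat + 1.2 * np.tanh(baseline))
      y = rng.poisson(rate) + rng.integers(0, 2, n)   # not Poisson given X

      # Main-terms Poisson working model: misspecified (tanh, extra noise),
      # yet the treatment coefficient consistently estimates the marginal
      # log rate ratio in a randomized trial.
      X = sm.add_constant(np.column_stack([treat, baseline]))
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      print("treatment coefficient:", fit.params[1])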

  6. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  7. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
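
    The core routine being modified, iterative proportional fitting, is short enough to sketch on a two-way table: scale the table toward each set of target margins in turn until convergence. The margins below are invented; the report's versions instead operate on the minimal sufficient statistics of more general loglinear and loglinear-IRT models.

      import numpy as np

      table = np.ones((3, 4))                  # starting values
      row_target = np.array([20.0, 30.0, 50.0])
      col_target = np.array([10.0, 25.0, 40.0, 25.0])

      for _ in range(100):
          table *= (row_target / table.sum(axis=1))[:, None]  # fit row margins
          table *= (col_target / table.sum(axis=0))[None, :]  # fit col margins
          if np.allclose(table.sum(axis=1), row_target, atol=1e-10):
              break

      print(np.round(table, 3))   # ML fit under the independence model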

  8. Estimations of relative effort during sit-to-stand increase when accounting for variations in maximum voluntary torque with joint angle and angular velocity.

    PubMed

    Bieryla, Kathleen A; Anderson, Dennis E; Madigan, Michael L

    2009-02-01

    The main purpose of this study was to compare three methods of determining relative effort during sit-to-stand (STS). Fourteen young (mean ± SD: 19.6 ± 1.2 years old) and 17 older (61.7 ± 5.5 years old) adults completed six STS trials at three speeds: slow, normal, and fast. Sagittal plane joint torques at the hip, knee, and ankle were calculated through inverse dynamics. Isometric and isokinetic maximum voluntary contractions (MVC) for the hip, knee, and ankle were collected and used as model parameters to predict the participant-specific maximum voluntary joint torque. Three different measures of relative effort were determined by normalizing STS joint torques to three different estimates of maximum voluntary torque. Relative effort at the hip, knee, and ankle was higher when accounting for variations in maximum voluntary torque with joint angle and angular velocity (hip = 26.3 ± 13.5%, knee = 78.4 ± 32.2%, ankle = 27.9 ± 14.1%) compared to methods which do not account for these variations (hip = 23.5 ± 11.7%, knee = 51.7 ± 15.0%, ankle = 20.7 ± 10.4%). At higher velocities, the difference between calculating relative effort with respect to isometric MVC and incorporating joint angle and angular velocity became more evident. Estimates of relative effort that account for the variations in maximum voluntary torque with joint angle and angular velocity may provide higher accuracy compared to methods based on measurements of maximal isometric torques.

  9. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
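
    A toy output-error fit conveys the core loop: simulate the model for candidate parameters, compare with the measured output, and minimize the squared residuals. The first-order model below is a stand-in for the short-period dynamics, and the batch minimization ignores the real-time windowing described in the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      dt, n = 0.02, 500
      u = np.sign(np.sin(2 * dt * np.arange(n)))         # square-wave input

      def simulate(params):
          a, b = params
          y = np.zeros(n)
          for k in range(n - 1):                         # Euler integration
              y[k + 1] = y[k] + dt * (a * y[k] + b * u[k])
          return y

      y_meas = simulate([-1.5, 2.0]) + 0.02 * rng.normal(size=n)

      cost = lambda p: np.sum((y_meas - simulate(p)) ** 2)   # output error
      res = minimize(cost, x0=[-1.0, 1.0], method="Nelder-Mead")
      print("estimated (a, b):", np.round(res.x, 3))     # near (-1.5, 2.0)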

  10. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  11. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. FSA field test report, 1980 - 1982

    NASA Technical Reports Server (NTRS)

    Maxwell, H. G.; Grimmett, C. A.; Repar, J.; Frickland, P. O.; Amy, J. A.

    1983-01-01

    Photovoltaic modules made of new and developing materials were tested in a continuing study of weatherability, compatibility, and corrosion protection. Over a two-year period, 365 two-cell submodules have been exposed for various intervals at three outdoor sites in Southern California or subjected to laboratory acceptance tests. Results to date show little loss of maximum power output, except in two types of modules. In the first of these, failure is due to cell fracture from the stresses that arise as water is regained from the surrounding air by a hardboard substrate, which shrank as it dried during its encapsulation in plastic film at 150 °C in vacuo. In the second, the glass superstrate is sensitive to cracking, which also damages the cells electrostatically bonded to it; inadequate bonding of interconnects to the cells is also a problem in these modules. In a third type of module, a polyurethane pottant has begun to yellow, though as yet without significant effect on maximum power output.

  13. Some constraints on levels of shear stress in the crust from observations and theory.

    USGS Publications Warehouse

    McGarr, A.

    1980-01-01

    In situ stress determinations in North America, southern Africa, and Australia indicate that on average the maximum shear stress increases linearly with depth to at least 5.1 km measured in soft rock, such as shale and sandstone, and to 3.7 km in hard rock, including granite and quartzite. Regression lines fitted to the data yield gradients of 3.8 MPa/km and 6.6 MPa/km for soft and hard rock, respectively. Generally, the maximum shear stress in compressional states of stress, for which the least principal stress is oriented near vertically, is substantially greater than in extensional stress regimes, with the greatest principal stress in a vertical direction. The equations of equilibrium and compatibility can be used to provide functional constraints on the state of stress. If the stress is assumed to vary only with depth z in a given region, then all nonzero components must have the form A + Bz, where A and B are constants which generally differ for the various components. - Author

  14. Implications of the principle of maximum conformality for the QCD strong coupling

    DOE PAGES

    Deur, Alexandre; Shen, Jian -Ming; Wu, Xing -Gang; ...

    2017-08-14

    The Principle of Maximum Conformality (PMC) provides scale-fixed perturbative QCD predictions which are independent of the choice of the renormalization scheme, as well as the choice of the initial renormalization scale. In this article, we test the PMC by comparing its predictions for the strong coupling $\alpha^s_{g_1}(Q)$, defined from the Bjorken sum rule, with predictions using conventional pQCD scale-setting. The two results are found to be compatible with each other and with the available experimental data. However, the PMC provides a significantly more precise determination, although its domain of applicability ($Q \gtrsim 1.5$ GeV) does not extend to as small values of momentum transfer as that of a conventional pQCD analysis ($Q \gtrsim 1$ GeV). In conclusion, we suggest that the PMC range of applicability could be improved by a modified intermediate scheme choice or using a single effective PMC scale.

  15. [Biomechanical properties of bioabsorbable cannulated screws for surgical fixation of dislocated epiphysiolysis capitis femoris].

    PubMed

    Kröber, M W; Rovinsky, D; Lotz, J; Carstens, C; Otsuka, N Y

    2002-06-01

    Bioabsorbable materials are well suited for fixation of slipped capital femoral epiphysis (SCFE) as they are resorbable, compatible with magnetic resonance imaging, and well tolerated by the pediatric population. We compared cannulated 4.5-mm bioabsorbable screws made of self-reinforced polylevolactic acid (SR-PLLA) to cannulated 4.5-mm steel and titanium screws for their resistance to shear stress and ability to generate compression in a polyurethane foam model of SCFE fixation. The maximum shear stress resisted by the three screw types was similar (SR-PLLA 371 ± 146, steel 442 ± 43, titanium 470 ± 91 MPa, NS). The maximum compression generated by both the SR-PLLA screw (68.5 ± 3.3 N) and the steel screw (63.3 ± 5.9 N) was greater than that for the titanium screw (3.0 ± 1.4 N, p < 0.05). These data suggest that cannulated SR-PLLA screws have sufficient biomechanical strength to be used in the treatment of SCFE.

  16. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant (95% confidence level) bias reduction occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
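
    The mechanics of the comparison are easy to reproduce on a synthetic diurnal cycle: average all 1440 one-minute samples for the reference mean, then compare (Tmax + Tmin)/2 and sparser subsamples against it. The sinusoid-plus-drift day below is an idealization, so the biases it produces only illustrate the procedure, not the SURFRAD values.

      import numpy as np

      rng = np.random.default_rng(8)
      minutes = np.arange(1440)
      for day in range(3):
          t = (10 + 6 * np.sin(2 * np.pi * (minutes - 540) / 1440)
               + np.cumsum(rng.normal(0, 0.02, 1440)))   # weather "drift"
          true_mean = t.mean()                # average of 1440 1-min samples
          minmax = 0.5 * (t.max() + t.min())  # (Tmax + Tmin) / 2
          hourly = t[::60].mean()             # 24 observations per day
          print(f"day {day}: min-max bias {minmax - true_mean:+.3f} C, "
                f"hourly bias {hourly - true_mean:+.3f} C")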

  17. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant (95% confidence level) bias reduction occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.

  18. Kinesthetic Feedback During 2DOF Wrist Movements via a Novel MR-Compatible Robot.

    PubMed

    Erwin, Andrew; O'Malley, Marcia K; Ress, David; Sergi, Fabrizio

    2017-09-01

    We demonstrate the interaction control capabilities of the MR-SoftWrist, a novel MR-compatible robot capable of applying accurate kinesthetic feedback to wrist pointing movements executed during fMRI. The MR-SoftWrist, based on a novel design that combines parallel piezoelectric actuation with compliant force feedback, is capable of delivering 1.5 N·m of torque to the wrist of an interacting subject about the flexion/extension and radial/ulnar deviation axes. The robot workspace, defined by admissible wrist rotation angles, fully includes a circle with a 20 deg radius. Via dynamic characterization, we demonstrate capability for transparent operation with low (10% of maximum torque output) backdrivability torques at nominal speeds. Moreover, we demonstrate a 5.5 Hz stiffness control bandwidth for a 14 dB range of virtual stiffness values, corresponding to 25%-125% of the device's physical reflected stiffness in the nominal configuration. We finally validate the possibility of operation during fMRI via a case study involving one healthy subject. Our validation experiment demonstrates the capability of the device to apply kinesthetic feedback to elicit distinguishable kinetic and neural responses without significant degradation of image quality or task-induced head movements. With this study, we demonstrate the feasibility of MR-compatible devices like the MR-SoftWrist to be used in support of motor control experiments investigating wrist pointing under robot-applied force fields. Such future studies may elucidate fundamental neural mechanisms enabling robot-assisted motor skill learning, which is crucial for robot-aided neurorehabilitation.

  19. Compatibility of materials with liquid metal targets for SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiStefano, J.R.; Pawel, S.J.; DeVan, J.H.

    1996-06-01

    Several heavy liquid metals are candidates as the target in a spallation neutron source: Hg, Pb, Bi, and Pb-Bi eutectic. Systems with these liquid metals have been used in the past and a database on compatibility already exists. Two major compatibility issues have been identified when selecting a container material for these liquid metals: temperature gradient mass transfer and liquid metal embrittlement (LME). Temperature gradient mass transfer refers to dissolution of material from the high temperature portions of a system and its deposition in the lower temperature areas. Solution and deposition rate constants, along with temperature, ΔT, and velocity, are usually the most important parameters. For most candidate materials mass transfer corrosion has been found to be proportionately worse in Bi compared with Hg and Pb. For temperatures to ~550°C, ferritic/martensitic steels have been satisfactory in Pb or Hg systems, and the maximum temperature can be extended to ~650°C with additions of inhibitors to the liquid metal, e.g., Mg, Ti, Zr. Above ~600°C, austenitic stainless steels have been reported to be unsatisfactory, largely because of the mass transfer of nickel. Blockage of flow from deposition of material is usually the life-limiting effect of this type of corrosion. However, mass transfer corrosion at lower temperatures has not been studied. At low temperatures (usually < 150°C), LME has been reported for some liquid metal/container alloy combinations. Liquid metal embrittlement, like hydrogen embrittlement, results in brittle fracture of a normally ductile material.

  20. Listing of Schools Constructed with a Building System. Building Systems Information Clearinghouse Special Report Number Two.

    ERIC Educational Resources Information Center

    Building Systems Information Clearinghouse, Menlo Park, CA.

    To qualify for inclusion in this listing, a building must have been constructed with two or more compatible building subsystems, one of which is structure. It is estimated that at least 400 schools that satisfy this definition have been constructed in the U. S. and Canada, about one quarter of which appear in this listing. Beginning with this…

  1. Seismotectonic Models of the Three Recent Devastating SCR Earthquakes in India

    NASA Astrophysics Data System (ADS)

    Mooney, W. D.; Kayal, J.

    2007-12-01

    During the last decade, three devastating earthquakes, the Killari 1993 (Mb 6.3), Jabalpur 1997 (Mb 6.0) and Bhuj 2001 (Mw 7.7) events, occurred in the Stable Continental Region (SCR) of Peninsular India. First, the September 30, 1993 Killari earthquake (Mb 6.3) occurred in the Deccan province of central India, in the Latur district of Maharashtra state. The local geology in the area is obscured by the late Cretaceous-Eocene basalt flows, referred to as the Deccan traps. This makes it difficult to recognize the geological surface faults that could be associated with the Killari earthquake. The epicentre was reported at 18.09°N and 76.62°E, and the focal depth of 7 ± 1 km was precisely estimated by waveform inversion (Chen and Kao, 1995). The maximum intensity reached VIII, and the earthquake caused a loss of about 10,000 lives and severe damage to property. The May 22, 1997 Jabalpur earthquake (Mb 6.0), with epicentre at 23.08°N and 80.06°E, is a well studied earthquake in the Son-Narmada-Tapti (SONATA) seismic zone. A notable aspect of this earthquake is that it was the first significant event in India to be recorded by the 10 broadband seismic stations established in 1996 by the India Meteorological Department (IMD). The focal depth was well estimated using the "converted phases" of the broadband seismograms; it lies in the lower crust at a depth of 35 ± 1 km, similar to the moderate earthquakes reported from the Amazonas ancient rift system in the SCR of South America. The maximum intensity of the Jabalpur earthquake reached VIII on the MSK scale, and this earthquake killed about 50 people in the Jabalpur area. Finally, the Bhuj earthquake (Mw 7.7) of January 26, 2001 in the Gujarat state, northwestern India, was felt across the whole country and killed about 20,000 people. The maximum intensity level reached X. The epicenter of the earthquake is reported at 23.40°N and 70.28°E, with a well estimated focal depth of 25 km. A total of about 3000 aftershocks (M > 1.0) were recorded until mid April, 2001. About 500 aftershocks (M > 2.0) are well located; the epicenter map shows an aftershock cluster area, about 60 km x 30 km, between 70.0-70.6°E and 23.3-23.6°N; almost all the aftershocks occurred within the high intensity (IX) zone. The source area of the main shock and most of the aftershocks are at a depth range of 20-25 km. The fault-plane solutions suggest that the main shock originated at the base of the paleo-rift zone by a south dipping, hidden reverse fault; the rupture propagated both NE and NW. The aftershocks occurred by left-lateral strike-slip motion along the NE trending fault, compatible with the main shock solution, and by pure reverse to right-lateral strike-slip motion along the NW trending conjugate fault. Understanding these earthquake sequences may shed new light on the tectonics and active faults in the source regions.

  2. The evaluation of maximum horizontal in-situ stress using the wellbore imagers data

    NASA Astrophysics Data System (ADS)

    Dubinya, N. V.; Ezhov, K. A.

    2016-12-01

    Well drilling provides a number of possibilities for improving our knowledge of the stress state of the upper layers of the Earth's crust. Data obtained from drilling, well logging, core experiments and special tests are used to evaluate the directions and magnitudes of the principal stresses. Although the values of the vertical stress and the minimum horizontal stress can be estimated reasonably well, the maximum horizontal stress remains a major problem. In this study a new method to estimate this value is proposed. The suggested approach is based on the concept of hydraulically conductive and non-conductive fractures near a wellbore (Barton, Zoback and Moos, 1995): all the fractures whose properties can be acquired from well logging data can be divided into two groups with respect to hydraulic conductivity. The fracture properties and the in-situ stress state are related via the Mohr diagram. This approach was later used by Ito and Zoback (2000) to estimate the magnitude of the maximum horizontal stress from temperature profiles. In the current study, ultrasonic and resistivity borehole imaging are used to estimate the magnitude of the maximum horizontal stress in a rather precise way. After proper interpretation, one is able to obtain the orientation and hydraulic conductivity of each fracture appearing in the images. If proper profiles of the vertical and minimum horizontal stresses are known, all the fractures can be analyzed on the Mohr diagram. Varying the maximum horizontal stress profile then makes it possible to adjust it until the conductive fractures on the Mohr diagram fit the data from the imagers' interpretation. The precision of the suggested approach was evaluated for several oil production wells in Siberia with reliable wellbore stability models; the difference between the maximum horizontal stress estimated with the suggested approach and the values obtained from drilling reports did not exceed 0.5 MPa. Thus the proposed approach can be used to evaluate the maximum horizontal stress from wellbore imager data. References: Barton, C.A., Zoback, M.D., Moos, D. Fluid flow along potentially active faults in crystalline rock. Geology, 1995. Ito, T., Zoback, M. Fracture permeability and in situ stress to 7 km depth in the KTB Scientific Drillhole. Geophysical Research Letters, 2000.
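
    The workflow lends itself to a compact numerical sketch. The following Python fragment is an illustration, not the authors' code: it assumes effective principal stresses aligned with the vertical and the two horizontal directions, a single friction coefficient mu (0.6 is an assumed typical value), and pore pressure already subtracted; the fracture normals and conductivity flags are taken as already interpreted from the ultrasonic and resistivity images.

    import numpy as np

    def stress_on_plane(normal, s_hmax, s_hmin, s_v):
        """Normal and shear traction on a fracture plane, with principal
        stresses SHmax, Shmin, Sv acting along the coordinate axes."""
        sigma = np.diag([s_hmax, s_hmin, s_v])
        n = np.asarray(normal, float)
        n = n / np.linalg.norm(n)
        t = sigma @ n                             # traction vector
        sn = float(n @ t)                         # normal stress
        tau = float(np.linalg.norm(t - sn * n))   # shear stress
        return sn, tau

    def is_conductive(normal, s_hmax, s_hmin, s_v, mu=0.6):
        """Coulomb criterion: critically stressed fractures (tau >= mu*sn)
        are assumed hydraulically conductive (Barton et al., 1995)."""
        sn, tau = stress_on_plane(normal, s_hmax, s_hmin, s_v)
        return tau >= mu * sn

    def fit_shmax(normals, flags, s_hmin, s_v, grid):
        """Scan candidate SHmax values; keep the one whose predicted
        conductive set best matches the image-derived flags."""
        def score(sh):
            return sum(is_conductive(n, sh, s_hmin, s_v) == f
                       for n, f in zip(normals, flags))
        return max(grid, key=score)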

  3. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observation probability differs depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree, then maximum accuracy occurs at a grid size of 1° × 1°.
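
    For orientation, the bias correction under discussion reduces to a simple weighting. The sketch below is an illustration, not the paper's implementation: each scanline-sampled fracture is weighted by 1/|cos(alpha)|, where alpha is the angle between the scanline and the fracture normal, and the cap applied inside the blind zone is an assumed choice.

    import numpy as np

    def terzaghi_weights(normals, scanline, blind_zone_deg=80.0):
        """normals: (m, 3) unit fracture normals; scanline: unit vector.
        Returns the sampling-bias correction weight for each fracture."""
        cos_a = np.abs(normals @ scanline)        # |cos| of the bias angle
        cap = np.cos(np.radians(blind_zone_deg))  # cap inside the blind zone
        return 1.0 / np.maximum(cos_a, cap)

    # Fractures nearly parallel to the scanline get the largest weights:
    line = np.array([1.0, 0.0, 0.0])
    poles = np.array([[1.0, 0.0, 0.0], [0.5, 0.866, 0.0], [0.0, 0.0, 1.0]])
    poles /= np.linalg.norm(poles, axis=1, keepdims=True)
    print(terzaghi_weights(poles, line))  # -> [1.0, 2.0, ~5.76 (capped)]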

  4. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  5. Exploiting the Modified Colombo-Nyquist Rule for Co-estimating Sub-monthly Gravity Field Solutions from a GRACE-like Mission

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Weigelt, M.; Mueller, J.

    2017-12-01

    In order to suppress the impact of aliasing errors on the standard monthly GRACE gravity-field solutions, co-estimating sub-monthly (daily/two-day) low-degree solutions has been suggested. The maximum degree of the low-degree solutions is chosen via the Colombo-Nyquist rule of thumb. However, it is now established that the sampling of the satellites restricts the maximum estimable order rather than the degree (the modified Colombo-Nyquist rule). Therefore, in this contribution, we co-estimate low-order sub-monthly solutions, and compare and contrast them with the low-degree sub-monthly solutions. We also investigate their efficacies in dealing with aliasing errors.

  6. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  7. Mars surface radiation exposure for solar maximum conditions and 1989 solar proton events

    NASA Technical Reports Server (NTRS)

    Simonsen, Lisa C.; Nealy, John E.

    1992-01-01

    The Langley heavy-ion/nucleon transport code, HZETRN, and the high-energy nucleon transport code, BRYNTRN, are used to predict the propagation of galactic cosmic rays (GCR's) and solar flare protons through the carbon dioxide atmosphere of Mars. Particle fluences and the resulting doses are estimated on the surface of Mars for GCR's during solar maximum conditions and the Aug., Sep., and Oct. 1989 solar proton events. These results extend previously calculated surface estimates for GCR's at solar minimum conditions and the Feb. 1956, Nov. 1960, and Aug. 1972 solar proton events. Surface doses are estimated with both a low-density and a high-density carbon dioxide model of the atmosphere for altitudes of 0, 4, 8, and 12 km above the surface. A solar modulation function is incorporated to estimate the GCR dose variation between solar minimum and maximum conditions over the 11-year solar cycle. By using current Mars mission scenarios, doses to the skin, eye, and blood-forming organs are predicted for short- and long-duration stay times on the Martian surface throughout the solar cycle.

  8. Light environment under Rhododendron maximum thickets and estimated carbon gain of regenerating forest tree seedlings

    Treesearch

    T.T. Lei; E.T. Nilsen; S.W. Semones

    2006-01-01

    Canopy tree recruitment is inhibited by evergreen shrubs in many forests. In the southern Appalachian mountains of the USA, thickets of Rhododendron maximum L. restrict dominant canopy tree seedling survival and persistence. Using R. maximum as a model system, we examined available light under the thickets and the photosynthetic...

  9. Integrating LIDAR and forest inventories to fill the trees outside forests data gap.

    PubMed

    Johnson, Kristofer D; Birdsey, Richard; Cole, Jason; Swatantran, Anu; O'Neil-Dunne, Jarlath; Dubayah, Ralph; Lister, Andrew

    2015-10-01

    Forest inventories are commonly used to estimate the total tree biomass of forest land, even though they are not traditionally designed to measure the biomass of trees outside forests (TOF). The consequence may be an inaccurate representation of all of the aboveground biomass, which propagates error to the outputs of spatial and process models that rely on the inventory data. An ideal approach to fill this data gap would be to integrate TOF measurements within a traditional forest inventory for a parsimonious estimate of total tree biomass. In this study, Light Detection and Ranging (LIDAR) data were used to predict the biomass of TOF in all "nonforest" Forest Inventory and Analysis (FIA) plots in the state of Maryland. To validate the LIDAR-based biomass predictions, a field crew was sent to measure TOF on nonforest plots in three Maryland counties, revealing close agreement between the two estimates at both the plot and county scales. Total tree biomass in Maryland increased by 25.5 Tg, or 15.6%, when the biomass of TOF was included. In two counties (Carroll and Howard), there was a 47% increase. In contrast, counties located farther away from the interstate highway corridor showed only a modest increase in biomass when TOF were added, because nonforest conditions were less common in those areas. The advantage of this approach for estimating the biomass of TOF is that it is compatible with, and explicitly separates TOF biomass from, the forest biomass already measured by FIA crews. By predicting the biomass of TOF at actual FIA plots, this approach is directly compatible with traditionally reported FIA forest biomass, providing a framework for other states to follow, and should improve carbon reporting and modeling activities in Maryland.

  10. Benefits of 20 kHz PMAD in a nuclear space station

    NASA Technical Reports Server (NTRS)

    Sundberg, Gale R.

    1987-01-01

    Compared to existing systems, high-frequency ac power provides higher efficiency, lower cost, and improved safety. The 20 kHz power system has exceptional flexibility, is inherently user friendly, and is compatible with all types of energy sources: photovoltaic, solar dynamic, rotating machines and nuclear. A 25 kW, 20 kHz ac power distribution system testbed was recently (1986) developed. The testbed possesses maximum flexibility, versatility, and transparency to user technology while maintaining high efficiency, low mass, and reduced volume. Several aspects of the 20 kHz power management and distribution (PMAD) system that have particular benefits for a nuclear-powered Space Station are discussed.

  11. Parametric tests of a traction drive retrofitted to an automotive gas turbine

    NASA Technical Reports Server (NTRS)

    Rohn, D. A.; Lowenthal, S. H.; Anderson, N. E.

    1980-01-01

    The results of a test program to retrofit a high-performance fixed-ratio Nasvytis Multiroller Traction Drive in place of a helical gear set in a gas turbine engine are presented. Parametric tests were conducted up to a maximum engine power turbine speed of 45,500 rpm and a power level of 11 kW. Comparisons were made with similar drives that were parametrically tested on a back-to-back test stand. The drive showed good compatibility with the gas turbine engine. The specific fuel consumption of the engine with the traction drive speed reducer installed was comparable to that of the original engine equipped with the helical gear set.

  12. Asymmetric (1+1)-dimensional hydrodynamics in high-energy collisions

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Peschanski, R.

    2011-05-01

    The possibility that particle production in high-energy collisions is a result of two asymmetric hydrodynamic flows is investigated using the Khalatnikov form of the (1+1)-dimensional approximation of hydrodynamic equations. The general solution is discussed and applied to the physically appealing “generalized in-out cascade” where the space-time and energy-momentum rapidities are equal at initial temperature but boost invariance is not imposed. It is demonstrated that the two-bump structure of the entropy density, characteristic of the asymmetric input, changes easily into a single broad maximum compatible with data on particle production in symmetric processes. A possible microscopic QCD interpretation of asymmetric hydrodynamics is proposed.

  13. Mechanisms of intracratonic and rift basin formation: Insights from Canning Basin, northwest Australia

    NASA Astrophysics Data System (ADS)

    Bender, Andre Adriano

    2000-10-01

    The Canning basin was investigated in order to determine the mechanisms responsible for its initiation and development. The basement morphology, determined using magnetic and gravity inversion techniques, was used to map the distribution, amplitude and subsidence history of the basin. The sag development of the Canning basin is hypothesized to be a consequence of a major late Proterozoic thermal event that induced broad-scale uplift, extrusion of tholeiitic basalt, and substantial crustal erosion. The development of the Canning basin is consistent with removal of up to 11 km of crustal rocks, followed by isostatic re-adjustment during the cooling of the lithosphere. Earlier models that employed both lower crustal metamorphism and erosion are considered inappropriate mechanisms for intracratonic basin formation because this work has shown that their effects are mutually exclusive. The time scale for the metamorphic-related subsidence is typically short (<10 m.y.) and the maximum subsidence is small (<4 km) compared to the long subsidence (ca. 200 m.y.) and maximum depths (6--7 km) recorded in the Canning basin. Observed amplitudes and rates of basement subsidence are compatible with a thermal anomaly that began to dissipate in the early Cambrian and lasted until the Permian. Punctuating the long-lived intracratonic basin subsidence is a series of extensional pulses that in Silurian to Carboniferous/Permian time led to the development of several prominent normal faults in the northeastern portion of the Canning basin (Fitzroy graben). Stratigraphic and structural data and section-balancing techniques have helped to elucidate the geometry and evolution of the basin-bounding fault of the Fitzroy graben. The fault is listric, with a dip that decreases from approximately 50° at the surface to 20° at a depth of 20 km, and with an estimated horizontal offset of 32--41 km. The southern margin of the Fitzroy graben was tilted, truncated, and onlapped from the south, consistent with the flexural rebound of a lithosphere with an elastic thickness of ca. 30 km.

  14. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using a childhood cancer dataset.
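
    For readers unfamiliar with the underlying estimator, the following Python sketch implements a self-consistency (fixed-point) iteration of the Efron-Petrosian type for the NPMLE under double truncation; the tolerances, uniform start, and toy data are arbitrary choices, not taken from the paper.

    import numpy as np

    def npmle_double_truncation(x, u, v, tol=1e-10, max_iter=5000):
        """Self-consistency iteration for doubly truncated data with
        u_i <= x_i <= v_i. Returns support points and point masses."""
        t, d = np.unique(x, return_counts=True)
        inview = (u[:, None] <= t) & (t <= v[:, None])   # (n, m) indicator
        f = np.full(t.size, 1.0 / t.size)                # uniform start
        for _ in range(max_iter):
            F = inview @ f                               # P(u_i <= X <= v_i)
            f_new = d / (inview / F[:, None]).sum(axis=0)
            f_new /= f_new.sum()
            if np.max(np.abs(f_new - f)) < tol:
                break
            f = f_new
        return t, f

    # Toy usage: exponential lifetimes observed only inside (u, u + 1.5).
    rng = np.random.default_rng(0)
    x0 = rng.exponential(1.0, 4000)
    u0 = rng.uniform(0.0, 1.0, 4000)
    v0 = u0 + 1.5
    keep = (u0 <= x0) & (x0 <= v0)
    t, f = npmle_double_truncation(x0[keep], u0[keep], v0[keep])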

  15. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    PubMed

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.

  16. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the proposed model outperforms ML in the case of small datasets.
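
    As context for the ML baseline that the fuzzy approach is compared against, here is a bare-bones maximum likelihood fit of the multinomial logit via gradient ascent on the log-likelihood. This is a generic sketch (the learning rate, iteration count and redundant-column parameterization are arbitrary choices), not the authors' code; small-sample pathologies such as separation are exactly where the paper argues ML breaks down.

    import numpy as np

    def fit_multinomial_ml(X, y, n_classes, lr=0.1, iters=2000):
        """Softmax regression fitted by gradient ascent on the
        log-likelihood (identifiability is not enforced)."""
        n, d = X.shape
        W = np.zeros((d, n_classes))
        Y = np.eye(n_classes)[y]                         # one-hot targets
        for _ in range(iters):
            logits = X @ W
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            P = np.exp(logits)
            P /= P.sum(axis=1, keepdims=True)
            W += lr * X.T @ (Y - P) / n                  # log-likelihood gradient
        return W

    # Toy usage: intercept plus two covariates, three classes.
    rng = np.random.default_rng(6)
    X = np.c_[np.ones(300), rng.normal(size=(300, 2))]
    W0 = np.array([[0.0, 1.0, -1.0], [0.0, 2.0, 0.5], [0.0, -1.0, 1.5]])
    P0 = np.exp(X @ W0)
    P0 /= P0.sum(axis=1, keepdims=True)
    y = np.array([rng.choice(3, p=p) for p in P0])
    W_hat = fit_multinomial_ml(X, y, 3)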

  17. Estimating maximum depth distribution of seagrass using underwater videography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, J.G.; Wyllie-Echeverria, S.

    1997-06-01

    The maximum depth distribution of eelgrass (Zostera marina) beds in Willapa Bay, Washington appears to be limited by light penetration, which is likely related to water turbidity. Using underwater videographic techniques, we estimated that the maximum depth penetration was -5.85 ft (MLLW) in the less turbid outer bay and only -1.59 ft (MLLW) in the more turbid inner bay. Eelgrass beds had well-defined deepwater edges, and no eelgrass was observed in the deep channels of the bay. The results from this study suggest that aerial photographs taken during low tide periods are capable of recording the majority of eelgrass beds in Willapa Bay.

  18. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and of the maximum theoretical value of the correlation coefficient r can prove useful for estimating the reliability of predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum attainable r value degrades as data uncertainty increases. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
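
    The Fisher r → Z machinery used for the confidence interval can be stated in a few lines. The sketch below assumes a 95% interval (the 1.96 quantile) and approximately bivariate-normal sampling, which are illustrative choices.

    import math

    def fisher_ci(r, n, z_crit=1.96):
        """Approximate CI for a sample correlation r from n pairs."""
        z = math.atanh(r)                    # Fisher r -> Z transform
        se = 1.0 / math.sqrt(n - 3)          # standard error on the Z scale
        lo, hi = z - z_crit * se, z + z_crit * se
        return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

    print(fisher_ci(0.80, 30))  # -> approximately (0.62, 0.90)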

  19. Gauging the Nearness and Size of Cycle Maximum

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2003-01-01

    A simple method for monitoring the nearness and size of the conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of the international sunspot number, together with the maximum value of the 2-mo moving average of the monthly mean sunspot number, to make the estimate. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that the conventional maximum amplitude for cycle 23 would be approximately 124.5, occurring near July 2000 +/- 5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23 (120.8 in April 2000).

  20. A Geomagnetic Estimate of Mean Paleointensity

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    2004-01-01

    To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate the time-averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that the low-degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom, with expectation values that depend on the ratio c/a, where c is the 3480 km radius of the Earth's core. (This is compatible with a field that is, on average, mainly a geocentric axial dipole.) The amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity F². The sum also estimates F² averaged over geologic time, insofar as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes. Additional information is included in the original extended abstract.

  1. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  2. Fitting power-laws in empirical data with estimators that work for all exponents

    PubMed Central

    Hanel, Rudolf; Corominas-Murtra, Bernat; Liu, Bo; Thurner, Stefan

    2017-01-01

    Most standard methods based on maximum likelihood (ML) estimates of power-law exponents can only be reliably used to identify exponents smaller than minus one. The argument that power laws are otherwise not normalizable depends on the underlying sample space the data are drawn from, and is true only for sample spaces that are unbounded from above. Power laws obtained from bounded sample spaces (as is the case for practically all data-related problems) are always free of such limitations, and maximum likelihood estimates can be obtained for arbitrary powers without restrictions. Here we first derive the appropriate ML estimator for arbitrary exponents of power-law distributions on bounded discrete sample spaces. We then show that an almost identical estimator also works perfectly for continuous data. We implemented this ML estimator and compare its performance with previous attempts. We present a general recipe for how to use these estimators and provide the associated computer codes. PMID:28245249
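
    The key point, that the normalization is finite on a bounded support for any exponent, can be illustrated with a direct numerical ML fit. The sketch below is illustrative rather than the paper's estimator in detail; the search bounds and the toy rising power law are arbitrary choices.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_bounded_powerlaw(samples, k_min, k_max):
        """ML fit of p(k) proportional to k**(-gamma) on k_min..k_max."""
        ks = np.arange(k_min, k_max + 1, dtype=float)
        s_log = np.log(samples).sum()
        n = len(samples)

        def neg_loglik(gamma):
            log_z = np.log(np.power(ks, -gamma).sum())  # finite for any gamma
            return n * log_z + gamma * s_log

        res = minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded")
        return res.x

    # Toy check with a *rising* power law (gamma = -0.5) on 1..100:
    rng = np.random.default_rng(1)
    ks = np.arange(1, 101)
    p = ks ** 0.5
    p = p / p.sum()
    data = rng.choice(ks, size=20000, p=p)
    print(fit_bounded_powerlaw(data, 1, 100))  # -> close to -0.5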

  3. Procedures for estimating the frequency of commercial airline flights encountering high cabin ozone levels

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1979-01-01

    Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information, from complete cabin ozone data, which provide a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.

  4. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
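
    For context, the censored-data MLE (Tobit) that the AMLE adjusts can be written compactly for lognormal concentrations with a single detection limit. This is a sketch of the baseline estimator only, under assumed lognormality; the bias adjustment that defines the AMLE is in the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def censored_lognormal_mle(conc, detect_limit):
        """MLE of (mu, sigma) for log-concentrations with non-detects
        below a single detection limit (type 1 left-censoring)."""
        obs = np.log(conc[conc >= detect_limit])   # detected values
        n_cens = int((conc < detect_limit).sum())  # count of non-detects
        log_dl = np.log(detect_limit)

        def nll(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)              # keeps sigma positive
            ll = norm.logpdf(obs, mu, sigma).sum()
            ll += n_cens * norm.logcdf((log_dl - mu) / sigma)
            return -ll

        res = minimize(nll, x0=[obs.mean(), np.log(obs.std() + 1e-9)])
        return res.x[0], float(np.exp(res.x[1]))

    rng = np.random.default_rng(2)
    c = np.exp(rng.normal(0.0, 1.0, 500))
    print(censored_lognormal_mle(c, detect_limit=0.5))  # -> near (0.0, 1.0)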

  5. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    The aim was to develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, that account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. The regression estimators are based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests, such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches for regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for the many idiosyncratic features of the outcome distribution. This work provides applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.

  6. Fabrication and characterization of SU-8-based capacitive micromachined ultrasonic transducer for airborne applications

    NASA Astrophysics Data System (ADS)

    Joseph, Jose; Singh, Shiv Govind; Vanjari, Siva Rama Krishna

    2018-01-01

    We present the successful fabrication and characterization of a capacitive micromachined ultrasonic transducer (CMUT) with SU-8 as the membrane material. The goal of this research is to develop a post-CMOS-compatible CMUT that can be monolithically integrated with the CMOS circuitry. The fabrication is based on a simple three-mask process with all-wet etching steps, so that the device can be realized with minimal laboratory conditions. The maximum temperature involved in the whole process flow was 140°C; hence, the process is post-CMOS compatible. The fabricated device exhibited a resonant frequency of 835 kHz with a bandwidth of 62 kHz when characterized in air. The pull-in and snapback characteristics of the device were analyzed. The influence of the membrane radius on the center frequency and bandwidth was also evaluated experimentally by fabricating CMUTs with membrane radii varying from 30 to 54 μm at intervals of 4 μm. These devices vibrated at frequencies from 5.2 to 1.8 MHz with an average Q-factor of 23.41. Acoustic characterization of the fabricated devices was performed in air, demonstrating the applicability of SU-8 CMUTs in airborne applications.

  7. Low Statistics Reconstruction of the Compton Camera Point Spread Function in 3D Prompt-γ Imaging of Ion Beam Therapy

    NASA Astrophysics Data System (ADS)

    Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy

    2013-10-01

    The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.

  8. Pucksat Payload Carrier

    NASA Technical Reports Server (NTRS)

    Milam, M. Bruce; Young, Joseph P.

    1999-01-01

    There is an ever-expanding need to provide economical space launch opportunities for relatively small science payloads. To address this need, a team at NASA's Goddard Space Flight Center has designed the Pucksat. The Pucksat is a highly versatile payload carrier structure compatible with launch on a Delta II two-stage vehicle as a system co-manifested with a primary payload. It is also compatible with launch on the Air Force Medium Class EELV. Pucksat's basic structural architecture consists of six honeycomb panels attached to six longerons in a hexagonal manner and closed off at the top and bottom with circular rings. Users may configure a co-manifested Pucksat in a number of ways. As examples, co-manifested configurations can be designed to accommodate dedicated missions, multiple experiments, multiple small deployable satellites, or a hybrid of the preceding examples. The Pucksat has fixed lateral dimensions and a downward-scalable height. The dimension across the panel hexagonal flats is 62 in. and the maximum height configuration dimension is 38.5 in. Pucksat has been designed to support a 5000 lbm primary payload, with the center of gravity located no greater than 60 in. from its separation plane, and to accommodate a total co-manifested payload mass of 1275 lbm.

  9. Metacarpal geometry changes during Thoroughbred race training are compatible with sagittal-plane cantilever bending.

    PubMed

    Merritt, J S; Davies, H M S

    2010-11-01

    Bending of the equine metacarpal bones during locomotion is poorly understood. Cantilever bending, in particular, may influence the loading of the metacarpal bones and surrounding structures in unique ways. We hypothesised that increased amounts of sagittal-plane cantilever bending may govern changes to the shape of the metacarpal bones of Thoroughbred racehorses during training, and that this type of bending would require a linear change to occur in the combined second moment of area of the bones for sagittal-plane bending (I) during race training. Six Thoroughbred racehorses, all of which had completed at least 4 years of race training at a commercial stable, were used. The approximate change in I that had occurred during race training was computed from radiographic measurements at the start and end of training using a simple model of bone shape. A significant (P < 0.001), approximately linear pattern of change in I was observed in each horse, with the maximum change occurring proximally and the minimum change occurring distally. The pattern of change in I was compatible with the hypothesis that sagittal-plane cantilever bending governed changes to the shape of the metacarpal bones during race training. © 2010 EVJ Ltd.
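
    One common "simple model of bone shape" for such calculations is a hollow circular section; whether this matches the model used in the study is an assumption here. Its second moment of area follows directly from outer and inner diameters measured off the radiographs.

    import math

    def second_moment_hollow_circle(d_outer, d_inner):
        """I = pi * (Do**4 - Di**4) / 64, in mm^4 for diameters in mm."""
        return math.pi * (d_outer ** 4 - d_inner ** 4) / 64.0

    # e.g. 35 mm outer and 20 mm inner cortical diameters:
    print(second_moment_hollow_circle(35.0, 20.0))  # -> about 6.6e4 mm^4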

  10. Investigation on thermodynamics of ion-slicing of GaN and heterogeneously integrating high-quality GaN films on CMOS compatible Si(100) substrates.

    PubMed

    Huang, Kai; Jia, Qi; You, Tiangui; Zhang, Runchun; Lin, Jiajie; Zhang, Shibin; Zhou, Min; Zhang, Bo; Yu, Wenjie; Ou, Xin; Wang, Xi

    2017-11-08

    Die-to-wafer heterogeneous integration of a single-crystalline GaN film with a CMOS-compatible Si(100) substrate using the ion-cutting technique has been demonstrated. The thermodynamics of GaN surface blistering is investigated in-situ via thermal-stage optical microscopy, which indicates that the large activation energy (2.5 eV) and the low H-ion utilization ratio (~6%) might be responsible for the extremely high H fluence required for the ion-slicing of GaN. The crystalline quality, surface topography and microstructure of the GaN films are characterized in detail. The full width at half maximum (FWHM) of the GaN (002) X-ray rocking curve is as low as 163 arcsec, corresponding to a threading dislocation density of 5 × 10⁷ cm⁻². Different stages in the evolution of the implantation-induced damage were observed and characterized. This work would be beneficial for understanding the mechanism of ion-slicing of GaN and for providing a platform for the hybrid integration of GaN devices with the standard Si CMOS process.

  11. Simple fiber-optic confocal microscopy with nanoscale depth resolution beyond the diffraction barrier.

    PubMed

    Ilev, Ilko; Waynant, Ronald; Gannot, Israel; Gandjbakhche, Amir

    2007-09-01

    A novel fiber-optic confocal approach for ultrahigh depth-resolution (

  12. Integration in PACS of DICOM with TCP/IP, SQL, and X Windows

    NASA Astrophysics Data System (ADS)

    Reijns, Gerard L.

    1994-05-01

    The DICOM standard (Digital Imaging and Communications in Medicine) has been developed in order to obtain compatibility at the higher OSI levels. This paper describes the implementation of DICOM in our low-cost PACS, which uses standard software and standard protocols such as SQL, X Windows and TCP/IP as much as possible. We adopted the requirement that all messages on the communication network have to be DICOM compatible. The translation between DICOM messages and the SQL commands that drive the relational database is accommodated in our PACS supervisor and takes only 10 to 20 milliseconds. For performance reasons, images that will be used during the current day are stored in a distributed, parallel-operating image base. Extensive use has been made of X Windows to visualize images. A maximum of 12 images can be displayed simultaneously, of which one selected image can be manipulated (e.g., magnified, rotated, etc.) without affecting the other displayed images. The emphasis of future work will be on performance measurements and modeling of our PACS, and on bringing the results of both methods into agreement with each other.

  13. Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…

  14. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating the daily temperature as the average of the minimum and maximum daily readings leads to an overestimation of the daily values of 10+% when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (~5-10% fewer trends detected in comparison with the reference data).
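
    The bias mechanism is easy to reproduce with synthetic data: a skewed departure from a smooth diurnal cycle pulls the daily maximum up more than the minimum, so (Tmin + Tmax)/2 overshoots the true hourly mean. The diurnal profile and noise model below are illustrative only, not the study's data.

    import numpy as np

    rng = np.random.default_rng(3)
    hours = np.arange(24)
    diurnal = 10.0 + 8.0 * np.sin(2 * np.pi * (hours - 9) / 24)  # smooth cycle
    series = diurnal + rng.gamma(2.0, 1.0, size=(365, 24))       # skewed noise

    true_mean = series.mean(axis=1)                  # mean of hourly values
    minmax_mean = (series.min(axis=1) + series.max(axis=1)) / 2
    bias = minmax_mean - true_mean
    print(f"mean bias: {bias.mean():+.2f} deg, "
          f"95th percentile: {np.percentile(bias, 95):+.2f} deg")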

  15. Estimation of Multinomial Probabilities.

    DTIC Science & Technology

    1978-11-01

    Johnson (1971) and Alam (1978) have shown that the maximum likelihood estimator is admissible with respect to the quadratic loss. Steinhaus (1957) and Trybula (to appear) … References: Johnson, B. McK. (1971). On admissible estimators for certain fixed sample binomial populations. Ann. Math. Statist. 42, 1579-1587. Steinhaus, H. (1957).

  16. Determination of the Maximum Temperature in a Non-Uniform Hot Zone by Line-of-Site Absorption Spectroscopy with a Single Diode Laser.

    PubMed

    Liger, Vladimir V; Mironenko, Vladimir R; Kuritsyn, Yurii A; Bolshov, Mikhail A

    2018-05-17

    A new algorithm is developed for estimating the maximum temperature in a non-uniform hot zone with a sensor based on absorption spectrometry with a diode laser. The algorithm is based on fitting the absorption spectrum of a test molecule in a non-uniform zone by a linear combination of two single-temperature spectra simulated using spectroscopic databases. The proposed algorithm allows one to better estimate the maximum temperature of a non-uniform zone and can be useful when only the maximum temperature, rather than a precise temperature profile, is of primary interest. The efficiency and specificity of the algorithm are demonstrated in numerical experiments and proven experimentally using an optical cell with two sections, in which the temperatures and water vapor concentrations could be regulated independently. The best fit was found using a correlation technique. A distributed feedback (DFB) diode laser in the spectral range around 1.343 µm was used in the experiments. Because of the significant differences between the temperature dependences of the experimental and theoretical absorption spectra in the temperature range 300-1200 K, a database was constructed from experimentally recorded single-temperature spectra. Using the developed algorithm, the maximum temperature in the two-section cell was estimated with an accuracy better than 30 K.
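
    Schematically, the fitting step can be sketched as follows, with the single-temperature library passed as a dictionary of spectra. The lstsq-plus-clip treatment of nonnegativity is a crude stand-in for a proper constrained fit, and everything here is illustrative rather than the authors' implementation.

    import numpy as np

    def best_two_temperature_fit(measured, library):
        """library: dict {T_in_K: 1-D spectrum}. Returns the higher
        temperature of the best-fitting two-temperature combination."""
        temps = sorted(library)
        best_pair, best_r = None, -np.inf
        for i, t1 in enumerate(temps):
            for t2 in temps[i:]:
                basis = np.column_stack([library[t1], library[t2]])
                coef, *_ = np.linalg.lstsq(basis, measured, rcond=None)
                coef = np.clip(coef, 0.0, None)     # crude nonnegativity
                model = basis @ coef
                if model.std() == 0.0:
                    continue                        # degenerate combination
                r = np.corrcoef(model, measured)[0, 1]
                if r > best_r:
                    best_pair, best_r = (t1, t2), r
        return max(best_pair)                       # the maximum temperature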

  17. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    PubMed Central

    Svendsen, Morten B. S.; Domenici, Paolo; Marras, Stefano; Krause, Jens; Boswell, Kevin M.; Rodriguez-Pinto, Ivan; Wilson, Alexander D. M.; Kurvers, Ralf H. J. M.; Viblanc, Paul E.; Finger, Jean S.; Steffensen, John F.

    2016-01-01

    Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m s⁻¹, but theoretical work on cavitation predicts that such extreme speeds are unlikely. Here we investigated the maximum speed of sailfish, and of three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s⁻¹), followed by barracuda (6.2±1.0 m s⁻¹), little tunny (5.6±0.2 m s⁻¹) and dorado (4.0±0.9 m s⁻¹), although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s⁻¹, which corresponds to the speed range at which cavitation is predicted to occur, with destructive consequences for fin tissues. PMID:27543056
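
    The logic of the muscle-based estimate is simple arithmetic: one full tail-beat cycle needs two twitches (one per side), so the maximum tail-beat frequency is 1/(2 × twitch time), and speed is frequency times stride length. The stride length of 0.7 body lengths per beat and the numbers in the example below are assumed illustrative values, not the paper's.

    def max_speed(twitch_time_s, body_length_m, stride_bl=0.7):
        """Upper-bound swimming speed from muscle twitch time."""
        f_max = 1.0 / (2.0 * twitch_time_s)       # max tail-beat frequency, Hz
        return f_max * stride_bl * body_length_m  # speed in m/s

    # e.g. a 2.5 m fish with an 80 ms twitch:
    print(max_speed(0.080, 2.5))  # -> about 10.9 m/s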

  18. A criterion for maximum resin flow in composite materials curing process

    NASA Astrophysics Data System (ADS)

    Lee, Woo I.; Um, Moon-Kwang

    1993-06-01

    On the basis of Springer's resin flow model, a criterion for maximum resin flow in autoclave curing is proposed. Validity of the criterion was proved for two resin systems (Fiberite 976 and Hercules 3501-6 epoxy resin). The parameter required for the criterion can be easily estimated from the measured resin viscosity data. The proposed criterion can be used in establishing the proper cure cycle to ensure maximum resin flow and, thus, the maximum compaction.

  19. Investigation of Drug–Polymer Compatibility Using Chemometric-Assisted UV-Spectrophotometry

    PubMed Central

    Mohamed, Amir Ibrahim; Abd-Motagaly, Amr Mohamed Elsayed; Ahmed, Osama A. A.; Amin, Suzan; Mohamed Ali, Alaa Ibrahim

    2017-01-01

    A simple chemometric-assisted UV-spectrophotometric method was used to study the compatibility of clindamycin hydrochloride (HCl) with two commonly used natural controlled-release polymers, alginate (Ag) and chitosan (Ch). Standard mixtures containing 1:1, 1:2, and 1:0.5 w/w drug-polymer ratios were prepared and UV scanned. A calibration model was developed with partial least squares (PLS) regression analysis for each polymer separately. Then, test mixtures containing 1:1 w/w drug-polymer ratios with different sets of drug concentrations were prepared. These were UV scanned initially and after three and seven days of storage at 25 °C. Using the calibration model, the drug recovery percent was estimated, and a decrease in concentration of 10% or more from the initial concentration was considered to indicate instability. PLS models with PC3 (for Ag) and PC2 (for Ch) showed a good correlation between actual and found values, with root mean square errors of cross-validation (RMSECV) of 0.00284 and 0.01228 and calibration coefficient (R²) values of 0.996 and 0.942, respectively. The average drug recovery percent after three and seven days was 98.1 ± 2.9 and 95.4 ± 4.0 (for Ag), and 97.3 ± 2.1 and 91.4 ± 3.8 (for Ch), which suggests greater drug compatibility with the Ag than with the Ch polymer. Conventional techniques including DSC, XRD, FTIR, and in vitro minimum inhibitory concentration (MIC) testing for the (1:1) drug-polymer mixtures were also performed to confirm clindamycin compatibility with the Ag and Ch polymers. PMID:28275214

  1. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.
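
    In linear-algebra terms, the two-step procedure is a matched filter followed by a system inversion. The sketch below uses a random matrix as a stand-in for the stripmap SAR forward operator derived in the paper; the sizes, noise level and scene are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 400, 64                                  # data and image sizes
    A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(m)
    x_true = np.zeros(n, complex)
    x_true[[10, 40]] = [1.0, 0.5 + 0.5j]            # two point scatterers
    sigma = 0.05                                    # complex noise std dev
    noise = sigma / np.sqrt(2) * (rng.normal(size=m) + 1j * rng.normal(size=m))
    y = A @ x_true + noise

    backproj = A.conj().T @ y                       # step 1: cross-correlation
    gram = A.conj().T @ A                           # A^H A
    x_ml = np.linalg.solve(gram, backproj)          # step 2: system inversion

    # With white Gaussian noise, the CRLB covariance is sigma^2 (A^H A)^-1:
    crlb_diag = sigma ** 2 * np.diag(np.linalg.inv(gram)).real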

  2. Normal-faulting slip maxima and stress-drop variability: a geological perspective

    USGS Publications Warehouse

    Hecker, S.; Dawson, T.E.; Schwartz, D.P.

    2010-01-01

    We present an empirical estimate of maximum slip in continental normal-faulting earthquakes and present evidence that stress drop in intraplate extensional environments is dependent on fault maturity. A survey of reported slip in historical earthquakes globally and in latest Quaternary paleoearthquakes in the Western Cordillera of the United States indicates maximum vertical displacements as large as 6–6.5 m. A difference in the ratio of maximum-to-mean displacements between data sets of prehistoric and historical earthquakes, together with constraints on bias in estimates of mean paleodisplacement, suggest that applying a correction factor of 1.4±0.3 to the largest observed displacement along a paleorupture may provide a reasonable estimate of the maximum displacement. Adjusting the largest paleodisplacements in our regional data set (~6 m) by a factor of 1.4 yields a possible upper-bound vertical displacement for the Western Cordillera of about 8.4 m, although a smaller correction factor may be more appropriate for the longest ruptures. Because maximum slip is highly localized along strike, if such large displacements occur, they are extremely rare. Static stress drop in surface-rupturing earthquakes in the Western Cordillera, as represented by maximum reported displacement as a fraction of modeled rupture length, appears to be larger on normal faults with low cumulative geologic displacement (<2 km) and larger in regions such as the Rocky Mountains, where immature, low-throw faults are concentrated. This conclusion is consistent with a growing recognition that structural development influences stress drop and indicates that this influence is significant enough to be evident among faults within a single intraplate environment.

  3. Estimate of Solar Maximum Using the 1-8 Angstrom Geostationary Operational Environmental Satellites X-Ray Measurements

    DTIC Science & Technology

    2014-12-12

    Report AFRL-RV-PS-TR-2015-0005, Estimate of Solar Maximum Using the 1-8 Å Geostationary Operational Environmental Satellites X-Ray Measurements (Postprint). The maximum of the solar cycle is estimated through an analysis of the solar X-ray background; the results are based on measurements from the NOAA Geostationary Operational Environmental Satellites.

  4. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    PubMed Central

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Although we use 45 crystallographically determined experimental structures here, and could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
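
    The constrained convex step can be sketched directly: with the selected basis intensities as columns of B and Gaussian measurement noise, the ML estimate of the mixture coefficients is a least-squares fit under nonnegativity and sum-to-one constraints. B and the profiles below are synthetic placeholders, not SAXS data.

    import numpy as np
    from scipy.optimize import minimize

    def mixture_weights(B, i_meas):
        """Least-squares (Gaussian ML) mixture weights on the simplex."""
        k = B.shape[1]
        obj = lambda w: np.sum((B @ w - i_meas) ** 2)
        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
        res = minimize(obj, np.full(k, 1.0 / k), method="SLSQP",
                       bounds=[(0.0, 1.0)] * k, constraints=cons)
        return res.x

    rng = np.random.default_rng(5)
    B = rng.random((200, 4))                      # 4 basis intensity profiles
    w_true = np.array([0.6, 0.3, 0.1, 0.0])
    i_meas = B @ w_true + 0.01 * rng.normal(size=200)
    print(mixture_weights(B, i_meas).round(2))    # -> about [0.6 0.3 0.1 0.0]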

  5. Estimation method of finger tapping dynamics using simple magnetic detection system

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo

    2010-05-01

    We have developed a simple method for estimating a finger-tapping dynamics model, for investigating muscle resistance and stiffness during tapping movement in normal subjects. We measured the finger tapping movements of 207 normal subjects using a magnetic finger-tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate the finger-tapping dynamics model. Using the frequency response of the acceleration-to-velocity ratio, we estimated the mechanical impedance parameters: the resistance (friction coefficient) and the compliance (stiffness). We found two dynamics models, one for the maximum open position and one for the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of the extensor and flexor muscle friction resistances and the flexor stiffness, whereas the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.

  7. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    PubMed

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
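    To illustrate how an estimated rate matrix is used downstream (this is generic phylogenetics, not the server's code), the sketch below converts a rate matrix Q into the substitution probabilities P(t) = expm(Qt) that ML tree inference relies on; the toy 3-state matrix stands in for the 20 amino acids.

```python
# Substitution probabilities over branch length t from a rate matrix Q.
import numpy as np
from scipy.linalg import expm

def transition_probs(Q, t):
    """Q: square rate matrix with rows summing to 0; t: branch length."""
    return expm(Q * t)

# Toy 3-state example standing in for the 20 amino acids
Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.2, 0.7, -0.9]])
P = transition_probs(Q, 0.5)
print(P.sum(axis=1))   # each row sums to 1, as required of probabilities
```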

  8. The Estimation of Compaction Parameter Values Based on Soil Properties Values Stabilized with Portland Cement

    NASA Astrophysics Data System (ADS)

    Lubis, A. S.; Muis, Z. A.; Pasaribu, M. I.

    2017-03-01

    The strength and durability of a pavement construction depend strongly on the properties and bearing capacity of the subgrade. This motivates methods for estimating soil compaction that are properly implemented, fast, and economical. This study aims to estimate the compaction parameters, namely the maximum dry unit weight (γd max) and the optimum moisture content (wopt), from the index properties of soil stabilized with Portland cement. Tests were conducted in a soil mechanics laboratory to determine the index properties (percent fines and liquid limit) together with the Standard Compaction Test. Soil samples with a Plasticity Index (PI) between 0 and 15% were mixed with Portland cement (PC) at 2%, 4%, 6%, 8%, and 10%, with 10 samples per mixture. The results showed that γd max and wopt have significant relationships with percent fines, liquid limit, and cement percentage. The fitted equations are γd max = 1.782 - 0.011*LL + 0.000*F + 0.006*PS with R2 = 0.915, and wopt = 3.441 + 0.594*LL + 0.025*F + 0.024*PS with R2 = 0.726, where LL is the liquid limit, F the percent fines, and PS the percent cement.
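    The reported regression equations transcribe directly into code; the function and argument names below are our own labels for the liquid limit (LL), percent fines (F), and percent cement (PS).

```python
# Direct transcription of the fitted regression equations reported above.
def gamma_d_max(LL, F, PS):
    """Estimated maximum dry unit weight (R^2 = 0.915)."""
    return 1.782 - 0.011 * LL + 0.000 * F + 0.006 * PS

def w_opt(LL, F, PS):
    """Estimated optimum moisture content (R^2 = 0.726)."""
    return 3.441 + 0.594 * LL + 0.025 * F + 0.024 * PS

# Example: LL = 30%, F = 60% fines, PS = 6% cement
print(gamma_d_max(LL=30, F=60, PS=6), w_opt(LL=30, F=60, PS=6))
```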

  9. Radiation Exposure and Attributable Cancer Risk in Patients With Esophageal Atresia.

    PubMed

    Yousef, Yasmine; Baird, Robert

    2018-02-01

    Cases of esophageal carcinoma have been documented in survivors of esophageal atresia (EA). Children with EA undergo considerable amounts of diagnostic imaging and consequent radiation exposure, potentially increasing their lifetime cancer mortality risk. This study evaluates the radiological procedures performed on patients with EA and estimates their cumulative radiation exposure and attributable lifetime cancer mortality risk. Medical records of patients with EA managed at a tertiary care center were reviewed for demographics, EA subtype, and the number and type of radiological investigations. Existing normative data were used to estimate the cumulative radiation exposure and lifetime cancer risk per patient. The study included 53 patients with a mean follow-up of 5.7 years. The median and maximum estimated effective radiation doses in the neonatal period were 5521.4 μSv/patient and 66638.6 μSv/patient, respectively. These correspond to median and maximum estimated cumulative lifetime cancer mortality risks of 1:1530 and 1:130, respectively. Hence, radiation exposure in the neonatal period increased the cumulative cancer mortality risk by a median of 130-fold and a maximum of 1575-fold in EA survivors. Children with EA are exposed to significant amounts of radiation and an increased estimated cumulative cancer mortality risk. Efforts should be made to eliminate superfluous imaging.
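    A hedged sketch of the dose-to-risk arithmetic such studies rely on. The 5.5%/Sv linear no-threshold coefficient is the ICRP 103 population-average value assumed here for illustration, not the age-specific coefficients the paper used, so the resulting risk differs from the reported 1:1530.

```python
# Cumulative effective dose -> attributable lifetime risk, linear no-threshold.
RISK_PER_SV = 0.055          # assumed ICRP 103 nominal coefficient, not the paper's

def lifetime_risk(doses_usv):
    """doses_usv: per-study effective doses in microsieverts."""
    total_sv = sum(doses_usv) / 1e6
    return total_sv * RISK_PER_SV

r = lifetime_risk([5521.4])   # the median neonatal dose reported above
print(f"attributable lifetime risk ~ 1 in {round(1 / r)}")
```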

  10. Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions

    NASA Astrophysics Data System (ADS)

    Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena

    2007-08-01

    Novel video-processing software has been developed for the sessile drop technique, enabling rapid and quantitative estimation of slag foaming. The data processing is carried out in two stages: the first transforms the digital video/audio signals into a format compatible with the computing software, and the second computes the slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming in a synthetic graphite/slag system at 1550 °C. The technique can be used to determine the extent and stability of the foam as a function of time.
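    As an illustration of the second processing stage (not the actual software), the sketch below estimates droplet volume from a binarized frame by treating each pixel row of the silhouette as a disc of revolution; the threshold, pixel scale, and axisymmetry assumption are ours.

```python
# Droplet volume from a grayscale frame, assuming an axisymmetric drop.
import numpy as np

def droplet_volume(frame, threshold=128, mm_per_px=0.05):
    """frame: 2D grayscale array; returns volume in mm^3."""
    mask = frame > threshold                 # droplet silhouette
    widths = mask.sum(axis=1)                # silhouette width per row (px)
    radii = widths * mm_per_px / 2.0
    return float(np.pi * np.sum(radii ** 2) * mm_per_px)  # stack of discs

# Synthetic circular drop of radius 30 px (1.5 mm): expect ~14.1 mm^3
frame = np.zeros((100, 100))
yy, xx = np.mgrid[:100, :100]
frame[(yy - 50) ** 2 + (xx - 50) ** 2 < 30 ** 2] = 255
print(droplet_volume(frame))
```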

  11. Fast gradient-based algorithm on extended landscapes for wave-front reconstruction of Earth observation satellite

    NASA Astrophysics Data System (ADS)

    Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.

    2016-07-01

    We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wave-front sensor over extended landscapes. The proposed algorithm has low complexity compared with classical correlation methods, a significant advantage for on-board, real-time use at high instrument data rates. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened hardware and software. The processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
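    A minimal sketch of a gradient-based shift estimator of this kind, assuming a single Lucas-Kanade-style least-squares step on image gradients; the flight implementation is certainly more elaborate.

```python
# One-step gradient-based (optical flow) sub-pixel shift estimation:
# solve I_x*dx + I_y*dy = -(I2 - I1) in the least-squares sense.
import numpy as np

def estimate_shift(img1, img2):
    gy, gx = np.gradient(img1)               # gradients along y then x
    it = img2 - img1                          # temporal difference
    A = np.column_stack([gx.ravel(), gy.ravel()])
    b = -it.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Synthetic landscape shifted by (0.3, -0.2) pixels
y, x = np.mgrid[:64, :64].astype(float)
img1 = np.sin(x / 5.0) * np.cos(y / 7.0)
img2 = np.sin((x - 0.3) / 5.0) * np.cos((y + 0.2) / 7.0)
print(estimate_shift(img1, img2))             # approximately (0.3, -0.2)
```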

  12. On a strong solution of the non-stationary Navier-Stokes equations under slip or leak boundary conditions of friction type

    NASA Astrophysics Data System (ADS)

    Kashiwabara, Takahito

    Strong solutions of the non-stationary Navier-Stokes equations under non-linearized slip or leak boundary conditions are investigated. We show that the problems can be formulated as a variational inequality of parabolic type, for which uniqueness is established. Using Galerkin's method and deriving a priori estimates, we prove global existence for the 2D slip problem and local existence for the 3D slip problem. For leak problems, under a no-leak assumption at t = 0, we prove local existence in the 2D and 3D cases. Compatibility conditions for the initial states play a significant role in the estimates.
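    For reference, a sketch of the friction-type boundary conditions in what we believe is the standard (Fujita-type) formulation, with g a given threshold function and subscripts n and τ denoting normal and tangential components of the velocity u and the stress σ; the exact conditions used in the paper may differ.

```latex
% Friction-type slip and leak boundary conditions (assumed formulation)
\begin{align*}
\text{slip:} \quad & u_n = 0, \quad |\sigma_\tau(u)| \le g, \quad
  \sigma_\tau(u)\cdot u_\tau + g\,|u_\tau| = 0, \\
\text{leak:} \quad & u_\tau = 0, \quad |\sigma_n(u)| \le g, \quad
  \sigma_n(u)\,u_n + g\,|u_n| = 0.
\end{align*}
```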

  13. ESTIMATING PROPORTION OF AREA OCCUPIED UNDER COMPLEX SURVEY DESIGNS

    EPA Science Inventory

    Estimating the proportion of sites occupied, or proportion of area occupied (PAO), is a common problem in environmental studies. Typically, field surveys cannot ensure that the occupancy of a site is detected with certainty. Maximum likelihood estimation of site occupancy rates when...
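    A hedged sketch of the standard single-season occupancy likelihood (MacKenzie-style), which is presumably the kind of model the truncated abstract refers to; ψ is the occupancy probability, p the per-visit detection probability, and the data are simulated.

```python
# Joint MLE of occupancy (psi) and detection (p) from repeated site visits.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y, K):
    """y: detection counts per site; K: number of visits per site."""
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params)))   # logit -> (0, 1)
    det = y[y > 0]                                       # sites with detections
    ll = np.sum(np.log(psi) + det * np.log(p) + (K - det) * np.log(1 - p))
    # sites never detected: occupied-but-missed, or truly unoccupied
    ll += np.sum(y == 0) * np.log(psi * (1 - p) ** K + (1 - psi))
    return -ll

rng = np.random.default_rng(1)
psi_true, p_true, K, n = 0.6, 0.3, 5, 500
occupied = rng.random(n) < psi_true
y = rng.binomial(K, p_true, size=n) * occupied           # zero if unoccupied

res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y, K))
print(1.0 / (1.0 + np.exp(-res.x)))                      # [psi_hat, p_hat]
```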

  14. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information is incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.
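    Purely illustrative (and not the CLASSY estimator): one simple way to fold a Beta prior on a cluster's purity into a crop-proportion estimate is via the posterior mean purity, as sketched below with made-up numbers.

```python
# Posterior mean purity under a Beta prior, updated with labeled pixels.
def posterior_purity(alpha, beta, labeled_crop, labeled_total):
    """Beta(alpha, beta) prior; labeled_crop of labeled_total pixels are crop."""
    return (alpha + labeled_crop) / (alpha + beta + labeled_total)

# Cluster covers 40% of the segment; prior purity Beta(8, 2) (mean 0.8);
# 20 labeled pixels of which 15 are the target crop.
purity = posterior_purity(8, 2, 15, 20)
print(0.40 * purity)   # the cluster's contribution to the crop proportion
```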

  15. Estimation of the solar Lyman alpha flux from ground based measurements of the Ca II K line

    NASA Technical Reports Server (NTRS)

    Rottman, G. J.; Livingston, W. C.; White, O. R.

    1990-01-01

    Measurements of the solar Lyman alpha and Ca II K lines from October 1981 to April 1989 show a strong correlation (r = 0.95) that allows estimation of the Lyman alpha flux at 1 AU from 1975 to December 1989. The estimated Lyman alpha strength of (3.9 ± 0.15) × 10^11 photons/s/sq cm on December 7, 1989 matches the maximum levels seen in Cycle 21. Relative to other UV surrogates (sunspot number, 10.7 cm radio flux, and He I 10830 line strength), Lyman alpha estimates computed from the K line track the SME measurements well from solar maximum, through solar minimum, and into Cycle 22.
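    A hedged sketch of the proxy regression implied above: fit Lyman alpha flux against a Ca II K index over the overlap period, then predict for dates with K-line data only; all data values below are synthetic placeholders.

```python
# Linear proxy regression: estimate Ly-alpha flux from a Ca II K index.
import numpy as np

rng = np.random.default_rng(2)
k_index = rng.uniform(0.08, 0.11, 200)                # synthetic Ca II K index
lyman = 2.5e11 + 1.5e13 * (k_index - 0.08) + rng.normal(0, 5e9, 200)

slope, intercept = np.polyfit(k_index, lyman, 1)      # fit over overlap period
predict = lambda k: slope * k + intercept             # apply to K-only dates
print(f"{predict(0.105):.2e} photons/s/cm^2")
```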

  16. Effects of electric field on the maximum electro-spinning rate of silk fibroin solutions.

    PubMed

    Park, Bo Kyung; Um, In Chul

    2017-02-01

    Owing to the excellent cyto-compatibility of silk fibroin (SF) and the simple fabrication of nano-fibrous webs, electro-spun SF webs have attracted much research attention in numerous biomedical fields. Because the production rate of electro-spun webs depends strongly on the electro-spinning rate, that rate is of particular practical importance. In the present study, to improve the electro-spinning rate of SF solutions, various electric fields were applied during electro-spinning, and their effects on the maximum electro-spinning rate of the SF solution, as well as on the diameters and molecular conformations of the electro-spun SF fibers, were examined. As the electric field increased, the maximum electro-spinning rate of the SF solution also increased. The maximum electro-spinning rate of a 13% SF solution could be increased 12-fold by raising the electric field from 0.5 kV/cm (0.25 mL/h) to 2.5 kV/cm (3.0 mL/h). The dependence of fiber diameter on the applied electric field was not significant for less-concentrated SF solutions (7-9% SF); at higher SF concentrations, the electric field had a greater effect on the resulting fiber diameter. The electric field had a minimal effect on the molecular conformation and crystallinity index of the electro-spun SF webs. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Quantitative chest computed tomography as a means of predicting exercise performance in severe emphysema.

    PubMed

    Crausman, R S; Ferguson, G; Irvin, C G; Make, B; Newell, J D

    1995-06-01

    We assessed the value of quantitative high-resolution computed tomography (CT) as a diagnostic and prognostic tool in smoking-related emphysema. We performed an inception cohort study of 14 patients referred with emphysema. The diagnosis of emphysema was based on a compatible history, physical examination, chest radiograph, CT scan of the lung, and pulmonary physiologic evaluation. As a group, those who underwent exercise testing were hyperinflated (percentage predicted total lung capacity ± standard error of the mean = 133 ± 9%), with evidence of air trapping (percentage predicted residual volume = 318 ± 31%) and airflow limitation (forced expiratory volume in 1 sec [FEV1] = 40 ± 7%). The exercise performance of the group was severely limited (maximum achievable workload = 43 ± 6%) and was characterized by prominent ventilatory, gas exchange, and pulmonary vascular abnormalities. The quantitative CT index was markedly elevated in all patients (76 ± 9; n = 14; normal < 4). The quantitative CT index correlated with measures of airflow limitation (FEV1: r2 = .34, p = .09; FEV1/forced vital capacity: r2 = .46, p = .04), with maximum workload achieved (r2 = .93, p = .0001), and with maximum oxygen utilization (r2 = .83, p = .0007). Quantitative chest CT assessment of disease severity is thus correlated with the degree of airflow limitation and exercise impairment in pulmonary emphysema.
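    For readers unfamiliar with the statistic, the reported r2 values are squared Pearson correlations between the CT index and each outcome, as in this sketch with made-up numbers.

```python
# Squared Pearson correlation between a CT index and exercise workload.
import numpy as np

ct_index = np.array([60., 72., 80., 85., 90., 95., 100., 110.])   # made up
workload = np.array([55., 50., 47., 44., 40., 36., 33., 28.])     # % predicted
r = np.corrcoef(ct_index, workload)[0, 1]
print(f"r^2 = {r**2:.2f}")   # compare with the r2 = .93 reported above
```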

  18. 48 CFR 1852.216-85 - Estimated cost and award fee.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and Clauses 1852.216-85 Estimated cost and award fee. As prescribed in 1816.406-70(e), insert the following clause: Estimated Cost and Award Fee (SEP 1993) The estimated cost of this contract is $___. The... cost, base fee, and maximum award fee are $___. (End of clause) Alternate I (SEP 1993). As prescribed...

  19. A Test-Length Correction to the Estimation of Extreme Proficiency Levels

    ERIC Educational Resources Information Center

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2011-01-01

    In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
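    A hedged illustration of the divergence noted above: under a Rasch model (made-up item difficulties), the log-likelihood for a perfect response pattern increases monotonically in θ, so the ML proficiency estimate is infinite.

```python
# Rasch-model log-likelihood for a perfect response pattern never peaks.
import numpy as np

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])      # illustrative item difficulties

def log_lik(theta, x):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))      # item response probabilities
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

perfect = np.ones(5)                            # all items answered correctly
for theta in [0.0, 2.0, 4.0, 8.0, 16.0]:
    print(theta, log_lik(theta, perfect))       # keeps increasing: no finite max
```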

  20. An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses

    ERIC Educational Resources Information Center

    Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark

    2015-01-01

    Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
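    The DerSimonian-Laird moment estimator mentioned above has a closed form, tau^2 = max(0, (Q - (k-1)) / (sum(w) - sum(w^2)/sum(w))) with inverse-variance weights w = 1/v and Cochran's Q, sketched here with made-up study data.

```python
# Moment-based DerSimonian-Laird estimate of between-study variance tau^2.
import numpy as np

def dersimonian_laird(y, v):
    """y: study effect estimates; v: their within-study variances."""
    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_bar) ** 2)           # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)     # truncated at zero

y = np.array([0.30, 0.10, 0.45, 0.25, 0.60])   # made-up effect sizes
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04])   # made-up variances
print(dersimonian_laird(y, v))                  # estimated tau^2
```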
