Science.gov

Sample records for spline-based estimator muse

  1. MUlti-Dimensional Spline-Based Estimator (MUSE) for motion estimation: algorithm development and initial results.

    PubMed

    Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F

    2008-12-01

Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of this central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE), which allows accurate and precise estimation of multi-dimensional displacement/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits a maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded results similar to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. 
While our validation of the algorithm was performed using ultrasound data, MUSE is
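
    The 1-D estimator this record generalizes can be sketched in a few lines. The sketch below fits a cubic spline to the reference signal and minimizes a sum-of-squared-differences matching function over a continuous delay; the function name and the SSD matching form are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def spline_delay(ref, delayed, search=(-2.0, 2.0)):
    """Estimate the sub-sample delay of `delayed` relative to `ref`.

    A cubic spline provides a continuous model of the reference; the
    delay minimizing a sum-of-squared-differences matching function
    is then found over a continuous search interval.
    """
    n = len(ref)
    spline = CubicSpline(np.arange(n), ref)
    idx = np.arange(4, n - 4)  # interior window so shifted samples stay in range

    def ssd(tau):
        return np.sum((spline(idx - tau) - delayed[idx]) ** 2)

    return minimize_scalar(ssd, bounds=search, method="bounded").x
```

    On a smooth signal this recovers delays to a small fraction of a sample, consistent with the sub-sample accuracy the abstract reports for the 1-D case.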

  2. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  3. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.
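
    The compact-scheme connection noted above is easy to demonstrate: on a uniform periodic grid, cubic-spline collocation yields the classic fourth-order compact relation f'_{i-1} + 4 f'_i + f'_{i+1} = (3/h)(f_{i+1} - f_{i-1}). The sketch below (the helper name and the dense solve are illustrative choices, not Jameson's wavelet construction) applies it numerically.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Solve f'_{i-1} + 4 f'_i + f'_{i+1} = (3/h)(f_{i+1} - f_{i-1})
    on a periodic grid.

    This cyclic tridiagonal system is the compact finite-difference
    scheme generated by cubic-spline collocation; it is fourth-order
    accurate for smooth periodic functions.
    """
    n = len(f)
    A = 4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A[0, -1] = A[-1, 0] = 1.0  # periodic wrap-around
    rhs = 3.0 / h * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(A, rhs)
```

    A production code would solve the cyclic tridiagonal system in O(n) rather than forming a dense matrix; the dense solve here keeps the sketch short.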

  4. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. The estimator and CI are compared with other flexible and parametric methods, such as linear spline interpolation and maximum likelihood regression, in simulation studies motivated by a real clinical trial. Design considerations for the cubic spline approach, with a focus on bias minimization, are also presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias-minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
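
    The intersection idea can be sketched as follows, assuming for illustration a cubic interpolating spline through the observed dose means rather than the paper's fitted regression spline; the helper name and example data are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def target_dose(doses, means, control_mean):
    """Smallest dose whose spline-interpolated mean response equals
    the active control's mean response (hypothetical helper)."""
    s = CubicSpline(doses, means)
    g = lambda d: float(s(d)) - control_mean
    # scan dose sub-intervals for the first sign change, then root-find
    for lo, hi in zip(doses[:-1], doses[1:]):
        if g(lo) == 0.0:
            return float(lo)
        if g(lo) * g(hi) < 0.0:
            return brentq(g, lo, hi)
    raise ValueError("control response is not bracketed by the dose range")
```

    Root-finding on the spline replaces any assumption about the functional form of the dose–response curve, which is the point of the paper's nonparametric approach.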

  5. MUSE alignment onto VLT

    NASA Astrophysics Data System (ADS)

    Laurent, Florence; Renault, Edgard; Boudon, Didier; Caillier, Patrick; Daguisé, Eric; Dupuy, Christophe; Jarno, Aurélien; Lizon, Jean-Louis; Migniau, Jean-Emmanuel; Nicklas, Harald; Piqueras, Laure

    2014-07-01

MUSE (Multi Unit Spectroscopic Explorer) is a second-generation Very Large Telescope (VLT) integral field spectrograph developed for the European Southern Observatory (ESO). It combines a 1' x 1' field of view sampled at 0.2 arcsec for its Wide Field Mode (WFM) and a 7.5"x7.5" field of view for its Narrow Field Mode (NFM). Both modes will operate with the improved spatial resolution provided by GALACSI (Ground Atmospheric Layer Adaptive Optics for Spectroscopic Imaging), which will use the VLT deformable secondary mirror and 4 Laser Guide Stars (LGS) foreseen in 2015. MUSE operates in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating, and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After a successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transported, fully aligned and without any optomechanical dismounting, onto the VLT, where first light was achieved on 7 February 2014. This paper describes the alignment procedure of the whole MUSE instrument with respect to the Very Large Telescope (VLT). It describes how a 6-ton instrument could be moved with an accuracy better than 0.025 mm and less than 0.25 arcmin in order to meet the alignment requirements. The success

  6. MUSE optical alignment procedure

    NASA Astrophysics Data System (ADS)

    Laurent, Florence; Renault, Edgard; Loupias, Magali; Kosmalski, Johan; Anwand, Heiko; Bacon, Roland; Boudon, Didier; Caillier, Patrick; Daguisé, Eric; Dubois, Jean-Pierre; Dupuy, Christophe; Kelz, Andreas; Lizon, Jean-Louis; Nicklas, Harald; Parès, Laurent; Remillieux, Alban; Seifert, Walter; Valentin, Hervé; Xu, Wenli

    2012-09-01

MUSE (Multi Unit Spectroscopic Explorer) is a second-generation VLT integral field spectrograph (1x1 arcmin² field of view) developed for the European Southern Observatory (ESO), operating in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently assembling and testing MUSE in the Integration Hall of the Observatoire de Lyon for the Preliminary Acceptance in Europe, scheduled for 2013. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating, and a detector system connected to a global vacuum and cryogenic system. During 2011, all MUSE subsystems were integrated, aligned and tested independently at each institute. After validation, the subsystems were shipped to the P.I. institute in Lyon and were assembled in the Integration Hall. This paper describes the end-to-end optical alignment procedure of the MUSE instrument. The design strategy, mixing optical alignment by manufacturing (a plug-and-play approach) with a few adjustments on key components, is presented. We depict the alignment method for identifying the optical axis using several references located in pupil and image planes. All tools required to perform the global alignment between the subsystems are described. The success of this alignment approach is demonstrated by the good MUSE image quality. MUSE commissioning at the VLT (Very Large Telescope) is planned for 2013.

  7. A Quadratic Spline based Interface (QUASI) reconstruction algorithm for accurate tracking of two-phase flows

    NASA Astrophysics Data System (ADS)

    Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.

    2009-12-01

A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves, and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently compared to the other interface reconstruction methods available in the literature. A quadratic rate of error reduction with respect to grid size has been observed in all cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
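
    The C0/C1 continuity conditions on piecewise parabolas are simple to state. As a minimal one-dimensional illustration (not the paper's interface reconstruction), a C1 quadratic spline can be built by propagating slopes so that each parabola starts with the slope the previous one ends with:

```python
import numpy as np

def quadratic_spline(x, y, m0=0.0):
    """C1 quadratic spline through the points (x, y).

    Each piece is a parabola. C0 holds by construction, and C1 is
    enforced by the slope recursion m_{i+1} = 2*(y_{i+1}-y_i)/h_i - m_i,
    so the end slope of one piece becomes the start slope of the next.
    Returns a callable evaluator.
    """
    m = np.empty(len(x))
    m[0] = m0  # free starting slope
    for i in range(len(x) - 1):
        m[i + 1] = 2.0 * (y[i + 1] - y[i]) / (x[i + 1] - x[i]) - m[i]

    def ev(t):
        i = np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(x) - 2)
        dt = t - x[i]
        a = (m[i + 1] - m[i]) / (2.0 * (x[i + 1] - x[i]))
        return y[i] + m[i] * dt + a * dt * dt

    return ev
```

    QUASI enforces the analogous conditions in 2D through predictor-corrector sweeps; the recursion above is the same continuity bookkeeping reduced to one dimension.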

  8. Self-feeding MUSE: a robust method for high resolution diffusion imaging using interleaved EPI.

    PubMed

    Zhang, Zhe; Huang, Feng; Ma, Xiaodong; Xie, Sheng; Guo, Hua

    2015-01-15

Single-shot echo planar imaging (EPI) with parallel imaging techniques is well established as the most popular method for clinical diffusion imaging, due to its fast acquisition and motion insensitivity. However, this approach is limited by relatively low spatial resolution and image distortion. Interleaved EPI can overcome these limitations, but the phase variations among different shots must be considered for artifact suppression. The introduction of multiplexed sensitivity-encoding (MUSE) addresses the phase issue by using sensitivity encoding (SENSE) for self-navigation of each interleave. However, MUSE gives suboptimal results when the number of shots is high. To achieve higher spatial resolution and lower geometric distortion, we introduce two new schemes into the MUSE framework: 1) a self-feeding mechanism that uses prior-information-regularized SENSE to obtain reliable phase estimation; and 2) retrospective motion detection and data rejection strategies that exclude unusable data corrupted by severe pulsatile motion. The proposed method is named self-feeding MUSE (SF-MUSE). Experiments on healthy volunteers demonstrate that this new SF-MUSE approach provides more accurate motion-induced phase estimation and fewer artifacts caused by data corruption when compared with the original MUSE method. SF-MUSE is a robust method for high-resolution diffusion imaging and is suitable for practical applications with a reasonable scan time.

  9. Central-force decomposition of spline-based modified embedded atom method potential

    NASA Astrophysics Data System (ADS)

    Winczewski, S.; Dziedzic, J.; Rybicki, J.

    2016-10-01

    Central-force decompositions are fundamental to the calculation of stress fields in atomic systems by means of Hardy stress. We derive expressions for a central-force decomposition of the spline-based modified embedded atom method (s-MEAM) potential. The expressions are subsequently simplified to a form that can be readily used in molecular-dynamics simulations, enabling the calculation of the spatial distribution of stress in systems treated with this novel class of empirical potentials. We briefly discuss the properties of the obtained decomposition and highlight further computational techniques that can be expected to benefit from the results of this work. To demonstrate the practicability of the derived expressions, we apply them to calculate stress fields due to an edge dislocation in bcc Mo, comparing their predictions to those of linear elasticity theory.

  10. A Musing on Schuller's "Musings"

    ERIC Educational Resources Information Center

    Asia, Daniel

    2013-01-01

    For many years Gunther Schuller was at the center of the classical music world, as a player, composer, conductor, writer, record producer, polemicist and publisher for new music and jazz, educator, and president of New England Conservatory. His book, entitled, "Musings: The Musical Worlds of Gunther Schuller: A Collection of His…

  11. Reduction and analysis of MUSE data

    NASA Astrophysics Data System (ADS)

    Richard, J.; Bacon, R.; Weilbacher, P. M.; Streicher, O.; Wisotzki, L.; Herenz, E. C.; Slezak, E.; Petremand, M.; Jalobeanu, A.; Collet, C.; Louys, M.

    2012-12-01

MUSE, the Multi Unit Spectroscopic Explorer, is a 2nd-generation integral-field spectrograph under final assembly, due to see first light at the Very Large Telescope in 2013. By capturing ~90,000 optical spectra in a single exposure, MUSE represents a challenge for data reduction and analysis. We summarise here the main features of the Data Reduction System, as well as some of the tools under development by the MUSE consortium and the DAHLIA team to handle the large MUSE datacubes (about 4×10^8 pixels) and recover the original astrophysical signal.

  12. MUSE, à l'assaut des galaxies (MUSE takes on the galaxies)

    NASA Astrophysics Data System (ADS)

    Richard, Johan

    2016-08-01

MUSE (standing for Multi Unit Spectroscopic Explorer) is a second-generation instrument recently commissioned on the Very Large Telescope in Chile. It is an integral field spectrograph with a large field of view of 1 arcmin² and high sensitivity, well suited for the study of galaxies. We present in this article some of the most significant results MUSE has obtained since the beginning of science operations. Thanks to its strong capabilities, MUSE is as efficient at studying the kinematics of the gas and stars within nearby galaxies as at discovering some of the most distant galaxies in the Universe through their Lyman-alpha emission. Once combined with an adaptive optics system, MUSE will certainly revolutionize our understanding of the formation and evolution of galaxies.

  13. Multi-Object Spectroscopy with MUSE

    NASA Astrophysics Data System (ADS)

    Kelz, A.; Kamann, S.; Urrutia, T.; Weilbacher, P.; Bacon, R.

    2016-10-01

Since 2014, MUSE, the Multi-Unit Spectroscopic Explorer, has been in operation at the ESO-VLT. It combines superb spatial sampling with a large wavelength coverage. By design, MUSE is an integral-field instrument, but its field of view and large multiplex make it a powerful tool for multi-object spectroscopy too. Every data-cube consists of 90,000 image-sliced spectra and 3700 monochromatic images. In autumn 2014, the observing programs with MUSE commenced, with targets ranging from distant galaxies in the Hubble Deep Field to local stellar populations, star-formation regions and globular clusters. This paper provides a brief summary of the key features of the MUSE instrument and its complex data reduction software. Selected examples show how multi-object spectroscopy of hundreds of continuum and emission-line objects can be obtained in wide, deep and crowded fields with MUSE, without the classical need for any target pre-selection.

  14. Musings.

    ERIC Educational Resources Information Center

    Hale, Robert D.

    1988-01-01

Criticizes a newly published "updated and simplified" version of the Peter Rabbit story. Argues the new edition--with photographs and everyday language replacing the original paintings and verse--is a corruption of a timeless classic. (ARH)

  15. MUSE dream conclusion: the sky verdict

    NASA Astrophysics Data System (ADS)

    Caillier, P.; Accardo, M.; Adjali, L.; Anwand, H.; Bacon, R.; Boudon, D.; Capoani, L.; Daguisé, E.; Dupieux, M.; Dupuy, C.; Francois, M.; Glindemann, A.; Gojak, D.; Gonté, F.; Haddad, N.; Hansali, G.; Hahn, T.; Jarno, A.; Kelz, A.; Koehler, C.; Kosmalski, J.; Laurent, F.; Larrieu, M.; Lizon, J.-L.; Loupias, M.; Manescau, A.; Migniau, J.-E.; Monstein, C.; Nicklas, H.; Parès, L.; Pécontal-Rousset, A.; Piqueras, L.; Reiss, R.; Remillieux, A.; Renault, E.; Rupprecht, G.; Streicher, O.; Stuik, R.; Valentin, H.; Vernet, J.; Weilbacher, P.; Zins, G.

    2014-08-01

MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument built for ESO (European Southern Observatory). The MUSE project is supported by a European consortium of 7 institutes. After the finalisation of its integration in Europe, the MUSE instrument was partially dismounted and shipped to the VLT (Very Large Telescope) in Chile. From October 2013 until February 2014, it was reassembled, tested and finally installed on the telescope, its final home. From there it collects its first photons coming from the outer limit of the visible universe. This critical moment, when the instrument finally meets its destiny, is the opportunity to look at the overall outcome of the project and the final performance of the instrument on the sky. The instrument which we dreamt of has become reality. Is the dreamt-of performance there as well? These final instrumental performances are the result of a step-by-step process of design, manufacturing, assembly, test and integration. Now is also the time to review the path opened by the MUSE project. What challenges were faced during those last steps? Which strategies and choices paid off, and which did not?

  16. The Practice of Sharing a Historical Muse

    ERIC Educational Resources Information Center

    Henderson, Bob

    2012-01-01

    Sharing an imaginative energy for the storied landscape is one kind of pedagogical passion. The author had taken on the challenge of offering this particular passion to his fellow travellers. With students, the practice of peppering a trip with a historical muse involves focussed readings, in the moment stories, planned ceremonies and rituals and,…

  17. Fore-optics of the MUSE instrument

    NASA Astrophysics Data System (ADS)

    Parès, L.; Couderc, P.; Dupieux, M.; Gharsa, T.; Larrieu, M.; Valentin, H.; Gallou, G.; Bacon, R.; Laurent, F.; Loupias, M.; Kosmalski, J.

    2012-09-01

MUSE (Multi Unit Spectroscopic Explorer) is a second-generation VLT panoramic integral field spectrograph developed for the European Southern Observatory (ESO), operating in the visible wavelength range (0.465-0.93 μm). The MUSE instrument is currently under integration, and commissioning is expected to start at the beginning of 2013. The scientific and technical capabilities of MUSE are described in a series of 19 companion papers. The Fore-Optics (FO), situated at the entrance of MUSE, de-rotates the 1 arcmin square field of view from the F/15.2 VLT Nasmyth focal plane and provides an anamorphic magnification (x5 / x2.5) of it (Wide Field Mode, WFM). Additional optical elements can be inserted in the optical beam to further increase the magnification by a factor of 8 (Narrow Field Mode, NFM). An atmospheric dispersion corrector is also added in the NFM. Two image stabilization units have been developed to ensure stabilization of the field of view (1/20 of a resolved element) for each observation mode. Environmental values such as temperature and hygrometry are monitored to report on the observation conditions. All motorized functions and sensors are remote-controlled from the VLT Software via the CAN bus with the CANopen protocol. In this paper, we describe the FO optical, mechanical and control/command electronic concept, its development and its performance.

  18. A spline-based tool to assess and visualize the calibration of multiclass risk predictions.

    PubMed

    Van Hoorde, K; Van Huffel, S; Timmerman, D; Bourne, T; Van Calster, B

    2015-04-01

When validating risk models (or probabilistic classifiers), calibration is often overlooked. Calibration refers to the reliability of the predicted risks, i.e. whether the predicted risks correspond to observed probabilities. In medical applications this is important because treatment decisions often rely on the estimated risk of disease. The aim of this paper is to present generic tools to assess the calibration of multiclass risk models. We describe a calibration framework based on a vector spline multinomial logistic regression model. This framework can be used to generate calibration plots and calculate the estimated calibration index (ECI) to quantify lack of calibration. We illustrate these tools in relation to risk models used to characterize ovarian tumors. The outcome of the study is the surgical stage of the tumor when relevant and the final histological outcome, which is divided into five classes: benign, borderline malignant, stage I, stage II-IV, and secondary metastatic cancer. The 5909 patients included in the study are randomly split into equally large training and test sets. We developed and tested models using the following algorithms: logistic regression, support vector machines, k nearest neighbors, random forest, naive Bayes and nearest shrunken centroids. Multiclass calibration plots are an appealing way to visualize the reliability of predicted risks. The ECI is a convenient tool for comparing models, but is less informative and interpretable than calibration plots. In our case study, logistic regression and random forest showed the highest degree of calibration, and naive Bayes the lowest.
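
    As a rough illustration of the ECI idea (binary case only; quantile binning stands in for the paper's spline-based calibration curve, and the helper name is hypothetical):

```python
import numpy as np

def eci_binary(pred, outcome, bins=10):
    """Crude estimated calibration index (ECI) sketch for a binary outcome.

    The ECI averages the squared difference between predicted risks and
    the calibration curve (observed probability given the prediction),
    scaled by 100. Here the observed probability is approximated by
    quantile binning instead of the paper's vector spline model.
    """
    edges = np.quantile(pred, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, pred, side="right") - 1, 0, bins - 1)
    obs = np.array([outcome[idx == b].mean() for b in range(bins)])
    return 100.0 * np.mean((pred - obs[idx]) ** 2)
```

    A well-calibrated model scores near zero; systematic over- or under-prediction inflates the index, which is what makes it convenient for comparing models.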

  19. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests.

    PubMed

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-09-04

Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individuals' ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests such as the TUG. The system tracks both legs and measures their trajectories. However, the legs may be close to each other, and one leg may be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and to deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull-Rom spline-based interpolation during occlusion are proposed. From experimental results with young people, we confirm that the proposed methods can reduce the chance of false tracking. In addition, we verify the measurement accuracy of the leg trajectory against a three-dimensional motion analysis system (VICON).
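
    The Catmull-Rom interpolation used during occlusion passes through its two middle control points, which makes it convenient for filling a gap from the four most recent reliable leg positions. A minimal sketch (the function name and usage are assumed for illustration):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between p1 and p2 for t in [0, 1].

    The curve passes through p1 (t=0) and p2 (t=1), with p0 and p3
    shaping the tangents; here it sketches filling an occluded leg
    position from four neighbouring tracked positions.
    """
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
```

    Because the segment interpolates its middle control points exactly, stitching consecutive segments yields a C1 trajectory through all reliable measurements.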

  20. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests

    PubMed Central

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-01-01

Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individuals' ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests such as the TUG. The system tracks both legs and measures their trajectories. However, the legs may be close to each other, and one leg may be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and to deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during occlusion are proposed. From experimental results with young people, we confirm that the proposed methods can reduce the chance of false tracking. In addition, we verify the measurement accuracy of the leg trajectory against a three-dimensional motion analysis system (VICON). PMID:26404302

  1. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only a 9% false-detection rate. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
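
    For context, the Sérsic elliptical profile mentioned above has the standard radial form I(r) = I_e exp(-b_n [(r/r_e)^(1/n) - 1]). A minimal sketch, using the common approximation b_n ≈ 2n - 1/3 (an assumption of this sketch, adequate for n ≳ 0.5):

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sérsic surface-brightness profile.

    I(r) = I_e * exp(-b_n * ((r / r_e)**(1/n) - 1)), where I_e is the
    brightness at the effective radius r_e and n is the Sérsic index
    (n=1: exponential disk, n=4: de Vaucouleurs bulge). b_n is set by
    the approximation b_n ~ 2n - 1/3.
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))
```

    By construction the profile equals I_e at r = r_e and declines monotonically outward, which is why a single elliptical Sérsic component fails on the irregular bright neighbours the abstract describes.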

  2. The MUSE Data Reduction Software Pipeline

    NASA Astrophysics Data System (ADS)

    Weilbacher, P. M.; Roth, M. M.; Pécontal-Rousset, A.; Bacon, R.; Muse Team

    2006-07-01

After giving a short overview of the instrument characteristics of the second-generation VLT instrument MUSE, we discuss what the data will look like and present the challenges and goals of its data reduction software. It is conceived as a number of pipeline recipes to be run in an automated way within the ESO data flow system. These recipes are based on a data reduction library that is being written in the C language using ESO's CPL API. We give a short overview of the steps needed for the reduction and post-processing of science data, discuss the requirements of a future visualization tool for integral field spectroscopy, and close with the timeline for MUSE and its data reduction pipeline.

  3. The calibration unit and detector system tests for MUSE

    NASA Astrophysics Data System (ADS)

    Kelz, A.; Bauer, S. M.; Biswas, I.; Fechner, T.; Hahn, T.; Olaya, J.-C.; Popow, E.; Roth, M. M.; Streicher, O.; Weilbacher, P.; Bacon, R.; Laurent, F.; Laux, U.; Lizon, J. L.; Loupias, M.; Reiss, R.; Rupprecht, G.

    2010-07-01

The Multi-Unit Spectroscopic Explorer (MUSE) is an integral-field spectrograph for the ESO Very Large Telescope. After completion of the Final Design Review in 2009, MUSE is now in its manufacture and assembly phase. To achieve a relatively large field of view with fine spatial sampling, MUSE features 24 identical spectrograph-detector units. The acceptance tests of the detector sub-systems, the design and manufacture of the calibration unit, and the development of the Data Reduction Software for MUSE are under the responsibility of the AIP. The optical design of the spectrograph implies strict tolerances on the alignment of the detector systems to minimize aberrations. As part of the acceptance testing, all 24 detector systems, developed by ESO, are mounted to a MUSE reference spectrograph, which is illuminated by a set of precision pinholes. Thus the best focus is determined, and the image quality of the spectrograph-detector subsystem is measured across wavelength and field angle.

  4. Porting Big Data technology across domains. WISE for MUSE

    NASA Astrophysics Data System (ADS)

    Vriend, Willem-Jan

    2015-12-01

Due to the nature of MUSE data, each data-cube obtained as part of the GTO program is used by most of the consortium institutes, which are spread across Europe. Since the effort required to reduce the data is significant, and to ensure uniformity of analysis, it is desirable to have a data management system that integrates data reduction, provenance tracking, quality control and data analysis. Such a system should support the distribution of storage and processing over the consortium institutes. The MUSE-WISE system incorporates these aspects. It is built on the Astro-WISE system, originally designed to handle OmegaCAM imaging data, which has been extended to support 3D spectroscopic data. MUSE-WISE is now being used to process MUSE GTO data. It currently stores 95 TB, consisting of 48k raw exposures and processed data, used by 79 users spread over 7 nodes in Europe.

  5. MUSE observations of the lensing cluster Abell 1689

    NASA Astrophysics Data System (ADS)

    Bina, D.; Pelló, R.; Richard, J.; Lewis, J.; Patrício, V.; Cantalupo, S.; Herenz, E. C.; Soto, K.; Weilbacher, P. M.; Bacon, R.; Vernet, J. D. R.; Wisotzki, L.; Clément, B.; Cuby, J. G.; Lagattuta, D. J.; Soucail, G.; Verhamme, A.

    2016-05-01

    Context. This paper presents the results obtained with the Multi Unit Spectroscopic Explorer (MUSE) for the core of the lensing cluster Abell 1689, as part of MUSE's commissioning at the ESO Very Large Telescope. Aims: Integral-field observations with MUSE provide a unique view of the central 1 × 1 arcmin2 region at intermediate spectral resolution in the visible domain, allowing us to conduct a complete census of both cluster galaxies and lensed background sources. Methods: We performed a spectroscopic analysis of all sources found in the MUSE data cube. Two hundred and eighty-two objects were systematically extracted from the cube based on a guided-and-manual approach. We also tested three different tools for the automated detection and extraction of line emitters. Cluster galaxies and lensed sources were identified based on their spectral features. We investigated the multiple-image configuration for all known sources in the field. Results: Prior to our survey, 28 different lensed galaxies displaying 46 multiple images were known in the MUSE field of view, most of them detected through photometric redshifts and lensing considerations. Of these, we spectroscopically confirm 12 images based on their emission lines, corresponding to 7 different lensed galaxies between z = 0.95 and 5.0. In addition, 14 new galaxies have been spectroscopically identified in this area thanks to the MUSE data, with redshifts ranging between 0.8 and 6.2. All background sources detected within the MUSE field of view correspond to multiple-image systems lensed by A1689. Seventeen sources in total are found at z ≥ 3 based on their Lyman-α emission, with Lyman-α luminosities in the range 40.5 ≲ log (Lyα) ≲ 42.5 after correction for magnification. This sample is particularly sensitive to the slope of the luminosity function toward the faintest end. The density of sources obtained in this survey is consistent with a steep value of α ≤ -1.5, although this result still […]

  6. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Anderson, Jonathan D.; Archibald, James K.

    2008-12-01

    Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. "Seeing Eye Glasses," an embedded stereo vision system utilizing our new algorithm, meets all these requirements.
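    The 1D spline-plus-genetic-search idea above can be sketched in software as follows. This is an illustrative toy, not the authors' FPGA implementation: the cost function, the genetic operators, and every parameter value here are assumptions. A cubic spline makes one scanline continuous so that fractional (sub-pixel) disparities can be scored, and a simple genetic algorithm searches for a smooth per-pixel disparity vector.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def match_scanline_ga(left, right, max_disp=8.0, pop_size=40, gens=60,
                      smooth_w=0.1, seed=0):
    """Estimate sub-pixel per-pixel disparities between two scanlines.

    A cubic spline makes `right` continuous so fractional disparities can
    be scored directly; a toy genetic algorithm (elitism, averaging
    crossover, Gaussian mutation) minimises photometric error plus a
    smoothness penalty.
    """
    rng = np.random.default_rng(seed)
    n = len(left)
    x = np.arange(n, dtype=float)
    spline = CubicSpline(x, right)

    def cost(d):
        xs = np.clip(x + d, 0.0, n - 1.0)             # shifted sample positions
        photometric = np.sum((left - spline(xs)) ** 2)
        smooth = np.sum(np.diff(d) ** 2)              # neighbours should agree
        return photometric + smooth_w * smooth

    pop = rng.uniform(0.0, max_disp, size=(pop_size, n))
    for _ in range(gens):
        pop = pop[np.argsort([cost(ind) for ind in pop])]
        elite = pop[: pop_size // 2]                  # keep the better half
        m = pop_size - len(elite)
        pairs = elite[rng.integers(0, len(elite), size=(m, 2))]
        children = pairs.mean(axis=1) + rng.normal(0.0, 0.2, size=(m, n))
        pop = np.vstack([elite, np.clip(children, 0.0, max_disp)])
    return pop[np.argmin([cost(ind) for ind in pop])]
```

    On a synthetic pair of scanlines shifted by a constant disparity, the recovered disparities should cluster near the true shift.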

  7. Reflecting on the Experience: Musings from the Antipodes

    ERIC Educational Resources Information Center

    Dickson, Tracey J.

    2008-01-01

    Facilitating the reflection upon outdoor and adventure experiences is a common practice for many leaders and instructors. This article draws upon visual semiotics to provide reflections upon three images from books that originate from within the dominant North American paradigm. These musings are from one Antipodean's perspective, who may see the…

  8. A nebular analysis of the central Orion nebula with MUSE

    NASA Astrophysics Data System (ADS)

    Mc Leod, A. F.; Weilbacher, P. M.; Ginsburg, A.; Dale, J. E.; Ramsay, S.; Testi, L.

    2016-02-01

    A nebular analysis of the central Orion nebula and its main structures is presented. We exploit observations from the integral field spectrograph Multi Unit Spectroscopic Explorer (MUSE) in the wavelength range 4595-9366 Å to produce the first O, S and N ionic and total abundance maps of a region spanning 6 arcmin × 5 arcmin with a spatial resolution of 0.2 arcsec. We use the S23 (= ([S II] λλ6717,6731 + [S III] λ9068)/Hβ) parameter, together with [O II]/[O III] as an indicator of the degree of ionization, to distinguish between the various small-scale structures. The only Orion bullet covered by MUSE is HH 201, which shows a double component in the [Fe II] λ8617 line throughout, indicating expansion, and we discuss a scenario in which this object is undergoing a disruptive event. We separate the proplyds located south of the Bright Bar into four categories depending on their S23 values, propose the utility of the S23 parameter as an indicator of the shock contribution to the excitation of line-emitting atoms, and show that the MUSE data are able to identify the proplyds associated with discs and microjets. We compute the second-order structure function for the Hα, [O III] λ5007, [S II] λ6731 and [O I] λ6300 emission lines to analyse the turbulent velocity field of the region covered with MUSE. We find that the spectral and spatial resolution of MUSE are not able to faithfully reproduce the structure functions of previous works.
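    As a concrete illustration of the two diagnostics named above, the line ratios can be computed directly from measured, dereddened line fluxes. The function names and flux values here are hypothetical, and all fluxes are assumed to be in the same (relative) units:

```python
def s23(f_sii_6717, f_sii_6731, f_siii_9068, f_hbeta):
    """S23 = ([S II] λ6717 + [S II] λ6731 + [S III] λ9068) / Hβ."""
    return (f_sii_6717 + f_sii_6731 + f_siii_9068) / f_hbeta

def ionization_indicator(f_oii, f_oiii):
    """[O II]/[O III]: larger values mean a lower degree of ionization."""
    return f_oii / f_oiii

# Hypothetical dereddened fluxes, all in the same relative units:
ratio = s23(1.0, 0.7, 0.8, 2.5)   # -> 1.0
```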

  9. Estimation of coefficients and boundary parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Murphy, K. A.

    1984-01-01

    Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline-based algorithms are proposed, for which theoretical convergence results are given along with a representative sample of numerical findings.
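    A minimal sketch of the spline parameterization at the heart of such schemes (not the Galerkin machinery or the hyperbolic solve itself): the unknown spatially varying coefficient is represented in a cubic spline basis and recovered from noisy point samples by least squares. The coefficient profile, knot placement, and noise level are invented for illustration.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
true_c = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)       # unknown coefficient c(x)
obs = true_c + rng.normal(0.0, 0.05, x.size)       # noisy point observations

k = 3                                              # cubic splines
interior = np.linspace(0.0, 1.0, 9)[1:-1]          # 7 interior knots
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]  # clamped knot vector
c_hat = make_lsq_spline(x, obs, t, k=k)            # least-squares spline fit

max_err = np.max(np.abs(c_hat(x) - true_c))        # recovery error
```

    The spline coefficients play the role of the finite-dimensional parameters actually estimated; in the full inverse problem they would be fit through the PDE model rather than directly against samples of c(x).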

  10. Deep MUSE observations in the HDFS. Morpho-kinematics of distant star-forming galaxies down to 10^8 M⊙

    NASA Astrophysics Data System (ADS)

    Contini, T.; Epinat, B.; Bouché, N.; Brinchmann, J.; Boogaard, L. A.; Ventou, E.; Bacon, R.; Richard, J.; Weilbacher, P. M.; Wisotzki, L.; Krajnović, D.; Vielfaure, J.-B.; Emsellem, E.; Finley, H.; Inami, H.; Schaye, J.; Swinbank, M.; Guérou, A.; Martinsson, T.; Michel-Dansac, L.; Schroetter, I.; Shirazi, M.; Soucail, G.

    2016-06-01

    Aims: Whereas the evolution of gas kinematics of massive galaxies is now relatively well established up to redshift z ~ 3, little is known about the kinematics of lower mass (M⋆ ≤ 10^10 M⊙) galaxies. We use MUSE, a powerful wide-field, optical integral-field spectrograph (IFS) recently mounted on the VLT, to characterize this galaxy population at intermediate redshift. Methods: We made use of the deepest MUSE observations performed so far on the Hubble Deep Field South (HDFS). This data cube, resulting from 27 h of integration time, covers a one arcmin2 field of view at an unprecedented depth (with a 1σ emission-line surface brightness limit of 1 × 10^-19 erg s^-1 cm^-2 arcsec^-2) and a final spatial resolution of ≈0.7''. We identified a sample of 28 resolved emission-line galaxies, extending over an area that is at least twice the seeing disk, spread over a redshift interval of 0.2 < z < […], with estimates of the disk inclination, disk scale length, and position angle of the major axis. We derived the resolved ionized gas properties of these galaxies from the MUSE data and model the disk (both in 2D and in 3D with GalPaK3D) to retrieve their intrinsic gas kinematics, including the maximum rotation velocity and velocity dispersion. Results: We build a sample of resolved emission-line galaxies of much lower stellar mass and SFR (by ~1-2 orders of magnitude) than previous IFS surveys. The gas kinematics of most of the spatially resolved MUSE-HDFS galaxies are consistent with disk-like rotation, but about 20% have velocity dispersions that are larger than the rotation velocities, and 30% are part of a close pair and/or show clear signs of recent […]
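    The disk modelling described above can be illustrated with a much-simplified 2D velocity-field model (a generic sketch, not GalPaK3D): an arctan rotation curve projected onto the sky for a thin inclined disk. The position angle is measured from the +x axis here for simplicity, and all parameter values are illustrative.

```python
import numpy as np

def velocity_field(shape, center, pa_deg, inc_deg, vmax, rt):
    """Line-of-sight velocity map of a thin disk with an arctan rotation curve.

    pa_deg: position angle of the major axis, measured from +x (a
    simplifying convention for this sketch); inc_deg: inclination;
    vmax, rt: asymptotic velocity and turnover radius (pixels).
    """
    y, x = np.indices(shape, dtype=float)
    dx, dy = x - center[0], y - center[1]
    pa, inc = np.deg2rad(pa_deg), np.deg2rad(inc_deg)
    # rotate into the disk frame and deproject the minor axis
    xm = dx * np.cos(pa) + dy * np.sin(pa)
    ym = (-dx * np.sin(pa) + dy * np.cos(pa)) / np.cos(inc)
    r = np.hypot(xm, ym)
    costheta = np.divide(xm, r, out=np.zeros_like(r), where=r > 0)
    vrot = vmax * (2.0 / np.pi) * np.arctan(r / rt)   # arctan rotation curve
    return vrot * np.sin(inc) * costheta              # projected along the line of sight
```

    Fitting such a model (plus a dispersion term and beam smearing) to the observed 2D or 3D data is what yields the maximum rotation velocity and velocity dispersion quoted above.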

  11. IFU simulator: a powerful alignment and performance tool for MUSE instrument

    NASA Astrophysics Data System (ADS)

    Laurent, Florence; Boudon, Didier; Daguisé, Eric; Dubois, Jean-Pierre; Jarno, Aurélien; Kosmalski, Johan; Piqueras, Laure; Remillieux, Alban; Renault, Edgard

    2014-07-01

    MUSE (Multi Unit Spectroscopic Explorer) is a second-generation Very Large Telescope (VLT) integral field spectrograph (1x1 arcmin² field of view) developed for the European Southern Observatory (ESO), operating in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE on the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating, and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transferred, without dismounting, as a monolithic unit onto the VLT telescope, where first light was achieved. This talk describes the IFU Simulator, which is the main alignment and performance tool for the MUSE instrument. The IFU Simulator mimics the optomechanical interface between the MUSE pre-optics and the 24 IFUs. The optomechanical design is presented, followed by the alignment method of this innovative tool for identifying the pupil and image planes, and finally the internal test report. The success of the MUSE alignment using the IFU Simulator is demonstrated by the excellent results obtained for MUSE positioning, image quality and throughput. MUSE commissioning at the VLT is planned for September 2014.

  12. Not Your Daddy's Data Link: Musings on Datalink Communications

    NASA Technical Reports Server (NTRS)

    Branstetter, James

    2004-01-01

    Viewgraphs about musings on Datalink Communications are presented. Some of the topics include: 1) Keen Eye for a Straight Proposal (Next Gen Data Link); 2) So many datalinks so little funding!!!; 3) Brave New World; 4) Time marches on!; 5) Through the Looking Glass; 6) Dollars & Sense Cooking; 7) Economics 101; 8) The Missing Link(s); 9) Straight Shooting; and 10) All is not lost.

  13. Therapeutic Effects of Human Multilineage-Differentiating Stress Enduring (MUSE) Cell Transplantation into Infarct Brain of Mice

    PubMed Central

    Yamauchi, Tomohiro; Kuroda, Yasumasa; Morita, Takahiro; Shichinohe, Hideo; Houkin, Kiyohiro; Dezawa, Mari; Kuroda, Satoshi

    2015-01-01

    Objective Bone marrow stromal cells (BMSCs) are heterogeneous, and their therapeutic effect is pleiotropic. Multilineage-differentiating stress enduring (Muse) cells, recently identified as comprising several percent of BMSCs, are able to differentiate into triploblastic lineages including neuronal cells and act as tissue repair cells. This study aimed to clarify how Muse and non-Muse cells in BMSCs contribute to functional recovery after ischemic stroke. Methods Human BMSCs were separated into stage-specific embryonic antigen-3-positive Muse cells and -negative non-Muse cells. Immunodeficient mice were subjected to permanent middle cerebral artery occlusion and received transplantation of vehicle, Muse, non-Muse or BMSCs (2.5×10^4 cells) into the ipsilateral striatum 7 days later. Results Motor function recovery in the BMSC and non-Muse groups became apparent at 21 days after transplantation but reached a plateau thereafter. In the Muse group, functional recovery was not observed for up to 28 days post-transplantation, but became apparent at 35 days post-transplantation. On immunohistochemistry, only Muse cells were integrated into the peri-infarct cortex and differentiated into Tuj-1- and NeuN-expressing cells, while a negligible number of BMSCs and non-Muse cells remained in the peri-infarct area at 42 days post-transplantation. Conclusions These findings strongly suggest that Muse cells and non-Muse cells contribute differently to tissue regeneration and functional recovery. Muse cells may be more responsible for replacement of the lost neurons through their integration into the peri-infarct cortex and spontaneous differentiation into neuronal-marker-positive cells. Non-Muse cells do not remain in the host brain and may exhibit trophic effects rather than cell replacement. PMID:25747577

  14. MUSE field splitter unit: fan-shaped separator for 24 integral field units

    NASA Astrophysics Data System (ADS)

    Laurent, Florence; Renault, Edgard; Anwand, Heiko; Boudon, Didier; Caillier, Patrick; Kosmalski, Johan; Loupias, Magali; Nicklas, Harald; Seifert, Walter; Salaun, Yves; Xu, Wenli

    2014-07-01

    MUSE (Multi Unit Spectroscopic Explorer) is a second-generation Very Large Telescope (VLT) integral field spectrograph developed for the European Southern Observatory (ESO). It combines a 1' x 1' field of view sampled at 0.2 arcsec for its Wide Field Mode (WFM) and a 7.5"x7.5" field of view for its Narrow Field Mode (NFM). Both modes will operate with the improved spatial resolution provided by GALACSI (Ground Atmospheric Layer Adaptive Optics for Spectroscopic Imaging), which will use the VLT deformable secondary mirror and 4 Laser Guide Stars (LGS) foreseen in 2015. MUSE operates in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE on the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating, and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transferred as a monolithic unit onto the VLT telescope, where first light was achieved. This paper describes the MUSE main optical component: the Field Splitter Unit. It splits the VLT image into 24 subfields and provides the first separation of the beam for the 24 Integral Field Units. This talk depicts its manufacturing at Winlight Optics and its alignment into the MUSE instrument. The success of the MUSE […]

  15. A MUSE View of the HDFS: The Lyα Luminosity Function out to z~6

    NASA Astrophysics Data System (ADS)

    Drake, Alyssa B.; Guiderdoni, Bruno; Blaizot, Jérémy; Richard, Johan; Bacon, Roland; Garel, Thibault; Hashimoto, Takuya

    We present preliminary results from MUSE on the Lyα luminosity function in the Hubble Deep Field South (HDFS). Using a large homogeneous sample of LAEs selected through blind spectroscopy, we utilise the unprecedented detection power of MUSE to study the progenitors of L* galaxies back to when the Universe was just ~2 Gyr old. We present these results in the context of the current literature, and highlight the importance of the forthcoming Hubble Ultra Deep Field (HUDF) study with MUSE, which will increase the size of our sample by a factor of ~ 10.

  16. Ultra Slow Muon Project at J-PARC, MUSE

    SciTech Connect

    Miyake, Y.; Nakahara, K.; Shimomura, K.; Strasser, P.; Kawamura, N.; Koda, A.; Makimura, S.; Fujimori, H.; Nishiyama, K.; Matsuda, Y.; Bakule, P.; Adachi, T.; Ogitsu, T.

    2009-03-17

    The muon science facility (MUSE), along with the neutron, hadron, and neutrino facilities, is one of the experimental areas of the J-PARC project, which was approved for construction at the Tokai JAEA site. The MUSE facility is located in the Materials and Life Science Facility (MLF), a building integrated to include both neutron and muon science programs. Construction of the MLF building started in the beginning of 2004, and first muon beam is expected in the autumn of 2008. As a next step, we are planning to install a Super Omega muon channel with a large acceptance of 400 msr to extract the world's strongest pulsed surface muon beam. Its goal is to extract 4×10^8 surface muons/s for the generation of intense ultra-slow muons, utilizing laser resonant ionization of muonium (Mu) with an intense pulsed VUV laser system. A maximum of 1×10^6 ultra-slow muons/s is expected, which will allow the extension of μSR into the fields of thin film and surface science.

  17. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model are independent of the grid, and the temporal data are independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  18. The Story of Supernova “Refsdal” Told by Muse

    NASA Astrophysics Data System (ADS)

    Grillo, C.; Karman, W.; Suyu, S. H.; Rosati, P.; Balestra, I.; Mercurio, A.; Lombardi, M.; Treu, T.; Caminha, G. B.; Halkola, A.; Rodney, S. A.; Gavazzi, R.; Caputi, K. I.

    2016-05-01

    We present Multi Unit Spectroscopic Explorer (MUSE) observations in the core of the Hubble Frontier Fields (HFF) galaxy cluster MACS J1149.5+2223, where the first magnified and spatially resolved multiple images of supernova (SN) “Refsdal” at redshift 1.489 were detected. Thanks to a Director's Discretionary Time program with the Very Large Telescope and the extraordinary efficiency of MUSE, we measure 117 secure redshifts with just 4.8 hr of total integration time on a single 1 arcmin2 target pointing. We spectroscopically confirm 68 galaxy cluster members, with redshift values ranging from 0.5272 to 0.5660, and 18 multiple images belonging to seven background, lensed sources distributed in redshifts between 1.240 and 3.703. Starting from the combination of our catalog with those obtained from extensive spectroscopic and photometric campaigns using the Hubble Space Telescope (HST), we select a sample of 300 (164 spectroscopic and 136 photometric) cluster members, within approximately 500 kpc from the brightest cluster galaxy, and a set of 88 reliable multiple images associated with 10 different background source galaxies and 18 distinct knots in the spiral galaxy hosting SN “Refsdal.” We exploit this valuable information to build six detailed strong-lensing models, the best of which reproduces the observed positions of the multiple images with an rms offset of only 0.″26. We use these models to quantify the statistical and systematic errors on the predicted values of magnification and time delay of the next emerging image of SN “Refsdal.” We find that its peak luminosity should occur between 2016 March and June and should be approximately 20% fainter than the dimmest (S4) of the previously detected images but above the detection limit of the planned HST/WFC3 follow-up. We present our two-dimensional reconstruction of the cluster mass density distribution and of the SN “Refsdal” host galaxy surface brightness distribution. We outline the road map

  19. Random musings on stochastics (Lorenz Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2014-12-01

    […] moments, autocorrelation, power spectrum), in model identification and parameter estimation from data; and (c) to provide interpretations of scaling laws based on maximization of entropy or entropy production, or else natural amplification of uncertainty, which are alternative to more common ones, like self-organization.

  20. The New Hyperspectral Sensor Desis on the Multi-Payload Platform Muses Installed on the Iss

    NASA Astrophysics Data System (ADS)

    Müller, R.; Avbelj, J.; Carmona, E.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Ickes, J.; Kerr, G.; Knodt, U.; Krutz, D.; Krawczyk, H.; Makarau, A.; Miller, R.; Perkins, R.; Walter, I.

    2016-06-01

    The new hyperspectral instrument DLR Earth Sensing Imaging Spectrometer (DESIS) will be developed and integrated into the Multi-User-System for Earth Sensing (MUSES) platform installed on the International Space Station (ISS). The DESIS instrument will be launched to the ISS in mid-2017 and robotically installed in one of the four slots of the MUSES platform. After a four-month commissioning phase, the operational phase will last at least until 2020. The MUSES / DESIS system will be commanded and operated by the publicly traded company TBE (Teledyne Brown Engineering), which initiated the whole program. TBE provides the MUSES platform, and the German Aerospace Center (DLR) develops the DESIS instrument and establishes a Ground Segment for processing, archiving, delivery and calibration of the image data, mainly used for scientific and humanitarian applications. Well-calibrated and harmonized products will be generated together with the Ground Segment established at Teledyne. The article describes the Space Segment, consisting of the MUSES platform and the DESIS instrument, as well as the activities at the two (synchronized) Ground Segments, covering the processing methods, product generation, data calibration and product validation. Finally, comments on the data policy are given.

  1. Constraint-Muse: A Soft-Constraint Based System for Music Therapy

    NASA Astrophysics Data System (ADS)

    Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin

    Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy-to-use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson's disease and for children with high-functioning autism or Asperger's syndrome.
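    A toy sketch of the soft-constraint idea (the preference functions, scale, and note-selection rule here are invented, not Constraint Muse's actual model): each candidate next note is scored by combining a harmony preference with a melodic-motion preference, where multiplication over [0, 1] stands in for one instance of a monoidal combination operator.

```python
def harmony_pref(note, chord_pcs):
    """Soft constraint: prefer chord tones (values in [0, 1])."""
    return 1.0 if note % 12 in chord_pcs else 0.3

def motion_pref(note, prev):
    """Soft constraint: prefer small melodic steps."""
    return 1.0 / (1.0 + abs(note - prev))

def next_note(prev, chord_pcs, candidates):
    """Pick the candidate with the best combined preference.

    Multiplication over [0, 1] is one monoidal combination operator;
    others (min, bounded sum) could be dropped in here instead.
    """
    return max((n for n in candidates if n != prev),
               key=lambda n: harmony_pref(n, chord_pcs) * motion_pref(n, prev))

# From C4 (MIDI 60) over a C-major chord {C, E, G}:
note = next_note(60, {0, 4, 7}, range(55, 68))   # -> 64 (E4: a nearby chord tone)
```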

  2. Tracking the Lyman alpha emission line in the CircumGalactic Medium in MUSE data

    NASA Astrophysics Data System (ADS)

    Bacher, R.; Maho, P.; Chatelain, F.; Michel, O.

    2016-09-01

    Since 2014, the Multi Unit Spectroscopic Explorer (MUSE) instrument has generated hyperspectral data cubes (300 by 300 pixels by 3600 wavelengths in the visible range) of the deep Universe. One of the main purposes of the wide-field spectrograph MUSE is to analyse galaxies and their surroundings through the study of their spectra. Galaxy spectra are composed of a continuum emission and of sparse emission (or absorption) peaks. By contrast, the surrounding gas exhibits only peaks, such as the Lyman-alpha emission line. Several methods are developed here to detect the gas signature as far as possible into the galaxy surroundings. These methods combine clustering approaches with several pre-processing steps.
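    One common pre-processing-plus-detection step of the kind alluded to above can be sketched as follows. This is a generic illustration, not the authors' pipeline; the kernel width, threshold, and synthetic spectrum are assumptions. The continuum is estimated with a running median, and peaks standing well above the residual noise are flagged.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks

def detect_emission_lines(spectrum, kernel=51, nsigma=5.0):
    """Return indices of emission peaks rising nsigma above the noise."""
    continuum = median_filter(spectrum, size=kernel, mode='nearest')
    residual = spectrum - continuum
    # robust noise estimate via the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    peaks, _ = find_peaks(residual, height=nsigma * sigma)
    return peaks

# Synthetic spectrum: sloped continuum, Gaussian noise, one narrow line.
rng = np.random.default_rng(0)
pix = np.arange(3600, dtype=float)
spec = 10.0 + 0.001 * pix + rng.normal(0.0, 0.1, pix.size)
spec += 5.0 * np.exp(-0.5 * ((pix - 1200.0) / 3.0) ** 2)
lines = detect_emission_lines(spec)
```

    The running median ignores narrow lines but follows the continuum, so the residual isolates the peaks that a subsequent clustering or tracking stage would then follow outwards from the galaxy.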

  3. Urania, the Muse of Astronomy: She Who Draws Our Eyes

    NASA Astrophysics Data System (ADS)

    Rossi, S.

    2016-01-01

    In exploring the inspiration of astronomical phenomena upon human culture we are invited, perhaps beckoned, to reflect on Urania, the Greek Muse of Astronomy. Heavenly One or Heavenly Bright, Urania teaches mortals the shape and wonder of the cosmos, “men who have been instructed by her she raises aloft to heaven for it is a fact that imagination and power of thought lift men's souls to heavenly heights” (Siculus 1935). Yet in cities, the heavenly lights are dimmed, flooded by another source of light which is that of culture, and that is the domain of Aphrodite. So it is to her we must turn to understand what draws our eyes up to the heavens above the dazzling city lights. And, as Aphrodite Urania, her cultural and aesthetic domain is connected to the order of the cosmos itself, “the triple Moirai are ruled by thy decree, and all productions yield alike to thee: whatever the heavens, encircling all, contain, earth fruit-producing, and the stormy main, thy sway confesses, and obeys thy word...” (Athanassakis 1988). My presentation is a mythopoetic cultural excavation of the gods and ideas in our passion for astronomy; how, in our fascination with the cosmos, we see Urania and Aphrodite, these goddesses who inspire us city dwellers, planetarium devotees, and silent-field stargazers to look upwards.

  4. The MUSES Satellite Team and Multidisciplinary System Engineering

    NASA Technical Reports Server (NTRS)

    Chen, John C.; Paiz, Alfred R.; Young, Donald L.

    1997-01-01

    In a unique partnership between three minority-serving institutions and NASA's Jet Propulsion Laboratory, a new course sequence, including a multidisciplinary capstone design experience, is to be developed and implemented at each of the schools with the ambitious goal of designing, constructing and launching a low-orbit Earth-resources satellite. The three universities involved are North Carolina A&T State University (NCA&T), University of Texas, El Paso (UTEP), and California State University, Los Angeles (CSULA). The schools form a consortium collectively known as MUSES - Minority Universities System Engineering and Satellite. Four aspects of this project make it unique: (1) Including all engineering disciplines in the capstone design course, (2) designing, building and launching an Earth-resources satellite, (3) sustaining the partnership between the three schools to achieve this goal, and (4) implementing systems engineering pedagogy at each of the three schools. This paper will describe the partnership and its goals, the first design of the satellite, the courses developed at NCA&T, and the implementation plan for the course sequence.

  5. Possible Signatures of a Cold-flow Disk from MUSE Using a z ˜ 1 Galaxy-Quasar Pair toward SDSS J1422-0001

    NASA Astrophysics Data System (ADS)

    Bouché, N.; Finley, H.; Schroetter, I.; Murphy, M. T.; Richter, P.; Bacon, R.; Contini, T.; Richard, J.; Wendt, M.; Kamann, S.; Epinat, B.; Cantalupo, S.; Straka, L. A.; Schaye, J.; Martin, C. L.; Péroux, C.; Wisotzki, L.; Soto, K.; Lilly, S.; Carollo, C. M.; Brinchmann, J.; Kollatschny, W.

    2016-04-01

    We use a background quasar to detect the presence of circumgalactic gas around a z = 0.91 low-mass star-forming galaxy. Data from the new Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope show that the galaxy has a dust-corrected star formation rate (SFR) of 4.7 ± 2.0 M⊙ yr^-1, with no companion down to 0.22 M⊙ yr^-1 (5σ) within 240 h^-1 kpc (30″). Using a high-resolution spectrum of the background quasar, which is fortuitously aligned with the galaxy major axis (with an azimuth angle α of only 15°), we find, in the gas kinematics traced by low-ionization lines, distinct signatures consistent with those expected for a "cold-flow disk" extending at least 12 kpc (3× R_1/2). We estimate the mass accretion rate Ṁ_in to be at least two to three times larger than the SFR, using the geometric constraints from the IFU data and the H I column density of log N_HI/cm^-2 ≃ 20.4 obtained from a Hubble Space Telescope/COS near-UV spectrum. From a detailed analysis of the low-ionization lines (e.g., Zn II, Cr II, Ti II, Mn II, Si II), the accreting material appears to be enriched to about 0.4 Z⊙ (albeit with large uncertainties: log Z/Z⊙ = -0.4 ± 0.4), which is comparable to the galaxy metallicity (12 + log O/H = 8.7 ± 0.2), implying a large recycling fraction from past outflows. Blueshifted Mg II and Fe II absorptions in the galaxy spectrum from the MUSE data reveal the presence of an outflow. The Mg II and Fe II absorption line ratios indicate emission infilling due to scattering processes, but the MUSE data do not show any signs of fluorescent Fe II* emission. Based on observations made at the ESO telescopes under programs 080.A-0364 (SINFONI) and 079.A-0600 (UVES), and as part of MUSE commissioning (ESO program 060.A-9100). Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities […]

  6. Musings of Someone in the Disability Support Services Field for Almost 40 Years

    ERIC Educational Resources Information Center

    Goodin, Sam

    2014-01-01

    As the title states, this article is a collection of musings, with only modest attempts at establishing an order for them or connections between them. It is not quite "free association," but it is close. This structure, or perhaps lack of it, reflects the variety of things we do in our work. Many of the things we do have little in common…

  7. From Sun King to Royal Twilight: Painting in Eighteenth Century France from the Musee Picardie, Amiens.

    ERIC Educational Resources Information Center

    Johnson, Mark M.

    2000-01-01

    Focuses on the traveling exhibition from the Musee de Picardie in Amiens, France, called "From the Sun King to the Royal Twilight: Painting in Eighteenth Century France," that provides an overview of French paintings from the reign of Louis XIV to the fall of the monarchy. (CMK)

  8. The MUSES-C, mission description and its status

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Jun'ichiro; Uesugi, Kuninori T.; Fujiwara, Akira; Saitoh, Hirobumi

    1999-11-01

    The MUSES-C mission is the world's first attempt to collect a sample from a near-Earth asteroid, Nereus (4660), and return it to Earth. The mission is managed by ISAS (The Institute of Space and Astronautical Science, Ministry of Education); it started in 1996 and is scheduled for launch in January 2002. Although built as a kind of technology demonstration, the mission aims not only at in-situ observation but also at touch-down sampling of surface fragments. The collected sample is to be returned to the Earth in January 2006, making the mission a four-year journey. Its major purpose originally consists of the following four subjects: 1) ion-thruster propulsion as the primary means of interplanetary flight, 2) autonomous guidance, navigation, and control during the rendezvous and touch-down phases, 3) the sample collection mechanism, and 4) the hyperbolic reentry capsule with the asteroid sample contained inside it. The primary objective has now been extended to carry a small joint rover with NASA/JPL, which is to be placed on the surface and to look into the crater created by the sampling shot of the projectile. The rover, designated the Small Science Vehicle (SSV), weighs about 1 kg and carries three kinds of in-situ instruments: 1) a visible camera, 2) a near-infrared spectrometer, and potentially 3) an Alpha-Proton X-ray Spectrometer (APXS) similar to that delivered on the Mars Pathfinder. During fiscal 1998 the spacecraft undergoes PM tests, and FM fabrication starts the following year, 1999. The paper presents the latest mission description around the asteroid and shows the current status of the spacecraft as well as its instruments. The mission will be a good example of international collaboration in small interplanetary exploration.

  9. BOOK REVIEW: Galileo's Muse: Renaissance Mathematics and the Arts

    NASA Astrophysics Data System (ADS)

    Peterson, Mark; Sterken, Christiaan

    2013-12-01

    Galileo's Muse is a book that focuses on the life and thought of Galileo Galilei. The Prologue consists of a first chapter on Galileo the humanist, dealing with Galileo's influence on his student Vincenzo Viviani (who wrote a biography of Galileo). This introductory chapter is followed by a very nice chapter that describes the classical legacy: Pythagoreanism and Platonism, Euclid and Archimedes, and Plutarch and Ptolemy. The author explicates the distinction between Greek and Roman contributions to the classical legacy, an explanation that is crucial for understanding Galileo and Renaissance mathematics. The following eleven chapters of this book, arranged in a kind of quadrivium (viz., Poetry, Painting, Music, and Architecture), present arguments to support the author's thesis that the driver for Galileo's genius was not Renaissance science, as is generally accepted, but the Renaissance arts brought forth by poets, painters, musicians, and architects. These four sets of chapters describe the underlying mathematics in poetry, the visual arts, music, and architecture. Likewise, Peterson stresses the impact of the philosophical overtones present in geometry but absent in algebra and its equations. Basically, the author writes about Galileo while trying to ignore the Copernican controversy, which he sees as distracting attention from Galileo's scientific legacy. As such, his story deviates from the standard myth on Galileo. But the book also looks at other eminent characters, such as Galileo's father Vincenzo (who cultivated music and music theory), the painter Piero della Francesca (who featured elaborate perspectives in his work), Dante Alighieri (author of the Divina Commedia), Filippo Brunelleschi (who engineered the dome of the Basilica di Santa Maria del Fiore in Florence), Johannes Kepler (a strong supporter of Galileo's Copernicanism), etc. This book is very well documented: it offers, for each chapter, a wide selection of excellent biographical notes, and includes a fine

  10. The outer filament of Centaurus A as seen by MUSE

    NASA Astrophysics Data System (ADS)

    Santoro, F.; Oonk, J. B. R.; Morganti, R.; Oosterloo, T. A.; Tremblay, G.

    2015-03-01

    Context. Radio-loud active galactic nuclei (AGN) are known to inject kinetic energy into the surrounding interstellar medium (ISM) of their host galaxy via plasma jets. Understanding the impact that these flows can have on the host galaxy helps to characterize a crucial phase in their evolution. Because of its proximity, Centaurus A is an excellent laboratory in which the physics of the coupling of jet mechanical energy to the surrounding medium may be investigated. About 15 kpc northeast of this galaxy, a particularly complex region is found: the so-called outer filament, where jet-cloud interactions have been proposed to occur. Aims: We investigate signatures of a jet-ISM interaction using optical integral-field observations of this region, expanding on previous results that were obtained on a more limited area. Methods: Using the Multi Unit Spectroscopic Explorer (MUSE) on the VLT during the science verification period, we observed two regions that together cover a significant fraction of the brighter emitting gas across the outer filament. Emission from a number of lines, among which Hβ λ4861 Å, [O iii] λλ4959,5007 Å, Hα λ6563 Å, and [N ii] λλ6548,6584 Å, is detected in both regions. Results: The ionized gas shows a complex morphology with compact blobs, arc-like structures, and diffuse emission. Based on the kinematics, we identified three main components of ionized gas. Interestingly, their morphology is very different. The more collimated component is oriented along the direction of the radio jet. The other two components exhibit a diffuse morphology together with arc-like structures, which are also oriented along the radio jet direction. Furthermore, the ionization level of the gas, as traced by the [O iii] λ5007/Hβ ratio, is found to decrease from the more collimated component to the more diffuse components. Conclusions: The morphology and velocities of the more collimated component confirm the results of our previous study, which was

  11. The MUSE 3D view of the Hubble Deep Field South

    NASA Astrophysics Data System (ADS)

    Bacon, R.; Brinchmann, J.; Richard, J.; Contini, T.; Drake, A.; Franx, M.; Tacchella, S.; Vernet, J.; Wisotzki, L.; Blaizot, J.; Bouché, N.; Bouwens, R.; Cantalupo, S.; Carollo, C. M.; Carton, D.; Caruana, J.; Clément, B.; Dreizler, S.; Epinat, B.; Guiderdoni, B.; Herenz, C.; Husser, T.-O.; Kamann, S.; Kerutt, J.; Kollatschny, W.; Krajnovic, D.; Lilly, S.; Martinsson, T.; Michel-Dansac, L.; Patricio, V.; Schaye, J.; Shirazi, M.; Soto, K.; Soucail, G.; Steinmetz, M.; Urrutia, T.; Weilbacher, P.; de Zeeuw, T.

    2015-03-01

    We observed Hubble Deep Field South with the new panoramic integral-field spectrograph MUSE that we built and have just commissioned at the VLT. The data cube resulting from 27 h of integration covers one arcmin2 field of view at an unprecedented depth with a 1σ emission-line surface brightness limit of 1 × 10-19 erg s-1 cm-2 arcsec-2, and contains ~90 000 spectra. We present the combined and calibrated data cube, and we performed a first-pass analysis of the sources detected in the Hubble Deep Field South imaging. We measured the redshifts of 189 sources up to a magnitude I814 = 29.5, increasing the number of known spectroscopic redshifts in this field by more than an order of magnitude. We also discovered 26 Lyα emitting galaxies that are not detected in the HST WFPC2 deep broad-band images. The intermediate spectral resolution of 2.3 Å allows us to separate resolved asymmetric Lyα emitters, [O ii]3727 emitters, and C iii]1908 emitters, and the broad instantaneous wavelength range of 4500 Å helps to identify single emission lines, such as [O iii]5007, Hβ, and Hα, over a very wide redshift range. We also show how the three-dimensional information of MUSE helps to resolve sources that are confused at ground-based image quality. Overall, secure identifications are provided for 83% of the 227 emission line sources detected in the MUSE data cube and for 32% of the 586 sources identified in the HST catalogue. The overall redshift distribution is fairly flat to z = 6.3, with a reduction between z = 1.5 and 2.9, in the well-known redshift desert. The field of view of MUSE also allowed us to detect 17 groups within the field. We checked that the number counts of [O ii]3727 and Lyα emitters are roughly consistent with predictions from the literature. Using two examples, we demonstrate that MUSE is able to provide exquisite spatially resolved spectroscopic information on the intermediate-redshift galaxies present in the field. This unique data set can be used for a

  12. VizieR Online Data Catalog: NGC 3115 MUSE images (Guerou+, 2016)

    NASA Astrophysics Data System (ADS)

    Guerou, A.; Emsellem, E.; Krajnovic, D.; McDermid, R. M.; Contini, T.; Weilbacher, P. M.

    2016-07-01

    NGC 3115 was observed during the night of 8 February 2014 using the MUSE nominal mode (WFM-N), which allows a continuous wavelength coverage from 4750-9300Å with a varying resolution of R=2000-4000. These data were designed as a first test of the mosaicing abilities of MUSE using an extended (bright) target that showed substructures and had ample published imaging and spectroscopic data for comparison. Five exposures of only 10 minutes each were obtained, each exposure overlapping its neighbours over a quarter of the field of view (i.e. 30"x30"), with the central exposure centred on the galaxy nucleus. The data obtained cover ~4Re (Re~35", Arnold et al. 2014ApJ...791...80A) along the NGC 3115 major axis. File images.fits contains multiple extensions to allow users to reproduce figures 3 and 5 of the paper, and hence most of the other figures. (2 data files).
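    The quoted resolving power R fixes the width of a resolution element via the standard relation Δλ = λ/R. A quick sketch (the function name is illustrative) showing that R = 2000-4000 from the blue to the red end of the 4750-9300 Å band corresponds to a roughly constant Δλ of about 2.3-2.4 Å, in line with the resolution usually quoted for MUSE:

```python
def delta_lambda(wavelength_angstrom, resolving_power):
    """Size of one spectral resolution element, Δλ = λ / R."""
    return wavelength_angstrom / resolving_power

# MUSE WFM-N endpoints quoted in the record: R grows with wavelength
blue = delta_lambda(4750, 2000)   # blue end of the band
red = delta_lambda(9300, 4000)    # red end of the band
print(f"blue end: {blue:.2f} A, red end: {red:.2f} A")
# prints "blue end: 2.38 A, red end: 2.33 A"
```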

  13. A kinematically distinct core and minor-axis rotation: the MUSE perspective on M87

    NASA Astrophysics Data System (ADS)

    Emsellem, Eric; Krajnović, Davor; Sarzi, Marc

    2014-11-01

    We present evidence for the presence of a low-amplitude kinematically distinct component in the giant early-type galaxy M87, via data sets obtained with the SAURON and MUSE integral-field spectroscopic units. The MUSE velocity field reveals a strong twist of ˜140° within the central 30 arcsec, connecting such a kinematically distinct core outwards to prolate-like rotation around the large-scale photometric major axis of the galaxy. The existence of these kinematic features within the apparently round central regions of M87 implies a non-axisymmetric and complex shape for this galaxy, which could be further constrained using the presented kinematics. The associated orbital structure should be interpreted together with other tracers of the gravitational potential probed at larger scales (e.g. globular clusters, ultra-compact dwarfs, planetary nebulae): it would offer insight into the assembly history of one of the brightest galaxies in the Virgo cluster. These data also demonstrate the potential of the MUSE spectrograph to uncover low-amplitude spectral signatures.

  14. MuSE: a novel experiment for CMB polarization measurement using highly multimoded bolometers

    NASA Astrophysics Data System (ADS)

    Kusaka, Akito; Fixsen, Dale J.; Kogut, Alan J.; Meyer, Stephan S.; Staggs, Suzanne T.; Stevenson, Thomas R.

    2012-09-01

    One of the most exciting targets for cosmic microwave background (CMB) polarization measurements is the faint signal from the primordial gravity waves predicted by inflationary models. Existing experiments and those under construction would constrain or detect such a signal at around r = 0.01, where r is the tensor to scalar ratio. In order to further improve the measurement, experiments of the next generation have to combine the following three elements: 1) excellent sensitivity, 2) multi-frequency measurement for the removal of galactic foregrounds, and 3) well-controlled systematics. We propose the Multimoded Survey Experiment (MuSE), which uses highly multimoded polarization-sensitive bolometers developed at NASA Goddard Space Flight Center (GSFC). MuSE, consisting of 69 pixels, will achieve a sensitivity equivalent to several thousand single-moded bolometers. Each pixel can be configured to be sensitive to a different frequency band, allowing very wide frequency coverage by a single focal plane. This enables us to clean galactic synchrotron and dust components with our data alone. MuSE achieves an effective array sensitivity to the CMB of 8 μK√s even after accounting for the sensitivity degradation from foreground removal and reaches a 2-σ error on r of 0.009 with two years of operation.

  15. Functional melanocytes are readily reprogrammable from multilineage-differentiating stress-enduring (muse) cells, distinct stem cells in human fibroblasts.

    PubMed

    Tsuchiyama, Kenichiro; Wakao, Shohei; Kuroda, Yasumasa; Ogura, Fumitaka; Nojima, Makoto; Sawaya, Natsue; Yamasaki, Kenshi; Aiba, Setsuya; Dezawa, Mari

    2013-10-01

    The induction of melanocytes from easily accessible stem cells has attracted attention for the treatment of melanocyte dysfunctions. We found that multilineage-differentiating stress-enduring (Muse) cells, a distinct stem cell type among human dermal fibroblasts, can be readily reprogrammed into functional melanocytes, whereas the remainder of the fibroblasts do not contribute to melanocyte differentiation. Muse cells can be isolated as cells positive for stage-specific embryonic antigen-3, a marker for undifferentiated human embryonic stem cells, and differentiate into cells representative of all three germ layers from a single cell, while also being nontumorigenic. The use of certain combinations of factors induces Muse cells to express melanocyte markers such as tyrosinase and microphthalmia-associated transcription factor and to show positivity for the 3,4-dihydroxy-L-phenylalanine reaction. When Muse cell-derived melanocytes were incorporated into three-dimensional (3D) cultured skin models, they localized themselves in the basal layer of the epidermis and produced melanin in the same manner as authentic melanocytes. They also maintained their melanin production even after the 3D cultured skin was transplanted to immunodeficient mice. This technique may be applicable to the efficient production of melanocytes from accessible human fibroblasts by using Muse cells, thereby contributing to autologous transplantation for melanocyte dysfunctions, such as vitiligo.

  16. Mapping the inner regions of the polar disk galaxy NGC 4650A with MUSE

    NASA Astrophysics Data System (ADS)

    Iodice, E.; Coccato, L.; Combes, F.; de Zeeuw, T.; Arnaboldi, M.; Weilbacher, P. M.; Bacon, R.; Kuntschner, H.; Spavone, M.

    2015-11-01

    The polar disk galaxy NGC 4650A was observed during the commissioning of the Multi Unit Spectroscopic Explorer (MUSE) at the ESO Very Large Telescope to obtain the first 2D map of the velocity and velocity dispersion for both stars and gas. The new MUSE data allow the analysis of the structure and kinematics towards the central regions of NGC 4650A, where the two components co-exist. These regions were unexplored by the previous long-slit literature data available for this galaxy. The stellar velocity field shows that there are two main directions of rotation, one along the host galaxy major axis (PA = 67 deg) and the other along the polar disk (PA = 160 deg). The host galaxy has, on average, the typical pattern of a rotating disk, with receding velocities on the SW side and approaching velocities on the NE side, and a velocity dispersion that remains constant at all radii (σstar ~ 50-60 km s-1). The polar disk shows a large amount of differential rotation from the centre up to the outer regions, reaching V ~ 100-120 km s-1 at R ~ 75 arcsec ~ 16 kpc. Inside the host galaxy, a velocity gradient is measured along the photometric minor axis. Close to the centre, for R ≤ 2 arcsec the velocity profile of the gas suggests a decoupled component and the velocity dispersion increases up to ~110 km s-1, while at larger distances it remains almost constant (σgas ~ 30-40 km s-1). The extended view of NGC 4650A given by the MUSE data is a galaxy made of two perpendicular disks that remain distinct and drive the kinematics right into the very centre of this object. In order to match this observed structure for NGC 4650A, we constructed a multicomponent mass model made by the combined projection of two disks. By comparing the observations with the 2D kinematics derived from the model, we found that the modelled mass distribution in these two disks can, on average, account for the complex kinematics revealed by the MUSE data, also in the central regions of the galaxy where the

  17. Modifications made to ModelMuse to add support for the Saturated-Unsaturated Transport model (SUTRA)

    USGS Publications Warehouse

    Winston, Richard B.

    2014-01-01

    This report (1) describes modifications to ModelMuse, as described in U.S. Geological Survey (USGS) Techniques and Methods (TM) 6–A29 (Winston, 2009), to add support for the Saturated-Unsaturated Transport model (SUTRA) (Voss and Provost, 2002; version of September 22, 2010) and (2) supplements USGS TM 6–A29. Modifications include changes to the main ModelMuse window where the model is designed, addition of methods for generating a finite-element mesh suitable for SUTRA, defining how some functions should apply when using a finite-element mesh rather than a finite-difference grid (as originally programmed in ModelMuse), and applying spatial interpolation to angles. In addition, the report describes ways of handling objects on the front view of the model and displaying data. A tabulation contains a summary of the new or modified dialog boxes.

  18. Development of a Muon Rotating Target for J-PARC/MUSE

    NASA Astrophysics Data System (ADS)

    Makimura, Shunsuke; Kobayashi, Yasuo; Miyake, Yasuhiro; Kawamura, Naritoshi; Strasser, Patrick; Koda, Akihiro; Shimomura, Koichiro; Fujimori, Hiroshi; Nishiyama, Kusuo; Kato, Mineo; Kojima, Kenji; Higemoto, Wataru; Ito, Takashi; Shimizu, Ryou; Kadono, Ryosuke

    At the J-PARC muon science facility (J-PARC/MUSE), a graphite target with a thickness of 20 mm has been used in vacuum to obtain an intense pulsed muon beam from the RCS 3-GeV proton beam [1], [2]. In the current design, the target frame is constructed using copper with a stainless steel tube embedded for water cooling. The energy deposited by the proton beam at 1 MW is evaluated to be 3.3 kW on the graphite target and 600 W on the copper frame by a Monte-Carlo simulation code, PHITS [3]. Graphite materials are known to lose their crystal structure and can shrink under intense proton beam irradiation. Consequently, the lifetime of the muon target is essentially determined by the radiation damage in graphite, and is evaluated to be half a year [4]. Hence, we are planning to distribute the radiation damage by rotating a graphite wheel. Although the lifetime of graphite in this case will be more than 10 years, the design of the bearing must be carefully considered. Because the bearing in J-PARC/MUSE is utilized in vacuum, under high radiation, and at high temperature, an inorganic, solid lubricant must be applied to the bearing. Simultaneously, the temperature of the bearing must also be decreased to extend the lifetime. In 2009, a mock-up of the Muon Rotating Target, which could heat up and rotate a graphite wheel, was fabricated. Several tests were then started to select the lubricant and to determine the structure of the Muon Rotating Target, the control system, and so on. In this report, the present status of the Muon Rotating Target for J-PARC/MUSE, especially the development of a rotation system in vacuum, is described.
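    The lifetime gain from rotating the target scales roughly as the ratio of the area swept by the beam on the rotating wheel to the fixed beam-spot area. A rough sketch of that scaling; the track radius and spot diameter below are purely assumed for illustration (the report does not give them here), so the resulting factor is a placeholder, not the reported >20x improvement:

```python
import math

def lifetime_gain(track_radius_mm, spot_diameter_mm):
    """Approximate lifetime gain from rotation: ratio of the swept
    annulus area to the fixed beam-spot area. Both inputs are
    hypothetical geometry, not values from the report."""
    spot_area = math.pi * (spot_diameter_mm / 2) ** 2
    annulus_area = math.pi * (
        (track_radius_mm + spot_diameter_mm / 2) ** 2
        - (track_radius_mm - spot_diameter_mm / 2) ** 2
    )
    return annulus_area / spot_area

# Illustrative geometry only: 150 mm beam track radius, 30 mm spot
gain = lifetime_gain(track_radius_mm=150.0, spot_diameter_mm=30.0)
print(f"damage spread over ~{gain:.0f}x the area")  # ~40x
```

    The simplification here is that damage accumulates uniformly along the track; in practice the gain also depends on the beam profile and on how damage anneals with temperature.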

  19. Mineralogy and Major Element Abundance of the Dust Particles Recovered from Muses-C Regio on the Asteroid Itokawa

    NASA Technical Reports Server (NTRS)

    Nakamura, T.; Noguchi, T.; Tanaka, M.; Zolensky, M. E.; Kimura, M.; Nakato, A.; Ogami, T.; Ishida, H.; Tsuchiyama, A.; Yada, T.; Shirai, K.; Okazaki, R.; Fujimura, A.; Ishibashi, Y.; Abe, M.; Okada, T.; Ueno, M.; Mukai, T.

    2011-01-01

    Remote sensing by the spacecraft Hayabusa suggested that the outermost surface particles of the Muses-C regio of the asteroid Itokawa consist of centimeter- and sub-centimeter-size small pebbles. However, the particles we found in sample catcher A stored in the Hayabusa capsule, where Muses-C particles were captured during the first touchdown, are much smaller, i.e., most are smaller than 100 microns in size. This suggests that only small fractions of the Muses-C particles were stirred up due to the impact of the sampling horn onto the surface, or due to jets from chemical thrusters during the lift-off of the spacecraft from the surface. X-ray fluorescence and near-infrared measurements by the Hayabusa spacecraft suggested that Itokawa surface materials have mineral and major element compositions roughly similar to LL chondrites. The particles of the Muses-C region are expected to have experienced some effects of space weathering. Both of these prospects can be tested by the direct mineralogical analyses of the returned Itokawa particles in our study and a companion study. This comparison is the most important aspect of the Hayabusa mission, because it finally links chemical analyses of meteorites fallen on the Earth to spectroscopic measurements of the asteroids.

  20. Minority Universities Systems Engineering (MUSE) Program at the University of Texas at El Paso

    NASA Technical Reports Server (NTRS)

    Robbins, Mary Clare; Usevitch, Bryan; Starks, Scott A.

    1997-01-01

    In 1995, The University of Texas at El Paso (UTEP) responded to the suggestion of NASA Jet Propulsion Laboratory (NASA JPL) to form a consortium comprised of California State University at Los Angeles (CSULA), North Carolina Agricultural and Technical University (NCAT), and UTEP from which developed the Minority Universities Systems Engineering (MUSE) Program. The mission of this consortium is to develop a unique position for minority universities in providing the nation's future system architects and engineers as well as enhance JPL's system design capability. The goals of this collaboration include the development of a system engineering curriculum which includes hands-on project engineering and design experiences. UTEP is in a unique position to take full advantage of this program since UTEP has been named a Model Institution for Excellence (MIE) by the National Science Foundation. The purpose of MIE is to produce leaders in Science, Math, and Engineering. Furthermore, UTEP has also been selected as the site for two new centers including the Pan American Center for Earth and Environmental Sciences (PACES) directed by Dr. Scott Starks and the FAST Center for Structural Integrity of Aerospace Systems directed by Dr. Roberto Osegueda. The UTEP MUSE Program operates under the auspices of the PACES Center.

  1. Development of thermal protection system of the MUSES-C/DASH reentry capsule

    NASA Astrophysics Data System (ADS)

    Yamada, Tetsuya; Inatani, Yoshifumi; Honda, Masahisa; Hirai, Ken'ich

    2002-07-01

    In the final phase of the MUSES-C mission, a small capsule carrying the asteroid sample performs a reentry flight directly from the interplanetary transfer orbit at a velocity over 12 km/s. The severe heat flux, the complicated functional requirements, and the small weight budget impose several engineering challenges on the design of the capsule's thermal protection system. The heat shield is required to function not only as an ablator but also as a structural component. The cloth-layered carbon-phenolic ablator, which has a higher allowable stress, was developed with a newly devised fabric method to avoid delamination due to the high aerodynamic heating. An ablation analysis code, which takes into account the effect of pyrolysis gas on the surface recession rate, has been developed and verified in arc-heating tests over a broad range of enthalpy levels. The capsule is designed to vent during the reentry flight, reaching about atmospheric pressure by the time of parachute deployment, through a seal of porous flow-restricting material. The design of the thermal protection system, the hardware specifications, and the ground-based test programs of both the MUSES-C and DASH capsules are summarized and discussed in this paper.

  2. The MAGNUM survey: positive feedback in the nuclear region of NGC 5643 suggested by MUSE

    NASA Astrophysics Data System (ADS)

    Cresci, G.; Marconi, A.; Zibetti, S.; Risaliti, G.; Carniani, S.; Mannucci, F.; Gallazzi, A.; Maiolino, R.; Balmaverde, B.; Brusa, M.; Capetti, A.; Cicone, C.; Feruglio, C.; Bland-Hawthorn, J.; Nagao, T.; Oliva, E.; Salvato, M.; Sani, E.; Tozzi, P.; Urrutia, T.; Venturi, G.

    2015-10-01

    We study the ionization and kinematics of the ionized gas in the nuclear region of the barred Seyfert 2 galaxy NGC 5643 using MUSE integral field observations in the framework of the Measuring Active Galactic Nuclei Under MUSE Microscope (MAGNUM) survey. The data were used to identify regions with different ionization conditions and to map the gas density and the dust extinction. We find evidence for a double-sided ionization cone, possibly collimated by a dusty structure surrounding the nucleus. At the center of the ionization cone, outflowing ionized gas is revealed as a blueshifted, asymmetric wing of the [OIII] emission line, up to projected velocity v10 ~ -450 km s-1. The outflow is also seen as a diffuse, low-luminosity radio and X-ray jet, with similar extension. The outflowing material points in the direction of two clumps characterized by prominent line emission with spectra typical of HII regions, located at the edge of the dust lane of the bar. We propose that the star formation in the clumps is due to positive feedback induced by gas compression by the nuclear outflow, providing the first candidate for outflow-induced star formation in a Seyfert-like, radio-quiet AGN. This suggests that positive feedback may be a relevant mechanism in shaping the black hole-host galaxy coevolution. This work is based on observations made at the European Southern Observatory, Paranal, Chile (ESO program 60.A-9339).

  3. Slide-free histology via MUSE: UV surface excitation microscopy for imaging unsectioned tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Levenson, Richard M.; Harmany, Zachary; Demos, Stavros G.; Fereidouni, Farzad

    2016-03-01

    Widely used methods for preparing and viewing tissue specimens at microscopic resolution have not changed for over a century. They provide high-quality images but can involve time-frames of hours or even weeks, depending on logistics. There is increasing interest in slide-free methods for rapid tissue analysis that can both decrease turn-around times and reduce costs. One new approach is MUSE (microscopy with UV surface excitation), which exploits the shallow penetration of UV light to excite fluorescent signals from only the most superficial tissue elements. The method is non-destructive, and eliminates the requirement for conventional histology processing: formalin fixation, paraffin embedding, and thin sectioning. It requires no lasers, confocal, multiphoton, or optical coherence tomography optics. MUSE generates diagnostic-quality histological images that can be rendered to resemble conventional hematoxylin- and eosin-stained samples, with enhanced topographical information, from fresh or fixed but unsectioned tissue, rapidly, with high resolution, simply and inexpensively. We anticipate that there could be widespread adoption in research facilities, hospital-based and stand-alone clinical settings, local or regional pathology labs, as well as low-resource environments.

  4. MUSE integral-field spectroscopy towards the Frontier Fields cluster Abell S1063. I. Data products and redshift identifications

    NASA Astrophysics Data System (ADS)

    Karman, W.; Caputi, K. I.; Grillo, C.; Balestra, I.; Rosati, P.; Vanzella, E.; Coe, D.; Christensen, L.; Koekemoer, A. M.; Krühler, T.; Lombardi, M.; Mercurio, A.; Nonino, M.; van der Wel, A.

    2015-02-01

    We present the first observations of the Frontier Fields cluster Abell S1063 taken with the newly commissioned Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph. Because of the relatively large field of view (1 arcmin2), MUSE is ideal to simultaneously target multiple galaxies in blank and cluster fields over the full optical spectrum. We analysed the four hours of data obtained in the science verification phase on this cluster and measured redshifts for 53 galaxies. We confirm the redshift of five cluster galaxies, and determine the redshift of 29 other cluster members. Behind the cluster, we find 17 galaxies at higher redshift, including three previously unknown Lyman-α emitters at z> 3, and five multiply-lensed galaxies. We report the detection of a new z = 4.113 multiply lensed galaxy, with images that are consistent with lensing model predictions derived for the Frontier Fields. We detect C iii], C iv, and He ii emission in a multiply lensed galaxy at z = 3.116, suggesting the likely presence of an active galactic nucleus. We also created narrow-band images from the MUSE datacube to automatically search for additional line emitters corresponding to high-redshift candidates, but we could not identify any significant detections other than those found by visual inspection. With the new redshifts, it will become possible to obtain an accurate mass reconstruction in the core of Abell S1063 through refined strong lensing modelling. Overall, our results illustrate the breadth of scientific topics that can be addressed with a single MUSE pointing. We conclude that MUSE is a very efficient instrument to observe galaxy clusters, enabling their mass modelling, and to perform a blind search for high-redshift galaxies.

  5. NASA Participation in the ISAS MUSES C Asteroid Sample Return Mission

    NASA Technical Reports Server (NTRS)

    Jones, Ross

    2000-01-01

    NASA and Japan's Institute of Space and Astronautical Science (ISAS) have agreed to cooperate on the first mission to collect samples from the surface of an asteroid and return them to Earth for in-depth study. The MUSES-C mission will be launched on a Japanese MV launch vehicle in January 2002 from Kagoshima Space Center, Japan, toward a touchdown on the asteroid Nereus in September 2003. A NASA-provided miniature rover will conduct in-situ measurements on the surface. The asteroid samples will be returned to Earth by MUSES-C via a parachute-borne recovery capsule in January 2006. NASA and ISAS will cooperate on several aspects of the mission, including mission support and scientific analysis. In addition to providing the rover, NASA will arrange for the testing of the MUSES-C re-entry heat shield at NASA/Ames Research Center, provide supplemental Deep Space Network tracking of the spacecraft, assist in navigating the spacecraft, and provide arrangements for the recovery of the sample capsule at a landing site in the U.S. Scientific co-investigators from the U.S. and Japan will share data from the instruments on the rover and the spacecraft. They will also collaborate on the investigations of the returned samples. With a mass of about 1 kg, the rover experiment will be a direct descendant of the technology used to build the Sojourner rover. The rover will carry three science instruments: a visible imaging camera, a near-infrared point spectrometer, and an alpha X-ray spectrometer. The solar-powered rover will move around the surface of Nereus collecting imagery data which are complementary to the spacecraft investigation. The imaging system will be capable of making surface texture, composition, and morphology measurements at resolutions better than 1 cm. The rover will transmit this data to the spacecraft for relay back to Earth. Due to the microgravity environment on Nereus, the rover has been designed to right itself in case it flips over. Solar panels on all

  6. Exploring the mass assembly of the early-type disc galaxy NGC 3115 with MUSE

    NASA Astrophysics Data System (ADS)

    Guérou, A.; Emsellem, E.; Krajnović, D.; McDermid, R. M.; Contini, T.; Weilbacher, P. M.

    2016-07-01

    We present MUSE integral field spectroscopic data of the S0 galaxy NGC 3115 obtained during the instrument commissioning at the ESO Very Large Telescope (VLT). We analyse the galaxy stellar kinematics and stellar populations and present two-dimensional maps of their associated quantities. We thus illustrate the capacity of MUSE to map extra-galactic sources to large radii in an efficient manner, i.e. ~4 Re, and provide relevant constraints on its mass assembly. We probe the well-known set of substructures of NGC 3115 (nuclear disc, stellar rings, outer kpc-scale stellar disc, and spheroid) and show their individual associated signatures in the MUSE stellar kinematics and stellar populations maps. In particular, we confirm that NGC 3115 has a thin fast-rotating stellar disc embedded in a fast-rotating spheroid, and that these two structures show clear differences in their stellar age and metallicity properties. We emphasise an observed correlation between the radial stellar velocity, V, and the Gauss-Hermite moment, h3, which creates a butterfly shape in the central 15'' of the h3 map. We further detect the previously reported weak spiral- and ring-like structures, and find evidence that these features can be associated with regions of younger mean stellar ages. We provide tentative evidence for the presence of a bar, although the V-h3 correlation can be reproduced by a simple axisymmetric dynamical model. Finally, we present a reconstruction of the two-dimensional star formation history of NGC 3115 and find that most of its current stellar mass was formed at early epochs (>12 Gyr ago), while star formation continued in the outer (kpc-scale) stellar disc until recently. Since z ~2 and within ~4 Re, we suggest that NGC 3115 has been mainly shaped by secular processes. The images of the derived parameters in FITS format and the reduced datacube are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc

  7. Performance of the main instrument structure and the optical relay system of MUSE

    NASA Astrophysics Data System (ADS)

    Nicklas, Harald E.; Anwand, Heiko; Fleischmann, Andreas; Köhler, Christof; Xu, Wenli; Seifert, Walter; Laurent, Florence

    2012-09-01

    The foundation of the MUSE instrument, with its high multiplexing factor of twenty-four spectrographs, is formed by its central main structure, which accommodates all instrumental subsystems and links them with the telescope. Due to the instrument's dimensions and complexity, the requirements on structural performance are demanding. We address how its performance was tested and optimized through reverse engineering. Intimately mated with this central structure is an optical relay system that splits the single telescopic field into twenty-four subfields. Each of these is individually directed in three dimensions across the structure through the folding and imaging setup of an optical relay system that ultimately feeds one of the twenty-four spectrographs. This opto-mechanical relay system was tested when mounted onto the main structure. The results obtained so far are given here.

  8. MUSE, a Lab-On-Chip System for In-Situ Analysis

    NASA Astrophysics Data System (ADS)

    Eckhard, F.; Prak, A.; van den Assem, D.

    Stork Product Engineering and 3T have been working for several years on the development of an assembly technology for microsystem parts. This work has led to MATAS: Modular Assembly Technology for μTAS, a generic methodology which enables the development of very compact and highly integrated microsystems technology (MST) systems. A great advantage of MATAS is that it enables the use of commercially available microsystem parts from different suppliers. The high degree of integration of the MST parts with electronics enables the development of highly autonomous and intelligent systems, which are suited for incorporation in planetary rovers or to support research on the ISS. For further improvement of the technology, and to show its advantages, the development of a system for on-chip capillary electrophoresis (CE) has been selected. CE, which has long been applied in the biosciences and biotechnology, is one of the key technologies for the detection and measurement of enantiomers. The study of enantiomers is an important aspect of the search for pre-biotic life. Due to the limited dimensions of Muse, the system is perfectly suited for use in a planetary rover but could also easily become part of the Astrobiology Facility of the Space Station. For the measurement and detection of these enantiomers and other biomolecules, the system is equipped with a fluorescence detector. In 2002 a new project was started to equip the system with an electrochemical detector enabling conductivity and amperometric analysis. Direct conductivity detection is especially applied in capillary ion electrophoresis, which can be used complementarily with, or separately from, the zone electrophoresis in which the fluorescence detector is applied. The combination of these detection technologies leads to a multi-analysis system (Muse) with a very broad application area.

  9. Gradient in the IMF slope and Sodium abundance of M87 with MUSE

    NASA Astrophysics Data System (ADS)

    Spiniello, C.; Sarzi, M.; Krajnovic, D.

    2016-06-01

    We present evidence for a radial variation of the stellar initial mass function (IMF) in the giant elliptical NGC 4486, based on integral-field MUSE data acquired during the first Science Verification run for this instrument. A steepening of the low-mass end of the IMF towards the centre of this galaxy is necessary to explain the increasing strength of several of the optical IMF-sensitive features introduced by Spiniello et al., which we observe in high-quality spectra extracted in annular apertures. The need for a varying slope of the IMF emerges when the strength of these IMF-sensitive features, together with that of other classical Lick indices mostly sensitive to stellar metallicity and the abundance of α-elements, is fitted with the state-of-the-art stellar population models of Conroy & van Dokkum and Vazdekis et al., which we modified in order to allow variations in IMF slope, metallicity and α-element abundance. More specifically, adopting 13-Gyr-old, single-age stellar population models and a unimodal IMF, we find that the slope of the latter increases from x=1.8 to x=2.6 in the central 25 arcsec of NGC 4486. The varying IMF is accompanied by a metallicity gradient, whereas the abundance of α-elements appears constant throughout the MUSE field of view. We find metallicity and α-element abundance gradients perfectly consistent with the literature. A sodium over-abundance is necessary (according to CvD12 models) at all distances (for all apertures), and a slight gradient of increasing [Na/Fe] ratio towards the centre can be inferred. However, in order to completely break the degeneracies between Na abundance, total metallicity and IMF variation, a more detailed investigation that includes the redder NaI line is required.

  10. Musing over Microbes in Microgravity: Microbial Physiology Flight Experiment

    NASA Technical Reports Server (NTRS)

    Schweickart, Randolph; McGinnis, Michael; Bloomberg, Jacob; Lee, Angie (Technical Monitor)

    2002-01-01

    New York City, the most populated city in the United States, is home to over 8 million humans. This means over 26,000 people per square mile! Imagine, though, what the view would be if you peeked into the world of microscopic organisms. Scientists estimate that a gram of soil may contain up to 1 billion of these microbes, which is as much as the entire human population of China! Scientists also know that the world of microbes is incredibly diverse - possibly 10,000 different species in one gram of soil, more than all the different types of mammals in the world. Microbes fill every niche in the world - from 20 miles below the Earth's surface to 20 miles above, and at temperatures from less than -20 C to hotter than water's boiling point. These organisms are ubiquitous because they can adapt quickly to changing environments, an effective strategy for survival. Although we may not realize it, microbes impact every aspect of our lives. Bacteria and fungi help us break down the food in our bodies, and they help clean the air and water around us. They can also cause the dark, filmy buildup on the shower curtain as well as, more seriously, illness and disease. Since humans and microbes share space on Earth, we can benefit tremendously from a better understanding of the workings and physiology of microbes. This insight can help prevent any harmful effects on humans, on Earth and in space, as well as help us reap the benefits microbes provide. Space flight is a unique environment to study how microbes adapt to changing environmental conditions. To advance ground-based research in the field of microbiology, this STS-107 experiment will investigate how microgravity affects bacteria and fungi. Of particular interest are the growth rates and how the organisms respond to certain antimicrobial substances that will be tested; the same tests will be conducted on Earth at the same times. Comparing the results obtained in flight to those on Earth, we will be able to examine how microgravity induces

  11. Lyman-α emitters in the context of hierarchical galaxy formation: predictions for VLT/MUSE surveys

    NASA Astrophysics Data System (ADS)

    Garel, T.; Guiderdoni, B.; Blaizot, J.

    2016-02-01

    The VLT/Multi Unit Spectrograph Explorer (MUSE) integral-field spectrograph can detect Lyα emitters (LAE) in the redshift range 2.8 ≲ z ≲ 6.7 in a homogeneous way. Ongoing MUSE surveys will notably probe faint Lyα sources that are usually missed by current narrow-band surveys. We provide quantitative predictions for a typical wedding-cake observing strategy with MUSE based on mock catalogues generated with a semi-analytic model of galaxy formation coupled to numerical Lyα radiation transfer models in gas outflows. We expect ≈1500 bright LAEs (F_Lyα ≳ 10^-17 erg s^-1 cm^-2) in a typical shallow field (SF) survey carried out over ≈100 arcmin^2, and ≈2000 sources as faint as 10^-18 erg s^-1 cm^-2 in a medium-deep field (MDF) survey over 10 arcmin^2. In a typical deep field (DF) survey of 1 arcmin^2, we predict that ≈500 extremely faint LAEs (F_Lyα ≳ 4 × 10^-19 erg s^-1 cm^-2) will be found. Our results suggest that faint Lyα sources contribute significantly to the cosmic Lyα luminosity and SFR budget. While the host haloes of bright LAEs at z ≈ 3 and 6 have descendants with median masses of 2 × 10^12 and 5 × 10^13 M⊙, respectively, the faintest sources detectable by MUSE at these redshifts are predicted to reside in haloes which evolve into typical sub-L* and L* galaxy haloes at z = 0. We expect typical DF and MDF surveys to uncover the building blocks of Milky Way-like objects, even probing the bulk of the stellar mass content of LAEs located in their progenitor haloes at z ≈ 3.
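
    The survey flux limits above translate into line luminosities through L = 4π d_L² F. A minimal sketch of this conversion, assuming an illustrative flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3 are placeholder values, not parameters taken from the abstract):

```python
import math

# Illustrative flat-LCDM parameters (assumed, not from the paper)
H0 = 70.0           # Hubble constant, km/s/Mpc
OMEGA_M = 0.3       # matter density parameter
C = 299792.458      # speed of light, km/s
MPC_CM = 3.0857e24  # centimetres per megaparsec

def luminosity_distance_cm(z, steps=10000):
    """d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z') for flat LCDM,
    evaluated with a simple trapezoidal rule; returns centimetres."""
    E = lambda zp: math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
    h = z / steps
    integral = 0.5 * (1 / E(0) + 1 / E(z))
    for i in range(1, steps):
        integral += 1 / E(i * h)
    integral *= h
    d_c_mpc = (C / H0) * integral      # comoving distance, Mpc
    return (1 + z) * d_c_mpc * MPC_CM  # luminosity distance, cm

def line_luminosity(flux_cgs, z):
    """L = 4 pi d_L^2 F in erg/s, for an observed flux in erg s^-1 cm^-2."""
    dl = luminosity_distance_cm(z)
    return 4 * math.pi * dl ** 2 * flux_cgs

# Deep-field flux limit quoted in the abstract, evaluated at z ~ 3:
L_faint = line_luminosity(4e-19, 3.0)
```

    Under these assumed parameters, the deep-field limit at z ≈ 3 corresponds to a Lyα luminosity of order 10^40 erg/s.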

  12. THE MID-INFRARED LUMINOSITY FUNCTION AT z < 0.3 FROM 5MUSES: UNDERSTANDING THE STAR FORMATION/ACTIVE GALACTIC NUCLEUS BALANCE FROM A SPECTROSCOPIC VIEW

    SciTech Connect

    Wu Yanling; Shi Yong; Helou, George; Armus, Lee; Stierwalt, Sabrina; Dale, Daniel A.; Papovich, Casey; Rahman, Nurur; Dasyra, Kalliopi

    2011-06-10

    We present rest-frame 15 and 24 μm luminosity functions (LFs) and the corresponding star-forming LFs at z < 0.3 derived from the 5MUSES sample. Spectroscopic redshifts have been obtained for ≈98% of the objects and the median redshift is ≈0.12. The 5-35 μm Infrared Spectrograph spectra allow us to estimate the luminosities accurately and build the LFs. Using a combination of starburst and quasar templates, we quantify the star formation (SF) and active galactic nucleus (AGN) contributions to the mid-IR spectral energy distribution. We then compute the SF LFs at 15 and 24 μm and compare them with the total 15 and 24 μm LFs. When we remove the contribution of AGNs, the bright end of the LF exhibits a strong decline, consistent with the exponential cutoff of a Schechter function. Integrating the differential LF, we find that the fractional contribution by SF to the energy density is 58% at 15 μm and 78% at 24 μm, while it rises to ≈86% when we extrapolate our mid-IR results to the total IR luminosity density. We confirm that AGNs play a more important role energetically at high luminosities. Finally, we compare our results with work at z ≈ 0.7 and confirm that evolution in both luminosity and density is required to explain the difference in the LFs at different redshifts.
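
    The quoted SF fractions come from integrating L·φ(L) over the luminosity function. A toy sketch of such an integration with a Schechter form (the φ*, L* and α values below are illustrative placeholders, not the paper's fits):

```python
import math

def schechter(L, phi_star, L_star, alpha):
    """Schechter differential LF: phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*)."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * math.exp(-x)

def luminosity_density(phi_star, L_star, alpha, L_min, L_max, steps=20000):
    """Trapezoidal integral of L * phi(L) dL on a log-spaced grid."""
    lg_min, lg_max = math.log10(L_min), math.log10(L_max)
    prev_L = 10.0 ** lg_min
    prev_f = prev_L * schechter(prev_L, phi_star, L_star, alpha)
    total = 0.0
    for i in range(1, steps + 1):
        L = 10.0 ** (lg_min + (lg_max - lg_min) * i / steps)
        f = L * schechter(L, phi_star, L_star, alpha)
        total += 0.5 * (prev_f + f) * (L - prev_L)
        prev_L, prev_f = L, f
    return total

# With an exponential cutoff, sources brighter than L* contribute only a
# minority of the luminosity density; truncating the integral at L*
# mimics removing a bright (AGN-dominated) end from the LF.
rho_below = luminosity_density(1e-3, 1e10, -1.2, 1e7, 1e10)
rho_all = luminosity_density(1e-3, 1e10, -1.2, 1e7, 1e13)
frac = rho_below / rho_all
```

    Most of the integrated luminosity density comes from sources below L*, which is why removing an AGN contribution concentrated at high luminosities reshapes the bright end of the LF far more than the total energy density.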

  13. The MAGNUM survey: outflows and star formation in ten local Seyfert galaxies with the integral field eye of MUSE

    NASA Astrophysics Data System (ADS)

    Venturi, G.; Marconi, A.; Cresci, G.; Risaliti, G.; Carniani, S.; Mannucci, F.

    2016-08-01

    In this talk I will present the first results from the MAGNUM survey (Measuring Active Galactic Nuclei Under MUSE Microscope), which takes advantage of the unprecedented combination of the large field of view and spectral coverage of MUSE to carry out a detailed study of the interaction of AGN outflows with the host galaxies and of the relation between AGN activity and star formation. The data comprise ten nearby galaxies so far, such as NGC 1365, NGC 1068 and Circinus. The analysis of MUSE data in many different emission lines has allowed us to disentangle the various motions of the gas in the central regions of the galaxies (rotation, outflows and inflows), while also resolving the structure of the AGN-ionised cone. Further information on the separate phases of the gas (having different temperature, density and ionisation state) has been obtained through comparison with high-resolution Chandra X-ray images. Moreover, possible evidence for star formation triggered by AGN outflows has been observed.

  14. V: Musing

    ERIC Educational Resources Information Center

    Rosenfeld, Malke; Kelin, Daniel; Plows, Kate; Conarro, Ryan; Broderick, Debora

    2014-01-01

    When one says "writing about teaching artist practice," what exactly does that mean? In the first two sections (EJ1039315 and EJ1039319), the authors considered different ways to frame a story by either zooming in closely to a specific moment or zooming out to provide more context in an effort to address complex issues. The stories in…

  15. MUSE discovers perpendicular arcs in the inner filament of Centaurus A

    NASA Astrophysics Data System (ADS)

    Hamer, S.; Salomé, P.; Combes, F.; Salomé, Q.

    2015-03-01

    Context. Evidence of active galactic nuclei (AGN) interaction with the intergalactic medium is observed in some galaxies and many cool core clusters. Radio jets are suspected to dig large cavities into the surrounding gas. In most cases, very large optical filaments (several kpc) are also seen all around the central galaxy. The origin of these filaments is still not understood. Star-forming regions are sometimes observed inside the filaments and are interpreted as evidence of positive feedback (AGN-triggered star formation). Aims: Centaurus A is a very nearby galaxy with huge optical filaments aligned with the AGN radio-jet direction. Here, we searched for line ratio variations along the filaments, kinematic evidence of shock-broadened line widths, and large-scale dynamical structures. Methods: We observed a 1' × 1' region around the so-called inner filament of Cen A with the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT) during the Science Verification period. Results: (i) The brightest lines detected are Hα λ6562.8, [NII] λ6583, [OIII] λ4959+5007 and [SII] λ6716+6731. MUSE shows that the filaments are made of clumpy structures inside a more diffuse medium aligned with the radio-jet axis. We find evidence of shocked shells surrounding the star-forming clumps from the line profiles, suggesting that the star formation is induced by shocks. The clump line ratios are best explained by a composite of shocks and star formation illuminated by a radiation cone from the AGN. (ii) We also report a previously undetected large arc-like structure: three streams running perpendicular to the main filament; they are kinematically, morphologically, and excitationally distinct. The clear difference in the excitation of the arcs and clumps suggests that the arcs are very likely located outside of the radiation cone and match the position of the filament only in projection. The three arcs are thus most consistent with neutral material swept along by a

  16. MuSE: accounting for tumor heterogeneity using a sample-specific error model improves sensitivity and specificity in mutation calling from sequencing data.

    PubMed

    Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi

    2016-01-01

    Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE ( http://bioinformatics.mdanderson.org/main/MuSE ), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing. PMID:27557938
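
    As a toy illustration of the phrase "Markov Substitution model for Evolution" (MuSE's actual model is continuous-time and fitted with sample-specific error terms; everything below is a simplified, hypothetical sketch, not MuSE's algorithm), a discrete-time substitution chain over the four bases:

```python
# Toy discrete-time Markov substitution model over the four bases,
# illustrating the idea of evolving the allelic composition at one
# reference base. All parameter values here are illustrative.
BASES = "ACGT"

def step_matrix(mu):
    """Single-step transition matrix: keep the base with probability
    1 - mu, substitute uniformly to each other base with mu / 3."""
    return [[1 - mu if i == j else mu / 3 for j in range(4)]
            for i in range(4)]

def evolve(pi, P, n):
    """Propagate an allele-composition vector pi through n steps of P."""
    for _ in range(n):
        pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    return pi

# Start from a pure reference base 'A' and let substitutions accumulate:
P = step_matrix(0.01)
pi = evolve([1.0, 0.0, 0.0, 0.0], P, 500)
# After many steps the composition approaches the uniform stationary
# distribution of this symmetric chain, with a residual excess of 'A'.
```

    The chain conserves total probability at every step, so the allelic composition remains a proper distribution as it drifts toward its stationary state.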

  18. Connecting the Dots: MUSE Unveils the Destructive Effect of Massive Stars

    NASA Astrophysics Data System (ADS)

    McLeod, A. F.; Ginsburg, A.; Klaassen, P.; Mottram, J.; Ramsay, S.; Testi, L.

    2016-09-01

    Throughout their entire lives, massive stars have a substantial impact on their surroundings, such as via protostellar outflows, stellar winds, ionising radiation and supernovae. Conceptually this is well understood, but the exact role of feedback mechanisms on the global star formation process and the stellar environment, as well as their dependence on the properties of the star-forming regions, are yet to be understood in detail. Observational quantification of the various feedback mechanisms is needed to precisely understand how high mass stars interact with and shape their environment, and which feedback mechanisms dominate under given conditions. We analysed the photo-evaporative effect of ionising radiation from massive stars on their surrounding molecular clouds using MUSE integral field data. This allowed us to determine the mass-loss rate of pillar-like structures (due to photo-evaporation) in different environments, and relate it to the ionising power of nearby massive stars. The resulting correlation is the first observational quantification of the destructive effect of ionising radiation from massive stars.

  19. Reentry Motion and Aerodynamics of the MUSES-C Sample Return Capsule

    NASA Astrophysics Data System (ADS)

    Ishii, Nobuaki; Yamada, Tetsuya; Hiraki, Koju; Inatani, Yoshifumi

    The Hayabusa spacecraft (MUSES-C) carries a small capsule for bringing asteroid samples back to the earth. The initial spin rate of the reentry capsule together with the flight path angle of the reentry trajectory is a key parameter for the aerodynamic motion during the reentry flight. The initial spin rate is given by the spin-release mechanism attached between the capsule and the mother spacecraft, and the flight path angle can be modified by adjusting the earth approach orbit. To determine the desired values of both parameters, the attitude motion during atmospheric flight must be clarified, and the angles of attack at maximum dynamic pressure and at parachute deployment must be assessed. In previous studies, to characterize the aerodynamic effects of the reentry capsule, several wind-tunnel tests were conducted using the ISAS high-speed flow test facilities. In addition to the ground test data, the aerodynamic properties in hypersonic flows were analyzed numerically. Moreover, these data were made more accurate using the results of balloon drop tests. This paper summarizes the aerodynamic properties of the reentry capsule and simulates the attitude motion of the full-configuration capsule during atmospheric flight in three dimensions with six degrees of freedom. The results show the best conditions for the initial spin rates and flight path angles of the reentry trajectory.

  20. Highest Resolution Topography of 433 Eros and Implications for MUSES-C

    NASA Technical Reports Server (NTRS)

    Cheng, A. F.; Barnouin-Jha, O.

    2003-01-01

    The highest resolution observations of surface morphology and topography at asteroid 433 Eros were obtained by the Near Earth Asteroid Rendezvous (NEAR) Shoemaker spacecraft on 12 February 2001, as it landed within a ponded deposit on Eros. Coordinated observations were obtained by the imager and the laser rangefinder, at best image resolution of 1 cm/pixel and best topographic resolution of 0.4 m. The NEAR landing datasets provide unique information on rock size and height distributions and regolith processes. Rocks and soil can be distinguished photometrically, suggesting that bare rock is indeed exposed. The NEAR landing data are the only data at sufficient resolution to be relevant to hazard assessment on future landed missions to asteroids, such as the MUSES-C mission which will land on asteroid 25143 (1998 SF36) in order to obtain samples. In a typical region just outside the pond where NEAR landed, the areal coverage by resolved positive topographic features is 18%. At least one topographic feature in the vicinity of the NEAR landing site would have been hazardous for a spacecraft.

  1. Microscopy with UV Surface Excitation (MUSE) for slide-free histology and pathology imaging

    NASA Astrophysics Data System (ADS)

    Fereidouni, Farzad; Datta-Mitra, Ananya; Demos, Stavros; Levenson, Richard

    2015-03-01

    A novel microscopy method that takes advantage of shallow photon penetration using ultraviolet-range excitation and exogenous fluorescent stains is described. This approach exploits the intrinsic optical sectioning that occurs when exciting tissue fluorescence from superficial layers, generating images similar to those obtainable from a physically thin-sectioned tissue specimen. UV light in the spectral range from roughly 240-275 nm penetrates only a few microns into the surface of biological specimens, thus eliminating out-of-focus signals that would otherwise arise from deeper tissue layers. Furthermore, UV excitation can be used to simultaneously excite fluorophores emitting across a wide spectral range. The sectioning property of the UV light (as opposed to more conventional illumination in the visible range) removes the need for physical or more elaborate optical sectioning approaches, such as confocal, nonlinear or coherent tomographic methods, to generate acceptable axial resolution. Using a tunable laser, we investigated the effect of excitation wavelength in the 230-350 nm spectral range on excitation depth. The results reveal an optimal wavelength range and suggest that this method can be a fast and reliable approach for rapid imaging of tissue specimens. Some of this range is addressable by currently available and relatively inexpensive LED light sources. MUSE may prove to be a good alternative to conventional, time-consuming, histopathology procedures.
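
    The depth-dependent suppression described above follows Beer-Lambert attenuation, I(z) = I0·exp(-z/δ). A minimal sketch (the 3 μm 1/e penetration depth used here is an illustrative value consistent with the abstract's "few microns", not a measured number):

```python
import math

def transmitted_fraction(depth_um, penetration_depth_um):
    """Beer-Lambert attenuation: I(z)/I0 = exp(-z / delta),
    where delta is the 1/e penetration depth (both in microns)."""
    return math.exp(-depth_um / penetration_depth_um)

# Illustrative 1/e depth of ~3 um for ~250 nm excitation in tissue
# (the abstract only states penetration of "a few microns"):
delta = 3.0
surface = transmitted_fraction(1.0, delta)   # roughly one cell layer deep
deep = transmitted_fraction(10.0, delta)     # a typical section thickness
# Excitation reaching ~10 um down is suppressed by more than an order
# of magnitude relative to ~1 um, which is what confines the fluorescence
# signal to the superficial layer and provides the optical sectioning.
```

    The same exponential form explains why shifting the excitation wavelength, and hence δ, tunes the effective optical section thickness.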

  2. Multi-band imaging camera and its sciences for the Japanese near-earth asteroid mission MUSES-C

    NASA Astrophysics Data System (ADS)

    Nakamura, Tsuko; Nakamura, Akiko M.; Saito, Jun; Sasaki, Sho; Nakamura, Ryosuke; Demura, Hirohide; Akiyama, Hiroaki; Tholen, David

    2001-11-01

    In this paper we present the current development status of our Asteroid Multi-band Imaging CAmera (AMICA) for the Japan-US joint asteroid sample return mission MUSES-C. The launch of the spacecraft is planned around the end of 2002 and the whole mission period until sample retrieval on Earth will be approximately five years. The nominal target is the asteroid 1998SF36, one of the Amor-type asteroids. The AMICA specifications for the mission are shown here along with its ground-based and in-flight calibration methods. We also describe the observational scenario at the asteroid, in relation to scientific goals.

  3. MUSE three-dimensional spectroscopy and kinematics of the gigahertz peaked spectrum radio galaxy PKS 1934-63: interaction, recently triggered active galactic nucleus and star formation

    NASA Astrophysics Data System (ADS)

    Roche, Nathan; Humphrey, Andrew; Lagos, Patricio; Papaderos, Polychronis; Silva, Marckelson; Cardoso, Leandro S. M.; Gomes, Jean Michel

    2016-07-01

    We observe the radio galaxy PKS 1934-63 (at z = 0.1825) using the Multi-Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT). The radio source is a gigahertz peaked spectrum source and is compact (0.13 kpc), implying an early stage of evolution (≤10^4 yr). Our data show an interacting pair of galaxies, with projected separation 9.1 kpc and velocity difference Δ(v) = 216 km s^-1. The larger galaxy is a M* ≃ 10^11 M⊙ spheroidal with the emission-line spectrum of a high-excitation young radio active galactic nucleus (AGN; e.g. strong [O I]6300 and [O III]5007). Emission-line ratios indicate a large contribution to the line luminosity from high-velocity shocks (≃550 km s^-1). The companion is a non-AGN disc galaxy, with extended Hα emission from which its star formation rate is estimated as 0.61 M⊙ yr^-1. Both galaxies show rotational velocity gradients in Hα and other lines, with the interaction being prograde-prograde. The SE-NW velocity gradient of the AGN host is misaligned from the E-W radio axis, but aligned with a previously discovered central ultraviolet source, and a factor of 2 greater in amplitude in Hα than in other (forbidden) lines (e.g. [O III]5007). This could be produced by a fast-rotating (100-150 km s^-1) disc with circumnuclear star formation. We also identify a broad component of [O III]5007 emission, blueshifted with a velocity gradient aligned with the radio jets, and associated with outflow. However, the broad component of [O I]6300 is redshifted. In spectral fits, both galaxies have old stellar populations plus ˜0.1 per cent of very young stars, consistent with the galaxies undergoing first perigalacticon, triggering infall and star formation from ˜40 Myr ago followed by the radio outburst.

  4. Hubble Frontier Fields: predictions for the return of SN Refsdal with the MUSE and GMOS spectrographs

    NASA Astrophysics Data System (ADS)

    Jauzac, M.; Richard, J.; Limousin, M.; Knowles, K.; Mahler, G.; Smith, G. P.; Kneib, J.-P.; Jullo, E.; Natarajan, P.; Ebeling, H.; Atek, H.; Clément, B.; Eckert, D.; Egami, E.; Massey, R.; Rexroth, M.

    2016-04-01

    We present a high-precision mass model of the galaxy cluster MACS J1149.6+2223, based on a strong gravitational lensing analysis of Hubble Space Telescope Frontier Fields (HFF) imaging data and spectroscopic follow-up with the Gemini Multi-Object Spectrograph (GMOS) and the Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE). Our model includes 12 new multiply imaged galaxies, bringing the total to 22, composed of 65 individual lensed images. Unlike the first two HFF clusters, Abell 2744 and MACS J0416.1-2403, MACS J1149 does not reveal as many multiple images in the HFF data. Using the LENSTOOL software package and the new sets of multiple images, we model the cluster with several cluster-scale dark matter haloes and additional galaxy-scale haloes for the cluster members. Consistent with previous analyses, we find the system to be complex, composed of five cluster-scale haloes. Their spatial distribution and lower mass, however, make MACS J1149 a less powerful lens. Our best-fitting model predicts image positions with an rms of 0.91 arcsec. We measure the total projected mass inside a 200-kpc aperture as (1.840 ± 0.006) × 10^14 M⊙, thus again reaching 1 per cent precision, following our previous HFF analyses of MACS J0416.1-2403 and Abell 2744. In light of the discovery of the first resolved quadruply lensed supernova, SN Refsdal, in one of the multiply imaged galaxies identified in MACS J1149, we use our revised mass model to investigate the time delays and predict the rise of the next image between 2015 November and 2016 January.

  5. Spectral mapping of comet 67P/Churyumov-Gerasimenko with VLT/MUSE and SINFONI

    NASA Astrophysics Data System (ADS)

    Guilbert-Lepoutre, Aurelie; Besse, Sebastien; Snodgrass, Colin; Yang, Bin

    2016-10-01

    Comets are supposedly the most primitive objects in the solar system, preserving the earliest record of material from the nebula out of which our Sun and planets were formed, and thus holding crucial clues on the early phases of the solar system's formation and evolution. For most small bodies in the solar system we can only access the surface properties, whereas active comet nuclei lose material from their subsurface, so that understanding cometary activity represents a unique opportunity to assess their internal composition, and by extension the composition, temperature and pressure conditions of the protoplanetary disk at their place of formation. The ESA/Rosetta mission is performing the most thorough investigation of a comet ever made. Rosetta is measuring properties of comet 67P/Churyumov-Gerasimenko at distances between 5 and hundreds of km from the nucleus. However, it is unable to make any measurement over the thousands of km of the rest of the coma. Fortunately, the outer coma is accessible from the ground. In addition, we currently lack an understanding of how the very detailed information gathered from space-based observations can be extrapolated to the many ground-based observations that we can potentially perform. Combining parallel in situ observations with observations from the ground therefore gives us a great opportunity, not only to understand the behavior of 67P, but also that of other comets observed exclusively from Earth. As part of the many observations taken from the ground, we have performed a spectral mapping of 67P's coma using two IFU instruments mounted on the VLT: MUSE in the visible, and SINFONI in the near-infrared. The observations, carried out in March 2016, will be presented and discussed.

  6. Abundance ratios and IMF slopes in the dwarf elliptical galaxy NGC 1396 with MUSE

    NASA Astrophysics Data System (ADS)

    Mentz, J. J.; La Barbera, F.; Peletier, R. F.; Falcón-Barroso, J.; Lisker, T.; van de Ven, G.; Loubser, S. I.; Hilker, M.; Sánchez-Janssen, R.; Napolitano, N.; Cantiello, M.; Capaccioli, M.; Norris, M.; Paolillo, M.; Smith, R.; Beasley, M. A.; Lyubenova, M.; Munoz, R.; Puzia, T.

    2016-08-01

Deep observations of the dwarf elliptical (dE) galaxy NGC 1396 (M_V = -16.60, mass ˜4 × 10^8 M⊙), located in the Fornax cluster, have been performed with the VLT/MUSE spectrograph in the wavelength region from 4750-9350 Å. In this paper we present a stellar population analysis studying chemical abundances, the star formation history (SFH) and the stellar initial mass function (IMF) as a function of galactocentric distance. Different, independent ways to analyse the stellar populations result in a luminosity-weighted age of ˜6 Gyr and a metallicity [Fe/H] ˜ -0.4, similar to other dEs of similar mass. We find unusually overabundant values of [Ca/Fe] ˜ +0.1 and underabundant sodium, with [Na/Fe] values around -0.1, while [Mg/Fe] is overabundant at all radii, increasing from ˜+0.1 in the centre to ˜+0.2 dex. We notice a significant metallicity and age gradient within this dwarf galaxy. Constraining the stellar IMF of NGC 1396, we find it to be consistent with either a Kroupa-like or a top-heavy distribution, while a bottom-heavy IMF is firmly ruled out. An analysis of the abundance ratios, and a comparison with galaxies in the Local Group, shows that the chemical enrichment history of NGC 1396 is similar to that of the Galactic disc, with an extended star formation history. This would be the case if the galaxy originated from an LMC-sized dwarf galaxy progenitor, which lost its gas while falling into the Fornax cluster.

  7. Assessment of HIV testing among young methamphetamine users in Muse, Northern Shan State, Myanmar

    PubMed Central

    2014-01-01

    Background Methamphetamine (MA) use has a strong correlation with risky sexual behaviors, and thus may be triggering the growing HIV epidemic in Myanmar. Although methamphetamine use is a serious public health concern, only a few studies have examined HIV testing among young drug users. This study aimed to examine how predisposing, enabling and need factors affect HIV testing among young MA users. Methods A cross-sectional study was conducted from January to March 2013 in Muse city in the Northern Shan State of Myanmar. Using a respondent-driven sampling method, 776 MA users aged 18-24 years were recruited. The main outcome of interest was whether participants had ever been tested for HIV. Descriptive statistics and multivariate logistic regression were applied in this study. Results Approximately 14.7% of young MA users had ever been tested for HIV. Significant positive predictors of HIV testing included predisposing factors such as being a female MA user, having had higher education, and currently living with one’s spouse/sexual partner. Significant enabling factors included being employed and having ever visited NGO clinics or met NGO workers. Significant need factors were having ever been diagnosed with an STI and having ever wanted to receive help to stop drug use. Conclusions Predisposing, enabling and need factors were significant contributors affecting uptake of HIV testing among young MA users. Integrating HIV testing into STI treatment programs, alongside general expansion of HIV testing services may be effective in increasing HIV testing uptake among young MA users. PMID:25042697
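The adjusted odds ratios in such analyses come from a multivariate logistic regression. As a hedged, generic illustration (simulated predictors, not the study's data; `logistic_fit` is a hypothetical helper, not the authors' code), the fit can be sketched with a Newton-Raphson maximum-likelihood solver:

```python
import numpy as np

def logistic_fit(X, y, n_iter=25):
    # Newton-Raphson maximum-likelihood fit of logistic regression;
    # an intercept column is added automatically, and exp(beta[k])
    # is the adjusted odds ratio for predictor k.
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        H = X.T @ (X * W[:, None])            # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta
```

With binary predictors coded 0/1 (e.g. "employed", "ever visited an NGO clinic"), the exponentiated coefficients are directly interpretable as the adjusted odds of having ever been tested for HIV.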

  8. Probing the boundary between star clusters and dwarf galaxies: A MUSE view on the dynamics of Crater/Laevens I

    NASA Astrophysics Data System (ADS)

    Voggel, Karina; Hilker, Michael; Baumgardt, Holger; Collins, Michelle L. M.; Grebel, Eva K.; Husemann, Bernd; Richtler, Tom; Frank, Matthias J.

    2016-08-01

We present MUSE observations of the debated ultrafaint stellar system Crater. We spectroscopically confirm 26 member stars of this system via radial velocity measurements. We derive the systematic instrumental velocity uncertainty of MUSE spectra to be 2.27 km s^-1. This new data set increases the confirmed member stars of Crater by a factor of 3. One of the three bright blue stars and a fainter blue star just above the main-sequence turn-off are also found to be likely members of the system. The observations reveal that Crater has a systemic radial velocity of v_sys = 148.18^{+1.08}_{-1.15} km s^-1, whereas the most likely velocity dispersion of this system is σ_v = 2.04^{+2.19}_{-1.06} km s^-1. The total dynamical mass of the system, assuming dynamical equilibrium, is then M_tot = 1.50^{+4.9}_{-1.2} × 10^5 M⊙, implying a mass-to-light ratio of M/L_V = 8.52^{+28.0}_{-6.5} M⊙/L⊙, which is consistent within its errors with a purely baryonic stellar population; no significant evidence for the presence of dark matter was found. We also find evidence for a velocity gradient in the radial velocity distribution. We conclude that our findings strongly support the interpretation that Crater is a faint intermediate-age outer halo globular cluster and not a dwarf galaxy.
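For intuition about how a dispersion of ~2 km s^-1 translates into a dynamical mass, a commonly used half-light mass estimator (Wolf et al. 2010, M(<r_1/2) ≈ 4 σ_los² R_e / G) can be sketched. The paper's quoted M_tot comes from its own dynamical modelling, so the scaling below is illustrative only and the radius used in the usage note is an assumed placeholder:

```python
# Gravitational constant in pc * (km/s)^2 / Msun
G = 4.301e-3

def dynamical_mass_half(sigma_los_kms, r_eff_pc):
    # Wolf et al. (2010) half-light estimator:
    #   M(<r_1/2) ~= 4 * sigma_los^2 * R_e / G ~= 930 * sigma^2 * R_e  [Msun]
    # valid for dispersion-supported systems in dynamical equilibrium.
    return 4.0 * sigma_los_kms**2 * r_eff_pc / G
```

For example, with σ_los = 2.04 km s^-1 and an assumed effective radius of a few tens of pc, the estimator returns ~10^5 M⊙, the same order as the abstract's M_tot and hence a mass-to-light ratio compatible with a purely stellar population.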

  9. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
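The locally weighted fusion step can be sketched as similarity-ranked voting: at each voxel, rank the warped atlases by a local similarity score and take a weighted vote among the top few. This is a schematic illustration only (the real MUSE pipeline adds the boundary modulation term and operates on full 3D registrations); the function name `fuse_labels` is hypothetical:

```python
import numpy as np

def fuse_labels(atlas_labels, atlas_sims, top_k=3):
    # Consensus label fusion with locally optimal atlas selection:
    #   atlas_labels: (n_atlases, ...) integer label maps in target space
    #   atlas_sims:   (n_atlases, ...) local similarity to the target image
    # At each voxel, keep the top_k most similar atlases and take a
    # similarity-weighted majority vote among their labels.
    atlas_labels = np.asarray(atlas_labels)
    atlas_sims = np.asarray(atlas_sims, dtype=float)
    n = atlas_labels.shape[0]
    flat_lab = atlas_labels.reshape(n, -1)
    flat_sim = atlas_sims.reshape(n, -1)
    out = np.empty(flat_lab.shape[1], dtype=flat_lab.dtype)
    for v in range(flat_lab.shape[1]):
        order = np.argsort(flat_sim[:, v])[::-1][:top_k]  # locally best atlases
        votes = {}
        for a in order:
            lab = flat_lab[a, v]
            votes[lab] = votes.get(lab, 0.0) + flat_sim[a, v]
        out[v] = max(votes, key=votes.get)
    return out.reshape(atlas_labels.shape[1:])
```

Because the ranking is recomputed per voxel, an atlas that registered well in one anatomical region but poorly in another contributes only where it is locally reliable, which is the key advantage over a single global atlas selection.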

  11. Extended Lyman α haloes around individual high-redshift galaxies revealed by MUSE

    NASA Astrophysics Data System (ADS)

    Wisotzki, L.; Bacon, R.; Blaizot, J.; Brinchmann, J.; Herenz, E. C.; Schaye, J.; Bouché, N.; Cantalupo, S.; Contini, T.; Carollo, C. M.; Caruana, J.; Courbot, J.-B.; Emsellem, E.; Kamann, S.; Kerutt, J.; Leclercq, F.; Lilly, S. J.; Patrício, V.; Sandin, C.; Steinmetz, M.; Straka, L. A.; Urrutia, T.; Verhamme, A.; Weilbacher, P. M.; Wendt, M.

    2016-03-01

We report the detection of extended Lyα emission around individual star-forming galaxies at redshifts z = 3-6 in an ultradeep exposure of the Hubble Deep Field South obtained with MUSE on the ESO-VLT. The data reach a limiting surface brightness (1σ) of ~1 × 10^-19 erg s^-1 cm^-2 arcsec^-2 in azimuthally averaged radial profiles, an order of magnitude improvement over previous narrowband imaging. Our sample consists of 26 spectroscopically confirmed Lyα-emitting, but mostly continuum-faint (m_AB ≳ 27) galaxies. In most objects the Lyα emission is considerably more extended than the UV continuum light. While five of the faintest galaxies in the sample show no significantly detected Lyα haloes, the derived upper limits suggest that this is due to insufficient S/N. Lyα haloes therefore appear to be ubiquitous even for low-mass (~10^8-10^9 M⊙) star-forming galaxies at z > 3. We decompose the Lyα emission of each object into a compact component tracing the UV continuum and an extended halo component, and infer sizes and luminosities of the haloes. The extended Lyα emission approximately follows an exponential surface brightness distribution with a scale length of a few kpc. While these haloes are thus quite modest in terms of their absolute sizes, they are larger by a factor of 5-15 than the corresponding rest-frame UV continuum sources as seen by HST. They are also much more extended, by a factor ~5, than Lyα haloes around low-redshift star-forming galaxies. Between ~40% and ≳90% of the observed Lyα flux comes from the extended halo component, with no obvious correlation of this fraction with either the absolute or the relative size of the Lyα halo. Our observations provide direct insights into the spatial distribution of at least partly neutral gas residing in the circumgalactic medium of low to intermediate mass galaxies at z > 3.
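The exponential surface brightness model used for the halo component, SB(r) = SB0 exp(-r/r_s), can be fitted to an azimuthally averaged radial profile by linear least squares in log space. A minimal sketch (generic, not the paper's fitting code, which also models the compact continuum-like component):

```python
import numpy as np

def fit_exponential_profile(r_kpc, sb):
    # Fit SB(r) = SB0 * exp(-r/rs) by linear least squares on log(SB):
    #   log SB = log SB0 - r/rs  is linear in r.
    r_kpc = np.asarray(r_kpc, dtype=float)
    A = np.vstack([np.ones_like(r_kpc), -r_kpc]).T
    coef, *_ = np.linalg.lstsq(A, np.log(np.asarray(sb, dtype=float)),
                               rcond=None)
    sb0 = np.exp(coef[0])   # central surface brightness
    rs = 1.0 / coef[1]      # exponential scale length [kpc]
    return sb0, rs
```

In practice one would restrict the fit to radii where the halo dominates over the compact component and weight by the per-annulus noise; the log-linear form above is the simplest unweighted version.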

  12. Comparing the properties of the X-shaped bulges of NGC 4710 and the Milky Way with MUSE

    NASA Astrophysics Data System (ADS)

    Gonzalez, O. A.; Gadotti, D. A.; Debattista, V. P.; Rejkuba, M.; Valenti, E.; Zoccali, M.; Coccato, L.; Minniti, D.; Ness, M.

    2016-06-01

Context. Our view of the structure of the Milky Way and, in particular, its bulge is obscured by the intervening stars, dust, and gas in the disc. While great progress in understanding the bulge has been achieved with past and ongoing observations, the comparison of its global chemodynamical properties with respect to those of bulges seen in external galaxies has yet to be accomplished. Aims: We used the Multi Unit Spectroscopic Explorer (MUSE) instrument installed on the Very Large Telescope (VLT) to obtain spectral and imaging coverage of NGC 4710. The wide area and excellent sampling of the MUSE integral field spectrograph allow us to investigate the dynamical properties of the X-shaped bulge of NGC 4710 and compare it with the properties of the X-shaped bulge of the Milky Way. Methods: We measured the radial velocities, velocity dispersion, and stellar populations using a penalised pixel full spectral fitting technique adopting simple stellar population models, on a 1' × 1' area centred on the bulge of NGC 4710. We constructed the velocity maps of the bulge of NGC 4710 and investigated the presence of vertical metallicity gradients. These properties were compared to those of the Milky Way bulge and to a simulated galaxy with a boxy-peanut bulge. Results: We find the line-of-sight velocity maps and 1D rotation curves of the bulge of NGC 4710 to be remarkably similar to those of the Milky Way bulge. Some specific differences that were identified are in good agreement with the expectations from variations in the bar orientation angle. The bulge of NGC 4710 has a boxy-peanut morphology with a pronounced X-shape, showing no indication of any additional spheroidally distributed bulge population, in which we measure a vertical metallicity gradient of 0.35 dex/kpc. Conclusions: The general properties of NGC 4710 are very similar to those observed in the Milky Way bulge. However, it has been suggested that the Milky Way bulge has an additional component that is

13. Time of flight and the MUSE experiment in the PIM1 Channel at the Paul Scherrer Institute

    NASA Astrophysics Data System (ADS)

    Lin, Wan; MUSE Collaboration

    2015-10-01

The MUSE experiment in the PIM1 Channel at the Paul Scherrer Institute in Villigen, Switzerland, measures scattering of electrons and muons from a liquid hydrogen target. The intent of the experiment is to deduce from the scattering probabilities whether the radius of the proton is the same when determined from the scattering of the two different particle types. An important technique for the experiment is precise timing measurement, using high-precision scintillators and a beam Cherenkov counter. We will describe the motivations for the precise timing measurements. We will present results for the timing measurements from prototype experimental detectors. We will also present results from a Geant4 simulation that was used to calculate energy-loss corrections to the time of flight determined between the beam Cherenkov counter and the scintillator. This work is supported in part by the U.S. National Science Foundation Grant PHY 1306126 and the Douglass Project for Women in Math, Science, and Engineering.
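The reason time of flight separates the beam species is purely kinematic: at the same momentum, heavier particles are slower. A minimal sketch of the (uncorrected) relativistic flight-time calculation, using an illustrative beam momentum and an assumed flight path (the real analysis adds the Geant4 energy-loss corrections mentioned above):

```python
import math

C = 299792458.0                               # speed of light [m/s]
MASS_MEV = {"e": 0.511, "mu": 105.66, "pi": 139.57}   # particle masses [MeV/c^2]

def tof_ns(p_mev, path_m, particle):
    # Relativistic time of flight at fixed momentum:
    #   beta = p / E = p / sqrt(p^2 + m^2)
    beta = p_mev / math.hypot(p_mev, MASS_MEV[particle])
    return path_m / (beta * C) * 1e9          # [ns]
```

At an illustrative 115 MeV/c over an assumed ~10 m path, electrons arrive first (essentially at c), with muons and then pions trailing by of order 10 ns each, which is what the beam Cherenkov counter and scintillators are timed to resolve.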

  14. ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models

    NASA Astrophysics Data System (ADS)

    Winston, R. B.

    2013-12-01

ModelMuse is a free, publicly available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three-dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.
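The maximum/middle/minimum permeability vectors described above are the eigendecomposition of the symmetric permeability tensor. A hedged sketch of that computation (generic linear algebra, not ModelMuse's Object Pascal implementation):

```python
import numpy as np

def principal_permeabilities(K):
    # Decompose a symmetric 3x3 permeability tensor into principal
    # magnitudes (sorted max, middle, min) and the mutually orthogonal
    # unit directions that a vector plot would draw.
    vals, vecs = np.linalg.eigh(K)        # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]        # reorder to max, middle, min
    return vals[order], vecs[:, order]    # columns are unit direction vectors
```

Plotting each column scaled by its eigenvalue reproduces the display described in the abstract: vector length shows magnitude, vector angle shows the principal direction.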

  15. MUSE Reveals a Recent Merger in the Post-starburst Host Galaxy of the TDE ASASSN-14li

    NASA Astrophysics Data System (ADS)

    Prieto, J. L.; Krühler, T.; Anderson, J. P.; Galbany, L.; Kochanek, C. S.; Aquino, E.; Brown, J. S.; Dong, Subo; Förster, F.; Holoien, T. W.-S.; Kuncarayakti, H.; Maureira, J. C.; Rosales-Ortega, F. F.; Sánchez, S. F.; Shappee, B. J.; Stanek, K. Z.

    2016-10-01

We present Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopic observations of the host galaxy (PGC 043234) of one of the closest (z = 0.0206, D ≃ 90 Mpc) and best-studied tidal disruption events (TDEs), ASASSN-14li. The MUSE integral field data reveal asymmetric and filamentary structures that extend up to ≳10 kpc from the post-starburst host galaxy of ASASSN-14li. The structures are traced only through the strong nebular [O iii] λ5007, [N ii] λ6584, and Hα emission lines. The total off-nuclear [O iii] λ5007 luminosity is 4.7 × 10^39 erg s^-1, and the ionized H mass is ~10^4 (500/n_e) M⊙. Based on the Baldwin-Phillips-Terlevich diagram, the nebular emission can be driven by either AGN photoionization or shock excitation, with AGN photoionization favored given the narrow intrinsic line widths. The emission line ratios and spatial distribution strongly resemble ionization nebulae around fading AGNs such as IC 2497 (Hanny's Voorwerp) and ionization "cones" around Seyfert 2 nuclei. The morphology of the emission line filaments strongly suggests that PGC 043234 is a recent merger, which likely triggered a strong starburst and AGN activity, leading to the post-starburst spectral signatures and the extended nebular emission line features we see today. We briefly discuss the implications of these observations in the context of the strongly enhanced TDE rates observed in post-starburst galaxies and their connection to enhanced theoretical TDE rates produced by supermassive black hole binaries.

  16. Partial volume estimation using continuous representations

    NASA Astrophysics Data System (ADS)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid

    2001-07-01

This paper presents a new method for partial volume estimation using the standard eigenimage method and B-splines. The proposed method is applied to multi-parameter volumetric images such as MRI. The proposed approach uses B-spline bases (kernels) to interpolate a continuous 2D surface or 3D density function for a sampled image dataset. It uses the Fourier domain to calculate the interpolation coefficients for each data point. This interpolation is then incorporated into the standard eigenimage method, yielding a particular mask that depends on the B-spline basis used. To estimate the partial volumes, this mask is convolved with the interpolation coefficients and the eigenimage transformation is applied to the convolution result. To evaluate the method, images scanned from a 3D simulation model are used. The simulation provides images similar to CSF, white matter, and gray matter of the human brain in T1-, T2-, and PD-weighted MRI. The performance of the new method is also compared to that of the polynomial estimators. The results show that the new estimators have standard deviations smaller than that of the eigenimage method (by up to 25%) and larger than those of the polynomial estimators (by up to 45%). The new estimators are superior to the polynomial ones in that they provide an arbitrary degree of continuity at the boundaries of pixels/voxels. As a result, the new method can generate a continuous, smooth, and very accurate contour/surface of the desired object. The new B-spline estimators are faster than the polynomial estimators but slower than the standard eigenimage method.
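The Fourier-domain computation of B-spline interpolation coefficients mentioned above can be sketched in 1D: because a sampled signal equals its coefficient sequence convolved with the sampled cubic B-spline kernel (values 1/6, 4/6, 1/6 at offsets -1, 0, 1), the coefficients are recovered by deconvolution in the FFT domain. This is a minimal sketch assuming periodic boundaries, not the paper's implementation:

```python
import numpy as np

def bspline3_coeffs_fft(s):
    # Cubic B-spline prefilter via the FFT (periodic boundary conditions).
    # The sampled kernel b3 takes values [1/6, 4/6, 1/6] at offsets {-1,0,1},
    # whose DFT is (4 + 2*cos(2*pi*k/n)) / 6, which never vanishes.
    n = len(s)
    k = np.fft.fftfreq(n)                       # frequency in cycles/sample
    B = (4.0 + 2.0 * np.cos(2.0 * np.pi * k)) / 6.0
    return np.real(np.fft.ifft(np.fft.fft(s) / B))

def bspline3_eval(c, x):
    # Evaluate sum_k c[k] * b3(x - k) with periodic index wrap,
    # giving a C^2-continuous interpolant of the original samples.
    n = len(c)
    i = int(np.floor(x))
    out = 0.0
    for k in range(i - 1, i + 3):               # only 4 kernels overlap x
        t = abs(x - k)
        if t < 1.0:
            w = (4.0 - 6.0 * t**2 + 3.0 * t**3) / 6.0
        elif t < 2.0:
            w = (2.0 - t) ** 3 / 6.0
        else:
            w = 0.0
        out += c[k % n] * w
    return out
```

Evaluating the interpolant at the original sample positions returns the input exactly, while fractional positions give the smooth continuous representation that the partial volume mask is built on; separable application along each axis extends this to 2D/3D.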

  17. The XXL Survey. VIII. MUSE characterisation of intracluster light in a z ~ 0.53 cluster of galaxies

    NASA Astrophysics Data System (ADS)

    Adami, C.; Pompei, E.; Sadibekova, T.; Clerc, N.; Iovino, A.; McGee, S. L.; Guennou, L.; Birkinshaw, M.; Horellou, C.; Maurogordato, S.; Pacaud, F.; Pierre, M.; Poggianti, B.; Willis, J.

    2016-06-01

Aims: Within a cluster, gravitational effects can lead to the removal of stars from their parent galaxies and their subsequent dispersal into the intracluster medium. Gas hydrodynamical effects can additionally strip gas and dust from galaxies; both gas and stars contribute to intracluster light (ICL). The properties of the ICL can therefore help constrain the physical processes at work in clusters by serving as a fossil record of the interaction history. Methods: The present study is designed to characterise this ICL for the first time in a ~10^14 M⊙ and z ~ 0.53 cluster of galaxies from imaging and spectroscopic points of view. By applying a wavelet-based method to CFHT Megacam and WIRCAM images, we detect significant quantities of diffuse light and are able to constrain their spectral energy distributions. These sources were then spectroscopically characterised with ESO Multi Unit Spectroscopic Explorer (MUSE) spectroscopic data. MUSE data were also used to compute redshifts of 24 cluster galaxies and search for cluster substructures. Results: An atypically large amount of ICL, equivalent in i' to the emission from two brightest cluster galaxies, has been detected in this cluster. Part of the detected diffuse light has a very weak optical stellar component and apparently consists mainly of gas emission, while other diffuse light sources are clearly dominated by old stars. Furthermore, emission lines were detected in several places of diffuse light. Our spectral analysis shows that this emission likely originates from low-excitation parameter gas. Globally, the stellar contribution to the ICL is about 2.3 × 10^9 yr old even though the ICL is not currently forming a large number of stars. On the other hand, the contribution of the gas emission to the ICL in the optical is much greater than the stellar contribution in some regions, but the gas density is likely too low to form stars. These observations favour ram pressure stripping, turbulent viscous stripping, or

  18. The extinction and dust-to-gas structure of the planetary nebula NGC 7009 observed with MUSE

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.; Monreal-Ibero, A.; Barlow, M. J.; Ueta, T.; Wesson, R.; Zijlstra, A. A.

    2016-04-01

Context. Dust plays a significant role in planetary nebulae. Dust ejected with the gas in the asymptotic giant branch (AGB) phase is subject to the harsh environment of the planetary nebula (PN) while the star is evolving towards a white dwarf. Dust surviving the PN phase contributes to the dust content of the interstellar medium. Aims: The morphology of the internal dust extinction has been mapped for the first time in a PN, the bright nearby Galactic nebula NGC 7009. The morphologies of the gas, dust extinction and dust-to-gas ratio are compared to the structural features of the nebula. Methods: Emission line maps in H Balmer and Paschen lines were formed from analysis of MUSE cubes of NGC 7009 observed during science verification of the instrument. The measured electron temperature and density from the same cube were employed to predict the theoretical H line ratios and derive the extinction distribution across the nebula. After correction for the interstellar extinction to NGC 7009, the internal A_V/N_H has been mapped for the first time in a PN. Results: The extinction map of NGC 7009 has considerable structure, broadly corresponding to the morphological features of the nebula. The dust-to-gas ratio, A_V/N_H, increases from 0.7 times the interstellar value to >5 times from the centre towards the periphery of the ionized nebula. The integrated A_V/N_H is about twice the mean ISM value. A large-scale feature in the extinction map is a wave, consisting of a crest and trough, at the rim of the inner shell. The nature of this feature is investigated and instrumental and physical causes considered; no convincing mechanisms were identified to produce this feature, other than AGB mass loss variations. Conclusions: Extinction mapping from H emission line imaging of PNe with MUSE provides a powerful tool for revealing the properties of internal dust and the dust-to-gas ratio.
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere (ESO).

  19. Interleaved EPI Based fMRI Improved by Multiplexed Sensitivity Encoding (MUSE) and Simultaneous Multi-Band Imaging

    PubMed Central

    Chang, Hing-Chiu; Gaur, Pooja; Chou, Ying-hui; Chu, Mei-Lan; Chen, Nan-kuei

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive and powerful imaging tool for detecting brain activities. The majority of fMRI studies are performed with single-shot echo-planar imaging (EPI) due to its high temporal resolution. Recent studies have demonstrated that, by increasing the spatial-resolution of fMRI, previously unidentified neuronal networks can be measured. However, it is challenging to improve the spatial resolution of conventional single-shot EPI based fMRI. Although multi-shot interleaved EPI is superior to single-shot EPI in terms of the improved spatial-resolution, reduced geometric distortions, and sharper point spread function (PSF), interleaved EPI based fMRI has two main limitations: 1) the imaging throughput is lower in interleaved EPI; 2) the magnitude and phase signal variations among EPI segments (due to physiological noise, subject motion, and B0 drift) are translated to significant in-plane aliasing artifact across the field of view (FOV). Here we report a method that integrates multiple approaches to address the technical limitations of interleaved EPI-based fMRI. Firstly, the multiplexed sensitivity-encoding (MUSE) post-processing algorithm is used to suppress in-plane aliasing artifacts resulting from time-domain signal instabilities during dynamic scans. Secondly, a simultaneous multi-band interleaved EPI pulse sequence, with a controlled aliasing scheme incorporated, is implemented to increase the imaging throughput. Thirdly, the MUSE algorithm is then generalized to accommodate fMRI data obtained with our multi-band interleaved EPI pulse sequence, suppressing both in-plane and through-plane aliasing artifacts. The blood-oxygenation-level-dependent (BOLD) signal detectability and the scan throughput can be significantly improved for interleaved EPI-based fMRI. Our human fMRI data obtained from 3 Tesla systems demonstrate the effectiveness of the developed methods. 
It is expected that future fMRI studies requiring high
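The in-plane unaliasing that MUSE generalizes builds on the core SENSE step: each aliased pixel in a shot image is a sensitivity-weighted sum of the underlying unaliased pixels, giving a small least-squares system per pixel across coils. A hedged sketch of just that underlying SENSE solve for 2x aliasing (the full MUSE algorithm additionally estimates and applies shot-to-shot phase maps, which are omitted here):

```python
import numpy as np

def sense_unalias_2x(aliased, sens):
    # Resolve 2x in-plane aliasing. For each aliased pixel, the coil
    # signals satisfy  y_c = S_c(p1)*x(p1) + S_c(p2)*x(p2)  where p1, p2
    # are the two fold-over locations; solve across coils by least squares.
    #   aliased: (n_coils, ny//2, nx) complex aliased coil images
    #   sens:    (n_coils, ny, nx)   complex coil sensitivity maps
    n_coils, ny2, nx = aliased.shape
    ny = 2 * ny2
    out = np.zeros((ny, nx), dtype=complex)
    for y in range(ny2):
        for x in range(nx):
            # (n_coils, 2) encoding matrix of the two folded locations
            E = np.stack([sens[:, y, x], sens[:, y + ny2, x]], axis=1)
            sol, *_ = np.linalg.lstsq(E, aliased[:, y, x], rcond=None)
            out[y, x], out[y + ny2, x] = sol
    return out
```

With more coils than fold-over locations the per-pixel system is overdetermined, which is what makes the least-squares solve well posed; MUSE exploits the same redundancy to additionally estimate the phase variations among EPI segments.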

  20. The silver lining of a mind in the clouds: interesting musings are associated with positive mood while mind-wandering.

    PubMed

    Franklin, Michael S; Mrazek, Michael D; Anderson, Craig L; Smallwood, Jonathan; Kingstone, Alan; Schooler, Jonathan W

    2013-01-01

The negative effects of mind-wandering on performance and mood have been widely documented. In a recent well-cited study, Killingsworth and Gilbert (2010) conducted a large experience-sampling study revealing that all off-task episodes, regardless of content, receive happiness ratings equal to or lower than on-task episodes. We present data from a similarly implemented experience-sampling study with additional mind-wandering content categories. Our results largely conform to those of Killingsworth and Gilbert (2010), with mind-wandering generally being associated with a more negative mood. However, subsequent analyses reveal situations in which a more positive mood is reported after being off-task. Specifically, when off-task episodes are rated for interest, the high-interest episodes are associated with an increase in positive mood compared to all on-task episodes. These findings identify a situation in which mind-wandering may have positive effects on mood, and suggest the possible benefits of encouraging individuals to shift their off-task musings to the topics they find most engaging.

  1. MUSE sneaks a peek at extreme ram-pressure stripping events - I. A kinematic study of the archetypal galaxy ESO137-001

    NASA Astrophysics Data System (ADS)

    Fumagalli, Michele; Fossati, Matteo; Hau, George K. T.; Gavazzi, Giuseppe; Bower, Richard; Sun, Ming; Boselli, Alessandro

    2014-12-01

We present Multi Unit Spectroscopic Explorer (MUSE) observations of ESO137-001, a spiral galaxy infalling towards the centre of the massive Norma cluster at z ˜ 0.0162. During the high-velocity encounter of ESO137-001 with the intracluster medium, a dramatic ram-pressure stripping event gives rise to an extended gaseous tail, traced by our MUSE observations to >30 kpc from the galaxy centre. By studying the Hα surface brightness and kinematics in tandem with the stellar velocity field, we conclude that ram pressure has completely removed the interstellar medium from the outer disc, while the primary tail is still fed by gas from the inner regions. Gravitational interactions do not appear to be a primary mechanism for gas removal. The stripped gas retains the imprint of the disc rotational velocity to ˜20 kpc downstream, without a significant gradient along the tail, which suggests that ESO137-001 is fast moving along a radial orbit in the plane of the sky. Conversely, beyond ˜20 kpc, a greater degree of turbulence is seen, with velocity dispersion up to ≳100 km s^-1. For a model-dependent infall velocity of v_inf ˜ 3000 km s^-1, we conclude that the transition from laminar to turbulent flow in the tail occurs on time-scales ≥6.5 Myr. Our work demonstrates the terrific potential of MUSE for detailed studies of how ram-pressure stripping operates on small scales, providing a deep understanding of how galaxies interact with the dense plasma of the cluster environment.

  2. MUSE searches for galaxies near very metal-poor gas clouds at z ˜ 3: new constraints for cold accretion models

    NASA Astrophysics Data System (ADS)

    Fumagalli, Michele; Cantalupo, Sebastiano; Dekel, Avishai; Morris, Simon L.; O'Meara, John M.; Prochaska, J. Xavier; Theuns, Tom

    2016-10-01

We report on the search for galaxies in the proximity of two very metal-poor gas clouds at z ˜ 3 towards the quasar Q0956+122. With a 5-hour Multi-Unit Spectroscopic Explorer (MUSE) integration in a ˜500 × 500 kpc^2 region centred at the quasar position, we achieve a ≥80 per cent complete spectroscopic survey of continuum-detected galaxies with m_R ≤ 25 mag and Lyα emitters with luminosity L_Lyα ≥ 3 × 10^41 erg s^-1. We do not identify galaxies at the redshift of a z ˜ 3.2 Lyman limit system (LLS) with log Z/Z⊙ = -3.35 ± 0.05, placing this gas cloud in the intergalactic medium or circumgalactic medium of a galaxy below our sensitivity limits. Conversely, we detect five Lyα emitters at the redshift of a pristine z ˜ 3.1 LLS with log Z/Z⊙ ≤ -3.8, while ˜0.4 sources were expected given the z ˜ 3 Lyα luminosity function. Both this high detection rate and the fact that at least three emitters appear aligned in projection with the LLS suggest that this pristine cloud is tracing a gas filament that is feeding one or multiple galaxies. Our observations uncover two different environments for metal-poor LLSs, implying a complex link between these absorbers and galaxy haloes, which ongoing MUSE surveys will soon explore in detail. Moreover, in agreement with recent MUSE observations, we detected a ˜90 kpc Lyα nebula at the quasar redshift and three Lyα emitters reminiscent of a `dark galaxy' population.

  3. The MUSE view of QSO PG 1307+085: an elliptical galaxy on the MBH-σ* relation interacting with its group environment

    NASA Astrophysics Data System (ADS)

    Husemann, B.; Bennert, V. N.; Scharwächter, J.; Woo, J.-H.; Choudhury, O. S.

    2016-01-01

    We report deep optical integral-field spectroscopy with the Multi-Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope of the luminous radio-quiet quasi-stellar object (QSO) PG 1307+085 obtained during commissioning. Given the high sensitivity and spatial resolution delivered by MUSE, we are able to resolve the compact (re ˜ 1.3 arcsec) elliptical host galaxy. After spectroscopic deblending of the QSO and host galaxy emission, we infer a stellar velocity dispersion of σ* = 155 ± 19 km s-1. This places PG 1307+085 on the local MBH-σ* relation within its intrinsic scatter, but offset towards a higher black hole mass with respect to the mean relation. The MUSE observations reveal a large extended narrow-line region (ENLR) around PG 1307+085 reaching out to ˜30 kpc. In addition, we detect a faint ionized gas bridge towards the most massive galaxy of the galaxy group at 50 kpc distance. The ionized gas kinematics does not show any evidence for gas outflows on kpc scales despite the high QSO luminosity of Lbol > 1046 erg s-1. Based on the ionized gas distribution, kinematics, and metallicity, we discuss the origin of the ENLR with respect to its group environment, including minor mergers, ram-pressure stripping, or gas accretion as likely scenarios. We conclude that PG 1307+085 is a normal elliptical host in terms of the scaling relations, but that the gas is likely affected by the environment through gravity or ambient pressure. It is possible that the interaction with the environment, seen in the ionized gas, might be responsible for driving sufficient gas to the black hole.

  4. Ionization processes in a local analogue of distant clumpy galaxies: VLT MUSE IFU spectroscopy and FORS deep images of the TDG NGC 5291N

    NASA Astrophysics Data System (ADS)

    Fensch, J.; Duc, P.-A.; Weilbacher, P. M.; Boquien, M.; Zackrisson, E.

    2016-01-01

    Context. We present Integral Field Unit (IFU) observations with MUSE and deep imaging with FORS of a dwarf galaxy recently formed within the giant collisional HI ring surrounding NGC 5291. This Tidal Dwarf Galaxy (TDG)-like object has the characteristics of typical z = 1-2 gas-rich spiral galaxies: a high gas fraction, a rather turbulent clumpy interstellar medium, the absence of an old stellar population, and a moderate metallicity and star formation efficiency. Aims: The MUSE spectra allow us to determine the physical conditions within the various complex substructures revealed by the deep optical images and to scrutinize the ionization processes at play in this specific medium at unprecedented spatial resolution. Methods: Starburst age, extinction, and metallicity maps of the TDG and the surrounding regions were determined using the strong emission lines Hβ, [OIII], [OI], [NII], Hα, and [SII] combined with empirical diagnostics. Different ionization mechanisms were distinguished using BPT-like diagrams and shock plus photoionization models. Results: In general, the physical conditions within the star-forming regions are homogeneous, in particular with a uniform half-solar oxygen abundance. On small scales, the derived extinction map shows narrow dust lanes. Regions with atypically strong [OI] emission lines immediately surround the TDG. The [OI]/Hα ratio cannot easily be accounted for by photoionization by young stars or by shock models. At greater distances from the main star-forming clumps, a faint diffuse blue continuum emission is observed, both in the deep FORS images and in the MUSE data. It does not have a clear counterpart in the UV regime probed by GALEX. A stacked spectrum towards this region does not exhibit any emission lines, excluding faint levels of star formation, or stellar absorption lines that might have revealed the presence of old stars. Several hypotheses are discussed for the origin of these intriguing features. Based on observations

  5. A Galerkin method for the estimation of parameters in hybrid systems governing the vibration of flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1985-01-01

    An approximation scheme is developed for the identification of hybrid systems describing the transverse vibrations of flexible beams with attached tip bodies. In particular, problems involving the estimation of functional parameters are considered. The identification problem is formulated as a least squares fit to data subject to the coupled system of partial and ordinary differential equations describing the transverse displacement of the beam and the motion of the tip bodies respectively. A cubic spline-based Galerkin method applied to the state equations in weak form and the discretization of the admissible parameter space yield a sequence of approximating finite dimensional identification problems. It is shown that each of the approximating problems admits a solution and that from the resulting sequence of optimal solutions a convergent subsequence can be extracted, the limit of which is a solution to the original identification problem. The approximating identification problems can be solved using standard techniques and readily available software.

  6. Estimating nonrigid motion from inconsistent intensity with robust shape features

    SciTech Connect

    Liu, Wenyang; Ruan, Dan

    2013-12-15

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided

  7. Merging multiple longitudinal studies with study-specific missing covariates: A joint estimating function approach.

    PubMed

    Wang, Fei; Song, Peter X-K; Wang, Lu

    2015-12-01

    Merging multiple datasets collected from studies with identical or similar scientific objectives is often undertaken in practice to increase statistical power. This article concerns the development of an effective statistical method that enables merging of multiple longitudinal datasets subject to various heterogeneous characteristics, such as different follow-up schedules and study-specific missing covariates (e.g., covariates observed in some studies but missing in other studies). The presence of study-specific missing covariates presents a major methodological challenge in data merging and analysis. We propose a joint estimating function approach to addressing this challenge, in which a novel nonparametric estimating function constructed via a spline-based sieve approximation is utilized to bridge estimating equations from studies with missing covariates to those with fully observed covariates. Under mild regularity conditions, we show that the proposed estimator is consistent and asymptotically normal. We evaluate the finite-sample performance of the proposed method through simulation studies. In comparison to the conventional multiple imputation approach, our method exhibits smaller estimation bias. We provide an illustrative data analysis using longitudinal cohorts collected in Mexico City to assess the effect of lead exposure on children's somatic growth.
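
    The spline-based sieve idea above (approximate an unknown smooth function by a finite spline basis and estimate its coefficients by least squares) can be illustrated with a minimal sketch. The truncated-power cubic basis, knot placement, and test function below are illustrative assumptions, not the paper's actual construction within estimating equations.

```python
import numpy as np

def spline_sieve_fit(x, y, knots):
    """Least-squares fit of an unknown smooth function using a cubic
    truncated-power spline basis -- a simple sieve approximation."""
    # Global cubic polynomial part plus one truncated cubic per knot.
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    B = np.column_stack(cols)                 # design matrix
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef                           # fitted values

# Hypothetical smooth target: recover sin(2*pi*x) from noiseless samples.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)
fit = spline_sieve_fit(x, y, knots=np.linspace(0.1, 0.9, 8))
```

In a sieve framework, the number of knots grows slowly with the sample size so the approximation bias vanishes asymptotically.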

  8. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers from sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors are dramatically reduced. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
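
    The core idea (fit a cubic spline to the cumulative rainfall recorded at the tip times, then differentiate it to obtain instantaneous rain rates) can be sketched as follows. This is a minimal illustration assuming SciPy, not the operational 2A-56 code; the tip times, 0.254 mm bucket size, and output grid are made-up values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tip times (seconds); each tip registers 0.254 mm of rain.
tip_times = np.array([0.0, 60.0, 100.0, 130.0, 155.0, 180.0, 240.0])
tip_depth_mm = 0.254
cum_rain = tip_depth_mm * np.arange(1, len(tip_times) + 1)

# Cubic spline through the cumulative-rainfall curve; its first
# derivative gives the instantaneous rain rate.
spline = CubicSpline(tip_times, cum_rain)
rate_mm_per_s = spline.derivative()

# Sample the rate on a regular one-minute grid, converted to mm/h.
t_grid = np.arange(0.0, 241.0, 60.0)
rates_mm_per_h = rate_mm_per_s(t_grid) * 3600.0
```

Because the spline smooths over the discrete tips, short accumulation intervals (as the abstract notes for one-minute rates at low intensities) remain the most error-prone.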

  9. MUSE crowded field 3D spectroscopy of over 12 000 stars in the globular cluster NGC 6397. I. The first comprehensive HRD of a globular cluster

    NASA Astrophysics Data System (ADS)

    Husser, Tim-Oliver; Kamann, Sebastian; Dreizler, Stefan; Wendt, Martin; Wulff, Nina; Bacon, Roland; Wisotzki, Lutz; Brinchmann, Jarle; Weilbacher, Peter M.; Roth, Martin M.; Monreal-Ibero, Ana

    2016-04-01

    Aims: We demonstrate the high multiplex advantage of crowded field 3D spectroscopy with the new integral field spectrograph MUSE by means of a spectroscopic analysis of more than 12 000 individual stars in the globular cluster NGC 6397. Methods: The stars are deblended with a point spread function fitting technique, using a photometric reference catalogue from HST as prior, including relative positions and brightnesses. This catalogue is also used for a first analysis of the extracted spectra, followed by an automatic in-depth analysis via a full-spectrum fitting method based on a large grid of PHOENIX spectra. Results: We analysed the largest sample so far available for a single globular cluster of 18 932 spectra from 12 307 stars in NGC 6397. We derived a mean radial velocity of vrad = 17.84 ± 0.07 km s-1 and a mean metallicity of [Fe/H] = -2.120 ± 0.002, with the latter seemingly varying with temperature for stars on the red giant branch (RGB). We determine Teff and [Fe/H] from the spectra, and log g from HST photometry. This is the first very comprehensive Hertzsprung-Russell diagram (HRD) for a globular cluster based on the analysis of several thousands of stellar spectra, ranging from the main sequence to the tip of the RGB. Furthermore, two interesting objects were identified; one is a post-AGB star and the other is a possible millisecond-pulsar companion. Data products are available at http://muse-vlt.eu/science. Based on observations obtained at the Very Large Telescope (VLT) of the European Southern Observatory, Paranal, Chile (ESO Programme ID 60.A-9100(C)).

  10. Introduction: Information and Musings

    NASA Astrophysics Data System (ADS)

    Shifman, M.

    The following sections are included: * Victor Frenkel * Background * The Accused * Alexander Leipunsky * Alexander Weissberg * Holodomor * The beginning of the Great Purge * Other foreigners at UPTI * The Ruhemanns * Tisza * Lange * Weisselberg * A detective story * Stalin's order * Yuri Raniuk * Giovanna Fjelstad * Giovanna's story * First time in the USSR * Fisl's humor * Houtermans and Pomeranchuk * Choices to make * Closing gaps * Houtermans and the Communist Party of Germany * Houtermans and von Ardenne * Houtermans' trip to Russia in 1941 * Why Houtermans had to flee from Berlin in 1945 * Houtermans in Göttingen in the 1940's * Denazification * Moving to Bern * Yuri Golfand, the discoverer of supersymmetry * Bolotovsky's and Eskin's essays * Moisei Koretz * FIAN * Additional recommended literature * References

  11. Hiten (Muses-A)

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    First Japanese Moon mission. Launched January 1990. Named Hiten after a Buddhist angel who plays music in heaven. Used to verify the swingby technique by utilizing lunar gravity. Returned engineering data, detected cosmic dust, and released a 12 kg orbiter called Hagoromo.

  12. Musings about Beauty

    ERIC Educational Resources Information Center

    Kintsch, Walter

    2012-01-01

    In this essay, I explore how cognitive science could illuminate the concept of beauty. Two results from the extensive literature on aesthetics guide my discussion. As the term "beauty" is overextended in general usage, I choose as my starting point the notion of "perfect form." Aesthetic theorists are in reasonable agreement about the criteria for…

  13. Estimation of 3D cardiac deformation using spatio-temporal elastic registration of non-scanconverted ultrasound data

    NASA Astrophysics Data System (ADS)

    Elen, An; Loeckx, Dirk; Choi, Hon Fai; Gao, Hang; Claus, Piet; Maes, Frederik; Suetens, Paul; D'hooge, Jan

    2008-03-01

    Current ultrasound methods for measuring myocardial strain are often limited to measurements in one or two dimensions. Spatio-temporal elastic registration of 3D cardiac ultrasound data can, however, be used to estimate the 3D motion and the full 3D strain tensor. In this work, the spatio-temporal elastic registration method was validated for both non-scanconverted and scanconverted images. This was done using simulated 3D pyramidal ultrasound data sets based on a thick-walled deforming ellipsoid and an adapted convolution model. A B-spline-based frame-to-frame elastic registration method was applied to both the scanconverted and non-scanconverted data sets, and the accuracy of the resulting deformation fields was quantified. The mean accuracy of the estimated displacement was very similar for the scanconverted and non-scanconverted data sets. Thus, it was shown that 3D elastic registration to estimate cardiac deformation from ultrasound images can be performed on non-scanconverted images, but that avoiding the scanconversion step does not significantly improve the displacement estimates.

  14. Regenerative Effects of Mesenchymal Stem Cells: Contribution of Muse Cells, a Novel Pluripotent Stem Cell Type that Resides in Mesenchymal Cells.

    PubMed

    Wakao, Shohei; Kuroda, Yasumasa; Ogura, Fumitaka; Shigemoto, Taeko; Dezawa, Mari

    2012-11-08

    Mesenchymal stem cells (MSCs) are easily accessible and safe for regenerative medicine. MSCs exert trophic, immunomodulatory, anti-apoptotic, and tissue regeneration effects in a variety of tissues and organs, but their entity remains an enigma. Because MSCs are generally harvested from mesenchymal tissues, such as bone marrow, adipose tissue, or umbilical cord as adherent cells, MSCs comprise crude cell populations and are heterogeneous. The specific cells responsible for each effect have not been clarified. The most interesting property of MSCs is that, despite being adult stem cells that belong to the mesenchymal tissue lineage, they are able to differentiate into a broad spectrum of cells beyond the boundary of mesodermal lineage cells into ectodermal or endodermal lineages, and repair tissues. The broad spectrum of differentiation ability and tissue-repairing effects of MSCs might be mediated in part by the presence of a novel pluripotent stem cell type recently found in adult human mesenchymal tissues, termed multilineage-differentiating stress enduring (Muse) cells. Here we review recently updated studies of the regenerative effects of MSCs and discuss their potential in regenerative medicine.

  15. The influence of a teaching practicum in a natural science museum on future teachers' sense of self-efficacy in science

    NASA Astrophysics Data System (ADS)

    Deblois, Annick

    This qualitative multi-case study is grounded in the social-cognitive approach of Bandura's (1977) self-efficacy theory. It examines four teaching practicums that took place at the Musée canadien de la nature in Ottawa, Canada, in 2009. Secondary data from the translated and modified STEBI-B questionnaire (Dionne and Couture, 2010), together with semi-structured interviews, allowed an analysis of the change in the student teachers' sense of self-efficacy in science. The most interesting elements of this research are vicarious learning and the opportunity for repetition, which fosters better self-knowledge and a reflective practice. The results, positive overall, illustrate the potential of such a practicum to raise the sense of self-efficacy in science among student teachers, particularly those preparing to teach at the elementary level, since they often have an academic background in a field other than science.

  16. Ubiquitous Giant Lyα Nebulae around the Brightest Quasars at z ∼ 3.5 Revealed with MUSE

    NASA Astrophysics Data System (ADS)

    Borisova, Elena; Cantalupo, Sebastiano; Lilly, Simon J.; Marino, Raffaella A.; Gallego, Sofia G.; Bacon, Roland; Blaizot, Jeremy; Bouché, Nicolas; Brinchmann, Jarle; Carollo, C. Marcella; Caruana, Joseph; Finley, Hayley; Herenz, Edmund C.; Richard, Johan; Schaye, Joop; Straka, Lorrie A.; Turner, Monica L.; Urrutia, Tanya; Verhamme, Anne; Wisotzki, Lutz

    2016-11-01

    Direct Lyα imaging of intergalactic gas at z ∼ 2 has recently revealed giant cosmological structures around quasars, e.g., the Slug Nebula. Despite their high luminosity, the detection rate of such systems in narrow-band and spectroscopic surveys is less than 10%, possibly encoding crucial information on the distribution of gas around quasars and the quasar emission properties. In this study, we use the MUSE integral-field instrument to perform a blind survey for giant Lyα nebulae around 17 bright radio-quiet quasars at 3 < z < 4 that does not suffer from most of the limitations of previous surveys. After data reduction and analysis performed with specifically developed tools, we found that each quasar is surrounded by giant Lyα nebulae with projected sizes larger than 100 physical kiloparsecs and, in some cases, extending up to 320 kpc. The circularly averaged surface brightness profiles of the nebulae appear to be very similar to each other despite their different morphologies and are consistent with power laws with slopes ≈ -1.8. The similarity between the properties of all these nebulae and the Slug Nebula suggests a similar origin for all systems and that a large fraction of gas around bright quasars could be in a relatively “cold” (T ∼ 104 K) and dense phase. In addition, our results imply that such gas is ubiquitous within at least 50 kpc from bright quasars at 3 < z < 4 independently of the quasar emission opening angle, or extending up to 200 kpc for quasar isotropic emission. Based on observations made with ESO Telescopes at the Paranal Observatory under programs 094.A-0396, 095.A-0708, 096.A-0345, 094.A-0131, 095.A-0200, and 096.A-0222.

  17. Fast Simulation of X-ray Projections of Spline-based Surfaces using an Append Buffer

    PubMed Central

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-01-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector, and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640×480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically. Source code is available at http://conrad.stanford.edu/ PMID:22975431
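
    The final step described above (sorted ray-surface intersections resolved to materials, then an absorption model evaluated per projection pixel) amounts to a Beer-Lambert line integral. A minimal single-ray sketch follows; the function name, materials, and attenuation coefficients are hypothetical and unrelated to the paper's OpenCL implementation.

```python
import numpy as np

def transmitted_intensity(intersections, mu, i0=1.0):
    """Beer-Lambert absorption model for one ray.

    intersections: sorted (t_in, t_out) depth pairs (mm) where the ray
    enters and exits the closed surface of each material.
    mu: linear attenuation coefficient per material (1/mm).
    """
    optical_depth = 0.0
    for (t_in, t_out), mu_k in zip(intersections, mu):
        # Accumulate attenuation over the path length inside material k.
        optical_depth += mu_k * (t_out - t_in)
    return i0 * np.exp(-optical_depth)

# Hypothetical ray crossing 10 mm of soft tissue and 4 mm of bone.
intensity = transmitted_intensity([(0.0, 10.0), (10.0, 14.0)],
                                  [0.02, 0.05])
```

Sorting the append-buffer intersections before this evaluation is what guarantees that the entry/exit pairing, and hence each path length, is correct.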

  18. Four-dimensional B-spline-based motion analysis of tagged cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ozturk, Cengizhan; McVeigh, Elliot R.

    1999-05-01

    In recent years, with development of new MRI techniques, noninvasive evaluation of global and regional cardiac function is becoming a reality. One of the methods used for this purpose is MRI tagging. In tagging, spatially encoded magnetic saturation planes, tags, are created within tissues. These act as temporary markers and move with the tissue. In cardiac tagging, tag deformation pattern provides useful qualitative and quantitative information about the functional properties of underlying myocardium. The measured deformation of a single tag plane contains only unidirectional information of the past motion. In order to track the motion of a cardiac material point, this sparse, single dimensional data has to be combined with similar information gathered from other tag sets and all time frames. Previously, several methods have been developed which rely on the specific geometry of the chambers. Here, we employ an image plane based, simple cartesian coordinate system and provide a stepwise method to describe the heart motion using a four-dimensional tensor product of B-splines. The proposed displacement and forward motion fields exhibited sub-pixel accuracy. Since our motion fields are parametric and based on an image plane based coordinate system, trajectories or other derived values (velocity, acceleration, strains...) can be calculated for any desired point on the MRI images. This method is sufficiently general so that the motion of any tagged structure can be tracked.
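
    As a rough illustration of evaluating a spline-represented motion field at arbitrary material points, the sketch below interpolates gridded displacement samples with cubic B-splines via `scipy.ndimage.map_coordinates`. The grid, values, and query points are made up, and the paper's actual model is a 4D tensor product of B-splines fitted to tag-line data; this only shows why a parametric spline field can be queried anywhere on the image plane.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical samples of one displacement component on a coarse
# (x, y, t) grid, in pixels.
rng = np.random.default_rng(0)
disp = rng.normal(scale=0.5, size=(8, 8, 5))

# Query points in grid coordinates, shape (ndim, npoints).
points = np.array([[2.3, 4.7, 1.5],
                   [5.1, 1.2, 3.9]]).T

# order=3 selects cubic B-spline interpolation of the sampled field.
values = map_coordinates(disp, points, order=3, mode='nearest')
```

Derived quantities (velocity, strain) would follow by differentiating the fitted spline rather than the raw samples.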

  19. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    PubMed

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which adopts the properties of B-spline and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase image with enhanced and higher soft tissue contrast compared to conventional absorption-based approach, which can potentially provide useful information to mammographic investigations.
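
    The "phase retrieval as a regularized inverse problem" formulation can be sketched in a toy 1D setting: recover a phase profile f from a measurement of its derivative by Tikhonov-regularized least squares. The finite-difference operator below is a stand-in for the paper's B-spline-calculus discretization, and the function name and regularization weight are illustrative.

```python
import numpy as np

def retrieve_phase(dphase, lam=1e-8):
    """Recover f from its measured derivative g by solving
    min_f ||D f - g||^2 + lam ||f||^2 (Tikhonov regularization).

    D is a forward finite-difference operator; lam fixes the constant
    offset that differentiation destroys.
    """
    n = len(dphase) + 1
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) difference matrix
    A = D.T @ D + lam * np.eye(n)           # regularized normal matrix
    return np.linalg.solve(A, D.T @ dphase)

# Hypothetical noiseless measurement: discrete derivative of f(x) = x^2.
x = np.linspace(0.0, 1.0, 101)
f_true = x**2
g = np.diff(f_true)
f_hat = retrieve_phase(g)
```

In practice the data term would be weighted and the solver iterated with constraints, as in the CRWN algorithm the abstract describes.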

  20. Spline-based image-to-volume registration for three-dimensional electron microscopy.

    PubMed

    Jonić, S; Sorzano, C O S; Thévenaz, P; El-Bez, C; De Carlo, S; Unser, M

    2005-07-01

    This paper presents an algorithm based on a continuous framework for a posteriori angular and translational assignment in three-dimensional electron microscopy (3DEM) of single particles. Our algorithm can be used advantageously to refine the assignment of standard quantized-parameter methods by registering the images to a reference 3D particle model. We achieve the registration by employing a gradient-based iterative minimization of a least-squares measure of dissimilarity between an image and a projection of the volume in the Fourier transform (FT) domain. We compute the FT of the projection using the central-slice theorem (CST). To compute the gradient accurately, we take advantage of a cubic B-spline model of the data in the frequency domain. To improve the robustness of the algorithm, we weight the cost function in the FT domain and apply a "mixed" strategy for the assignment based on the minimum value of the cost function at registration for several different initializations. We validate our algorithm in a fully controlled simulation environment. We show that the mixed strategy improves the assignment accuracy; on our data, the quality of the angular and translational assignment was better than 2 voxels (i.e., 6.54 angstroms). We also test the performance of our algorithm on real EM data. We conclude that our algorithm outperforms a standard projection-matching refinement in terms of both consistency of 3D reconstructions and speed. PMID:15885434
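
    The central-slice theorem that underpins the Fourier-domain matching can be checked numerically: the 1D FT of a parallel projection of an image equals the corresponding central line of the image's 2D FT. A minimal 2D sanity check (a generic illustration, not the paper's 3DEM pipeline):

```python
import numpy as np

# Random test image; any 2D array works for this identity.
rng = np.random.default_rng(1)
image = rng.random((64, 64))

# Project along rows, then take the 1D Fourier transform.
projection = image.sum(axis=0)
ft_projection = np.fft.fft(projection)

# The zero-frequency row of the 2D FT is the matching central slice.
ft_image = np.fft.fft2(image)
central_slice = ft_image[0, :]

assert np.allclose(ft_projection, central_slice)
```

For arbitrary projection angles the slice is an interpolated line through the 2D (or, in 3DEM, 3D) spectrum, which is where the cubic B-spline frequency-domain model earns its keep.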

  1. Robust engineering design optimization with non-uniform rational B-splines-based metamodels

    NASA Astrophysics Data System (ADS)

    Steuben, John C.; Turner, Cameron J.; Crawford, Richard H.

    2013-07-01

    Non-uniform rational B-splines (NURBs) demonstrate properties that make them attractive as metamodels, or surrogate models, for engineering design purposes. Previous research has resulted in the development of algorithms capable of fitting NURBs-based metamodels to engineering design spaces, and optimizing these models. This article presents an approach to robust optimization that employs NURBs-based metamodels. This robust optimization technique exploits the unique structure of NURBs-based metamodels to derive a simple but effective robustness metric. An algorithm is demonstrated that uses this metric to weigh robustness against optimality, and visualizes the trade-offs between these metamodel properties. This approach is demonstrated with test problems of increasing dimensionality, including several practical design challenges.

  2. Accurate recovery of 4D left ventricular deformations using volumetric B-splines incorporating phase based displacement estimates

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.

    2006-03-01

    In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.

  3. Attitude Control Flight Experience: Coping with Solar Radiation and Ion Engines Leak Thrust in Hayabusa (MUSES-C)

    NASA Technical Reports Server (NTRS)

    Kawaguchi, Jun'ichiro; Kominato, Takashi; Shirakawa, Ken'ichi

    2007-01-01

    The paper presents an attitude reorientation strategy that takes advantage of solar radiation pressure without the use of any fuel aboard. The strategy had been adopted to keep the Hayabusa spacecraft pointed toward the Sun for several months while spinning. The paper adds to the above-mentioned results, reported in Sedona this February, another challenge: combining ion engine propulsion, tactically balanced against the solar radiation torque, with no spin motion. This operation has been performed successfully since March for half a year. The flight results are presented together with the estimated solar array panel diffusion coefficient and the ion engine's swirl torque.

  4. Unresolved versus resolved: testing the validity of young simple stellar population models with VLT/MUSE observations of NGC 3603

    NASA Astrophysics Data System (ADS)

    Kuncarayakti, H.; Galbany, L.; Anderson, J. P.; Krühler, T.; Hamuy, M.

    2016-09-01

    Context. Stellar populations are the building blocks of galaxies, including the Milky Way. Most, if not all, extragalactic studies are entangled with the use of stellar population models, given the unresolved nature of their observations. Extragalactic systems contain multiple stellar populations with complex star formation histories. However, studies of these systems are mainly based upon the principles of simple stellar populations (SSP). Hence, it is critical to examine the validity of SSP models. Aims: This work aims to empirically test the validity of SSP models. This is done by comparing SSP models against observations of a spatially resolved young stellar population in the determination of its physical properties, that is, age and metallicity. Methods: Integral field spectroscopy of a young stellar cluster in the Milky Way, NGC 3603, was used to study the properties of the cluster as both a resolved and an unresolved stellar population. The unresolved stellar population was analysed using the Hα equivalent width as an age indicator and the ratio of strong emission lines to infer metallicity. In addition, spectral energy distribution (SED) fitting using STARLIGHT was used to infer these properties from the integrated spectrum. Independently, the resolved stellar population was analysed using the colour-magnitude diagram (CMD) to determine age and metallicity. As the SSP model represents the unresolved stellar population, the derived age and metallicity were tested to determine whether they agree with those derived from the resolved stars. Results: The age and metallicity estimates of NGC 3603 derived from integrated spectroscopy are confirmed to be within the range of those derived from the CMD of the resolved stellar population, including other estimates found in the literature. The result from this pilot study supports the reliability of SSP models for studying unresolved young stellar populations. Based on observations collected at the European Organisation

  5. The Pillars of Creation revisited with MUSE: gas kinematics and high-mass stellar feedback traced by optical spectroscopy

    NASA Astrophysics Data System (ADS)

    McLeod, A. F.; Dale, J. E.; Ginsburg, A.; Ercolano, B.; Gritschneder, M.; Ramsay, S.; Testi, L.

    2015-06-01

    Integral field unit (IFU) data of the iconic Pillars of Creation in M16 are presented. The ionization structure of the pillars was studied in great detail over almost the entire visible wavelength range, and maps of the relevant physical parameters (e.g. extinction, electron density, electron temperature, and line-of-sight velocity of the ionized and neutral gas) are shown. In agreement with previous authors, we find that the pillar tips are being ionized and photoevaporated by the massive members of the nearby cluster NGC 6611. They display a stratified ionization structure where the emission lines peak in a descending order according to their ionization energies. The IFU data allowed us to analyse the kinematics of the photoevaporative flow in terms of the stratified ionization structure, and we find that, in agreement with simulations, the photoevaporative flow is traced by a blueshift in the position-velocity profile. The gas kinematics and ionization structure have allowed us to produce a sketch of the 3D geometry of the Pillars, positioning the pillars with respect to the ionizing cluster stars. We use a novel method to detect a previously unknown bipolar outflow at the tip of the middle pillar and suggest that it has an embedded protostar as its driving source. Furthermore, we identify a candidate outflow in the leftmost pillar. With the derived physical parameters and ionic abundances, we estimate a mass-loss rate due to the photoevaporative flow of 70 M⊙ Myr-1, which yields an expected lifetime of approximately 3 Myr.
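The quoted lifetime is simply the ratio of the pillar mass to the photoevaporative mass-loss rate. A minimal check of the arithmetic, where the ~200 M⊙ total mass is an assumed figure chosen only to be consistent with the numbers quoted in the abstract:

```python
# Back-of-envelope check of the pillar lifetime quoted above.
# The ~200 Msun total mass is an assumption, not a value from the abstract.

def photoevaporation_lifetime(mass_msun, mass_loss_rate_msun_per_myr):
    """Lifetime (Myr) of a gas structure losing mass at a constant rate."""
    return mass_msun / mass_loss_rate_msun_per_myr

lifetime = photoevaporation_lifetime(200.0, 70.0)
print(f"expected lifetime: {lifetime:.1f} Myr")  # ~2.9 Myr, i.e. roughly 3 Myr
```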

  6. An analysis of contrast agent flow patterns from sequential ultrasound images using a motion estimation algorithm based on optical flow patterns.

    PubMed

    Lee, Ju Hwan; Hwang, Yoo Na; Park, Sung Yun; Jeong, Jong Seob; Kim, Sung Min

    2015-01-01

    This study estimates flow patterns of contrast agents from successive ultrasound image sequences by using an anisotropic diffusion-based optical flow algorithm. Before flow fields were recovered, the test sequences were reconstructed using relative composition of structural and textural parts from the original image. To improve estimation performance, an anisotropic diffusion filtering model was embedded into a spline-based slightly nonconvex total variation-L1 minimization algorithm. In addition, an incremental coarse-to-fine warping framework was employed with a linear minimization scheme to account for a large displacement. After each warping iteration, the implementation used intermediate bilateral filtering to prevent oversmoothing across motion boundaries. The performance of the proposed algorithm was tested using three different sequences obtained from two simulated datasets and phantom ultrasound sequences. The results indicate the robust performance of the proposed method under different noise environments. The results of the phantom study also demonstrate reliable performance according to different injection conditions of contrast agents. These experimental results suggest the potential clinical applicability of the proposed algorithm to ultrasonographic diagnosis based on contrast agents.
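The estimator described above layers several components (structure-texture decomposition, anisotropic diffusion filtering, TV-L1 minimization, coarse-to-fine warping). As a runnable point of reference, the following is the classical Horn-Schunck method that variational optical-flow schemes of this kind build on; its quadratic penalties and single scale are the simple baseline the paper's robust, multiscale formulation improves upon:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=0.5, n_iter=500):
    """Classical Horn-Schunck optical flow: minimise the brightness-constancy
    error plus a quadratic smoothness penalty on the flow (u, v). A minimal
    stand-in for the paper's anisotropic-diffusion TV-L1 scheme, which
    replaces both quadratic terms with robust (L1) ones."""
    I1 = np.asarray(I1, float)
    I2 = np.asarray(I2, float)
    # brightness derivatives, averaged over the two frames
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1
    # neighbourhood-average kernel from the original Horn-Schunck scheme
    avg = np.array([[1.0, 2.0, 1.0], [2.0, 0.0, 2.0], [1.0, 2.0, 1.0]]) / 12.0
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg, mode='wrap')
        v_bar = convolve(v, avg, mode='wrap')
        # closed-form Jacobi update of the coupled Euler-Lagrange equations
        coef = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * coef
        v = v_bar - Iy * coef
    return u, v
```

On a periodic test pattern translated by one pixel, the recovered mean flow approaches the true displacement; the TV-L1 variants in the paper target exactly the cases (discontinuities, large displacements, noise) where this baseline fails.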

  7. MUSE crowded field 3D spectroscopy of over 12 000 stars in the globular cluster NGC 6397. II. Probing the internal dynamics and the presence of a central black hole

    NASA Astrophysics Data System (ADS)

    Kamann, S.; Husser, T.-O.; Brinchmann, J.; Emsellem, E.; Weilbacher, P. M.; Wisotzki, L.; Wendt, M.; Krajnović, D.; Roth, M. M.; Bacon, R.; Dreizler, S.

    2016-04-01

    We present a detailed analysis of the kinematics of the Galactic globular cluster NGC 6397 based on more than 18 000 spectra obtained with the novel integral field spectrograph MUSE. While NGC 6397 is often considered a core-collapse cluster, our analysis suggests a flattening of the surface brightness profile at the smallest radii. Although it is among the nearest globular clusters, the low velocity dispersion of NGC 6397 of < 5 km s-1 imposes heavy demands on the quality of the kinematical data. We show that despite its limited spectral resolution, MUSE reaches an accuracy of 1 km s-1 in the analysis of stellar spectra. We find slight evidence for a rotational component in the cluster, and the velocity dispersion profile that we obtain shows a mild central cusp. To investigate the nature of this feature, we calculate spherical Jeans models and compare these models to our kinematical data. This comparison shows that if a constant mass-to-light ratio is assumed, the addition of an intermediate-mass black hole with a mass of 600 M⊙ brings the model predictions into agreement with our data, and therefore could be at the origin of the velocity dispersion profile. We further investigate cases with varying mass-to-light ratios and find that a compact dark stellar component can also explain our observations. However, such a component would closely resemble the black hole from the constant mass-to-light ratio models, as this component must be confined to the central ~5″ of the cluster and must have a similar mass. Independent constraints on the distribution of stellar remnants in the cluster or kinematic measurements at the highest possible spatial resolution should be able to distinguish the two alternatives. Based on observations obtained at the Very Large Telescope (VLT) of the European Southern Observatory, Paranal, Chile (ESO Programme ID 60.A-9100(C))
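The accuracy requirement can be made concrete with a standard moment-based correction: the observed scatter of stellar radial velocities is the intrinsic dispersion broadened by the per-star measurement errors, so errors of ~1 km s-1 must be subtracted in quadrature from a dispersion of < 5 km s-1. A sketch with simulated numbers (a simple moment estimator, not the maximum-likelihood machinery such studies actually use):

```python
import numpy as np

def intrinsic_dispersion(velocities, errors):
    """Moment estimator of a cluster's intrinsic velocity dispersion:
    subtract the mean measurement-error variance from the observed
    variance, clipping at zero."""
    observed_var = np.var(velocities, ddof=1)
    error_var = np.mean(np.asarray(errors) ** 2)
    return np.sqrt(max(observed_var - error_var, 0.0))

rng = np.random.default_rng(0)
n = 5000
true_sigma, err = 4.5, 1.0  # km/s, in the regime of NGC 6397
v = rng.normal(0.0, true_sigma, n) + rng.normal(0.0, err, n)
print(intrinsic_dispersion(v, np.full(n, err)))  # close to 4.5
```

With 1 km s-1 errors the correction is small relative to a 4.5 km s-1 dispersion; larger errors would make the subtraction, and hence the science, far more fragile.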

  8. INFRARED LUMINOSITIES AND AROMATIC FEATURES IN THE 24 μm FLUX-LIMITED SAMPLE OF 5MUSES

    SciTech Connect

    Wu Yanling; Helou, George; Shi Yong E-mail: gxh@ipac.caltech.ed

    2010-11-01

    We study a 24 μm selected sample of 330 galaxies observed with the infrared spectrograph for the 5 mJy Unbiased Spitzer Extragalactic Survey. We estimate accurate total infrared luminosities by combining mid-IR spectroscopy and mid-to-far infrared photometry, and by utilizing new empirical spectral templates from Spitzer data. The infrared luminosities of this sample range mostly from 10^9 L⊙ to 10^13.5 L⊙, with 83% in the range 10^10 L⊙ < L_IR < 10^12 L⊙. The redshifts range from 0.008 to 4.27, with a median of 0.144. The equivalent widths of the 6.2 μm aromatic feature have a bimodal distribution, probably related to selection effects. We use the 6.2 μm polycyclic aromatic hydrocarbon equivalent width (PAH EW) to classify our objects as starburst (SB)-dominated (44%), SB-AGN composite (22%), and active galactic nucleus (AGN)-dominated (34%). The high EW objects (SB-dominated) tend to have steeper mid-IR to far-IR spectral slopes and lower L_IR and redshifts. The low EW objects (AGN-dominated) tend to have less steep spectral slopes and higher L_IR and redshifts. This dichotomy leads to a gross correlation between EW and slope, which does not hold within either group. AGN-dominated sources tend to have lower log(L_PAH 7.7 μm/L_PAH 11.3 μm) ratios than star-forming galaxies, possibly due to preferential destruction of the smaller aromatics by the AGN. The log(L_PAH 7.7 μm/L_PAH 11.3 μm) ratios for star-forming galaxies are lower in our sample than the ratios measured from the nuclear spectra of nearby normal galaxies, most probably indicating a difference in the ionization state or grain size distribution between the nuclear regions and the entire galaxy. Finally, we provide a calibration relating the monochromatic continuum or aromatic feature luminosity to L_IR for different types of objects.
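The EW-based classification amounts to two thresholds on the 6.2 μm PAH equivalent width. A sketch, where the threshold values are illustrative assumptions rather than the paper's exact cuts:

```python
def classify_by_pah_ew(ew_62_um):
    """Classify a mid-IR spectrum by its 6.2 um PAH equivalent width (um).
    The cut values here are illustrative assumptions in the spirit of the
    survey's scheme, not the exact thresholds used in the paper."""
    if ew_62_um > 0.5:
        return "SB-dominated"
    elif ew_62_um > 0.2:
        return "SB-AGN composite"
    return "AGN-dominated"

print(classify_by_pah_ew(0.7))  # SB-dominated
```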

  9. Estimating Eggs

    ERIC Educational Resources Information Center

    Lindsay, Margaret; Scott, Amanda

    2005-01-01

    The authors discuss mass as one of the three fundamental measurements (the others being length and time), noting that estimation of mass is little taught and assessed in primary schools. This article briefly explores the reasons for this in terms of culture, practice, and the difficulty of assessing estimation of mass. An activity using the…

  10. Attitude Estimation or Quaternion Estimation?

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
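The four-component quaternion's appeal is that it maps to an attitude matrix without any singularity. A minimal sketch of that map; the scalar-last component ordering is an assumption (common in the spacecraft literature, but not universal), and the function name is illustrative:

```python
import numpy as np

def quat_to_attitude_matrix(q):
    """Attitude (rotation) matrix from a quaternion q = [q1, q2, q3, q4]
    with the scalar part last -- the globally nonsingular four-parameter
    representation of SO(3) discussed above. Normalisation enforces the
    unit-norm constraint that makes the fourth parameter redundant."""
    q1, q2, q3, q4 = np.asarray(q, float) / np.linalg.norm(q)
    return np.array([
        [q1*q1 - q2*q2 - q3*q3 + q4*q4, 2*(q1*q2 + q3*q4), 2*(q1*q3 - q2*q4)],
        [2*(q1*q2 - q3*q4), q2*q2 - q1*q1 - q3*q3 + q4*q4, 2*(q2*q3 + q1*q4)],
        [2*(q1*q3 + q2*q4), 2*(q2*q3 - q1*q4), q3*q3 - q1*q1 - q2*q2 + q4*q4],
    ])

# identity rotation from the identity quaternion
print(quat_to_attitude_matrix([0.0, 0.0, 0.0, 1.0]))
```

Every output is a proper orthogonal matrix (unit determinant), for any quaternion on the unit sphere, which is precisely the nonsingularity that three-parameter representations lack.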

  11. Musings on the Naked Trucker

    ERIC Educational Resources Information Center

    Abernathy, Jeff

    2007-01-01

    In this article, the author, dean of academic affairs at Augustana College in Illinois, reflects on an alumnus, English major David Allen, who has gained prominence in his field. A photo of the alumnus, wearing nothing but a cap, a pair of boots, and a strategically-placed guitar, appeared on the front page of a local newspaper, under a headline…

  12. Light and enlightenment: some musings

    NASA Astrophysics Data System (ADS)

    Patthoff, Donald D.

    2012-03-01

    In the beginning of the age of enlightenment (or reason), the language of philosophy, science, and theology stemmed equally from the same pens. Many of these early enlightenment authors also applied their thoughts and experiences to practical inventions and entrepreneurship; in the process, they noted and measured different characteristics of light and redirected the use of lenses beyond that of the heat lens, which had been developing for over 2000 years. Within decades, microscopes, telescopes, theodolites, and many variations of the heat lens were well known. These advances rapidly changed and expanded the nature of science, subsequent technology, and many boundary notions; that is, the way boundaries are defined not just in the sense of what is land and commercial property, but also in the sense of which notions of boundary help shape and define society, including the unique role that professions play within society. The advent of lasers in the mid-twentieth century, though, introduced the ability to measure the effects and characteristics of single coherent wavelengths. This also introduced more ways to evaluate the relationship of specific wavelengths of light to other variables and interactions. At the most basic level, the almost revolutionary boundary developments of lasers seem to split down two paths of work: 1) the pursuit of more sophisticated heat lenses having better control over light's destructive and cutting powers and, 2) more nuanced light-based instruments that not only enhanced the powers of observation, but also offered more minute measurement opportunities and subtle treatment capabilities. It is well worth deliberating, then, whether "enlightenment" and "light" might share more than five letters in a row. And, if a common underlying foundation is revealed within these deliberations, is it worth questioning any possible revelations that might arise, or that might bear relevance to today's research and developments in light-based sciences, technology, clinical professions, and other bio applications? And, finally, how might any such insight influence the future of light-based research and its possible application?

  13. Muses on the Gregorian Calendar

    ERIC Educational Resources Information Center

    Staples, Ed

    2013-01-01

    This article begins with an exploration of the origins of the Gregorian Calendar. Next, it describes a function devised by school inspector Christian Zeller (1822-1899) to determine the number of elapsed days of a year up to and including a specified date, and how Zeller's function can be used to determine the number of days that have elapsed in…
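The day-count that Zeller-style functions compute can be illustrated with a plain cumulative-month-length calculation. This is not Zeller's own formula, only an equivalent computation for the same quantity, with the Gregorian leap-year rule included:

```python
def elapsed_days(day, month, year):
    """Number of days of `year` elapsed up to and including the given date.
    A straightforward stand-in for Zeller's function, using explicit month
    lengths and the Gregorian leap-year rule."""
    month_lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    if leap:
        month_lengths[1] = 29
    return sum(month_lengths[:month - 1]) + day

print(elapsed_days(1, 3, 2000))  # 61: 2000 is a Gregorian leap year
```

The century rule is the Gregorian refinement the article's historical discussion turns on: 1900 is not a leap year, while 2000 is.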

  14. Estimating risk.

    PubMed

    2016-07-01

    A free mobile phone app has been launched providing nurses and other hospital clinicians with a simple way to identify high-risk surgical patients. The app is a phone version of the Surgical Outcome Risk Tool (SORT), originally developed for online use with computers by researchers from the National Confidential Enquiry into Patient Outcome and Death and the University College London Hospital Surgical Outcomes Research Centre. SORT uses information about patients' health and planned surgical procedures to estimate the risk of death within 30 days of an operation. The percentages are only estimates, taking into account the general risks of the procedures and some information about patients, and should not be confused with patient-specific estimates in individual cases. PMID:27369709

  15. Computational Estimation

    ERIC Educational Resources Information Center

    Fung, Maria G.; Latulippe, Christine L.

    2010-01-01

    Elementary school teachers are responsible for constructing the foundation of number sense in youngsters, and so it is recommended that teacher-training programs include an emphasis on number sense to ensure the development of dynamic, productive computation and estimation skills in students. To better prepare preservice elementary school teachers…

  16. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

    PubMed

    He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

    2016-04-01

    Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account the dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age, modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture the dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across the life span in different patterns and that the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more
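The core modelling device can be sketched with SciPy: represent a variance component as a clamped cubic B-spline in age, so that the curve's shape is free rather than forced to be linear or quadratic. The knot placement, degree, and coefficient values below are illustrative assumptions, not estimates from any data set:

```python
import numpy as np
from scipy.interpolate import BSpline

# Model one ACE variance component (here the additive genetic variance,
# sigma_A^2) as a smooth B-spline function of age. All numbers are
# illustrative; in the paper's empirical Bayes setting the coefficients
# would be estimated from twin covariance data.
degree = 3
knots = np.concatenate(([18.0] * (degree + 1),   # clamped at age 18
                        [30.0, 45.0, 60.0],      # interior knots
                        [80.0] * (degree + 1)))  # clamped at age 80
coef = np.array([0.60, 0.70, 0.90, 0.80, 0.50, 0.40, 0.35])

genetic_variance = BSpline(knots, coef, degree)  # sigma_A^2 as a function of age
print(genetic_variance(np.array([18.0, 40.0, 70.0])))
```

Clamping makes the curve pass through the first and last coefficients at the age limits; P-splines add a roughness penalty on the coefficients, which is the route the paper takes to control smoothness.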

  17. Comparison of Total Variation with a Motion Estimation Based Compressed Sensing Approach for Self-Gated Cardiac Cine MRI in Small Animal Studies

    PubMed Central

    Marinetto, Eugenio; Pascau, Javier; Desco, Manuel

    2014-01-01

    Purpose Compressed sensing (CS) has been widely applied to prospective cardiac cine MRI. The aim of this work is to study the benefits obtained by including motion estimation in the CS framework for small-animal retrospective cardiac cine. Methods We propose a novel B-spline-based compressed sensing method (SPLICS) that includes motion estimation and generalizes previous spatiotemporal total variation (ST-TV) methods by taking into account motion between frames. In addition, we assess the effect of an optimum weighting between spatial and temporal sparsity to further improve results. Both methods were implemented using the efficient Split Bregman methodology and were evaluated on rat data, comparing animals with myocardial infarction with controls for several acceleration factors. Results ST-TV with optimum selection of the weighting sparsity parameter led to results similar to those of SPLICS; ST-TV with large relative temporal sparsity led to temporal blurring effects. However, SPLICS always properly corrected temporal blurring, independently of the weighting parameter. At an acceleration factor of 15, SPLICS did not distort temporal intensity information but led to some artefacts and slight over-smoothing. At an acceleration factor of 7, images were reconstructed without significant loss of quality. Conclusion We have validated SPLICS for retrospective cardiac cine in small animals, achieving high acceleration factors. In addition, we have shown that motion modelling may not be essential for retrospective cine and that similar results can be obtained by using ST-TV, provided that an optimum selection of the spatiotemporal sparsity weighting parameter is performed. PMID:25350290
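The spatial-versus-temporal sparsity weighting that both ST-TV and SPLICS depend on can be made concrete with a toy objective. The anisotropic (L1) total variation and the single parameter alpha below are illustrative simplifications of the paper's regularizer, not its actual implementation:

```python
import numpy as np

def st_tv(frames, alpha):
    """Weighted spatiotemporal total variation of an image sequence
    (frames: T x H x W). `alpha` in [0, 1] trades spatial against temporal
    sparsity -- the weighting the paper tunes. Anisotropic (L1) TV is used
    here for simplicity."""
    dt = np.abs(np.diff(frames, axis=0)).sum()  # temporal differences
    dy = np.abs(np.diff(frames, axis=1)).sum()  # vertical spatial differences
    dx = np.abs(np.diff(frames, axis=2)).sum()  # horizontal spatial differences
    return alpha * (dx + dy) + (1 - alpha) * dt
```

A static sequence has zero temporal TV regardless of its spatial structure, which is why over-weighting the temporal term (large relative temporal sparsity) pushes reconstructions toward the temporally blurred solutions described in the results.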

  18. Ensemble estimators for multivariate entropy estimation

    PubMed Central

    Sricharan, Kumar; Wei, Dennis; Hero, Alfred O.

    2015-01-01

    The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow of order O(T−γ/d), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, intrinsic dimension estimators and other examples. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension invariant rate of O(T−1). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample. PMID:25897177
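A sketch of the ensemble idea, using the classical Kozachenko-Leonenko k-NN entropy estimator as the base family. The uniform weights below stand in for the optimized weights the paper obtains from an offline convex program:

```python
import numpy as np
from math import lgamma, log, pi
from scipy.spatial import cKDTree
from scipy.special import digamma

def kl_entropy(x, k):
    """Kozachenko-Leonenko k-NN entropy estimator (in nats) for samples
    x of shape (T, d) -- one member of the kind of ensemble the paper
    combines."""
    T, d = x.shape
    dist, _ = cKDTree(x).query(x, k + 1)  # first neighbour is the point itself
    eps = dist[:, k]                      # distance to the k-th true neighbour
    log_unit_ball = (d / 2) * log(pi) - lgamma(d / 2 + 1)
    return digamma(T) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))

def ensemble_entropy(x, ks, weights):
    """Weighted affine combination of base estimators (weights sum to 1);
    the paper chooses the weights to cancel the leading bias terms."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return sum(w * kl_entropy(x, k) for w, k in zip(weights, ks))

rng = np.random.default_rng(0)
x = rng.normal(size=(4000, 1))
h = ensemble_entropy(x, [3, 5, 9], [1 / 3] * 3)
print(h)  # true value: 0.5 * ln(2*pi*e) ~ 1.419
```

With optimized rather than uniform weights, the paper shows the combined estimator attains the parametric O(T^-1) MSE rate that each base estimator misses.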

  19. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. Versatile and flexible tool significantly reduces computation time and errors and reduces typing and reproduction time involved in preparation of cost estimates.

  20. Estimating avian population size using Bowden's estimator

    USGS Publications Warehouse

    Diefenbach, D.R.

    2009-01-01

    Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N ≤ 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≤ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates. © 2009 by The American Ornithologists' Union. All rights reserved.
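Bowden's estimator itself is not reproduced here. As a simpler illustration of the mark-resight logic it generalizes, the following uses Chapman's bias-corrected Lincoln-Petersen estimator, which requires uniquely identified resightings and homogeneous detection, exactly the assumptions Bowden's estimator relaxes:

```python
def chapman_estimate(marked, seen_total, seen_marked):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    a simple stand-in for Bowden's mark-resight estimator, shown only to
    illustrate the mark-resight principle (marked fraction of sightings
    scales up to total population size)."""
    return (marked + 1) * (seen_total + 1) / (seen_marked + 1) - 1

# 50 marked birds; a survey sights 60 birds, of which 30 carry marks
print(chapman_estimate(50, 60, 30))  # ~99.4, i.e. roughly 100 birds
```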

  1. Direct Density Derivative Estimation.

    PubMed

    Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi

    2016-06-01

    Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
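The "naive approach" the letter criticises can be made concrete: fit a kernel density estimate, then differentiate it numerically. A sketch under illustrative choices of data, sample size, and bandwidth:

```python
import numpy as np
from scipy.stats import gaussian_kde

# The two-step approach criticised above: estimate the density first,
# then differentiate the estimate. A bandwidth tuned for a good density
# fit is generally not the best choice for its derivative, which is the
# motivation for the direct estimators proposed in the letter.
rng = np.random.default_rng(1)
sample = rng.normal(size=5000)
kde = gaussian_kde(sample)  # default (Scott's rule) bandwidth

def kde_derivative(x, h=1e-4):
    """Central finite difference of the fitted density."""
    return (kde(x + h) - kde(x - h)) / (2 * h)

# true derivative of the standard normal pdf: p'(x) = -x * p(x)
true = -1.0 * np.exp(-0.5) / np.sqrt(2 * np.pi)  # ~ -0.242 at x = 1
print(kde_derivative(np.array([1.0]))[0], true)
```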

  2. Estimating Local Child Abuse.

    ERIC Educational Resources Information Center

    Ards, Sheila

    1989-01-01

    Three conceptual approaches to estimating local child abuse rates using the National Incidence Study of Child Abuse and Neglect data set are evaluated. All three approaches yield estimates of actual abuse cases that exceed the number of reported cases. (SLD)

  3. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
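A toy version of the maximum likelihood idea sketched above: with Gaussian measurement errors, the negative log-likelihood reduces (up to a constant) to a sum of squared residuals, minimised here for a single decay parameter. The first-order model and all numbers are illustrative, not an aircraft model; only the estimation procedure is mirrored:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated measurements of a first-order response x' = -a*x, x(0) = 1,
# observed with additive Gaussian noise. The cost function below is the
# negative log-likelihood up to an additive constant.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 200)
a_true, sigma = 0.8, 0.01
z = np.exp(-a_true * t) + rng.normal(0.0, sigma, t.size)

def cost(theta):
    """Sum of squared output residuals for candidate parameter a."""
    a = theta[0]
    return np.sum((z - np.exp(-a * t)) ** 2)

a_hat = minimize(cost, x0=[0.1], method="Nelder-Mead").x[0]
print(a_hat)  # close to the true value 0.8
```

Plotting `cost` over a grid of candidate values reproduces, in one dimension, the cost-function surfaces the paper uses to illustrate minimization.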

  4. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
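The abstract lists the inputs IPEG4 processes. A toy unit-price computation over those same cost categories; the formula and the straight-line amortisation period are illustrative assumptions, not IPEG4's actual method or coefficients:

```python
def price_per_unit(equipment, space, labor, materials, utilities, volume,
                   equipment_life_years=7.0):
    """Rough unit-price estimate in the spirit of IPEG-style guidelines:
    annualise the capital cost (straight-line, an assumed 7-year life),
    add the annual operating costs, and divide by annual production volume.
    Purely illustrative; not IPEG4's formula."""
    annual_cost = (equipment / equipment_life_years
                   + space + labor + materials + utilities)
    return annual_cost / volume

# e.g. $700k equipment, $20k space, $150k labor, $90k materials,
# $40k utilities, 100k units/year
print(price_per_unit(700e3, 20e3, 150e3, 90e3, 40e3, 100e3))  # 4.0 ($/unit)
```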

  5. Inertial Estimator Learning Automata

    NASA Astrophysics Data System (ADS)

    Zhang, Junqi; Ni, Lina; Xie, Chen; Gao, Shangce; Tang, Zheng

    This paper presents an inertial estimator learning automata scheme by which both the short-term and long-term perspectives of the environment can be incorporated in the stochastic estimator: the long-term information is crystallized in the running reward-probability estimates, and the short-term information is captured by considering whether the most recent response was a reward or a penalty. Thus, when the short-term perspective is considered, the stochastic estimator becomes pertinent in the context of the estimator algorithms. The proposed automata employ an inertial weight estimator as the short-term perspective to achieve rapid and accurate convergence when operating in stationary random environments. Under the proposed inertial estimator scheme, the estimates of the reward probabilities of actions are affected by the last response from the environment. In this way, actions that have recently received a positive response from the environment have the opportunity to be estimated as "optimal", to increase their choice probability, and consequently to be selected. The estimates become more reliable, and consequently the automaton rapidly and accurately converges to the optimal action. The asymptotic behavior of the proposed scheme is analyzed, and it is proved to be ε-optimal in every stationary random environment. Extensive simulation results indicate that the proposed algorithm converges faster than the traditional stochastic-estimator-based SERI scheme and the deterministic-estimator-based DGPA and DPRI schemes when operating in stationary random environments.
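The classical pursuit scheme that estimator automata of this kind build on can be sketched as follows. The paper's inertial short-term weighting is not reproduced; this shows only the baseline (long-term reward-probability estimates plus a probability vector "pursuing" the best-estimated action), and all parameters are illustrative:

```python
import numpy as np

def pursuit_automaton(reward_probs, steps=500, lam=0.05, init_pulls=5,
                      rng=None):
    """Minimal pursuit-style estimator learning automaton: keep running
    reward-probability estimates (the long-term information) and move the
    action-probability vector a step of size `lam` toward the currently
    best-estimated action. `reward_probs` plays the role of the unknown
    stationary random environment."""
    if rng is None:
        rng = np.random.default_rng(0)
    r = len(reward_probs)
    counts = np.full(r, float(init_pulls))
    rewards = np.array([float(rng.binomial(init_pulls, p))
                        for p in reward_probs])  # initial exploration
    p = np.full(r, 1.0 / r)
    for _ in range(steps):
        a = rng.choice(r, p=p)
        rewards[a] += rng.random() < reward_probs[a]
        counts[a] += 1
        best = np.argmax(rewards / counts)       # best-estimated action
        e = np.zeros(r)
        e[best] = 1.0
        p = (1 - lam) * p + lam * e              # pursue the estimated optimum
    return p

print(pursuit_automaton([1.0, 0.0]))  # probability mass concentrates on action 0
```

The inertial scheme in the paper modifies how `rewards / counts` reacts to the most recent response, so that recently rewarded actions get a transient boost in the estimates.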

  6. Fuel Burn Estimation Model

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  7. The Attica Muse: Lessons from Prison.

    ERIC Educational Resources Information Center

    Dippel, Stewart A.

    1992-01-01

    Discusses a college level liberal arts educational program in Attica (NY) Prison. Maintains that the prisoner students work harder and complain less than traditional college students. Discusses techniques used in the prison that might be effective in regular college instruction. (CFR)

  8. Molecular musings in microbial ecology and evolution

    PubMed Central

    2011-01-01

    A few major discoveries have influenced how ecologists and evolutionists study microbes. Here, in the format of an interview, we answer questions that directly relate to how these discoveries are perceived in these two branches of microbiology, and how they have impacted on both scientific thinking and methodology. The first question is "What has been the influence of the 'Universal Tree of Life' based on molecular markers?" For evolutionists, the tree was a tool to understand the past of known (cultured) organisms, mapping the invention of various physiologies on the evolutionary history of microbes. For ecologists the tree was a guide to discover the current diversity of unknown (uncultured) organisms, without much knowledge of their physiology. The second question we ask is "What was the impact of discovering frequent lateral gene transfer among microbes?" In evolutionary microbiology, frequent lateral gene transfer (LGT) made a simple description of relationships between organisms impossible, and for microbial ecologists, functions could not be easily linked to specific genotypes. Both fields initially resisted LGT, but methods or topics of inquiry were eventually changed in one to incorporate LGT in its theoretical models (evolution) and in the other to achieve its goals despite that phenomenon (ecology). The third and last question we ask is "What are the implications of the unexpected extent of diversity?" The variation in the extent of diversity between organisms invalidated the universality of species definitions based on molecular criteria, a major obstacle to the adaptation of models developed for the study of macroscopic eukaryotes to evolutionary microbiology. This issue has not overtly affected microbial ecology, as it had already abandoned species in favor of the more flexible operational taxonomic units. This field is nonetheless moving away from traditional methods to measure diversity, as they do not provide enough resolution to uncover what lies below the species level. The answers of the evolutionary microbiologist and microbial ecologist to these three questions illustrate differences in their theoretical frameworks. These differences mean that both fields can react quite distinctly to the same discovery, incorporating it with more or less difficulty in their scientific practice. Reviewers This article was reviewed by W. Ford Doolittle, Eugene V. Koonin and Maureen A. O'Malley. PMID:22074255

  9. Musings on the Internet, Part 2

    ERIC Educational Resources Information Center

    Cerf, Vinton G.

    2004-01-01

    In this article, the author discusses the role of higher education research and development (R&D)--particularly R&D into the issues and problems that industry is less able to explore. In addition to high-speed computer communication, broadband networking efforts, and the use of fiber, a rich service environment is equally important and is…

  10. Musings on Willower's "Fog": A Response.

    ERIC Educational Resources Information Center

    English, Fenwick

    1998-01-01

    Professor Willower complains about the "fog" encountered in postmodernist literature and the author's two articles in "Journal of School Leadership." On closer examination, this miasma is simply the mildew on Willower's Cartesian glasses. Educational administration continues to substitute management and business fads for any real effort to create…

  11. Musings on Critical Thinking (Middle Ground).

    ERIC Educational Resources Information Center

    van Allen, Lanny

    1995-01-01

    Gives suggestions for fostering critical thinking skills among English students. Summarizes the views and theories of several educators, all of whom participated in a critical thinking conference in Boston in the summer of 1994. (HB)

  12. Musings: "Hasten Slowly:" Thoughtfully Planned Acceleration

    ERIC Educational Resources Information Center

    Gross, Miraca U. M.

    2008-01-01

    Acceleration is one of the best researched interventions for gifted students. The author is an advocate of acceleration. However, advocating for the thoughtful, carefully judged employment of a procedure with well researched effectiveness does not imply approval of cases where the procedure is used without sufficient thought--especially where it…

  13. Transits of Venus and Mercury as muses

    NASA Astrophysics Data System (ADS)

    Tobin, William

    2013-11-01

    Transits of Venus and Mercury have inspired artistic creation of all kinds. After having been the first to witness a Venusian transit, in 1639, Jeremiah Horrocks expressed his feelings in poetry. Production has subsequently widened to include songs, short stories, novels, novellas, sermons, theatre, film, engravings, paintings, photography, medals, sculpture, stained glass, cartoons, stamps, music, opera, flower arrangements, and food and drink. Transit creations are reviewed, with emphasis on the English- and French-speaking worlds. It is found that transits of Mercury inspire much less creation than those of Venus, despite being much more frequent, and arguably of no less astronomical significance. It is suggested that this is primarily due to the mythological associations of Venus with sex and love, which are more powerful and gripping than Mercury's mythological role as a messenger and protector of traders and thieves. The lesson for those presenting the night sky to the public is that sex sells.

  14. Zen Musings on Bion's "O" and "K".

    PubMed

    Cooper, Paul C

    2016-08-01

    The author defines Bion's use of "O" and "K" and discusses both from the radical nondualist realizational perspective available through the lens of Eihei Dogen's (1200-1253) Soto Zen Buddhist orientation. Fundamental differences in core foundational principles are discussed as well as similarities and their relevance to clinical practice. A case example exemplifies and explicates the abstract aspects of the discussion, which draws from Zen teaching stories, reference to Dogen's original writings, and the scholarly commentarial literature as well as from contemporary writers who integrate Zen Buddhist study and practice with Bion's psychoanalytic writings on theory and technique.

  15. Estimating Airline Operating Costs

    NASA Technical Reports Server (NTRS)

    Maddalon, D. V.

    1978-01-01

    The factors affecting commercial aircraft operating and delay costs were used to develop an airline operating cost model which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model permits estimates of aircraft-related costs, e.g., aircraft service, landing fees, flight attendants, and control fees. A method for estimating the costs of certain types of airline delay is also described.

  16. Estimating Prices of Products

    NASA Technical Reports Server (NTRS)

    Aster, R. W.; Chamberlain, R. G.; Zendejas, S. C.; Lee, T. S.; Malhotra, S.

    1986-01-01

    Company-wide or process-wide production simulated. Price Estimation Guidelines (IPEG) program provides simple, accurate estimates of prices of manufactured products. Simplification of SAMIS allows analyst with limited time and computing resources to perform greater number of sensitivity studies. Although developed for photovoltaic industry, readily adaptable to standard assembly-line type of manufacturing industry. IPEG program estimates annual production price per unit. IPEG/PC program written in TURBO PASCAL.

  17. Updated Conceptual Cost Estimating

    NASA Technical Reports Server (NTRS)

    Brown, J. A.

    1987-01-01

    16-page report discusses development and use of NASA TR-1508, the Kennedy Space Center Aerospace Construction Price Book for preparing conceptual, budget, funding, cost-estimating, and preliminary cost-engineering reports. Updated annually from 1974 through 1985 with actual bid prices and government estimates. Includes labor and material quantities and prices with contractor and subcontractor markups for buildings, facilities, and systems at Kennedy Space Center. While data pertains to aerospace facilities, format and cost-estimating techniques guide estimation of costs in other construction applications.

  18. Reservoir Temperature Estimator

    SciTech Connect

    Palmer, Carl D.

    2014-12-08

    The Reservoir Temperature Estimator (RTEst) is a program that can be used to estimate deep geothermal reservoir temperature and chemical parameters such as CO2 fugacity based on the water chemistry of shallower, cooler reservoir fluids. This code uses the plugin features provided in The Geochemist’s Workbench (Bethke and Yeakel, 2011) and interfaces with the model-independent parameter estimation code Pest (Doherty, 2005) to provide for optimization of the estimated parameters based on the minimization of the weighted sum of squares of a set of saturation indexes from a user-provided mineral assemblage.
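The objective function described above can be sketched as follows; the function name and weighting scheme are illustrative only, not RTEst's actual interface:

```python
import numpy as np

def wss_objective(saturation_indices, weights):
    """Weighted sum of squares of saturation indices: zero when every
    mineral in the assemblage is exactly at equilibrium (SI = 0)."""
    si = np.asarray(saturation_indices, float)
    w = np.asarray(weights, float)
    return float(np.sum((w * si) ** 2))
```

An optimizer such as Pest then adjusts temperature and fugacity until this objective is minimized.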

  19. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  20. Reservoir Temperature Estimator

    2014-12-08

    The Reservoir Temperature Estimator (RTEst) is a program that can be used to estimate deep geothermal reservoir temperature and chemical parameters such as CO2 fugacity based on the water chemistry of shallower, cooler reservoir fluids. This code uses the plugin features provided in The Geochemist’s Workbench (Bethke and Yeakel, 2011) and interfaces with the model-independent parameter estimation code Pest (Doherty, 2005) to provide for optimization of the estimated parameters based on the minimization of the weighted sum of squares of a set of saturation indexes from a user-provided mineral assemblage.

  1. Estimating Health Services Requirements

    NASA Technical Reports Server (NTRS)

    Alexander, H. M.

    1985-01-01

    In computer program NOROCA, population statistics from the National Center for Health Statistics used with computational procedure to estimate health service utilization rates, physician demands (by specialty) and hospital bed demands (by type of service). Computational procedure applicable to health service area of any size and even used to estimate statewide demands for health services.

  2. Estimating synchronization signal phase

    NASA Astrophysics Data System (ADS)

    Lyons, Robert G.; Lord, John D.

    2015-03-01

    To read a watermark from printed images requires that the watermarking system read correctly after affine distortions. One way to recover from affine distortions is to add a synchronization signal in the Fourier frequency domain and use this synchronization signal to estimate the applied affine distortion. Using the Fourier Magnitudes one can estimate the linear portion of the affine distortion. To estimate the translation one must first estimate the phase of the synchronization signal and then use phase correlation to estimate the translation. In this paper we provide a new method to measure the phase of the synchronization signal using only the data from the complex Fourier domain. This data is used to compute the linear portion, so it is quite convenient to estimate the phase without further data manipulation. The phase estimation proposed in this paper is computationally simple and provides a significant computational advantage over previous methods while maintaining similar accuracy. In addition, the phase estimation formula gives a general way to interpolate images in the complex frequency domain.
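The translation-recovery step described above can be sketched with standard FFT-based phase correlation; this is a generic version of the technique, not the authors' proposed phase estimator:

```python
import numpy as np

def estimate_translation(ref, shifted):
    """Estimate an integer translation between two images by phase
    correlation: keep only the phase of the cross-power spectrum and
    locate the resulting correlation peak."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(shifted)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12        # discard magnitudes, keep phase
    corr = np.real(np.fft.ifft2(cross))   # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices past N/2 around to negative shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```

For a circularly shifted test image the peak location is exact; real printed-and-scanned images additionally need windowing and sub-pixel peak interpolation.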

  3. Automated Estimating System (AES)

    SciTech Connect

    Holder, D.A.

    1989-09-01

    This document describes Version 3.1 of the Automated Estimating System, a personal computer-based software package designed to aid in the creation, updating, and reporting of project cost estimates for the Estimating and Scheduling Department of the Martin Marietta Energy Systems Engineering Division. Version 3.1 of the Automated Estimating System is capable of running in a multiuser environment across a token ring network. The token ring network makes possible services and applications that will more fully integrate all aspects of information processing, provides a central area for large data bases to reside, and allows access to the data base by multiple users. Version 3.1 of the Automated Estimating System also has been enhanced to include an Assembly pricing data base that may be used to retrieve cost data into an estimate. A WBS Title File program has also been included in Version 3.1. The WBS Title File program allows for the creation of a WBS title file that has been integrated with the Automated Estimating System to provide WBS titles in update mode and in reports. This provides for consistency in WBS titles and provides the capability to display WBS titles on reports generated at a higher WBS level.

  4. Optimizing qubit phase estimation

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François

    2016-08-01

    The theory of quantum state estimation is exploited here to investigate the most efficient strategies for this task, especially targeting a complete picture identifying optimal conditions in terms of Fisher information, quantum measurement, and associated estimator. The approach is specified to estimation of the phase of a qubit in a rotation around an arbitrary given axis, equivalent to estimating the phase of an arbitrary single-qubit quantum gate, both in noise-free and then in noisy conditions. In noise-free conditions, we establish the possibility of defining an optimal quantum probe, optimal quantum measurement, and optimal estimator together capable of achieving the ultimate best performance uniformly for any unknown phase. With arbitrary quantum noise, we show that in general the optimal solutions are phase dependent and require adaptive techniques for practical implementation. However, for the important case of the depolarizing noise, we again establish the possibility of a quantum probe, quantum measurement, and estimator uniformly optimal for any unknown phase. In this way, for qubit phase estimation, without and then with quantum noise, we characterize the phase-independent optimal solutions when they generally exist, and also identify the complementary conditions where the optimal solutions are phase dependent and only adaptively implementable.

  5. Estimating airline operating costs

    NASA Technical Reports Server (NTRS)

    Maddalon, D. V.

    1978-01-01

    A review was made of the factors affecting commercial aircraft operating and delay costs. From this work, an airline operating cost model was developed which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model, similar in some respects to the standard Air Transport Association of America (ATA) Direct Operating Cost Model, permits estimates of aircraft-related costs not now included in the standard ATA model (e.g., aircraft service, landing fees, flight attendants, and control fees). A study of the cost of aircraft delay was also made and a method for estimating the cost of certain types of airline delay is described.

  6. Estimating cell populations

    NASA Technical Reports Server (NTRS)

    White, B. S.; Castleman, K. R.

    1981-01-01

    An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
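The correction described above can be sketched as a linear inversion; the confusion-matrix values here are hypothetical:

```python
import numpy as np

# Hypothetical 2-class confusion matrix: rows = true class, cols = assigned
# class, C[i, j] = P(classifier outputs j | cell is truly of class i).
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def unbiased_proportions(observed, C):
    """Correct the classifier's output proportions for its known error
    rates: observed = C.T @ true, so solve the transposed system."""
    return np.linalg.solve(C.T, observed)

true_p = np.array([0.7, 0.3])
observed = C.T @ true_p                   # what the imperfect classifier reports
print(unbiased_proportions(observed, C))  # recovers [0.7, 0.3]
```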

  7. Supernova frequency estimates

    SciTech Connect

    Tsvetkov, D.Y.

    1983-01-01

    Estimates of the frequency of type I and II supernovae occurring in galaxies of different types are derived from observational material acquired by the supernova patrol of the Shternberg Astronomical Institute.

  8. Estimation of food consumption

    SciTech Connect

    Callaway, J.M. Jr.

    1992-04-01

    The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.

  9. Efficient Bayesian Phase Estimation.

    PubMed

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method. PMID:27419551
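A minimal sketch of a single rejection-filtering update, assuming the standard iterative-phase-estimation likelihood; the sample count and failure-recovery rule are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(outcome, phi, m, theta):
    # Assumed iterative-phase-estimation model: P(0 | phi) = cos^2(m*(phi - theta)/2)
    p0 = np.cos(m * (phi - theta) / 2.0) ** 2
    return p0 if outcome == 0 else 1.0 - p0

def rejection_filter_update(mu, sigma, outcome, m, theta, n_samples=2000):
    """One Bayesian update: sample the Gaussian prior, keep each sample
    with probability equal to its likelihood, refit a Gaussian."""
    phi = rng.normal(mu, sigma, n_samples)
    accept = rng.random(n_samples) < likelihood(outcome, phi, m, theta)
    kept = phi[accept]
    if kept.size < 2:               # recover from failure: broaden the prior
        return mu, 2.0 * sigma
    return kept.mean(), kept.std()
```

Because the posterior is summarized by only a mean and a standard deviation, the update is cheap enough for the online, hardware-friendly use the abstract describes.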

  10. Efficient Bayesian Phase Estimation

    NASA Astrophysics Data System (ADS)

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.

  11. Cost-Estimation Program

    NASA Technical Reports Server (NTRS)

    Cox, Brian

    1995-01-01

    COSTIT computer program estimates cost of electronic design by reading item-list file and file containing cost for each item. Accuracy of cost estimate based on accuracy of cost-list file. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. The Sun version (NPO-19587). PC version (NPO-19157).

  12. Capital cost estimate

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The capital cost estimate for the nuclear process heat source (NPHS) plant was made by: (1) using costs from the current commercial HTGR for electricity production as a base for items that are essentially the same and (2) development of new estimates for modified or new equipment that is specifically for the process heat application. Results are given in tabular form and cover the total investment required for each process temperature studied.

  13. Maximal combustion temperature estimation

    NASA Astrophysics Data System (ADS)

    Golodova, E.; Shchepakina, E.

    2006-12-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.

  14. Estimating networks with jumps

    PubMed Central

    Kolar, Mladen; Xing, Eric P.

    2013-01-01

    We study the problem of estimating a temporally varying coefficient and varying structure (VCVS) graphical model underlying data collected over a period of time, such as social states of interacting individuals or microarray expression profiles of gene networks, as opposed to i.i.d. data from an invariant model widely considered in current literature of structural estimation. In particular, we consider the scenario in which the model evolves in a piece-wise constant fashion. We propose a procedure that estimates the structure of a graphical model by minimizing the temporally smoothed L1 penalized regression, which allows jointly estimating the partition boundaries of the VCVS model and the coefficient of the sparse precision matrix on each block of the partition. A highly scalable proximal gradient method is proposed to solve the resultant convex optimization problem; and the conditions for sparsistent estimation and the convergence rate of both the partition boundaries and the network structure are established for the first time for such estimators. PMID:25013533

  15. Single snapshot DOA estimation

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.

  16. Estimating temporary populations.

    PubMed

    Smith, S K

    1994-01-01

    The difficulty of tracking temporary short-term population movements (commuting, seasonal visitation, convention and business travel) is examined, with a focus on Hawaiian statistician Robert Schmitt's work. The author finds that "Schmitt's contributions toward a methodology for estimating daytime populations were important because this approach utilized data sources that were widely available for small areas on at least an annual basis. Consequently, this approach could be used for frequent updates of the estimates, for many areas and at relatively little cost.... The major drawback of the approach is the lack of solid data on temporary residents to serve as larger-area control totals and as a historical base for small-area estimates." The geographical focus is on the United States, particularly Hawaii.

  17. Hierarchical number estimation.

    PubMed

    Friedenberg, Jay; Limratana, William

    2005-01-01

    We investigated number estimation using dot patterns grouped by proximity into larger clusters. Participants estimated the number of dots and clusters in separate trials. Estimation was most accurate when the numbers of elements on both scales were the same. When the number of elements on the unattended scale was higher, overestimation occurred. Conversely, when the number of elements on the unattended scale was lower, underestimation occurred. In Experiment 2, response cues were blocked to reduce any tendency toward attending the irrelevant level. The results were essentially unchanged, indicating response confusion alone cannot account for the effect. The data support the existence of an opposite scale effect in which the number of elements at the unattended level influence the processing of number.

  18. Risk estimates for bone

    SciTech Connect

    Schlenker, R.A.

    1981-01-01

    The primary sources of information on the skeletal effects of internal emitters in humans are the US radium cases with occupational and medical exposures to ²²⁶,²²⁸Ra and the German patients injected with ²²⁴Ra primarily for treatment of ankylosing spondylitis and tuberculosis. During the past decade, dose-response data from both study populations have been used by committees, e.g., the BEIR committees, to estimate risks at low dose levels. NCRP Committee 57 and its task groups are now engaged in making risk estimates for internal emitters. This paper presents brief discussions of the radium data, the results of some new analyses and suggestions for expressing risk estimates in a form appropriate to radiation protection.

  19. Parameter Estimation with Ignorance

    NASA Astrophysics Data System (ADS)

    Du, H.; Smith, L. A.

    2012-04-01

    Parameter estimation in nonlinear models is a common task, and one for which there is no general solution at present. In the case of linear models, the distribution of forecast errors provides a reliable guide to parameter estimation, but in nonlinear models the facts that (1) predictability may vary with location in state space, and that (2) the distribution of forecast errors is expected not to be Normal, suggests that parameter estimates based on least squares methods may be systematically biased. Parameter estimation for nonlinear systems based on variations in the accuracy of probability forecasts is considered. Empirical results for several chaotic systems (the Logistic Map, the Henon Map and the 12-D Lorenz96 flow) are presented at various noise levels and sampling rates. Selecting parameter values by minimizing Ignorance, a proper local skill score for continuous probability forecasts as a function of the parameter values is easier to implement in practice than alternative nonlinear methods based on the geometry of attractors, the ability of the model to shadow the observations or model synchronization. As expected, it is more effective when the forecast error distributions are non-Gaussian. The goal of parameter estimation is not defined uniquely when the model class is imperfect. In short, the desired parameter values can be expected to be a function of the application for which they are determined. Parameter estimation in this imperfect model scenario is also discussed. Initial experiments suggest that our approach is also useful for identifying "best" parameter in an imperfect model as long as the notion of "best" is well defined. The information deficit, defined as the difference between the Empirical Ignorance and Implied Ignorance can be used to identify remaining forecast system inadequacy, in both perfect and imperfect model scenario.
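The Ignorance score itself is simple to state; a minimal sketch with made-up forecast probabilities:

```python
import numpy as np

def ignorance(p_verified):
    """Mean Ignorance: -log2 of the probability each forecast assigned
    to the outcome that actually occurred (lower is better)."""
    return float(np.mean(-np.log2(np.asarray(p_verified, float))))

# Hypothetical forecast systems: A put more probability on what happened.
p_a = [0.8, 0.7, 0.9]
p_b = [0.4, 0.5, 0.3]
assert ignorance(p_a) < ignorance(p_b)
```

Parameter estimation then amounts to choosing the parameter values whose forecast system attains the smallest mean Ignorance over the verification set.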

  20. Thermodynamic estimation: Ionic materials

    SciTech Connect

    Glasser, Leslie

    2013-10-15

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
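The Volume-Based Thermodynamics correlation mentioned above can be sketched as a one-line linear fit; the coefficients below are the literature values for anhydrous ionic solids as recalled here, so treat them as assumptions and check the original correlation before use:

```python
def entropy_vbt(v_m_nm3, k=1360.0, c=15.0):
    """Volume-Based Thermodynamics sketch: standard entropy S(298) in
    J/(K*mol) estimated as a linear function of the formula-unit
    volume in nm^3. Coefficients k and c are assumed literature values
    for anhydrous ionic solids."""
    return k * v_m_nm3 + c

# NaCl formula-unit volume ~0.0449 nm^3 gives an estimate near the
# measured S(298) of ~72 J/(K*mol).
print(round(entropy_vbt(0.0449), 1))  # prints 76.1
```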

  1. Parametric Hazard Function Estimation.

    1999-09-13

    Version 00 Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
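For the constant-hazard (exponential) model the maximum likelihood estimator has a closed form; a minimal sketch (the linear and Weibull models require numerical optimization):

```python
def constant_hazard_mle(n_failures, total_exposure_time):
    """MLE of a constant hazard rate for a repairable component
    (homogeneous Poisson process): failures per unit of exposure."""
    return n_failures / total_exposure_time

# 4 failures observed over 1000 hours of service -> 0.004 per hour
assert constant_hazard_mle(4, 1000.0) == 0.004
```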

  2. Estimating hyperconcentrated flow discharge

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2012-02-01

    Determining flow discharge in torrential mountain floods can help in managing flood risk. However, standard methods of estimating discharge have significant uncertainties. To reduce these uncertainties, Bodoque et al. developed an iterative methodological approach to flow estimation based on a method known as the critical depth method along with paleoflood evidence. They applied the method to study a flash flood that occurred on 17 December 1997 in the Arroyo Cabrera catchment in central Spain. This large flow event, triggered by torrential rains, was complex and included hyperconcentrated flows, which are flows of water mixed with significant amounts of sediment.

  3. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on a locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like beam propagation method, inherently omit the wave spectrum and do not provide physical insight into nonlinear modes especially in the plasmonic applications, where nonlinear modes are constructed by linear modes with very close propagation constant quantities. Our locally modal B-spline finite element method (LMBS-FEM) does not suffer from the weakness of the conventional approaches. To validate our method, first, propagation of wave for various kinds of linear, nonlinear, lossless and lossy materials of metal-insulator plasmonic structures are simulated using LMBS-FEM in MATLAB and the comparisons are made with FEM-BPM module of COMSOL Multiphysics simulator and B-spline finite-element finite-difference wide angle beam propagation method (BSFEFD-WABPM). The comparisons show that not only our developed numerical approach is computationally more accurate and efficient than conventional approaches but also it provides physical insight into the nonlinear nature of the propagation modes.

  4. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  5. Estimating Gender Wage Gaps

    ERIC Educational Resources Information Center

    McDonald, Judith A.; Thornton, Robert J.

    2011-01-01

    Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…

  6. Estimating Cloud Cover

    ERIC Educational Resources Information Center

    Moseley, Christine

    2007-01-01

    The purpose of this activity was to help students understand the percentage of cloud cover and make more accurate cloud cover observations. Students estimated the percentage of cloud cover represented by simulated clouds and assigned a cloud cover classification to those simulations. (Contains 2 notes and 3 tables.)

  7. Traffic Flow Estimates.

    ERIC Educational Resources Information Center

    Hart, Vincent G.

    1981-01-01

    Two examples are given of ways traffic engineers estimate traffic flow. The first, Floating Car Method, involves some basic ideas and the notion of relative velocity. The second, Maximum Traffic Flow, is viewed to involve simple applications of calculus. The material provides insight into specialized applications of mathematics. (MP)

  8. Numerical Estimation in Preschoolers

    ERIC Educational Resources Information Center

    Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco

    2010-01-01

    Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…

  9. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  10. Estimating Thermoelectric Water Use

    NASA Astrophysics Data System (ADS)

    Hutson, S. S.

    2012-12-01

    In 2009, the Government Accountability Office recommended that the U.S. Geological Survey (USGS) and the Department of Energy-Energy Information Administration (DOE-EIA) jointly improve their thermoelectric water-use estimates. Since then, the annual mandatory reporting forms returned by powerplant operators to DOE-EIA have been revised twice to improve the water data. At the same time, the USGS began improving its estimation of withdrawal and consumption. Because the amount and quality of water-use data vary across powerplants, the USGS adopted a hierarchy of methods for estimating water withdrawal and consumptive use for the approximately 1,300 water-using powerplants in the thermoelectric sector. About 800 of these powerplants have generation and cooling data; the remaining 500 have generation data only, or sparse data. The preferred method is to accept DOE-EIA data following validation. This is the traditional USGS method and the best method if all operators follow best practices for measurement and reporting. However, in 2010, fewer than 200 powerplants reported thermodynamically realistic values of both withdrawal and consumption. Secondly, water use was estimated using linked heat and water budgets for the first group of 800 plants, and for some of the other 500 powerplants where data were sufficient for at least partial modeling using plant characteristics, electric generation, and fuel use. Thermodynamics, environmental conditions, and characteristics of the plant and cooling system constrain both the amount of heat discharged to the environment and the share of this heat that drives evaporation. Heat and water budgets were used to define reasonable estimates of withdrawal and consumption, including likely upper and lower thermodynamic limits. These results were used to validate the reported values at the 800 plants with water-use data, and reported values were replaced by budget estimates at most of these plants. Thirdly, at plants without valid…

  11. Estimating extragalactic Faraday rotation

    NASA Astrophysics Data System (ADS)

    Oppermann, N.; Junklewitz, H.; Greiner, M.; Enßlin, T. A.; Akahori, T.; Carretti, E.; Gaensler, B. M.; Goobar, A.; Harvey-Smith, L.; Johnston-Hollitt, M.; Pratley, L.; Schnitzeler, D. H. F. M.; Stil, J. M.; Vacca, V.

    2015-03-01

    Observations of Faraday rotation for extragalactic sources probe magnetic fields both inside and outside the Milky Way. Building on our earlier estimate of the Galactic contribution, we set out to estimate the extragalactic contributions. We discuss the problems involved; in particular, we point out that taking the difference between the observed values and the Galactic foreground reconstruction is not a good estimate for the extragalactic contributions. We point out a degeneracy between the contributions to the observed values due to extragalactic magnetic fields and observational noise and comment on the dangers of over-interpreting an estimate without taking into account its uncertainty information. To overcome these difficulties, we develop an extended reconstruction algorithm based on the assumption that the observational uncertainties are accurately described for a subset of the data, which can overcome the degeneracy with the extragalactic contributions. We present a probabilistic derivation of the algorithm and demonstrate its performance using a simulation, yielding a high quality reconstruction of the Galactic Faraday rotation foreground, a precise estimate of the typical extragalactic contribution, and a well-defined probabilistic description of the extragalactic contribution for each data point. We then apply this reconstruction technique to a catalog of Faraday rotation observations for extragalactic sources. The analysis is done for several different scenarios, for which we consider the error bars of different subsets of the data to accurately describe the observational uncertainties. By comparing the results, we argue that a split that singles out only data near the Galactic poles is the most robust approach. We find that the dispersion of extragalactic contributions to observed Faraday depths is most likely lower than 7 rad/m², in agreement with earlier results, and that the extragalactic contribution to an individual data point is poorly…

  12. Magnetic nanoparticle temperature estimation

    SciTech Connect

    Weaver, John B.; Rauwerdink, Adam M.; Hansen, Eric W.

    2009-05-15

    The authors present a method of measuring the temperature of magnetic nanoparticles that can be adapted to provide in vivo temperature maps. Many of the minimally invasive therapies that promise to reduce health care costs and improve patient outcomes heat tissue to very specific temperatures to be effective. Measurements are required because physiological cooling, primarily blood flow, makes the temperature difficult to predict a priori. The ratio of the fifth and third harmonics of the magnetization generated by magnetic nanoparticles in a sinusoidal field is used to generate a calibration curve and to subsequently estimate the temperature. The calibration curve is obtained by varying the amplitude of the sinusoidal field. The temperature can then be estimated from any subsequent measurement of the ratio. The accuracy was 0.3 K between 20 and 50 °C using the current apparatus and half-second measurements. The method is independent of nanoparticle concentration and nanoparticle size distribution.
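    The harmonic-ratio idea lends itself to a compact illustration. The sketch below is not the authors' code; the drive frequency, sample rate, and harmonic amplitudes are invented. It extracts the fifth-to-third harmonic amplitude ratio of a sampled magnetization signal with an FFT:

```python
import numpy as np

def harmonic_ratio(signal, f0, fs):
    """Ratio of the 5th to 3rd harmonic amplitude of a periodic signal.

    signal: sampled magnetization; f0: drive frequency (Hz); fs: sample rate (Hz).
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # pick the bin nearest each odd harmonic of the drive frequency
    a3 = spectrum[np.argmin(np.abs(freqs - 3 * f0))]
    a5 = spectrum[np.argmin(np.abs(freqs - 5 * f0))]
    return a5 / a3

# synthetic magnetization with known odd-harmonic content
fs, f0 = 100_000.0, 1_000.0
t = np.arange(0, 0.5, 1.0 / fs)   # half-second measurement window, as in the abstract
m = (np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))
ratio = harmonic_ratio(m, f0, fs)   # → 0.25 for these amplitudes
```

    In the actual method, this ratio would be read against the calibration curve, obtained by sweeping the drive-field amplitude, to yield a temperature.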

  13. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs based on these data are also included.
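    The report's FORTRAN 77 code is not reproduced here, but Burg's algorithm is the classic Maximum Entropy spectral estimator; the following sketch (illustrative only; the AR order and test signal are arbitrary choices) shows the recursion and the resulting all-pole spectrum:

```python
import numpy as np

def burg_psd(x, order, nfreq=512):
    """Maximum-entropy (Burg) power spectral density estimate."""
    x = np.asarray(x, dtype=float)
    f = x.copy()          # forward prediction errors
    b = x.copy()          # backward prediction errors
    a = np.array([1.0])   # AR polynomial coefficients
    e = np.mean(x**2)     # prediction error power
    for _ in range(order):
        fk, bk = f[1:], b[:-1]
        # reflection coefficient minimizing forward+backward error power
        k = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a, [0.0]])[::-1]
        f, b = fk + k * bk, bk + k * fk
        e *= (1.0 - k**2)
    freqs = np.linspace(0, 0.5, nfreq)      # cycles/sample
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
    return freqs, e / np.abs(z @ a) ** 2

# demo: noisy sinusoid at 0.1 cycles/sample; the MEM spectrum should peak there
rng = np.random.default_rng(0)
n = np.arange(256)
x = np.sin(2 * np.pi * 0.1 * n) + 0.05 * rng.standard_normal(n.size)
freqs, psd = burg_psd(x, order=8)
peak = freqs[np.argmax(psd)]
```

    The AR order trades resolution against variance: higher orders resolve closer peaks, which is criterion (1) in the abstract, at the cost of a noisier estimate, criterion (3).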

  14. Demographic estimates and projections.

    PubMed

    El-badry, M A; Kono, S

    1986-01-01

    The periodic assessment of global population growth from the past to the future has been one of the UN's most important contributions to member states and many other users. Available data and applicable analysis and projection methods were very limited in 1947, when the 1st global population estimates and projections were attempted. The 1st contributions of the Commission were manuals for these functions. Throughout the 1950s, 4 regional reports on Central and South America, Southeast Asia, and Asia and the Far East were published. UN studies during this period tended to group regions by their position on a continuum of the demographic transition. Rough but alarming projections of population growth appeared. Projection techniques were refined and standardized in the 1960s, and the demand grew for more specialized techniques, e.g., those dealing with urban/rural populations, the labor force, and other elements. The availability of computer technology at the end of the decade multiplied projection capabilities, and the total population projections for the future were larger than ever. The 1970s projections, based on the more accurate and widely covered baseline data which had become available in developing countries, were also aided by more powerful and innovative indirect estimation techniques, better software, and computers with larger capacities. By 1982, only a few countries were left with a total lack of data. A revision of estimates and projections is now undertaken biennially, incorporating the latest available data and utilizing advanced analytical methods and computer technology. Methodological manuals have been produced as a by-product of the revisions. UN demographic estimates and projections could be further improved by the injection of a probabilistic element and the inclusion of economic factors. Roles for the future include maintenance of regional and interregional comparability of assumptions.

  15. Optimal Centroid Position Estimation

    SciTech Connect

    Candy, J V; McClay, W A; Awwal, A S; Ferguson, S W

    2004-07-23

    The alignment of high energy laser beams for potential fusion experiments demands high precision and accuracy from the underlying positioning algorithms. This paper discusses the feasibility of employing online optimal position estimators in the form of model-based processors to achieve the desired results. Here we discuss the modeling, development, implementation, and processing of model-based processors applied to both simulated and actual beam line data.
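    The abstract does not specify the model-based processors themselves. As a point of reference, the simplest position estimator for a beam image is the intensity-weighted centroid, sketched below (the array sizes and Gaussian spot are invented for illustration):

```python
import numpy as np

def weighted_centroid(image):
    """Intensity-weighted centroid (x, y) of a 2-D beam image."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (xs * image).sum() / total, (ys * image).sum() / total

# symmetric Gaussian spot centered at (12.0, 7.5) on a 32x24 grid
ys, xs = np.indices((24, 32))
spot = np.exp(-((xs - 12.0)**2 + (ys - 7.5)**2) / (2 * 3.0**2))
cx, cy = weighted_centroid(spot)   # close to (12.0, 7.5)
```

    A model-based processor would replace this static, frame-by-frame estimate with one that tracks the centroid through a dynamic model of the beam line, which is what allows it to be optimal in the presence of noise.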

  16. Estimating Distances from Parallaxes

    NASA Astrophysics Data System (ADS)

    Bailer-Jones, Coryn A. L.

    2015-10-01

    Astrometric surveys such as Gaia and LSST will measure parallaxes for hundreds of millions of stars. Yet they will not measure a single distance. Rather, a distance must be estimated from a parallax. In this didactic article, I show that doing this is not trivial once the fractional parallax error is larger than about 20%, which will be the case for about 80% of stars in the Gaia catalog. Estimating distances is an inference problem in which the use of prior assumptions is unavoidable. I investigate the properties and performance of various priors and examine their implications. A supposedly uninformative uniform prior in distance is shown to give very poor distance estimates (large bias and variance). Any prior with a sharp cut-off at some distance has similar problems. The choice of prior depends on the information one has available—and is willing to use—concerning, e.g., the survey and the Galaxy. I demonstrate that a simple prior which decreases asymptotically to zero at infinite distance has good performance, accommodates nonpositive parallaxes, and does not require a bias correction.
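    The recommended approach can be sketched numerically. The snippet below is an illustration with invented numbers, using an exponentially decreasing space density prior as one example of a prior that falls smoothly to zero at infinite distance; it combines that prior with a Gaussian parallax likelihood:

```python
import numpy as np

def distance_posterior(parallax, sigma, r_max=30.0, length_scale=1.35, n=20000):
    """Normalized posterior over distance r (kpc) for a measured parallax (mas).

    Prior: exponentially decreasing space density, p(r) ∝ r² exp(-r / L),
    which decreases smoothly to zero and tolerates non-positive parallaxes.
    """
    r = np.linspace(1e-4, r_max, n)
    prior = r**2 * np.exp(-r / length_scale)
    likelihood = np.exp(-0.5 * ((parallax - 1.0 / r) / sigma) ** 2)
    post = prior * likelihood
    return r, post / (post.sum() * (r[1] - r[0]))   # normalize on the grid

# 10% fractional parallax error: the posterior mode sits near 1/parallax
r, post = distance_posterior(parallax=2.0, sigma=0.2)
mode = r[np.argmax(post)]
```

    For a 10% fractional error the mode lands close to 1/parallax; as the fractional error grows past ~20%, the prior increasingly dominates and a simple 1/parallax inversion (even with a bias correction) becomes unreliable, which is the article's point.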

  17. Estimating directional epistasis.

    PubMed

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions, a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis in a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences.

  18. Valid lower bound for all estimators in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2016-09-01

    The widely used quantum Cramér–Rao bound (QCRB) sets a lower bound on the mean square error of unbiased estimators in quantum parameter estimation; in general, however, the QCRB is tight only in the asymptotic limit. With a limited number of measurements, biased estimators can perform far better than the QCRB can account for. Here we introduce a valid lower bound for all estimators, biased or unbiased, which can serve as a standard of merit for all quantum parameter estimations.

  19. Los Alamos PC estimating system

    SciTech Connect

    Stutz, R.A.; Lemon, G.D.

    1987-01-01

    The Los Alamos Cost Estimating System (QUEST) is being converted to run on IBM personal computers. This very extensive estimating system is capable of supporting cost estimators from many different and varied fields. QUEST does not dictate any fixed estimating method; it supports many styles and levels of detail, and can be used with or without data bases. The system allows the estimator to produce reports based on levels of detail defined by combining work breakdown structures. QUEST provides a set of tools for doing any type of estimate without forcing the estimator to use any given method. The level of detail in the estimate can be mixed, based on the amount of information known about different parts of the project. The system can support many different data bases simultaneously, and estimators can modify any cost in any data base.

  20. ESTIM: A parameter estimation computer program: Final report

    SciTech Connect

    Hills, R.G.

    1987-08-01

    The computer code ESTIM enables subroutine versions of existing simulation codes to be used to estimate model parameters. Nonlinear least-squares techniques are used to find the parameter values that result in a best fit between measurements made in the simulation domain and the simulation code's prediction of these measurements. ESTIM uses the nonlinear least-squares code DQED (Hanson and Krogh, 1982) to handle the optimization aspects of the estimation problem. In addition to providing weighted least-squares estimates, ESTIM provides a propagation-of-variance analysis. A subroutine version of COYOTE (Gartling, 1982) is provided. The use of ESTIM with COYOTE allows one to estimate the thermal property model parameters that result in the best agreement (in a least-squares sense) between internal temperature measurements and COYOTE's predictions of those measurements. We demonstrate the use of ESTIM through several example problems that use the subroutine version of COYOTE.
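    ESTIM couples DQED to a simulation subroutine. The stand-in below uses Gauss-Newton instead of DQED and a toy exponential cooling model instead of COYOTE (all names and numbers here are invented), but it shows the same estimate-plus-variance pattern:

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, p0, iters=20):
    """Minimize ||r(p)||² by Gauss-Newton; also return a covariance for p."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual_fn(p), jac_fn(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    r, J = residual_fn(p), jac_fn(p)
    dof = len(r) - len(p)
    # propagation-of-variance estimate: residual variance times (JᵀJ)⁻¹
    cov = np.linalg.inv(J.T @ J) * (r @ r) / dof
    return p, cov

# toy "simulation": exponential temperature decay T(t) = T0 * exp(-k t)
t = np.linspace(0, 5, 40)
rng = np.random.default_rng(0)
data = 80.0 * np.exp(-0.7 * t) + rng.normal(0, 0.5, t.size)

def residual(p):
    T0, k = p
    return T0 * np.exp(-k * t) - data

def jacobian(p):
    T0, k = p
    return np.column_stack([np.exp(-k * t), -T0 * t * np.exp(-k * t)])

p_hat, cov = gauss_newton(residual, jacobian, [70.0, 0.6])
```

    The covariance returned alongside the fitted parameters plays the role of ESTIM's propagation-of-variance analysis, quantifying how measurement noise maps into parameter uncertainty.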

  1. Estimation of Lung Ventilation

    NASA Astrophysics Data System (ADS)

    Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.

    Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or, with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to these data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing them to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, for tracking the progression of toxicity to nearby normal tissue during RT, and for evaluating the effectiveness of a treatment post-therapy. This chapter reviews basic measures for estimating regional ventilation from image registration of CT images, their comparison to the existing gold standard, and their application in radiation therapy.
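    The local tissue expansion mentioned above is commonly measured as the Jacobian determinant of the registration deformation: J > 1 marks local expansion, J < 1 local contraction. A minimal sketch, with a synthetic displacement field and voxel units assumed isotropic:

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3-D displacement field.

    disp: array of shape (3, nx, ny, nz) giving displacement in voxel units.
    """
    # grads[i][j] = d(disp_i)/d(axis_j), shape (3, 3, nx, ny, nz)
    grads = np.array([np.gradient(disp[i]) for i in range(3)])
    # deformation gradient F = I + du/dx at each voxel
    F = np.transpose(grads, (2, 3, 4, 0, 1)) + np.eye(3)
    return np.linalg.det(F)

# uniform 10% expansion along each axis: J should be 1.1³ ≈ 1.331 everywhere
shape = (8, 8, 8)
coords = np.indices(shape).astype(float)
disp = 0.1 * coords            # u(x) = 0.1 x
J = jacobian_determinant(disp)
```

    On real registration output, maps of J computed this way are what get compared against nuclear medicine or wash-in/wash-out ventilation measurements.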

  2. Misalignment estimation software system

    NASA Technical Reports Server (NTRS)

    Desjardins, R. L.

    1973-01-01

    A system of computer software, spacecraft, and ground system activity is described that enables spacecraft star trackers and inertial assemblies to be aligned and calibrated from the ground after the spacecraft has achieved orbit. The system generates in the uplink flow an exercise designed to render misalignments visible and sends the exercise to the spacecraft, where the misalignments enter the information in the form of attitude sensor error. The information is downlinked for processing into misalignment estimates used to correct the spacecraft model in the data base.

  3. Estimating turbine limit load

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1993-01-01

    A method for estimating turbine limit-load pressure ratio from turbine map information is presented and demonstrated. It is based on a mean line analysis at the last-rotor exit. The required map information includes choke flow rate at all speeds as well as pressure ratio and efficiency at the onset of choke at design speed. One- and two-stage turbines are analyzed to compare the results with those from a more rigorous off-design flow analysis and to show the sensitivities of the computed limit-load pressure ratios to changes in the key assumptions.

  4. Estimating separation efficiency

    SciTech Connect

    Juska, J.W.

    1984-12-01

    Packed columns are receiving renewed interest for large-scale vapor-liquid operations such as distillation, absorption, and stripping. Packings offer the advantages of low cost and low pressure drop. Unfortunately, there are only a few generalized methods available in the open literature for estimating the height of packing equivalent to a theoretical plate (HETP). These methods are empirical and supported by vendor advice. The performance data published by universities are often obtained using small (less than ten inches in diameter) columns and with packings that are not industrially important. When commercial-scale data are published, they usually are not supported by analysis or generalization.

  5. Phenological Parameters Estimation Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites are processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or an equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.
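    The TSPT's void-filling step can be illustrated in a few lines. The sketch below is not the actual MATLAB tool (the function name, box smoothing, and toy series are invented; the real TSPT uses per-pixel MODIS quality flags): it interpolates across low-quality samples and then smooths the result:

```python
import numpy as np

def fill_and_smooth(ndvi, quality, window=3):
    """Fill low-quality NDVI samples by linear interpolation, then box-smooth.

    ndvi: 1-D vegetation index time series for one pixel.
    quality: boolean mask, True where the sample is usable.
    """
    idx = np.arange(ndvi.size)
    filled = np.interp(idx, idx[quality], ndvi[quality])  # bridge the voids
    kernel = np.ones(window) / window
    return np.convolve(filled, kernel, mode="same")

# toy series with one bad (e.g., cloud-contaminated) sample
ndvi = np.array([0.2, 0.3, np.nan, 0.5, 0.6])
smoothed = fill_and_smooth(ndvi, ~np.isnan(ndvi))
```

    A continuous, smoothed series like this is what PPET then mines for phenological parameters such as green-up timing.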

  6. Automated Estimating System

    1996-04-15

    AES6.1 is a PC software package developed to aid in the preparation and reporting of cost estimates. AES6.1 provides an easy means for entering and updating the detailed cost, schedule information, project work breakdown structure, and escalation information contained in a typical project cost estimate through the use of menus and formatted input screens. AES6.1 combines this information to calculate both unescalated and escalated cost for a project, which can be reported at varying levels of detail. The following are the major modifications to AES6.0f: Contingency Update was modified to provide greater flexibility for user updates; Schedule Update was modified to give the user the ability to schedule Bills of Material at the WBS/Participant/Cost Code level; Schedule Plot was modified to graphically show schedule by WBS/Participant/Cost Code; all fiscal year reporting was modified to use the new schedule format; the Schedule 1-B-7, Cost Schedule, and WBS/Participant reports were modified to determine Phase of Work from the B/M Cost Code; the Utility program was modified to allow selection by cost code and to update cost code in the Global Schedule update; generic summary and line-item download were added to the Utility program; and an option was added to all reports that allows the user to indicate where overhead is to be reported (bottom line or in body of report).

  7. Semimajor Axis Estimation Strategies

    NASA Technical Reports Server (NTRS)

    How, Jonathan P.; Alfriend, Kyle T.; Breger, Louis; Mitchell, Megan

    2004-01-01

    This paper extends previous analysis on the impact of sensing noise for the navigation and control aspects of formation flying spacecraft. We analyze the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters, with a particular focus on the filter correlation coefficient. This work was motivated by previous publications which suggested that a "good" navigation filter would have a strong correlation (i.e., coefficient near -1) to reduce the semimajor axis (SMA) error, and therefore, the overall fuel use. However, practical experience with CDGPS-based filters has shown this strong correlation seldom occurs (typical correlations approx. -0.1), even when the estimation accuracies are very good. We derive an analytic estimate of the filter correlation coefficient and demonstrate that, for the process and sensor noises levels expected with CDGPS, the expected value will be very low. It is also demonstrated that this correlation can be improved by increasing the time step of the discrete Kalman filter, but since the balance condition is not satisfied, the SMA error also increases. These observations are verified with several linear simulations. The combination of these simulations and analysis provide new insights on the crucial role of the process noise in determining the semimajor axis knowledge.

  8. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
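    The baseline the paper improves upon is easy to demonstrate. The sketch below is not the paper's sparsity-exploiting estimator; it shows the standard approach (invert the sample covariance, with the usual Hartlap debiasing factor) for an invented sparse tridiagonal precision matrix, whose error shrinks only at the 1/√N_sim rate:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 20
# true precision matrix: sparse tridiagonal (nearby bins correlate, distant ones do not)
true_prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
true_cov = np.linalg.inv(true_prec)

def sample_precision_error(n_sim):
    """Frobenius error of the debiased sample precision matrix from n_sim draws."""
    draws = rng.multivariate_normal(np.zeros(p), true_cov, size=n_sim)
    sample_cov = np.cov(draws, rowvar=False)
    # Hartlap factor corrects the inverse-Wishart bias of the inverted sample covariance
    prec = (n_sim - p - 2) / (n_sim - 1) * np.linalg.inv(sample_cov)
    return np.linalg.norm(prec - true_prec)

# quadrupling the ensemble size roughly halves the error (the 1/sqrt(N) rate)
errors = [sample_precision_error(n) for n in (100, 400, 1600)]
```

    The paper's method fits only the nonzero entries of a matrix like `true_prec`, which is how it reaches a given error level with over an order of magnitude fewer simulations.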

  9. Nonparametric Estimators for Incomplete Surveys

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2016-11-01

    Nonparametric estimators, such as the 1/V_max estimator and the C⁻ estimator, have been applied extensively to estimate luminosity functions (LFs) of astronomical sources from complete, truncated survey data sets. Application of such estimators to incomplete data sets typically requires further truncation of data, separation into subsets of constant completeness, and/or correction for incompleteness-induced bias. In this paper, we derive generalizations of the above estimators designed for use with incomplete, truncated data sets. We compare these generalized nonparametric estimators, investigate some of their simple statistical properties, and validate them using Monte Carlo simulation methods. We apply a nonparametric estimator to data obtained from the extended Baryon Oscillation Spectroscopic Survey to estimate the QSO LF for redshifts 0.68 < z < 4.
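    For reference, the classical (complete-survey) 1/V_max estimator that the paper generalizes can be written compactly. The sketch below assumes Euclidean volumes and invented bin edges, and ignores the incompleteness corrections that are the paper's subject:

```python
import numpy as np

def vmax_lf(luminosities, distances, d_limit_fn, area_sr, lum_bins):
    """Classical 1/Vmax luminosity function from a flux-limited sample.

    d_limit_fn(L): maximum distance at which a source of luminosity L
    still passes the survey flux limit (Euclidean geometry assumed).
    """
    phi = np.zeros(len(lum_bins) - 1)
    for L, d in zip(luminosities, distances):
        d_max = d_limit_fn(L)
        v_max = area_sr / 3.0 * d_max**3   # survey volume in which the source is detectable
        k = np.searchsorted(lum_bins, L) - 1
        if 0 <= k < len(phi) and d <= d_max:
            phi[k] += 1.0 / v_max          # each source weighted by its accessible volume
    return phi / np.diff(lum_bins)         # density per unit luminosity

# single toy source detectable out to d_max = 2, so it contributes 1/Vmax = 1/8
lum_bins = np.array([0.0, 1.0, 2.0])
phi = vmax_lf([0.5], [1.0], lambda L: 2.0, area_sr=3.0, lum_bins=lum_bins)
```

    The generalized estimators in the paper replace the sharp detectable/undetectable volume with one weighted by the survey's completeness function.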

  10. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment following strong earthquakes, with worldwide systems applied in emergency mode. Timely and correct action just after an event can result in significant benefits in saving lives. In this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and offering humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the disaster scope. Uncertainties on the parameters used in the estimation process are numerous and large: knowledge about physical phenomena and uncertainties on the parameters used to describe them; global adequacy of modeling techniques to the actual physical phenomena; actual distribution of the population at risk at the very time of the shaking (with respect to immediate threat: buildings or the like); knowledge about the source of shaking, etc. One need not be a sharp specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); if a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is not precisely known, by far. The paper analyzes the influence of uncertainties in strong-event parameters determined by alert seismological surveys, and of the simulation models used at all stages, from estimating shaking intensity…

  11. AN ESTIMATE OF THE DETECTABILITY OF RISING FLUX TUBES

    SciTech Connect

    Birch, A. C.; Braun, D. C.; Fan, Y.

    2010-11-10

    The physics of the formation of magnetic active regions (ARs) is one of the most important problems in solar physics. One main class of theories suggests that ARs are the result of magnetic flux that rises from the tachocline. Time-distance helioseismology, which is based on measurements of wave propagation, promises to allow the study of the subsurface behavior of this magnetic flux. Here, we use a model for a buoyant magnetic flux concentration together with the ray approximation to show that the dominant effect on the wave propagation is expected to be from the roughly 100 m s⁻¹ retrograde flow associated with the rising flux. Using a B-spline-based method for carrying out inversions of wave travel times for flows in spherical geometry, we show that at 3 days before emergence the detection of this retrograde flow at a depth of 30 Mm should be possible with a signal-to-noise level of about 8 with a sample of 150 emerging ARs.

  12. Estimating multipartite entanglement measures

    SciTech Connect

    Osterloh, Andreas; Hyllus, Philipp

    2010-02-15

    We investigate the lower bound obtained from experimental data of a quantum state ρ, as proposed independently by O. Guehne et al. [Phys. Rev. Lett. 98, 110502 (2007)] and J. Eisert et al. [New J. Phys. 9, 46 (2007)], and apply it to mixed states of three qubits. The measure we consider is the convex-roof extended three-tangle. Our findings highlight an intimate relation to lower bounds obtained recently from so-called characteristic curves of a given entanglement measure. We apply the bounds to estimate the three-tangle present in recently performed experiments aimed at producing a three-qubit Greenberger-Horne-Zeilinger (GHZ) state. A nonvanishing lower bound is obtained if the GHZ fidelity of the produced states is larger than 3/4.

  13. Ramjet cost estimating handbook

    NASA Technical Reports Server (NTRS)

    Emmons, H. T.; Norwood, D. L.; Rasmusen, J. E.; Reynolds, H. E.

    1978-01-01

    Research conducted under Air Force Contract F33615-76-C-2043 to generate cost data and to establish a cost methodology that accurately predicts the production costs of ramjet engines is presented. The cost handbook contains descriptions of over one hundred and twenty-five different components, which are defined as baseline components. The cost estimator selects from the handbook the appropriate components to fit his ramjet assembly, computes the cost from cost computation data sheets in the handbook, and totals all of the appropriate cost elements to arrive at the total engine cost. The methodology described in the cost handbook addresses many different ramjet types, from simple podded arrangements of the liquid-fuel ramjet to more complex integral rocket/ramjet configurations, including solid-fuel ramjets and solid ducted rockets. It is applicable to a range of sizes from 6 to 18 inches in diameter and to production quantities up to 5000 engines.

  14. Uncertainties in transpiration estimates.

    PubMed

    Coenders-Gerrits, A M J; van der Ent, R J; Bogaard, T A; Wang-Erlandsson, L; Hrachowitz, M; Savenije, H H G

    2014-02-13

    Arising from S. Jasechko et al. Nature 496, 347-350 (2013); doi:10.1038/nature11983. How best to assess the respective importance of plant transpiration over evaporation from open waters, soils and short-term storage such as tree canopies and understories (interception) has long been debated. On the basis of data from lake catchments, Jasechko et al. conclude that transpiration accounts for 80-90% of total land evaporation globally (Fig. 1a). However, another choice of input data, together with more conservative accounting of the related uncertainties, reduces and widens the transpiration ratio estimate to 35-80%. Hence, climate models do not necessarily conflict with observations, but more measurements on the catchment scale are needed to reduce the uncertainty range. There is a Reply to this Brief Communications Arising by Jasechko, S. et al. Nature 506, http://dx.doi.org/10.1038/nature12926 (2014).

  15. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  16. Precipitation Estimates for Hydroelectricity

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco J.; Hou, Arthur Y.; de Castro, Manuel; Checa, Ramiro; Cuartero, Fernando; Barros, Ana P.

    2011-01-01

    Hydroelectric plants require precise and timely estimates of rain, snow and other hydrometeors for operations. However, it is far from being a trivial task to measure and predict precipitation. This paper presents the linkages between precipitation science and hydroelectricity, and in doing so it provides insight into current research directions that are relevant for this renewable energy. Methods described include radars, disdrometers, satellites and numerical models. Two recent advances that have the potential of being highly beneficial for hydropower operations are featured: the Global Precipitation Measurement (GPM) mission, which represents an important leap forward in precipitation observations from space, and high performance computing (HPC) and grid technology, which allow building ensembles of numerical weather and climate models.

  17. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  18. Estimating many variances

    SciTech Connect

    Robbins, H.

    1981-01-01

    Suppose that an unknown random parameter theta with distribution function G is such that, given theta, an observable random variable x has conditional probability density f(x | theta) of known form. If a function t = t(x) is used to estimate theta, then the expected squared error with respect to the random variation of both theta and x is E(t - theta)^2 = ∫∫ (t(x) - theta)^2 f(x | theta) dx dG(theta). For fixed G we can seek to minimize this quantity within any desired class of functions t, such as the class of all linear functions A + Bx, or the class of all Borel functions whatsoever.
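
    A minimal numerical sketch of the minimization over the linear class t(x) = A + Bx, under an assumed normal-normal model (the model choice, prior variance, and sample size below are illustrative, not from the abstract):

```python
import numpy as np

# Illustrative setup (not from the abstract): theta ~ N(0, tau^2), x | theta ~ N(theta, 1).
# For the linear class t(x) = A + B*x, minimizing E(t - theta)^2 gives
# B = Cov(theta, x)/Var(x) = tau^2/(tau^2 + 1) and A = 0 for this model.
rng = np.random.default_rng(0)
tau2 = 4.0
theta = rng.normal(0.0, np.sqrt(tau2), size=200_000)
x = theta + rng.normal(size=theta.size)

# Empirical least-squares fit of t(x) = A + B*x against theta.
B_hat = np.cov(x, theta)[0, 1] / np.var(x)
A_hat = theta.mean() - B_hat * x.mean()

B_theory = tau2 / (tau2 + 1.0)  # optimal linear slope for this model
print(A_hat, B_hat, B_theory)
```

    For this particular model the optimal linear rule coincides with the posterior mean, so restricting to linear functions loses nothing; for other choices of G the Borel class can do strictly better.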

  19. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  20. Strategies for Estimating Discrete Quantities.

    ERIC Educational Resources Information Center

    Crites, Terry W.

    1993-01-01

    Describes the benchmark and decomposition-recomposition estimation strategies and presents five techniques to develop students' estimation ability. Suggests situations involving quantities of candy and popcorn in which the teacher can model those strategies for the students. (MDH)

  1. Historical Tank Content Estimate (HTCE) and sampling estimate comparisons

    SciTech Connect

    Remund, K.M.; Chen, G.; Hartley, S.A.

    1995-11-01

    There has been a substantial effort over the years to characterize the waste content in Hanford's waste tanks. This characterization is vital to future efforts to retrieve, pretreat, and dispose of the waste in the proper manner. The present study is being conducted to help advance this effort. This study compares estimates from two independent tank characterization approaches. One approach is based on tank sampling while the other is based on historical records. In order to statistically compare the two independent approaches, quantified variabilities (or uncertainty estimates) around the estimates of the mean concentrations are required. For the sampling-based estimates, the uncertainty estimates are provided in the Tank Characterization Reports (TCRs). However, the historically based estimates are determined from a model, and therefore possess no quantified variabilities. Steps must be taken to provide quantified variabilities for these estimates. These steps involve a parameter influence study (factorial experiment study) and an uncertainty analysis (Monte Carlo study) of the Historical Tank Content Estimate (HTCE). The purpose of the factorial experiment is to identify in the Hanford Defined Wastes (HDW) model which parameters, as they vary, have the largest effect on the HTCE. The results of this study provide the proper input parameters for the Monte Carlo study. The two estimates (HTCE and sampling-based) can then be compared. The purpose of the Monte Carlo study is to provide estimates of variability around the estimate derived from the historical records.

  2. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  3. Estimating concurrence via entanglement witnesses

    SciTech Connect

    Jurkowski, Jacek; Chruscinski, Dariusz

    2010-05-15

    We show that each entanglement witness detecting a given bipartite entangled state provides an estimation of its concurrence. We illustrate our result with several well-known examples of entanglement witnesses and compare the corresponding estimation of concurrence with other estimations provided by the trace norm of partial transposition and realignment.

  4. Tree Topology Estimation

    PubMed Central

    Estrada, Rolando; Tomasi, Carlo; Schmidler, Scott C.; Farsiu, Sina

    2015-01-01

    Tree-like structures are fundamental in nature, and it is often useful to reconstruct the topology of a tree—what connects to what—from a two-dimensional image of it. However, the projected branches often cross in the image: the tree projects to a planar graph, and the inverse problem of reconstructing the topology of the tree from that of the graph is ill-posed. We regularize this problem with a generative, parametric tree-growth model. Under this model, reconstruction is possible in linear time if one knows the direction of each edge in the graph—which edge endpoint is closer to the root of the tree—but becomes NP-hard if the directions are not known. For the latter case, we present a heuristic search algorithm to estimate the most likely topology of a rooted, three-dimensional tree from a single two-dimensional image. Experimental results on retinal vessel, plant root, and synthetic tree datasets show that our methodology is both accurate and efficient. PMID:26353004

  5. Tree Topology Estimation.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Schmidler, Scott C; Farsiu, Sina

    2015-08-01

    Tree-like structures are fundamental in nature, and it is often useful to reconstruct the topology of a tree - what connects to what - from a two-dimensional image of it. However, the projected branches often cross in the image: the tree projects to a planar graph, and the inverse problem of reconstructing the topology of the tree from that of the graph is ill-posed. We regularize this problem with a generative, parametric tree-growth model. Under this model, reconstruction is possible in linear time if one knows the direction of each edge in the graph - which edge endpoint is closer to the root of the tree - but becomes NP-hard if the directions are not known. For the latter case, we present a heuristic search algorithm to estimate the most likely topology of a rooted, three-dimensional tree from a single two-dimensional image. Experimental results on retinal vessel, plant root, and synthetic tree data sets show that our methodology is both accurate and efficient. PMID:26353004

  6. Estimate exchanger vibration

    SciTech Connect

    Nieh, C.D.; Zengyan, H.

    1986-04-01

    Based on classical beam theory, a simple method for calculating the natural frequency of unequally spanned tubes is presented. The method is suitable for various boundary conditions. Accuracy of the calculations is sufficient for practical applications. This method will help designers and operators estimate the vibration of tubular exchangers. In general, there are three reasons why a tube vibrates in cross flow: vortex shedding, fluid elasticity and turbulent buffeting. No matter which is the cause, the basic reason is that the frequency of the exciting force is approximately the same as or equal to the natural frequency of the tube. To prevent the heat exchanger from vibrating, it is necessary to select the shell-side fluid velocity correctly so that the frequency of the exciting force differs from the natural frequency of the tube, or to vary the natural frequency of the heat exchanger tube. Thus, precisely determining the natural frequency of the heat exchanger tube, especially its fundamental frequency under various supporting conditions, is of significance.

  7. Estimation of the FEV.

    PubMed Central

    Oldham, P D; Cole, T J

    1983-01-01

    The procedure recommended by the Medical Research Council for estimating a subject's forced expiratory volume in one second (FEV1) is to require five separate attempts, discard the first two results, and average the last three. The most popular alternatives are to use the largest of the last three or the largest of a smaller number of results. Nine different indices derived from some or all of five attempts were compared in two studies. In one 40 normal subjects were studied. In the other 335 men exposed to industrial dust, whose forced expiratory volume declined with their degree of radiological pneumoconiosis as well as with age, were studied. There were small but consistent differences between indices. The index which emerged as the best overall in both studies was the mean of the largest three results from five attempts. It was better than the recommended index for all the comparisons made, but at the same time it gave a very similar mean value for the FEV1. Excluding the lowest two results rather than the first two from five blows is a rational procedure, and it should be formally recognised as providing the best index available. PMID:6623419
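
    The two leading indices are easy to compute side by side; a short sketch using hypothetical FEV1 readings (the five values are invented for illustration, not data from the study):

```python
# Hypothetical FEV1 readings (litres) from five attempts; values are made up.
attempts = [3.1, 3.4, 3.5, 3.3, 3.6]

# MRC-recommended index: discard the first two attempts, average the last three.
mrc_index = sum(attempts[2:]) / 3

# Index favoured by the study: mean of the largest three of all five attempts.
best_index = sum(sorted(attempts)[-3:]) / 3

print(round(mrc_index, 3), round(best_index, 3))
```

    The second index excludes the lowest two results rather than the first two, which is the rational refinement the study recommends.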

  8. TRAC performance estimates

    NASA Technical Reports Server (NTRS)

    Everett, L.

    1992-01-01

    This report documents the performance characteristics of a Targeting Reflective Alignment Concept (TRAC) sensor. The performance will be documented for both short and long ranges. For long ranges, the sensor is used without the flat mirror attached to the target. To better understand the capabilities of the TRAC based sensors, an engineering model is required. The model can be used to better design the system for a particular application. This is necessary because there are many interrelated design variables in application. These include lens parameters, camera, and target configuration. The report presents first an analytical development of the performance, and second an experimental verification of the equations. In the analytical presentation it is assumed that the best vision resolution is a single pixel element. The experimental results suggest, however, that the resolution is better than 1 pixel. Hence the analytical results should be considered worst case conditions. The report also discusses advantages and limitations of the TRAC sensor in light of the performance estimates. Finally the report discusses potential improvements.

  9. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed- ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.

  10. Uveal melanoma: Estimating prognosis

    PubMed Central

    Kaliki, Swathi; Shields, Carol L; Shields, Jerry A

    2015-01-01

    Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic features and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q and those classified as Class II by gene expression are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms “uvea,” “iris,” “ciliary body,” “choroid,” “melanoma,” “uveal melanoma” and “prognosis,” “metastasis,” “genetic testing,” “gene expression profiling.” Relevant English language articles were extracted, reviewed, and referenced appropriately. PMID:25827538

  11. Estimators for the Cauchy distribution

    SciTech Connect

    Hanson, K.M.; Wolf, D.R.

    1993-12-31

    We discuss the properties of various estimators of the central position of the Cauchy distribution. The performance of these estimators is evaluated for a set of simulated experiments. Estimators based on the maximum and mean of the posterior probability density function are empirically found to be well behaved when more than two measurements are available. On the contrary, because of the infinite variance of the Cauchy distribution, the average of the measured positions is an extremely poor estimator of the location of the source. However, the median of the measured positions is well behaved. The rms errors for the various estimators are compared to the Fisher-Cramer-Rao lower bound. We find that the square root of the variance of the posterior density function is predictive of the rms error in the mean posterior estimator.
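
    The contrast between the sample mean and the sample median for Cauchy data is easy to reproduce; a simulation sketch (the centre, sample sizes, and seed are illustrative choices, not the paper's experiments):

```python
import numpy as np

# Draw Cauchy samples centred at 2.0 and compare the sample mean and sample
# median as estimators of the centre across many repeated experiments.
rng = np.random.default_rng(1)
centre, n_exp, n_obs = 2.0, 2000, 25
samples = centre + rng.standard_cauchy((n_exp, n_obs))

mean_err = np.abs(samples.mean(axis=1) - centre)
median_err = np.abs(np.median(samples, axis=1) - centre)

# Because the Cauchy distribution has infinite variance, the sample mean does
# not concentrate as n grows; the median's typical error is far smaller.
print(np.median(mean_err), np.median(median_err))
```

    The sample mean of n Cauchy observations is itself Cauchy with the same scale, so averaging buys nothing; the median, by contrast, tightens as n grows.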

  12. Spring Small Grains Area Estimation

    NASA Technical Reports Server (NTRS)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. Report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of technique to analysis of four years of data from United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  13. Robust and intelligent bearing estimation

    DOEpatents

    Claassen, John P.

    2000-01-01

    A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.

  14. Supercooled liquid water Estimation Tool

    2012-05-04

    The Cloud Supercooled liquid water Estimation Tool (SEET) is a user driven Graphical User Interface (GUI) that estimates cloud supercooled liquid water (SLW) content in terms of vertical column and total mass from Moderate Resolution Imaging Spectroradiometer (MODIS) spatially derived cloud products and realistic vertical cloud parameterizations that are user defined. It also contains functions for post-processing of the resulting data in tabular and graphical form.

  15. Quantum estimation by local observables

    SciTech Connect

    Hotta, Masahiro; Ozawa, Masanao

    2004-08-01

    Quantum estimation theory provides optimal observations for various estimation problems for unknown parameters in the state of the system under investigation. However, the theory has been developed under the assumption that every observable is available for experimenters. Here, we generalize the theory to problems in which the experimenter can use only locally accessible observables. For such problems, we establish a Cramer-Rao-type inequality by obtaining an explicit form of the Fisher information as a reciprocal lower bound for the mean-square errors of estimations by locally accessible observables. Furthermore, we explore various local quantum estimation problems for composite systems, where nontrivial combinatorics is needed for obtaining the Fisher information.

  16. Frequency tracking and parameter estimation for robust quantum state estimation

    SciTech Connect

    Ralph, Jason F.; Jacobs, Kurt; Hill, Charles D.

    2011-11-15

    In this paper we consider the problem of tracking the state of a quantum system via a continuous weak measurement. If the system Hamiltonian is known precisely, this merely requires integrating the appropriate stochastic master equation. However, even a small error in the assumed Hamiltonian can render this approach useless. The natural answer to this problem is to include the parameters of the Hamiltonian as part of the estimation problem, and the full Bayesian solution to this task provides a state estimate that is robust against uncertainties. However, this approach requires considerable computational overhead. Here we consider a single qubit in which the Hamiltonian contains a single unknown parameter. We show that classical frequency estimation techniques greatly reduce the computational overhead associated with Bayesian estimation and provide accurate estimates for the qubit frequency.
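
    As an illustration of the kind of classical frequency-estimation front end the abstract refers to (the signal model, sample rate, and noise level below are assumptions, not details from the paper), a periodogram-peak estimate of an unknown oscillation frequency:

```python
import numpy as np

# Recover an unknown oscillation frequency from a noisy record via the FFT
# periodogram peak, a cheap classical alternative to full Bayesian tracking.
rng = np.random.default_rng(2)
fs, n = 1000.0, 4096            # sample rate (Hz) and record length (assumed)
t = np.arange(n) / fs
f_true = 37.0                   # unknown frequency to recover
signal = np.sin(2 * np.pi * f_true * t) + 0.5 * rng.normal(size=n)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_est = freqs[spectrum[1:].argmax() + 1]   # skip the DC bin

print(f_est)   # close to 37 Hz, within one FFT bin (fs/n ~ 0.24 Hz)
```

    A point estimate like this can seed or replace the Hamiltonian-parameter dimension of the Bayesian filter, which is the computational saving the paper quantifies.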

  17. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  18. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  19. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
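
    A sketch of the standard midsection velocity-area computation that such measurements feed into (the station spacings, depths, and velocities below are hypothetical; the IVE itself operates on the statistics of these at-site observations):

```python
# Midsection velocity-area method: discharge is the sum of v_i * d_i * w_i
# over verticals, where w_i is the half-distance to the neighbouring verticals.
stations = [0.0, 1.0, 2.5, 4.0, 5.0]   # distance from bank (m)
depths   = [0.2, 0.8, 1.1, 0.7, 0.3]   # measured depth (m)
vels     = [0.1, 0.5, 0.8, 0.4, 0.1]   # point velocity (m/s)

def midsection_discharge(x, d, v):
    q = 0.0
    for i in range(len(x)):
        left = x[i] - (x[i - 1] if i > 0 else x[i])
        right = (x[i + 1] if i < len(x) - 1 else x[i]) - x[i]
        width = (left + right) / 2.0    # subsection width around vertical i
        q += v[i] * d[i] * width        # subsection discharge (m^3/s)
    return q

print(round(midsection_discharge(stations, depths, vels), 4))  # total Q in m^3/s
```

    Each subsection contributes an independent velocity and depth measurement, which is exactly the per-vertical scatter the IVE interpolates to quantify measurement uncertainty.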

  20. Information estimators for weighted observations.

    PubMed

    Hino, Hideitsu; Murata, Noboru

    2013-10-01

    The Shannon information content is a valuable numerical characteristic of probability distributions. The problem of estimating the information content from an observed dataset is very important in the fields of statistics, information theory, and machine learning. The contribution of the present paper is in proposing information estimators, and showing some of their applications. When the given data are associated with weights, each datum contributes differently to the empirical average of statistics. The proposed estimators can deal with this kind of weighted data. Similar to other conventional methods, the proposed information estimator contains a parameter to be tuned, and is computationally expensive. To overcome these problems, the proposed estimator is further modified so that it is more computationally efficient and has no tuning parameter. The proposed methods are also extended so as to estimate the cross-entropy, entropy, and Kullback-Leibler divergence. Simple numerical experiments show that the information estimators work properly. Then, the estimators are applied to two specific problems, distribution-preserving data compression, and weight optimization for ensemble regression.

  1. Estimating the Polyserial Correlation Coefficient.

    ERIC Educational Resources Information Center

    Bedrick, Edward J.; Breslin, Frederick C.

    1996-01-01

    Simple noniterative estimators of the polyserial correlation coefficient are developed by exploiting a general relationship between the polyserial correlation and the point polyserial correlation to give extensions of the biserial estimators of K. Pearson (1909), H. E. Brogden (1949), and F. M. Lord (1963) to the multicategory setting. (SLD)

  2. PBXN-110 Burn Rate Estimate

    SciTech Connect

    Glascoe, E

    2008-08-11

    It is estimated that PBXN-110 will burn laminarly with a burn function of B = (0.6-1.3)*P^1.0 (B is the burn rate in mm/s and P is pressure in MPa). This paper provides a brief discussion of how this burn behavior was estimated.
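
    The quoted burn function is straightforward to evaluate; a small sketch of the implied burn-rate band (the example pressure is an arbitrary choice, not from the memo):

```python
# Burn-rate band from the memo: B = c * P**1.0 with the coefficient c between
# 0.6 and 1.3 (B in mm/s, P in MPa). Evaluate the band at an example pressure.
def burn_rate_range(p_mpa, c_low=0.6, c_high=1.3):
    return c_low * p_mpa, c_high * p_mpa

lo, hi = burn_rate_range(10.0)   # at 10 MPa
print(lo, hi)                    # 6.0 to 13.0 mm/s
```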

  3. Reinforcing flood-risk estimation.

    PubMed

    Reed, Duncan W

    2002-07-15

    Flood-frequency estimation is inherently uncertain. The practitioner applies a combination of gauged data, scientific method and hydrological judgement to derive a flood-frequency curve for a particular site. The resulting estimate can be thought fully satisfactory only if it is broadly consistent with all that is reliably known about the flood-frequency behaviour of the river. The paper takes as its main theme the search for information to strengthen a flood-risk estimate made from peak flows alone. Extra information comes in many forms, including documentary and monumental records of historical floods, and palaeological markers. Meteorological information is also useful, although rainfall rarity is difficult to assess objectively and can be a notoriously unreliable indicator of flood rarity. On highly permeable catchments, groundwater levels present additional data. Other types of information are relevant to judging hydrological similarity when the flood-frequency estimate derives from data pooled across several catchments. After highlighting information sources, the paper explores a second theme: that of consistency in flood-risk estimates. Following publication of the Flood estimation handbook, studies of flood risk are now using digital catchment data. Automated calculation methods allow estimates by standard methods to be mapped basin-wide, revealing anomalies at special sites such as river confluences. Such mapping presents collateral information of a new character. Can this be used to achieve flood-risk estimates that are coherent throughout a river basin? PMID:12804255

  4. Estimation in the Power Law.

    ERIC Educational Resources Information Center

    Thomas, Hoben

    1981-01-01

    Psychophysicists neglect to consider how error should be characterized in applications of the power law. Failures of the power law to agree with certain theoretical predictions are examined. A power law with lognormal product structure is proposed and approximately unbiased parameter estimates given for several common estimation situations.…

  5. Reinforcing flood-risk estimation.

    PubMed

    Reed, Duncan W

    2002-07-15

    Flood-frequency estimation is inherently uncertain. The practitioner applies a combination of gauged data, scientific method and hydrological judgement to derive a flood-frequency curve for a particular site. The resulting estimate can be thought fully satisfactory only if it is broadly consistent with all that is reliably known about the flood-frequency behaviour of the river. The paper takes as its main theme the search for information to strengthen a flood-risk estimate made from peak flows alone. Extra information comes in many forms, including documentary and monumental records of historical floods, and palaeological markers. Meteorological information is also useful, although rainfall rarity is difficult to assess objectively and can be a notoriously unreliable indicator of flood rarity. On highly permeable catchments, groundwater levels present additional data. Other types of information are relevant to judging hydrological similarity when the flood-frequency estimate derives from data pooled across several catchments. After highlighting information sources, the paper explores a second theme: that of consistency in flood-risk estimates. Following publication of the Flood estimation handbook, studies of flood risk are now using digital catchment data. Automated calculation methods allow estimates by standard methods to be mapped basin-wide, revealing anomalies at special sites such as river confluences. Such mapping presents collateral information of a new character. Can this be used to achieve flood-risk estimates that are coherent throughout a river basin?

  6. Quantity Estimation Of The Interactions

    SciTech Connect

    Gorana, Agim; Malkaj, Partizan; Muda, Valbona

    2007-04-23

    In this paper we present some considerations about quantity estimations regarding the range of interaction and the conservation laws in various types of interactions. Our estimations are made from both classical and quantum points of view and concern the interaction carriers, the radius, the influence range, and the intensity of the interactions.

  7. The incredible shrinking covariance estimator

    NASA Astrophysics Data System (ADS)

    Theiler, James

    2012-05-01

    Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
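
    The linear-combination step itself is simple; a minimal sketch (the diagonal shrinkage target and the fixed alpha below are illustrative choices, standing in for the LOOC-estimated shrinkage parameter the paper is about):

```python
import numpy as np

# Shrinkage covariance: combine the overfit sample covariance S with an
# underfit target T. Here dimension exceeds sample size, so S is singular.
rng = np.random.default_rng(3)
d, n = 50, 30
truth = np.eye(d)
X = rng.multivariate_normal(np.zeros(d), truth, size=n)

S = np.cov(X, rowvar=False)         # overfit: rank at most n - 1 < d
T = np.diag(np.diag(S))             # underfit target: diagonal of S
alpha = 0.5                         # shrinkage parameter (LOOC would estimate this)
S_shrunk = (1 - alpha) * S + alpha * T

# Shrinking damps the noisy off-diagonal entries, moving the estimate
# closer to the true covariance in Frobenius norm.
err_S = np.linalg.norm(S - truth)
err_shrunk = np.linalg.norm(S_shrunk - truth)
print(err_S, err_shrunk)
```

    With a diagonal target and identity truth, only the noisy off-diagonal entries change, so the shrunk estimate is strictly better here; in practice the gain depends on choosing alpha well, which is what the LOOC approximation makes cheap.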

  8. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to extract metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR, and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby revealing the bearing bias as a function of frequency.

  9. The Testability of Mmax Estimates

    NASA Astrophysics Data System (ADS)

    Clements, R. A.; González, Á.; Schorlemmer, D.

    2012-04-01

    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as those of Tohoku and Christchurch) illustrate the need for reliable estimates of the maximum possible magnitude, Mmax, at a given fault or in a particular zone. Such estimates are essential parameters in seismic hazard assessment, but their accuracy remains untested. In fact, the testability, or lack thereof, of Mmax estimates, even over short time periods, is still uncertain. In this study, we discuss the testability of long-term and short-term Mmax estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current hazard assessment methodology.

  10. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  11. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
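A toy illustration of the condition-number idea: regularizers in this line of work truncate the sample eigenvalues so that the ratio of largest to smallest does not exceed a cap. The truncation point `u` below is chosen crudely (largest eigenvalue divided by the cap); the paper derives the optimal truncation point from the likelihood, so this is only a sketch of the mechanism, not the proposed estimator.

```python
def cap_condition_number(eigvals, kappa_max):
    # Clip each sample eigenvalue into [u, kappa_max * u], so the condition
    # number of the reassembled matrix is at most kappa_max. Here u is a
    # crude choice; the ML-based method chooses it optimally.
    u = max(eigvals) / kappa_max
    return [min(max(lam, u), kappa_max * u) for lam in eigvals]

# A nearly singular spectrum gets its smallest eigenvalue lifted:
clipped = cap_condition_number([9.0, 1.0, 0.01], kappa_max=30.0)
```

Reassembling the regularized estimator then amounts to keeping the sample eigenvectors and substituting the clipped eigenvalues.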

  12. Radiation dose estimates for radiopharmaceuticals

    SciTech Connect

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs and several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.

  13. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
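A minimal sketch of the straight-line idea: fit input power as a linear function of the AGC reading over a narrow power range using ordinary least squares. The calibration pairs below are invented for illustration; the flight characterization also folds in temperature and uses both AGCs.

```python
def fit_line(xs, ys):
    """Closed-form least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration: digital AGC counts vs. known input power (dBm).
agc   = [100.0, 120.0, 140.0, 160.0]
p_dbm = [-90.0, -85.0, -80.0, -75.0]
a, b = fit_line(agc, p_dbm)

# Estimate the input power for a new AGC reading within the calibrated range.
estimate = a * 130.0 + b
```

Outside the narrow calibrated range the AGC response is nonlinear, which is why the wide-range estimators in the paper move to an adaptive filter and a neural network.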

  14. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
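As a rough sketch of equivalence at a quantile (not the quantile-regression machinery of the paper), one can bootstrap a one-sided confidence bound on the difference between, say, the 90th percentiles of two groups and declare equivalence if the bound lies inside a pre-chosen margin. All data and the margin below are invented for illustration.

```python
import random

def quantile(xs, q):
    """Crude empirical quantile (order statistic at index floor(q*n))."""
    ys = sorted(xs)
    return ys[min(int(q * len(ys)), len(ys) - 1)]

def upper_bound_diff(treat, ctrl, q, n_boot=2000, seed=0):
    """One-sided 95% bootstrap upper bound on quantile(treat) - quantile(ctrl)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        t = [rng.choice(treat) for _ in treat]
        c = [rng.choice(ctrl) for _ in ctrl]
        diffs.append(quantile(t, q) - quantile(c, q))
    return sorted(diffs)[int(0.95 * n_boot)]

# Hypothetical measurements for a treatment and a control group.
treat = [10.2, 11.0, 9.8, 10.5, 10.9, 10.1, 11.3, 10.7, 10.0, 10.4]
ctrl  = [10.0, 10.8, 9.9, 10.6, 11.1, 10.2, 11.0, 10.5, 9.7, 10.3]

margin = 1.0  # equivalence region chosen a priori
equivalent = upper_bound_diff(treat, ctrl, 0.9) < margin
```

The one-tailed bound plays the role of the inequivalence-hypothesis confidence interval described above: equivalence is claimed only when the whole bound falls inside the margin.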

  15. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  16. Space Station Facility government estimating

    NASA Technical Reports Server (NTRS)

    Brown, Joseph A.

    1993-01-01

    This new, unique Cost Engineering Report introduces the 800-page, C-100 government estimate for the Space Station Processing Facility (SSPF) and the Volume IV Aerospace Construction Price Book. At the January 23, 1991, bid opening for the SSPF, the government cost estimate was right on target: the low bid, from prime contractor Metric, Inc., was 1.2 percent below the government estimate. This project contains many different and complex systems. Volume IV is a summary of the costs associated with construction, activation, and Ground Support Equipment (GSE) design, estimating, fabrication, installation, testing, termination, and verification for this project. Included are 13 reasons the government estimate was so accurate; an abstract of bids for 8 bidders and the government estimate with additive alternates, special labor and materials; budget comparisons and system summaries; and comments on the energy credit from the local electrical utility. This report adds another project to our continuing study of 'How Does the Low Bidder Get Low and Make Money?', which was started in 1967 and first published in the 1973 AACE Transactions with 18 ways the low bidders get low. The accuracy of this estimate demonstrates the benefits of our Kennedy Space Center (KSC) teamwork efforts and KSC Cost Engineer Tools, which are contributing toward our Space Station goals.

  17. ESTIMATING REPRODUCTIVE SUCCESS IN BIRDS

    EPA Science Inventory

    This presentation will focus on the statistical issues surrounding estimation of avian nest-survival. I first describe the natural history and breeding ecology of two North American songbirds, the Loggerhead Shrike (Lanius ludovicianus) and the Wood Thrush (Hylocichla mustelina)....

  18. Manned Mars mission cost estimate

    NASA Technical Reports Server (NTRS)

    Hamaker, Joseph; Smith, Keith

    1986-01-01

    The potential costs of several options for a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model, as well as other models and techniques. The ground rules and assumptions of the cost estimating methodology are discussed, and cost estimates are presented for the six potential mission options which were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars, excluding launch vehicle development and mission operations.

  19. Performance Bounds of Quaternion Estimators.

    PubMed

    Xia, Yili; Jahanchahi, Cyrus; Nitta, Tohru; Mandic, Danilo P

    2015-12-01

    The quaternion widely linear (WL) estimator has been recently introduced for optimal second-order modeling of the generality of quaternion data, both second-order circular (proper) and second-order noncircular (improper). Experimental evidence exists of its performance advantage over the conventional strictly linear (SL) as well as the semi-WL (SWL) estimators for improper data. However, rigorous theoretical and practical performance bounds are still missing in the literature, yet this is crucial for the development of quaternion valued learning systems for 3-D and 4-D data. To this end, based on the orthogonality principle, we introduce a rigorous closed-form solution to quantify the degree of performance benefits, in terms of the mean square error, obtained when using the WL models. The cases when the optimal WL estimation can simplify into the SWL or the SL estimation are also discussed. PMID:25643416

  20. Age estimation from canine volumes.

    PubMed

    De Angelis, Danilo; Gaudio, Daniel; Guercini, Nicola; Cipriani, Filippo; Gibelli, Daniele; Caputi, Sergio; Cattaneo, Cristina

    2015-08-01

    Techniques for the estimation of biological age are constantly evolving and find daily application in the forensic radiology field, in cases concerning the estimation of the chronological age of a corpse in order to reconstruct the biological profile, or of a living subject, for example in cases of immigration of people without identity papers from a civil registry. The deposition of secondary dentine in teeth and the consequent decrease in pulp chamber size are well-known aging phenomena, and they have been applied to the forensic context through the development of age estimation procedures such as the Kvaal-Solheim and Cameriere methods. The present study considers the canine pulp chamber volume relative to the entire tooth volume, with the aim of proposing new regression formulae for age estimation using 91 cone beam computed tomography scans and freeware open-source software, in order to permit affordable, reproducible volume calculation.

  1. [Medical insurance estimation of risks].

    PubMed

    Dunér, H

    1975-11-01

    The purpose of insurance medicine is to make a prognostic estimate of medical risk factors in persons who apply for life, health, or accident insurance. Established risk groups with a calculated average mortality and morbidity form the basis for premium rates and insurance terms. In most cases the applicant is accepted for insurance after a self-assessment of his health. Only around one per cent of the applications are refused, but there are cases in which the premium is raised, temporarily or permanently. It is often a matter of rough estimation, since knowledge of the long-term prognosis for many diseases is incomplete. The insurance companies' rules for estimating risk are revised at intervals of three or four years. The estimate of risk as regards life insurance has been gradually liberalised, while the medical conditions for health insurance have become stricter owing to an increase in the claims rate.

  2. Estimate Radiological Dose for Animals

    1997-12-18

    Estimates radiological dose for animals in an ecological environment using open-literature values for parameters such as body weight, plant and soil ingestion rates, radiological half-life, absorbed energy, biological half-life, gamma energy per decay, soil-to-plant transfer factor, etc.

  3. The Psychology of Cost Estimating

    NASA Technical Reports Server (NTRS)

    Price, Andy

    2016-01-01

    Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded remarkable findings about how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception rather than facts and data. These built-in biases in our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.

  4. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
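As a toy version of the frequency-extraction step, one simple estimator (not necessarily among those evaluated in this work) reads the instantaneous frequency off the phase increments of a complex analytic signal. The test tone below is noise-free and invented for illustration.

```python
import cmath
import math

def instantaneous_frequency(z, dt):
    """Instantaneous frequency (Hz) from phase differences of consecutive
    complex samples: f[k] = arg(z[k+1] * conj(z[k])) / (2*pi*dt)."""
    return [cmath.phase(z[k + 1] * z[k].conjugate()) / (2 * math.pi * dt)
            for k in range(len(z) - 1)]

# Noise-free test tone: a 5 kHz beat signal sampled at 1 MHz.
f0, fs = 5e3, 1e6
z = [cmath.exp(2j * math.pi * f0 * k / fs) for k in range(100)]
freqs = instantaneous_frequency(z, 1 / fs)
```

With measurement noise, each phase increment becomes a noisy estimate, which is where the error bounds and Monte Carlo verification discussed above come in.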

  5. Relaxation times estimation in MRI

    NASA Astrophysics Data System (ADS)

    Baselice, Fabio; Caivano, Rocchina; Cammarota, Aldo; Ferraioli, Giampaolo; Pascazio, Vito

    2014-03-01

    Magnetic Resonance Imaging is a very powerful technique for soft tissue diagnosis. At present, clinical evaluation is mainly conducted by exploiting the amplitude of the recorded MR image which, in some specific cases, is modified by using contrast enhancement. Nevertheless, spin-lattice (T1) and spin-spin (T2) relaxation times can play an important role in the diagnosis of many pathologies, such as cancer, Alzheimer's disease, or Parkinson's disease. Different algorithms for relaxation time estimation have been proposed in the literature. In particular, the two most adopted approaches are based on Least Squares (LS) and on Maximum Likelihood (ML) techniques. As the amplitude noise is not zero mean, the first produces a biased estimator, while the ML estimator is unbiased but at the cost of high computational effort. Recently, attention has focused on estimation in the complex domain instead of the amplitude domain. The advantage of working with the real and imaginary decomposition of the available data is mainly the possibility of achieving higher quality estimations. Moreover, the zero-mean complex noise makes the Least Squares estimation unbiased, achieving low computational times. First results of complex-domain relaxation time estimation on real datasets are presented. In particular, a patient with an occipital lesion was imaged on a 3.0T scanner. Globally, the evaluation of relaxation times allows us to establish a more precise topography of biologically active foci, also with respect to contrast-enhanced images.
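To make the LS approach concrete, here is a minimal sketch of T2 estimation from multi-echo magnitudes S(TE) = S0 * exp(-TE / T2) by least squares on log(S). This is the simple amplitude-domain fit, which the abstract notes is biased under noise; the echo data below are noise-free and synthetic.

```python
import math

def fit_t2(tes, signals):
    """Log-linear least-squares fit of S = S0*exp(-TE/T2); returns T2."""
    ys = [math.log(s) for s in signals]
    n = len(tes)
    mx, my = sum(tes) / n, sum(ys) / n
    slope = sum((t - mx) * (y - my) for t, y in zip(tes, ys)) / \
            sum((t - mx) ** 2 for t in tes)
    return -1.0 / slope  # T2 in the same units as TE

# Synthetic noise-free echoes with T2 = 80 ms and S0 = 1:
tes = [20.0, 40.0, 60.0, 80.0]              # echo times (ms)
sig = [math.exp(-te / 80.0) for te in tes]  # magnitudes
t2 = fit_t2(tes, sig)
```

The complex-domain methods discussed above fit real and imaginary parts directly, which keeps the noise zero-mean and the LS estimate unbiased.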

  6. Parameter estimation in food science.

    PubMed

    Dolan, Kirk D; Mishra, Dharmendra K

    2013-01-01

    Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem (computing y(t) given known parameters) has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem (estimation of parameters given measured y(t)) is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
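A scaled sensitivity coefficient, X'(theta) = theta * dy/dtheta, can be sketched by finite differences. The first-order inactivation model and the parameter values below are illustrative only, not taken from the article.

```python
def model(t, D):
    """First-order microbial inactivation: log10(N/N0) = -t / D."""
    return -t / D

def scaled_sensitivity(t, D, rel_step=1e-6):
    """theta * dy/dtheta for theta = D, via central finite differences."""
    h = D * rel_step
    dyd = (model(t, D + h) - model(t, D - h)) / (2 * h)
    return D * dyd

# For this model the scaled sensitivity is t / D (analytically), so at
# t = 10 min and D = 2 min the value is 5.
x = scaled_sensitivity(t=10.0, D=2.0)
```

Plotting scaled sensitivities of all parameters over the experiment is the identifiability check: parameters whose curves are proportional cannot be estimated separately from that experiment.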

  7. Variational Dirichlet Blur Kernel Estimation.

    PubMed

    Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K

    2015-12-01

    Blind image deconvolution involves two key objectives: 1) latent image estimation and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of the variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive with state-of-the-art blind image restoration methods. PMID:26390458

  8. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  9. Estimated soil ingestion by children.

    PubMed

    van Wijnen, J H; Clausing, P; Brunekreef, B

    1990-04-01

    The amount of soil ingested by young children was estimated by measuring the titanium, aluminum, and acid-insoluble residue in soil and feces. As intake of each of these tracers is also possible from sources other than soil ingestion, the amount of soil ingested was estimated to be not higher than the lowest of the three separate estimates. This estimate, the limiting tracer method (LTM) value, was then corrected for the similarly calculated mean LTM value for a group of hospitalized children without access to soil and dust. The study groups included children in three different environmental situations: day-care centers, campgrounds, and hospitals. The day-care center groups were sampled twice. From these groups, 162 children produced usable feces samples during both sampling periods. The camping groups and the hospitalized (control) group were sampled once. For the day-care center groups, the estimated geometric mean soil intake varied from 0 to 90 mg/day and for the camping groups these estimates ranged from 30 to 200 mg/day (in dry weight). Using estimates of the "true" between-child GSD values, the 90th percentile of the estimated soil intakes was shown to be typically 40-100 mg/day higher than the geometric means of these estimates. In the day-care center groups few correlations with the geometric mean LTM values were found for variables concerning living conditions, mouthing behavior, playing habits, etc. A strong correlation was found with weather. During dry weather the younger children especially showed higher LTM values. In the camping group the weather also influenced the mean LTM value only in the younger age groups. Analysis of variance showed that a single LTM value of a child has a low predictive value with regard to the LTM value of the next few days or that of a few months later. Therefore it seems reasonable to use group statistics as estimates of soil ingestion in health risk assessments of soil pollution incidents. PMID:2335156
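The limiting tracer method described above reduces to simple arithmetic: soil intake is bounded by the lowest of the per-tracer estimates, then corrected by the mean LTM value of a control group with no soil access. The numbers below are invented for illustration.

```python
def ltm_estimate(tracer_estimates, control_mean):
    """Limiting tracer method (LTM) sketch: take the lowest of the soil-intake
    estimates implied by each tracer (titanium, aluminum, acid-insoluble
    residue), subtract the control-group mean, and floor at zero."""
    return max(min(tracer_estimates) - control_mean, 0.0)

# Hypothetical per-tracer intake estimates (mg/day) for one child, with a
# hospitalized control-group mean of 25 mg/day:
intake = ltm_estimate([150.0, 90.0, 210.0], control_mean=25.0)
```

Taking the minimum reflects that each tracer can also be ingested from non-soil sources, so each per-tracer value is an upper bound on true soil ingestion.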

  10. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the
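The zero-failure point made above can be illustrated with the simplest conjugate model. The Beta-Binomial sketch below is a stand-in for the idea, not necessarily the estimator developed in this work: with a Beta(a, b) prior on the per-trial failure probability and k failures in n trials, the posterior mean is well defined even when k = 0, whereas the classical point estimate degenerates.

```python
def posterior_reliability(k_failures, n_trials, a=1.0, b=1.0):
    """Posterior mean reliability under a Beta(a, b) prior on the failure
    probability: E[p_fail | data] = (a + k) / (a + b + n)."""
    p_fail = (a + k_failures) / (a + b + n_trials)
    return 1.0 - p_fail

# Zero failures in 20 trials, uniform Beta(1, 1) prior: a usable point
# estimate where the classical estimator would simply return 1.
r = posterior_reliability(k_failures=0, n_trials=20)
```

Compound models of the kind investigated here go further, letting failure data from earlier design stages inform the prior for the current stage.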

  11. Estimating preselected and postselected ensembles

    SciTech Connect

    Massar, Serge; Popescu, Sandu

    2011-11-15

    In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.

  12. Estimating the diversity of dinosaurs

    PubMed Central

    Wang, Steve C.; Dodson, Peter

    2006-01-01

    Despite current interest in estimating the diversity of fossil and extant groups, little effort has been devoted to estimating the diversity of dinosaurs. Here we estimate the diversity of nonavian dinosaurs at ≈1,850 genera, including those that remain to be discovered. With 527 genera currently described, at least 71% of dinosaur genera thus remain unknown. Although known diversity declined in the last stage of the Cretaceous, estimated diversity was steady, suggesting that dinosaurs as a whole were not in decline in the 10 million years before their ultimate extinction. We also show that known diversity is biased by the availability of fossiliferous rock outcrop. Finally, by using a logistic model, we predict that 75% of discoverable genera will be known within 60–100 years and 90% within 100–140 years. Because of nonrandom factors affecting the process of fossil discovery (which preclude the possibility of computing realistic confidence bounds), our estimate of diversity is likely to be a lower bound. PMID:16954187
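    The logistic prediction described above (75% of discoverable genera known within 60-100 years, 90% within 100-140 years) can be sketched as a one-line inversion of the logistic curve. The growth rate and inflection year below are hypothetical illustration values, not the paper's fitted parameters:

```python
import math

# Logistic discovery curve: N(t) = K / (1 + exp(-r * (t - t0))).
# Solving N(t) / K = p for t gives the year at which a fraction p
# of all discoverable genera is known.
def year_at_fraction(p, r, t0):
    return t0 + math.log(p / (1 - p)) / r

# Hypothetical parameters for illustration only (not the paper's fit):
r, t0 = 0.045, 2030.0   # growth rate per year, inflection year
t75 = year_at_fraction(0.75, r, t0)
t90 = year_at_fraction(0.90, r, t0)
```

    Because the logistic is symmetric about t0, the year at which half the genera are known equals the inflection year, and the 75%-to-90% gap depends only on the growth rate r.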

  13. Weldon Spring historical dose estimate

    SciTech Connect

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.

  14. Estimates of radiogenic cancer risks

    SciTech Connect

    Puskin, J.S.; Nelson, C.B.

    1995-07-01

    A methodology recently developed by the U.S. EPA for estimating the carcinogenic risks from ionizing radiation is described. For most cancer sites, the risk model is one in which age-specific, relative risk coefficients are obtained by taking a geometric mean of the coefficients derived from the atomic bomb survivor data using two different methods for transporting risks from the Japanese to the U.S. population. The risk models are applied to estimate organ-specific risks per unit dose for a stationary population with mortality rates governed by 1980 U.S. vital statistics. With the exception of breast cancer, low-LET radiogenic cancer risk estimates are reduced by a factor of 2 at low doses and dose rates compared to acute high-dose exposure conditions. For low dose (or dose rate) conditions, the risk of inducing a premature cancer death from uniform, whole-body, low-LET irradiation is calculated to be 5.1 x 10^-2 Gy^-1. Neglecting nonfatal skin cancers, the corresponding incidence risk is 7.6 x 10^-2 Gy^-1. High-LET (alpha particle) risks are presumed to increase linearly with dose and to be independent of dose rate. High-LET risks are estimated to be 20 times the low-LET risks estimated under low dose rate conditions, except for leukemia and breast cancer, where RBEs of 1 and 10 are adopted, respectively.
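    The combination rules described above (geometric mean of two transport methods, a factor-of-2 dose-rate reduction, an RBE of 20 for alpha particles) can be sketched as simple arithmetic. The two coefficients below are hypothetical illustration values, not the EPA's:

```python
import math

# Hypothetical risk coefficients from the two transport methods:
c_multiplicative = 0.060  # per Gy, risk transported multiplicatively
c_additive       = 0.030  # per Gy, risk transported additively
c = math.sqrt(c_multiplicative * c_additive)   # geometric mean of the two

ddref = 2.0                     # dose & dose-rate effectiveness factor
c_low_dose = c / ddref          # low-dose / low-dose-rate coefficient
c_high_let = 20.0 * c_low_dose  # alpha-particle risk via an assumed RBE of 20
```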

  15. Black Hole Mass Estimation: How Good is the Virial Estimate?

    NASA Astrophysics Data System (ADS)

    Yong, Suk Yee; Webster, Rachel L.; King, Anthea L.

    2016-03-01

    Black hole mass is a key factor in determining how a black hole interacts with its environment. However, the determination of black hole masses at high redshifts depends on secondary mass estimators, which are based on empirical relationships and broad approximations. A dynamical disk wind broad line region (BLR) model of active galactic nuclei is built in order to test the impact of different BLR geometries and inclination angles on the black hole mass estimation. Monte Carlo simulations of two disk wind models are constructed to recover the virial scale factor, f, at various inclination angles. The resulting f values strongly correlate with inclination angle, with large f values associated with small inclination angles (close to face-on) and small f values with large inclination angles (close to edge-on). The recovered f factors are consistent with previously determined f values, found from empirical relationships. Setting f as a constant may introduce a bias into virial black hole mass estimates for a large sample of active galactic nuclei. However, the extent of the bias depends on the line width characterisation (e.g. full width at half maximum or line dispersion). Masses estimated using f_{FWHM} tend to be biased towards larger masses, but this can generally be corrected by calibrating for the width or shape of the emission line.
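    The virial estimate whose scale factor the abstract tests reduces to a one-line calculation, M_BH = f R_BLR (ΔV)^2 / G. The radius, line width, and f below are hypothetical illustration values, not the paper's; note that the mass is directly proportional to f, which is why a biased constant f biases the mass by the same factor:

```python
# Virial black hole mass sketch: M_BH = f * R_BLR * (delta V)^2 / G,
# with the scale factor f absorbing the unknown BLR geometry and inclination.
# All numbers below are hypothetical, for illustration only.
G = 6.674e-11                   # m^3 kg^-1 s^-2
light_day = 2.998e8 * 86400.0   # metres in one light-day
r_blr = 10.0 * light_day        # assumed BLR radius: 10 light-days
delta_v = 3.0e6                 # assumed line width (FWHM): 3000 km/s
f = 1.0                         # assumed virial scale factor

m_bh = f * r_blr * delta_v**2 / G   # kg
m_bh_solar = m_bh / 1.989e30        # in solar masses
```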

  16. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    ERIC Educational Resources Information Center

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…

  17. Estimating repeatability of egg size

    USGS Publications Warehouse

    Flint, P.L.; Rockwell, R.F.; Sedinger, J.S.

    2001-01-01

    Measures of repeatability have long been used to assess patterns of variation in egg size within and among females. We compared different analytical approaches for estimating repeatability of egg size of Black Brant. Separate estimates of repeatability for eggs of each clutch size and laying sequence number varied from 0.49 to 0.64. We suggest that averaging egg size within clutches underestimates variation within females and thereby overestimates repeatability. We recommend a nested design that partitions egg-size variation within clutches, among clutches within females, and among females. We demonstrate little variation in estimates of repeatability resulting from a nested model controlling for egg laying sequence and a nested model in which we assumed laying sequence was unknown.
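    Repeatability of this kind is usually computed as an intraclass correlation from a one-way ANOVA (a simpler design than the nested model the abstract recommends). A minimal sketch for a balanced design, with hypothetical egg lengths:

```python
import statistics

def repeatability(groups):
    """Repeatability (intraclass correlation) from a balanced one-way ANOVA:
    r = s2_among / (s2_among + MS_within), s2_among = (MS_among - MS_within)/n0."""
    a = len(groups)
    n0 = len(groups[0])  # eggs per female (balanced design assumed)
    grand = statistics.fmean([x for g in groups for x in g])
    means = [statistics.fmean(g) for g in groups]
    ss_among = n0 * sum((m - grand) ** 2 for m in means)
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    ms_a = ss_among / (a - 1)
    ms_w = ss_within / (a * (n0 - 1))
    s2_among = (ms_a - ms_w) / n0
    return s2_among / (s2_among + ms_w)

# Hypothetical egg lengths (mm) for three females, four eggs each:
eggs = [[79.1, 80.2, 79.8, 80.5],
        [75.3, 74.8, 75.9, 75.1],
        [82.0, 81.4, 82.6, 81.9]]
r = repeatability(eggs)
```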

  18. Inertial corrections by dynamic estimation

    NASA Technical Reports Server (NTRS)

    Sonnabend, David

    1989-01-01

    The highlights are presented of an Engineering Memorandum, Dynamic Estimation for Floated Gradiometers. The original impetus for the work was that gradiometers, in principle, measure components of the gravity gradient tensor, plus rotation effects, similar to centrifugal and Coriolis effects in accelerometers. The problem is that the rotation effects are often quite large, compared to the gradient, and that available inertial instruments can't measure them to adequate accuracy. The paper advances the idea that, if the instruments can be floated in a package subject to very low disturbances, a dynamic estimation, based on the Euler and translational equations of motion, plus models of all the instruments, can be used to greatly strengthen the estimates of the gradient and the rotation parameters. Moreover, symmetry constraints can be imposed directly in the filter, further strengthening the solution.

  19. Estimations of uncertainties of frequencies

    NASA Astrophysics Data System (ADS)

    Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan

    2015-08-01

    Diverse variable phenomena in the Universe are periodic. Astonishingly, many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty on the frequency estimation is reviewed. We explore the astronomical and statistical literature for both regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and the rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty on the frequency, which are subsequently tested against simulated data to assess their performance.
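    A nonparametric procedure of the kind described can be sketched as a Monte Carlo: simulate many noisy, irregularly sampled realizations of a sinusoid, locate the periodogram peak in each, and take the scatter of the recovered frequencies as the uncertainty. All signal parameters below are hypothetical:

```python
import math, random, statistics

def peak_frequency(t, y, freqs):
    """Frequency maximizing the classical periodogram
    P(f) = (sum y cos)^2 + (sum y sin)^2, usable with irregular sampling."""
    def power(f):
        c = sum(yi * math.cos(2 * math.pi * f * ti) for ti, yi in zip(t, y))
        s = sum(yi * math.sin(2 * math.pi * f * ti) for ti, yi in zip(t, y))
        return c * c + s * s
    return max(freqs, key=power)

# Monte Carlo over noisy realizations (hypothetical signal parameters):
rng = random.Random(0)
f_true, n_obs, span = 0.37, 80, 40.0
freqs = [0.30 + 0.001 * k for k in range(140)]   # frequency search grid
estimates = []
for _ in range(100):
    t = sorted(rng.uniform(0, span) for _ in range(n_obs))  # irregular sampling
    y = [math.sin(2 * math.pi * f_true * ti) + rng.gauss(0, 0.3) for ti in t]
    estimates.append(peak_frequency(t, y, freqs))
f_sigma = statistics.stdev(estimates)   # empirical frequency uncertainty
```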

  20. Oil and gas reserves estimates

    USGS Publications Warehouse

    Harrell, R.; Gajdica, R.; Elliot, D.; Ahlbrandt, T.S.; Khurana, S.

    2005-01-01

    This article is a summary of a panel session at the 2005 Offshore Technology Conference. Oil and gas reserves estimates are further complicated by the expanding importance of the worldwide deepwater arena. These deepwater reserves can be analyzed, interpreted, and conveyed in a consistent, reliable way to investors and other stakeholders. Continually improving technologies can lead to improved estimates of production and reserves, but the estimates are not necessarily recognized by regulatory authorities as an indicator of "reasonable certainty," a term used since 1964 to describe proved reserves in several venues. Solutions are being debated in the industry to arrive at a reporting mechanism that generates consistency and at the same time leads to useful parameters in assessing a company's value without compromising confidentiality. Copyright 2005 Offshore Technology Conference.

  1. Density estimation in wildlife surveys

    USGS Publications Warehouse

    Bart, J.; Droege, S.; Geissler, P.; Peterjohn, B.; Ralph, C.J.

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
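    The double-sampling correction the abstract advocates can be sketched as a ratio estimator: on a subsample of plots, both an index count and an intensive (assumed unbiased) count are made, and their ratio calibrates the index counts on the remaining plots. All counts below are hypothetical:

```python
# Double-sampling ratio correction (hypothetical counts for illustration).
# Intensive plots get both an index count and an intensive "true" count:
index_intensive = [12, 8, 15, 10, 9]    # index counts on intensive plots
true_intensive  = [20, 15, 24, 18, 14]  # intensive counts, assumed unbiased

# Detection ratio used to calibrate the index:
ratio = sum(true_intensive) / sum(index_intensive)

# Index-only plots are corrected by the ratio:
index_all = [11, 14, 7, 16, 12, 9]
corrected = [ratio * c for c in index_all]
estimated_total = sum(corrected)
```

    The estimate is unbiased under the abstract's conditions: randomly selected plots and unbiased intensive counts.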

  2. Motion models in attitude estimation

    NASA Technical Reports Server (NTRS)

    Chu, D.; Wheeler, Z.; Sedlak, J.

    1994-01-01

    Attitude estimators use observations from different times to reduce the effects of noise. If the vehicle is rotating, the attitude at one time needs to be propagated to that at another time. If the vehicle measures its angular velocity, attitude propagation entails integrating a rotational kinematics equation only. If a measured angular velocity is not available, torques can be computed and an additional rotational dynamics equation integrated to give the angular velocity. Initial conditions for either of these integrations come from the estimation process. Sometimes additional quantities, such as gyro and torque parameters, are also solved for. Although the partial derivatives of attitude with respect to initial attitude and gyro parameters are well known, the corresponding partial derivatives with respect to initial angular velocity and torque parameters are less familiar. They can be derived and computed numerically in a way that is analogous to that used for the initial attitude and gyro parameters. Previous papers have demonstrated the feasibility of using dynamics models for attitude estimation but have not provided details of how angular velocity and torque parameters can be estimated. This tutorial paper provides some of that detail, notably how to compute the state transition matrix when closed-form expressions are not available. It also attempts to put dynamics estimation in perspective by showing the progression from constant to gyro-propagated to dynamics-propagated attitude motion models. Readers not already familiar with attitude estimation will find this paper an introduction to the subject, and attitude specialists may appreciate the collection of heretofore scattered results brought together in a single place.

  3. Venus - A total mass estimate

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.; Trager, G. B.; Roldan, G. R.

    1990-01-01

    Reductions of four independent blocks of Pioneer Venus Orbiter Doppler radio tracking data have produced very consistent determinations of the GM of Venus (the product of the universal gravitational constant and total mass of Venus). These estimates have uncertainties that are significantly smaller than any values published to date. The value of GM is also consistent with previously published results in that it falls within their one-sigma uncertainties. The best estimate is 324858.60 ± 0.05 km³/s².
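    Converting the reported GM to a total mass is a quick sanity check, assuming the approximate CODATA value of G (the reason missions report GM rather than mass is precisely that G is known far less precisely than GM):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approximate)
GM = 324858.60e9     # m^3/s^2, from the reported 324858.60 km^3/s^2
mass_venus = GM / G  # kg; roughly 4.87e24 kg
```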

  4. Inflight estimation of gyro noise

    NASA Technical Reports Server (NTRS)

    Filla, O. H.; Willard, T. Z.; Chu, D.; Deutschmann, Julie

    1990-01-01

    A method is described and demonstrated for estimating single-axis gyro noise levels in terms of the Farrenkopf model parameters. This is accomplished for the Cosmic Background Explorer (COBE) by comparing gyro-propagated attitudes with less accurate single-frame solutions and fitting the squared differences to a third-order polynomial in time. Initial results are consistent with the gyro specifications, and these results are used to determine limits on the duration of batches used to determine attitude. Sources of error are discussed, and guidelines for a more elegant implementation, as part of a batch estimator or filter, are included for future work.

  5. The estimation of microbial biomass.

    PubMed

    Harris, C M; Kell, D B

    1985-01-01

    Methods that have been used to estimate the content, and in some cases the nature, of the microbial biomass in a sample are reviewed. The methods may be categorised in terms of their principle (physical, chemical, biological or mathematical/computational), their speed (real-time or otherwise) and the amount of automation/expense involved. For sparse populations, where the output signal is to be enhanced by growth of the organisms, physical, chemical and biological approaches may be of equal merit, whilst in systems, such as laboratory and industrial fermentations, in which the microbial biomass content is high, physical methods (alone) can permit the real-time estimation of microbial biomass.

  6. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
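    The basic two-point version of this method, for a symmetric X with known mean and standard deviation, evaluates Y only at x+ = μ + σ and x- = μ - σ and averages powers of the two point estimates. A minimal sketch:

```python
import math

def rosenblueth_2pt(y, mean_x, std_x):
    """Two-point estimate of the first two moments of Y = y(X),
    assuming X is symmetric (zero skewness)."""
    y_plus = y(mean_x + std_x)   # point estimate at x+ = mu + sigma
    y_minus = y(mean_x - std_x)  # point estimate at x- = mu - sigma
    e_y = 0.5 * (y_plus + y_minus)         # approximates E[Y]
    e_y2 = 0.5 * (y_plus**2 + y_minus**2)  # approximates E[Y^2]
    var_y = e_y2 - e_y**2
    return e_y, var_y

# Example: Y = exp(X) with X of mean 0 and std 1.
e_y, var_y = rosenblueth_2pt(math.exp, 0.0, 1.0)
```

    For this example the two-point mean is cosh(1) ≈ 1.543, versus the exact e^0.5 ≈ 1.649 for a standard normal X; the method trades exactness for needing only two function evaluations and two moments of X.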

  7. Perceptual frames in frequency estimation.

    PubMed

    Zyłowska, Aleksandra; Kossek, Marcin; Wawrzyniak, Małgorzata

    2014-02-01

    This study is an introductory investigation of cognitive frames, focused on perceptual frames divided into information and formal perceptual frames, which were studied based on sub-additivity of frequency estimations. It was postulated that different presentations of a response scale would result in different percentage estimates of time spent watching TV or using the Internet. The results supported the existence of perceptual frames that influence the perception process and indicated that information perceptual frames had a stronger effect than formal frames. The measures made possible the exploration of the operation of perceptual frames and also outlined the relations between heuristics and cognitive frames. PMID:24765715

  8. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity, and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  9. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. A library of FORTRAN subroutines was developed to facilitate analyses of a variety of estimation problems. An easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage is presented. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  10. Spacecraft platform cost estimating relationships

    NASA Technical Reports Server (NTRS)

    Gruhl, W. M.

    1972-01-01

    The three main cost areas of unmanned satellite development are discussed. The areas are identified as: (1) the spacecraft platform (SCP), (2) the payload or experiments, and (3) the postlaunch ground equipment and operations. The SCP normally accounts for over half of the total project cost and accurate estimates of SCP costs are required early in project planning as a basis for determining total project budget requirements. The development of single formula SCP cost estimating relationships (CER) from readily available data by statistical linear regression analysis is described. The advantages of single formula CER are presented.

  11. Perceptual estimation obeys Occam's razor.

    PubMed

    Gershman, Samuel J; Niv, Yael

    2013-01-01

    Theoretical models of unsupervised category learning postulate that humans "invent" categories to accommodate new patterns, but tend to group stimuli into a small number of categories. This "Occam's razor" principle is motivated by normative rules of statistical inference. If categories influence perception, then one should find effects of category invention on simple perceptual estimation. In a series of experiments, we tested this prediction by asking participants to estimate the number of colored circles on a computer screen, with the number of circles drawn from a color-specific distribution. When the distributions associated with each color overlapped substantially, participants' estimates were biased toward values intermediate between the two means, indicating that subjects ignored the color of the circles and grouped different-colored stimuli into one perceptual category. These data suggest that humans favor simpler explanations of sensory inputs. In contrast, when the distributions associated with each color overlapped minimally, the bias was reduced (i.e., the estimates for each color were closer to the true means), indicating that sensory evidence for more complex explanations can override the simplicity bias. We present a rational analysis of our task, showing how these qualitative patterns can arise from Bayesian computations. PMID:24137136

  12. An Improved Cluster Richness Estimator

    SciTech Connect

    Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; McKay, Timothy; Hao, Jiangang; Evrard, August; Wechsler, Risa H.; Hansen, Sarah; Sheldon, Erin; Johnston, David; Becker, Matthew R.; Annis, James T.; Bleem, Lindsey; Scranton, Ryan

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ_lnLX^2 = (0.86 ± 0.02)^2 to σ_lnLX^2 = (0.69 ± 0.02)^2. Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and we introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  13. Helicopter Toy and Lift Estimation

    ERIC Educational Resources Information Center

    Shakerin, Said

    2013-01-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)

  14. Helicopter Toy and Lift Estimation

    NASA Astrophysics Data System (ADS)

    Shakerin, Said

    2013-05-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight.
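    The back-of-envelope estimate the two records above describe uses the standard lift formula L = ½ρv²AC_L. The density, speed, rotor size, lift coefficient, and toy mass below are hypothetical illustration values, not measurements from the article:

```python
import math

# Back-of-envelope lift estimate for a toy rotor (all numbers hypothetical).
rho = 1.2        # air density, kg/m^3
v = 5.0          # representative blade speed, m/s
radius = 0.05    # rotor radius, m
area = math.pi * radius**2   # disk area swept by the blades, m^2
c_l = 1.0        # assumed lift coefficient
lift = 0.5 * rho * v**2 * area * c_l   # standard lift formula, N

weight = 0.01 * 9.81   # weight of an assumed ~10 g toy, N
```

    With these numbers the lift (~0.12 N) exceeds the weight (~0.10 N), the kind of check the article asks students to perform.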

  15. Runtime Verification with State Estimation

    NASA Technical Reports Server (NTRS)

    Stoller, Scott D.; Bartocci, Ezio; Seyster, Justin; Grosu, Radu; Havelund, Klaus; Smolka, Scott A.; Zadok, Erez

    2011-01-01

    We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, thus making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to "fill in" sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.
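    The classic forward algorithm the abstract extends computes the probability of an observation sequence under an HMM by propagating per-state forward probabilities. A minimal sketch on a toy two-state model (all probabilities hypothetical):

```python
def forward(obs, pi, A, B):
    """Classic HMM forward algorithm: returns P(obs | model).
    pi[i]: initial state probabilities; A[i][j]: transition i -> j;
    B[i][o]: probability that state i emits observation o."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]   # initialization
    for o in obs[1:]:                                  # induction step
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)                                  # termination

# Toy 2-state, 2-symbol model (hypothetical numbers):
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
p = forward([0, 1, 1], pi, A, B)
```

    A useful check is that the probabilities of all possible observation sequences of a given length sum to one.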

  16. Progress Toward Automated Cost Estimation

    NASA Technical Reports Server (NTRS)

    Brown, Joseph A.

    1992-01-01

    Report discusses efforts to develop standard system of automated cost estimation (ACE) and computer-aided design (CAD). Advantage of system is time saved and accuracy enhanced by automating extraction of quantities from design drawings, consultation of price lists, and application of cost and markup formulas.

  17. Empirical equation estimates geothermal gradients

    SciTech Connect

    Kutasov, I.M.

    1995-01-02

    An empirical equation can estimate geothermal (natural) temperature profiles in new exploration areas. These gradients are useful for cement slurry and mud design and for improving electrical and temperature log interpretation. Downhole circulating temperature logs and surface outlet temperatures are used for predicting the geothermal gradients.

  18. ESTIMATING AND PROJECTING IMPERVIOUS COVER

    EPA Science Inventory

    Effective methods to estimate and project impervious cover can help identify areas where a watershed is at risk of changing rapidly from one with relatively pristine streams to one with streams with significant symptoms of degradation. In collaboration with the USEPA, Region 4, ...

  19. Modeling of landslide volume estimation

    NASA Astrophysics Data System (ADS)

    Amirahmadi, Abolghasem; Pourhashemi, Sima; Karami, Mokhtar; Akbari, Elahe

    2016-06-01

    Mass displacement of materials, such as landslides, is considered among the problematic phenomena in the Baqi Basin, located on the southern slopes of Binaloud, Iran, since it destroys agricultural lands and pastures and also increases deposits at the basin exit. Therefore, it is necessary to identify areas which are sensitive to landslides and estimate the volume involved. In the present study, in order to estimate the volume of landslides, information about the depth and area of slides was collected; then, considering regression assumptions, a power regression model was fitted and compared with 17 models suggested for various regions in different countries. The results showed that the mass values estimated by the suggested model were consistent with observed data (P value = 0.000 and R = 0.692) and with some of the existing relations, which implies the efficiency of the suggested model. Also, relations derived from small-area landslides were more suitable than those derived from large-area landslides for use in the Baqi Basin. According to the suggested relation, the average depth of landslides in the Baqi Basin was estimated at 3.314 m, close to the observed value of 4.609 m.
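    Landslide volume-area power regressions of the form V = αA^β are conventionally fitted by ordinary least squares on the logarithms. A minimal sketch, with hypothetical areas and volumes (not the Baqi Basin data):

```python
import math

def fit_power_law(areas, volumes):
    """Fit V = alpha * A**beta by ordinary least squares on log V vs log A."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))   # slope in log-log space
    alpha = math.exp(my - beta * mx)            # intercept back-transformed
    return alpha, beta

# Hypothetical landslide areas (m^2) and volumes (m^3):
areas = [500, 1200, 3000, 8000, 20000]
volumes = [900, 2800, 9500, 33000, 110000]
alpha, beta = fit_power_law(areas, volumes)
```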

  20. State Estimation for Tensegrity Robots

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.

  1. Developmental Change in Numerical Estimation

    ERIC Educational Resources Information Center

    Slusser, Emily B.; Santiago, Rachel T.; Barth, Hilary C.

    2013-01-01

    Mental representations of numerical magnitude are commonly thought to undergo discontinuous change over development in the form of a "representational shift." This idea stems from an apparent categorical shift from logarithmic to linear patterns of numerical estimation on tasks that involve translating between numerical magnitudes and spatial…

  2. The Estimation of Factor Indeterminacy.

    ERIC Educational Resources Information Center

    Budescu, David V.

    1983-01-01

    The degree of indeterminacy of the factor score estimates is biased and can lead to erroneous conclusions regarding the nature of the results. The magnitude of this bias is illustrated and guidelines for describing factor analytic studies using factor scores are offered. (Author/PN)

  3. ESTIMATING IMPERVIOUS COVER FROM REGIONALLY AVAILABLE DATA

    EPA Science Inventory

    The objective of this study is to compare and evaluate the reliability of different approaches for estimating impervious cover including three empirical formulations for estimating impervious cover from population density data, estimation from categorized land cover data, and to ...

  4. Simulation testing of unbiasedness of variance estimators

    USGS Publications Warehouse

    Link, W.A.

    1993-01-01

    In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter 0, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
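    The basic simulation the article formalizes can be sketched directly: draw many samples, compute the estimate X and its variance estimator V each time, and compare the mean of V with the empirical variance of X. Here X is the sample mean and V = s²/n, a case where V is known to be unbiased:

```python
import random
import statistics

def simulate(n_reps=20000, n=10, mu=5.0, sigma=2.0, seed=1):
    """Simulation check of whether V = s^2/n is unbiased for Var(Xbar)."""
    rng = random.Random(seed)
    xbars, vs = [], []
    for _ in range(n_reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        xbars.append(statistics.fmean(sample))
        vs.append(statistics.variance(sample) / n)  # V: estimated Var(Xbar)
    emp_var = statistics.variance(xbars)  # "true" Var(Xbar) via simulation
    return statistics.fmean(vs), emp_var

mean_v, emp_var = simulate()   # both should be near sigma^2/n = 0.4
```

    The article's point about substituting consistent estimators applies to the comparison itself: emp_var is only a simulation-based estimate of the true variance, so the test statistic must account for its own sampling error.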

  5. Mentors, Muses, and Mutuality: Honoring Barbara Snell Dohrenwend

    ERIC Educational Resources Information Center

    Mulvey, Anne

    2012-01-01

    I describe feminist community psychology principles that have the potential to expand and enrich mentoring and that honor Barbara Snell Dohrenwend, a leader who contributed to the research, theory, and profession of community psychology. I reflect on the effect that Barbara Dohrenwend had on my life and on the development of feminist community…

  6. In Pursuit of the Muse: Librarians Who Write.

    ERIC Educational Resources Information Center

    Chepesiuk, Ron

    1991-01-01

    This article profiles six librarians in academic and public libraries who discuss how they balance their dual careers as authors and librarians. The influence of librarianship on their writing is described, the influence of writing on their careers in librarianship is considered, and the problems of finding time for both careers are discussed. (LRW)

  7. Practitioner Meets Philosopher: Bakhtinian Musings on Learning with Paul

    ERIC Educational Resources Information Center

    Johnsson, Mary Chen

    2013-01-01

    The stars and the planets must have been in alignment when Paul Hager needed a doctoral student to work on his research grant at the same time that I had transitioned from 20 years as business practitioner to become an educator interested in workplace learning. This paper explores the Bakhtinian ways in which I learned about learning with Paul,…

  8. Musing on the Use of Dynamic Software and Mathematics Epistemology

    ERIC Educational Resources Information Center

    Santos-Trigo, Manuel; Reyes-Rodriguez, Aaron; Espinosa-Perez, Hugo

    2007-01-01

    Different computational tools may offer teachers and students distinct opportunities in representing, exploring and solving mathematical tasks. In this context, we illustrate that the use of dynamic software (Cabri Geometry) helped high school teachers to think of and represent a particular task dynamically. In this process, the teachers had the…

  9. Musings on the State of the ILS in 2006

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2006-01-01

    It is hard to imagine operating a library today without the assistance of an integrated library system (ILS). Without help from it, library work would be tedious, labor would be intensive, and patrons would be underserved in almost all respects. Given the importance of these automation systems, it is essential that they work well and deliver…

  10. System-level musings about system-level science (Invited)

    NASA Astrophysics Data System (ADS)

    Liu, W.

    2009-12-01

    In teleology, a system has a purpose. In physics, a system has a tendency. For example, a mechanical system has a tendency to lower its potential energy. A thermodynamic system has a tendency to increase its entropy. Therefore, if geospace is seen as a system, what is its tendency? Surprisingly or not, there is no simple answer to this question. Or, to flip the statement, the answer is complex, or complexity. We can understand generally why complexity arises, as the geospace boundary is open to influences from the solar wind and Earth’s atmosphere and components of the system couple to each other in a myriad of ways to make the systemic behavior highly nonlinear. But this still begs the question: What is the system-level approach to geospace science? A reductionist view might assert that as our understanding of a component or subsystem progresses to a certain point, we can couple some together to understand the system on a higher level. However, in practice, a subsystem can almost never be observed in isolation from the others. Even if that were possible, there is no guarantee that the subsystem behavior will not change when coupled to others. Hence, there is no guarantee that a subsystem, such as the ring current, has an innate and intrinsic behavior like a hydrogen atom. An absolutist conclusion from this logic can be sobering, as one would have to trace a flash of aurora to the nucleosynthesis in the solar core. The practical answer, however, is more promising; it is a mix of the common sense we call reductionism and an awareness that, especially when strongly coupled, subsystems can experience behavioral changes, breakdowns, and catastrophes. If the stock answer to the systemic tendency of geospace is complexity, the objective of the system-level approach to geospace science is to define, measure, and understand this complexity. I will use the example of magnetotail dynamics to illuminate some key points in this talk.

  11. Musings: Does Criticizing Your Child's Teacher Disempower Your Child?

    ERIC Educational Resources Information Center

    Gross, Miraca U. M.

    2003-01-01

    This article discusses the negative effects of criticizing a child's teacher in front of the child and the positive effects of modeling a healthy respect of the educational system. Study findings are discussed which indicate high-achieving children saw themselves as active partners with their teachers or coaches, not empty vessels. (Contains 6…

  12. The muse in the machine: computers can help us compose

    NASA Astrophysics Data System (ADS)

    Greenhough, M.

    1990-01-01

    A method of producing musical structures by means of a constrained random process is described. Real-time operation allows intuitive control. Musical samples from a computer system can 'evolve' Darwinian-style in the environment provided by the operator's ear-brain.
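A constrained random process of this general flavor can be sketched as follows (a toy illustration; the scale, leap constraint, and note names are invented for the example and are not Greenhough's system):

```python
import random

def constrained_melody(n_notes, scale, max_leap=2, seed=0):
    """Constrained random walk over a scale: each note is drawn at random
    but may move at most `max_leap` scale steps from the previous note."""
    rng = random.Random(seed)
    idx = rng.randrange(len(scale))
    melody = [scale[idx]]
    for _ in range(n_notes - 1):
        lo = max(0, idx - max_leap)
        hi = min(len(scale) - 1, idx + max_leap)
        idx = rng.randint(lo, hi)   # constraint keeps the line singable
        melody.append(scale[idx])
    return melody

c_major = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
tune = constrained_melody(16, c_major)
```

Tightening or loosening `max_leap` is one knob an operator could adjust in real time, in the spirit of the intuitive control described in the abstract.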

  13. The Unembarrassed Muse: The Popular Arts in America.

    ERIC Educational Resources Information Center

    Nye, Russel

    This book is a study of certain of the popular arts in American society, that is, the arts in their customarily accepted genres. "Popular" is interpreted to mean "generally dispersed and approved"--descriptive of those artistic productions which express the taste and understanding of the majority and which are free of control, in content and…

  14. Research using blogs for data: public documents or private musings?

    PubMed

    Eastham, Linda A

    2011-08-01

    Nursing and other health sciences researchers increasingly find blogs to be valuable sources of information for investigating illness and other human health experiences. When researchers use blogs as their exclusive data source, they must discern the public/private aspects inherent in the nature of blogs in order to plan for appropriate protection of the bloggers' identities. Approaches to the protection of human subjects are poorly addressed when the human subject is a blogger and the blog is used as an exclusive source of data. Researchers may be assisted to protect human subjects via a decisional framework for assessing a blog author's intended position on the public/private continuum.

  15. Presence of Mind... A Reaction to Sheridan's "Musing on Telepresence"

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    What are the benefits and significance of developing a scientifically useful measure of the human sense of presence in an environment? Such a scale could be conceived to measure the extent to which users of telerobotics interfaces feel or behave as if they were present at the site of a remotely controlled robot. The essay examines some issues raised in order to identify characteristics a scale of 'presence' ought to have to be useful as an explanatory scientific concept. It also addresses the utility of worrying about developing such a scale at all. To be useful in the same manner as a traditional scientific concept such as mass, for example, it is argued that such scales not only need to be precisely defined and co-vary with determinative factors but also need to establish equivalence classes of their independent constituents. This simplifying property is important for either subjective or objective scales of presence and arises if the constituents of presence are truly independent.

  16. Musings on cosmological relaxation and the hierarchy problem

    NASA Astrophysics Data System (ADS)

    Jaeckel, Joerg; Mehta, Viraf M.; Witkowski, Lukas T.

    2016-03-01

    Recently Graham, Kaplan and Rajendran proposed cosmological relaxation as a mechanism for generating a hierarchically small Higgs vacuum expectation value. Inspired by this we collect some thoughts on steps towards a solution to the electroweak hierarchy problem and apply them to the original model of cosmological relaxation [Phys. Rev. Lett. 115, 221801 (2015)]. To do so, we study the dynamics of the model and determine the relation between the fundamental input parameters and the electroweak vacuum expectation value. Depending on the input parameters the model exhibits three qualitatively different regimes, two of which allow for hierarchically small Higgs vacuum expectation values. One leads to standard electroweak symmetry breaking whereas in the other regime electroweak symmetry is mainly broken by a Higgs source term. While the latter is not acceptable in a model based on the QCD axion, in non-QCD models this may lead to new and interesting signatures in Higgs observables. Overall, we confirm that cosmological relaxation can successfully give rise to a hierarchically small Higgs vacuum expectation value if (at least) one model parameter is chosen sufficiently small. However, we find that the required level of tuning for achieving this hierarchy in relaxation models can be much more severe than in the Standard Model.

  17. Supersymmetric musings on the predictivity of family symmetries

    SciTech Connect

    Kadota, Kenji; Kersten, Joern; Velasco-Sevilla, Liliana

    2010-10-15

    We discuss the predictivity of family symmetries for the soft supersymmetry breaking parameters in the framework of supergravity. We show that unknown details of the messenger sector and the supersymmetry breaking hidden sector enter into the soft parameters, making it difficult to obtain robust predictions. We find that there are specific choices of messenger fields which can improve the predictivity for the soft parameters.

  18. Musings on male sex work: a "virtual" discussion.

    PubMed

    Harriman, Rebecca L; Johnston, Barry; Kenny, Paula M

    2007-01-01

    Contributors and editors were asked to respond to a series of questions concerning male sex work in order to stimulate an informal "conversation." Some of the topics explored include: why people seek the services of prostitutes; is the term "sex work" preferable to "prostitution"; is it right to pay for sex; and is exploitation a necessary part of the sex worker/client interchange? Contributors' responses were compiled and listed in the order they were received. Common elements of their responses are summarized and the advantages of this informal approach are articulated. PMID:18019078

  19. Traceability of radiation measurements: musings of a user

    SciTech Connect

    Kathren, R.L.

    1980-04-01

    Although users of radiation desire measurement traceability for a number of reasons, including legal, regulatory, contractual, and quality assurance requirements, there exists no real definition of the term in the technical literature. Definitions are proposed for both traceability and traceability to the National Bureau of Standards. The hierarchy of radiation standards is discussed and allowable uncertainties are given for each level. Areas of need with respect to radiation standards are identified, and a system of secondary radiation calibration laboratories is proposed as a means of providing quality calibrations and traceability on a routine basis.

  20. Sing, muse: songs in Homer and in hospital.

    PubMed

    Marshall, Robert; Bleakley, Alan

    2011-06-01

    This paper progresses the original argument of Richard Ratzan that formal presentation of the medical case history follows a Homeric oral-formulaic tradition. The everyday work routines of doctors involve a ritual poetics, where the language of recounting the patient’s ‘history’ offers an explicitly aesthetic enactment or performance that can be appreciated and given meaning within the historical tradition of Homeric oral poetry and the modernist aesthetic of Minimalism. This ritual poetics shows a reliance on traditional word usages that crucially act as tools for memorisation and performance and can be linked to forms of clinical reasoning; both contain a tension between the oral and the written record, questioning the priority of the latter; and the performance of both helps to create the Janus-faced identity of the doctor as a ‘performance artist’ or ‘medical bard’ in identifying with medical culture and maintaining a positive difference from the patient as audience, offering a valid form of patient-centredness. PMID:21744518

  1. Musings in the Wake of Columbine: What Can Schools Do?

    ERIC Educational Resources Information Center

    Raywid, Mary Anne; Oshiyama, Libby

    2000-01-01

    As suggested by standard indicators--truancy, dropout rates, graffiti, vandalism, violence--youngsters in small schools rarely display the anger at the institution and its inhabitants that typifies Columbine and many other comprehensive high schools. Educators must cultivate learning communities and qualities (like empathy and compassion)…

  2. Marrying the "Muse" and the Thinker "Poetry as Scientific Writing"

    ERIC Educational Resources Information Center

    Marcum-Dietrich, Nanette I.; Byrne, Eileen; O'Hern, Brenda

    2009-01-01

    This article describes an unlikely collaboration between a high school chemistry teacher and a high school English teacher who attempted to teach scientific concepts through poetry. Inspired by poet John Updike's (1960) "Cosmic Gall," these two teachers crafted writing tasks aimed at teaching science content through literary devices. The result…

  3. Musing on the Memes of Open and Distance Education

    ERIC Educational Resources Information Center

    Latchem, Colin

    2014-01-01

    Just as genes propagate themselves in the gene pool by leaping from body to body, so memes (ideas, behaviours, and actions) transmit cultural ideas or practices from one mind to another through writing, speech, or other imitable phenomena. This paper considers the memes that influence the evolution of open and distance education. If the…

  4. The painful muse: migrainous artistic archetypes from visual cortex.

    PubMed

    Aguggia, Marco; Grassi, Enrico

    2014-05-01

    Neurological diseases which traditionally constituted obstacles to artistic creation can, in the case of migraine, be transformed by artists into a source of inspiration and artistic production. These phenomena represent a chapter of a broader embryonic neurobiology of painting. PMID:24867837

  5. Looking for the Muse in Some of the Right Places.

    ERIC Educational Resources Information Center

    Pariser, David A.

    1999-01-01

    Discusses C. Milbrath's thesis that artistically talented and less talented children follow different developmental paths because they rely on different ways of responding to the world. Relates this thesis to studies of the childhood work of Paul Klee, Henri Toulouse Lautrec, and Pablo Picasso. (SLD)

  6. Bayesian Estimation of Conditional Independence Graphs Improves Functional Connectivity Estimates

    PubMed Central

    Hinne, Max; Janssen, Ronald J.; Heskes, Tom; van Gerven, Marcel A.J.

    2015-01-01

    Functional connectivity concerns the correlated activity between neuronal populations in spatially segregated regions of the brain, which may be studied using functional magnetic resonance imaging (fMRI). This coupled activity is conveniently expressed using covariance, but this measure fails to distinguish between direct and indirect effects. A popular alternative that addresses this issue is partial correlation, which regresses out the signal of potentially confounding variables, resulting in a measure that reveals only direct connections. Importantly, provided the data are normally distributed, if two variables are conditionally independent given all other variables, their respective partial correlation is zero. In this paper, we propose a probabilistic generative model that allows us to estimate functional connectivity in terms of both partial correlations and a graph representing conditional independencies. Simulation results show that this methodology is able to outperform the graphical LASSO, which is the de facto standard for estimating partial correlations. Furthermore, we apply the model to estimate functional connectivity for twenty subjects using resting-state fMRI data. Results show that our model provides a richer representation of functional connectivity as compared to considering partial correlations alone. Finally, we demonstrate how our approach can be extended in several ways, for instance to achieve data fusion by informing the conditional independence graph with data from probabilistic tractography. As our Bayesian formulation of functional connectivity provides access to the posterior distribution instead of only to point estimates, we are able to quantify the uncertainty associated with our results. This reveals that while we are able to infer a clear backbone of connectivity in our empirical results, the data are not accurately described by simply looking at the mode of the distribution over connectivity. The implication of this is that
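The contrast between covariance and partial correlation described above can be illustrated with a small sketch (plain inversion of the sample covariance, not the authors' Bayesian model or the graphical LASSO):

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the precision (inverse covariance):
    rho_ij = -P_ij / sqrt(P_ii * P_jj), with the diagonal set to 1."""
    cov = np.cov(data, rowvar=False)
    prec = np.linalg.inv(cov)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Example: chain x -> y -> z. x and z are marginally correlated but
# conditionally independent given y, so their partial correlation ~ 0.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
pc = partial_correlations(np.column_stack([x, y, z]))
```

Here the x–z entry of the partial correlation matrix is driven toward zero while the marginal correlation stays large, which is exactly the direct-versus-indirect distinction the abstract describes.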

  7. Estimating sediment discharge: Appendix D

    USGS Publications Warehouse

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

    Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with

  8. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing this duty cycle are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  9. Estimating diversity via frequency ratios.

    PubMed

    Willis, Amy; Bunge, John

    2015-12-01

    We wish to estimate the total number of classes in a population based on sample counts, especially in the presence of high latent diversity. Drawing on probability theory that characterizes distributions on the integers by ratios of consecutive probabilities, we construct a nonlinear regression model for the ratios of consecutive frequency counts. This allows us to predict the unobserved count and hence estimate the total diversity. We believe that this is the first approach to depart from the classical mixed Poisson model in this problem. Our method is geometrically intuitive and yields good fits to data with reasonable standard errors. It is especially well-suited to analyzing high diversity datasets derived from next-generation sequencing in microbial ecology. We demonstrate the method's performance in this context and via simulation, and we present a dataset for which our method outperforms all competitors. PMID:26038228
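The ratio idea can be sketched under a deliberately simplified single-Poisson abundance model (the paper fits a more flexible nonlinear regression to the ratios; the sampler, threshold, and parameter values below are assumptions for the toy example):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for drawing a Poisson(lam) variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def estimate_total_diversity(counts):
    """Predict unseen classes from ratios of consecutive frequency counts.
    counts: dict {j: f_j}, f_j = number of classes observed j times.
    Under Poisson(lam), E[f_{j+1}]/E[f_j] = lam/(j+1), so each ratio
    (j+1)*f_{j+1}/f_j estimates lam; extrapolating to j=0 gives f_0."""
    ratios = [(j + 1) * counts[j + 1] / counts[j]
              for j in sorted(counts)
              if j + 1 in counts and counts[j] >= 5]
    lam = sum(ratios) / len(ratios)
    f0 = counts[1] / lam              # predicted unobserved count
    return sum(counts.values()) + f0  # observed classes + unseen classes

# Simulated community: 300 classes with Poisson(2) counts; zeros unseen.
rng = random.Random(4)
freq = {}
for _ in range(300):
    c = poisson(2.0, rng)
    if c > 0:
        freq[c] = freq.get(c, 0) + 1
est = estimate_total_diversity(freq)
```

With 300 true classes and mean abundance 2, roughly e⁻² ≈ 13.5% of classes go unobserved, and the ratio extrapolation recovers an estimate near the true total.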

  10. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
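The claimed advantage over normal-equation processing can be illustrated with a short sketch (NumPy rather than the report's FORTRAN library; the Vandermonde test problem is an assumption chosen to expose the conditioning issue):

```python
import numpy as np

# Ill-conditioned least-squares problem: monomial (Vandermonde) basis.
t = np.linspace(0, 1, 50)
A = np.column_stack([t**k for k in range(8)])
x_true = np.ones(8)
b = A @ x_true

# Normal equations square the condition number of A.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization works with A directly and is far better conditioned.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

err_normal = np.linalg.norm(x_normal - x_true)
err_qr = np.linalg.norm(x_qr - x_true)
```

Orthogonalization-based solvers of this kind are the standard reason square-root (factorized) formulations are preferred over normal-equation and covariance-form Kalman processing for numerically delicate orbit-determination problems.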

  11. Estimation of urban stormwater quality

    USGS Publications Warehouse

    Jennings, Marshall E.; Tasker, Gary D.

    1988-01-01

    Two data-based methods for estimating urban stormwater quality have recently been made available - a planning level method developed by the U.S. Environmental Protection Agency (EPA), and a nationwide regression method developed by the U.S. Geological Survey. Each method uses urban stormwater water-quality constituent data collected for the Nationwide Urban Runoff Program (NURP) during 1979-83. The constituents analyzed include 10 chemical constituents - chemical oxygen demand (COD), total suspended solids (TSS), dissolved solids (DS), total nitrogen (TN), total ammonia plus nitrogen (AN), total phosphorus (TP), dissolved phosphorus (DP), total copper (CU), total lead (PB), and total zinc (ZN). The purpose of this report is to briefly compare features of the two estimation methods.

  12. Operator estimates in homogenization theory

    NASA Astrophysics Data System (ADS)

    Zhikov, V. V.; Pastukhova, S. E.

    2016-06-01

    This paper gives a systematic treatment of two methods for obtaining operator estimates: the shift method and the spectral method. Though substantially different in mathematical technique and physical motivation, these methods produce basically the same results. Besides the classical formulation of the homogenization problem, other formulations of the problem are also considered: homogenization in perforated domains, the case of an unbounded diffusion matrix, non-self-adjoint evolution equations, and higher-order elliptic operators. Bibliography: 62 titles.

  13. Position Estimation Using Image Derivative

    NASA Technical Reports Server (NTRS)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
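A circle fit to limb edge points can be sketched as follows (an algebraic Kåsa least-squares fit shown for illustration, not the paper's circular-sigmoid nonlinear least squares; the synthetic edge points are assumptions):

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit. Expanding
    (x-a)^2 + (y-b)^2 = r^2 gives the linear system
    x^2 + y^2 = 2*a*x + 2*b*y + c, with r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Synthetic "limb" edge points on a noisy circle (center and radius known).
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
cx, cy, radius = 40.0, -10.0, 25.0
x = cx + radius * np.cos(theta) + 0.05 * rng.normal(size=200)
y = cy + radius * np.sin(theta) + 0.05 * rng.normal(size=200)
a, b, r = fit_circle(x, y)
```

In a full pipeline the edge points would first be selected as the maxima of the image derivative and filtered for outliers, as the abstract describes, before any circle model is fit.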

  14. Estimating the coherence of noise

    NASA Astrophysics Data System (ADS)

    Wallman, Joel

    To harness the advantages of quantum information processing, quantum systems have to be controlled to within some maximum threshold error. Certifying whether the error is below the threshold is possible by performing full quantum process tomography, however, quantum process tomography is inefficient in the number of qubits and is sensitive to state-preparation and measurement errors (SPAM). Randomized benchmarking has been developed as an efficient method for estimating the average infidelity of noise to the identity. However, the worst-case error, as quantified by the diamond distance from the identity, can be more relevant to determining whether an experimental implementation is at the threshold for fault-tolerant quantum computation. The best possible bound on the worst-case error (without further assumptions on the noise) scales as the square root of the infidelity and can be orders of magnitude greater than the reported average error. We define a new quantification of the coherence of a general noise channel, the unitarity, and show that it can be estimated using an efficient protocol that is robust to SPAM. Furthermore, we also show how the unitarity can be used with the infidelity obtained from randomized benchmarking to obtain improved estimates of the diamond distance and to efficiently determine whether experimental noise is close to stochastic Pauli noise.

  15. State Estimation for K9

    NASA Technical Reports Server (NTRS)

    Xu, Ru-Gang; Koga, Dennis (Technical Monitor)

    2001-01-01

    The goal of 'Estimate' is to take advantage of attitude information to produce better pose while staying flexible and robust. Currently there are several instruments that are used for attitude: gyros, inclinometers, and compasses. However, precise and useful attitude information cannot come from one instrument. Integration of rotational rates, from gyro data for example, would result in drift. Therefore, although gyros are accurate in the short-term, accuracy in the long term is unlikely. Using absolute instruments such as compasses and inclinometers can result in an accurate measurement of attitude in the long term. However, in the short term, the physical nature of compasses and inclinometers, and the dynamic nature of a mobile platform result in highly volatile and therefore useless data. The solution then is to use both absolute and relative data. Kalman Filtering is known to be able to combine gyro and compass/inclinometer data to produce stable and accurate attitude information. Since the model of motion is linear and the data comes in as discrete samples, a Discrete Kalman Filter was selected as the core of the new estimator. Therefore, 'Estimate' can be divided into two parts: the Discrete Kalman Filter and the code framework.
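The gyro/inclinometer fusion described above can be sketched as a one-dimensional discrete Kalman filter (a minimal illustration with invented noise parameters, not the K9 estimator code):

```python
import math
import random

def fuse_attitude(gyro_rates, incl_angles, dt=0.01, q=1e-4, r=0.05):
    """1-D discrete Kalman filter: integrate the gyro rate (prediction),
    then correct with the absolute but noisy inclinometer reading."""
    angle, p = 0.0, 1.0
    out = []
    for w, z in zip(gyro_rates, incl_angles):
        # Predict: integrate the rate; q models gyro noise/drift growth.
        angle += w * dt
        p += q
        # Update with the absolute measurement (variance r).
        k = p / (p + r)              # Kalman gain
        angle += k * (z - angle)
        p *= (1.0 - k)
        out.append(angle)
    return out

# Simulated slow tilt with a biased gyro and a noisy inclinometer.
rng = random.Random(2)
n, dt = 2000, 0.01
true = [0.3 * math.sin(0.5 * i * dt) for i in range(n)]
rate = [0.3 * 0.5 * math.cos(0.5 * i * dt) + 0.02 + 0.01 * rng.gauss(0, 1)
        for i in range(n)]          # 0.02 rad/s bias stands in for drift
meas = [a + 0.2 * rng.gauss(0, 1) for a in true]
est = fuse_attitude(rate, meas, dt)
```

The filtered estimate tracks the true angle far more tightly than either the drifting gyro integral or the raw inclinometer samples alone, which is precisely the short-term/long-term trade the abstract motivates.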

  16. Bayes Estimators for Phylogenetic Reconstruction

    PubMed Central

    Huggins, P. M.; Li, W.; Haws, D.; Friedrich, T.; Liu, J.; Yoshida, R.

    2011-01-01

    Tree reconstruction methods are often judged by their accuracy, measured by how close they get to the true tree. Yet, most reconstruction methods like maximum likelihood (ML) do not explicitly maximize this accuracy. To address this problem, we propose a Bayesian solution. Given tree samples, we propose finding the tree estimate that is closest on average to the samples. This “median” tree is known as the Bayes estimator (BE). The BE literally maximizes posterior expected accuracy, measured in terms of closeness (distance) to the true tree. We discuss a unified framework of BE trees, focusing especially on tree distances that are expressible as squared euclidean distances. Notable examples include Robinson–Foulds (RF) distance, quartet distance, and squared path difference. Using both simulated and real data, we show that BEs can be estimated in practice by hill-climbing. In our simulation, we find that BEs tend to be closer to the true tree, compared with ML and neighbor joining. In particular, the BE under squared path difference tends to perform well in terms of both path difference and RF distances. PMID:21471560
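For tree distances expressible as squared euclidean distances, minimizing the mean squared distance to the samples is equivalent to minimizing the distance to the sample mean vector, which a small sketch can exploit (the split-indicator encoding and candidate set below are toy assumptions, not the paper's hill-climbing search):

```python
import numpy as np

def bayes_estimator(samples, candidates):
    """Pick the candidate minimizing the mean squared euclidean distance
    to the posterior samples. Since mean_i ||c - s_i||^2 equals
    ||c - mean||^2 plus a constant, it suffices to compare each
    candidate's distance to the sample mean."""
    mean = samples.mean(axis=0)
    dists = ((candidates - mean) ** 2).sum(axis=1)
    return int(np.argmin(dists))

# Toy posterior: each row is a 0/1 split-indicator vector of a tree sample
# (RF distance between trees is the squared euclidean distance here).
samples = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
])
candidates = np.array([
    [1, 1, 0, 0],   # agrees with most posterior samples
    [0, 0, 1, 1],
    [1, 0, 1, 1],
])
best = bayes_estimator(samples, candidates)
```

The real problem is harder because the candidate set is the space of all trees, hence the hill-climbing search over tree space described in the abstract.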

  17. Loss estimation of Membramo earthquake

    NASA Astrophysics Data System (ADS)

    Damanik, R.; Sedayo, H.

    2016-05-01

    Papua tectonics are dominated by the oblique collision of the Pacific plate along the north side of the island. A very high relative plate motion (i.e. 120 mm/year) between the Pacific and Papua-Australian Plates gives this region a very high earthquake production rate, about twice as much as that of Sumatra, the western margin of Indonesia. Most of the seismicity occurring beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9), July 28th 2015, a large earthquake of Mw = 7.0 occurred at the West Mamberamo Fault System. The earthquake focal mechanisms are dominated by northwest-trending thrust mechanisms. A GMPE and ATC vulnerability curves were used to estimate the distribution of damage. The mean estimated loss caused by this earthquake is IDR 78.6 billion. We estimate that insured losses will be only a small portion of the total, largely due to deductibles.
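The GMPE-plus-vulnerability-curve pipeline can be sketched generically (all coefficients below are invented placeholders for illustration, not the actual GMPE or ATC curves used in the study):

```python
import math

def pga_attenuation(magnitude, distance_km):
    """Toy ground-motion prediction equation (illustrative coefficients,
    not a published GMPE): ln(PGA in g) = a + b*M - c*ln(R + 10)."""
    return math.exp(-3.5 + 0.8 * magnitude - 1.2 * math.log(distance_km + 10.0))

def damage_ratio(pga):
    """Toy vulnerability curve: mean damage ratio rising smoothly with PGA."""
    return 1.0 / (1.0 + math.exp(-8.0 * (pga - 0.35)))

def expected_loss(magnitude, assets):
    """assets: list of (distance_km, replacement_value) pairs.
    Loss = sum over assets of value * damage ratio at the local shaking."""
    return sum(value * damage_ratio(pga_attenuation(magnitude, dist))
               for dist, value in assets)

# Hypothetical exposure at three distances from the rupture.
assets = [(15.0, 4e9), (40.0, 9e9), (120.0, 2e10)]
loss = expected_loss(7.0, assets)
```

A real loss study would replace each toy function with the calibrated GMPE and the ATC fragility/vulnerability curves, and would integrate over uncertainty rather than using point values.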

  18. Abundance estimation and conservation biology

    USGS Publications Warehouse

    Nichols, J.D.; MacKenzie, D.I.

    2004-01-01

    Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention

  19. Parameter Estimation Using VLA Data

    NASA Astrophysics Data System (ADS)

    Venter, Willem C.

The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques also results in estimated parameters that are not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. 
This technique consists of two parts: in the first part the original observed data are smoothed using a running window, and in the second part a similar smoothing is applied to the estimated parameters.
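The running-window step of this "double smoothing" can be sketched as a simple moving average; the window length and the pixel values below are illustrative, not from the dissertation:

```python
import numpy as np

def running_window_smooth(values, window=3):
    """Running-mean smoothing over a 1-D pixel row; in the 'double smoothing'
    scheme this is applied first to the observed data, then to the estimates."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

# A noisy pixel row: alternating spikes are damped toward their local mean.
row = np.array([1.0, 5.0, 1.0, 5.0, 1.0])
smoothed = running_window_smooth(row)
```

Smoothing the observations before inversion, and the parameter maps after, trades spatial resolution for stability of the nonlinear solve.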

  20. Estimating the Costs of Preventive Interventions

    ERIC Educational Resources Information Center

    Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin

    2007-01-01

    The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…

  1. Psychometric Properties of IRT Proficiency Estimates

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Tong, Ye

    2010-01-01

    Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…

  2. Some Reliability Estimates for Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Nicewander, W. Alan; Thomasson, Gary L.

    1999-01-01

    Derives three reliability estimates for the Bayes modal estimate (BME) and the maximum-likelihood estimate (MLE) of theta in computerized adaptive tests (CATs). Computes the three reliability estimates and the true reliabilities of both BME and MLE for seven simulated CATs. Results show the true reliabilities for BME and MLE to be nearly identical…

  3. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Bacterial estimate. 58.135 Section 58.135 Agriculture... Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate... of Testing. A laboratory examination to determine the bacterial estimate shall be made on...

  4. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Cost estimate. 261.142 Section 261.142... Materials § 261.142 Cost estimate. (a) The owner or operator must have a detailed written estimate, in... facility. (1) The estimate must equal the cost of conducting the activities described in paragraph (a)...

  5. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
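In code, the inverse-variance weighting the publication describes might look like the following sketch; the function name and the example numbers are illustrative, not USGS values:

```python
def weighted_estimate(x_site, var_site, x_regr, var_regr):
    """Combine two independent flow statistics by inverse-variance weighting.
    Returns the weighted estimate and its (reduced) variance."""
    w_site = 1.0 / var_site
    w_regr = 1.0 / var_regr
    x_w = (w_site * x_site + w_regr * x_regr) / (w_site + w_regr)
    var_w = 1.0 / (w_site + w_regr)  # always <= min(var_site, var_regr)
    return x_w, var_w

# At-site 1% AEP flow of 500 (variance 0.04) combined with a regional
# regression estimate of 450 (variance 0.02): the result leans toward
# the more certain regional value.
x_w, var_w = weighted_estimate(500.0, 0.04, 450.0, 0.02)
```

The weighted variance is smaller than either input variance, which is the uncertainty reduction the guidelines rely on.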

  6. Estimated Radiation Dosage on Mars

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This global map of Mars shows the estimated radiation dosages from cosmic rays reaching the surface, a serious health concern for any future human exploration of the planet.

The estimates are based on cosmic-radiation measurements by the Mars radiation environment experiment, an instrument on NASA's 2001 Mars Odyssey spacecraft, plus information about Mars' surface elevations from the laser altimeter instrument on NASA's Mars Global Surveyor. The areas of Mars expected to have the lowest levels of cosmic radiation are where the elevation is lowest, because those areas have more atmosphere above them to block out some of the radiation. Earth's thick atmosphere shields us from most cosmic radiation, but Mars has a much thinner atmosphere than we have on Earth.

The colors in the map refer to the estimated annual dose equivalent in rems, a unit of radiation dose. The range is generally from 10 rems (color-coded dark blue) to 20 rems (color-coded dark red). Radiation exposure for astronauts on the International Space Station in Earth orbit is typically equivalent to an annualized rate of 20 to 40 rems.

NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the 2001 Mars Odyssey and Mars Global Surveyor missions for NASA's Office of Space Science, Washington, D.C. The Mars radiation environment experiment was developed by NASA's Johnson Space Center, Houston. Lockheed Martin Astronautics, Denver, is the prime contractor for Odyssey, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  7. Multiple tumours in survival estimates.

    PubMed

    Rosso, Stefano; De Angelis, Roberta; Ciccolallo, Laura; Carrani, Eugenio; Soerjomataram, Isabelle; Grande, Enrico; Zigon, Giulia; Brenner, Hermann

    2009-04-01

    In international comparisons of cancer registry based survival it is common practice to restrict the analysis to first primary tumours and exclude multiple cancers. The probability of correctly detecting subsequent cancers depends on the registry's running time, which results in different proportions of excluded patients and may lead to biased comparisons. We evaluated the impact on the age-standardised relative survival estimates of also including multiple primary tumours. Data from 2,919,023 malignant cancers from 69 European cancer registries participating in the EUROCARE-4 collaborative study were used. A total of 183,683 multiple primary tumours were found, with an overall proportion of 6.3% over all the considered cancers, ranging from 0.4% (Naples, Italy) to 12.9% (Iceland). The proportion of multiple tumours varied greatly by type of tumour, being higher for those with high incidence and long survival (breast, prostate and colon-rectum). Five-year relative survival was lower when including patients with multiple cancers. For all cancers combined the average difference was -0.4 percentage points in women and -0.7 percentage points in men, and was greater for older registries. Inclusion of multiple tumours led to lower survival in 44 out of 45 cancer sites analysed, with the greatest differences found for larynx (-1.9%), oropharynx (-1.5%), and penis (-1.3%). Including multiple primary tumours in survival estimates for international comparison is advisable because it reduces the bias due to different observation periods, age, registration quality and completeness of registration. The general effect of inclusion is to reduce survival estimates by a variable amount depending on the proportion of multiple primaries and cancer site.

  8. Estimating emissions from accidental releases

    SciTech Connect

    Wolf, D.B.

    1996-12-31

The Clean Air Act Amendments (CAAA) of 1990 aim to control sources of air emissions through programs such as Title III, which is aimed at reducing hazardous air pollutant emissions. However, under Section 112(r) of the CAAA of 1990, the U.S. Environmental Protection Agency (EPA) has also developed requirements for owners and operators of facilities regulated for hazardous substances to implement accidental release prevention programs for non-continuous emissions. Provisions of 112(r) include programs for release prevention, emergency planning and risk management. This paper examines methodologies available to regulated facilities for estimating accidental release emissions and determining off-site impacts.

  9. Solar power satellite cost estimate

    NASA Technical Reports Server (NTRS)

    Harron, R. J.; Wadle, R. C.

    1981-01-01

The solar power configuration costed is the 5 GW silicon solar cell reference system. The subsystems are identified by work breakdown structure elements down to the lowest level for which cost information was generated. This breakdown divides into five sections: the satellite, construction, transportation, the ground receiving station and maintenance. For each work breakdown structure element, a definition, design description and cost estimate were included. An effort was made to include for each element a reference that more thoroughly describes the element and the method of costing used. All costs are in 1977 dollars.

  10. Estimation of ground motion parameters

    USGS Publications Warehouse

    Boore, David M.; Joyner, W.B.; Oliver, A.A.; Page, R.A.

    1978-01-01

Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. A subset of the data (from the San Fernando earthquake) is used to assess the effects of structural size and of geologic site conditions on peak motions recorded at the base of structures. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. The peak acceleration tends to be less and the peak velocity and displacement tend to be greater on the average at the base of large structures than at the base of small structures. In the distance range used in the regression analysis (15-100 km) the values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Some consideration is given to the prediction of ground motions at close distances where there are insufficient recorded data points. As might be expected from the lack of data, published relations for predicting peak horizontal acceleration give widely divergent estimates at close distances (three well-known relations predict accelerations between 0.33 g and slightly over 1 g at a distance of 5 km from a magnitude 6.5 earthquake). After considering the physics of the faulting process, the few available data close to faults, and the modifying effects of surface topography, at the present time it would be difficult to accept estimates less than about 0.8 g, 110 cm/s, and 40 cm, respectively, for the mean values of peak acceleration, velocity, and displacement at rock sites

  11. Estimation of continental precipitation recycling

    SciTech Connect

Brubaker, K.L.; Entekhabi, D.; Eagleson, P.S.

    1993-06-01

The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from the surrounding areas external to the region and (2) evaporation and transpiration from the land surface within the region. The latter supply mechanism is tantamount to the recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. Gridded data on observed wind and humidity in the global atmosphere are used to determine the convergence of atmospheric water vapor over continental regions. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. The results indicate that the contribution of regional evaporation to regional precipitation varies substantially with location and season. For the regions studied, the ratio of locally contributed to total monthly precipitation generally lies between 0.10 and 0.30 but is as high as 0.40 in several cases. 48 refs., 7 figs., 4 tabs.

  12. Estimating Bayesian Phylogenetic Information Content

    PubMed Central

    Lewis, Paul O.; Chen, Ming-Hui; Kuo, Lynn; Lewis, Louise A.; Fučíková, Karolina; Neupane, Suman; Wang, Yu-Bo; Shi, Daoyuan

    2016-01-01

    Measuring the phylogenetic information content of data has a long history in systematics. Here we explore a Bayesian approach to information content estimation. The entropy of the posterior distribution compared with the entropy of the prior distribution provides a natural way to measure information content. If the data have no information relevant to ranking tree topologies beyond the information supplied by the prior, the posterior and prior will be identical. Information in data discourages consideration of some hypotheses allowed by the prior, resulting in a posterior distribution that is more concentrated (has lower entropy) than the prior. We focus on measuring information about tree topology using marginal posterior distributions of tree topologies. We show that both the accuracy and the computational efficiency of topological information content estimation improve with use of the conditional clade distribution, which also allows topological information content to be partitioned by clade. We explore two important applications of our method: providing a compelling definition of saturation and detecting conflict among data partitions that can negatively affect analyses of concatenated data. [Bayesian; concatenation; conditional clade distribution; entropy; information; phylogenetics; saturation.] PMID:27155008
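The core idea, measuring information as the drop in entropy from prior to posterior over tree topologies, can be sketched with a toy three-topology example; this is not the authors' conditional-clade implementation, just the entropy-difference definition:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def topology_information(prior_probs, posterior_probs):
    """Information content = prior entropy minus posterior entropy."""
    return entropy(prior_probs) - entropy(posterior_probs)

# Uniform prior over the 3 unrooted 4-taxon topologies versus a posterior
# concentrated on one topology: positive information means the data are
# informative; zero means posterior == prior (no topological signal).
prior = [1/3, 1/3, 1/3]
posterior = [0.90, 0.05, 0.05]
info = topology_information(prior, posterior)
```

With a uniform prior this reduces to the familiar "maximum entropy minus observed entropy" measure; the paper's contribution is estimating the posterior term efficiently via the conditional clade distribution.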

  13. Estimation of continental precipitation recycling

    NASA Technical Reports Server (NTRS)

    Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, P. S.

    1993-01-01

The total amount of water that precipitates on large continental regions is supplied by two mechanisms: 1) advection from the surrounding areas external to the region and 2) evaporation and transpiration from the land surface within the region. The latter supply mechanism is tantamount to the recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. Gridded data on observed wind and humidity in the global atmosphere are used to determine the convergence of atmospheric water vapor over continental regions. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. The results indicate that the contribution of regional evaporation to regional precipitation varies substantially with location and season. For the regions studied, the ratio of locally contributed to total monthly precipitation generally lies between 0.10 and 0.30 but is as high as 0.40 in several cases.

  14. Organ volume estimation using SPECT

    SciTech Connect

    Zaidi, H.

    1996-06-01

Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual window method was used for scatter subtraction. The author used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of (1) fixed thresholding, (2) automatic thresholding, (3) attenuation, (4) scatter, and (5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed. The relative error is within 7% for the GLH method combined with attenuation and scatter corrections.
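The fixed-threshold voxel-summation step can be sketched as follows; the phantom array, the 40% threshold fraction, and the voxel volume are made-up illustrations, and this ignores the attenuation and scatter corrections the abstract emphasizes:

```python
import numpy as np

def volume_fixed_threshold(image, threshold_frac, voxel_volume_ml):
    """Count voxels above a fixed fraction of the image maximum and
    convert the count to a volume (fixed-threshold segmentation sketch)."""
    mask = image >= threshold_frac * image.max()
    return mask.sum() * voxel_volume_ml

# Toy reconstructed volume: a hot 4x4x4 'gland' in a cold background.
img = np.zeros((8, 8, 8))
img[2:6, 2:6, 2:6] = 100.0
vol = volume_fixed_threshold(img, threshold_frac=0.4, voxel_volume_ml=0.1)
```

In practice the threshold choice dominates the error, which is why the abstract compares fixed thresholding against the gray-level-histogram method.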

  15. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

Large power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer, improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. 
Steady-state excitation, and de-energization and re-energization transients

  16. Shear modulus estimation with vibrating needle stimulation.

    PubMed

    Orescanin, Marko; Insana, Michael

    2010-06-01

    An ultrasonic shear wave imaging technique is being developed for estimating the complex shear modulus of biphasic hydropolymers including soft biological tissues. A needle placed in the medium is vibrated along its axis to generate harmonic shear waves. Doppler pulses synchronously track particle motion to estimate shear wave propagation speed. Velocity estimation is improved by implementing a k-lag phase estimator. Fitting shear-wave speed estimates to the predicted dispersion relation curves obtained from two rheological models, we estimate the elastic and viscous components of the complex shear modulus. The dispersion equation estimated using the standard linear solid-body (Zener) model is compared with that from the Kelvin-Voigt model to estimate moduli in gelatin gels in the 50 to 450 Hz shear wave frequency bandwidth. Both models give comparable estimates that agree with independent shear rheometer measurements obtained at lower strain rates. PMID:20529711
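For reference, the Kelvin-Voigt dispersion relation that such fits use predicts a shear-wave speed that rises with frequency; the sketch below uses assumed moduli (the mu1, mu2, and density values are illustrative, not the paper's gel measurements):

```python
import numpy as np

def kv_shear_speed(freq_hz, mu1, mu2, rho=1000.0):
    """Kelvin-Voigt shear-wave phase speed c(omega).
    mu1: shear elasticity (Pa), mu2: shear viscosity (Pa*s),
    rho: density (kg/m^3)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    m = np.sqrt(mu1**2 + (w * mu2) ** 2)
    return np.sqrt(2.0 * m**2 / (rho * (mu1 + m)))

# Dispersion over the 50-450 Hz band mentioned in the abstract: the
# viscous term makes the speed increase with frequency, and fitting
# measured speeds to this curve recovers (mu1, mu2).
freqs = np.linspace(50, 450, 9)
speeds = kv_shear_speed(freqs, mu1=3e3, mu2=1.0)
```

At low frequency the speed tends to sqrt(mu1/rho), so the low-frequency plateau pins down the elastic modulus while the slope of the dispersion curve pins down the viscosity.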

  17. Daily estimates of soil ingestion in children.

    PubMed Central

    Stanek, E J; Calabrese, E J

    1995-01-01

Soil ingestion estimates play an important role in risk assessment of contaminated sites, and estimates of soil ingestion in children are of special interest. Current estimates of soil ingestion are trace-element specific and vary widely among elements. Although expressed as daily estimates, the actual estimates have been constructed by averaging soil ingestion over a study period of several days. The wide variability has resulted in uncertainty as to which method of estimation of soil ingestion is best. We developed a methodology for calculating a single estimate of soil ingestion for each subject for each day. Because the daily soil ingestion estimate represents the median estimate of eligible daily trace-element-specific soil ingestion estimates for each child, this median estimate is not trace-element specific. Summary estimates for individuals and weeks are calculated using these daily estimates. Using this methodology, the median daily soil ingestion estimate for 64 children participating in the 1989 Amherst soil ingestion study is 13 mg/day or less for 50% of the children and 138 mg/day or less for 95% of the children. Mean soil ingestion estimates (for up to an 8-day period) were 45 mg/day or less for 50% of the children, whereas 95% of the children reported a mean soil ingestion of 208 mg/day or less. Daily soil ingestion estimates were used subsequently to estimate the mean and variance in soil ingestion for each child and to extrapolate a soil ingestion distribution over a year, assuming that soil ingestion followed a log-normal distribution. PMID:7768230
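The per-day median construction could be sketched like this; the tracer values and the eligibility rule (dropping negative estimates) are illustrative stand-ins for the study's actual criteria:

```python
from statistics import median

def daily_soil_ingestion(element_estimates_mg):
    """Median of the eligible trace-element-specific estimates for one
    child-day; the result is not tied to any single tracer."""
    eligible = [e for e in element_estimates_mg if e >= 0]  # toy eligibility rule
    return median(eligible)

# One child-day with estimates from several tracers (mg/day, made up):
# the negative yttrium estimate is excluded, and the median of the rest
# gives the single daily value.
day = {"Al": 20.0, "Si": 5.0, "Ti": 60.0, "Y": -10.0, "Zr": 15.0}
est = daily_soil_ingestion(day.values())
```

Taking a median across tracers damps the element-to-element variability the abstract describes, which is the point of the methodology.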

  18. 2007 Estimated International Energy Flows

    SciTech Connect

Smith, C.A.; Belles, R.D.; Simon, A.J.

    2011-03-10

An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (e.g., coal, petroleum, natural gas) through transformations such as electricity generation to end uses (e.g., residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.

  19. Estimating the coherence of noise

    NASA Astrophysics Data System (ADS)

    Wallman, Joel; Granade, Chris; Harper, Robin; Flammia, Steven T.

    2015-11-01

Noise mechanisms in quantum systems can be broadly characterized as either coherent (i.e., unitary) or incoherent. For a given fixed average error rate, coherent noise mechanisms will generally lead to a larger worst-case error than incoherent noise. We show that the coherence of a noise source can be quantified by the unitarity, which we relate to the average change in purity over input pure states. We then show that the unitarity can be estimated using a protocol based on randomized benchmarking that is efficient and robust to state-preparation and measurement errors. We also show that the unitarity provides a lower bound on the optimal achievable gate infidelity under a given noisy process.

  20. Comparison of space debris estimates

    SciTech Connect

    Canavan, G.H.; Judd, O.P.; Naka, R.F.

    1996-10-01

    Debris is thought to be a hazard to space systems through impact and cascading. The current environment is assessed as not threatening to defense systems. Projected reductions in launch rates to LEO should delay concerns for centuries. There is agreement between AFSPC and NASA analyses on catalogs and collision rates, but not on fragmentation rates. Experiments in the laboratory, field, and space are consistent with AFSPC estimates of the number of fragments per collision. A more careful treatment of growth rates greatly reduces long-term stability issues. Space debris has not been shown to be an issue in coming centuries; thus, it does not appear necessary for the Air Force to take additional steps to mitigate it.

  1. Variance estimation for nucleotide substitution models.

    PubMed

    Chen, Weishan; Wang, Hsiuying

    2015-09-01

The current variance estimators for most evolutionary models were derived by approximating the nucleotide substitution number estimator with a simple first order Taylor expansion. In this study, we derive three variance estimators for each of the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second order Taylor expansion of the substitution number estimator, the first order Taylor expansion of a squared deviation and the second order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study, which shows that the variance estimator derived using the second order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we also compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
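To illustrate the Taylor-expansion (delta-method) variance that this line of work generalizes, here is the textbook Jukes-Cantor case, which is simpler than the F81/F84/HKY85/TN93 models the authors treat:

```python
import math

def jc69_distance_and_var(p, n):
    """Jukes-Cantor substitution number d and its first-order (delta-method)
    variance, given the observed proportion p of differing sites among n.
    Var(d) ~ (dd/dp)^2 * Var(p), with Var(p) = p(1-p)/n and
    dd/dp = 1 / (1 - 4p/3)."""
    d = -0.75 * math.log(1.0 - 4.0 * p / 3.0)
    var_d = p * (1.0 - p) / (n * (1.0 - 4.0 * p / 3.0) ** 2)
    return d, var_d

# 20% of 500 aligned sites differ:
d, var_d = jc69_distance_and_var(p=0.2, n=500)
```

The paper's point is that this first-order expansion is only an approximation; carrying the expansion to second order, or expanding the squared deviation directly, changes the variance estimate.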

  2. Estimating Location without External Cues

    PubMed Central

    Cheung, Allen

    2014-01-01

    The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system. PMID:25356642

  3. Flight Mechanics/Estimation Theory Symposium, 1989

    NASA Technical Reports Server (NTRS)

    Stengle, Thomas (Editor)

    1989-01-01

    Numerous topics in flight mechanics and estimation were discussed. Satellite attitude control, quaternion estimation, orbit and attitude determination, spacecraft maneuvers, spacecraft navigation, gyroscope calibration, spacecraft rendevous, and atmospheric drag model calculations for spacecraft lifetime prediction are among the topics covered.

  4. Ambulatory Medical Care Utilization Estimates for 2007

    MedlinePlus

    ... Results Patients in the United States made an estimated 1.2 billion visits to physician offices and ... shows data on injury visits. There were an estimated 156.8 million injury visits in 2007, or ...

  5. Estimator reduction and convergence of adaptive BEM.

    PubMed

    Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk

    2012-06-01

A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in modern scientific computing. Contrary to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is only based on Dörfler marking and inverse estimates, but not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations.

  6. Estimator reduction and convergence of adaptive BEM

    PubMed Central

    Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk

    2012-01-01

A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in modern scientific computing. Contrary to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is only based on Dörfler marking and inverse estimates, but not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations. PMID:23482248
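Dörfler marking, the ingredient the analysis rests on, selects a minimal set of elements carrying at least a fraction theta of the total squared error indicator; a sketch in Python rather than a BEM code (the indicator values are made up):

```python
def doerfler_marking(indicators, theta=0.5):
    """Return indices of a minimal set M with
    sum_{i in M} eta_i^2 >= theta * sum_i eta_i^2,
    built greedily from the largest indicators."""
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i] ** 2)
    total = sum(e ** 2 for e in indicators)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i] ** 2
        if acc >= theta * total:
            break
    return marked

# The single dominant element already carries theta = 0.5 of the total:
marked = doerfler_marking([0.1, 0.9, 0.3, 0.2], theta=0.5)
```

Greedy selection from the largest indicators yields a set of minimal cardinality, which is what the estimator-reduction argument needs.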

  7. Distributed pose estimation from multiple views

    NASA Astrophysics Data System (ADS)

    Chen, Chong; Schonfeld, Dan; Mohamed, Magdi

    2008-01-01

    A method is introduced to track an object's motion and estimate its pose from multiple cameras. We focus on direct estimation of the 3D pose from 2D image sequences. We derive a distributed solution that is equivalent to centralized pose estimation from multiple cameras. Moreover, we show that, by using a proper rotation between each camera and a fixed camera view, we can rely on independent pose estimation from each camera. We then propose a robust solution to the centralized pose estimation problem by deriving a best linear unbiased estimate from the rotated pose estimates obtained from each camera. The resulting pose estimate is therefore robust to errors from specific camera views. Moreover, the distributed solution is computationally efficient, with cost growing linearly in the number of camera views. Finally, computer simulation experiments demonstrate that our algorithm is fast and accurate.
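    The fusion step described here is the classical inverse-variance (best linear unbiased estimate) combination. A minimal scalar sketch, assuming independent, unbiased per-camera estimates with known error variances; the function name and numbers are illustrative, not from the paper:

```python
def blue_combine(estimates, variances):
    """Best linear unbiased estimate of a common quantity from independent,
    unbiased estimates: weight each estimate by the inverse of its variance."""
    w = [1.0 / v for v in variances]
    total = sum(w)
    fused = sum(wi * ei for wi, ei in zip(w, estimates)) / total
    return fused, 1.0 / total  # fused estimate and its variance

# three cameras reporting the same scalar pose angle; the third is very noisy
fused, var = blue_combine([0.52, 0.48, 0.70], [0.01, 0.01, 0.25])
```

    The noisy third view barely shifts the fused value, which is the sense in which the combination is robust to errors from specific camera views; the cost of the combination is linear in the number of views.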

  8. Efficient Estimation of the Standardized Value

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    2009-01-01

    We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)

  9. IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY

    EPA Science Inventory

    This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...

  10. ESTIMATING URBAN WET-WEATHER POLLUTANT LOADING

    EPA Science Inventory

    This paper presents procedures for estimating pollutant loads in urban watersheds emanating from wet-weather flow discharge. Equations for pollutant loading estimates will focus on the effects of wastewater characteristics, sewer flow carrying velocity, and sewer-solids depositi...

  11. Adaptive density estimator for galaxy surveys

    NASA Astrophysics Data System (ADS)

    Saar, Enn

    2016-10-01

    Galaxy number or luminosity density serves as a basis for many structure classification algorithms. Several methods are used to estimate this density. Among them, kernel methods probably have the best statistical properties and also allow estimation of the local sample errors of the estimate. We introduce a kernel density estimator with an adaptive data-driven anisotropic kernel, describe its properties, and demonstrate the wealth of additional information it gives us about the local properties of the galaxy distribution.
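    The paper's estimator is anisotropic and data-driven in three dimensions; as a much-reduced illustration of the adaptive-kernel idea, here is a 1-D Abramson-style estimator (fixed-bandwidth pilot estimate, then local bandwidths that shrink where the pilot density is high). All names, data, and parameter choices are illustrative:

```python
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def adaptive_kde(data, h0=0.5, alpha=0.5):
    """Adaptive kernel density estimate: a fixed-bandwidth pilot first,
    then per-point bandwidths h_i = h0 * (pilot_i / g)**(-alpha), where
    g is the geometric mean of the pilot densities."""
    n = len(data)
    pilot = [sum(gauss((x - y) / h0) for y in data) / (n * h0) for x in data]
    g = math.exp(sum(math.log(p) for p in pilot) / n)
    h = [h0 * (p / g) ** (-alpha) for p in pilot]
    def density(x):
        return sum(gauss((x - y) / hy) / hy for y, hy in zip(data, h)) / n
    return density

# bimodal toy sample: clusters near 0 and near 3
f = adaptive_kde([0.0, 0.1, -0.1, 0.05, 3.0, 3.1, 2.9], h0=0.5)
```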

  12. Estimated freshwater withdrawals in Washington, 2010

    USGS Publications Warehouse

    Lane, Ron C.; Welch, Wendy B.

    2015-03-18

    The amount of public- and self-supplied water used for domestic, irrigation, livestock, aquaculture, industrial, mining, and thermoelectric power was estimated for state, county, and eastern and western regions of Washington during calendar year 2010. Withdrawals of freshwater for offstream uses were estimated to be about 4,885 million gallons per day. Total estimated freshwater withdrawals for 2010 were approximately 15 percent less than the 2005 estimate because of decreases in irrigation and thermoelectric power withdrawals.

  13. Estimating OSNR of equalised QPSK signals.

    PubMed

    Ives, David J; Thomsen, Benn C; Maher, Robert; Savory, Seb J

    2011-12-12

    We propose and demonstrate a technique to estimate the OSNR of an equalised QPSK signal based on the radial moments of the complex signal constellation. The technique is compared through simulation with maximum likelihood estimation and the effect of the block size used in the estimation is also assessed. The technique is verified experimentally and when combined with a single point calibration the OSNR of the input signal was estimated to within 0.5 dB.
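    A moment-based SNR estimator of this general family is the standard M2M4 estimator for a constant-modulus constellation such as QPSK: with second and fourth radial moments M2 = S + N and M4 = S² + 4SN + 2N², the signal power is S = sqrt(2·M2² − M4). This sketch is not necessarily the paper's exact formulation, and mapping SNR to OSNR further requires the single-point bandwidth calibration mentioned in the abstract, which is omitted here:

```python
import cmath, math, random

def m2m4_snr(samples):
    """Moment-based SNR estimate for a constant-modulus signal in complex
    AWGN, from the 2nd and 4th radial moments of the constellation."""
    n = len(samples)
    m2 = sum(abs(z) ** 2 for z in samples) / n
    m4 = sum(abs(z) ** 4 for z in samples) / n
    s = math.sqrt(max(2 * m2 * m2 - m4, 0.0))  # signal power
    noise = max(m2 - s, 1e-12)                 # noise power
    return s / noise

random.seed(1)
true_snr_db = 10.0
snr = 10 ** (true_snr_db / 10)
sigma = math.sqrt(1.0 / (2 * snr))  # per-dimension noise std, unit-power symbols
syms = [cmath.exp(1j * (math.pi / 4 + math.pi / 2 * random.randrange(4)))
        for _ in range(20000)]
rx = [z + complex(random.gauss(0, sigma), random.gauss(0, sigma)) for z in syms]
est_db = 10 * math.log10(m2m4_snr(rx))
```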

  14. New Methodology for Natural Gas Production Estimates

    EIA Publications

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  15. Estimating Canopy Dark Respiration for Crop Models

    NASA Technical Reports Server (NTRS)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  16. Current Term Enrollment Estimates: Spring 2014

    ERIC Educational Resources Information Center

    National Student Clearinghouse, 2014

    2014-01-01

    Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center, include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state, and…

  17. Stability constant estimator user's guide

    SciTech Connect

    Hay, B.P.; Castleton, K.J.; Rustad, J.R.

    1996-12-01

    The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.

  18. Applications of adaptive state estimation theory

    NASA Technical Reports Server (NTRS)

    Moose, R. L.; Vanlandingham, H. F.; Mccabe, D. H.

    1980-01-01

    Two main areas of application of adaptive state estimation theory are presented. Following a review of the basic estimation approach, its application to both the control of nonlinear plants and to the problem of tracking maneuvering targets is presented. Results are brought together from these two areas of investigation to provide insight into the wide range of possible applications of the general estimation method.

  19. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
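    The Mayfield MLE and its asymptotic variance are compact enough to state in a few lines: daily survival is 1 minus failures per exposure-day, with approximate variance s(1−s)/exposure-days. A sketch with hypothetical field numbers:

```python
import math

def mayfield_daily_survival(deaths, exposure_days):
    """Mayfield MLE of daily nest survival with an asymptotic 95% CI:
    s_hat = 1 - deaths/exposure_days, Var(s_hat) ~ s(1-s)/exposure_days."""
    s = 1.0 - deaths / exposure_days
    se = math.sqrt(s * (1.0 - s) / exposure_days)
    return s, (s - 1.96 * se, s + 1.96 * se)

# hypothetical data: 20 nest failures observed over 1000 nest-exposure-days
s, (lo, hi) = mayfield_daily_survival(20, 1000)
nest_success = s ** 25  # nesting success over a 25-day nesting period
```

    Raising daily survival to the length of the nesting period is what makes this estimator so much less biased than the traditional "fraction of found nests that succeed" estimator, which oversamples successful nests.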

  20. Estimating changing contexts in schizophrenia.

    PubMed

    Kaplan, Claire M; Saha, Debjani; Molina, Juan L; Hockeimer, William D; Postell, Elizabeth M; Apud, Jose A; Weinberger, Daniel R; Tan, Hao Yang

    2016-07-01

    See Stephan et al. (doi:10.1093/aww120) for a scientific commentary on this work: Real-world information is often abstract, dynamic and imprecise. Deciding whether changes represent random fluctuations or alterations in underlying contexts involves challenging probability estimation. Dysfunction in these estimations may contribute to erroneous beliefs, such as delusions. Here we examined brain function during inferences about context change from noisy information. We examined cortical-subcortical circuitry engaging anterior and dorsolateral prefrontal cortex, and midbrain. We hypothesized that schizophrenia-related deficits in prefrontal function might overestimate context change probabilities, and that this more chaotic worldview may subsequently gain familiarity and be over-reinforced, with implications for delusions. We then examined these opposing information processing biases against less expected versus familiar information patterns in relation to genetic risk for schizophrenia in unaffected siblings. In one experiment, 17 patients with schizophrenia and 24 normal control subjects were presented in 3 T magnetic resonance imaging with numerical information varying noisily about a context integer, which occasionally shifted up or down. Subjects were to indicate when the inferred numerical context had changed. We fitted Bayesian models to estimate probabilities associated with change inferences. Dynamic causal models examined cortical-subcortical circuitry interactions at context change inference, and at subsequent reduced uncertainty. In a second experiment, genetic risk for schizophrenia associated with similar cortical-subcortical findings was explored in an independent sample of 36 normal control subjects and 35 unaffected siblings during processing of intuitive number sequences along the number line, or during the inverse, less familiar, sequence. In the first experiment, reduced Bayesian models fitting subject behaviour suggest that patients with schizophrenia overestimated context

  1. Estimating the NIH Efficient Frontier

    PubMed Central

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  2. Spacecraft telecommunications system mass estimates

    NASA Astrophysics Data System (ADS)

    Yuen, J. H.; Sakamoto, L. L.

    1988-02-01

    Mass is the most important limiting parameter for present-day planetary spacecraft design. In fact, the entire design can be characterized by mass. The more efficient the design of the spacecraft, the less mass will be required. The communications system is an essential and integral part of planetary spacecraft. A study is presented of the mass attributable to the communications system for spacecraft designs used in recent missions in an attempt to help guide future design considerations and research and development efforts. The basic approach is to examine the spacecraft by subsystem and allocate a portion of each subsystem to telecommunications. Conceptually, this is to divide the spacecraft into two parts, telecommunications and nontelecommunications. In this way, it is clear what the mass attributable to the communications system is. The percentage of mass is calculated using the actual masses of the spacecraft parts, except in the case of CRAF. In that case, estimated masses are used since the spacecraft was not yet built. The results show that the portion of the spacecraft attributable to telecommunications is substantial. The mass fraction for Voyager, Galileo, and CRAF (Mariner Mark 2) is 34, 19, and 18 percent, respectively. The large reduction of telecommunications mass from Voyager to Galileo is mainly due to the use of a deployable antenna instead of the solid antenna on Voyager.

  3. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert, Jr.

    1999-01-01

    In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.

  4. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.

  5. Estimation of ground motion parameters

    USGS Publications Warehouse

    Boore, David M.; Oliver, Adolph A.; Page, Robert A.; Joyner, William B.

    1978-01-01

    Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. Data from the San Fernando earthquake are examined to assess the effects of associated structures and of geologic site conditions on peak recorded motions. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity, and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. Values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Three recently published relationships for predicting peak horizontal acceleration are compared and discussed. Considerations are reviewed relevant to ground motion predictions at close distances where there are insufficient recorded data points.

  6. Hydroacoustic estimates of fish abundance

    SciTech Connect

    Wilson, W.K.

    1991-03-01

    Hydroacoustics, as defined in the context of this report, is the use of a scientific sonar system to determine fish densities with respect to numbers and biomass. These two parameters provide a method of monitoring reservoir fish populations and detecting gross changes in the ecosystem. With respect to southeastern reservoirs, hydroacoustic surveys represent a new method of sampling open water areas and the best technology available. The advantages of this technology are that large amounts of data can be collected in a relatively short period of time, allowing improved statistical interpretation and data comparison; that the pelagic (open water) zone can be sampled efficiently regardless of depth; and that sampling is nondestructive and noninvasive, with neither injury to the fish nor alteration of the environment. Hydroacoustics cannot provide species identification and related information on species composition or length/weight relationships. Also, sampling is limited to a minimum depth of ten feet, which precludes the use of this equipment for sampling shallow shoreline areas. The objective of this study is to use hydroacoustic techniques to estimate fish standing stocks (i.e., numbers and biomass) in several areas of selected Tennessee Valley Reservoirs as part of a base level monitoring program to assess long-term changes in reservoir water quality.

  7. Quantum rewinding via phase estimation

    NASA Astrophysics Data System (ADS)

    Tabia, Gelo Noel

    2015-03-01

    In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to show the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewinded to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique in the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.
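    The phase-estimation building block is easy to simulate classically for a single eigenstate: the probability of reading ancilla value j is the squared magnitude of a geometric sum. The sketch below reproduces only the textbook outcome distribution, not the paper's rewinding construction:

```python
import cmath, math

def qpe_distribution(phi, t):
    """Outcome probabilities of textbook quantum phase estimation with t
    ancilla qubits, for an eigenstate with eigenvalue exp(2*pi*i*phi):
    Pr(j) = |(1/2^t) * sum_k exp(2*pi*i*k*(phi - j/2^t))|^2."""
    N = 2 ** t
    probs = []
    for j in range(N):
        amp = sum(cmath.exp(2j * math.pi * k * (phi - j / N))
                  for k in range(N)) / N
        probs.append(abs(amp) ** 2)
    return probs

probs = qpe_distribution(phi=5 / 16, t=4)
best = probs.index(max(probs))
# phi = 5/16 has an exact 4-bit binary expansion, so outcome j = 5 is certain
```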

  8. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.

  9. Site characterization: a spatial estimation approach

    SciTech Connect

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
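    As a minimal illustration of the kriging estimator discussed above, here is 1-D ordinary kriging with an assumed exponential covariance model; the covariance, length scale, data, and helper names are all hypothetical, and a tiny dense solver is included so the sketch is self-contained:

```python
import math

def solve(A, b):
    """Gauss-Jordan solve of a small dense linear system (sketch-sized)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_kriging(xs, zs, x0, length=1.0):
    """Ordinary kriging with assumed covariance C(h) = exp(-|h|/length);
    the augmented row forces the weights to sum to one (unbiasedness)."""
    n = len(xs)
    cov = lambda a, b: math.exp(-abs(a - b) / length)
    A = [[cov(xi, xj) for xj in xs] + [1.0] for xi in xs] + [[1.0] * n + [0.0]]
    b = [cov(xi, x0) for xi in xs] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, zs))

xs = [0.0, 1.0, 2.5, 4.0]   # hypothetical borehole locations
zs = [1.2, 2.0, 0.7, 1.5]   # hypothetical measured values
z_hat = ordinary_kriging(xs, zs, 1.75)
```

    With no nugget effect, ordinary kriging is an exact interpolator: predicting at a data location returns the measured value.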

  10. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurement available to each estimator.
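    The agreement-type step can be illustrated with scalar consensus averaging over a local exchange graph in which each node talks only to its neighbors. The graph, gain, and values below are illustrative and stand in for the paper's full communicating-Kalman-filter architecture:

```python
def consensus(estimates, neighbors, eps=0.2, iters=200):
    """Agreement iteration: each node repeatedly moves its estimate toward
    its neighbors' estimates. For a connected, symmetric graph and
    eps < 1/(max degree), all nodes converge to the average."""
    x = list(estimates)
    for _ in range(iters):
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# ring of 4 spacecraft, each with a noisy local estimate of the same state
local = [9.0, 10.5, 10.0, 11.3]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
agreed = consensus(local, ring)
```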

  11. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
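    A density proportional to x^k(1-x)^m is a Beta(k+1, m+1) distribution, which is conjugate to Bernoulli jump observations, so the posterior mean (the MMSE estimate) is affine in the observed jump count. A toy discrete-time check of that linearity (the DTJP machinery itself is not reproduced):

```python
def mmse_rate(k, m, jumps, trials):
    """Posterior-mean (MMSE) estimate of a jump probability under the prior
    density c * x**k * (1 - x)**m, i.e. Beta(k+1, m+1), after observing
    `jumps` jumps in `trials` steps. Conjugacy makes it linear in `jumps`."""
    return (k + 1 + jumps) / (k + m + 2 + trials)
```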

  12. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  13. Robust Bearing Estimation for 3-Component Stations

    SciTech Connect

    Claassen, John P.

    1999-06-03

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular, the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The method was applied to a small but challenging set of events in a seismically active region. The method demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.

  14. Robust bearing estimation for 3-component stations

    SciTech Connect

    CLAASSEN,JOHN P.

    2000-02-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.

  15. Weighted conditional least-squares estimation

    SciTech Connect

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.
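    The two-stage recipe can be sketched on the branching-process-with-immigration example: ordinary conditional least squares first, then re-estimation with weights equal to inverses of estimated conditional variances. Here the conditional variance is taken proportional to the current population size, which is an assumption of this sketch rather than the paper's exact weighting:

```python
import math, random

def wls(x, y, w):
    """Closed-form weighted least squares for y ~= a*x + b."""
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sxx * sw - sx * sx
    return (sxy * sw - sx * sy) / det, (sxx * sy - sx * sxy) / det

def poisson(lam):
    # Knuth's method; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

random.seed(7)
m_true, lam_true = 0.6, 5.0
pop = [10]
for _ in range(4000):  # Bernoulli offspring survival + Poisson immigration
    pop.append(sum(1 for _ in range(pop[-1]) if random.random() < m_true)
               + poisson(lam_true))

x, y = pop[:-1], pop[1:]
m1, l1 = wls(x, y, [1.0] * len(x))        # stage 1: ordinary CLS
weights = [1.0 / (xi + 1.0) for xi in x]  # stage 2: inverse est. cond. variance
m2, l2 = wls(x, y, weights)
```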

  16. Optimal, reliable estimation of quantum states

    NASA Astrophysics Data System (ADS)

    Blume-Kohout, Robin

    2010-04-01

    Accurately inferring the state of a quantum device from the results of measurements is a crucial task in building quantum information processing hardware. The predominant state estimation procedure, maximum likelihood estimation (MLE), generally reports an estimate with zero eigenvalues. These cannot be justified. Furthermore, the MLE estimate is incompatible with error bars, so conclusions drawn from it are suspect. I propose an alternative procedure, Bayesian mean estimation (BME). BME never yields zero eigenvalues, its eigenvalues provide a bound on their own uncertainties, and under certain circumstances it is provably the most accurate procedure possible. I show how to implement BME numerically, and how to obtain natural error bars that are compatible with the estimate. Finally, I briefly discuss the differences between Bayesian and frequentist estimation techniques.
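    A classical analogue makes the zero-eigenvalue point concrete: after zero successes, the Bernoulli MLE sits on the boundary of the parameter space, while the Bayesian mean under a uniform prior never does. This is a deliberately simplified stand-in for state tomography, not the quantum procedure itself:

```python
def mle(successes, trials):
    """Maximum-likelihood estimate of a Bernoulli parameter."""
    return successes / trials

def bayes_mean(successes, trials):
    """Posterior mean under a uniform Beta(1, 1) prior (Laplace's rule)."""
    return (successes + 1) / (trials + 2)

# 10 trials, 0 successes: the MLE hits the boundary, the Bayesian mean does not
p_mle, p_bme = mle(0, 10), bayes_mean(0, 10)
```

    Just as p = 0 here asserts an outcome is impossible after only 10 trials, a zero eigenvalue in an MLE tomography estimate asserts a measurement outcome can never occur, which finite data cannot justify.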

  17. Reliability Estimates for Power Supplies

    SciTech Connect

    Lee C. Cadwallader; Peter I. Petersen

    2005-09-01

    Failure rates for large power supplies at a fusion facility are critical knowledge needed to estimate availability of the facility or to set priorities for repairs and spare components. A study of the "failure to operate on demand" and "failure to continue to operate" failure rates has been performed for the large power supplies at DIII-D, which provide power to the magnet coils, the neutral beam injectors, the electron cyclotron heating systems, and the fast wave systems. When one of the power supplies fails to operate, the research program has to be either temporarily changed or halted. If one of the power supplies for the toroidal or ohmic heating coils fails, operations have to be suspended or the research is continued at de-rated parameters until a repair is completed. If one of the power supplies used in the auxiliary plasma heating systems fails, the research is often temporarily changed until a repair is completed. The power supplies are operated remotely and repairs are only performed when the power supplies are off line, so that failure of a power supply does not cause any risk to personnel. The DIII-D Trouble Report database was used to determine the number of power supply faults (over 1,700 reports), and tokamak annual operations data supplied the number of shots, operating times, and power supply usage for the DIII-D operating campaigns between mid-1987 and 2004. Where possible, these power supply failure rates from DIII-D will be compared to similar work that has been performed for the Joint European Torus equipment. These independent data sets support validation of the fusion-specific failure rate values.

  18. Budget estimates. Fiscal year 1998

    SciTech Connect

    1997-02-01

    The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000 shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury.

  19. Time and Resource Estimation Tool

    2004-06-08

    RESTORE is a computer software tool that allows one to model a complex set of steps required to accomplish a goal (e.g., repair a ruptured natural gas pipeline and restore service to customers). However, the time necessary to complete each step may be uncertain and may be affected by conditions such as the weather, the time of day, and the day of the week. Therefore, "nature" can influence which steps are taken and the time needed to complete each step. In addition, the tool allows one to model the costs for each step, which also may be uncertain. RESTORE allows the user to estimate the time and cost, both of which may be uncertain, to achieve an intermediate stage of completion, as well as overall completion. The software also makes it possible to model parallel, competing groups of activities (i.e., parallel paths) so that progress at a 'merge point' can proceed before other competing activities are completed. For example, RESTORE permits one to model a workaround and a simultaneous complete repair to determine a probability distribution for the earliest time service can be restored to a critical customer. The tool identifies the 'most active path' through the network of tasks, which is extremely important information for assessing the most effective way to speed up or slow down progress. Unlike other project planning and risk analysis tools, RESTORE provides an intuitive, graphical, and object-oriented environment for structuring a model and setting its parameters.

  20. A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.

    ERIC Educational Resources Information Center

    Newman, Isadore; And Others

    A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…

  1. A novel multistage estimation of signal parameters

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1990-01-01

    A multistage estimation scheme is presented for estimating the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc. Such a situation arises, for example, in the case of the Global Positioning System (GPS). In the proposed scheme, the first-stage estimator operates as a coarse estimator of the frequency and its derivatives, resulting in higher rms estimation errors but with a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency (an event termed cycle slip). The second stage of the estimator operates on the error signal available from the first stage, refining the overall estimates, and in the process also reduces the number of cycle slips. The first-stage algorithm is a modified least-squares algorithm operating on the differential signal model and referred to as differential least squares (DLS). The second-stage algorithm is an extended Kalman filter, which yields the estimate of the phase as well as refining the frequency estimate. A major advantage of the proposed two-stage scheme is a reduction in the threshold for the received carrier power-to-noise power spectral density ratio (CNR) as compared with the threshold achievable by either of the algorithms alone.
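
    The coarse-then-fine structure described above can be illustrated with a minimal sketch, assuming a generic two-stage frequency estimator in place of the paper's DLS/EKF pair: a periodogram peak supplies a coarse estimate good to one FFT bin, and a quadratic interpolation around the peak refines it. The function name and the signal parameters are illustrative, not from the paper.

```python
import numpy as np

def coarse_then_fine_freq(x, fs):
    # Stage 1 (coarse): periodogram peak, resolution fs/N.
    N = len(x)
    spec = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spec))
    # Stage 2 (fine): quadratic interpolation around the peak bin.
    delta = 0.0
    if 0 < k < len(spec) - 1:
        a, b, c = spec[k - 1], spec[k], spec[k + 1]
        delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / N

fs = 1000.0
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * 123.4 * t)   # true frequency 123.4 Hz
f_hat = coarse_then_fine_freq(x, fs)
```

    The refinement stage recovers the sub-bin offset that the coarse stage cannot, mirroring the coarse/fine division of labor in the abstract.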

  2. Some insight on censored cost estimators.

    PubMed

    Zhao, H; Cheng, Y; Bang, H

    2011-08-30

    Censored survival data analysis has been studied for many years. Yet, the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censorship embedded in censored mark variables, the use of the Kaplan-Meier (Journal of the American Statistical Association 1958; 53:457-481) estimator, as an example, will produce biased estimates. Innovative estimators have been developed in the past decade in order to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data so that our findings could be applied to other marked variables. PMID:21748774
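
    As background for the identity discussed above, the Kaplan-Meier product-limit estimator can be sketched in a few lines. This is a minimal illustration assuming distinct observation times; it is not the redistribute-to-the-right construction or the censored-cost estimators analyzed in the paper.

```python
import numpy as np

def kaplan_meier(times, events):
    # times: observed follow-up times; events: 1 = event observed, 0 = censored.
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    e = np.asarray(events)[order]
    n = len(t)
    curve, S = [], 1.0
    for i in range(n):
        at_risk = n - i                    # subjects still under observation
        if e[i] == 1:                      # product-limit step at each event
            S *= (at_risk - 1) / at_risk
        curve.append((float(t[i]), S))
    return curve

# With no censoring the curve reduces to the empirical survival function.
curve = kaplan_meier([2.0, 5.0, 3.0, 8.0], [1, 1, 1, 1])
```

    With censored observations (`events` of 0), the at-risk set still shrinks but no product step is taken, which is exactly where the 'informative' censoring of mark variables breaks this construction.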

  3. Estimating the absolute wealth of households

    PubMed Central

    Gerkey, Drew; Hadley, Craig

    2015-01-01

    Objective To estimate the absolute wealth of households using data from demographic and health surveys. Methods We developed a new metric, the absolute wealth estimate, based on the rank of each surveyed household according to its material assets and the assumed shape of the distribution of wealth among surveyed households. Using data from 156 demographic and health surveys in 66 countries, we calculated absolute wealth estimates for households. We validated the method by comparing the proportion of households defined as poor using our estimates with published World Bank poverty headcounts. We also compared the accuracy of absolute versus relative wealth estimates for the prediction of anthropometric measures. Findings The median absolute wealth estimate of 1 403 186 households was 2056 international dollars per capita (interquartile range: 723–6103). The proportion of poor households based on absolute wealth estimates was strongly correlated with World Bank estimates of populations living on less than 2.00 United States dollars per capita per day (R2 = 0.84). Absolute wealth estimates were better predictors of anthropometric measures than relative wealth indexes. Conclusion Absolute wealth estimates provide new opportunities for comparative research to assess the effects of economic resources on health and human capital, as well as the long-term health consequences of economic change and inequality. PMID:26170506
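
    The rank-plus-assumed-distribution idea can be sketched as follows, assuming (purely for illustration) a log-normal wealth distribution with a known per-capita median and spread; the function name, the `sigma` parameter, and the asset-score inputs are hypothetical, not the authors' metric.

```python
import numpy as np
from statistics import NormalDist

def absolute_wealth(asset_scores, median_pc, sigma=1.0):
    # Rank households by asset score, convert ranks to percentiles, then read
    # off quantiles of an assumed log-normal wealth distribution.
    n = len(asset_scores)
    ranks = np.argsort(np.argsort(asset_scores))      # 0 = poorest household
    pct = (ranks + 0.5) / n
    mu = np.log(median_pc)                            # log-normal median = exp(mu)
    nd = NormalDist()
    return np.array([np.exp(mu + sigma * nd.inv_cdf(p)) for p in pct])

rng = np.random.default_rng(7)
scores = rng.normal(size=101)                         # hypothetical asset index
wealth = absolute_wealth(scores, median_pc=2056.0)
```

    By construction the median household is assigned the assumed median wealth; the `sigma` spread is the judgment-laden part of such a mapping.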

  4. Structure Function Estimated From Histological Tissue Sections.

    PubMed

    Han, Aiguo; O'Brien, William D

    2016-09-01

    Ultrasonic scattering is determined not only by the properties of individual scatterers but also by the correlation among scatterer positions. The role of scatterer spatial correlation is significant for dense media but has not been fully understood. The effect of scatterer spatial correlation may be modeled by the structure function as a frequency-dependent factor in the backscatter coefficient (BSC) expression. The structure function has previously been estimated from BSC data. The aim of this study is to estimate the structure function from histology to test whether the acoustically estimated structure function is indeed caused by the scatterer spatial distribution. Hematoxylin and eosin stained histological sections from dense cell pellet biophantoms were digitized. The scatterer positions were determined manually from the histological images. The structure function was calculated from the extracted scatterer positions. The structure function obtained from histology showed reasonable agreement in shape, but not in amplitude, with the structure function previously estimated from the backscattered data. Fitting a polydisperse structure function model to the histologically estimated structure function yielded relatively accurate cell radius estimates. Furthermore, two types of mouse tumors that have similar cell size and shape but distinct cell spatial distributions were studied, where the backscattered data were shown to be related to the cell spatial distribution through the structure function estimated from histology. In conclusion, the agreement between acoustically and histologically estimated structure functions suggests that the acoustically estimated structure function is related to the scatterer spatial distribution.
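
    Computing a structure function from extracted scatterer positions can be sketched directly from its definition, S(q) = |Σ_j exp(-i q·r_j)|² / N. The 1-D version below with uniformly random (uncorrelated) positions is an illustrative assumption; for such positions S(q) fluctuates around 1, the dilute-medium limit.

```python
import numpy as np

def structure_function(positions, q_values):
    # S(q) = |sum_j exp(-i q x_j)|^2 / N for scatterer positions x_j (1-D sketch).
    positions = np.asarray(positions, dtype=float)
    N = len(positions)
    S = np.empty(len(q_values))
    for i, q in enumerate(q_values):
        phases = np.exp(-1j * q * positions)
        S[i] = np.abs(phases.sum()) ** 2 / N
    return S

rng = np.random.default_rng(0)
# Uncorrelated (Poisson-like) positions: S(q) should scatter around 1.
pos = rng.uniform(0.0, 1000.0, size=5000)
S = structure_function(pos, np.linspace(0.5, 5.0, 20))
```

    Spatially correlated positions (e.g., densely packed cells with an exclusion radius) would instead suppress S(q) at low q, which is the frequency-dependent signature the abstract describes.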

  5. Estimating one's own personality and intelligence scores.

    PubMed

    Furnham, Adrian; Chamorro-Premuzic, Tomas

    2004-05-01

    One hundred and eighty-seven university students completed the full NEO-PI-R assessing the five super-traits and 30 primary traits, and the Wonderlic Personnel Test of general intelligence. Two months later (before receiving feedback on their psychometric scores), they estimated their own scores on these variables. Results at the super-factor level indicated that participants could significantly predict/estimate their own Neuroticism, Extraversion, and Conscientiousness scores. The correlation between estimated and psychometrically measured IQ was r=.30, showing that participants could, to some extent, accurately estimate their intelligence. In addition, there were a number of significant correlations between estimated intelligence and psychometrically assessed personality (particularly Neuroticism, Agreeableness and Extraversion). Disagreeable people tended to award themselves higher self-estimated intelligence scores. Similarly, stable people tended to award themselves higher estimates of intelligence (even when other variables were controlled). Regressing both estimated and psychometric IQ scores onto estimated and psychometric personality scores indicated that the strongest significant effect was the relationship between trait scores and self-estimated intelligence. PMID:15142299

  6. Optimal channels for channelized quadratic estimators.

    PubMed

    Kupinski, Meredith K; Clarkson, Eric

    2016-06-01

    We present a new method for computing optimized channels for estimation tasks that is feasible for high-dimensional image data. Maximum-likelihood (ML) parameter estimates are challenging to compute from high-dimensional likelihoods. The dimensionality reduction from M measurements to L channels is a critical advantage of channelized quadratic estimators (CQEs), since estimating likelihood moments from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. The channelized likelihood is then used to form ML estimates of the parameter(s). In this work we choose an imaging example in which the second-order statistics of the image data depend upon the parameter of interest: the correlation length. Correlation lengths are used to approximate background textures in many imaging applications, and in these cases an estimate of the correlation length is useful for pre-whitening. In a simulation study we compare the estimation performance, as measured by the root-mean-squared error (RMSE), of correlation length estimates from CQE and power spectral density (PSD) distribution fitting. To abide by the assumptions of the PSD method we simulate an ergodic, isotropic, stationary, and zero-mean random process. These assumptions are not part of the CQE formalism. The CQE method assumes a Gaussian channelized likelihood that can be valid even for non-Gaussian image data, since the channel outputs are formed from weighted sums of the image elements. We have shown that, for three or more channels, the RMSE of CQE estimates of correlation length is lower than conventional PSD estimates. We also show that computing CQE by using a standard nonlinear optimization method produces channels that yield RMSE within 2% of the analytic optimum. CQE estimates of anisotropic correlation lengths are reported to demonstrate this technique on a two-parameter estimation problem. PMID:27409452
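
    A minimal sketch of the CQE idea, under several assumptions not in the paper (an exponential covariance model, fixed low-frequency cosine channels rather than optimized ones, and a grid search in place of nonlinear optimization): project the M-dimensional data onto L channels, then maximize the channelized Gaussian likelihood over candidate correlation lengths.

```python
import numpy as np

def channelized_ml_corrlen(G, T, rhos):
    # G: (n, M) zero-mean data vectors; T: (M, L) channel matrix with L << M.
    V = G @ T                                   # L-dimensional channel outputs
    idx = np.arange(T.shape[0])
    best_rho, best_ll = None, -np.inf
    for rho in rhos:
        Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / rho)
        C = T.T @ Sigma @ T                     # channelized covariance model
        _, logdet = np.linalg.slogdet(C)
        Cinv = np.linalg.inv(C)
        ll = -0.5 * (len(V) * logdet + np.sum((V @ Cinv) * V))
        if ll > best_ll:
            best_ll, best_rho = ll, rho
    return best_rho

rng = np.random.default_rng(3)
M, L, rho_true = 32, 4, 4.0
idx = np.arange(M)
Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / rho_true)
G = rng.multivariate_normal(np.zeros(M), Sigma, size=2000)
T = np.cos(np.pi * np.outer(idx, np.arange(1, L + 1)) / M)  # simple cosine channels
rho_hat = channelized_ml_corrlen(G, T, [2.0, 3.0, 4.0, 5.0, 6.0, 8.0])
```

    The L-by-L covariance inversion here replaces an M-by-M one, which is the sample-size and conditioning advantage the abstract emphasizes.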

  7. Fast, Continuous Audiogram Estimation using Machine Learning

    PubMed Central

    Song, Xinyu D.; Wallace, Brittany M.; Gardner, Jacob R.; Ledbetter, Noah M.; Weinberger, Kilian Q.; Barbour, Dennis L.

    2016-01-01

    Objectives Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study is to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. Design The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and 1 repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). Results The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably to those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. Conclusions The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry. PMID

  8. Input estimation from measured structural response

    SciTech Connect

    Harvey, Dustin; Cross, Elizabeth; Silva, Ramon A; Farrar, Charles R; Bement, Matt

    2009-01-01

    This report will focus on the estimation of unmeasured dynamic inputs to a structure given a numerical model of the structure and measured response acquired at discrete locations. While the estimation of inputs has not received as much attention historically as state estimation, there are many applications where an improved understanding of the unmeasured input to a structure is vital (e.g., validating temporally and spatially varying load models for large structures such as buildings and ships). In this paper, the introduction contains a brief summary of previous input estimation studies. Next, an adjoint-based optimization method is used to estimate dynamic inputs to two experimental structures. The technique is evaluated in simulation and with experimental data, both on a cantilever beam and on a three-story frame structure. The performance and limitations of the adjoint-based input estimation technique are discussed.

  9. Nonparametric k-nearest-neighbor entropy estimator.

    PubMed

    Lombardi, Damiano; Pant, Sanjay

    2016-01-01

    A nonparametric k-nearest-neighbor-based entropy estimator is proposed. It improves on the classical Kozachenko-Leonenko estimator by considering nonuniform probability densities in the region of k-nearest neighbors around each sample point. It aims to improve the classical estimators in three situations: first, when the dimensionality of the random variable is large; second, when near-functional relationships leading to high correlation between components of the random variable are present; and third, when the marginal variances of random variable components vary significantly with respect to each other. Heuristics on the error of the proposed and classical estimators are presented. Finally, the proposed estimator is tested for a variety of distributions in successively increasing dimensions and in the presence of a near-functional relationship. Its performance is compared with a classical estimator, and a significant improvement is demonstrated. PMID:26871193
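
    For reference, the classical Kozachenko-Leonenko estimator that this work improves on can be sketched as follows (brute-force neighbor search, entropy in nats); the proposed nonuniform-density correction is not included.

```python
import math
import numpy as np

def kl_entropy(X, k=3):
    # Classical Kozachenko-Leonenko k-NN entropy estimate:
    # H ~= psi(N) - psi(k) + log(c_d) + (d/N) * sum_i log(eps_i),
    # where eps_i is the distance to the k-th nearest neighbour of sample i.
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    N, d = X.shape
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    eps = np.sort(dist, axis=1)[:, k]              # column 0 is the point itself
    # log volume of the d-dimensional unit ball, c_d = pi^(d/2) / Gamma(d/2 + 1)
    log_cd = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)

    def psi(n):                                    # digamma at a positive integer
        return -0.5772156649015329 + sum(1.0 / m for m in range(1, n))

    return psi(N) - psi(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(4)
x = rng.normal(size=1000)          # true entropy 0.5*log(2*pi*e) ~ 1.4189 nats
h = kl_entropy(x, k=3)
```

    The three failure regimes listed in the abstract (high dimension, near-functional dependence, disparate marginal variances) all distort the uniform-density assumption implicit in the `eps` term above.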

  10. Impact location estimation in anisotropic structures

    NASA Astrophysics Data System (ADS)

    Zhou, Jingru; Mathews, V. John; Adams, Daniel O.

    2015-03-01

    Impacts are major causes of in-service damage in aerospace structures. Therefore, impact location estimation techniques are necessary components of Structural Health Monitoring (SHM). In this paper, we consider impact location estimation in anisotropic composite structures using acoustic emission signals arriving at a passive sensor array attached to the structure. Unlike many published location estimation algorithms, the algorithm presented in this paper does not require the waveform velocity profile for the structure. Rather, the method employs time-of-arrival information to jointly estimate the impact location and the average signal transmission velocities from the impact to each sensor on the structure. The impact location and velocities are estimated as the solution of a nonlinear optimization problem with multiple quadratic constraints. The optimization problem is solved by using first-order optimality conditions. Numerical simulations as well as experimental results demonstrate the ability of the algorithm to accurately estimate the impact location using acoustic emission signals.
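
    A simplified sketch of velocity-free location estimation: at the true source, sensor distances are linear in the arrival times (slope = average wave speed), so a grid search over candidate locations can score each point by its linear-fit residual. This assumes a single common speed, unlike the paper's per-sensor velocities and constrained nonlinear optimization; all names and geometry are illustrative.

```python
import numpy as np

def locate_impact(sensors, t_arr, candidates):
    # Score each candidate by how well distance = v * (t - t0) fits linearly.
    best = (np.inf, None, None)
    A = np.column_stack([t_arr, np.ones_like(t_arr)])
    for p in candidates:
        d = np.linalg.norm(sensors - p, axis=1)
        coef, res, rank, _ = np.linalg.lstsq(A, d, rcond=None)
        r = res[0] if res.size else np.inf
        if r < best[0]:
            best = (r, p, coef[0])
    return best[1], best[2]                    # location, estimated speed

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.0]])
p_true, v_true, t0 = np.array([0.35, 0.6]), 5.0, 0.1
t_arr = t0 + np.linalg.norm(sensors - p_true, axis=1) / v_true
grid = [np.array([i * 0.05, j * 0.05]) for i in range(21) for j in range(21)]
p_hat, v_hat = locate_impact(sensors, t_arr, grid)
```

    Note that the wave speed falls out of the regression rather than being supplied, which is the velocity-profile-free property highlighted in the abstract.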

  11. Notes on a New Coherence Estimator

    SciTech Connect

    Bickel, Douglas L.

    2016-01-01

    This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including presenting the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to the knowledge of the noise value. ACKNOWLEDGEMENTS: The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Estimating Hurricane Rates in a Changing Climate

    NASA Astrophysics Data System (ADS)

    Coughlin, K.; Laepple, T.; Rowlands, D.; Jewson, S.; Bellone, E.

    2009-04-01

    The estimation of hurricane risk is of life-and-death importance for millions of people living on the western coast of the Atlantic. Risk Management Solutions (RMS) provides products and services for the quantification and management of many catastrophe risks, including the risks associated with Atlantic hurricanes. Of particular importance in the modeling of hurricane risk is the estimation of future hurricane rates. We are interested in making accurate estimates, over the next 5 years, of the underlying rates associated with Atlantic hurricanes that make landfall. This presentation discusses the methodology used in making these estimates. Specifically, we discuss the importance of estimating the changing environments, both local and global, that affect hurricane formation and development. Our methodology combines statistical modeling, physical insight and modeling, and expert opinion to provide RMS with accurate estimates of the underlying rate associated with landfalling hurricanes in the Atlantic.

  13. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  15. On estimating the Venus spin vector

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.

    1972-01-01

    The improvement in spin vector and probe position estimates that one may reasonably expect from the processing of such data is indicated. This was done by duplicating the ensemble calculations associated with a weighted least squares with a priori estimation technique applied to range-rate data that were assumed to be unbiased and uncorrelated. The weighting matrix was assumed to be the inverse of the covariance matrix of the noise on the data. Attention is focused primarily on the spin vector estimation.
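
    The weighted-least-squares-with-a-priori machinery mentioned here can be sketched generically: augment the data normal equations with the inverse prior covariance, and the posterior covariance quantifies the expected improvement. All matrices and numbers below are illustrative assumptions, not the Venus analysis.

```python
import numpy as np

def wls_with_prior(A, y, W, x0, P0):
    # Minimize (y - A x)^T W (y - A x) + (x - x0)^T P0^{-1} (x - x0).
    P0inv = np.linalg.inv(P0)
    P = np.linalg.inv(A.T @ W @ A + P0inv)     # posterior covariance
    x = P @ (A.T @ W @ y + P0inv @ x0)         # posterior estimate
    return x, P

rng = np.random.default_rng(5)
x_true = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(40, 3))
sigma = 0.1
y = A @ x_true + sigma * rng.normal(size=40)
W = np.eye(40) / sigma**2                      # inverse noise covariance as weights
x0, P0 = np.zeros(3), np.eye(3)                # weak a priori estimate
x_hat, P = wls_with_prior(A, y, W, x0, P0)
```

    Because the data term only adds information, the posterior covariance `P` is never larger than the prior `P0`, which is the kind of ensemble improvement the abstract evaluates.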

  16. Estimating phytoplankton biomass and productivity. Final report

    SciTech Connect

    Janik, J.J.; Taylor, W.D.; Lambou, V.W.

    1981-06-01

    Estimates of phytoplankton biomass and rates of production can provide a manager with some insight into questions concerning trophic state, water quality, and aesthetics. Methods for estimation of phytoplankton biomass include a gravimetric approach, microscopic enumeration, and chlorophyll analysis. Strengths and weaknesses of these and other methods are presented. Productivity estimation techniques are discussed, including oxygen measurements, carbon dioxide measurements, carbon-14 measurements, and the chlorophyll method. Again, strengths and weaknesses are presented.

  17. Communications availability: Estimation studies at AMSC

    NASA Technical Reports Server (NTRS)

    Sigler, C. Edward, Jr.

    1994-01-01

    The results of L-band communications availability work performed to date are presented. Results include an L-band communications availability estimate model and field propagation trials using an INMARSAT-M terminal. American Mobile Satellite Corporation's (AMSC's) primary concern centers on the availability of intelligible voice communications, with secondary concerns for circuit-switched data and fax. The model estimates for representative terrain/vegetation areas are applied to the contiguous U.S. for overall L-band communications availability estimates.

  18. State energy data report 1994: Consumption estimates

    SciTech Connect

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Estimates are given for each energy type and end-use sector. Nuclear electric power is included.

  19. Generalized Covariance Analysis For Remote Estimators

    NASA Technical Reports Server (NTRS)

    Boone, Jack N.

    1991-01-01

    Technique developed to predict true covariance of stochastic process at remote location when control applied to process both by autonomous (local-estimator) control subsystem and remote (non-local-estimator) control subsystem. Intended originally for design and evaluation of ground-based schemes for estimation of gyro parameters of Magellan spacecraft. Applications include variety of remote-control systems with and without delays. Potential terrestrial applications include navigation and control of industrial processes.

  20. Improved diagnostic model for estimating wind energy

    SciTech Connect

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  1. Outer planet probe cost estimates: First impressions

    NASA Technical Reports Server (NTRS)

    Niehoff, J.

    1974-01-01

    An examination was made of early estimates of outer planetary atmospheric probe cost by comparing the estimates with past planetary projects. Of particular interest is identification of project elements which are likely cost drivers for future probe missions. Data are divided into two parts: first, the description of a cost model developed by SAI for the Planetary Programs Office of NASA, and second, use of this model and its data base to evaluate estimates of probe costs. Several observations are offered in conclusion regarding the credibility of current estimates and specific areas of the outer planet probe concept most vulnerable to cost escalation.

  2. Uncertainty analysis for Probable Maximum Precipitation estimates

    NASA Astrophysics Data System (ADS)

    Micovic, Zoran; Schaefer, Melvin G.; Taylor, George H.

    2015-02-01

    An analysis of uncertainty associated with Probable Maximum Precipitation (PMP) estimates is presented. The focus of the study is firmly on PMP estimates derived through meteorological analyses and not on statistically derived PMPs. Theoretical PMP cannot be computed directly and operational PMP estimates are developed through a stepwise procedure using a significant degree of subjective professional judgment. This paper presents a methodology for portraying the uncertain nature of PMP estimation by analyzing individual steps within the PMP derivation procedure whereby for each parameter requiring judgment, a set of possible values is specified and accompanied by expected probabilities. The resulting range of possible PMP values can be compared with the previously derived operational single-value PMP, providing measures of the conservatism and variability of the original estimate. To our knowledge, this is the first uncertainty analysis conducted for a PMP derived through meteorological analyses. The methodology was tested on the La Joie Dam watershed in British Columbia. The results indicate that the commonly used single-value PMP estimate could be more than 40% higher when possible changes in various meteorological variables used to derive the PMP are considered. The findings of this study imply that PMP estimates should always be characterized as a range of values recognizing the significant uncertainties involved in PMP estimation. In fact, we do not know at this time whether precipitation is actually upper-bounded, and if precipitation is upper-bounded, how closely PMP estimates approach the theoretical limit.
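
    The stepwise-uncertainty idea can be sketched with a toy Monte Carlo, assuming a hypothetical PMP derivation with three multiplicative judgment steps, each given a small set of possible values and expert probabilities; every number below is invented for illustration, not taken from the La Joie Dam study.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stepwise derivation: PMP = storm depth x moisture ratio x
# transposition adjustment, each step carrying expert-judged alternatives.
steps = {
    "storm_depth_mm": ([400.0, 450.0, 500.0], [0.3, 0.5, 0.2]),
    "moisture_ratio": ([1.2, 1.3, 1.4], [0.25, 0.5, 0.25]),
    "transposition":  ([0.95, 1.0, 1.1], [0.2, 0.6, 0.2]),
}
n = 100_000
pmp = np.ones(n)
for values, probs in steps.values():
    pmp *= rng.choice(values, size=n, p=probs)   # sample one judgment per trial
lo, med, hi = np.percentile(pmp, [5, 50, 95])    # a range replaces the single value
```

    Reporting the (5th, 50th, 95th) percentiles instead of one number is exactly the shift from a single-value PMP to a characterized range that the abstract argues for.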

  3. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  4. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  5. Quantum Enhanced Estimation of a Multidimensional Field.

    PubMed

    Baumgratz, Tillmann; Datta, Animesh

    2016-01-22

    We present a framework for the quantum enhanced estimation of multiple parameters corresponding to noncommuting unitary generators. Our formalism provides a recipe for the simultaneous estimation of all three components of a magnetic field. We propose a probe state that surpasses the precision of estimating the three components individually, and we discuss measurements that come close to attaining the quantum limit. Our study also reveals that too much quantum entanglement may be detrimental to attaining the Heisenberg scaling in the estimation of unitarily generated parameters. PMID:26849579

  6. How EIA Estimates Natural Gas Production

    EIA Publications

    2004-01-01

    The Energy Information Administration (EIA) publishes estimates monthly and annually of the production of natural gas in the United States. The estimates are based on data EIA collects from gas producing states and data collected by the U. S. Minerals Management Service (MMS) in the Department of Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies to generate monthly production estimates that are described in this document.

  7. SNR estimation for the baseband assembly

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Mileant, A.

    1986-01-01

    The expected value and the variance of the Baseband Assembly symbol signal-to-noise ratio (SNR) estimation algorithm are derived. The SNR algorithm treated here is designated as the Split Symbol Moments Estimator (SSME). It consists of averaging the first two moments of the integrated half symbols. The SSME is a biased, consistent estimator. The SNR degradation factor due to the jitter in the subcarrier demodulation and symbol synchronization loops is taken into account. Curves of the expected value of the SNR estimator versus the actual SNR are presented.
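
    A simplified split-symbol sketch (ignoring the subcarrier-demodulation and symbol-sync jitter treated in the report): integrate the two halves of each symbol separately, then combine the first two moments of the half-symbol sums. Like the SSME itself, this simple ratio form is biased but consistent. The BPSK signal model and all names are illustrative assumptions.

```python
import numpy as np

def ssme_snr(samples, sps):
    # Integrate the two halves of each symbol separately (sps samples/symbol).
    n_sym = len(samples) // sps
    x = np.reshape(samples[: n_sym * sps], (n_sym, sps))
    half = sps // 2
    y = x[:, :half].sum(axis=1)
    z = x[:, half:2 * half].sum(axis=1)
    mp = np.mean(y * z)                 # ~ s^2 (half-symbol signal power)
    ms = np.mean((y + z) ** 2)          # ~ 4 s^2 + 2 sigma^2
    return 4.0 * mp / (ms - 4.0 * mp)   # symbol SNR estimate

rng = np.random.default_rng(6)
sps, n_sym, amp, sigma = 8, 20000, 1.0, 1.0
bits = rng.choice([-1.0, 1.0], size=n_sym)
samples = np.repeat(bits, sps) * amp + sigma * rng.normal(size=n_sym * sps)
snr_hat = ssme_snr(samples, sps)        # true symbol SNR = sps * amp^2 / sigma^2
```

    The product moment `mp` cancels the unknown data modulation because both halves of a symbol share the same sign, which is the trick that lets the SSME work on modulated data.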

  8. RAINFALL-LOSS PARAMETER ESTIMATION FOR ILLINOIS.

    USGS Publications Warehouse

    Weiss, Linda S.; Ishii, Audrey

    1986-01-01

    The U. S. Geological Survey is currently conducting an investigation to estimate values of parameters for two rainfall-loss computation methods used in a commonly used flood-hydrograph model. Estimates of six rainfall-loss parameters are required: four for the Exponential Loss-Rate method and two for the Initial and Uniform Loss-Rate method. Multiple regression analyses on calibrated data from 616 storms at 98 gaged basins are being used to develop parameter-estimating techniques for these six parameters at ungaged basins in Illinois. Parameter-estimating techniques are being verified using data from a total of 105 storms at 35 uncalibrated gaged basins.

  9. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
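
    As a heavily reduced illustration, the sketch below estimates a single-axis inertia from noisy torque and angular-acceleration pairs by ordinary least squares. The paper's method instead estimates the full 3-by-3 inertia matrix with LMI bounds via semidefinite programming, which is omitted here; all numbers are made up:

```python
import random

random.seed(3)

I_true = 2.5    # kg*m^2, hypothetical single-axis inertia
n = 300
alphas = [random.uniform(-1.0, 1.0) for _ in range(n)]             # measured accelerations
taus = [I_true * a + random.gauss(0.0, 0.05) for a in alphas]      # noisy torques

# least squares through the origin for tau = I * alpha
I_hat = sum(t * a for t, a in zip(taus, alphas)) / sum(a * a for a in alphas)
```

    The unconstrained estimate can violate physical bounds (e.g. positive definiteness of the full inertia matrix) in noisy settings, which is exactly what the LMI constraints in the paper prevent.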

  10. System for Estimating Horizontal Velocity During Descent

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Cheng, Yang; Wilson, Reg; Goguen, Jay; Martin, Alejandro San; Leger, Chris; Matthies, Larry

    2007-01-01

    The descent image motion estimation system (DIMES) is a system of hardware and software, designed for original use in estimating the horizontal velocity of a spacecraft descending toward a landing on Mars. The estimated horizontal velocity is used in generating rocket-firing commands to reduce the horizontal velocity as part of an overall control scheme to minimize the landing impact. DIMES can also be used for estimating the horizontal velocity of a remotely controlled or autonomous aircraft for purposes of navigation and control.

  11. Estimation of fecundability from survey data.

    PubMed

    Goldman, N; Westoff, C F; Paul, L E

    1985-01-01

    The estimation of fecundability from survey data is plagued by methodological problems such as misreporting of dates of birth and marriage and the occurrence of premarital exposure to the risk of conception. Nevertheless, estimates of fecundability from World Fertility Survey data for women married in recent years appear to be plausible for most of the surveys analyzed here and are quite consistent with estimates reported in earlier studies. The estimates presented in this article are all derived from the first interval, the interval between marriage or consensual union and the first live birth conception.

  12. Estimating survival of radio-tagged birds

    USGS Publications Warehouse

    Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.

    1993-01-01

Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
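
    The Kaplan-Meier product-limit estimate mentioned above can be sketched in a few lines. The (time, event) pairs below are invented; event 1 marks a death and 0 a censored bird (e.g. a radio failure):

```python
# Kaplan-Meier product-limit estimate of a survival function
data = [(2, 1), (3, 0), (4, 1), (4, 1), (5, 0), (7, 1), (9, 0)]  # invented

times = sorted({t for t, e in data if e == 1})   # distinct death times
s = 1.0
survival = {}
for t in times:
    at_risk = sum(1 for ti, _ in data if ti >= t)            # still being tracked
    deaths = sum(1 for ti, e in data if ti == t and e == 1)  # deaths at time t
    s *= 1.0 - deaths / at_risk                              # product-limit update
    survival[t] = s
```

    Censored birds contribute to the risk set up to their last observation but never to the death counts, which is what lets telemetry studies use incomplete histories.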

  13. Fatality estimator user’s guide

    USGS Publications Warehouse

    Huso, Manuela M.; Som, Nicholas; Ladd, Lew

    2012-01-01

    Only carcasses judged to have been killed after the previous search should be included in the fatality data set submitted to this estimator software. This estimator already corrects for carcasses missed in previous searches, so carcasses judged to have been missed at least once should be considered “incidental” and not included in the fatality data set used to estimate fatality. Note: When observed carcass count is <5 (including 0 for species known to be at risk, but not observed), USGS Data Series 881 (http://pubs.usgs.gov/ds/0881/) is recommended for fatality estimation.

  14. Outlier robust nonlinear mixed model estimation.

    PubMed

    Williams, James D; Birch, Jeffrey B; Abdel-Salam, Abdel-Salam G

    2015-04-15

    In standard analyses of data well-modeled by a nonlinear mixed model, an aberrant observation, either within a cluster, or an entire cluster itself, can greatly distort parameter estimates and subsequent standard errors. Consequently, inferences about the parameters are misleading. This paper proposes an outlier robust method based on linearization to estimate fixed effects parameters and variance components in the nonlinear mixed model. An example is given using the four-parameter logistic model and bioassay data, comparing the robust parameter estimates with the nonrobust estimates given by SAS(®).

  15. Array algebra estimation in signal processing

    NASA Astrophysics Data System (ADS)

    Rauhala, U. A.

A general theory of linear estimators called array algebra estimation is interpreted in terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix, and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies, converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in the mathematical surveying sciences.

  16. Fatality estimator user’s guide

    USGS Publications Warehouse

    Huso, Manuela M.; Som, Nicholas; Ladd, Lew

    2012-12-11

    Only carcasses judged to have been killed after the previous search should be included in the fatality data set submitted to this estimator software. This estimator already corrects for carcasses missed in previous searches, so carcasses judged to have been missed at least once should be considered “incidental” and not included in the fatality data set used to estimate fatality. Note: When observed carcass count is <5 (including 0 for species known to be at risk, but not observed), USGS Data Series 881 (http://pubs.usgs.gov/ds/0881/) is recommended for fatality estimation.

  17. On Optimal Projective Fusers for Function Estimators

    SciTech Connect

    Rao, N.S.V.

    1999-06-22

    We propose a fuser that projects different function estimators in different regions of the input space based on the lower envelope of the error curves of the individual estimators. This fuser is shown to be optimal among projective fusers and also to perform at least as well as the best individual estimator. By incorporating an optimal linear fuser as another estimator, this fuser performs at least as well as the optimal linear combination. We illustrate the fuser by combining neural networks trained using different parameters for the network and/or for learning algorithms.

  18. A Monte Carlo method for variance estimation for estimators based on induced smoothing

    PubMed Central

    Jin, Zhezhen; Shao, Yongzhao; Ying, Zhiliang

    2015-01-01

    An important issue in statistical inference for semiparametric models is how to provide reliable and consistent variance estimation. Brown and Wang (2005. Standard errors and covariance matrices for smoothed rank estimators. Biometrika 92, 732–746) proposed a variance estimation procedure based on an induced smoothing for non-smooth estimating functions. Herein a Monte Carlo version is developed that does not require any explicit form for the estimating function itself, as long as numerical evaluation can be carried out. A general convergence theory is established, showing that any one-step iteration leads to a consistent variance estimator and continuation of the iterations converges at an exponential rate. The method is demonstrated through the Buckley–James estimator and the weighted log-rank estimators for censored linear regression, and rank estimation for multiple event times data. PMID:24812418
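
    A minimal version of the Monte Carlo induced-smoothing idea can be shown for the sample median, whose estimating function is non-smooth. The bandwidth, number of draws, and sandwich form below are illustrative assumptions, not the authors' exact recipe:

```python
import math
import random
import statistics

random.seed(1)
n = 4000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

def U(theta):
    # non-smooth estimating function whose root is the sample median
    return sum(1.0 if x > theta else -1.0 for x in xs)

theta_hat = statistics.median(xs)

# induced smoothing: Utilde(theta) = E_Z[U(theta + h*Z)], Z ~ N(0,1)
h = 1.0 / math.sqrt(n)
B = 500
zs = [random.gauss(0.0, 1.0) for _ in range(B)]

def U_smooth(theta):
    # Monte Carlo average over the same z draws (common random numbers)
    return sum(U(theta + h * z) for z in zs) / B

# slope A of the smoothed estimating function by central difference
eps = 0.1
A = (U_smooth(theta_hat + eps) - U_smooth(theta_hat - eps)) / (2.0 * eps)

V = float(n)            # Var U at the true median: each sign term has variance 1
var_hat = V / (A * A)   # sandwich variance estimate for the median
```

    For standard normal data the asymptotic variance of the sample median is pi/(2n), which the sandwich estimate should approximate; note that U is only ever evaluated numerically, as the article requires.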

  19. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
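
    The calibration step can be illustrated with a minimal rating-curve regression of log load on log streamflow over synthetic data. LOADEST itself uses AMLE, MLE, or LAD with several explanatory variables and retransformation corrections; the coefficients here are hypothetical:

```python
import math
import random

random.seed(2)

# synthetic calibration set: load follows a power law of streamflow
a_true, b_true = -2.0, 1.5
flows = [random.uniform(5.0, 500.0) for _ in range(200)]
loads = [math.exp(a_true + b_true * math.log(q) + random.gauss(0.0, 0.1))
         for q in flows]

# calibration: ordinary least squares of ln(load) on ln(flow)
xs = [math.log(q) for q in flows]
ys = [math.log(v) for v in loads]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
a_hat = ybar - b_hat * xbar

def estimate_load(q):
    # estimation step: apply the fitted rating curve at a new flow
    return math.exp(a_hat + b_hat * math.log(q))
```

    The two-phase structure (fit on the calibration record, then predict over the estimation interval) mirrors the calibration/estimation split described in the report.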

  20. Cost Estimating Handbook for Environmental Restoration

    SciTech Connect

    1990-09-01

Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: An entirely new set of specialized expressions and terminology. A higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work. A higher than normal percentage of indirect costs to the total estimated cost due primarily to record keeping, special training, liability, and indemnification. More than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: The basis for accurate, consistent, and traceable baselines. Sound methodologies, guidelines, and estimating formats. Sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.

  1. A Comparison of Student Skill Knowledge Estimates

    ERIC Educational Resources Information Center

    Ayers, Elizabeth; Nugent, Rebecca; Dean, Nema

    2009-01-01

    A fundamental goal of educational research is identifying students' current stage of skill mastery (complete/partial/none). In recent years a number of cognitive diagnosis models have become a popular means of estimating student skill knowledge. However, these models become difficult to estimate as the number of students, items, and skills grows.…

  2. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate... representative sample of each producer's milk at least once each month at irregular intervals. Samples shall...

  3. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate... representative sample of each producer's milk at least once each month at irregular intervals. Samples shall...

  4. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate... representative sample of each producer's milk at least once each month at irregular intervals. Samples shall...

  5. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate... representative sample of each producer's milk at least once each month at irregular intervals. Samples shall...

  6. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

A mathematical model was used to estimate malaria transmission rates from serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The estimated transmission rates were applied to a simple compartmental model in order to mimic malaria transmission. The model reproduced serological and parasite prevalence data well. PMID:8270011
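
    A minimal sketch of estimating a force of infection from serology is the catalytic model, in which seroprevalence rises with age as P(a) = 1 - exp(-lambda*a). The rate and age grid below are hypothetical, and the fit ignores the stochastic structure of the authors' age-dependent model:

```python
import math

# catalytic model: seroprevalence P(a) = 1 - exp(-lam * a)
lam_true = 0.08                      # per-year force of infection (assumed)
ages = [2.0, 5.0, 10.0, 15.0, 20.0, 30.0]
prev = [1.0 - math.exp(-lam_true * a) for a in ages]

# linearize: -ln(1 - P(a)) = lam * a, then fit lam through the origin
ys = [-math.log(1.0 - p) for p in prev]
lam_hat = sum(a * y for a, y in zip(ages, ys)) / sum(a * a for a in ages)
```

    With noise-free prevalences the regression recovers the assumed rate exactly; with survey data the same linearization gives a starting value for a likelihood fit.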

  7. Estimation of linear functionals in emission tomography

    SciTech Connect

    Kuruc, A.

    1995-08-01

In emission tomography, the spatial distribution of a radioactive tracer is estimated from a finite sample of externally-detected photons. We present an algorithm-independent theory of statistical accuracy attainable in emission tomography that makes minimal assumptions about the underlying image. Let f denote the tracer density as a function of position (i.e., f is the image being estimated). We consider the problem of estimating the linear functional Φ(f) ≡ ∫φ(x)f(x) dx, where φ is a smooth function, from n independent observations identically distributed according to the Radon transform of f. Assuming only that f is bounded above and below away from 0, we construct statistically efficient estimators for Φ(f). By definition, the variance of the efficient estimator is a best-possible lower bound (depending on φ and f) on the variance of unbiased estimators of Φ(f). Our results show that, in general, the efficient estimator will have a smaller variance than the standard estimator based on the filtered-backprojection reconstruction algorithm. The improvement in performance is obtained by exploiting the range properties of the Radon transform.

  8. Identification and Estimation of Hedonic Models

    ERIC Educational Resources Information Center

    Ekeland, Ivar; Heckman, James J.; Nesheim, Lars

    2004-01-01

    This paper considers the identification and estimation of hedonic models. We establish that in an additive version of the hedonic model, technology and preferences are generically nonparametrically identified from data on demand and supply in a single hedonic market. The empirical literature that claims that hedonic models estimated on data from a…

  9. Estimation of Variance Components Using Computer Packages.

    ERIC Educational Resources Information Center

    Chastain, Robert L.; Willson, Victor L.

    Generalizability theory is based upon analysis of variance (ANOVA) and requires estimation of variance components for the ANOVA design under consideration in order to compute either G (Generalizability) or D (Decision) coefficients. Estimation of variance components has a number of alternative methods available using SAS, BMDP, and ad hoc…

  10. Estimating Avogadro's number from skylight and airlight

    NASA Astrophysics Data System (ADS)

    Pesic, Peter

    2005-01-01

    Naked-eye determinations of the visual range yield order-of-magnitude estimates of Avogadro's number, using an argument of Rayleigh. Alternatively, by looking through a cardboard tube, we can compare airlight and skylight and give another estimate of this number using the law of atmospheres.

  11. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.

  12. Estimated Water Use in Puerto Rico, 2005

    USGS Publications Warehouse

    Molina-Rivera, Wanda L.; Gómez-Gómez, Fernando

    2008-01-01

Water-use data were compiled for the 78 municipios of the Commonwealth of Puerto Rico for 2005. Five offstream categories were considered: public-supply water withdrawals and deliveries, domestic self-supplied water use, industrial self-supplied ground-water withdrawals, crop irrigation water use, and thermoelectric power freshwater use. One instream water-use category also was considered: power-generation instream water use (thermoelectric-saline withdrawals and hydroelectric power). Freshwater withdrawals and deliveries for offstream use from surface- and ground-water sources in Puerto Rico were estimated at 712 million gallons per day (Mgal/d). The largest amount of freshwater withdrawn was by public-supply water facilities and was estimated at 652 Mgal/d. The public-supply domestic water use was estimated at 347 Mgal/d. Fresh surface- and ground-water withdrawals by domestic self-supplied users were estimated at 2.1 Mgal/d and the industrial self-supplied withdrawals were estimated at 9.4 Mgal/d. Withdrawals for crop irrigation purposes were estimated at 45.2 Mgal/d, or approximately 6.3 percent of all offstream freshwater withdrawals. Instream freshwater withdrawals by hydroelectric facilities were estimated at 568 Mgal/d and saline instream surface-water withdrawals for cooling purposes by thermoelectric-power facilities were estimated at 2,288 Mgal/d.

  13. APPROACH FOR ESTIMATING GLOBAL LANDFILL METHANE EMISSIONS

    EPA Science Inventory

    The report is an overview of available country-specific data and modeling approaches for estimating global landfill methane. Current estimates of global landfill methane indicate that landfills account for between 4 and 15% of the global methane budget. The report describes an ap...

  14. Taking the Guesswork out of Computational Estimation

    ERIC Educational Resources Information Center

    Cochran, Jill; Dugger, Megan Hartmann

    2013-01-01

    Computational estimation is an important skill necessary for students' mathematical development. Students who can estimate well for computations rely on an understanding of many mathematical topics, including a strong number sense, which facilitates understanding the mathematical operations and contextual evidence within a problem. In turn,…

  15. Linearized motion estimation for articulated planes.

    PubMed

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
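
    The core numerical step described above, a linear least-squares problem with equality constraints solved through a Karush-Kuhn-Tucker system, can be sketched in miniature. The tiny problem below (made-up data, a single constraint standing in for an articulation) is illustrative only:

```python
def solve(M, v):
    # tiny Gaussian elimination with partial pivoting for a square system
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# minimize ||Ax - b||^2 subject to Cx = d
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 3.0, 2.0]
C = [[1.0, -1.0]]          # articulation-style equality: x1 = x2
d = [0.0]

# normal-equation blocks
AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]

# KKT system: [[2*AtA, C^T], [C, 0]] [x; lam] = [2*Atb; d]
KKT = [
    [2 * AtA[0][0], 2 * AtA[0][1], C[0][0]],
    [2 * AtA[1][0], 2 * AtA[1][1], C[0][1]],
    [C[0][0],       C[0][1],       0.0],
]
rhs = [2 * Atb[0], 2 * Atb[1], d[0]]
x1, x2, lam = solve(KKT, rhs)
```

    The Lagrange multiplier lam reports how strongly the articulation constraint binds; in the paper's setting the unknowns are homography parameters of all planes solved simultaneously.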

  16. Bayesian Estimation Supersedes the "t" Test

    ERIC Educational Resources Information Center

    Kruschke, John K.

    2013-01-01

    Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…

  17. Smoothing Methods for Estimating Test Score Distributions.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1991-01-01

    Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…

  18. Comparing population size estimators for plethodontid salamanders

    USGS Publications Warehouse

    Bailey, L.L.; Simons, T.R.; Pollock, K.H.

    2004-01-01

Despite concern over amphibian declines, few studies estimate absolute abundances because of logistic and economic constraints and previously poor estimator performance. Two estimation approaches recommended for amphibian studies are mark-recapture and depletion (or removal) sampling. We compared abundance estimation via various mark-recapture and depletion methods, using data from a three-year study of terrestrial salamanders in Great Smoky Mountains National Park. Our results indicate that short-term closed-population, robust design, and depletion methods estimate the surface population of salamanders (i.e., those near the surface and available for capture during a given sampling occasion). In longer duration studies, temporary emigration violates assumptions of both open- and closed-population mark-recapture estimation models. However, if the temporary emigration is completely random, these models should yield unbiased estimates of the total population (superpopulation) of salamanders in the sampled area. We recommend using Pollock's robust design in mark-recapture studies because of its flexibility to incorporate variation in capture probabilities and to estimate temporary emigration probabilities.
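
    For intuition, a far simpler closed-population tool than the robust-design models discussed above is Chapman's bias-corrected Lincoln-Petersen estimator for a two-occasion study; the capture counts below are invented:

```python
def chapman_estimate(n1, n2, m2):
    """Bias-corrected Lincoln-Petersen estimate of a closed population size.

    n1: animals caught and marked on occasion 1
    n2: animals caught on occasion 2
    m2: marked animals among the n2
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

n_hat = chapman_estimate(50, 40, 10)   # invented counts
```

    Like the closed-population methods in the study, this estimates only the population available for capture; temporary emigration of salamanders below the surface would bias it toward the surface population.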

  19. Spectral moment estimation in MST radars

    NASA Technical Reports Server (NTRS)

    Woodman, R. F.

    1983-01-01

    Signal processing techniques used in Mesosphere-Stratosphere-Troposphere (MST) radars are reviewed. Techniques which produce good estimates of the total power, frequency shift, and spectral width of the radar power spectra are considered. Non-linear curve fitting, autocovariance, autocorrelation, covariance, and maximum likelihood estimators are discussed.
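
    The moments in question can be sketched directly from a discretized power spectrum: total power is the zeroth moment, the mean Doppler shift is the first, and the spectral width is the square root of the second central moment. The toy spectrum below (a Gaussian line over a flat noise floor, with made-up parameters) is illustrative:

```python
import math

# toy Doppler power spectrum on a frequency grid
freqs = [float(-50 + i) for i in range(101)]    # Hz
f0, width, noise = 10.0, 4.0, 0.1               # assumed line and noise floor
spec = [math.exp(-0.5 * ((f - f0) / width) ** 2) + noise for f in freqs]

floor = min(spec)                               # crude noise-floor estimate
s = [max(p - floor, 0.0) for p in spec]         # floor-subtracted spectrum

m0 = sum(s)                                     # total power
m1 = sum(f * p for f, p in zip(freqs, s)) / m0  # mean Doppler shift
m2 = sum((f - m1) ** 2 * p for f, p in zip(freqs, s)) / m0
width_hat = math.sqrt(m2)                       # spectral width
```

    The moment method is the fast alternative to the nonlinear curve fitting and covariance-based estimators the review compares; all three target the same three quantities.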

  20. Predictive control and estimation - State space approach

    NASA Technical Reports Server (NTRS)

    Gawronski, W.

    1991-01-01

    A modified output prediction procedure and a new controller design based on the predictive control law are presented. A new predictive estimator enhances system performance. The predictive controller was designed and applied to the tracking control of the NASA/JPL 70-m antenna. Simulation results show significant improvement in tracking performance over the linear quadratic controller and estimator presently in use.

  1. The Problems of Multiple Feedback Estimation.

    ERIC Educational Resources Information Center

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  2. Estimating Decision Indices Based on Composite Scores

    ERIC Educational Resources Information Center

    Knupp, Tawnya Lee

    2009-01-01

    The purpose of this study was to develop an IRT model that would enable the estimation of decision indices based on composite scores. The composite scores, defined as a combination of unidimensional test scores, were either a total raw score or an average scale score. Additionally, estimation methods for the normal and compound multinomial models…

  3. Optimal estimation of non-Gaussianity

    SciTech Connect

    Babich, Daniel

    2005-08-15

    We systematically analyze the primordial non-Gaussianity estimator used by the Wilkinson Microwave Anisotropy Probe (WMAP) science team with the basic ideas of estimation theory in order to see if the limited cosmic microwave background (CMB) data is being optimally utilized. The WMAP estimator is based on the implicit assumption that the CMB bispectrum, the harmonic transform of the three-point correlation function, contains all of the primordial non-Gaussianity information in a CMB map. We first demonstrate that the signal-to-noise (S/N) of an estimator based on CMB three-point correlation functions is significantly larger than the S/N of any estimator based on higher-order correlation functions; justifying our choice to focus on the three-point correlation function. We then conclude that the estimator based on the three-point correlation function, which was used by WMAP, is optimal, meaning it saturates the Cramer-Rao inequality when the underlying CMB map is nearly Gaussian. We quantify this restriction by demonstrating that the suboptimal character of our estimator is proportional to the square of the fiducial non-Gaussianity, which is already constrained to be extremely small, so we can consider the WMAP estimator to be optimal in practice. Our conclusions do not depend on the form of the primordial bispectrum, only on the observationally established weak levels of primordial non-Gaussianity.
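
    As a loose one-point analogue of a three-point statistic, sample skewness distinguishes a Gaussian sample from a weakly non-Gaussian transform of it. The local-type toy distortion and its amplitude below are assumptions; actual analyses estimate the harmonic-space bispectrum of a CMB map:

```python
import random

random.seed(4)
n = 100000
gauss = [random.gauss(0.0, 1.0) for _ in range(n)]

def skewness(xs):
    # standardized third central moment
    m = sum(xs) / len(xs)
    s2 = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs) / s2 ** 1.5

g_skew = skewness(gauss)                       # near zero for Gaussian data

# weakly non-Gaussian toy field: local-type distortion x + fnl*(x^2 - 1)
fnl = 0.05                                     # made-up amplitude
ng = [x + fnl * (x * x - 1.0) for x in gauss]
ng_skew = skewness(ng)                         # clearly nonzero
```

    The abstract's point carries over even to this toy: when the non-Gaussianity is weak, third-order statistics capture essentially all of the detectable signal.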

  4. A Bootstrap Procedure of Propensity Score Estimation

    ERIC Educational Resources Information Center

    Bai, Haiyan

    2013-01-01

    Propensity score estimation plays a fundamental role in propensity score matching for reducing group selection bias in observational data. To increase the accuracy of propensity score estimation, the author developed a bootstrap propensity score. The commonly used propensity score matching methods: nearest neighbor matching, caliper matching, and…

  5. 23 CFR 635.115 - Agreement estimate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

... submitted by the STD for each force account project (see 23 CFR part 635, subpart B) when the plans and... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award...

  6. Canonical Estimation of Joint Educational Production Functions.

    ERIC Educational Resources Information Center

    Chizmar, John F.; Zak, Thomas A.

    1984-01-01

    This article views learning and attitude formation within the context of joint production. Tables show summary statistics and estimated marginal products and rates for each output. These estimates reveal trade-offs within the learning process that differ for men and women. (PB)

  7. A Cost Estimation Tool for Charter Schools

    ERIC Educational Resources Information Center

    Hayes, Cheryl D.; Keller, Eric

    2009-01-01

    To align their financing strategies and fundraising efforts with their fiscal needs, charter school leaders need to know how much funding they need and what that funding will support. This cost estimation tool offers a simple set of worksheets to help start-up charter school operators identify and estimate the range of costs and timing of…

  8. Spatial Categories and the Estimation of Location

    ERIC Educational Resources Information Center

    Huttenlocher, Janellen; Hedges, Larry V.; Corrigan, Bryce; Crawford, L. Elizabeth

    2004-01-01

    Four experiments are reported in which people organize a space hierarchically when they estimate particular locations in that space. Earlier work showed that people subdivide circles into quadrants bounded at the vertical and horizontal axes, biasing their estimates towards prototypical diagonal locations within those spatial categories…

  9. Fuel Burn Estimation Using Real Track Data

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
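
    The iterative takeoff-weight idea reduces to a fixed-point iteration: trip fuel depends on takeoff weight, and takeoff weight includes the fuel carried. The burn fraction and weights below are invented, standing in for integrating the fuel-flow model along the trajectory:

```python
# toy fixed-point iteration for takeoff weight
empty_plus_payload = 60000.0          # kg, assumed zero-fuel weight

def trip_fuel(takeoff_weight):
    # stand-in for integrating the fuel-flow model along the flown track
    return 0.18 * takeoff_weight      # hypothetical 18% burn fraction

w = empty_plus_payload                # initial guess: no fuel on board
for _ in range(100):
    w_new = empty_plus_payload + trip_fuel(w)
    converged = abs(w_new - w) < 1e-6
    w = w_new
    if converged:
        break
# converged takeoff weight: w = empty_plus_payload / (1 - 0.18)
```

    Because fuel burn is a small fraction of weight, the iteration contracts quickly; the same structure lets the paper bound fuel-consumption uncertainty when the takeoff weight is unknown.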

  10. Estimated water use in Puerto Rico, 2010

    USGS Publications Warehouse

    Molina-Rivera, Wanda L.

    2014-01-01

    Water-use data were aggregated for the 78 municipios of the Commonwealth of Puerto Rico for 2010. Five major offstream categories were considered: public-supply water withdrawals and deliveries, domestic and industrial self-supplied water use, crop-irrigation water use, and thermoelectric-power freshwater use. One instream water-use category also was compiled: power-generation instream water use (thermoelectric saline withdrawals and hydroelectric power). Freshwater withdrawals for offstream use from surface-water [606 million gallons per day (Mgal/d)] and groundwater (118 Mgal/d) sources in Puerto Rico were estimated at 724 million gallons per day. The largest amount of freshwater withdrawn was by public-supply water facilities estimated at 677 Mgal/d. Public-supply domestic water use was estimated at 206 Mgal/d. Fresh groundwater withdrawals by domestic self-supplied users were estimated at 2.41 Mgal/d. Industrial self-supplied withdrawals were estimated at 4.30 Mgal/d. Withdrawals for crop irrigation purposes were estimated at 38.2 Mgal/d, or approximately 5 percent of all offstream freshwater withdrawals. Instream freshwater withdrawals by hydroelectric facilities were estimated at 556 Mgal/d and saline instream surface-water withdrawals for cooling purposes by thermoelectric-power facilities was estimated at 2,262 Mgal/d.

  11. Estimating Gender Wage Gaps: A Data Update

    ERIC Educational Resources Information Center

    McDonald, Judith A.; Thornton, Robert J.

    2016-01-01

    In the authors' 2011 "JEE" article, "Estimating Gender Wage Gaps," they described an interesting class project that allowed students to estimate the current gender earnings gap for recent college graduates using data from the National Association of Colleges and Employers (NACE). Unfortunately, since 2012, NACE no longer…

  12. Contour-based object orientation estimation

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel

    2016-04-01

    Real-time object orientation estimation is a pressing problem in computer vision. In this paper we propose an approach to estimating the orientation of objects that lack axial symmetry. The algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, we gather a set of training images by capturing the model from viewpoints evenly distributed on a sphere. The viewpoints are distributed according to the geosphere principle, which minimizes the size of the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage matches the descriptor of an observed image against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
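
    Generating viewpoints evenly distributed on a sphere can be sketched as follows; a Fibonacci spiral is used here as a simple, common stand-in for the geodesic-sphere (geosphere) subdivision the abstract describes.

    ```python
    import math

    def sphere_viewpoints(n):
        """Return n roughly evenly spaced points on the unit sphere
        (Fibonacci-spiral placement: uniform steps in z, golden-angle
        steps in azimuth)."""
        golden_angle = math.pi * (3.0 - math.sqrt(5.0))
        points = []
        for i in range(n):
            z = 1.0 - 2.0 * (i + 0.5) / n        # uniform in [-1, 1]
            r = math.sqrt(1.0 - z * z)            # radius of the z-slice
            theta = golden_angle * i
            points.append((r * math.cos(theta), r * math.sin(theta), z))
        return points
    ```

    Each returned point is a candidate camera direction from which the 3D model would be rendered to build the training set.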

  13. Model feedback in Bayesian propensity score estimation.

    PubMed

    Zigler, Corwin M; Watts, Krista; Yeh, Robert W; Wang, Yun; Coull, Brent A; Dominici, Francesca

    2013-03-01

    Methods based on the propensity score comprise one set of valuable tools for comparative effectiveness research and for estimating causal effects more generally. These methods typically consist of two distinct stages: (1) a propensity score stage where a model is fit to predict the propensity to receive treatment (the propensity score), and (2) an outcome stage where responses are compared in treated and untreated units having similar values of the estimated propensity score. Traditional techniques conduct estimation in these two stages separately; estimates from the first stage are treated as fixed and known for use in the second stage. Bayesian methods have natural appeal in these settings because separate likelihoods for the two stages can be combined into a single joint likelihood, with estimation of the two stages carried out simultaneously. One key feature of joint estimation in this context is "feedback" between the outcome stage and the propensity score stage, meaning that quantities in a model for the outcome contribute information to posterior distributions of quantities in the model for the propensity score. We provide a rigorous assessment of Bayesian propensity score estimation to show that model feedback can produce poor estimates of causal effects absent strategies that augment propensity score adjustment with adjustment for individual covariates. We illustrate this phenomenon with a simulation study and with a comparative effectiveness investigation of carotid artery stenting versus carotid endarterectomy among 123,286 Medicare beneficiaries hospitalized for stroke in 2006 and 2007. PMID:23379793
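
    The two-stage structure (fit a propensity model first, then compare outcomes within similar propensity values) can be illustrated with a deliberately minimal sketch. Here the "propensity model" is just the empirical treatment frequency at each discrete covariate value, and the outcome stage stratifies on that estimate; this is the traditional separate-stage approach, not the Bayesian joint model whose feedback the paper analyzes.

    ```python
    from collections import defaultdict

    def two_stage_ate(data):
        """Two-stage average treatment effect estimate.

        data: list of (x, treated, outcome) with discrete covariate x.
        Stage 1: propensity P(T=1 | x) as the treatment frequency per x.
        Stage 2: treated-minus-untreated outcome difference within each
        propensity stratum, combined with stratum-size weights.
        """
        counts = defaultdict(lambda: [0, 0])          # x -> [n, n_treated]
        for x, t, _ in data:
            counts[x][0] += 1
            counts[x][1] += t
        propensity = {x: n_t / n for x, (n, n_t) in counts.items()}

        strata = defaultdict(lambda: {0: [], 1: []})  # propensity -> outcomes
        for x, t, y in data:
            strata[round(propensity[x], 3)][t].append(y)

        effects, weights = [], []
        for s in strata.values():
            if s[0] and s[1]:                          # need both arms present
                effects.append(sum(s[1]) / len(s[1]) - sum(s[0]) / len(s[0]))
                weights.append(len(s[0]) + len(s[1]))
        return sum(e * w for e, w in zip(effects, weights)) / sum(weights)
    ```

    In a joint Bayesian formulation, by contrast, the stage-2 likelihood would feed back into the posterior of the stage-1 propensity parameters, which is exactly the effect the authors show can degrade causal estimates.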

  14. Power, Optimization, Waste Estimating, Resourcing Tool

    2009-08-13

    Planning, Optimization, Waste Estimating, Resourcing tool (POWERtool) is a comprehensive relational database software tool that can be used to develop and organize a detailed project scope, plan work tasks, develop bottom-up field cost and waste estimates for facility Deactivation and Decommissioning (D&D), equipment, and environmental restoration (ER) projects, and produce resource-loaded schedules.

  15. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.

  16. Cross correlations among estimators of shape

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo S.; Stedinger, Jery R.

    2002-11-01

    The regional variability of shape parameters (such as κ for the GEV distribution) may be described by generalized least squares (GLS) regression models that allow shape parameters to be estimated from basin characteristics recognizing the sampling uncertainty in available shape estimators. Implementation of such GLS models requires estimates of the cross-site correlation of the shape parameter estimators for every pair of sites. Monte Carlo experiments provided the information needed to identify simple power approximations of the relationships between the cross correlation of estimators of skewness γ from [log] Pearson type 3 (P3) data and of the shape parameter κ of both generalized Pareto (GP) and generalized extreme value (GEV) distributions, as functions of the intersite correlation of concurrent flows.

  17. Channel estimation in DCT-based OFDM.

    PubMed

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives channel estimators for a discrete cosine transform-(DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study compressed sensing (CS) based channel estimation, which takes the sparsity of the wireless channel into account. Simulation results show that CS-based channel estimation is expected to perform better than LS; MMSE, however, can achieve optimal performance because of its prior knowledge of the channel statistics.
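
    The pilot-aided least-squares step is simple enough to sketch: at each pilot subcarrier, the channel estimate is the received symbol divided by the known transmitted pilot, and estimates are interpolated onto the data subcarriers in between. This is generic pilot-aided LS, not the DCT-specific derivation of the paper; the MMSE and CS variants additionally exploit channel statistics and sparsity.

    ```python
    def ls_channel_estimate(rx_pilots, tx_pilots):
        """Pilot-aided least-squares channel estimate: H_k = Y_k / X_k
        at each pilot position (complex baseband symbols)."""
        return [y / x for y, x in zip(rx_pilots, tx_pilots)]

    def interpolate_channel(h_pilots, spacing):
        """Linear interpolation of pilot estimates onto the subcarriers
        lying between consecutive pilots (spacing = subcarriers per pilot)."""
        out = []
        for a, b in zip(h_pilots[:-1], h_pilots[1:]):
            for i in range(spacing):
                out.append(a + (b - a) * i / spacing)
        out.append(h_pilots[-1])
        return out
    ```

    With the estimated response in hand, coherent demodulation divides each received data symbol by the interpolated channel value at its subcarrier.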

  18. Cauchy Drag Estimation For Low Earth Orbiters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Mashiku, Alinda K.

    2015-01-01

    Recent work on minimum variance estimators based on Cauchy distributions appears relevant to orbital drag estimation. Samples from Cauchy distributions, which belong to a class of heavy-tailed distributions, are characterized by long stretches of fairly small variation punctuated by excursions many times larger than could be expected from a Gaussian. Such behavior can occur when solar storms perturb the atmosphere. In this context, the present work describes an embedding of the scalar Idan-Speyer Cauchy Estimator, used to estimate density corrections, within an Extended Kalman Filter that estimates the state of a low Earth orbiter. In contrast to the baseline Kalman approach, the larger formal errors of the present approach fully and conservatively bound the predictive error distribution, even in the face of unanticipated density disturbances of hundreds of percent.
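
    The heavy-tail behavior is easy to quantify with the Cauchy inverse CDF (a standalone illustration of the distribution, not the Idan-Speyer estimator itself): the 99.9th percentile of a standard Cauchy is about 318, versus about 3.1 for a standard Gaussian, which is exactly the kind of occasional enormous excursion described above.

    ```python
    import math

    def cauchy_quantile(p, loc=0.0, scale=1.0):
        """Inverse CDF of the Cauchy distribution.

        Because tan() blows up near pi/2, the upper quantiles are vastly
        larger than their Gaussian counterparts: q(0.999) ~ 318 vs ~ 3.1.
        """
        return loc + scale * math.tan(math.pi * (p - 0.5))
    ```

    Feeding uniform random numbers through this quantile function is also the standard way to draw Cauchy samples for simulation.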

  19. Quantum statistical inference for density estimation

    SciTech Connect

    Silver, R.N.; Martz, H.F.; Wallstrom, T.

    1993-11-01

    A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.

  20. Estimation for large non-centrality parameters

    NASA Astrophysics Data System (ADS)

    Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with almost scalar matrices, examining estimability after validation of these models. In the validation of these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large we obtain good estimators for λ and α, so there is estimability. Thus, we are interested in obtaining a lower bound for the non-centrality parameter. In this context we use, for the statistical inference, inducing pivot variables (see Ferreira et al. 2013) and asymptotic linearity, introduced by Mexia & Oliveira 2011, to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly statistically significant.

  1. A Developed ESPRIT Algorithm for DOA Estimation

    NASA Astrophysics Data System (ADS)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for direction of arrival estimation (DOAE) for a target, which aims to increase estimation accuracy and decrease computational cost, has been developed. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation; DOAE accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the standard ESPRIT methods, enhancing estimator performance.
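
    The rotational-invariance idea at the heart of ESPRIT reduces, for a single narrowband source on a uniform linear array, to measuring the constant phase ramp between the two shifted subarrays. The toy sketch below shows that one-source special case only; it is not the TS-ESPRIT algorithm of the paper, and the multi-source case requires a subspace (eigen-)decomposition.

    ```python
    import cmath
    import math

    def single_source_doa(snapshot, d_over_lambda=0.5):
        """One-source, one-snapshot rotational-invariance DOA estimate.

        snapshot: complex array response of a uniform linear array.
        The subarray of elements 0..M-2 and the subarray 1..M-1 differ by
        a constant phase exp(j*2*pi*(d/lambda)*sin(theta)); a least-squares
        estimate of that phase yields the arrival angle in degrees.
        """
        acc = sum(a.conjugate() * b
                  for a, b in zip(snapshot[:-1], snapshot[1:]))
        phase = cmath.phase(acc)
        return math.degrees(math.asin(phase / (2.0 * math.pi * d_over_lambda)))
    ```

    With half-wavelength spacing (d_over_lambda = 0.5) the phase-to-angle map is unambiguous over ±90°.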

  2. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
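
    The LMS building block underlying these link-selection schemes can be sketched in a few lines. This is a generic scalar LMS system identifier on assumed toy signals, not the distributed/diffusion variants or the link-selection logic analyzed in the paper.

    ```python
    def lms_identify(inputs, desired, mu=0.05, order=2):
        """Least-mean-squares adaptive estimation of an FIR model.

        At each step: predict with the current weights, then nudge the
        weights along the instantaneous error gradient (w += mu * e * x).
        Returns the final weight vector (newest-sample tap first).
        """
        w = [0.0] * order
        for n in range(order, len(inputs)):
            x = [inputs[n - 1 - k] for k in range(order)]  # recent samples
            err = desired[n] - sum(wk * xk for wk, xk in zip(w, x))
            w = [wk + mu * err * xk for wk, xk in zip(w, x)]
        return w
    ```

    In the distributed setting each node runs such an update on its own data and combines weight estimates with selected neighbors, which is where the link-selection policies of the paper come in.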

  3. Contractor-style tunnel cost estimating

    SciTech Connect

    Scapuzzi, D. )

    1990-06-01

    Keeping pace with recent advances in construction technology is a challenge for the cost estimating engineer. Using an estimating style that simulates the actual construction process and is similar in style to the contractor's estimate will give a realistic view of underground construction costs. For a contractor-style estimate, a mining method is chosen; labor crews, plant, and equipment are selected; and advance rates are calculated for the various phases of work, which are used to determine the length of time necessary to complete each phase. The durations are multiplied by the cost of labor and equipment per unit of time and, along with the costs for materials and supplies, combine to complete the estimate. Variations in advance rates, ground support, labor crew size, or other areas are more easily analyzed for their overall effect on the cost and schedule of a project. 14 figs.

  4. New approaches to estimation of magnetotelluric parameters

    SciTech Connect

    Egbert, G.D.

    1991-01-01

    Fully efficient robust data processing procedures were developed and tested for single station and remote reference magnetotelluric (MT) data. Substantial progress was made on development, testing, and comparison of optimal procedures for single station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combines the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1--0.0 Hz). The methods, and the results of comparisons of various single station estimators, are described in detail. Progress was made on developing methods for estimating static distortion parameters and for testing hypotheses about the underlying dimensionality of the geological section.

  5. Ethics in age estimation of unaccompanied minors.

    PubMed

    Thevissen, P W; Kvaal, S I; Willems, G

    2012-11-30

    Children absconding from countries of conflict and war are often not able to document their age. When an age is given, it is frequently untraceable or poorly documented and therefore questioned by immigration authorities. Consequently many countries perform age estimations on these children. Provision of ethical practice during the age estimation investigation of unaccompanied minors is considered from different angles: (1) The UN convention on children's rights, formulating specific rights, protection, support, healthcare and education for unaccompanied minors. (2) Since most age estimation investigations are based on medical examination, the four basic principles of biomedical ethics, namely autonomy, beneficence, non-malevolence, justice. (3) The use of medicine for non treatment purposes. (4) How age estimates with highest accuracy in age prediction can be obtained. Ethical practice in age estimation of unaccompanied minors is achieved when different but related aspects are searched, evaluated, weighted in importance and subsequently combined. However this is not always feasible and unanswered questions remain.

  6. Estimating monthly streamflow values by cokriging

    USGS Publications Warehouse

    Solow, A.R.; Gorelick, S.M.

    1986-01-01

    Cokriging is applied to estimation of missing monthly streamflow values in three records from gaging stations in west central Virginia. Missing values are estimated from optimal consideration of the pattern of auto- and cross-correlation among standardized residual log-flow records. Investigation of the sensitivity of estimation to data configuration showed that when observations are available within two months of a missing value, estimation is improved by accounting for correlation. Concurrent and lag-one observations tend to screen the influence of other available observations. Three models of covariance structure in residual log-flow records are compared using cross-validation. Models differ in how much monthly variation they allow in covariance. Precision of estimation, reflected in mean squared error (MSE), proved to be insensitive to this choice. Cross-validation is suggested as a tool for choosing an inverse transformation when an initial nonlinear transformation is applied to flow values. © 1986 Plenum Publishing Corporation.

  7. Potential Pitfalls in Estimating Viral Load Heritability.

    PubMed

    Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    In HIV patients, the set-point viral load (SPVL) is the most widely used predictor of disease severity. Yet SPVL varies over several orders of magnitude between patients. The heritability of SPVL quantifies how much of the variation in SPVL is due to transmissible viral genetics. There is currently no clear consensus on the value of SPVL heritability, as multiple studies have reported apparently discrepant estimates. Here we illustrate that the discrepancies in estimates are most likely due to differences in the estimation methods, rather than the study populations. Importantly, phylogenetic estimates run the risk of being strongly confounded by unrealistic model assumptions. Care must be taken when interpreting and comparing the different estimates to each other.

  8. Cost-estimating relationships for space programs

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.

  9. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
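
    The core of gradient-based motion estimation is easy to sketch in one spatial dimension: under the brightness-constancy constraint Ix·v + It ≈ 0, a least-squares velocity follows from sums of products of image derivatives, the same quantities that fill the structure tensor. This is a generic 1D illustration, not the authors' multidimensional parameter-estimation framework.

    ```python
    def estimate_velocity(frame0, frame1, dx=1.0):
        """Gradient-based velocity estimate between two 1D frames.

        Solves the least-squares optical-flow constraint over all interior
        pixels: v = -sum(Ix * It) / sum(Ix * Ix), with Ix from central
        differences on frame0 and It = frame1 - frame0 (one frame apart).
        Returns velocity in spatial units per frame.
        """
        num = den = 0.0
        for i in range(1, len(frame0) - 1):
            ix = (frame0[i + 1] - frame0[i - 1]) / (2.0 * dx)  # spatial gradient
            it = frame1[i] - frame0[i]                          # temporal gradient
            num += ix * it
            den += ix * ix
        return -num / den
    ```

    Extending the same sums to two spatial dimensions and stacking them into a matrix gives the structure tensor, whose eigenstructure the paper generalizes to recover diffusion and decay parameters as well.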

  10. Nonlinear circuits for naturalistic visual motion estimation.

    PubMed

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator.

  11. Time estimation deficits in childhood mathematics difficulties.

    PubMed

    Hurks, Petra P M; van Loosbroek, Erik

    2014-01-01

    Time perception has not been comprehensively examined in mathematics difficulties (MD). Therefore, verbal time estimation, production, and reproduction were tested in 13 individuals with MD and 16 healthy controls, matched for age, sex, and intellectual skills. Individuals with MD performed comparably to controls in time reproduction, but showed a tendency to be less accurate on tasks of verbal time estimation and time production. More specifically, these individuals overestimated the duration of a time interval in the verbal time estimation task and showed underproduction when required to produce a time sample. All previous significant comparisons remained significant after controlling for the effects of interval duration, working memory, attention allocation, and quantity estimation. These findings lead us to suggest that time estimation, and more specifically the "internal clock," is abnormally fast in individuals with MD. Results are discussed in terms of Meck and Church's model of temporal processing and Dehaene's triple code model for number processing.

  12. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  13. Hydrogen Station Cost Estimates: Comparing Hydrogen Station Cost Calculator Results with other Recent Estimates

    SciTech Connect

    Melaina, M.; Penev, M.

    2013-09-01

    This report compares hydrogen station cost estimates conveyed by expert stakeholders through the Hydrogen Station Cost Calculation (HSCC) to a select number of other cost estimates. These other cost estimates include projections based upon cost models and costs associated with recently funded stations.

  14. Linear Prediction of a True Score from a Direct Estimate and Several Derived Estimates

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Qian, Jiahe

    2007-01-01

    Statistical prediction problems often involve both a direct estimate of a true score and covariates of this true score. Given the criterion of mean squared error, this study determines the best linear predictor of the true score given the direct estimate and the covariates. Results yield an extension of Kelley's formula for estimation of the true…
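
    For context, Kelley's classical formula (the result this study extends) regresses an observed score toward the group mean in proportion to the test's reliability ρ:

    ```latex
    \hat{T} = \rho\, x + (1 - \rho)\, \mu_x
    ```

    where x is the direct (observed) estimate and μx is the population mean. The extension described above produces the best linear predictor when covariates of the true score are available alongside x.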

  15. Estimating fractal dimension of medical images

    NASA Astrophysics Data System (ADS)

    Penn, Alan I.; Loew, Murray H.

    1996-04-01

    Box counting (BC) is widely used to estimate the fractal dimension (fd) of medical images on the basis of a finite set of pixel data. The fd is then used as a feature to discriminate between healthy and unhealthy conditions. We show that BC is ineffective when used on small data sets and give examples of published studies in which researchers have obtained contradictory and flawed results by using BC to estimate the fd of data-limited medical images. We present a new method for estimating fd of data-limited medical images. In the new method, fractal interpolation functions (FIFs) are used to generate self-affine models of the underlying image; each model, upon discretization, approximates the original data points. The fd of each FIF is analytically evaluated. The mean of the fds of the FIFs is the estimate of the fd of the original data. The standard deviation of the fds of the FIFs is a confidence measure of the estimate. The goodness-of-fit of the discretized models to the original data is a measure of self-affinity of the original data. In a test case, the new method generated a stable estimate of fd of a rib edge in a standard chest x-ray; box counting failed to generate a meaningful estimate of the same image.
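
    For reference, the standard box-counting estimator that the paper argues breaks down on small data sets can be sketched as follows: count occupied boxes at several scales and fit the slope of log N(ε) versus log(1/ε). This is a generic 2D point-set version, not the fractal-interpolation-function method proposed by the authors.

    ```python
    import math

    def box_count_dimension(points, scales):
        """Box-counting estimate of fractal dimension for 2D points.

        For each box size eps, count the distinct grid cells occupied by
        the points, then least-squares fit the slope of log N(eps)
        against log(1/eps); that slope is the dimension estimate.
        """
        logs = []
        for eps in scales:
            boxes = {(int(x // eps), int(y // eps)) for x, y in points}
            logs.append((math.log(1.0 / eps), math.log(len(boxes))))
        n = len(logs)
        mean_u = sum(u for u, _ in logs) / n
        mean_v = sum(v for _, v in logs) / n
        num = sum((u - mean_u) * (v - mean_v) for u, v in logs)
        den = sum((u - mean_u) ** 2 for u, _ in logs)
        return num / den
    ```

    With only a few hundred pixels, the usable range of scales is narrow and the fitted slope becomes unstable, which is the data-limited failure mode the abstract describes.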

  16. Estimated water use in Puerto Rico, 2000

    USGS Publications Warehouse

    Molina-Rivera, Wanda L.

    2005-01-01

    Water-use data were compiled for the 78 municipios of the Commonwealth of Puerto Rico for 2000. Five offstream categories were considered: public-supply water withdrawals, domestic self-supplied water use, industrial self-supplied withdrawals, crop irrigation water use, and thermoelectric power fresh water use. Two additional categories also were considered: power generation instream use and public wastewater treatment return-flows. Fresh water withdrawals for offstream use from surface- and ground-water sources in Puerto Rico were estimated at 617 million gallons per day. The largest amount of fresh water withdrawn was by public-supply water facilities and was estimated at 540 million gallons per day. Fresh surface- and ground-water withdrawals by domestic self-supplied users was estimated at 2 million gallons per day and the industrial self-supplied withdrawals were estimated at 9.5 million gallons per day. Withdrawals for crop irrigation purposes were estimated at 64 million gallons per day, or approximately 10 percent of all offstream fresh water withdrawals. Saline instream surface-water withdrawals for cooling purposes by thermoelectric power facilities was estimated at 2,191 million gallons per day, and instream fresh water withdrawals by hydroelectric facilities at 171 million gallons per day. Total discharge from public wastewater treatment facilities was estimated at 211 million gallons per day.

  17. Can extinction rates be estimated without fossils?

    PubMed

    Paradis, Emmanuel

    2004-07-01

    There is considerable interest in the possibility of using molecular phylogenies to estimate extinction rates. The present study aims at assessing the statistical performance of the birth-death model fitting approach to estimate speciation and extinction rates by comparison to the approach considering fossil data. A simulation-based approach was used. The diversification of a large number of lineages was simulated under a wide range of speciation and extinction rate values. The estimators obtained with fossils performed better than those without fossils. In the absence of fossils (e.g. with a molecular phylogeny), the speciation rate was correctly estimated in a wide range of situations; the bias of the corresponding estimator was close to zero for the largest trees. However, this estimator was substantially biased when the simulated extinction rate was high. On the other hand the estimator of extinction rate was biased in a wide range of situations. Surprisingly, this bias was lesser with medium-sized trees. Some recommendations for interpreting results from a diversification analysis are given.

  18. Comparing different classifiers for automatic age estimation.

    PubMed

    Lanitis, Andreas; Draganova, Chrisina; Christodoulou, Chris

    2004-02-01

    We describe a quantitative evaluation of the performance of different classifiers in the task of automatic age estimation. In this context, we generate a statistical model of facial appearance, which is subsequently used as the basis for obtaining a compact parametric description of face images. The aim of our work is to design classifiers that accept the model-based representation of unseen images and produce an estimate of the age of the person in the corresponding face image. For this application, we have tested different classifiers: a classifier based on the use of quadratic functions for modeling the relationship between face model parameters and age, a shortest distance classifier, and artificial neural network based classifiers. We also describe variations to the basic method where we use age-specific and/or appearance specific age estimation methods. In this context, we use age estimation classifiers for each age group and/or classifiers for different clusters of subjects within our training set. In those cases, part of the classification procedure is devoted to choosing the most appropriate classifier for the subject/age range in question, so that more accurate age estimates can be obtained. We also present comparative results concerning the performance of humans and computers in the task of age estimation. Our results indicate that machines can estimate the age of a person almost as reliably as humans.

  19. Backscatter coefficient estimation using tapers with gaps.

    PubMed

    Luchies, Adam C; Oelze, Michael L

    2015-04-01

    When using the backscatter coefficient (BSC) to estimate quantitative ultrasound parameters such as the effective scatterer diameter (ESD) and the effective acoustic concentration (EAC), it is necessary to assume that the interrogated medium contains diffuse scatterers. Structures that invalidate this assumption can affect the estimated BSC parameters in terms of increased bias and variance and decrease performance when classifying disease. In this work, a method was developed to mitigate the effects of echoes from structures that invalidate the assumption of diffuse scattering, while preserving as much signal as possible for obtaining diffuse scatterer property estimates. Backscattered signal sections that contained nondiffuse signals were identified and a windowing technique was used to provide BSC estimates for diffuse echoes only. Experiments from physical phantoms were used to evaluate the effectiveness of the proposed BSC estimation methods. Tradeoffs associated with effective mitigation of specular scatterers and bias and variance introduced into the estimates were quantified. Analysis of the results suggested that discrete prolate spheroidal (PR) tapers with gaps provided the best performance for minimizing BSC error. Specifically, the mean square error for BSC between measured and theoretical had an average value of approximately 1.0 and 0.2 when using a Hanning taper and PR taper respectively, with six gaps. The BSC error due to amplitude bias was smallest for PR (Nω = 1) tapers. The BSC error due to shape bias was smallest for PR (Nω = 4) tapers. These results suggest using different taper types for estimating ESD versus EAC.

  20. An assessment of vapour pressure estimation methods.

    PubMed

    O'Meara, Simon; Booth, Alastair Murray; Barley, Mark Howard; Topping, David; McFiggans, Gordon

    2014-09-28

    Laboratory measurements of vapour pressures for atmospherically relevant compounds were collated and used to assess the accuracy of vapour pressure estimates generated by seven estimation methods and impacts on predicted secondary organic aerosol. Of the vapour pressure estimation methods that were applicable to all the test set compounds, the Lee-Kesler [Reid et al., The Properties of Gases and Liquids, 1987] method showed the lowest mean absolute error and the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2008, 269, 117-133] method showed the lowest mean bias error (when both used normal boiling points estimated using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] method). The effect of varying vapour pressure estimation methods on secondary organic aerosol (SOA) mass loading and composition was investigated using an absorptive partitioning equilibrium model. The Myrdal and Yalkowsky [Myrdal and Yalkowsky, Ind. Eng. Chem. Res., 1997, 36, 2494-2499] vapour pressure estimation method using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] normal boiling point gave the most accurate estimation of SOA loading despite not being the most accurate for vapour pressures alone. PMID:25105180