ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
A spline-based parameter estimation technique for static models of elastic structures
NASA Technical Reports Server (NTRS)
Dutt, P.; Ta'asan, S.
1989-01-01
The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.
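The spline-based approximation of an unknown coefficient function can be illustrated with a generic least-squares B-spline fit. This is only a sketch under assumptions: the "elasticity" profile, noise level, and knot placement below are synthetic and illustrative, and SciPy's `make_lsq_spline` stands in for the paper's hyperbolic-equation-based technique.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noisy point values of an unknown coefficient function (synthetic data).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
true = 1.0 + 0.5 * np.sin(2 * np.pi * x)        # hypothetical elasticity profile
obs = true + rng.normal(0.0, 0.05, x.size)

# Least-squares cubic B-spline: clamped knot vector with 7 interior knots.
t = np.r_[[0.0] * 4, np.linspace(0.1, 0.9, 7), [1.0] * 4]
spl = make_lsq_spline(x, obs, t, k=3)

# The low-dimensional spline averages out the pointwise observation noise.
rmse = np.sqrt(np.mean((spl(x) - true) ** 2))
print(rmse)
```

The number of interior knots controls the bias-variance trade-off: fewer knots smooth more aggressively but may miss sharp spatial variation in the coefficient.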
Spline-Based Parameter Estimation Techniques for Two-Dimensional Convection and Diffusion Equations.
1986-07-01
Bolling Air Force Base. Abstract: A general approximation framework based on bicubic splines is developed for estimating temporally and spatially…
Deng, Shirong; Liu, Li; Zhao, Xingqiu
2015-09-01
This article discusses the statistical analysis of panel count data when the underlying recurrent event process and observation process may be correlated. For the recurrent event process, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates. For inference on the model parameters, a monotone spline-based least squares estimation approach is developed, and the resulting estimators are consistent and asymptotically normal. In particular, our new approach does not rely on the model specification of the observation process. The proposed inference procedure performs well through simulation studies, and it is illustrated by the analysis of bladder tumor data.
Spline-based semiparametric projected generalized estimating equation method for panel count data.
Hua, Lei; Zhang, Ying
2012-07-01
We propose to analyze panel count data using a spline-based semiparametric projected generalized estimating equation (GEE) method with the proportional mean model E[N(t)|Z] = Λ₀(t)exp(β₀ᵀZ). The natural logarithm of the baseline mean function, log Λ₀(t), is approximated by a monotone cubic B-spline function. The estimates of regression parameters and spline coefficients are obtained by projecting the GEE estimates into the feasible domain using a weighted isotonic regression (IR). The proposed method avoids assuming any parametric structure of the baseline mean function or any stochastic model for the underlying counting process. Selection of the working covariance matrix that accounts for overdispersion improves the estimation efficiency and leads to less biased variance estimations. Simulation studies are conducted using different working covariance matrices in the GEE to investigate finite sample performance of the proposed method, to compare the estimation efficiency, and to explore the performance of different variance estimates in presence of overdispersion. Finally, the proposed method is applied to a real data set from a bladder tumor clinical trial.
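The projection step described above, mapping unconstrained estimates of the spline coefficients onto a monotone set, can be sketched with a pool-adjacent-violators (PAV) isotonic regression, since a B-spline with nondecreasing coefficients is itself nondecreasing. This is an illustrative sketch, not the authors' code; the input coefficients and unit weights below are made up.

```python
import numpy as np

def pav_nondecreasing(y, w=None):
    """Pool-adjacent-violators: weighted least-squares projection of y
    onto the set of nondecreasing sequences."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block stores its weighted mean; merge blocks while order is violated.
    means, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            wt = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wt)
            weights.append(wt); counts.append(c1 + c2)
    return np.repeat(means, counts)

# Hypothetical unconstrained spline-coefficient estimates with violations.
raw = np.array([0.2, 0.5, 0.4, 0.9, 0.8, 1.3])
proj = pav_nondecreasing(raw)
print(proj)   # adjacent violators are pooled into their means
```

In the paper the projection is weighted by the GEE working covariance; here the weights are all one for clarity.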
A spline-based parameter and state estimation technique for static models of elastic surfaces
NASA Technical Reports Server (NTRS)
Banks, H. T.; Daniel, P. L.; Armstrong, E. S.
1983-01-01
Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline-based image tracking method is implemented. The method models the background and foreground using B-splines, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
On the spline-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.
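The effect of periodic boundary conditions on spline differentiation accuracy can be observed numerically. The sketch below is illustrative only: it uses SciPy's periodic cubic interpolating spline rather than the paper's wavelet construction, differentiates sin(x) at the knots, and shows the error shrinking rapidly under grid refinement.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def knot_derivative_error(m):
    """Max error of the periodic cubic-spline derivative of sin at the knots,
    on a grid of m intervals over one full period."""
    x = np.linspace(0.0, 2.0 * np.pi, m + 1)
    y = np.sin(x)
    y[-1] = y[0]          # enforce exact periodicity for the spline routine
    cs = CubicSpline(x, y, bc_type='periodic')
    return np.max(np.abs(cs(x, 1) - np.cos(x)))

coarse, fine = knot_derivative_error(16), knot_derivative_error(32)
print(coarse, fine)       # the knot-point error drops sharply when h is halved
```

Repeating the experiment with non-periodic boundary conditions (e.g. `bc_type='natural'`) shows larger errors near the interval ends, consistent with the loss of superconvergence described above.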
Spline-based distributed system identification with application to large space antennas
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K.; Armstrong, E. S.
1986-01-01
A parameter and state estimation technique for distributed models is demonstrated through the solution of a problem generic to large space antenna system identification. Assuming the position of the reflective surface of the maypole (hoop/column) antenna to be approximated by the static two-dimensional, stretched-membrane partial differential equation with variable-stiffness coefficient functions, a spline-based approximation procedure is described that estimates the shape and stiffness functions from data set observations. For given stiffness functions, the Galerkin projection with linear spline-based functions is applied to project the distributed problem onto a finite-dimensional subspace wherein algebraic equations exist for determining a static shape (state) prediction. The stiffness functions are then parameterized by cubic splines and the parameters estimated by an output error technique. Numerical results are presented for data descriptive of a 100-m-diameter maypole antenna.
Spline-based procedures for dose-finding studies with active control
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-01
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
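The target-dose idea above can be illustrated in a few lines: interpolate the observed mean responses with a cubic spline and root-find the smallest dose whose predicted response equals the active-control efficacy. The doses, mean responses, and control efficacy below are hypothetical, and this is only a sketch of the intersection-point estimate, not the authors' full procedure (which also covers bootstrap CIs and bias-minimizing designs).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

# Hypothetical mean responses at the studied doses (illustrative numbers).
doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
means = np.array([0.05, 0.20, 0.45, 0.70, 0.82])
control_efficacy = 0.60          # assumed efficacy of the active control

f = CubicSpline(doses, means)

# Smallest d* with f(d*) = control efficacy: scan the dose intervals in
# order and root-find in the first interval that brackets the level.
target = None
for a, b in zip(doses[:-1], doses[1:]):
    if (f(a) - control_efficacy) * (f(b) - control_efficacy) <= 0:
        target = brentq(lambda d: f(d) - control_efficacy, a, b)
        break
print(round(target, 1))
```

Scanning intervals in increasing dose order ensures the *smallest* crossing is returned if the fitted dose-response curve happens to cross the control level more than once.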
Spline-based procedures for dose-finding studies with active control.
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-30
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose-response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose-response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose-response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs.
NASA Astrophysics Data System (ADS)
Laurent, Florence; Renault, Edgard; Boudon, Didier; Caillier, Patrick; Daguisé, Eric; Dupuy, Christophe; Jarno, Aurélien; Lizon, Jean-Louis; Migniau, Jean-Emmanuel; Nicklas, Harald; Piqueras, Laure
2014-07-01
MUSE (Multi Unit Spectroscopic Explorer) is a second generation Very Large Telescope (VLT) integral field spectrograph developed for the European Southern Observatory (ESO). It combines a 1' x 1' field of view sampled at 0.2 arcsec for its Wide Field Mode (WFM) and a 7.5"x7.5" field of view for its Narrow Field Mode (NFM). Both modes will operate with the improved spatial resolution provided by GALACSI (Ground Atmospheric Layer Adaptive Optics for Spectroscopic Imaging), which will use the VLT deformable secondary mirror and 4 Laser Guide Stars (LGS) foreseen in 2015. MUSE operates in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH-grating and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After a successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transported, fully aligned and without any optomechanical dismounting, directly onto the VLT, where first light was achieved on 7 February 2014. This paper describes the alignment procedure of the whole MUSE instrument with respect to the Very Large Telescope (VLT). It describes how a 6-ton instrument could be moved with an accuracy better than 0.025 mm and 0.25 arcmin in order to meet the alignment requirements. The success…
MUSE optical alignment procedure
NASA Astrophysics Data System (ADS)
Laurent, Florence; Renault, Edgard; Loupias, Magali; Kosmalski, Johan; Anwand, Heiko; Bacon, Roland; Boudon, Didier; Caillier, Patrick; Daguisé, Eric; Dubois, Jean-Pierre; Dupuy, Christophe; Kelz, Andreas; Lizon, Jean-Louis; Nicklas, Harald; Parès, Laurent; Remillieux, Alban; Seifert, Walter; Valentin, Hervé; Xu, Wenli
2012-09-01
MUSE (Multi Unit Spectroscopic Explorer) is a second generation VLT integral field spectrograph (1x1 arcmin² field of view) developed for the European Southern Observatory (ESO), operating in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently assembling and testing MUSE in the Integration Hall of the Observatoire de Lyon for the Preliminary Acceptance in Europe, scheduled for 2013. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH-grating and a detector system connected to a global vacuum and cryogenic system. During 2011, all MUSE subsystems were integrated, aligned and tested independently in each institute. After validations, the systems were shipped to the P.I. institute in Lyon and were assembled in the Integration Hall. This paper describes the end-to-end optical alignment procedure of the MUSE instrument. The design strategy, mixing an optical alignment by manufacturing (plug and play approach) and a few adjustments on key components, is presented. We depict the alignment method for identifying the optical axis using several references located in pupil and image planes. All tools required to perform the global alignment between each subsystem are described. The success of this alignment approach is demonstrated by the good MUSE image quality results. MUSE commissioning at the VLT (Very Large Telescope) is planned for 2013.
Multiquadric Spline-Based Interactive Segmentation of Vascular Networks
Meena, Sachin; Surya Prasath, V. B.; Kassim, Yasmin M.; Maude, Richard J.; Glinskii, Olga V.; Glinsky, Vladislav V.; Huxley, Virginia H.; Palaniappan, Kannappan
2016-01-01
Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation, using as features the color values and locations of the seed points. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches. PMID:28227856
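The seed-point idea above can be sketched with a plain multiquadric radial-basis-function interpolant fit to the labels: the interpolant matches +1 at vessel seeds and -1 at background seeds, and pixels are classified by the sign of the interpolated value. The feature vectors, labels, and shape parameter below are made up; this is a generic RBF sketch, not the paper's classifier.

```python
import numpy as np

def multiquadric_classifier(seeds, labels, eps=1.0):
    """Fit a multiquadric RBF interpolant f with f(seed_i) = label_i;
    classification of a new point x is sign(f(x))."""
    seeds = np.asarray(seeds, dtype=float)
    d = np.linalg.norm(seeds[:, None, :] - seeds[None, :, :], axis=-1)
    K = np.sqrt(1.0 + (eps * d) ** 2)            # multiquadric kernel matrix
    w = np.linalg.solve(K, np.asarray(labels, dtype=float))

    def predict(pts):
        pts = np.asarray(pts, dtype=float)
        d = np.linalg.norm(pts[:, None, :] - seeds[None, :, :], axis=-1)
        return np.sign(np.sqrt(1.0 + (eps * d) ** 2) @ w)
    return predict

# Toy seeds with features (x, y, intensity); bright pixels labeled vessel (+1).
seeds = [[0, 0, 0.9], [1, 0, 0.8], [0, 1, 0.1], [1, 1, 0.2]]
predict = multiquadric_classifier(seeds, [1, 1, -1, -1])
print(predict(seeds))    # an interpolant recovers the seed labels themselves
```

Micchelli's theorem guarantees the multiquadric interpolation matrix is nonsingular for distinct seed points, so the linear solve is always well defined.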
Multiquadric Spline-Based Interactive Segmentation of Vascular Networks.
Meena, Sachin; Surya Prasath, V B; Kassim, Yasmin M; Maude, Richard J; Glinskii, Olga V; Glinsky, Vladislav V; Huxley, Virginia H; Palaniappan, Kannappan
2016-08-01
Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation, using as features the color values and locations of the seed points. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches.
Fitting Cox Models with Doubly Censored Data Using Spline-Based Sieve Marginal Likelihood.
Li, Zhiguo; Owzar, Kouros
2016-06-01
In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual nonparametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time.
NASA Technical Reports Server (NTRS)
Nishimura, T.; Hayashi, T.
1991-01-01
The MUSES-A spacecraft mission objectives are to study the effect of a double lunar swingby technique, lunar orbital insertion, obtain experience using optical navigation equipment, measure mass and momentum of micrometeoroids by using a particle dust counter, and to support a packet telemetry and Reed-Solomon coding experiment by using a newly developed fault tolerant onboard computer. A flight profile is given, and information is presented in tabular form on the following topics: Deep Space Network support, frequency assignments, telemetry, command, and tracking support responsibility.
[Application of spline-based Cox regression on analyzing data from follow-up studies].
Dong, Ying; Yu, Jin-ming; Hu, Da-yi
2012-09-01
Using R, this study applied spline-based Cox regression to analyze data from follow-up studies in which the two basic assumptions of Cox proportional hazards regression were not satisfied. Results showed that most of the continuous covariates contributed nonlinearly to mortality risk, while the effects of three covariates were time-dependent. After adjusting for multiple covariates in the spline-based Cox regression, a 0.1 decrease in the ankle brachial index (ABI) was associated with a hazard ratio (HR) for all-cause death of 1.071. The spline-based Cox regression method can be applied to analyze data from follow-up studies when the assumptions of Cox proportional hazards regression are violated.
A Musing on Schuller's "Musings"
ERIC Educational Resources Information Center
Asia, Daniel
2013-01-01
For many years Gunther Schuller was at the center of the classical music world, as a player, composer, conductor, writer, record producer, polemicist and publisher for new music and jazz, educator, and president of New England Conservatory. His book, entitled, "Musings: The Musical Worlds of Gunther Schuller: A Collection of His…
MPDAF: MUSE Python Data Analysis Framework
NASA Astrophysics Data System (ADS)
Bacon, Roland; Piqueras, Laure; Conseil, Simon; Richard, Johan; Shepherd, Martin
2016-11-01
MPDAF, the MUSE Python Data Analysis Framework, provides tools to work with MUSE-specific data (for example, raw data and pixel tables), and with more general data such as spectra, images, and data cubes. Originally written to work with MUSE data, it can also be used for other data, such as that from the Hubble Space Telescope. MPDAF also provides MUSELET, a SExtractor-based tool to detect emission lines in a data cube, and a format to gather all the information on a source in one FITS file. MPDAF was developed and is maintained by CRAL (Centre de Recherche Astrophysique de Lyon).
The MUSE instrument detector system
NASA Astrophysics Data System (ADS)
Reiss, Roland; Deiries, Sebastian; Lizon, Jean-Louis; Rupprecht, Gero
2012-09-01
The MUSE (Multi Unit Spectroscopic Explorer) instrument (see Bacon et al., this conference) for ESO's Very Large Telescope VLT employs 24 integral field units (spectrographs). Each of these is equipped with its own cryogenically cooled CCD head. The heads are individually cooled by continuous flow cryostats. The detectors used are deep depletion e2v CCD231-84 with 4096x4112 active 15 μm pixels. The MUSE Instrument Detector System is now in the final integration and test phase on the instrument. This paper gives an overview of the architecture and performance of the complex detector system including ESO's New General detector Controllers (NGC) for the 24 science detectors, the detector head electronics and the data acquisition system with Linux Local Control Units. NGC is sub-divided into 4 Detector Front End units each operating 6 CCDs. All CCDs are simultaneously read out through 4 ports to achieve short readout times at low noise levels. All science grade CCDs were thoroughly characterized on ESO's optical detectors testbench facility and the test results processed and documented in a semi-automated, reproducible way. We present the test methodology and the results that fully confirm the feasibility of these detectors for their use in this challenging instrument.
MUSES-A double lunar swingby mission
NASA Astrophysics Data System (ADS)
Uesugi, Kuninori; Hayashi, Tomonao; Matsuo, Hiroki
MUSES-A is Japan's first double lunar swingby mission, conducted by ISAS (Institute of Space and Astronautical Science). The main objective of the MUSES-A mission is the verification of the technology and techniques that are indispensable for planetary or lunar missions, such as swingby maneuvers, orbiting around a celestial body, navigation, attitude and orbit control, telecommunication at X-band frequency, and the related ground control and operation hardware and software. The MUSES-A spacecraft has a cylindrical shape, and its mass is 194 kg, including a 12 kg tiny lunar orbiter installed on top of the spacecraft. The MUSES-A program started in 1985, and the launch is planned for early 1990.
PSF reconstruction for MUSE in wide field mode
NASA Astrophysics Data System (ADS)
Villecroze, R.; Fusco, Thierry; Bacon, Roland; Madec, Pierre-Yves
2012-07-01
The resolution of ground-based telescopes is dramatically limited by atmospheric turbulence. Adaptive optics (AO) is a real-time opto-mechanical approach that corrects for the turbulence and allows astronomical telescopes and their associated instrumentation to reach the diffraction limit. Nevertheless, the AO correction is never perfect, especially over a large Field of View (FoV). Hence, a posteriori image processing significantly improves the final estimation of astrophysical data. Such techniques require an accurate knowledge of the system response at any position in the FoV. The purpose of this work is then the estimation of the AO response in the particular case of the MUSE [1]/GALACSI [2] instrument (a 3D multi-object spectrograph combined with a laser-assisted wide-field AO system, to be installed at the VLT in 2013). Using telemetry data coming from both AO laser and natural guide stars, a Point Spread Function (PSF) is derived at any location of the FoV and for every wavelength of the MUSE spectrograph. This document presents the preliminary design of the MUSE WFM PSF reconstruction process. The various hypotheses and approximations are detailed and justified. A first description of the overall process is proposed. Some alternative strategies to improve the performance (in terms of computation time and storage) are described and have been implemented. Finally, after a validation of the proposed algorithm using end-to-end models, a performance analysis is conducted (with the help of a full end-to-end model). This performance analysis will help us to populate an exhaustive error budget table.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation.
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
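The interpolation-bias effect discussed above can be reproduced with a generic prefiltered cubic B-spline interpolator (SciPy's, not the authors' optimized recursive filter): at fractional (sub-sample) positions, where the positional error peaks, the cubic B-spline tracks a band-limited signal far better than linear interpolation. The 1-D signal below is synthetic.

```python
import numpy as np
from scipy import ndimage

# Band-limited test signal sampled on an integer grid.
x = np.arange(64, dtype=float)
signal = np.sin(2 * np.pi * x / 16.0)

# Query at half-sample (fractional) positions, where interpolation bias peaks.
xq = np.arange(4.5, 59.5)
exact = np.sin(2 * np.pi * xq / 16.0)

linear = ndimage.map_coordinates(signal, [xq], order=1)
cubic = ndimage.map_coordinates(signal, [xq], order=3)  # B-spline, prefiltered

err_lin = np.max(np.abs(linear - exact))
err_cub = np.max(np.abs(cubic - exact))
print(err_lin, err_cub)   # the cubic B-spline error is far smaller
```

The `order=3` path applies the B-spline prefilter (a recursive filter) before evaluation, which is what makes the cubic result interpolating rather than merely smoothing; the same mechanism extends to 3-D volumes via `map_coordinates` on a data cube.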
Spline-based deforming ellipsoids for interactive 3D bioimage segmentation.
Delgado-Gonzalo, Ricard; Chenouard, Nicolas; Unser, Michael
2013-10-01
We present a new fast active-contour model (a.k.a. snake) for image segmentation in 3D microscopy. We introduce a parametric design that relies on exponential B-spline bases and allows us to build snakes that are able to reproduce ellipsoids. We design our bases to have the shortest-possible support, subject to some constraints, so that computational efficiency is maximized. The proposed 3D snake can approximate blob-like objects with good accuracy and can perfectly reproduce spheres and ellipsoids, irrespective of their position and orientation. The optimization process is remarkably fast due to the use of Gauss' theorem within our energy computation scheme. Our technique yields successful segmentation results, even for challenging data where object contours are not well defined. This is due to our parametric approach, which allows one to favor prior shapes. In addition, this paper provides software that gives full control over the snakes via an intuitive manipulation of a few control points.
Beam particle tracking for MUSE
NASA Astrophysics Data System (ADS)
Liyanage, Anusha; MUSE Collaboration
2017-01-01
The proton radius puzzle is the 7σ disagreement between the proton radius extracted from the measured muonic hydrogen Lamb shift and the proton radius extracted from the regular hydrogen Lamb shift and elastic ep scattering form factor data. So far there is no generally accepted resolution to the puzzle. The explanations for the discrepancy include new degrees of freedom beyond the Standard Model. The MUon Scattering Experiment (MUSE) will simultaneously measure ep and μp scattering at the Paul Scherrer Institute, using the πM1 beam line at 100-250 MeV/c to cover a four-momentum transfer range of Q² = 0.002-0.07 (GeV/c)². Due to the large divergence of the secondary muon beam, beam particle trajectories are needed for every event. They are measured by a Gas Electron Multiplier (GEM) tracking telescope consisting of three 10x10 cm² triple-GEM chambers. Fast segmented scintillator paddles provide precise timing information. The GEM detectors, their performance in test beam times, and plans and milestones will be discussed. This work has been supported by DOE DE-SC0012589 and NSF HRD-1649909.
Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa
2013-01-01
Purpose: To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods: AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results: All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions: Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
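The contrast-to-noise ratio used for the objective comparison above can be computed as in the sketch below. The formula shown (absolute mean difference over the background standard deviation) is one common CNR definition and may differ in detail from the paper's; the image and masks are synthetic.

```python
import numpy as np

def contrast_to_noise(image, vessel_mask, background_mask):
    """CNR = |mean(vessel) - mean(background)| / std(background).
    One common definition; the paper's exact formula may differ."""
    v = image[vessel_mask]
    b = image[background_mask]
    return abs(v.mean() - b.mean()) / b.std()

# Synthetic test image: noisy background with a brighter "vessel" patch.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (32, 32))
img[10:20, 10:20] += 0.5
vessel = np.zeros((32, 32), bool)
vessel[10:20, 10:20] = True

cnr = contrast_to_noise(img, vessel, ~vessel)
print(cnr)
```

Registration improves CNR in this setting because averaging well-aligned frames suppresses the background noise term in the denominator while preserving the vessel contrast in the numerator.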
Update on the MUSE Proton Radius Measurement
NASA Astrophysics Data System (ADS)
Cline, Ethan; MUSE Collaboration
2016-03-01
The results of the December 2015 test beam run for the MUSE experiment are presented, and the current status of MUSE is discussed. During this test run, a study of 2 mm thick scintillators coupled to SiPMs was performed; the results are the focus of this talk. SiPMs from two different companies, AdvanSiD and Hamamatsu, were tested, and it was found that the timing resolution is between 89 ps and 110 ps, depending on the SiPM model, bar length, and momentum. The bars have an operational efficiency of at least 99%.
The Practice of Sharing a Historical Muse
ERIC Educational Resources Information Center
Henderson, Bob
2012-01-01
Sharing an imaginative energy for the storied landscape is one kind of pedagogical passion. The author had taken on the challenge of offering this particular passion to his fellow travellers. With students, the practice of peppering a trip with a historical muse involves focussed readings, in the moment stories, planned ceremonies and rituals and,…
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories, all with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
The MUon Scattering Experiment (MUSE) at PSI
NASA Astrophysics Data System (ADS)
Kohl, Michael; MUSE Collaboration
2016-09-01
The proton is not an elementary particle but has a substructure governed by the interaction of quarks and gluons. The size of the proton is manifest in the spatial distributions of the electric charge and magnetization, which determine the response to electromagnetic interaction. Recently, contradictory measurements of the proton charge radius between muonic hydrogen and electronic probes have constituted the proton radius puzzle, which has been challenging our basic understanding of the proton. The MUon Scattering Experiment (MUSE) in preparation at the Paul-Scherrer Institute (PSI) has the potential to resolve the puzzle by measuring the proton charge radius with electron and muon scattering simultaneously and with high precision, including any possible difference between the two, and with both beam charges. The status of the MUSE experiment will be reported. Supported by NSF and DOE.
Characterizing the environments of supernovae with MUSE
NASA Astrophysics Data System (ADS)
Galbany, L.; Anderson, J. P.; Rosales-Ortega, F. F.; Kuncarayakti, H.; Krühler, T.; Sánchez, S. F.; Falcón-Barroso, J.; Pérez, E.; Maureira, J. C.; Hamuy, M.; González-Gaitán, S.; Förster, F.; Moral, V.
2016-02-01
We present a statistical analysis of the environments of 11 supernovae (SNe) which occurred in six nearby galaxies (z ≲ 0.016). All galaxies were observed with MUSE, the high spatial resolution integral-field spectrograph mounted to the 8 m VLT UT4. These data enable us to map the full spatial extent of host galaxies up to ˜3 effective radii. In this way, not only can one characterize the specific host environment of each SN, one can compare their properties with stellar populations within the full range of other environments within the host. We present a method that consists of selecting all H II regions found within host galaxies from 2D extinction-corrected Hα emission maps. These regions are then characterized in terms of their Hα equivalent widths, star formation rates and oxygen abundances. Identifying H II regions spatially coincident with SN explosion sites, we are thus able to determine where within the distributions of host galaxy e.g. metallicities and ages each SN is found, thus providing new constraints on SN progenitor properties. This initial pilot study using MUSE opens the way for a revolution in SN environment studies where we are now able to study multiple environment SN progenitor dependencies using a single instrument and single pointing.
Porting Big Data technology across domains. WISE for MUSE
NASA Astrophysics Data System (ADS)
Vriend, Willem-Jan
2015-12-01
Due to the nature of MUSE data, each data-cube obtained as part of the GTO program is used by most of the consortium institutes which are spread across Europe. Since the effort required in reducing the data is significant, and to ensure uniformity in analysis, it is desirable to have a data management system that integrates data reduction, provenance tracking, quality control and data analysis. Such a system should support the distribution of storage and processing over the consortium institutes. The MUSE-WISE system incorporates these aspects. It is built on the Astro-WISE system, originally designed to handle OmegaCAM imaging data, which has been extended to support 3D spectroscopic data. MUSE-WISE is now being used to process MUSE GTO data. It currently stores 95 TB consisting of 48k raw exposures and processed data used by 79 users spread over 7 nodes in Europe.
Muse Cells Derived from Dermal Tissues Can Differentiate into Melanocytes.
Tian, Ting; Zhang, Ru-Zhi; Yang, Yu-Hua; Liu, Qi; Li, Di; Pan, Xiao-Ru
2017-02-07
The objective of the authors has been to obtain multilineage-differentiating stress-enduring cells (Muse cells) from primary cultures of dermal fibroblasts, identify their pluripotency, and detect their ability to differentiate into melanocytes. The distribution of SSEA-3-positive cells in human scalp skin was assessed by immunohistochemistry, and the distribution of Oct4, Sox2, Nanog, and SSEA-3-positive cells was determined by immunofluorescence staining. The expression levels of Sox2, Oct4, hKlf4, and Nanog mRNAs and proteins in Muse cells were determined by reverse transcription polymerase chain reaction (RT-PCR) analyses and Western blots, respectively. These Muse cells differentiated into melanocytes in differentiation medium. The SSEA-3-positive cells were scattered in the basement membrane zone and the dermis, with comparatively more in the sebaceous glands, vascular and sweat glands, as well as the outer root sheath of hair follicles, the dermal papillae, and the hair bulbs. Muse cells, which have the ability to self-renew, were obtained from scalp dermal fibroblasts by flow cytometry sorting with an anti-SSEA-3 antibody. The results of RT-PCR, Western blot, and immunofluorescence staining showed that the expression levels of Oct4, Nanog, Sox2, and Klf4 mRNAs and proteins in Muse cells were significantly different from their parental dermal fibroblasts. Muse cells differentiated into melanocytes when cultured in melanocyte differentiation medium, and the Muse cell-derived melanocytes expressed the melanocyte-specific marker HMB45. Muse cells could be obtained by flow cytometry from primary cultures of scalp dermal fibroblasts, which possessed the ability of pluripotency and self-renewal, and could differentiate into melanocytes in vitro.
Cameron, Andrew; Lui, Dorothy; Boroomand, Ameneh; Glaister, Jeffrey; Wong, Alexander; Bizheva, Kostadinka
2013-01-01
Optical coherence tomography (OCT) allows for non-invasive 3D visualization of biological tissue at cellular level resolution. Often hindered by speckle noise, the visualization of important biological tissue details in OCT that can aid disease diagnosis can be improved by speckle noise compensation. A challenge with handling speckle noise is its inherent non-stationary nature, where the underlying noise characteristics vary with the spatial location. In this study, an innovative speckle noise compensation method is presented for handling the non-stationary traits of speckle noise in OCT imagery. The proposed approach centers on a non-stationary spline-based speckle noise modeling strategy to characterize the speckle noise. The novel method was applied to ultra high-resolution OCT (UHROCT) images of the human retina and corneo-scleral limbus acquired in-vivo that vary in tissue structure and optical properties. Test results showed improved performance of the proposed novel algorithm compared to a number of previously published speckle noise compensation approaches in terms of higher signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and better overall visual assessment.
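The core idea, a noise level that varies with spatial location and is modeled by fitting a spline through local noise estimates, can be sketched in a minimal 1-D (row-wise) form. This is an illustrative sketch under assumed names and a simplified noise model, not the authors' estimator:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spatial_noise_profile(img, patch=16, smooth=0):
    """Fit a spline model of a noise level that varies with row position.

    Sketch only: estimate the noise standard deviation in horizontal
    bands via the median absolute deviation (MAD), then fit a spline
    through those local estimates so the noise model varies smoothly
    with spatial location. smooth=0 interpolates; larger values
    smooth the profile.
    """
    centers, sigmas = [], []
    for r in range(0, img.shape[0] - patch + 1, patch):
        band = img[r:r + patch]
        mad = np.median(np.abs(band - np.median(band)))
        centers.append(r + patch / 2.0)
        sigmas.append(1.4826 * mad)  # MAD -> std for Gaussian-like noise
    return UnivariateSpline(centers, sigmas, s=smooth)

# Hypothetical OCT-like image whose noise level grows with depth (row index):
rng = np.random.default_rng(0)
depth = np.arange(256)[:, None]
img = rng.normal(0.0, 0.5 + depth / 256.0, size=(256, 128))
profile = spatial_noise_profile(img)  # callable: row -> estimated noise std
```

The spline profile can then drive any location-dependent denoiser (e.g., scaling a threshold or a Wiener gain by the local noise estimate).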
Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki
2015-09-04
Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate the ambulatory ability of elderly individuals in many clinical institutions and local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests such as the TUG. The system tracks both legs and measures their trajectories. However, the legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association that considers the gait phase and a Catmull-Rom spline-based interpolation during occlusion are proposed. From experimental results with young people, we confirm that the proposed methods can reduce the chance of false tracking. In addition, we verify the measurement accuracy of the leg trajectories against a three-dimensional motion analysis system (VICON).
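Catmull-Rom interpolation of the kind mentioned above is easy to sketch: the segment between two tracked positions p1 and p2 is a cubic whose tangents come from the neighbouring samples p0 and p3, so it passes exactly through the known positions on either side of an occlusion. A hypothetical sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom segment between p1 and p2 at t in [0, 1].

    p0..p3 are consecutive 2-D positions (arrays of shape (2,)).
    The curve passes through p1 (t=0) and p2 (t=1), with tangents set
    by the neighbouring points -- useful for filling in leg positions
    while one leg is occluded from the range sensor.
    """
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Hypothetical tracked leg positions (metres); interpolate the midpoint
# of the occluded gap between the 2nd and 3rd samples.
pts = [np.array(p) for p in [(0.0, 0.0), (0.1, 0.05), (0.2, 0.15), (0.3, 0.3)]]
mid = catmull_rom(*pts, t=0.5)
```

Because the curve interpolates p1 and p2 exactly, the reconstructed trajectory rejoins the measured track seamlessly when the leg reappears.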
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
NASA Astrophysics Data System (ADS)
Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad
2017-04-01
In this work, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as the final result. Numerical simulations verify that the proposed method has smaller algorithm errors than the two other existing methods used for comparison; in particular, it performs better at the boundaries. The noise influence is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
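As a rough illustration of the integration idea (a 1-D sketch under assumed names, not the paper's 2-D least-squares implementation), one can fit a cubic spline to slope samples and integrate the piecewise polynomials analytically, e.g. with SciPy:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 1-D analogue: fit a spline to measured slope samples, then integrate
# the piecewise polynomial analytically to recover the height profile.
x = np.linspace(0.0, 1.0, 41)
true_height = np.sin(2 * np.pi * x)            # ground truth for checking
slope = 2 * np.pi * np.cos(2 * np.pi * x)      # "measured" slope data

spline = CubicSpline(x, slope)                 # piecewise-polynomial fit
height = spline.antiderivative()(x)            # analytic integration
height += true_height[0] - height[0]           # fix the free integration constant

err = np.max(np.abs(height - true_height))
```

In 2-D, the x- and y-slope splines yield one linear equation per lateral spacing, and the height map is obtained from all of them at once by a linear least-squares solve rather than by direct accumulation.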
Muse Gas Flow and Wind (MEGAFLOW). I. First MUSE Results on Background Quasars
NASA Astrophysics Data System (ADS)
Schroetter, I.; Bouché, N.; Wendt, M.; Contini, T.; Finley, H.; Pelló, R.; Bacon, R.; Cantalupo, S.; Marino, R. A.; Richard, J.; Lilly, S. J.; Schaye, J.; Soto, K.; Steinmetz, M.; Straka, L. A.; Wisotzki, L.
2016-12-01
The physical properties of galactic winds are one of the keys to understanding galaxy formation and evolution. These properties can be constrained thanks to background quasar lines of sight (LOS) passing near star-forming galaxies (SFGs). We present the first results of the MusE GAs FLOw and Wind survey obtained from two quasar fields, which have eight Mg ii absorbers of which three have rest equivalent widths greater than 0.8 Å. With the new Multi Unit Spectroscopic Explorer (MUSE) spectrograph on the Very Large Telescope (VLT), we detect six (75%) Mg ii host galaxy candidates within a radius of 30″ from the quasar LOS. Out of these six galaxy-quasar pairs, from geometrical arguments, one is likely probing galactic outflows, two are classified as “ambiguous,” two are likely probing extended gaseous disks, and one pair seems to be a merger. We focus on the wind pair and constrain the outflow using a high-resolution quasar spectrum from the Ultraviolet and Visual Echelle Spectrograph. Assuming the metal absorption to be due to gas flowing out of the detected galaxy through a cone along the minor axis, we find outflow velocities on the order of ≈150 km s^-1 (i.e., smaller than the escape velocity) with a loading factor, η = Ṁ_out/SFR, of ≈0.7. We see evidence for an open conical flow, with a low-density inner core. In the future, MUSE will provide us with about 80 multiple galaxy-quasar pairs in two dozen fields. Based on observations made at the ESO telescopes under programs 094.A-0211(B) and 293.A-5038(A).
MUSE observations of the lensing cluster Abell 1689
NASA Astrophysics Data System (ADS)
Bina, D.; Pelló, R.; Richard, J.; Lewis, J.; Patrício, V.; Cantalupo, S.; Herenz, E. C.; Soto, K.; Weilbacher, P. M.; Bacon, R.; Vernet, J. D. R.; Wisotzki, L.; Clément, B.; Cuby, J. G.; Lagattuta, D. J.; Soucail, G.; Verhamme, A.
2016-05-01
Context. This paper presents the results obtained with the Multi Unit Spectroscopic Explorer (MUSE) for the core of the lensing cluster Abell 1689, as part of MUSE's commissioning at the ESO Very Large Telescope. Aims: Integral-field observations with MUSE provide a unique view of the central 1 × 1 arcmin2 region at intermediate spectral resolution in the visible domain, allowing us to conduct a complete census of both cluster galaxies and lensed background sources. Methods: We performed a spectroscopic analysis of all sources found in the MUSE data cube. Two hundred and eighty-two objects were systematically extracted from the cube based on a guided-and-manual approach. We also tested three different tools for the automated detection and extraction of line emitters. Cluster galaxies and lensed sources were identified based on their spectral features. We investigated the multiple-image configuration for all known sources in the field. Results: Previous to our survey, 28 different lensed galaxies displaying 46 multiple images were known in the MUSE field of view, most of them were detected through photometric redshifts and lensing considerations. Of these, we spectroscopically confirm 12 images based on their emission lines, corresponding to 7 different lensed galaxies between z = 0.95 and 5.0. In addition, 14 new galaxies have been spectroscopically identified in this area thanks to MUSE data, with redshifts ranging between 0.8 and 6.2. All background sources detected within the MUSE field of view correspond to multiple-imaged systems lensed by A1689. Seventeen sources in total are found at z ≥ 3 based on their Lyman-α emission, with Lyman-α luminosities ranging between 40.5 ≲ log (Lyα) ≲ 42.5 after correction for magnification. This sample is particularly sensitive to the slope of the luminosity function toward the faintest end. The density of sources obtained in this survey is consistent with a steep value of α ≤ -1.5, although this result still
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
Deep MUSE observations in the HDFS. Morpho-kinematics of distant star-forming galaxies down to 108M⊙
NASA Astrophysics Data System (ADS)
Contini, T.; Epinat, B.; Bouché, N.; Brinchmann, J.; Boogaard, L. A.; Ventou, E.; Bacon, R.; Richard, J.; Weilbacher, P. M.; Wisotzki, L.; Krajnović, D.; Vielfaure, J.-B.; Emsellem, E.; Finley, H.; Inami, H.; Schaye, J.; Swinbank, M.; Guérou, A.; Martinsson, T.; Michel-Dansac, L.; Schroetter, I.; Shirazi, M.; Soucail, G.
2016-06-01
Aims: Whereas the evolution of gas kinematics of massive galaxies is now relatively well established up to redshift z ~ 3, little is known about the kinematics of lower mass (M⋆≤ 1010M⊙) galaxies. We use MUSE, a powerful wide-field, optical integral-field spectrograph (IFS) recently mounted on the VLT, to characterize this galaxy population at intermediate redshift. Methods: We made use of the deepest MUSE observations performed so far on the Hubble Deep Field South (HDFS). This data cube, resulting from 27 h of integration time, covers a one arcmin2 field of view at an unprecedented depth (with a 1σ emission-line surface brightness limit of 1 × 10-19 erg s-1 cm-2 arcsec-2) and a final spatial resolution of ≈0.7''. We identified a sample of 28 resolved emission-line galaxies, extending over an area that is at least twice the seeing disk, spread over a redshift interval of 0.2
New muonium HFS measurements at J-PARC/MUSE
NASA Astrophysics Data System (ADS)
Strasser, P.; Aoki, M.; Fukao, Y.; Higashi, Y.; Higuchi, T.; Iinuma, H.; Ikedo, Y.; Ishida, K.; Ito, T. U.; Iwasaki, M.; Kadono, R.; Kamigaito, O.; Kanda, S.; Kawall, D.; Kawamura, N.; Koda, A.; Kojima, K. M.; Kubo, K.; Matsuda, Y.; Matsudate, Y.; Mibe, T.; Miyake, Y.; Mizutani, T.; Nagamine, K.; Nishimura, S.; Nishiyama, K.; Ogitsu, T.; Okubo, R.; Saito, N.; Sasaki, K.; Seo, S.; Shimomura, K.; Sugano, M.; Tajima, M.; Tanaka, K. S.; Tanaka, T.; Tomono, D.; Torii, H. A.; Torikai, E.; Toyoda, A.; Ueno, K.; Ueno, Y.; Yagi, D.; Yamamoto, A.; Yoshida, M.
2016-12-01
At the Muon Science Facility (MUSE) of J-PARC (Japan Proton Accelerator Research Complex), the MuSEUM collaboration is planning new measurements of the ground state hyperfine structure (HFS) of muonium both at zero field and at high magnetic field. The previous measurements, both performed at LAMPF (Los Alamos Meson Physics Facility), had experimental uncertainties mostly dominated by statistical errors. The new high intensity muon beam that will soon be available at the MUSE H-Line will provide an opportunity to improve the precision of these measurements by one order of magnitude. An overview of the different aspects of these new muonium HFS measurements, the current status of the preparation, and the results of a first commissioning test experiment at zero field are presented.
IFU simulator: a powerful alignment and performance tool for MUSE instrument
NASA Astrophysics Data System (ADS)
Laurent, Florence; Boudon, Didier; Daguisé, Eric; Dubois, Jean-Pierre; Jarno, Aurélien; Kosmalski, Johan; Piqueras, Laure; Remillieux, Alban; Renault, Edgard
2014-07-01
MUSE (Multi Unit Spectroscopic Explorer) is a second generation Very Large Telescope (VLT) integral field spectrograph (1x1arcmin² Field of View) developed for the European Southern Observatory (ESO), operating in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH-grating and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. Afterwards, MUSE was transferred, still fully assembled, onto the VLT telescope, where first light was achieved. This talk describes the IFU Simulator, which is the main alignment and performance tool for the MUSE instrument. The IFU Simulator mimics the optomechanical interface between the MUSE pre-optics and the 24 IFUs. The optomechanical design is presented; then the alignment method of this innovative tool for identifying the pupil and image planes is described, and finally the internal test report is summarized. The success of the MUSE alignment using the IFU Simulator is demonstrated by the excellent results obtained for MUSE positioning, image quality and throughput. MUSE commissioning at the VLT is planned for September 2014.
Not Your Daddy's Data Link: Musings on Datalink Communications
NASA Technical Reports Server (NTRS)
Branstetter, James
2004-01-01
Viewgraphs about musings on Datalink Communications are presented. Some of the topics include: 1) Keen Eye for a Straight Proposal (Next Gen Data Link); 2) So many datalinks so little funding!!!; 3) Brave New World; 4) Time marches on!; 5) Through the Looking Glass; 6) Dollars & Sense Cooking; 7) Economics 101; 8) The Missing Link(s); 9) Straight Shooting; and 10) All is not lost.
Current status of the J-PARC muon facility, MUSE
NASA Astrophysics Data System (ADS)
Miyake, Y.; Shimomura, K.; Kawamura, N.; Strasser, P.; Koda, A.; Fujimori, H.; Ikedo, Y.; Makimura, S.; Kobayashi, Y.; Nakamura, J.; Kojima, K.; Adachi, T.; Kadono, R.; Takeshita, S.; Nishiyama, K.; Higemoto, W.; Ito, T.; Nagamine, K.; Ohata, H.; Makida, Y.; Yoshida, M.; Okamura, T.; Okada, R.; Ogitsu, T.
2014-12-01
The muon science facility (MUSE), along with the neutron, hadron, and neutrino facilities, is one of the experimental areas of the J-PARC project. The MUSE facility is located in the Materials and Life Science Facility (MLF), a building integrated to include both neutron and muon science programs. Since the autumn of 2008, user operation has been under way, making use of the pulsed muon beam, particularly at the D-Line. Unfortunately, MUSE suffered severe damage from the earthquake of March 11, 2011, the so-called "Higashi-Nippon Dai-Shinsai". We achieved stable operation of the superconducting solenoid magnet using the on-line refrigerator in December 2012, although we had to overcome many difficulties with components that were not working properly. However, all operations had to be stopped again in May 2013 because of the radioactive material leakage accident at the Hadron Hall Experimental Facility. We finally restarted the users' runs in February 2014.
Kinoshita, Kahori; Kuno, Shinichiro; Ishimine, Hisako; Aoi, Noriyuki; Mineda, Kazuhide; Kato, Harunosuke; Doi, Kentaro; Kanayama, Koji; Feng, Jingwei; Mashiko, Takanobu; Kurisaki, Akira
2015-01-01
Stage-specific embryonic antigen-3 (SSEA-3)-positive multipotent mesenchymal cells (multilineage differentiating stress-enduring [Muse] cells) were isolated from cultured human adipose tissue-derived stem/stromal cells (hASCs) and characterized, and their therapeutic potential for treating diabetic skin ulcers was evaluated. Cultured hASCs were separated using magnetic-activated cell sorting into positive and negative fractions, a SSEA-3+ cell-enriched fraction (Muse-rich) and the remaining fraction (Muse-poor). Muse-rich hASCs showed upregulated and downregulated pluripotency and cell proliferation genes, respectively, compared with Muse-poor hASCs. These cells also released higher amounts of certain growth factors, particularly under hypoxic conditions, compared with Muse-poor cells. Skin ulcers were generated in severe combined immunodeficiency (SCID) mice with type 1 diabetes, which showed delayed wound healing compared with nondiabetic SCID mice. Treatment with Muse-rich cells significantly accelerated wound healing compared with treatment with Muse-poor cells. Transplanted cells were integrated into the regenerated dermis as vascular endothelial cells and other cells. However, they were not detected in the surrounding intact regions. Thus, the selected population of ASCs has a greater therapeutic effect in accelerating the impaired wound healing associated with type 1 diabetes. These cells can be obtained in large amounts with minimal morbidity and could be a practical tool for a variety of stem cell-depleted or ischemic conditions of various organs and tissues. PMID:25561682
Virtual Space Learning MariMUSE: Connecting Learners from Kindergarten to 99.
ERIC Educational Resources Information Center
Hughes, Billie; And Others
The Multi-User Simulation Environment (MUSE) software is designed to motivate students across many age levels to engage in reading, writing, problem-solving, and collaborative and creative projects. MUSE software provides a text-based, virtual world on computers connected to a network, allowing synchronous and asynchronous communication among…
Estimation of coefficients and boundary parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Murphy, K. A.
1984-01-01
Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.
Vacuum and cryogenic system for the MUSE detectors
NASA Astrophysics Data System (ADS)
Lizon, J. L.; Accardo, M.; Gojak, Domingo; Reiss, Roland; Kern, Lothar
2012-09-01
MUSE, with its 24 detectors distributed over an eight square meter vertical area, required a well engineered and extremely reliable cryogenic system. The solution also had to use a technology proven to be compatible with the very high sensitivity of the VLT interferometer. A short introduction reviews the various available technologies to cool these 24 chips down to 160 K. The first part of the paper presents the selected concept, insisting on the various advantages offered by LN2. In addition to the purely vacuum and cryogenic aspects, we highlight some of the most interesting features of the control system, which is based on a PLC.
Sample Return Mission to NEA : MUSES-C
NASA Astrophysics Data System (ADS)
Fujiwara, A.; Mukai, T.; Kawaguchi, J.; Uesugi, K. T.
MUSES-C is a mission for near-earth-asteroid sample return. The spacecraft is launched in 2002 and returns to the Earth in 2006. The primary objective is to demonstrate the key technologies requisite for future advanced sample return missions. It collects samples of a few g from a near-earth asteroid. Sampling will be made by rendezvousing with the asteroid, approaching it, then shooting a small projectile onto the asteroid surface and catching the ejecta. In this paper, an outline of the mission is given, with special stress on the science aspects.
Estimation of discontinuous coefficients in parabolic systems: Applications to reservoir simulation
NASA Technical Reports Server (NTRS)
Lamm, P. D.
1984-01-01
Spline based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. The problem of determining discontinuous coefficients, estimating both the functional shape and points of discontinuity for such parameters is discussed. Convergence results and a summary of numerical performance of the resulting algorithms are given.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
MUSE field splitter unit: fan-shaped separator for 24 integral field units
NASA Astrophysics Data System (ADS)
Laurent, Florence; Renault, Edgard; Anwand, Heiko; Boudon, Didier; Caillier, Patrick; Kosmalski, Johan; Loupias, Magali; Nicklas, Harald; Seifert, Walter; Salaun, Yves; Xu, Wenli
2014-07-01
MUSE (Multi Unit Spectroscopic Explorer) is a second generation Very Large Telescope (VLT) integral field spectrograph developed for the European Southern Observatory (ESO). It combines a 1' x 1' field of view sampled at 0.2 arcsec for its Wide Field Mode (WFM) and a 7.5"x7.5" field of view for its Narrow Field Mode (NFM). Both modes will operate with the improved spatial resolution provided by GALACSI (Ground Atmospheric Layer Adaptive Optics for Spectroscopic Imaging), which will use the VLT deformable secondary mirror and 4 Laser Guide Stars (LGS) foreseen in 2015. MUSE operates in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH-grating and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. Afterwards, MUSE was transferred, still fully assembled, onto the VLT telescope, where first light was achieved. This paper describes the MUSE main optical component: the Field Splitter Unit. It splits the VLT image into 24 subfields and provides the first separation of the beam for the 24 Integral Field Units. This talk depicts its manufacture at Winlight Optics and its alignment into the MUSE instrument. The success of the MUSE
MUSE, the goddess of muons, and her future.
Kadono, Ryosuke; Miyake, Yasuhiro
2012-02-01
The Muon Science Establishment (MUSE) is one of the major experimental facilities, along with those for neutron, hadron and neutrino experiments, in J-PARC. It makes up a part of the Materials and Life Science Experiment Facility (MLF) that hosts a tandem neutron facility (JSNS) driven by a single proton beam. The facility consists of a superconducting solenoid (for pion confinement) with a modest-acceptance (about 45 msr) injector of pions and muons obtained from a 20 mm thick edge-cooled stationary graphite target, delivering a 'surface muon' beam (μ+) and a 'decay muon' beam (μ+/μ-) for a wide variety of applications. It has recently been confirmed that the beamline has the world's highest muon intensity (∼10^6 μ+/s) at a proton beam power of 120 kW. The beamline is furnished with two experimental areas (D1 and D2) at the exit branches, where an apparatus for muon spin rotation/relaxation experiments (μSR) is currently installed in the D1 area while test experiments are conducted in the D2 area. In this paper, the current performance of the MUSE facility as a whole is reviewed. The facility is still in the early stage of development, including both beamlines and infrastructure for experiments, and plans for upgrading it are discussed together with perspectives for the research envisaged with unprecedented high-intensity muons.
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
The Story of Supernova “Refsdal” Told by Muse
NASA Astrophysics Data System (ADS)
Grillo, C.; Karman, W.; Suyu, S. H.; Rosati, P.; Balestra, I.; Mercurio, A.; Lombardi, M.; Treu, T.; Caminha, G. B.; Halkola, A.; Rodney, S. A.; Gavazzi, R.; Caputi, K. I.
2016-05-01
We present Multi Unit Spectroscopic Explorer (MUSE) observations in the core of the Hubble Frontier Fields (HFF) galaxy cluster MACS J1149.5+2223, where the first magnified and spatially resolved multiple images of supernova (SN) “Refsdal” at redshift 1.489 were detected. Thanks to a Director's Discretionary Time program with the Very Large Telescope and the extraordinary efficiency of MUSE, we measure 117 secure redshifts with just 4.8 hr of total integration time on a single 1 arcmin² target pointing. We spectroscopically confirm 68 galaxy cluster members, with redshift values ranging from 0.5272 to 0.5660, and 18 multiple images belonging to seven background, lensed sources distributed in redshifts between 1.240 and 3.703. Starting from the combination of our catalog with those obtained from extensive spectroscopic and photometric campaigns using the Hubble Space Telescope (HST), we select a sample of 300 (164 spectroscopic and 136 photometric) cluster members, within approximately 500 kpc from the brightest cluster galaxy, and a set of 88 reliable multiple images associated with 10 different background source galaxies and 18 distinct knots in the spiral galaxy hosting SN “Refsdal.” We exploit this valuable information to build six detailed strong-lensing models, the best of which reproduces the observed positions of the multiple images with an rms offset of only 0.″26. We use these models to quantify the statistical and systematic errors on the predicted values of magnification and time delay of the next emerging image of SN “Refsdal.” We find that its peak luminosity should occur between 2016 March and June and should be approximately 20% fainter than the dimmest (S4) of the previously detected images but above the detection limit of the planned HST/WFC3 follow-up. We present our two-dimensional reconstruction of the cluster mass density distribution and of the SN “Refsdal” host galaxy surface brightness distribution. We outline the road map
MUSES-C, its launch and early orbit operations
NASA Astrophysics Data System (ADS)
Kawaguchi, Jun'ichiro; Kuninaka, Hitoshi; Fujiwara, Akira; Uesugi, Tono
2003-11-01
The MUSES-C was launched on May 9th of 2003 and was named 'Hayabusa'. It aims at the world's first sample return from a near-Earth asteroid, 1998 SF36, now renamed "Itokawa". The spacecraft is a kind of technology demonstrator with four key technologies. The paper presents a quick report on the initial operation of the ion engines aboard and shows how the attitude control has been performed incorporating the closed-loop de-saturation function onboard. The paper also presents how much delta-V has been applied to the spacecraft, as well as how the orbit determination under low-thrust acceleration has been performed.
Fava, Joseph L.; van den Berg, Jacob J.; Rosen, Rochelle K.; Salomon, Liz A.; Vargas, Sara; Christensen, Anna L.; Pinkston, Megan; Morrow, Kathleen M.
2015-01-01
Objectives To evaluate the psychometric properties of the Microbicide Use Self-Efficacy (MUSE) instrument and to examine correlates of self-efficacy to use vaginal microbicides among a sample of racially and ethnically diverse women living in the northeastern United States. Methods Exploratory and confirmatory factor analytic methods were used to explore and determine the dimensionality and psychometric properties of the MUSE. Construct validity was assessed by examining the relationships of the MUSE to key sexual behavior, partner communication, relationship, and psychosocial variables. Results Two dimensions of microbicide use self-efficacy were psychometrically validated and identified as Adherence and Access and Situational Challenges. The two 4-item subscales measuring Adherence and Access and Situational Challenges had reliability coefficients of .78 and .85, respectively. Correlates of the two measures were tested at a Bonferroni-adjusted alpha level of p =.001, and 19 of 43 variables analyzed were found to significantly relate to Adherence and Access, while 16 of 43 variables were significantly related to Situational Challenges. Of the 35 significant relationships, 32 were in the domains of partner communication, partner relationships, and behavioral and psychosocial variables. Conclusions The MUSE instrument demonstrated strong internal validity, reliability, and initial construct validity. The MUSE can be a useful tool in capturing the multidimensional nature of microbicide use self-efficacy among diverse populations of women. PMID:23806676
The New Hyperspectral Sensor DESIS on the Multi-Payload Platform MUSES Installed on the ISS
NASA Astrophysics Data System (ADS)
Müller, R.; Avbelj, J.; Carmona, E.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Ickes, J.; Kerr, G.; Knodt, U.; Krutz, D.; Krawczyk, H.; Makarau, A.; Miller, R.; Perkins, R.; Walter, I.
2016-06-01
The new hyperspectral instrument DLR Earth Sensing Imaging Spectrometer (DESIS) will be developed and integrated into the Multi-User-System for Earth Sensing (MUSES) platform installed on the International Space Station (ISS). The DESIS instrument will be launched to the ISS in mid-2017 and robotically installed in one of the four slots of the MUSES platform. After a four-month commissioning phase, the operational phase will last at least until 2020. The MUSES/DESIS system will be commanded and operated by the publicly traded company TBE (Teledyne Brown Engineering), which initiated the whole program. TBE provides the MUSES platform, and the German Aerospace Center (DLR) develops the DESIS instrument and establishes a Ground Segment for processing, archiving, delivering and calibrating the image data, mainly used for scientific and humanitarian applications. Well-calibrated and harmonized products will be generated together with the Ground Segment established at Teledyne. The article describes the Space Segment, consisting of the MUSES platform and the DESIS instrument, as well as the activities at the two (synchronized) Ground Segments, covering the processing methods, product generation, data calibration and product validation. Finally, comments on the data policy are given.
Shimamura, Norihito; Kakuta, Kiyohide; Wang, Liang; Naraoka, Masato; Uchida, Hiroki; Wakao, Shohei; Dezawa, Mari; Ohkuma, Hiroki
2017-02-01
A novel type of non-tumorigenic pluripotent stem cell, the Muse cell (multi-lineage differentiating stress-enduring cell), resides in the connective tissue and in cultured mesenchymal stem cells (MSCs) and is reported to differentiate into multiple cell types according to the microenvironment to repair tissue damage. We examined the efficiency of Muse cells in a mouse intracerebral hemorrhage (ICH) model. Seventy μl of cardiac blood was stereotactically injected into the left putamen of immunodeficient mice. Five days later, 2 × 10⁵ human bone marrow MSC-derived Muse cells (n = 6) or cells other than Muse cells in MSCs (non-Muse, n = 6) or the same volume of PBS (n = 11) was injected into the ICH cavity. Water maze and motor function tests were implemented for 68 days, and immunohistochemistry for NeuN, MAP2 and GFAP was done. The Muse group showed impressive recovery: recovery was seen in the water maze after day 19 and in motor function after day 5, compared with the other two groups, with a statistically significant difference (p < 0.05). The survival rate of the engrafted cells in the Muse group was significantly higher than in the non-Muse group (p < 0.05) at day 69, and those cells showed positivity for NeuN (~57%) and MAP-2 (~41.6%). Muse cells could remain in the ICH brain, differentiate into neural-lineage cells and restore functions without being induced into neuronal cells by gene introduction or cytokine treatment prior to transplantation. A simple collection of Muse cells and their supply to the brain in the naïve state facilitates regenerative therapy in ICH.
Urania, the Muse of Astronomy: She Who Draws Our Eyes
NASA Astrophysics Data System (ADS)
Rossi, S.
2016-01-01
In exploring the inspiration of astronomical phenomena upon human culture we are invited, perhaps beckoned, to reflect on Urania, the Greek Muse of Astronomy. Heavenly One or Heavenly Bright, Urania teaches mortals the shape and wonder of the cosmos, “men who have been instructed by her she raises aloft to heaven for it is a fact that imagination and power of thought lift men's souls to heavenly heights” (Siculus 1935). Yet in cities, the heavenly lights are dimmed, flooded by another source of light which is that of culture, and that is the domain of Aphrodite. So it is to her we must turn to understand what draws our eyes up to the heavens above the dazzling city lights. And, as Aphrodite Urania, her cultural and aesthetic domain is connected to the order of the cosmos itself, “the triple Moirai are ruled by thy decree, and all productions yield alike to thee: whatever the heavens, encircling all, contain, earth fruit-producing, and the stormy main, thy sway confesses, and obeys thy word...” (Athanassakis 1988). My presentation is a mythopoetic cultural excavation of the gods and ideas in our passion for astronomy; how, in our fascination with the cosmos, we see Urania and Aphrodite, these goddesses who inspire us city dwellers, planetarium devotees, and silent-field stargazers to look upwards.
The MUSES Satellite Team and Multidisciplinary System Engineering
NASA Technical Reports Server (NTRS)
Chen, John C.; Paiz, Alfred R.; Young, Donald L.
1997-01-01
In a unique partnership between three minority-serving institutions and NASA's Jet Propulsion Laboratory, a new course sequence, including a multidisciplinary capstone design experience, is to be developed and implemented at each of the schools with the ambitious goal of designing, constructing and launching a low-orbit Earth-resources satellite. The three universities involved are North Carolina A&T State University (NCA&T), University of Texas, El Paso (UTEP), and California State University, Los Angeles (CSULA). The schools form a consortium collectively known as MUSES - Minority Universities System Engineering and Satellite. Four aspects of this project make it unique: (1) Including all engineering disciplines in the capstone design course, (2) designing, building and launching an Earth-resources satellite, (3) sustaining the partnership between the three schools to achieve this goal, and (4) implementing systems engineering pedagogy at each of the three schools. This paper will describe the partnership and its goals, the first design of the satellite, the courses developed at NCA&T, and the implementation plan for the course sequence.
Constraint-Muse: A Soft-Constraint Based System for Music Therapy
NASA Astrophysics Data System (ADS)
Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin
Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy-to-use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson's disease and for children with high-functioning autism or Asperger's syndrome.
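The core idea of the abstract above can be sketched in a few lines: each soft constraint maps a candidate to a preference value drawn from a monoid, and the monoid operation combines the constraints. The sketch below is an illustrative reading only, not code from the Constraint Muse tool; the constraint names (`harmony`, `smoothness`), the monoid ([0, 1] under multiplication with identity 1), and the candidate note range are all invented for the example.

```python
# Toy monoidal soft constraints: preference values live in ([0, 1], *, 1),
# so combining constraints multiplies their satisfaction degrees.
# All names here are hypothetical, chosen only to mirror the abstract.

def harmony(note, chord_notes):
    """Prefer notes whose pitch class belongs to the sounding chord."""
    return 1.0 if note % 12 in {n % 12 for n in chord_notes} else 0.3

def smoothness(note, prev_note):
    """Prefer small melodic steps from the previous note."""
    return 1.0 / (1.0 + abs(note - prev_note))

def combine(values):
    """The monoid operation: multiplication, with identity 1.0."""
    out = 1.0
    for v in values:
        out *= v
    return out

def best_note(candidates, chord_notes, prev_note):
    """Pick the candidate maximizing the combined soft-constraint value."""
    return max(candidates,
               key=lambda n: combine([harmony(n, chord_notes),
                                      smoothness(n, prev_note)]))

# C major chord (MIDI 60, 64, 67); the previous melody note was 62 (D).
choice = best_note(range(55, 72), chord_notes=[60, 64, 67], prev_note=62)
print(choice)  # → 60 (middle C: a chord tone two semitones away)
```

Dynamically changing user preferences would correspond to swapping or reweighting the constraint functions between calls while the combining monoid stays fixed.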
NASA Astrophysics Data System (ADS)
Bouché, N.; Finley, H.; Schroetter, I.; Murphy, M. T.; Richter, P.; Bacon, R.; Contini, T.; Richard, J.; Wendt, M.; Kamann, S.; Epinat, B.; Cantalupo, S.; Straka, L. A.; Schaye, J.; Martin, C. L.; Péroux, C.; Wisotzki, L.; Soto, K.; Lilly, S.; Carollo, C. M.; Brinchmann, J.; Kollatschny, W.
2016-04-01
We use a background quasar to detect the presence of circumgalactic gas around a z = 0.91 low-mass star-forming galaxy. Data from the new Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope show that the galaxy has a dust-corrected star formation rate (SFR) of 4.7 ± 2.0 M⊙ yr⁻¹, with no companion down to 0.22 M⊙ yr⁻¹ (5σ) within 240 h⁻¹ kpc (30″). Using a high-resolution spectrum of the background quasar, which is fortuitously aligned with the galaxy major axis (with an azimuth angle α of only 15°), we find, in the gas kinematics traced by low-ionization lines, distinct signatures consistent with those expected for a “cold-flow disk” extending at least 12 kpc (3 × R_1/2). We estimate the mass accretion rate Ṁ_in to be at least two to three times larger than the SFR, using the geometric constraints from the IFU data and the H I column density of log N(H I)/cm⁻² ≃ 20.4 obtained from a Hubble Space Telescope/COS near-UV spectrum. From a detailed analysis of the low-ionization lines (e.g., Zn II, Cr II, Ti II, Mn II, Si II), the accreting material appears to be enriched to about 0.4 Z⊙ (albeit with large uncertainties: log Z/Z⊙ = −0.4 ± 0.4), which is comparable to the galaxy metallicity (12 + log O/H = 8.7 ± 0.2), implying a large recycling fraction from past outflows. Blueshifted Mg II and Fe II absorptions in the galaxy spectrum from the MUSE data reveal the presence of an outflow. The Mg II and Fe II absorption line ratios indicate emission infilling due to scattering processes, but the MUSE data do not show any signs of fluorescent Fe II* emission. Based on observations made at the ESO telescopes under programs 080.A-0364 (SINFONI), 079.A-0600 (UVES), and as part of MUSE commissioning (ESO program 060.A-9100). Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities
2016-01-01
Muse cells are a novel population of nontumorigenic pluripotent stem cells, highly resistant to cellular stress. These cells are present in every connective tissue and intrinsically express pluripotent stem markers such as Nanog, Oct3/4, Sox2, and TRA1-60. Muse cells are able to differentiate into cells from all three embryonic germ layers both spontaneously and under media-specific induction. Unlike ESCs and iPSCs, Muse cells exhibit low telomerase activity and asymmetric division and do not undergo tumorigenesis or teratoma formation when transplanted into a host organism. Muse cells have a high capacity for homing into damaged tissue and spontaneous differentiation into cells of compatible tissue, leading to tissue repair and functional restoration. The ability of Muse cells to restore tissue function may demonstrate the role of Muse cells in a highly conserved cellular mechanism related to cell survival and regeneration, in response to cellular stress and acute injury. From an evolutionary standpoint, genes pertaining to the regenerative capacity of an organism have been lost in higher mammals from more primitive species. Therefore, Muse cells may offer insight into the molecular and evolutionary bases of autonomous tissue regeneration and elucidate the molecular and cellular mechanisms that prevent mammals from regenerating limbs and organs, as planarians, newts, zebrafish, and salamanders do. PMID:28070194
MUSE-ings on AM1354-250: Collisions, Shocks, and Rings
Conn, Blair C.; Fogarty, L. M. R.; Smith, Rory; Candlish, Graeme N.
2016-03-10
We present Multi Unit Spectroscopic Explorer observations of AM1354-250, confirming its status as a collisional ring galaxy that has recently undergone an interaction, creating its distinctive shape. We analyze the stellar and gaseous emission throughout the galaxy, finding direct evidence that the gaseous ring is expanding with a velocity of ∼70 km s⁻¹ and that star formation is occurring primarily in H ii regions associated with the ring. This star formation activity is likely triggered by this interaction. We find evidence for several excitation mechanisms in the gas, including emission consistent with shocked gas in the expanding ring and a region of LINER-like emission in the central core of the galaxy. Evidence of kinematic disturbance in both the stars and gas, possibly also triggered by the interaction, can be seen in all of the velocity maps. The ring galaxy retains a weak spiral structure, strongly suggesting the progenitor galaxy was a massive spiral prior to the collision with its companion an estimated 140 ± 12 Myr ago.
Gimeno, María L; Fuertes, Florencia; Barcala Tabarrozzi, Andres E; Attorressi, Alejandra I; Cucchiani, Rodolfo; Corrales, Luis; Oliveira, Talita C; Sogayar, Mari C; Labriola, Leticia; Dewey, Ricardo A; Perone, Marcelo J
2017-01-01
Adult mesenchymal stromal cell-based interventions have shown promising results in a broad range of diseases. However, their use has faced limited effectiveness owing to the low survival rates and susceptibility to environmental stress on transplantation. We describe the cellular and molecular characteristics of multilineage-differentiating stress-enduring (Muse) cells derived from adipose tissue (AT), a subpopulation of pluripotent stem cells isolated from human lipoaspirates. Muse-AT cells were efficiently obtained using a simple, fast, and affordable procedure, avoiding cell sorting and genetic manipulation methods. Muse-AT cells isolated under severe cellular stress expressed pluripotency stem cell markers and spontaneously differentiated into the three germ lineages. Muse-AT cells grown as spheroids have a limited proliferation rate, a diameter of ∼15 µm, and ultrastructural organization similar to that of embryonic stem cells. Muse-AT cells evidenced high stage-specific embryonic antigen-3 (SSEA-3) expression (∼60% of cells) after 7-10 days growing in suspension and did not form teratomas when injected into immunodeficient mice. SSEA-3⁺ Muse-AT cells expressed CD105, CD29, CD73, human leukocyte antigen (HLA) class I, CD44, and CD90 and low levels of HLA class II, CD45, and CD34. Using lipopolysaccharide-stimulated macrophages and antigen-challenged T-cell assays, we have shown that Muse-AT cells have anti-inflammatory activities, downregulating the secretion of proinflammatory cytokines such as interferon-γ and tumor necrosis factor-α. Muse-AT cells spontaneously gained transforming growth factor-β1 expression that, in a phosphorylated SMAD2-dependent manner, might prove pivotal in their observed immunoregulatory activity through decreased expression of T-box transcription factor in T cells. Collectively, the present study has demonstrated the feasibility and efficiency of obtaining Muse-AT cells that can potentially be harnessed as
Estimation of discontinuous coefficients and boundary parameters for hyperbolic systems
NASA Technical Reports Server (NTRS)
Lamm, P. K.; Murphy, K. A.
1986-01-01
The problem of estimating discontinuous coefficients, including locations of discontinuities, that occur in second order hyperbolic systems typical of those arising in 1-D surface seismic problems is discussed. In addition, the problem of identifying unknown parameters that appear in boundary conditions for the system is treated. A spline-based approximation theory is presented, together with related convergence findings and representative numerical examples.
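As a loose illustration of the spline-based idea running through these abstracts (and not the authors' actual scheme, which couples the unknown coefficient to a hyperbolic PDE), one can expand an unknown spatially varying coefficient in a spline basis and recover the expansion weights by linear least squares from noisy samples. Everything below — the hat-function basis, the test coefficient, the noise level — is invented for the sketch.

```python
# Hedged sketch: estimate an "unknown" spatially varying coefficient a(x)
# by expanding it in a piecewise-linear (first-order spline) basis and
# solving a linear least-squares problem against noisy observations.
import numpy as np

def hat(x, xl, xc, xr):
    """One piecewise-linear spline 'hat' centered at xc, supported on [xl, xr]."""
    y = np.zeros_like(x)
    if xc > xl:                       # ascending branch
        m = (x >= xl) & (x <= xc)
        y[m] = (x[m] - xl) / (xc - xl)
    if xr > xc:                       # descending branch
        m = (x >= xc) & (x <= xr)
        y[m] = (xr - x[m]) / (xr - xc)
    return y

def spline_design(x, knots):
    """Design matrix whose columns are the hat functions on `knots`."""
    padded = np.concatenate([[knots[0]], knots, [knots[-1]]])
    return np.column_stack([hat(x, padded[j], padded[j + 1], padded[j + 2])
                            for j in range(len(knots))])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = 1.0 + 0.5 * np.sin(2 * np.pi * x)        # the "unknown" coefficient
obs = truth + rng.normal(0.0, 0.01, x.size)      # noisy observations of a(x)

knots = np.linspace(0.0, 1.0, 9)
B = spline_design(x, knots)                      # 200 x 9 basis matrix
coef, *_ = np.linalg.lstsq(B, obs, rcond=None)   # spline weights
estimate = B @ coef

print(round(float(np.max(np.abs(estimate - truth))), 3))
```

In the papers summarized here the least-squares fit is against the observed PDE solution rather than direct samples of the coefficient, and the convergence results concern refining the spline mesh; this sketch only shows the finite-dimensional spline parametrization step.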
The MUSES-C, mission description and its status
NASA Astrophysics Data System (ADS)
Kawaguchi, Jun'ichiro; Uesugi, Kuninori T.; Fujiwara, Akira; Saitoh, Hirobumi
1999-11-01
The MUSES-C mission is the world's first attempt to collect and return a sample from a near-Earth asteroid, Nereus (4660). The mission is managed by ISAS (The Institute of Space and Astronautical Science, Ministry of Education); it started in 1996 and is scheduled for launch in January 2002. The mission is built as a kind of technology demonstration; however, it aims not only at in-situ observation but also at touch-down sampling of surface fragments. The collected sample is returned to the Earth in January 2006, making the mission a four-year journey. Its major purpose originally consists of the following four subjects: 1) ion thruster propulsion used in the interplanetary field as a primary means, 2) autonomous guidance, navigation and control during the rendezvous and touch-down phases, 3) the sample collection mechanism, and 4) the hyperbolic reentry capsule with the asteroid sample contained inside it. The current primary objective is extended to carry a joint small rover with NASA/JPL, which is supposed to be placed on the surface and to look into the crater created by the sampling shot of the projectile. The rover, designated the Small Science Vehicle (SSV), weighs about 1 kg and carries three kinds of in-situ instruments: 1) a visible camera, 2) a near-infrared spectrometer, and potentially 3) an alpha-proton X-ray spectrometer (APXS) similar to that delivered on the Mars Pathfinder. During fiscal 1998 the spacecraft undergoes the PM tests, and the FM fabrication starts the next year, 1999. The paper presents the latest mission description around the asteroid and shows the current status of the spacecraft as well as the instruments. The mission will be a good example of international collaboration in small interplanetary exploration.
A very dark stellar system lost in Virgo: kinematics and metallicity of SECCO 1 with MUSE
NASA Astrophysics Data System (ADS)
Beccari, G.; Bellazzini, M.; Magrini, L.; Coccato, L.; Cresci, G.; Fraternali, F.; de Zeeuw, P. T.; Husemann, B.; Ibata, R.; Battaglia, G.; Martin, N.; Testa, V.; Perina, S.; Correnti, M.
2017-02-01
We present the results of VLT-MUSE (Very Large Telescope-Multi Unit Spectroscopic Explorer) integral field spectroscopy of SECCO 1, a faint, star-forming stellar system recently discovered as the stellar counterpart of an ultracompact high-velocity cloud (HVC 274.68+74.0), very likely residing within a substructure of the Virgo cluster of galaxies. We have obtained the radial velocity of a total of 38 individual compact sources identified as H II regions in the main and secondary bodies of the system, and derived the metallicity for 18 of them. We provide the first direct demonstration that the two stellar bodies of SECCO 1 are physically associated and that their velocities match the H I velocities. The metallicity is quite uniform over the whole system, with a dispersion lower than the uncertainty on individual metallicity estimates. The mean abundance, ⟨12 + log(O/H)⟩ = 8.44, is much higher than the typical values for local dwarf galaxies of similar stellar mass. This strongly suggests that the SECCO 1 stars were born from a pre-enriched gas cloud, possibly stripped from a larger galaxy. Using archival Hubble Space Telescope (HST) images, we derive a total stellar mass of ≃1.6 × 10⁵ M⊙ for SECCO 1, confirming that it has a very high H I-to-stellar mass ratio for a dwarf galaxy, M_H I/M_* ∼ 100. The star formation rate, derived from the Hα flux, is a factor of more than 10 higher than in typical dwarf galaxies of similar luminosity.
Musings of Someone in the Disability Support Services Field for Almost 40 Years
ERIC Educational Resources Information Center
Goodin, Sam
2014-01-01
As the title states, this article is a collection of musings with only modest attempts at establishing an order for them or connections between them. It is not quite "free association," but it is close. This structure or perhaps lack of it reflects the variety of things we do in our work. Many of the things we do have little in common…
MUSE--Model for University Strategic Evaluation. AIR 2002 Forum Paper.
ERIC Educational Resources Information Center
Kutina, Kenneth L.; Zullig, Craig M.; Starkman, Glenn D.; Tanski, Laura E.
A model for simulating college and university operations, finances, program investments, and market response in terms of applicants, acceptances, and retention has been developed and implemented using the system dynamics approach. The Model for University Strategic Evaluation (MUSE) is a simulation of the total operations of the university,…
VizieR Online Data Catalog: MUSE 3D view of HDF-S (Bacon+, 2015)
NASA Astrophysics Data System (ADS)
Bacon, R.; Brinchmann, J.; Richard, J.; Contini, T.; Drake, A.; Franx, M.; Tacchella, S.; Vernet, J.; Wisotzki, L.; Blaizot, J.; Bouche, N.; Bouwens, R.; Cantalupo, S.; Carollo, C. M.; Carton, D.; Caruana, J.; Clement, B.; Dreizler, S.; Epinat, B.; Guiderdoni, B.; Herenz, C.; Husser, T.-O.; Kamann, S.; Kerutt, J.; Kollatschny, W.; Krajnovic, D.; Lilly, S.; Martinsson, T.; Michel-Dansac, L.; Patricio, V.; Schaye, J.; Shirazi, M.; Soto, K.; Soucail, G.; Steinmetz, M.; Urrutia, T.; Weilbacher, P.; de Zeeuw, T.
2015-04-01
The HDFS was observed during six nights, on July 25-29, 31 and August 2-3, 2014, in the last commissioning run of MUSE. We used the nominal wavelength range (4750-9300 Å) and performed a series of exposures of 30 min each. (3 data files).
Estimation of Delays and Other Parameters in Nonlinear Functional Differential Equations.
1981-12-01
H. T. Banks and P. L. Daniel, LCDS Report #82... Abstract: We discuss a spline-based approximation scheme for nonlinear nonautonomous delay differential equations. Convergence results (using dissipative type estimates on the
BOOK REVIEW: Galileo's Muse: Renaissance Mathematics and the Arts
NASA Astrophysics Data System (ADS)
Peterson, Mark; Sterken, Christiaan
2013-12-01
Galileo's Muse is a book that focuses on the life and thought of Galileo Galilei. The Prologue consists of a first chapter on Galileo the humanist, dealing with Galileo's influence on his student Vincenzo Viviani (who wrote a biography of Galileo). This introductory chapter is followed by a very nice chapter that describes the classical legacy: Pythagoreanism and Platonism, Euclid and Archimedes, and Plutarch and Ptolemy. The author explicates the distinction between Greek and Roman contributions to the classical legacy, an explanation that is crucial for understanding Galileo and Renaissance mathematics. The following eleven chapters of this book, arranged in a kind of quadrivium (viz., Poetry, Painting, Music, Architecture), present arguments to support the author's thesis that the driver for Galileo's genius was not Renaissance science, as is generally accepted, but the Renaissance arts brought forth by poets, painters, musicians, and architects. These four sets of chapters describe the underlying mathematics in poetry, the visual arts, music, and architecture. Likewise, Peterson stresses the impact of the philosophical overtones present in geometry but absent in algebra and its equations. Basically, the author writes about Galileo while trying to ignore the Copernican controversy, which he sees as distracting attention from Galileo's scientific legacy. As such, his story deviates from the standard myth on Galileo. But the book also looks at other eminent characters, such as Galileo's father Vincenzo (who cultivated music and music theory), the painter Piero della Francesca (who featured elaborate perspectives in his work), Dante Alighieri (author of the Divina Commedia), Filippo Brunelleschi (who engineered the dome of the Basilica di Santa Maria del Fiore in Florence), Johannes Kepler (a strong supporter of Galileo's Copernicanism), etc. This book is very well documented: it offers, for each chapter, a wide selection of excellent biographical notes, and includes a fine
VLT/MUSE discovers a jet from the evolved B[e] star MWC 137
NASA Astrophysics Data System (ADS)
Mehner, A.; de Wit, W. J.; Groh, J. H.; Oudmaijer, R. D.; Baade, D.; Rivinius, T.; Selman, F.; Boffin, H. M. J.; Martayan, C.
2016-01-01
Aims: Not all stars exhibiting the optical spectral characteristics of B[e] stars are in the same evolutionary stage. The Galactic B[e] star MWC 137 is a prime example of an object with uncertain classification, where previous work has suggested either a pre- or a post-main sequence classification. Our goal is to settle this debate and provide a reliable evolutionary classification. Methods: Integral field spectrograph observations with the Very Large Telescope Multi Unit Spectroscopic Explorer (VLT MUSE) of the cluster SH 2-266 are used to analyze the nature of MWC 137. Results: A collimated outflow is discovered that is geometrically centered on MWC 137. The central position of MWC 137 in the cluster SH 2-266 within the larger nebula suggests strongly that it is a member of this cluster and that it is the origin of both the nebula and the newly discovered jet. Comparison of the color-magnitude diagram of the brightest cluster stars with stellar evolutionary models results in a distance of about 5.2 ± 1.4 kpc. We estimate that the cluster is at least 3 Myr old. The jet emanates from MWC 137 at a position angle of 18-20°. The jet extends over 66'' (1.7 pc) projected on the plane of the sky, shows several knots, and has electron densities of about 10³ cm⁻³ and projected velocities of up to ±450 km s⁻¹. From the Balmer emission line decrement of the diffuse intracluster nebulosity, we determine E(B-V) = 1.4 mag for the inner 1' cluster region. The spectral energy distribution of the brightest cluster stars yields a slightly lower extinction of E(B-V) ~ 1.2 mag for the inner region and E(B-V) ~ 0.4-0.8 mag for the outer region. The extinction toward MWC 137 is estimated to be E(B-V) ~ 1.8 mag (AV ~ 5.6 mag). Conclusions: Our analysis of the optical and near-infrared color-magnitude and color-color diagrams suggests a post-main sequence stage for MWC 137. The existence of a jet in this object implies the presence of an accretion disk. Several possibilities for MWC
New μSR spectrometer at J-PARC MUSE based on Kalliope detectors
NASA Astrophysics Data System (ADS)
Kojima, K. M.; Murakami, T.; Takahashi, Y.; Lee, H.; Suzuki, S. Y.; Koda, A.; Yamauchi, I.; Miyazaki, M.; Hiraishi, M.; Okabe, H.; Takeshita, S.; Kadono, R.; Ito, T.; Higemoto, W.; Kanda, S.; Fukao, Y.; Saito, N.; Saito, M.; Ikeno, M.; Uchida, T.; Tanaka, M. M.
2014-12-01
We developed a new positron detector system called Kalliope, which is based on multi-pixel avalanche photodiodes (m-APDs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and Ethernet-based SiTCP data transfer technology. We have manufactured a general-purpose spectrometer for muon spin relaxation (μSR) measurements, employing 40 Kalliope units (1280 channels of scintillators) installed in a 0.4 T longitudinal-field magnet. The spectrometer has been placed at the D1 experimental area of the J-PARC Muon Science Establishment (MUSE). Since February 2014, the spectrometer has been used for the user programs of MUSE after a short commissioning period of one week. The data accumulation rate of the new spectrometer is 180 million positron events per hour (after taking the coincidence of the two scintillators of each telescope) from a 20 × 20 mm sample for double-pulsed incoming muons.
Collaborative Research: Equipment for and Running of the PSI MUSE Experiment
Kohl, Michael
2016-10-01
The R&D funding from this award has been a significant tool in moving the Muon Scattering Experiment (MUSE) at the Paul Scherrer Institute in Switzerland forward to the stage of realization. Specifically, this award has enabled Dr. Michael Kohl and his working group at Hampton University to achieve substantial progress toward the goal of providing beam particle tracking with Gas Electron Multiplier (GEM) detectors for the MUSE experiment. Establishing a particle detection system capable of operating in a high-intensity environment, with a data acquisition system running at several kHz and robust tracking software that reconstructs tracks with high efficiency in the presence of noise and backgrounds, will have immediate application in many other experiments.
ICL light in a z~0.5 cluster: the MUSE perspective
NASA Astrophysics Data System (ADS)
Pompei, E.; Adami, C.; XXL Team
2017-03-01
Intracluster light (ICL) is contributed by both stars and gas, and it is an important tracer of the interaction history of galaxies within a cluster. We present the results obtained from MUSE observations of an intermediate-redshift (z ~ 0.5) cluster taken from the XXL survey and conclude that the most plausible process responsible for the observed amount of ICL is ram pressure stripping.
ADS reactivity measurements from MUSE to TRADE (and where do we go from here?)
Imel, G.; Mellier, F.; Jammes, C.; Philibert, H.; Granget, G.; Gonzalez, E.; Villamarin, D.; Rosa, R.; Carta, M.; Monti, S.; Baeten, P.; Billebaud, A.
2006-07-01
This paper provides a link between the MUSE (Multiplication avec Source Externe) program performed at CEA-Cadarache in France, and the TRADE (TRIGA Accelerator Driven Experiment) program performed at ENEA-Casaccia in Italy. In both programs, extensive measurements were made to determine the best methods for sub-criticality measurements in an accelerator-driven system. A very serious attempt was made to quantify the uncertainties associated with such measurements. While both MUSE and TRADE studied the methods of sub-criticality determination, in fact the two systems are very different. MUSE was a fast system with MOX fuel (generation time around 0.5 μs), and TRADE was performed in a TRIGA reactor (generation time around 50 μs). This paper will summarize the important results of these two experiments, with the main purpose being to tie them together to attempt to draw generic conclusions that can be applied in the future to a real ADS. In addition, this paper will briefly discuss the next series of experiments that will continue this work in the U.S. (RACE, Reactor Accelerator Coupled Experiments), Belarus (YALINA), Belgium (GUINEVERE), and Russia (SAD, Sub-critical Assembly Dubna). MUSE and TRADE have contributed greatly to our understanding of the uncertainties associated with sub-critical measurements, but there are still some gaps that must be covered. This paper will describe the gaps that exist, and demonstrate how the above future programs will fill in the missing information needed for the design of an actual ADS system in the future.
Choosing MUSE: Validation of a Low-Cost, Portable EEG System for ERP Research
Krigolson, Olave E.; Williams, Chad C.; Norton, Angela; Hassall, Cameron D.; Colino, Francisco L.
2017-01-01
In recent years there has been an increase in the number of portable low-cost electroencephalographic (EEG) systems available to researchers. However, to date the validation of the use of low-cost EEG systems has focused on continuous recording of EEG data and/or the replication of large-system EEG setups reliant on event markers to afford examination of event-related brain potentials (ERPs). Here, we demonstrate that it is possible to conduct ERP research without being reliant on event markers using a portable MUSE EEG system and a single computer. Specifically, we report the results of two experiments using data collected with the MUSE EEG system: one using the well-known visual oddball paradigm and the other using a standard reward-learning task. Our results demonstrate that we could observe and quantify the N200 and P300 ERP components in the visual oddball task and the reward positivity (the mirror opposite component to the feedback-related negativity) in the reward-learning task. Specifically, single-sample t-tests of component existence (all p's < 0.05), computation of Bayesian credible intervals, and 95% confidence intervals all statistically verified the existence of the N200, P300, and reward positivity in all analyses. We provide with this research paper an open-source website with all the instructions, methods, and software to replicate our findings and to provide researchers with an easy way to use the MUSE EEG system for ERP research. Importantly, our work highlights that with a single computer and a portable EEG system such as the MUSE, one can conduct ERP research with ease, thus greatly extending the possible use of the ERP methodology to a variety of novel contexts. PMID:28344546
ERIC Educational Resources Information Center
Neal, James G.
This paper outlines a series of quantitative and qualitative models for understanding and evaluating the use of electronic scholarly journals, and summarizes data based on the experience of Project Muse at Johns Hopkins University and early feedback received from subscribing libraries. Project Muse is a collaborative initiative between the Press…
Tsuchiyama, Kenichiro; Wakao, Shohei; Kuroda, Yasumasa; Ogura, Fumitaka; Nojima, Makoto; Sawaya, Natsue; Yamasaki, Kenshi; Aiba, Setsuya; Dezawa, Mari
2013-10-01
The induction of melanocytes from easily accessible stem cells has attracted attention for the treatment of melanocyte dysfunctions. We found that multilineage-differentiating stress-enduring (Muse) cells, a distinct stem cell type among human dermal fibroblasts, can be readily reprogrammed into functional melanocytes, whereas the remainder of the fibroblasts do not contribute to melanocyte differentiation. Muse cells can be isolated as cells positive for stage-specific embryonic antigen-3, a marker for undifferentiated human embryonic stem cells, and differentiate into cells representative of all three germ layers from a single cell, while also being nontumorigenic. The use of certain combinations of factors induces Muse cells to express melanocyte markers such as tyrosinase and microphthalmia-associated transcription factor and to show positivity for the 3,4-dihydroxy-L-phenylalanine reaction. When Muse cell-derived melanocytes were incorporated into three-dimensional (3D) cultured skin models, they localized themselves in the basal layer of the epidermis and produced melanin in the same manner as authentic melanocytes. They also maintained their melanin production even after the 3D cultured skin was transplanted to immunodeficient mice. This technique may be applicable to the efficient production of melanocytes from accessible human fibroblasts by using Muse cells, thereby contributing to autologous transplantation for melanocyte dysfunctions, such as vitiligo.
Modifications made to ModelMuse to add support for the Saturated-Unsaturated Transport model (SUTRA)
Winston, Richard B.
2014-01-01
This report (1) describes modifications to ModelMuse, as described in U.S. Geological Survey (USGS) Techniques and Methods (TM) 6–A29 (Winston, 2009), to add support for the Saturated-Unsaturated Transport model (SUTRA) (Voss and Provost, 2002; version of September 22, 2010) and (2) supplements USGS TM 6–A29. Modifications include changes to the main ModelMuse window where the model is designed, addition of methods for generating a finite-element mesh suitable for SUTRA, defining how some functions should apply when using a finite-element mesh rather than a finite-difference grid (as originally programmed in ModelMuse), and applying spatial interpolation to angles. In addition, the report describes ways of handling objects on the front view of the model and displaying data. A tabulation contains a summary of the new or modified dialog boxes.
Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise
2011-01-01
A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrode array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 μm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current source localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under the GNU General Public License and available at http://sites.google.com/site/neuromapsoftware.
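The thin-plate-spline mapping that NeuroMap performs can be sketched with SciPy's RBFInterpolator as a generic stand-in (NeuroMap itself is Matlab-based; the 8 × 8 electrode grid and synthetic activity values below are assumptions for illustration):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical regular 8x8 MEA layout with synthetic per-electrode activity.
rng = np.random.default_rng(42)
xy = np.stack(np.meshgrid(np.arange(8), np.arange(8)),
              axis=-1).reshape(-1, 2).astype(float)
values = rng.normal(size=len(xy))

# A thin-plate spline passes exactly through the electrode samples and
# yields a smooth activity estimate anywhere on the tissue, so local
# extrema need not coincide with recording sites.
tps = RBFInterpolator(xy, values, kernel="thin_plate_spline")

grid = np.stack(np.meshgrid(np.linspace(0, 7, 50), np.linspace(0, 7, 50)),
                axis=-1).reshape(-1, 2)
field = tps(grid)  # dense 50x50 activity map, flattened
```

The same call accepts irregular or 3D electrode coordinates, which is the flexibility the abstract highlights over grid-bound mapping methods.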
Design and construction of the ultra-slow muon beamline at J-PARC/MUSE
NASA Astrophysics Data System (ADS)
Strasser, P.; Ikedo, Y.; Makimura, S.; Nakamura, J.; Nishiyama, K.; Shimomura, K.; Fujimori, H.; Adachi, T.; Koda, A.; Kawamura, N.; Kobayashi, Y.; Higemoto, W.; Ito, T. U.; Nagatomo, T.; Torikai, E.; Kadono, R.; Miyake, Y.
2014-12-01
At the J-PARC Muon Science Facility (MUSE), a new Ultra-Slow Muon beamline is being constructed to extend the μSR technique from bulk materials to thin films, thus enabling a wide variety of surface and nano-science studies, as well as novel 3D imaging with the "ultra-slow muon microscope". Ultra-slow muons will be produced by the re-acceleration of thermal muons regenerated by the laser resonant ionization of muonium atoms evaporated from a hot tungsten foil, a method that originated from the Meson Science Laboratory at KEK. The design parameters, construction status, and initial beam commissioning are reported.
NASA Astrophysics Data System (ADS)
Mehner, A.; Steffen, W.; Groh, J. H.; Vogt, F. P. A.; Baade, D.; Boffin, H. M. J.; Davidson, K.; de Wit, W. J.; Humphreys, R. M.; Martayan, C.; Oudmaijer, R. D.; Rivinius, T.; Selman, F.
2016-11-01
Aims: The role of episodic mass loss is one of the outstanding questions in massive star evolution. The structural inhomogeneities and kinematics of their nebulae are tracers of their mass-loss history. We conduct a three-dimensional morpho-kinematic analysis of the ejecta of η Car outside its famous Homunculus nebula. Methods: We carried out the first large-scale integral field unit observations of η Car in the optical, covering a field of view of 1' × 1' centered on the star. Observations with the Multi Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope (VLT) reveal the detailed three-dimensional structure of η Car's outer ejecta. Morpho-kinematic modeling of these ejecta is conducted with the code SHAPE. Results: The largest coherent structure in η Car's outer ejecta can be described as a bent cylinder with roughly the same symmetry axis as the Homunculus nebula. This large outer shell is interacting with the surrounding medium, creating soft X-ray emission. Doppler velocities of up to 3000 km s-1 are observed. We establish the shape and extent of the ghost shell in front of the southern Homunculus lobe and confirm that the NN condensation can best be modeled as a bowshock in the orbital/equatorial plane. Conclusions: The SHAPE modeling of the MUSE observations provides a significant gain in the study of the three-dimensional structure of η Car's outer ejecta. Our SHAPE modeling indicates that the kinematics of the outer ejecta measured with MUSE can be described by a spatially coherent structure, and that this structure also correlates with the extended soft X-ray emission associated with the outer debris field. The ghost shell immediately outside the southern Homunculus lobe hints at a sequence of eruptions within the time frame of the Great Eruption from 1837 to 1858, or possibly a later shock/reverse shock velocity separation. Our 3D morpho-kinematic modeling and the MUSE observations constitute an invaluable dataset to be confronted with future
NASA Technical Reports Server (NTRS)
Nakamura, T.; Noguchi, T.; Tanaka, M.; Zolensky, M. E.; Kimura, M.; Nakato, A.; Ogami, T.; Ishida, H.; Tsuchiyama, A.; Yada, T.; Shirai, K.; Okazaki, R.; Fujimura, A.; Ishibashi, Y.; Abe, M.; Okada, T.; Ueno, M.; Mukai, T.
2011-01-01
Remote sensing by the spacecraft Hayabusa suggested that the outermost surface particles of the Muses-C regio of the asteroid Itokawa consist of centimeter- and sub-centimeter-size small pebbles. However, the particles we found in sample catcher A stored in the Hayabusa capsule, where Muses-C particles were captured during the first touchdown, are much smaller; i.e., most are smaller than 100 microns in size. This suggests that only small fractions of the Muses-C particles were stirred up due to the impact of the sampling horn onto the surface, or due to jets from chemical thrusters during the lift-off of the spacecraft from the surface. X-ray fluorescence and near-infrared measurements by the Hayabusa spacecraft suggested that Itokawa surface materials have a mineral and major-element composition roughly similar to LL chondrites. The particles of the Muses-C region are expected to have experienced some effects of space weathering. Both of these prospects can be tested by the direct mineralogical analyses of the returned Itokawa particles in our study and another one. This comparison is the most important aspect of the Hayabusa mission, because it finally links chemical analyses of meteorites fallen on the Earth to spectroscopic measurements of the asteroids.
The MUSE QSO Blind Survey: A Census of Absorber Host Galaxies
NASA Astrophysics Data System (ADS)
Straka, Lorrie A.
2017-03-01
Understanding the distribution of gas in galaxies and its interaction with the IGM is crucial to complete the picture of galaxy evolution. At all redshifts, absorption features seen in QSO spectra serve as a unique probe of the gaseous content of foreground galaxies and the IGM, extending out to 200 kpc. Studies show that star formation history is intimately related to the co-evolution of galaxies and the IGM. In order to study the environments traced by absorption systems and the role of inflows and outflows, it is critical to measure the emission properties of host galaxies and their halos. We overcome the challenge of detecting absorption host galaxies with the MUSE integral field spectrograph on VLT. MUSE's large field of view and sensitivity to emission lines has allowed a never-before seen match between the number density of absorbers along QSO sightlines and the number density of emission line galaxies within 200 kpc of the QSO. These galaxies represent a sample for which previously elusive connections can be made between mass, metallicity, SFR, and absorption.
NASA Astrophysics Data System (ADS)
Levenson, Richard M.; Harmany, Zachary; Demos, Stavros G.; Fereidouni, Farzad
2016-03-01
Widely used methods for preparing and viewing tissue specimens at microscopic resolution have not changed for over a century. They provide high-quality images but can involve time frames of hours or even weeks, depending on logistics. There is increasing interest in slide-free methods for rapid tissue analysis that can both decrease turn-around times and reduce costs. One new approach is MUSE (microscopy with UV surface excitation), which exploits the shallow penetration of UV light to excite fluorescent signals from only the most superficial tissue elements. The method is non-destructive and eliminates the requirement for conventional histology processing, formalin fixation, paraffin embedding, and thin sectioning. It requires no lasers and no confocal, multiphoton, or optical coherence tomography optics. MUSE generates diagnostic-quality histological images that can be rendered to resemble conventional hematoxylin- and eosin-stained samples, with enhanced topographical information, from fresh or fixed but unsectioned tissue, rapidly, with high resolution, simply, and inexpensively. We anticipate widespread adoption in research facilities, hospital-based and stand-alone clinical settings, local or regional pathology labs, and low-resource environments.
Minority Universities Systems Engineering (MUSE) Program at the University of Texas at El Paso
NASA Technical Reports Server (NTRS)
Robbins, Mary Clare; Usevitch, Bryan; Starks, Scott A.
1997-01-01
In 1995, The University of Texas at El Paso (UTEP) responded to the suggestion of NASA Jet Propulsion Laboratory (NASA JPL) to form a consortium comprised of California State University at Los Angeles (CSULA), North Carolina Agricultural and Technical University (NCAT), and UTEP, from which developed the Minority Universities Systems Engineering (MUSE) Program. The mission of this consortium is to develop a unique position for minority universities in providing the nation's future system architects and engineers as well as to enhance JPL's system design capability. The goals of this collaboration include the development of a systems engineering curriculum that includes hands-on project engineering and design experiences. UTEP is in a unique position to take full advantage of this program, since UTEP has been named a Model Institution for Excellence (MIE) by the National Science Foundation. The purpose of MIE is to produce leaders in Science, Math, and Engineering. UTEP has also been selected as the site for two new centers: the Pan American Center for Earth and Environmental Sciences (PACES), directed by Dr. Scott Starks, and the FAST Center for Structural Integrity of Aerospace Systems, directed by Dr. Roberto Osegueda. The UTEP MUSE Program operates under the auspices of the PACES Center.
Development of thermal protection system of the MUSES-C/DASH reentry capsule
NASA Astrophysics Data System (ADS)
Yamada, Tetsuya; Inatani, Yoshifumi; Honda, Masahisa; Hirai, Ken'ich
2002-07-01
In the final phase of the MUSES-C mission, a small capsule carrying the asteroid sample performs a direct reentry from the interplanetary transfer orbit at a velocity over 12 km/s. The severe heat flux, the complicated functional requirements, and the small weight budget impose several engineering challenges on the design of the capsule's thermal protection system. The heat shield is required to function not only as an ablator but also as a structural component. The cloth-layered carbon-phenolic ablator, which has a higher allowable stress, was developed with a newly devised fabrication method to avoid delamination under the high aerodynamic heating. An ablation analysis code, which takes into account the effect of pyrolysis gas on the surface recession rate, has been developed and verified in arc-heating tests covering a broad range of enthalpy levels. The capsule is sealed with a porous flow-restricting material so that it vents during the reentry flight, reaching approximately atmospheric pressure by the time of parachute deployment. The design of the thermal protection system, the hardware specifications, and the ground-based test programs of both the MUSES-C and DASH capsules are summarized and discussed in this paper.
Spline-Based Smoothing of Airfoil Curvatures
NASA Technical Reports Server (NTRS)
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
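The deviation-from-smoothness measure described above (the sum of squared jumps in the third derivatives of a cubic-spline interpolant) is easy to compute directly from the spline's piecewise coefficients. A minimal sketch using SciPy as an illustrative stand-in; CFACS itself embeds this measure in a constrained minimization, which is not reproduced here:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_roughness(x, y):
    """Sum of squared jumps in the third derivative of the cubic-spline
    interpolant through (x, y) -- the smoothness measure described for
    CFACS, computed here in isolation rather than minimized."""
    cs = CubicSpline(x, y)      # cs.c[0, i] is the leading (cubic) coefficient of piece i
    d3 = 6.0 * cs.c[0]          # third derivative is constant on each piece
    return float(np.sum(np.diff(d3) ** 2))  # jumps occur at interior knots
```

For ordinates sampled from a single cubic polynomial the interpolant is that cubic, so the measure is essentially zero; noise in the ordinates shows up immediately as third-derivative jumps.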
Exploring the mass assembly of the early-type disc galaxy NGC 3115 with MUSE
NASA Astrophysics Data System (ADS)
Guérou, A.; Emsellem, E.; Krajnović, D.; McDermid, R. M.; Contini, T.; Weilbacher, P. M.
2016-07-01
We present MUSE integral field spectroscopic data of the S0 galaxy NGC 3115 obtained during the instrument commissioning at the ESO Very Large Telescope (VLT). We analyse the galaxy stellar kinematics and stellar populations and present two-dimensional maps of their associated quantities. We thus illustrate the capacity of MUSE to map extra-galactic sources to large radii in an efficient manner, i.e., ~4 Re, and provide relevant constraints on its mass assembly. We probe the well-known set of substructures of NGC 3115 (nuclear disc, stellar rings, outer kpc-scale stellar disc, and spheroid) and show their individual associated signatures in the MUSE stellar kinematics and stellar populations maps. In particular, we confirm that NGC 3115 has a thin fast-rotating stellar disc embedded in a fast-rotating spheroid, and that these two structures show clear differences in their stellar age and metallicity properties. We emphasise an observed correlation between the radial stellar velocity, V, and the Gauss-Hermite moment, h3, which creates a butterfly shape in the central 15'' of the h3 map. We further detect the previously reported weak spiral- and ring-like structures, and find evidence that these features can be associated with regions of younger mean stellar ages. We provide tentative evidence for the presence of a bar, although the V-h3 correlation can be reproduced by a simple axisymmetric dynamical model. Finally, we present a reconstruction of the two-dimensional star formation history of NGC 3115 and find that most of its current stellar mass was formed at early epochs (>12 Gyr ago), while star formation continued in the outer (kpc-scale) stellar disc until recently. Since z ~ 2 and within ~4 Re, we suggest that NGC 3115 has been mainly shaped by secular processes. The images of the derived parameters in FITS format and the reduced datacube are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc
NASA Participation in the ISAS MUSES C Asteroid Sample Return Mission
NASA Technical Reports Server (NTRS)
Jones, Ross
2000-01-01
NASA and Japan's Institute of Space and Astronautical Science (ISAS) have agreed to cooperate on the first mission to collect samples from the surface of an asteroid and return them to Earth for in-depth study. The MUSES-C mission will be launched on a Japanese MV launch vehicle in January 2002 from Kagoshima Space Center, Japan, toward a touchdown on the asteroid Nereus in September 2003. A NASA-provided miniature rover will conduct in-situ measurements on the surface. The asteroid samples will be returned to Earth by MUSES-C via a parachute-borne recovery capsule in January 2006. NASA and ISAS will cooperate on several aspects of the mission, including mission support and scientific analysis. In addition to providing the rover, NASA will arrange for the testing of the MUSES-C re-entry heat shield at NASA/Ames Research Center, provide supplemental Deep Space Network tracking of the spacecraft, assist in navigating the spacecraft, and provide arrangements for the recovery of the sample capsule at a landing site in the U.S. Scientific co-investigators from the U.S. and Japan will share data from the instruments on the rover and the spacecraft. They will also collaborate on the investigations of the returned samples. With a mass of about 1 kg, the rover experiment will be a direct descendant of the technology used to build the Sojourner rover. The rover will carry three science instruments: a visible imaging camera, a near-infrared point spectrometer, and an alpha X-ray spectrometer. The solar-powered rover will move around the surface of Nereus collecting imagery data which are complementary to the spacecraft investigation. The imaging system will be capable of making surface texture, composition, and morphology measurements at resolutions better than 1 cm. The rover will transmit this data to the spacecraft for relay back to Earth. Due to the microgravity environment on Nereus, the rover has been designed to right itself in case it flips over. Solar panels on all
Musing over Microbes in Microgravity: Microbial Physiology Flight Experiment
NASA Technical Reports Server (NTRS)
Schweickart, Randolph; McGinnis, Michael; Bloomberg, Jacob; Lee, Angie (Technical Monitor)
2002-01-01
New York City, the most populated city in the United States, is home to over 8 million humans. This means over 26,000 people per square mile! Imagine, though, what the view would be if you peeked into the world of microscopic organisms. Scientists estimate that a gram of soil may contain up to 1 billion of these microbes, which is as much as the entire human population of China! Scientists also know that the world of microbes is incredibly diverse: possibly 10,000 different species in one gram of soil, more than all the different types of mammals in the world. Microbes fill every niche in the world, from 20 miles below the Earth's surface to 20 miles above, and at temperatures from less than -20 C to hotter than water's boiling point. These organisms are ubiquitous because they can adapt quickly to changing environments, an effective strategy for survival. Although we may not realize it, microbes impact every aspect of our lives. Bacteria and fungi help us break down the food in our bodies, and they help clean the air and water around us. They can also cause the dark, filmy buildup on the shower curtain as well as, more seriously, illness and disease. Since humans and microbes share space on Earth, we can benefit tremendously from a better understanding of the workings and physiology of the microbes. This insight can help prevent any harmful effects on humans, on Earth and in space, as well as reap the benefits they provide. Space flight is a unique environment to study how microbes adapt to changing environmental conditions. To advance ground-based research in the field of microbiology, this STS-107 experiment will investigate how microgravity affects bacteria and fungi. Of particular interest are the growth rates and how they respond to certain antimicrobial substances that will be tested; the same tests will be conducted on Earth at the same times. Comparing the results obtained in flight to those on Earth, we will be able to examine how microgravity induces
Ultra slow muon microscopy by laser resonant ionization at J-PARC, MUSE
NASA Astrophysics Data System (ADS)
Miyake, Y.; Ikedo, Y.; Shimomura, K.; Strasser, P.; Kawamura, N.; Nishiyama, K.; Koda, A.; Fujimori, H.; Makimura, S.; Nakamura, J.; Nagatomo, T.; Kadono, R.; Torikai, E.; Iwasaki, M.; Wada, S.; Saito, N.; Okamura, K.; Yokoyama, K.; Ito, T.; Higemoto, W.
2013-04-01
As one of the principal muon beam lines at the J-PARC muon facility (MUSE), we are now constructing the U-Line, which consists of a large-acceptance solenoid made of mineral-insulated cables (MIC), a superconducting curved transport solenoid, and superconducting axial focusing magnets. There, we can extract 2 × 10^8/s surface muons towards a hot tungsten target. At the U-Line, we are now establishing a new type of muon microscopy: a technique using the intense ultra-slow muon source generated by resonant ionization of thermal muonium (designated as Mu; consisting of a μ+ and an e-) atoms evaporated from the surface of the tungsten target. In this contribution, the latest status of the fully funded Ultra Slow Muon Microscopy project is reported.
Time of flight in MUSE at PIM1 at Paul Scherrer Institute
NASA Astrophysics Data System (ADS)
Lin, Wan; Gilman, Ronald; MUSE Collaboration
2016-09-01
The MUSE experiment at PIM1 at the Paul Scherrer Institute in Villigen, Switzerland, measures elastic scattering of electrons and muons from a liquid hydrogen target. The intent of the experiment is to deduce whether the radius of the proton is the same when determined from the two different particle types. Precision timing is an important aspect of the experiment, used to determine particle types, reaction types, and beam momentum. Here we present results for a test setup measuring time of flight between prototypes of two detector systems to be used in the experiment, compared to Geant4 simulations. The results demonstrate time-of-flight resolution better than 100 ps, and beam momentum determination at the level of a few tenths of a percent. This work was supported by the Douglass Project for Rutgers Women in Math, Science & Engineering and by National Science Foundation Grant 1306126 to Rutgers University.
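Time-of-flight particle identification of the kind described above reduces to comparing the measured flight time with the time expected for each species at the nominal beam momentum. A hedged sketch; the 115 MeV/c momentum and 10 m path length are illustrative numbers, not the actual PiM1 beamline geometry:

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

MASSES_MEV = {"e": 0.511, "mu": 105.658, "pi": 139.570}  # particle masses in MeV/c^2

def time_of_flight(p_mev, mass_mev, path_mm):
    """Flight time in ns over a straight path for a particle of momentum p (MeV/c)."""
    beta = p_mev / math.hypot(p_mev, mass_mev)  # beta = p / E, E = sqrt(p^2 + m^2)
    return path_mm / (beta * C_MM_PER_NS)

# At 115 MeV/c over a hypothetical 10 m flight path, the three species
# separate by many nanoseconds -- far above a ~100 ps timing resolution.
tofs = {s: time_of_flight(115.0, m, 10_000.0) for s, m in MASSES_MEV.items()}
```

The electron arrives essentially at the speed of light (about 33 ns here), while the muon and pion lag by roughly 12 and 19 ns respectively, which is why sub-nanosecond timing comfortably separates the beam species.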
Development of a scintillating-fiber beam detector for the MUSE experiment
NASA Astrophysics Data System (ADS)
Cohen, Erez O.; Piasetzky, Eli; Shamai, Yair; Pilip, Nikolay
2016-04-01
This paper describes the design, simulation, and prototyping of a scintillating-fiber (SciFi) beam hodoscope that enables real-time particle identification, momentum and position determination, and flux counting in a low-momentum mixed beam of pions, electrons and muons for the MUon-proton Scattering Experiment (MUSE) at the Paul Scherrer Institute (PSI), Switzerland. The experimental demands and conceptual design are discussed, including the mixing scheme used to suppress cross-talk between adjacent fibers. A comparison between different types of fibers is given. The timing resolution for 1 plane of SciFis is 0.40 ± 0.05 ns, and for 2 fiber planes in coincidence, it is 0.27 ± 0.03 ns. The detection efficiency when at least two planes are required to fire is 98%.
The Muon Scattering Experiment (MUSE) at PSI and the proton radius puzzle
NASA Astrophysics Data System (ADS)
Kohl, Michael
2014-11-01
The unexplained seven-standard-deviation discrepancy between the proton charge radius measured with the muonic hydrogen Lamb shift and determinations from elastic electron scattering and the Lamb shift in regular hydrogen is known as the proton radius puzzle. Suggested solutions of the puzzle range from possible errors in the experiments, through unexpectedly large hadronic physics effects, to new physics beyond the Standard Model. A new approach to verify the radius discrepancy in a systematic manner will be pursued with the Muon Scattering Experiment (MUSE) at PSI. The experiment aims to compare elastic cross sections, the proton elastic form factors, and the extracted proton charge radius for scattering of electrons and muons of either charge under identical conditions. The difference in the observed radius will be probed with high precision to verify the discrepancy. An overview of the experiment and its current status will be presented.
MUSE, a Lab-On-Chip System for In-Situ Analysis
NASA Astrophysics Data System (ADS)
Eckhard, F.; Prak, A.; van den Assem, D.
Stork Product Engineering and 3T have been working for several years on the development of an assembly technology for microsystem parts. This work has led to MATAS: Modular Assembly Technology for μTAS, a generic methodology that enables the development of very compact and highly integrated microsystems technology (MST) systems. A great advantage of MATAS is that it enables the use of commercially available microsystem parts from different suppliers. The high degree of integration of the MST parts with electronics enables the development of highly autonomous and intelligent systems suited for incorporation in planetary rovers or for supporting research on the ISS. To further improve the technology, and to show its advantages, the development of a system for on-chip capillary electrophoresis (CE) was selected. CE, long applied in the biosciences and biotechnology, is one of the key technologies for the detection and measurement of enantiomers. The study of enantiomers is an important aspect of the search for pre-biotic life. Due to its limited dimensions, Muse is perfectly suited for use in a planetary rover but could also easily become part of the Astrobiology Facility of the Space Station. For the measurement and detection of these enantiomers and other biomolecules, the system is equipped with a fluorescence detector. In 2002 a new project was started to equip the system with an electrochemical detector enabling conductivity and amperometric analysis. Direct conductivity detection is especially applied in capillary ion electrophoresis, which can be used complementarily to, or separately from, the zone electrophoresis in which the fluorescence detector is applied. The combination of these detection technologies leads to a multi-analysis system (Muse) with a very broad application area.
Gradient in the IMF slope and Sodium abundance of M87 with MUSE
NASA Astrophysics Data System (ADS)
Spiniello, C.; Sarzi, M.; Krajnovic, D.
2016-06-01
We present evidence for a radial variation of the stellar initial mass function (IMF) in the giant elliptical NGC 4486, based on integral-field MUSE data acquired during the first Science Verification run for this instrument. A steepening of the low-mass end of the IMF towards the centre of this galaxy is necessary to explain the increasing strength of several of the optical IMF-sensitive features introduced by Spiniello et al., which we observe in high-quality spectra extracted in annular apertures. The need for a varying IMF slope emerges when the strength of these IMF-sensitive features, together with that of other classical Lick indices mostly sensitive to stellar metallicity and the abundance of α-elements, is fitted with the state-of-the-art stellar population models from Conroy & van Dokkum and Vazdekis et al., which we modified to allow variations in IMF slope, metallicity, and α-element abundance. More specifically, adopting 13-Gyr-old, single-age stellar population models and a unimodal IMF, we find that the slope of the latter increases from x = 1.8 to x = 2.6 in the central 25 arcsec of NGC 4486. The varying IMF is accompanied by a metallicity gradient, whereas the abundance of α-elements appears constant throughout the MUSE field of view. We find metallicity and α-element abundance gradients fully consistent with the literature. A sodium over-abundance is necessary (according to CvD12 models) at all distances (for all apertures), and a slight gradient of increasing [Na/Fe] ratio towards the centre can be inferred. However, to completely break the degeneracies between Na abundance, total metallicity, and IMF variation, a more detailed investigation that includes the redder NaI line is required.
Estimation of discontinuous coefficients in parabolic systems - Applications to reservoir simulation
NASA Technical Reports Server (NTRS)
Lamm, Patricia K.
1987-01-01
Spline-based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. In particular, the problem of determining discontinuous coefficients is discussed, estimating both the functional shape and points of discontinuity for such parameters. In addition, the ideas may also be applied to problems with unknown initial conditions and unknown parameters appearing in terms representing external forces. Convergence results and a summary of numerical performance of the resulting algorithms are given.
Lyman-α emitters in the context of hierarchical galaxy formation: predictions for VLT/MUSE surveys
NASA Astrophysics Data System (ADS)
Garel, T.; Guiderdoni, B.; Blaizot, J.
2016-02-01
The VLT/Multi Unit Spectroscopic Explorer (MUSE) integral-field spectrograph can detect Lyα emitters (LAEs) in the redshift range 2.8 ≲ z ≲ 6.7 in a homogeneous way. Ongoing MUSE surveys will notably probe faint Lyα sources that are usually missed by current narrow-band surveys. We provide quantitative predictions for a typical wedding-cake observing strategy with MUSE based on mock catalogues generated with a semi-analytic model of galaxy formation coupled to numerical Lyα radiation transfer models in gas outflows. We expect ≈1500 bright LAEs (F_Lyα ≳ 10^-17 erg s^-1 cm^-2) in a typical shallow field (SF) survey carried out over ≈100 arcmin^2, and ≈2000 sources as faint as 10^-18 erg s^-1 cm^-2 in a medium-deep field (MDF) survey over 10 arcmin^2. In a typical deep field (DF) survey of 1 arcmin^2, we predict that ≈500 extremely faint LAEs (F_Lyα ≳ 4 × 10^-19 erg s^-1 cm^-2) will be found. Our results suggest that faint Lyα sources contribute significantly to the cosmic Lyα luminosity and SFR budget. While the host haloes of bright LAEs at z ≈ 3 and 6 have descendants with median masses of 2 × 10^12 and 5 × 10^13 M⊙, respectively, the faintest sources detectable by MUSE at these redshifts are predicted to reside in haloes which evolve into typical sub-L* and L* galaxy haloes at z = 0. We expect typical DF and MDF surveys to uncover the building blocks of Milky Way-like objects, even probing the bulk of the stellar mass content of LAEs located in their progenitor haloes at z ≈ 3.
NASA Technical Reports Server (NTRS)
Lederer, S. M.; Domingue, D. L.; Vilas, F.; Abe, M.; Farnham, T. L.; Jarvis, K. S.; Lowry, S. C.; Ohba, Y.; Weissman, P. R.; French, L. M.
2004-01-01
Several spacecraft missions have recently targeted asteroids to study their morphologies and physical properties (e.g. Galileo, NEAR Shoemaker), and more are planned. MUSES-C is a Japanese mission designed to rendezvous with a near-Earth asteroid (NEA). The MUSES-C spacecraft, Hayabusa, was launched successfully in May 2003. It will rendezvous with its target asteroid in 2005, and return samples to the Earth in 2007. Its target, 25143 Itokawa (1998 SF36), made a close approach to the Earth in 2001. We collected an extensive ground-based database of broadband photometry obtained during this time, which maximized the phase angle coverage, to characterize this target in preparation for the mission. Our project was designed to capitalize on the broadband UBVRI photometric observations taken with a series of telescopes, instrumentation, and observers. Photometry and spectrophotometry of Itokawa were acquired at Lowell, McDonald, Steward, Palomar, Table Mountain and Kiso Observatories. The photometric data sets were combined to calculate Hapke model parameters of the surface material of Itokawa, and examine the solar-corrected broadband color characteristics of the asteroid. Broadband photometry of an object can be used to: (1) determine its colors and thereby contribute to the understanding of its surface composition and taxonomic class, and (2) infer global physical surface properties of the target body. We present both colors from UBVRI observations of the MUSES-C target Itokawa, and physical properties derived by applying a Hapke model to the broadband BVRI photometry.
Wu Yanling; Shi Yong; Helou, George; Armus, Lee; Stierwalt, Sabrina; Dale, Daniel A.; Papovich, Casey; Rahman, Nurur; Dasyra, Kalliopi E-mail: yong@ipac.caltech.edu E-mail: lee@ipac.caltech.edu E-mail: ddale@uwyo.edu E-mail: nurur@astro.umd.edu
2011-06-10
We present rest-frame 15 and 24 μm luminosity functions (LFs) and the corresponding star-forming LFs at z < 0.3 derived from the 5MUSES sample. Spectroscopic redshifts have been obtained for ≈98% of the objects and the median redshift is ≈0.12. The 5-35 μm Infrared Spectrograph spectra allow us to estimate the luminosities accurately and build the LFs. Using a combination of starburst and quasar templates, we quantify the star formation (SF) and active galactic nucleus (AGN) contributions to the mid-IR spectral energy distribution. We then compute the SF LFs at 15 and 24 μm, and compare them with the total 15 and 24 μm LFs. When we remove the contribution of AGNs, the bright end of the LF exhibits a strong decline, consistent with the exponential cutoff of a Schechter function. Integrating the differential LF, we find that the fractional contribution by SF to the energy density is 58% at 15 μm and 78% at 24 μm, rising to ≈86% when we extrapolate our mid-IR results to the total IR luminosity density. We confirm that AGNs play a more important role energetically at high luminosities. Finally, we compare our results with work at z ≈ 0.7 and confirm that evolution in both luminosity and density is required to explain the difference in the LFs at different redshifts.
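The last step described above, integrating a Schechter-form differential LF to obtain a luminosity (energy) density, can be sketched generically. This is a hedged illustration: the functional form is the standard Schechter function, but the alpha, phi_star, and l_star values below are invented placeholders, not the fitted 5MUSES parameters.

```python
import math

# Schechter luminosity function: phi(L) dL = phi_star (L/L*)^alpha e^(-L/L*) dL/L*.
# Integrating L * phi(L) over all L gives phi_star * L_star * Gamma(alpha + 2).
# All parameter values here are made-up examples.

def schechter_lum_density(phi_star, l_star, alpha):
    """Closed-form luminosity density (valid for alpha > -2)."""
    return phi_star * l_star * math.gamma(alpha + 2.0)

def numeric_lum_density(phi_star, l_star, alpha, x_max=50.0, n=200000):
    """Midpoint-rule cross-check of the same integral, in x = L / L_star."""
    dx = x_max / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += x * x**alpha * math.exp(-x) * dx
    return phi_star * l_star * total

analytic = schechter_lum_density(1e-3, 1e10, -1.2)
numeric = numeric_lum_density(1e-3, 1e10, -1.2)
```

The analytic and numerical values agree to well under a percent, which is the kind of consistency check one would apply before comparing integrated densities across redshift bins.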
ERIC Educational Resources Information Center
Rosenfeld, Malke; Kelin, Daniel; Plows, Kate; Conarro, Ryan; Broderick, Debora
2014-01-01
When one says "writing about teaching artist practice," what exactly does that mean? In the first two sections (EJ1039315 and EJ1039319), the authors considered different ways to frame a story by either zooming in closely to a specific moment or zooming out to provide more context in an effort to address complex issues. The stories in…
Connecting the Dots: MUSE Unveils the Destructive Effect of Massive Stars
NASA Astrophysics Data System (ADS)
McLeod, A. F.; Ginsburg, A.; Klaassen, P.; Mottram, J.; Ramsay, S.; Testi, L.
2016-09-01
Throughout their entire lives, massive stars have a substantial impact on their surroundings, such as via protostellar outflows, stellar winds, ionising radiation and supernovae. Conceptually this is well understood, but the exact role of feedback mechanisms on the global star formation process and the stellar environment, as well as their dependence on the properties of the star-forming regions, are yet to be understood in detail. Observational quantification of the various feedback mechanisms is needed to precisely understand how high mass stars interact with and shape their environment, and which feedback mechanisms dominate under given conditions. We analysed the photo-evaporative effect of ionising radiation from massive stars on their surrounding molecular clouds using MUSE integral field data. This allowed us to determine the mass-loss rate of pillar-like structures (due to photo-evaporation) in different environments, and relate it to the ionising power of nearby massive stars. The resulting correlation is the first observational quantification of the destructive effect of ionising radiation from massive stars.
Microscopy with UV Surface Excitation (MUSE) for slide-free histology and pathology imaging
NASA Astrophysics Data System (ADS)
Fereidouni, Farzad; Datta-Mitra, Ananya; Demos, Stavros; Levenson, Richard
2015-03-01
A novel microscopy method that takes advantage of shallow photon penetration using ultraviolet-range excitation and exogenous fluorescent stains is described. This approach exploits the intrinsic optical sectioning function when exciting tissue fluorescence from superficial layers to generate images similar to those obtainable from a physically thin-sectioned tissue specimen. UV light in the spectral range from roughly 240-275 nm penetrates only a few microns into the surface of biological specimens, thus eliminating out-of-focus signals that would otherwise arise from deeper tissue layers. Furthermore, UV excitation can be used to simultaneously excite fluorophores emitting across a wide spectral range. The sectioning property of the UV light (as opposed to more conventional illumination in the visible range) removes the need for physical or more elaborate optical sectioning approaches, such as confocal, nonlinear or coherent tomographic methods, to generate acceptable axial resolution. Using a tunable laser, we investigated the effect of excitation wavelength in the 230-350 nm spectral range on excitation depth. The results reveal an optimal wavelength range and suggest that this method can be a fast and reliable approach for rapid imaging of tissue specimens. Some of this range is addressable by currently available and relatively inexpensive LED light sources. MUSE may prove to be a good alternative to conventional, time-consuming, histopathology procedures.
Highest Resolution Topography of 433 Eros and Implications for MUSES-C
NASA Technical Reports Server (NTRS)
Cheng, A. F.; Barnouin-Jha, O.
2003-01-01
The highest resolution observations of surface morphology and topography at asteroid 433 Eros were obtained by the Near Earth Asteroid Rendezvous (NEAR) Shoemaker spacecraft on 12 February 2001, as it landed within a ponded deposit on Eros. Coordinated observations were obtained by the imager and the laser rangefinder, at best image resolution of 1 cm/pixel and best topographic resolution of 0.4 m. The NEAR landing datasets provide unique information on rock size and height distributions and regolith processes. Rocks and soil can be distinguished photometrically, suggesting that bare rock is indeed exposed. The NEAR landing data are the only data at sufficient resolution to be relevant to hazard assessment on future landed missions to asteroids, such as the MUSES-C mission which will land on asteroid 25143 (1998 SF36) in order to obtain samples. In a typical region just outside the pond where NEAR landed, the areal coverage by resolved positive topographic features is 18%. At least one topographic feature in the vicinity of the NEAR landing site would have been hazardous for a spacecraft.
Multiple Scattering in Beam-line Detectors of the MUSE Experiment
NASA Astrophysics Data System (ADS)
Garland, Heather; Robinette, Clay; Strauch, Steffen; MUon Scattering Experiment (MUSE) Collaboration
2015-10-01
The charge radius of the proton has been obtained precisely from elastic electron-scattering data and spectroscopy of atomic hydrogen. However, a recent experiment using muonic hydrogen, designed for high precision, yielded a charge radius significantly smaller than the accepted value. This discrepancy has prompted discussion of topics ranging from experimental methods to physics beyond the Standard Model. The MUon Scattering Experiment (MUSE) collaboration at the Paul Scherrer Institute, Switzerland, is planning an experiment to measure the charge radius of the proton in elastic scattering of electrons and muons of positive and negative charge off protons. In the layout for the proposed experiment, detectors will be placed in the beam line upstream of a hydrogen target. Using Geant4 simulations, we studied the effect of multiple scattering due to these detectors and determined the fraction of primary particles that hit the target for a muon beam at each beam momentum. Of the studied detectors, a quartz Cherenkov detector caused the largest multiple scattering. Our results will guide further optimization of the detector setup. Supported in part by the U.S. National Science Foundation: NSF PHY-1205782.
Kuroda, Yasumasa; Dezawa, Mari
2014-01-01
Mesenchymal stem cells (MSCs) have gained a great deal of attention for regenerative medicine because they can be obtained from easily accessible mesenchymal tissues, such as bone marrow, adipose tissue, and the umbilical cord, and have trophic and immunosuppressive effects to protect tissues. The most outstanding property of MSCs is their potential for differentiation into cells of all three germ layers. MSCs belong to the mesodermal lineage, but they are known to cross boundaries from mesodermal to ectodermal and endodermal lineages, and differentiate into a variety of cell types both in vitro and in vivo. Such behavior is exceptional for tissue stem cells. As observed with hematopoietic and neural stem cells, tissue stem cells usually generate cells that belong to the tissue in which they reside, and do not show triploblastic differentiation. However, the scientific basis for the broad multipotent differentiation of MSCs still remains an enigma. This review summarizes the properties of MSCs from representative mesenchymal tissues, including bone marrow, adipose tissue, and the umbilical cord, to demonstrate their similarities and differences. Finally, we introduce a novel type of pluripotent stem cell, multilineage-differentiating stress-enduring (Muse) cells, a small subpopulation of MSCs, which can explain the broad spectrum of differentiation ability in MSCs.
Reentry Motion and Aerodynamics of the MUSES-C Sample Return Capsule
NASA Astrophysics Data System (ADS)
Ishii, Nobuaki; Yamada, Tetsuya; Hiraki, Koju; Inatani, Yoshifumi
The Hayabusa spacecraft (MUSES-C) carries a small capsule for bringing asteroid samples back to the earth. The initial spin rate of the reentry capsule together with the flight path angle of the reentry trajectory is a key parameter for the aerodynamic motion during the reentry flight. The initial spin rate is given by the spin-release mechanism attached between the capsule and the mother spacecraft, and the flight path angle can be modified by adjusting the earth approach orbit. To determine the desired values of both parameters, the attitude motion during atmospheric flight must be clarified, and angles of attack at the maximum dynamic pressure and the parachute deployment must be assessed. In previous studies, to characterize the aerodynamic effects of the reentry capsule, several wind-tunnel tests were conducted using the ISAS high-speed flow test facilities. In addition to the ground test data, the aerodynamic properties in hypersonic flows were analyzed numerically. Moreover, these data were made more accurate using the results of balloon drop tests. This paper summarizes the aerodynamic properties of the reentry capsule and simulates the attitude motion of the full-configuration capsule during atmospheric flight in three dimensions with six degrees of freedom. The results show the best conditions for the initial spin rates and flight path angles of the reentry trajectory.
Multi-band imaging camera and its sciences for the Japanese near-earth asteroid mission MUSES-C
NASA Astrophysics Data System (ADS)
Nakamura, Tsuko; Nakamura, Akiko M.; Saito, Jun; Sasaki, Sho; Nakamura, Ryosuke; Demura, Hirohide; Akiyama, Hiroaki; Tholen, David
2001-11-01
In this paper we present the current development status of our Asteroid Multi-band Imaging CAmera (AMICA) for the Japan-US joint asteroid sample return mission MUSES-C. The launch of the spacecraft is planned for around the end of 2002, and the whole mission period until sample retrieval on Earth will be approximately five years. The nominal target is the asteroid 1998 SF36, one of the Amor-type asteroids. The AMICA specifications for the mission are shown here along with its ground-based and inflight calibration methods. We also describe the observational scenario at the asteroid, in relation to scientific goals.
NASA Astrophysics Data System (ADS)
Moralejo, B.; Roth, M. M.; Godefroy, P.; Fechner, T.; Bauer, S. M.; Schmälzlin, E.; Kelz, A.; Haynes, R.
2016-07-01
After having demonstrated that an IFU, attached to a microscope rather than to a telescope, is capable of differentiating complex organic tissue with spatially resolved Raman spectroscopy, we have launched a clinical validation program that utilizes a novel optimized fiber-coupled multi-channel spectrograph whose layout is based on the modular MUSE spectrograph concept. The new design features a telecentric input and has an extended blue performance, but otherwise maintains the properties of high throughput and excellent image quality over an octave of wavelength coverage with modest spectral resolution. We present the opto-mechanical layout and details of its optical performance.
Assessment of HIV testing among young methamphetamine users in Muse, Northern Shan State, Myanmar
2014-01-01
Background Methamphetamine (MA) use has a strong correlation with risky sexual behaviors, and thus may be triggering the growing HIV epidemic in Myanmar. Although methamphetamine use is a serious public health concern, only a few studies have examined HIV testing among young drug users. This study aimed to examine how predisposing, enabling and need factors affect HIV testing among young MA users. Methods A cross-sectional study was conducted from January to March 2013 in Muse city in the Northern Shan State of Myanmar. Using a respondent-driven sampling method, 776 MA users aged 18-24 years were recruited. The main outcome of interest was whether participants had ever been tested for HIV. Descriptive statistics and multivariate logistic regression were applied in this study. Results Approximately 14.7% of young MA users had ever been tested for HIV. Significant positive predictors of HIV testing included predisposing factors such as being a female MA user, having had higher education, and currently living with one’s spouse/sexual partner. Significant enabling factors included being employed and having ever visited NGO clinics or met NGO workers. Significant need factors were having ever been diagnosed with an STI and having ever wanted to receive help to stop drug use. Conclusions Predisposing, enabling and need factors were significant contributors affecting uptake of HIV testing among young MA users. Integrating HIV testing into STI treatment programs, alongside general expansion of HIV testing services may be effective in increasing HIV testing uptake among young MA users. PMID:25042697
Spectral mapping of comet 67P/Churyumov-Gerasimenko with VLT/MUSE and SINFONI
NASA Astrophysics Data System (ADS)
Guilbert-Lepoutre, Aurelie; Besse, Sebastien; Snodgrass, Colin; Yang, Bin
2016-10-01
Comets are thought to be the most primitive objects in the solar system, preserving the earliest record of material from the nebula out of which our Sun and planets were formed, and thus holding crucial clues on the early phases of the solar system's formation and evolution. For most small bodies in the solar system we can only access the surface properties, whereas active comet nuclei lose material from their subsurface, so that understanding cometary activity represents a unique opportunity to assess their internal composition, and by extension the composition, temperature, and pressure conditions of the protoplanetary disk at their place of formation. The ESA/Rosetta mission is performing the most thorough investigation of a comet ever made. Rosetta is measuring properties of comet 67P/Churyumov-Gerasimenko at distances between 5 and hundreds of km from the nucleus. However, it is unable to make any measurement over the thousands of km of the rest of the coma. Fortunately, the outer coma is accessible from the ground. In addition, we currently lack an understanding of how the very detailed information gathered from space-based observations can be extrapolated to the many ground-based observations that we can potentially perform. Combining parallel in situ observations with observations from the ground therefore gives us a great opportunity, not only to understand the behavior of 67P, but also to extend that understanding to other comets observed exclusively from Earth. As part of the many observations taken from the ground, we have performed a spectral mapping of 67P's coma using two IFU instruments mounted on the VLT: MUSE in the visible, and SINFONI in the near-infrared. The observations, carried out in March 2016, will be presented and discussed.
Star Observations by Asteroid Multiband Imaging Camera (AMICA) on Hayabusa (MUSES-C) Cruising Phase
NASA Astrophysics Data System (ADS)
Saito, J.; Hashimoto, T.; Kubota, T.; Hayabusa AMICA Team
Muses-C is the first Japanese asteroid mission, and also a technology demonstration mission, to the S-type asteroid 25143 Itokawa (1998 SF36). It was launched on May 9, 2003, and renamed Hayabusa after the spacecraft was confirmed to be on its interplanetary orbit. The spacecraft performed an Earth swingby for gravity assist on its way to Itokawa in May 2004. Arrival at Itokawa is scheduled for summer 2005. During the visit to Itokawa, remote-sensing observations with AMICA, NIRS (Near Infrared Spectrometer), XRS (X-ray Fluorescence Spectrometer), and LIDAR will be performed, and the spacecraft will descend and collect surface samples at touchdown. The captured asteroid sample will be returned to the Earth in the middle of 2007. The telescopic optical navigation camera (ONC-T), with seven bandpass filters (and one wide-band filter) and polarizers, is called AMICA (Asteroid Multiband Imaging CAmera) when ONC-T is used for scientific observations. AMICA's seven bandpass filters are nearly equivalent to the seven filters of the ECAS (Eight Color Asteroid Survey) system. Obtained spectroscopic data will be compared with previously obtained ECAS observations. AMICA also has four polarizers, which are located on one edge of the CCD chip (covering 1.1 x 1.1 degrees each). Using the polarizers of AMICA, we can obtain polarimetric information on the target asteroid's surface. Since last November, we have planned test observations of stars and planets with AMICA and successfully obtained images. Here, we briefly report these observations and their calibration against ground-based observational data. In addition, we also present the current status of AMICA.
NASA Astrophysics Data System (ADS)
Jauzac, M.; Richard, J.; Limousin, M.; Knowles, K.; Mahler, G.; Smith, G. P.; Kneib, J.-P.; Jullo, E.; Natarajan, P.; Ebeling, H.; Atek, H.; Clément, B.; Eckert, D.; Egami, E.; Massey, R.; Rexroth, M.
2016-04-01
We present a high-precision mass model of the galaxy cluster MACS J1149.6+2223, based on a strong gravitational lensing analysis of Hubble Space Telescope Frontier Fields (HFF) imaging data and spectroscopic follow-up with Gemini/GMOS (Gemini Multi-Object Spectrograph) and Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE). Our model includes 12 new multiply imaged galaxies, bringing the total to 22, composed of 65 individual lensed images. Unlike the first two HFF clusters, Abell 2744 and MACS J0416.1-2403, MACS J1149 does not reveal as many multiple images in the HFF data. Using the LENSTOOL software package and the new sets of multiple images, we model the cluster with several cluster-scale dark matter haloes and additional galaxy-scale haloes for the cluster members. Consistent with previous analyses, we find the system to be complex, composed of five cluster-scale haloes. Their spatial distribution and lower mass, however, make MACS J1149 a less powerful lens. Our best-fitting model predicts image positions with an rms of 0.91 arcsec. We measure the total projected mass inside a 200-kpc aperture as (1.840 ± 0.006) × 1014 M⊙, thus reaching again 1 per cent precision, following our previous HFF analyses of MACS J0416.1-2403 and Abell 2744. In light of the discovery of the first resolved quadruply lensed supernova, SN Refsdal, in one of the multiply imaged galaxies identified in MACS J1149, we use our revised mass model to investigate the time delays and predict the rise of the next image between 2015 November and 2016 January.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
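The fusion rule described above, in which multiple warped atlases propose labels and locally better-matching atlases carry more weight, can be sketched in a few lines. This is a toy illustration, not the released MUSE package: the voxel grid is flattened to a list and the per-atlas similarity weights are assumed inputs.

```python
from collections import defaultdict

# Minimal sketch of similarity-weighted consensus label fusion: each warped
# atlas proposes one label per voxel, and votes are weighted by a per-atlas
# similarity score (higher = better match to the target image).

def fuse_labels(atlas_labels, similarities):
    """atlas_labels: list of equal-length label sequences, one per atlas.
    similarities: one weight per atlas."""
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = defaultdict(float)
        for labels, weight in zip(atlas_labels, similarities):
            votes[labels[v]] += weight
        fused.append(max(votes, key=votes.get))  # highest-weighted label wins
    return fused

# Three toy "atlases" disagree on voxel 1; the two better-matching ones win.
atlases = [[1, 1, 2], [1, 2, 2], [1, 2, 3]]
weights = [0.5, 0.9, 0.8]
print(fuse_labels(atlases, weights))  # -> [1, 2, 2]
```

A real pipeline would compute the similarity weights locally (e.g. per patch) rather than per atlas, and would add the intensity-based boundary refinement the abstract mentions; this sketch only shows the ranked-voting core.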
Comparing the properties of the X-shaped bulges of NGC 4710 and the Milky Way with MUSE
NASA Astrophysics Data System (ADS)
Gonzalez, O. A.; Gadotti, D. A.; Debattista, V. P.; Rejkuba, M.; Valenti, E.; Zoccali, M.; Coccato, L.; Minniti, D.; Ness, M.
2016-06-01
Context. Our view of the structure of the Milky Way and, in particular, its bulge is obscured by the intervening stars, dust, and gas in the disc. While great progress in understanding the bulge has been achieved with past and ongoing observations, the comparison of its global chemodynamical properties with those of bulges seen in external galaxies has yet to be accomplished. Aims: We used the Multi Unit Spectroscopic Explorer (MUSE) instrument installed on the Very Large Telescope (VLT) to obtain spectral and imaging coverage of NGC 4710. The wide area and excellent sampling of the MUSE integral field spectrograph allow us to investigate the dynamical properties of the X-shaped bulge of NGC 4710 and compare it with the properties of the X-shaped bulge of the Milky Way. Methods: We measured the radial velocities, velocity dispersion, and stellar populations using a penalised pixel full spectral fitting technique adopting simple stellar populations models, on a 1' × 1' area centred on the bulge of NGC 4710. We constructed the velocity maps of the bulge of NGC 4710 and investigated the presence of vertical metallicity gradients. These properties were compared to those of the Milky Way bulge and to a simulated galaxy with a boxy-peanut bulge. Results: We find the line-of-sight velocity maps and 1D rotation curves of the bulge of NGC 4710 to be remarkably similar to those of the Milky Way bulge. Some specific differences that were identified are in good agreement with the expectations from variations in the bar orientation angle. The bulge of NGC 4710 has a boxy-peanut morphology with a pronounced X-shape, showing no indication of any additional spheroidally distributed bulge population, in which we measure a vertical metallicity gradient of 0.35 dex/kpc. Conclusions: The general properties of NGC 4710 are very similar to those observed in the Milky Way bulge. However, it has been suggested that the Milky Way bulge has an additional component that is…
Time of flight and the MUSE experiment in the PIM1 Channel at the Paul Scherrer Institute
NASA Astrophysics Data System (ADS)
Lin, Wan; MUSE Collaboration
2015-10-01
The MUSE experiment in the PIM1 Channel at the Paul Scherrer Institute in Villigen, Switzerland, measures scattering of electrons and muons from a liquid hydrogen target. The intent of the experiment is to deduce from the scattering probabilities whether the radius of the proton is the same when determined from the scattering of the two different particle types. An important technique for the experiment is precise timing measurements, using high precision scintillators and a beam Cherenkov counter. We will describe the motivations for the precise timing measurement. We will present results for the timing measurements from prototype experimental detectors. We will also present results from a simulation program, Geant4, that was used to calculate energy loss corrections to the time of flight determined between the beam Cherenkov counter and the scintillator. This work is supported in part by the U.S. National Science Foundation Grant PHY 1306126 and the Douglass Project for Women in Math, Science, and Engineering.
MUSE Reveals a Recent Merger in the Post-starburst Host Galaxy of the TDE ASASSN-14li
NASA Astrophysics Data System (ADS)
Prieto, J. L.; Krühler, T.; Anderson, J. P.; Galbany, L.; Kochanek, C. S.; Aquino, E.; Brown, J. S.; Dong, Subo; Förster, F.; Holoien, T. W.-S.; Kuncarayakti, H.; Maureira, J. C.; Rosales-Ortega, F. F.; Sánchez, S. F.; Shappee, B. J.; Stanek, K. Z.
2016-10-01
We present Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopic observations of the host galaxy (PGC 043234) of one of the closest (z = 0.0206, D ≃ 90 Mpc) and best-studied tidal disruption events (TDEs), ASASSN-14li. The MUSE integral field data reveal asymmetric and filamentary structures that extend up to ≳10 kpc from the post-starburst host galaxy of ASASSN-14li. The structures are traced only through the strong nebular [O iii] λ5007, [N ii] λ6584, and Hα emission lines. The total off-nuclear [O iii] λ5007 luminosity is 4.7 × 10³⁹ erg s⁻¹, and the ionized H mass is ∼10⁴ (500/nₑ) M⊙. Based on the Baldwin-Phillips-Terlevich diagram, the nebular emission can be driven by either AGN photoionization or shock excitation, with AGN photoionization favored given the narrow intrinsic line widths. The emission line ratios and spatial distribution strongly resemble ionization nebulae around fading AGNs such as IC 2497 (Hanny's Voorwerp) and ionization “cones” around Seyfert 2 nuclei. The morphology of the emission line filaments strongly suggests that PGC 043234 is a recent merger, which likely triggered a strong starburst and AGN activity leading to the post-starburst spectral signatures and the extended nebular emission line features we see today. We briefly discuss the implications of these observations in the context of the strongly enhanced TDE rates observed in post-starburst galaxies and their connection to enhanced theoretical TDE rates produced by supermassive black hole binaries.
ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models
NASA Astrophysics Data System (ADS)
Winston, R. B.
2013-12-01
ModelMuse is a free, publicly available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software website. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three-dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.
Iseki, Masahiro; Kushida, Yoshihiro; Wakao, Shohei; Akimoto, Takahiro; Mizuma, Masamichi; Motoi, Fuyuhiko; Asada, Ryuta; Shimizu, Shinobu; Unno, Michiaki; Chazenbalk, Gregorio; Dezawa, Mari
2016-11-02
Muse cells, a novel type of non-tumorigenic pluripotent-like stem cell residing in the bone marrow, skin, and adipose tissue, are collectable as cells positive for the pluripotent surface marker SSEA-3. They are able to differentiate into cells representative of all three germ layers. The capacity of intravenously injected human bone marrow-Muse cells to repair the liver fibrosis model of immunodeficient mice was evaluated in this study. In vitro, they exhibited spontaneous differentiation into hepatoblast/hepatocyte-lineage cells and high migration toward the serum and liver tissue of carbon tetrachloride-treated mice. In vivo, they accumulated specifically in the liver, but not in other organs (apart from a low rate in the lung), at 2 weeks after intravenous injection into the liver fibrosis model. After homing, Muse cells spontaneously differentiated in vivo into HepPar-1 (71.1 ± 15.2%), human albumin (54.3 ± 8.2%), and anti-trypsin (47.9 ± 4.6%)-positive cells without fusing with host hepatocytes, and expressed mature functional markers such as human CYP1A2 and human Glc-6-Pase at 8 weeks. Recovery of serum total bilirubin and albumin and significant attenuation of fibrosis were observed, with statistically significant differences between the Muse group and control groups, which received either the vehicle or the same number of non-Muse cells, namely the cells in bone marrow mesenchymal stem cells other than Muse cells. Thus, unlike ES and iPS cells, Muse cells are unique in their efficient migration and integration into the damaged liver after only an intravenous injection, their non-tumorigenicity, and their spontaneous differentiation into hepatocytes, rendering induction into hepatocytes prior to transplantation unnecessary. They are suggested to repair liver fibrosis in two simple steps: expansion after collection from the bone marrow, and intravenous injection. Such a feasible strategy might provide impressive regenerative benefits to patients with liver disease.
The XXL Survey. VIII. MUSE characterisation of intracluster light in a z ~ 0.53 cluster of galaxies
NASA Astrophysics Data System (ADS)
Adami, C.; Pompei, E.; Sadibekova, T.; Clerc, N.; Iovino, A.; McGee, S. L.; Guennou, L.; Birkinshaw, M.; Horellou, C.; Maurogordato, S.; Pacaud, F.; Pierre, M.; Poggianti, B.; Willis, J.
2016-06-01
Aims: Within a cluster, gravitational effects can lead to the removal of stars from their parent galaxies and their subsequent dispersal into the intracluster medium. Gas hydrodynamical effects can additionally strip gas and dust from galaxies; both gas and stars contribute to intracluster light (ICL). The properties of the ICL can therefore help constrain the physical processes at work in clusters by serving as a fossil record of the interaction history. Methods: The present study is designed to characterise this ICL for the first time in a ∼10¹⁴ M⊙ and z ~ 0.53 cluster of galaxies from imaging and spectroscopic points of view. By applying a wavelet-based method to CFHT Megacam and WIRCAM images, we detect significant quantities of diffuse light and are able to constrain their spectral energy distributions. These sources were then characterised spectroscopically with ESO Multi Unit Spectroscopic Explorer (MUSE) data. MUSE data were also used to compute redshifts of 24 cluster galaxies and search for cluster substructures. Results: An atypically large amount of ICL, equivalent in i' to the emission from two brightest cluster galaxies, has been detected in this cluster. Part of the detected diffuse light has a very weak optical stellar component and apparently consists mainly of gas emission, while other diffuse light sources are clearly dominated by old stars. Furthermore, emission lines were detected at several locations in the diffuse light. Our spectral analysis shows that this emission likely originates from low-excitation parameter gas. Globally, the stellar contribution to the ICL is about 2.3 × 10⁹ yr old even though the ICL is not currently forming a large number of stars. On the other hand, the contribution of the gas emission to the ICL in the optical is much greater than the stellar contribution in some regions, but the gas density is likely too low to form stars. These observations favour ram pressure stripping, turbulent viscous stripping, or…
Chang, Hing-Chiu; Gaur, Pooja; Chou, Ying-hui; Chu, Mei-Lan; Chen, Nan-kuei
2014-01-01
Functional magnetic resonance imaging (fMRI) is a non-invasive and powerful imaging tool for detecting brain activities. The majority of fMRI studies are performed with single-shot echo-planar imaging (EPI) due to its high temporal resolution. Recent studies have demonstrated that, by increasing the spatial resolution of fMRI, previously unidentified neuronal networks can be measured. However, it is challenging to improve the spatial resolution of conventional single-shot EPI based fMRI. Although multi-shot interleaved EPI is superior to single-shot EPI in terms of the improved spatial resolution, reduced geometric distortions, and sharper point spread function (PSF), interleaved EPI based fMRI has two main limitations: 1) the imaging throughput is lower in interleaved EPI; 2) the magnitude and phase signal variations among EPI segments (due to physiological noise, subject motion, and B0 drift) are translated to significant in-plane aliasing artifact across the field of view (FOV). Here we report a method that integrates multiple approaches to address the technical limitations of interleaved EPI-based fMRI. Firstly, the multiplexed sensitivity-encoding (MUSE) post-processing algorithm is used to suppress in-plane aliasing artifacts resulting from time-domain signal instabilities during dynamic scans. Secondly, a simultaneous multi-band interleaved EPI pulse sequence, with a controlled aliasing scheme incorporated, is implemented to increase the imaging throughput. Thirdly, the MUSE algorithm is then generalized to accommodate fMRI data obtained with our multi-band interleaved EPI pulse sequence, suppressing both in-plane and through-plane aliasing artifacts. The blood-oxygenation-level-dependent (BOLD) signal detectability and the scan throughput can be significantly improved for interleaved EPI-based fMRI. Our human fMRI data obtained from 3 Tesla systems demonstrate the effectiveness of the developed methods. It is expected that future fMRI studies requiring high…
NASA Astrophysics Data System (ADS)
Fumagalli, Michele; Cantalupo, Sebastiano; Dekel, Avishai; Morris, Simon L.; O'Meara, John M.; Prochaska, J. Xavier; Theuns, Tom
2016-10-01
We report on the search for galaxies in the proximity of two very metal-poor gas clouds at z ˜ 3 towards the quasar Q0956+122. With a 5-hour Multi-Unit Spectroscopic Explorer (MUSE) integration in a ˜500 × 500 kpc² region centred at the quasar position, we achieve a ≥80 per cent complete spectroscopic survey of continuum-detected galaxies with mR ≤ 25 mag and Lyα emitters with luminosity L(Lyα) ≥ 3 × 10⁴¹ erg s⁻¹. We do not identify galaxies at the redshift of a z ˜ 3.2 Lyman limit system (LLS) with log Z/Z⊙ = -3.35 ± 0.05, placing this gas cloud in the intergalactic medium or circumgalactic medium of a galaxy below our sensitivity limits. Conversely, we detect five Lyα emitters at the redshift of a pristine z ˜ 3.1 LLS with log Z/Z⊙ ≤ -3.8, while ˜0.4 sources were expected given the z ˜ 3 Lyα luminosity function. Both this high detection rate and the fact that at least three emitters appear aligned in projection with the LLS suggest that this pristine cloud is tracing a gas filament that is feeding one or multiple galaxies. Our observations uncover two different environments for metal-poor LLSs, implying a complex link between these absorbers and galaxy haloes, which ongoing MUSE surveys will soon explore in detail. Moreover, in agreement with recent MUSE observations, we detected a ˜90 kpc Lyα nebula at the quasar redshift and three Lyα emitters reminiscent of a `dark galaxy' population.
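As a back-of-the-envelope aside (our illustration, not a calculation from the abstract): if only ˜0.4 emitters were expected from the field luminosity function, Poisson statistics make a chance detection of five or more emitters extremely unlikely, which is why the excess is interpreted as a physical association.

```python
import math

def poisson_p_at_least(k, lam):
    """P(N >= k) for N ~ Poisson(lam), via 1 minus the lower tail."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))

# Probability of detecting >= 5 emitters by chance when only ~0.4 were
# expected from the field Lyα luminosity function:
p = poisson_p_at_least(5, 0.4)
print(f"P(N >= 5 | lam = 0.4) = {p:.1e}")  # of order 1e-4 or smaller
```

The probability comes out around 6 × 10⁻⁵, i.e. the five detections are far in excess of a chance alignment under this simple model.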
NASA Astrophysics Data System (ADS)
Fumagalli, Michele; Fossati, Matteo; Hau, George K. T.; Gavazzi, Giuseppe; Bower, Richard; Sun, Ming; Boselli, Alessandro
2014-12-01
We present Multi Unit Spectroscopic Explorer (MUSE) observations of ESO137-001, a spiral galaxy infalling towards the centre of the massive Norma cluster at z ˜ 0.0162. During the high-velocity encounter of ESO137-001 with the intracluster medium, a dramatic ram-pressure stripping event gives rise to an extended gaseous tail, traced by our MUSE observations to >30 kpc from the galaxy centre. By studying the Hα surface brightness and kinematics in tandem with the stellar velocity field, we conclude that ram pressure has completely removed the interstellar medium from the outer disc, while the primary tail is still fed by gas from the inner regions. Gravitational interactions do not appear to be a primary mechanism for gas removal. The stripped gas retains the imprint of the disc rotational velocity to ˜20 kpc downstream, without a significant gradient along the tail, which suggests that ESO137-001 is fast moving along a radial orbit in the plane of the sky. Conversely, beyond ˜20 kpc, a greater degree of turbulence is seen, with velocity dispersion up to ≳100 km s⁻¹. For a model-dependent infall velocity of v_inf ˜ 3000 km s⁻¹, we conclude that the transition from laminar to turbulent flow in the tail occurs on time-scales ≥6.5 Myr. Our work demonstrates the terrific potential of MUSE for detailed studies of how ram-pressure stripping operates on small scales, providing a deep understanding of how galaxies interact with the dense plasma of the cluster environment.
Ubiquitous Giant Lyα Nebulae around the Brightest Quasars at z ˜ 3.5 Revealed with MUSE
NASA Astrophysics Data System (ADS)
Borisova, Elena; Cantalupo, Sebastiano; Lilly, Simon J.; Marino, Raffaella A.; Gallego, Sofia G.; Bacon, Roland; Blaizot, Jeremy; Bouché, Nicolas; Brinchmann, Jarle; Carollo, C. Marcella; Caruana, Joseph; Finley, Hayley; Herenz, Edmund C.; Richard, Johan; Schaye, Joop; Straka, Lorrie A.; Turner, Monica L.; Urrutia, Tanya; Verhamme, Anne; Wisotzki, Lutz
2016-11-01
Direct Lyα imaging of intergalactic gas at z ˜ 2 has recently revealed giant cosmological structures around quasars, e.g., the Slug Nebula. Despite their high luminosity, the detection rate of such systems in narrow-band and spectroscopic surveys is less than 10%, possibly encoding crucial information on the distribution of gas around quasars and the quasar emission properties. In this study, we use the MUSE integral-field instrument to perform a blind survey for giant Lyα nebulae around 17 bright radio-quiet quasars at 3 < z < 4 that does not suffer from most of the limitations of previous surveys. After data reduction and analysis performed with specifically developed tools, we found that each quasar is surrounded by giant Lyα nebulae with projected sizes larger than 100 physical kiloparsecs and, in some cases, extending up to 320 kpc. The circularly averaged surface brightness profiles of the nebulae appear to be very similar to each other despite their different morphologies and are consistent with power laws with slopes ≈ -1.8. The similarity between the properties of all these nebulae and the Slug Nebula suggests a similar origin for all systems and that a large fraction of gas around bright quasars could be in a relatively “cold” (T ˜ 10⁴ K) and dense phase. In addition, our results imply that such gas is ubiquitous within at least 50 kpc from bright quasars at 3 < z < 4 independently of the quasar emission opening angle, or extending up to 200 kpc for quasar isotropic emission. Based on observations made with ESO Telescopes at the Paranal Observatory under programs 094.A-0396, 095.A-0708, 096.A-0345, 094.A-0131, 095.A-0200, and 096.A-0222.
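To make the quoted slope concrete: a power-law surface-brightness profile SB ∝ r^α becomes a straight line in log-log space, so α can be estimated by ordinary least squares on the logged data. This toy sketch (ours, with synthetic data; the radii and the α = -1.8 value are illustrative, not measurements from the paper) shows the recovery:

```python
import math

def powerlaw_slope(radii, sb):
    """Least-squares slope of log10(SB) vs log10(r), i.e. alpha in SB ∝ r^alpha."""
    xs = [math.log10(r) for r in radii]
    ys = [math.log10(s) for s in sb]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Synthetic circularly averaged profile with slope -1.8 at illustrative radii (kpc):
r = [10, 20, 40, 80, 160]
sb = [r_i ** -1.8 for r_i in r]
print(round(powerlaw_slope(r, sb), 3))  # -1.8
```

In practice one would fit the observed annular averages with uncertainties rather than noise-free points, but the log-log regression is the core of the estimate.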
NASA Astrophysics Data System (ADS)
Swinbank, A. M.; Vernet, J. D. R.; Smail, Ian; De Breuck, C.; Bacon, R.; Contini, T.; Richard, J.; Röttgering, H. J. A.; Urrutia, T.; Venemans, B.
2015-05-01
We present Multi Unit Spectroscopic Explorer (MUSE) integral field unit spectroscopic observations of the ˜150 kpc Lyα halo around the z = 4.1 radio galaxy TN J1338-1942. This 9-h observation maps the full two-dimensional kinematics of the Lyα emission across the halo, which shows a velocity gradient of Δv ˜ 700 km s-1 across 150 kpc in projection, and also identified two absorption systems associated with the Lyα emission from the radio galaxy. Both absorbers have high covering fractions (˜1) spanning the full ˜150 × 80 kpc2 extent of the halo. The stronger and more blueshifted absorber (Δv ˜ -1200 km s-1 from the systemic) has dynamics that mirror that of the underlying halo emission and we suggest that this high column material (n(H I) ˜ 1019.4 cm-2), which is also seen in C IV absorption, represents an outflowing shell that has been driven by the active galactic nuclei (AGN) or the star formation within the galaxy. The weaker (n(H I) ˜ 1014 cm-2) and less blueshifted (Δv ˜ -500 km s-1) absorber most likely represents material in the cavity between the outflowing shell and the Lyα halo. We estimate that the mass in the shell must be ˜1010 M⊙ - a significant fraction of the interstellar medium from a galaxy at z = 4. The large scales of these coherent structures illustrate the potentially powerful influence of AGN feedback on the distribution and energetics of material in their surroundings. Indeed, the discovery of high-velocity (˜1000 km s-1), group-halo-scale (i.e. >150 kpc) and mass-loaded winds in the vicinity of the central radio source is in agreement with the requirements of models that invoke AGN-driven outflows to regulate star formation and black hole growth in massive galaxies.
Chen, Nan-Kuei; Guidon, Arnaud; Chang, Hing-Chiu; Song, Allen W
2013-05-15
Diffusion weighted magnetic resonance imaging (DWI) data have been mostly acquired with single-shot echo-planar imaging (EPI) to minimize motion induced artifacts. The spatial resolution, however, is inherently limited in single-shot EPI, even when parallel imaging (usually at an acceleration factor of 2) is incorporated. Multi-shot acquisition strategies could potentially achieve higher spatial resolution and fidelity, but they are generally susceptible to motion-induced phase errors among excitations that are exacerbated by diffusion sensitizing gradients, rendering the reconstructed images unusable. It has been shown that shot-to-shot phase variations may be corrected using navigator echoes, but at the cost of imaging throughput. To address these challenges, a novel and robust multi-shot DWI technique, termed multiplexed sensitivity-encoding (MUSE), is developed here to reliably and inherently correct nonlinear shot-to-shot phase variations without the use of navigator echoes. The performance of the MUSE technique is confirmed experimentally in healthy adult volunteers on 3 Tesla MRI systems. This newly developed technique should prove highly valuable for mapping brain structures and connectivities at high spatial resolution for neuroscience studies.
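A schematic illustration of the core idea (ours, not the authors' implementation): once the coil sensitivities and the shot-to-shot phase maps are known, the aliased samples from all shots and coils form one overdetermined linear system whose least-squares solution is the unaliased image. A minimal two-pixel, two-coil, two-shot toy in pure Python:

```python
import cmath

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x0, x1] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Two pixels that alias onto each other at acceleration factor R = 2.
x_true = [1.0 + 0.0j, 0.5 + 0.5j]
sens = [[1.0, 0.6], [0.4, 1.0]]            # coil sensitivities: sens[coil][pixel]
phase = [[1.0, 1.0],                        # shot-to-shot phase: phase[shot][pixel]
         [cmath.exp(0.8j), cmath.exp(-0.3j)]]

# Forward model: each (shot, coil) pair yields one aliased sample.
rows, y = [], []
for t in range(2):
    for c in range(2):
        row = [sens[c][0] * phase[t][0], sens[c][1] * phase[t][1]]
        rows.append(row)
        y.append(row[0] * x_true[0] + row[1] * x_true[1])

# Least squares via the normal equations A^H A x = A^H y.
aha = [[sum(r[i].conjugate() * r[j] for r in rows) for j in range(2)]
       for i in range(2)]
ahy = [sum(r[i].conjugate() * yi for r, yi in zip(rows, y)) for i in range(2)]
x_hat = solve_2x2(aha[0][0], aha[0][1], aha[1][0], aha[1][1], ahy[0], ahy[1])

print(max(abs(x_hat[k] - x_true[k]) for k in range(2)) < 1e-9)  # True
```

The real method estimates the shot phase maps from low-resolution SENSE reconstructions of each shot and solves this joint system at every aliased pixel group; the toy only shows why pooling shots with known phases resolves the aliasing.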
Chen, Nan-kuei; Guidon, Arnaud; Chang, Hing-Chiu; Song, Allen W.
2013-01-01
Diffusion weighted magnetic resonance imaging (DWI) data have been mostly acquired with single-shot echo-planar imaging (EPI) to minimize motion induced artifacts. The spatial resolution, however, is inherently limited in single-shot EPI, even when the parallel imaging (usually at an acceleration factor of 2) is incorporated. Multi-shot acquisition strategies could potentially achieve higher spatial resolution and fidelity, but they are generally susceptible to motion-induced phase errors among excitations that are exacerbated by diffusion sensitizing gradients, rendering the reconstructed images unusable. It has been shown that shot-to-shot phase variations may be corrected using navigator echoes, but at the cost of imaging throughput. To address these challenges, a novel and robust multi-shot DWI technique, termed multiplexed sensitivity-encoding (MUSE), is developed here to reliably and inherently correct nonlinear shot-to-shot phase variations without the use of navigator echoes. The performance of the MUSE technique is confirmed experimentally in healthy adult volunteers on 3 Tesla MRI systems. This newly developed technique should prove highly valuable for mapping brain structures and connectivities at high spatial resolution for neuroscience studies. PMID:23370063
A Revised Planetary Nebula Luminosity Function Distance to NGC 628 Using MUSE
NASA Astrophysics Data System (ADS)
Kreckel, K.; Groves, B.; Bigiel, F.; Blanc, G. A.; Kruijssen, J. M. D.; Hughes, A.; Schruba, A.; Schinnerer, E.
2017-01-01
Distance uncertainties plague our understanding of the physical scales relevant to the physics of star formation in extragalactic studies. The planetary nebula luminosity function (PNLF) is one of very few techniques that can provide distance estimates to within ∼10%; however, it requires a planetary nebula (PN) sample that is uncontaminated by other ionizing sources. We employ optical integral field unit spectroscopy using the Multi-Unit Spectroscopic Explorer on the Very Large Telescope to measure [O iii] line fluxes for sources unresolved on 50 pc scales within the central star-forming galaxy disk of NGC 628. We use diagnostic line ratios to identify 62 PNe, 30 supernova remnants, and 87 H ii regions within our fields. Using the 36 brightest PNe, we determine a new PNLF distance modulus of 29.91 (+0.08/-0.13) mag (9.59 (+0.35/-0.57) Mpc), which is in good agreement with literature values, but significantly larger than the previously reported PNLF distance. We are able to explain the discrepancy and recover the previous result when we reintroduce SNR contaminants to our sample. This demonstrates the power of full spectral information over narrowband imaging in isolating PNe. Given our limited spatial coverage within the galaxy, we show that this technique can be used to refine distance estimates, even when IFU observations cover only a fraction of a galaxy disk.
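The arithmetic connecting the quoted distance modulus to the distance in Mpc can be sketched as follows (our illustration; the PNLF bright-end cutoff M* ≈ -4.47 is a commonly used literature value and is not stated in the abstract): the modulus μ = m* - M* converts to a distance via d[Mpc] = 10^((μ - 25)/5).

```python
def distance_mpc(mu):
    """Distance in Mpc from a distance modulus mu = m - M."""
    return 10 ** ((mu - 25.0) / 5.0)

M_STAR = -4.47            # assumed PNLF bright-end cutoff magnitude (literature value)
mu = 29.91                # distance modulus quoted in the abstract
m_star = mu + M_STAR      # implied apparent cutoff magnitude, ~25.4 mag
print(round(distance_mpc(mu), 2))  # 9.59, matching the abstract's distance
```

The asymmetric errors on μ propagate through the same formula, which is why the Mpc uncertainties quoted in the abstract are also asymmetric.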
Estimating nonrigid motion from inconsistent intensity with robust shape features
Liu, Wenyang; Ruan, Dan
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline-based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline-based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided…
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1985-01-01
An approximation scheme is developed for the identification of hybrid systems describing the transverse vibrations of flexible beams with attached tip bodies. In particular, problems involving the estimation of functional parameters are considered. The identification problem is formulated as a least squares fit to data subject to the coupled system of partial and ordinary differential equations describing the transverse displacement of the beam and the motion of the tip bodies respectively. A cubic spline-based Galerkin method applied to the state equations in weak form and the discretization of the admissible parameter space yield a sequence of approximating finite dimensional identification problems. It is shown that each of the approximating problems admits a solution and that from the resulting sequence of optimal solutions a convergent subsequence can be extracted, the limit of which is a solution to the original identification problem. The approximating identification problems can be solved using standard techniques and readily available software.
NASA Astrophysics Data System (ADS)
Husser, Tim-Oliver; Kamann, Sebastian; Dreizler, Stefan; Wendt, Martin; Wulff, Nina; Bacon, Roland; Wisotzki, Lutz; Brinchmann, Jarle; Weilbacher, Peter M.; Roth, Martin M.; Monreal-Ibero, Ana
2016-04-01
Aims: We demonstrate the high multiplex advantage of crowded field 3D spectroscopy with the new integral field spectrograph MUSE by means of a spectroscopic analysis of more than 12 000 individual stars in the globular cluster NGC 6397. Methods: The stars are deblended with a point spread function fitting technique, using a photometric reference catalogue from HST as prior, including relative positions and brightnesses. This catalogue is also used for a first analysis of the extracted spectra, followed by an automatic in-depth analysis via a full-spectrum fitting method based on a large grid of PHOENIX spectra. Results: We analysed the largest sample so far available for a single globular cluster of 18 932 spectra from 12 307 stars in NGC 6397. We derived a mean radial velocity of v_rad = 17.84 ± 0.07 km s⁻¹ and a mean metallicity of [Fe/H] = -2.120 ± 0.002, with the latter seemingly varying with temperature for stars on the red giant branch (RGB). We determine Teff and [Fe/H] from the spectra, and log g from HST photometry. This is the first very comprehensive Hertzsprung-Russell diagram (HRD) for a globular cluster based on the analysis of several thousands of stellar spectra, ranging from the main sequence to the tip of the RGB. Furthermore, two interesting objects were identified; one is a post-AGB star and the other is a possible millisecond-pulsar companion. Data products are available at http://muse-vlt.eu/science. Based on observations obtained at the Very Large Telescope (VLT) of the European Southern Observatory, Paranal, Chile (ESO Programme ID 60.A-9100(C)).
Complex principal components for robust motion estimation.
Mauldin, F William; Viola, Francesco; Walker, William F
2010-11-01
cross-correlation with cosine fitting (NC CF). More modest gains were observed relative to spline-based time delay estimation (sTDE). PCDE was also tested on experimental elastography data. Compressions of approximately 1.5% were applied to a CIRS elastography phantom with embedded 10.4-mm-diameter lesions that had moduli contrasts of -9.2, -5.9, and 12.0 dB. The standard deviation of displacement estimates was reduced by at least 67% in homogeneous regions at 35 to 40 mm in depth with respect to estimates produced by Loupas, NC CF, and sTDE. Greater improvements in CNR and displacement standard deviation were observed at larger depths where speckle decorrelation and other noise sources were more significant.
ERIC Educational Resources Information Center
Alvarez, Antonio G.; Stauffer, Gary A.
2001-01-01
Critiques various definitions of adventure therapy, then suggests that adventure therapy is any intentional, facilitated use of adventure tools and techniques to guide personal change toward desired therapeutic goals. Reflects on the nature of adventure therapy through a discussion of the application of this definition and its implications for…
Introduction: Information and Musings
NASA Astrophysics Data System (ADS)
Shifman, M.
The following sections are included: * Victor Frenkel * Background * The Accused * Alexander Leipunsky * Alexander Weissberg * Holodomor * The beginning of the Great Purge * Other foreigners at UPTI * The Ruhemanns * Tisza * Lange * Weisselberg * A detective story * Stalin's order * Yuri Raniuk * Giovanna Fjelstad * Giovanna's story * First time in the USSR * Fisl's humor * Houtermans and Pomeranchuk * Choices to make * Closing gaps * Houtermans and the Communist Party of Germany * Houtermans and von Ardenne * Houtermans' trip to Russia in 1941 * Why Houtermans had to flee from Berlin in 1945 * Houtermans in Göttingen in the 1940's * Denazification * Moving to Bern * Yuri Golfand, the discoverer of supersymmetry * Bolotovsky's and Eskin's essays * Moisei Koretz * FIAN * Additional recommended literature * References
Kintsch, Walter
2012-01-01
In this essay, I explore how cognitive science could illuminate the concept of beauty. Two results from the extensive literature on aesthetics guide my discussion. As the term "beauty" is overextended in general usage, I choose as my starting point the notion of "perfect form." Aesthetic theorists are in reasonable agreement about the criteria for perfect form. What do these criteria imply for mental representations that are experienced as beautiful? Complexity theory can be used to specify constraints on mental representations abstractly formulated as vectors in a high-dimensional space. A central feature of the proposed model is that perfect form depends both on features of the objects or events perceived and on the nature of the encoding strategies or model of the observer. A simple example illustrates the proposed calculations. A number of interesting implications that arise as a consequence of reformulating beauty in this way are noted.
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers from sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors are dramatically reduced. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition, whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
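The interpolation step described above can be sketched in a few lines: fit a cubic spline to the cumulative rainfall at the recorded tip times, then differentiate it to obtain rain rates on a one-minute grid. This is a minimal illustration of the general technique, not the operational 2A-56 code; the function name, the 0.254 mm tip size, and the evaluation grid are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tb_rain_rate(tip_times_min, tip_mm=0.254, eval_times_min=None):
    """Estimate rain rate (mm/h) from tipping-bucket tip times by
    fitting a cubic spline to cumulative rainfall and differentiating.

    tip_times_min : minutes at which the bucket tipped (increasing)
    tip_mm        : rainfall per tip (0.254 mm = 0.01 in is common)
    """
    t = np.asarray(tip_times_min, dtype=float)
    cum = tip_mm * np.arange(1, t.size + 1)      # cumulative rain at each tip
    spline = CubicSpline(t, cum)
    if eval_times_min is None:
        eval_times_min = np.arange(t[0], t[-1] + 1.0)  # one-minute grid
    rate_mm_per_min = spline(eval_times_min, 1)  # first derivative
    return np.clip(rate_mm_per_min * 60.0, 0.0, None)  # mm/h, non-negative

# Steady rain: one tip every 2 minutes at 0.254 mm/tip -> 7.62 mm/h
rates = tb_rain_rate(np.arange(0, 21, 2))
```

With evenly spaced tips the cumulative curve is linear, so the spline derivative is constant; irregular tip spacing is where the spline's smoothing of the quantized record matters.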
NASA Astrophysics Data System (ADS)
Deblois, Annick
This qualitative multi-case study is grounded in the social-cognitive approach of Bandura's (1977) self-efficacy theory. It examines four teaching practicums that took place at the Musée canadien de la nature in Ottawa, Canada, in 2009. The use of secondary data from the translated and modified STEBI-B questionnaire (Dionne & Couture, 2010), together with semi-structured interviews, allowed an analysis of the change in the student teachers' sense of self-efficacy in science. The most interesting elements of this research are vicarious learning and the opportunity for repetition, which foster better self-knowledge and a reflective practice. The results, positive on the whole, illustrate the potential of such a practicum to strengthen the sense of self-efficacy in science among student teachers, particularly those preparing to teach at the elementary level, since they often have an academic background in a field other than science.
NASA Astrophysics Data System (ADS)
Smit, Renske; Swinbank, A. M.; Massey, Richard; Richard, Johan; Smail, Ian; Kneib, J.-P.
2017-01-01
We present a VLT/MUSE survey of lensed high-redshift galaxies behind the z = 0.77 cluster RCS 0224-0002. We study the detailed internal properties of a highly magnified (μ ~ 29) z = 4.88 galaxy seen through the cluster. We detect widespread nebular C IV λλ1548,1551 Å emission from this galaxy as well as a bright Lyα halo with a spatially uniform wind and absorption profile across 12 kpc in the image plane. Blueshifted high- and low-ionisation interstellar absorption indicates the presence of a high-velocity outflow (Δv ~ 300 km s^-1) from the galaxy. Unlike similar observations of galaxies at z ~ 2-3, the Lyα emission from the halo emerges close to the systemic velocity, an order of magnitude lower in velocity offset than predicted by "shell"-like outflow models. To explain these observations we favour a model of an outflow with a strong velocity gradient, which changes the effective column density seen by the Lyα photons. We also search for high-redshift Lyα emitters and identify 14 candidates between z = 4.8 and 6.6, including an over-density at z = 4.88, of which only one has a detected counterpart in HST/ACS+WFC3 imaging.
Al Abboud, Safaa Ahmed; Ahmad, Sohail; Bidin, Mohamed Badrulnizam Long
2016-01-01
Introduction: Diabetes Mellitus (DM) is a common silent epidemic disease with frequent morbidity and mortality. Psychological and psychosocial health factors negatively influence glycaemic control in diabetic patients. Therefore, various questionnaires have been developed to address the psychological and psychosocial well-being of diabetic patients. Most of these questionnaires were first developed in English and then translated into different languages to make them useful for local communities. Aim: The main aim of this study was to translate and validate the Malay versions of the Perceived Diabetes Self-Management Scale (PDSMS) and the Medication Understanding and Use Self-Efficacy Scale (MUSE), and to revalidate the 8-item Morisky Medication Adherence Scale (MMAS-8), using the Partial Credit Rasch Model (Modern Test Theory). Materials and Methods: Permission was obtained from the respective authors to translate the English versions of the PDSMS, MUSE and MMAS-8 into the Malay language according to established international translation guidelines. In this cross-sectional study, 62 adult DM patients were recruited from Hospital Kuala Lumpur by purposive sampling. The data were extracted from the self-administered questionnaires and entered manually into the Ministeps (Winsteps) software for the Partial Credit Rasch Model. The item and person reliability, infit/outfit Z-Standard (ZSTD), infit/outfit Mean Square (MNSQ) and point measure correlation (PTMEA Corr) values were analysed for reliability and construct validation. Results: The Malay versions of the PDSMS, MUSE and MMAS-8 were found to be valid and reliable instruments for Malaysian diabetic adults. The instrument showed good overall reliability values of 0.76 and 0.93 for item and person reliability, respectively. The values of infit/outfit ZSTD, infit/outfit MNSQ, and PTMEA Corr were also within the stipulated range of the Rasch Model, supporting the validity of the item constructs of the questionnaire. Conclusion: The
NASA Astrophysics Data System (ADS)
Vanzella, E.; Balestra, I.; Gronke, M.; Karman, W.; Caminha, G. B.; Dijkstra, M.; Rosati, P.; De Barros, S.; Caputi, K.; Grillo, C.; Tozzi, P.; Meneghetti, M.; Mercurio, A.; Gilli, R.
2017-03-01
We report the identification of extended Lyα nebulae at z ≃ 3.3 in the Hubble Ultra Deep Field (HUDF, ≃40 kpc × 80 kpc) and behind the Hubble Frontier Field galaxy cluster MACSJ0416 (≃40 kpc), spatially associated with groups of star-forming galaxies. VLT/MUSE integral field spectroscopy reveals a complex structure with a spatially varying double-peaked Lyα emission. Overall, the spectral profiles of the two Lyα nebulae are remarkably similar, both showing a prominent blue emission, more intense and slightly broader than the red peak. From the first nebula, located in the HUDF, no X-ray emission has been detected, disfavouring the possible presence of active galactic nuclei. Spectroscopic redshifts have been derived for 11 galaxies within 2 arcsec of the nebula, spanning the redshift range 1.037 < z < 5.97. The second nebula, behind MACSJ0416, shows three aligned star-forming galaxies plausibly associated with the emitting gas. In both systems, the associated galaxies reveal possibly intense rest-frame optical nebular emission lines [O III] λλ4959, 5007+Hβ with equivalent widths as high as 1500 Å rest frame and star formation rates ranging from a few to tens of solar masses per year. A possible scenario is that of a group of young, star-forming galaxies emitting ionizing radiation that induces Lyα fluorescence, thereby revealing the kinematics of the surrounding gas. Lyα emission powered by star formation and/or cooling radiation may also reproduce the double-peaked spectral properties and the morphology observed here. If the intense blue emission is associated with inflowing gas, then we may be witnessing an early phase of galaxy or proto-cluster (or group) formation.
NASA Astrophysics Data System (ADS)
Packard, Corey D.; Klein, Mark D.; Viola, Timothy S.; Hepokoski, Mark A.
2016-10-01
The ability to predict electro-optical (EO) signatures of diverse targets against cluttered backgrounds is paramount for signature evaluation and/or management. Knowledge of target and background signatures is essential for a variety of defense-related applications. While there is no substitute for measured target and background signatures to determine contrast and detection probability, the capability to simulate any mission scenario with desired environmental conditions is a tremendous asset for defense agencies. In this paper, a systematic process for the thermal and visible-through-infrared simulation of camouflaged human dismounts in cluttered outdoor environments is presented. This process, utilizing the thermal and EO/IR radiance simulation tool TAIThermIR (and MuSES), provides a repeatable and accurate approach for analyzing contrast, signature and detectability of humans in multiple wavebands. The engineering workflow required to combine natural weather boundary conditions and the human thermoregulatory module developed by ThermoAnalytics is summarized. The procedure includes human geometry creation, human segmental physiology description and transient physical temperature prediction using environmental boundary conditions and active thermoregulation. Radiance renderings, which use Sandford-Robertson BRDF optical surface property descriptions and are coupled with MODTRAN for the calculation of atmospheric effects, are demonstrated. Sensor effects such as optical blurring and photon noise can be optionally included, increasing the accuracy of detection probability outputs that accompany each rendering. This virtual evaluation procedure has been extensively validated and provides a flexible evaluation process that minimizes the difficulties inherent in human-subject field testing. Defense applications such as detection probability assessment, camouflage pattern evaluation, conspicuity tests and automatic target recognition are discussed.
NASA Technical Reports Server (NTRS)
Kawaguchi, Jun'ichiro; Kominato, Takashi; Shirakawa, Ken'ichi
2007-01-01
The paper presents attitude reorientation taking advantage of solar radiation pressure without the use of any fuel aboard. The strategy had been adopted to keep the Hayabusa spacecraft pointed toward the Sun for several months while spinning. The paper adds to the above-mentioned results, reported in Sedona this February, another challenge: combining ion engine propulsion, tactically balanced against the solar radiation torque, with no spin motion. The operation has been performed successfully since this March, for half a year. The flight results are presented with the estimated solar array panel diffusion coefficient and the ion engine's swirl torque.
NASA Astrophysics Data System (ADS)
Kuncarayakti, H.; Galbany, L.; Anderson, J. P.; Krühler, T.; Hamuy, M.
2016-09-01
Context. Stellar populations are the building blocks of galaxies, including the Milky Way. Most, if not all, extragalactic studies are entangled with the use of stellar population models, given the unresolved nature of their observations. Extragalactic systems contain multiple stellar populations with complex star formation histories. However, studies of these systems are mainly based upon the principles of simple stellar populations (SSP). Hence, it is critical to examine the validity of SSP models. Aims: This work aims to empirically test the validity of SSP models. This is done by comparing SSP models against observations of a spatially resolved young stellar population in the determination of its physical properties, that is, age and metallicity. Methods: Integral field spectroscopy of a young stellar cluster in the Milky Way, NGC 3603, was used to study the properties of the cluster as both a resolved and an unresolved stellar population. The unresolved stellar population was analysed using the Hα equivalent width as an age indicator and the ratio of strong emission lines to infer metallicity. In addition, spectral energy distribution (SED) fitting using STARLIGHT was used to infer these properties from the integrated spectrum. Independently, the resolved stellar population was analysed using the colour-magnitude diagram (CMD) to determine age and metallicity. As the SSP model represents the unresolved stellar population, the derived age and metallicity were tested to determine whether they agree with those derived from resolved stars. Results: The age and metallicity estimates of NGC 3603 derived from integrated spectroscopy are confirmed to be within the range of those derived from the CMD of the resolved stellar population, including other estimates found in the literature. The result from this pilot study supports the reliability of SSP models for studying unresolved young stellar populations. Based on observations collected at the European Organisation
NASA Astrophysics Data System (ADS)
Karman, W.; Caputi, K. I.; Caminha, G. B.; Gronke, M.; Grillo, C.; Balestra, I.; Rosati, P.; Vanzella, E.; Coe, D.; Dijkstra, M.; Koekemoer, A. M.; McLeod, D.; Mercurio, A.; Nonino, M.
2017-02-01
In spite of their conjectured importance for the Epoch of Reionization, the properties of low-mass galaxies are currently still very much under debate. In this article, we study the stellar and gaseous properties of faint, low-mass galaxies at z > 3. We observed the Frontier Fields cluster Abell S1063 with MUSE over a 2 arcmin^2 field, and combined integral-field spectroscopy with gravitational lensing to perform a blind search for intrinsically faint Lyα emitters (LAEs). We determined in total the redshift of 172 galaxies, of which 14 are lensed LAEs at z = 3-6.1. We increased the number of spectroscopically confirmed multiple-image families from 6 to 17 and updated our gravitational-lensing model accordingly. The lensing-corrected Lyα luminosities, L_Lyα ≲ 10^41.5 erg/s, are among the lowest for spectroscopically confirmed LAEs at any redshift. We used expanding gaseous shell models to fit the Lyα line profile, and find low column densities and expansion velocities. This is, to our knowledge, the first time that gaseous properties of such faint galaxies at z ≳ 3 are reported. We performed SED modelling of broadband photometry from the U band through the infrared to determine the stellar properties of these LAEs. The stellar masses are very low (10^6-10^8 M⊙), and are accompanied by very young ages of 1-100 Myr. The very high specific star-formation rates (~100 Gyr^-1) are characteristic of starburst galaxies, and we find that most galaxies will double their stellar mass in ≲20 Myr. The UV-continuum slopes β are low in our sample, with β < -2 for all galaxies with M⋆ < 10^8 M⊙. We conclude that our low-mass galaxies at 3 < z < 6 are forming stars at higher rates, when correcting for stellar mass effects, than seen locally or in more massive galaxies. The young stellar populations with high star-formation rates and low H I column densities lead to continuum slopes and LyC-escape fractions expected for a scenario where low-mass galaxies reionise the Universe.
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and
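The free-form-deformation component of this framework can be illustrated in one dimension: a cubic B-spline displacement field, parameterized by a handful of control-point displacements, warps a coordinate grid smoothly. This is a toy sketch of the general B-spline FFD idea, not the dPIRPLE implementation; the function name and the clamped-knot construction are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def ffd_1d(points, control_disp, domain=(0.0, 1.0), degree=3):
    """1D free-form deformation: displace `points` by a cubic B-spline
    field defined by control-point displacements (toy analogue of the
    3D B-spline FFD used in registration-reconstruction frameworks)."""
    c = np.asarray(control_disp, dtype=float)
    n = c.size
    # clamped (open uniform) knot vector: len = n + degree + 1
    knots = np.concatenate([
        np.full(degree, domain[0]),
        np.linspace(domain[0], domain[1], n - degree + 1),
        np.full(degree, domain[1]),
    ])
    spline = BSpline(knots, c, degree)
    return points + spline(points)

pts = np.linspace(0.0, 1.0, 11)
warped = ffd_1d(pts, [0.0, 0.02, -0.01, 0.03, 0.0])
```

Because the displacements are small relative to the grid spacing, the warp stays monotone (no folding), which is the practical constraint such deformation models must respect.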
Meira-Machado, Luís; Cadarso-Suárez, Carmen; Gude, Francisco; Araújo, Artur
2013-01-01
The Cox proportional hazards regression model has become the traditional choice for modeling survival data in medical studies. To introduce flexibility into the Cox model, several smoothing methods may be applied, and approaches based on splines are the most frequently considered in this context. To better understand the effects that each continuous covariate has on the outcome, results can be expressed in terms of spline-based hazard ratio (HR) curves, taking a specific covariate value as reference. Despite the potential advantages of using spline smoothing methods in survival analysis, there is currently no analytical method in the R software to choose the optimal degrees of freedom in multivariable Cox models (with two or more nonlinear covariate effects). This paper describes an R package, called smoothHR, that allows the computation of pointwise estimates of the HRs, and their corresponding confidence limits, for continuous predictors introduced nonlinearly. In addition, the package provides functions for automatically choosing the degrees of freedom in multivariable Cox models. The package is available from the R homepage. We illustrate the use of the key functions of the smoothHR package using data from a study on breast cancer and data on acute coronary syndrome, from Galicia, Spain.
Dumett, M; Rosen, G; Sabat, J; Shaman, A; Tempelman, L; Wang, C; Swift, RM
2008-01-01
Biosensor measurement of transdermal alcohol oncentration in perspiration exhibits significant variance from subject to subject and device to device. Short duration data collected in a controlled clinical setting is used to calibrate a forward model for ethanol transport from the blood to the sensor. The calibrated model is then used to invert transdermal signals collected in the field (short or long duration) to obtain an estimate for breath measured blood alcohol concentration. A distributed parameter model for the forward transport of ethanol from the blood through the skin and its processing by the sensor is developed. Model calibration is formulated as a nonlinear least squares fit to data. The fit model is then used as part of a spline based scheme in the form of a regularized, non-negatively constrained linear deconvolution. Fully discrete, steepest descent based schemes for solving the resulting optimization problems are developed. The adjoint method is used to accurately and efficiently compute requisite gradients. Efficacy is demonstrated on subject field data. PMID:19255617
Light and enlightenment: some musings
NASA Astrophysics Data System (ADS)
Patthoff, Donald D.
2012-03-01
In the beginning of the age of enlightenment (or reason), the language of philosophy, science, and theology stemmed equally from the same pens. Many of these early enlightenment authors also applied their thoughts and experiences to practical inventions and entrepreneurship; in the process, they noted and measured different characteristics of light and redirected the use of lenses beyond that of the heat lens, which had been developing for over 2000 years. Within decades, microscopes, telescopes, theodolites, and many variations of the heat lens were well known. These advances rapidly changed and expanded the nature of science, subsequent technology, and many notions of boundary; that is, the ways boundaries are defined, not just in the sense of what is land and commercial property, but also in the sense of which notions of boundary help shape and define society, including the unique role that professions play within society. The advent of lasers in the mid-twentieth century, though, introduced the ability to measure the effects and characteristics of single coherent wavelengths. This also introduced more ways to evaluate the relationship of specific wavelengths of light to other variables and interactions. At the most basic level, the almost revolutionary boundary developments of lasers seem to split down two paths of work: 1) a pursuit of more sophisticated heat lenses having better control over light's destructive and cutting powers, and 2) more nuanced light-based instruments that not only enhanced the powers of observation but also offered more minute measurement opportunities and subtle treatment capabilities. It is well worth deliberating, then, whether "enlightenment" and "light" might share more than five letters in a row.
And, if a common underlying foundation is revealed within these deliberations, is it worth questioning any possible revelations that might arise, or that might bear relevance to today's research and development in light-based sciences, technology, the clinical professions, and other bio-applications? Finally, how might any such insight influence the future of light-based research and its possible applications?
Teaching Poetry: The Neglected Muse.
ERIC Educational Resources Information Center
Perricone, Catherine R.
1978-01-01
A discussion of six techniques whereby the abstract nature of foreign language poetry may be communicated to students. These are introduction to the genre, understanding and appreciating poetry, the poet and his/her milieu, reading for expression and vocabulary and in context, and analysis for theme, content, and structure. (Author/AMH)
Muses on the Gregorian Calendar
ERIC Educational Resources Information Center
Staples, Ed
2013-01-01
This article begins with an exploration of the origins of the Gregorian Calendar. Next it describes the function school inspector Christian Zeller (1822-1899) used to determine the number of elapsed days of a year up to and including a specified date, and how Zeller's function can be used to determine the number of days that have elapsed in…
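The kind of elapsed-days function the article attributes to Zeller can be sketched directly from the Gregorian leap-year rule. The function name and the table-based formulation below are assumptions for illustration; they compute the same quantity the article describes, not Zeller's own arithmetic formula.

```python
def elapsed_days(year, month, day):
    """Number of elapsed days of the year up to and including the given
    date, using the Gregorian leap-year rule (divisible by 4, except
    centuries unless divisible by 400)."""
    month_lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    total = sum(month_lengths[:month - 1]) + day
    if leap and month > 2:
        total += 1          # leap day only affects dates after February
    return total

# 1 March falls on day 60 of a common year and day 61 of a leap year
d_common = elapsed_days(2013, 3, 1)  # -> 60
d_leap = elapsed_days(2012, 3, 1)    # -> 61
```

Note that 1900 is not a leap year while 2000 is, which is exactly the Gregorian refinement the article's historical discussion turns on.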
ERIC Educational Resources Information Center
Abernathy, Jeff
2007-01-01
In this article, the author, dean of academic affairs at Augustana College in Illinois, reflects on an alumnus, English major David Allen, who has gained prominence in his field. A photo of the alumnus, wearing nothing but a cap, a pair of boots, and a strategically-placed guitar, appeared on the front page of a local newspaper, under a headline…
Invoking the muse: Dada's chaos.
Rosen, Diane
2014-07-01
Dada, a self-proclaimed (anti)art (non)movement, took shape in 1916 among a group of writers and artists who rejected the traditions of a stagnating bourgeoisie. Instead, they adopted means of creative expression that embraced chaos, stoked instability and undermined logic, an outburst that overturned centuries of classical and Romantic aesthetics. Paradoxically, this insistence on disorder foreshadowed a new order in understanding creativity. Nearly one hundred years later, Nonlinear Dynamical Systems theory (NDS) gives renewed currency to Dada's visionary perspective on chance, chaos and creative cognition. This paper explores commonalities between NDS-theory and this early precursor of the nonlinear paradigm, suggesting that their conceptual synergy illuminates what it means to 'be creative' beyond the disciplinary boundaries of either. Key features are discussed within a 5P model of creativity based on Rhodes' 4P framework (Person, Process, Press, Product), to which I add Participant-Viewer for the interactivity of observer-observed. Grounded in my own art practice, several techniques are then put forward as non-methodical methods that invoke creative border zones, those regions where Dada's chance and design are wedded in a dialectical tension of opposites.
Attitude Estimation or Quaternion Estimation?
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
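The first class of estimators can be made concrete: a three-component deviation vector is estimated, then folded back into a higher-dimensional reference multiplicatively. The sketch below uses a scalar-last quaternion convention and a small-angle approximation; the function names and the left-multiplication convention are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of quaternions in scalar-last form [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def apply_small_rotation(q_ref, delta_theta):
    """Fold a small three-parameter attitude deviation (rad) into a
    reference quaternion, as in the first class of estimators: the
    deviation stays small, so no singularity of the 3D parameterization
    is encountered, and the result is renormalized to unit length."""
    dq = np.concatenate([0.5 * np.asarray(delta_theta, float), [1.0]])
    q = quat_mult(dq, q_ref)          # assumed convention: deviation on the left
    return q / np.linalg.norm(q)      # restore the unit-norm constraint

q0 = np.array([0.0, 0.0, 0.0, 1.0])          # identity attitude
q1 = apply_small_rotation(q0, [1e-3, 0.0, 0.0])  # tiny roll deviation
```

The renormalization step is the practical price of the redundant four-component representation: the filter state itself remains three-dimensional, so the covariance stays full rank.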
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device.
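To make the idea of a piecewise-polynomial PSAC concrete, here is a minimal sketch (not the paper's spectral-domain coefficient design): per-segment cubic Hermite polynomials matched to sin and its derivative at the segment boundaries, evaluated with Horner's rule. The segment count and names are illustrative assumptions.

```python
import math

SEGMENTS = 64  # table resolution: one cubic polynomial per segment of [0, 2*pi)

def _hermite_coeffs():
    # Per-segment cubic matching sin and cos (its derivative) at both
    # segment endpoints, stored as Horner coefficients in local t in [0, 1].
    h = 2.0 * math.pi / SEGMENTS
    table = []
    for i in range(SEGMENTS):
        x0 = i * h
        f0, f1 = math.sin(x0), math.sin(x0 + h)
        d0, d1 = math.cos(x0) * h, math.cos(x0 + h) * h  # derivatives w.r.t. t
        c2 = 3.0 * (f1 - f0) - 2.0 * d0 - d1
        c3 = -2.0 * (f1 - f0) + d0 + d1
        table.append((f0, d0, c2, c3))
    return table

_TABLE = _hermite_coeffs()

def psac_sin(phase):
    # Phase-to-sinusoid amplitude conversion: wrap the phase, select the
    # segment, and evaluate the stored cubic with Horner's rule.
    h = 2.0 * math.pi / SEGMENTS
    phase %= 2.0 * math.pi
    i = min(int(phase / h), SEGMENTS - 1)
    t = phase / h - i
    a0, a1, a2, a3 = _TABLE[i]
    return a0 + t * (a1 + t * (a2 + t * a3))
```

With 64 segments the Hermite error bound h^4/384 keeps the worst-case amplitude error below 1e-6, which hints at why modest tables suffice for high-resolution converters.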
A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.
Li, Chin-Shang; Tu, Wanzhu
2007-05-01
In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well, and that a misspecified parametric model has much-reduced power. An example is given.
CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.
Wilke, Marko; Altaye, Mekibib; Holland, Scott K
2017-01-01
Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.
Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.
Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco
2015-04-20
Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase, and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm (CRWN) algorithm, which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.
BOX SPLINE BASED 3D TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM MRI DATA.
Ye, Wenxing; Portnoy, Sharon; Entezari, Alireza; Vemuri, Baba C; Blackband, Stephen J
2011-06-09
This paper introduces a tomographic approach for the reconstruction of diffusion propagators, P(r), in a box spline framework. Box splines are chosen as basis functions for high-order approximation of P(r) from the diffusion signal. Box splines are a generalization of B-splines to the multivariate setting that is particularly useful in the context of tomographic reconstruction. The X-ray or Radon transform of a (tensor-product B-spline or non-separable) box spline is a box spline; that is, the space of box splines is closed under the Radon transform. We present synthetic and real multi-shell diffusion-weighted MR data experiments that demonstrate the increased accuracy of P(r) reconstruction as the order of the basis functions is increased.
Fast simulation of x-ray projections of spline-based surfaces using an append buffer
NASA Astrophysics Data System (ADS)
Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca
2012-10-01
Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, the simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640 × 480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically.
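The final absorption step of such a pipeline can be illustrated with a minimal monochromatic Beer-Lambert sketch, assuming hypothetical attenuation coefficients and a per-ray list of (depth, material) surface hits such as an append buffer might collect out of order; this is a toy model, not the paper's OpenCL implementation.

```python
import math

# Hypothetical linear attenuation coefficients in 1/mm (monochromatic beam).
MU = {"air": 0.0, "soft_tissue": 0.02, "bone": 0.05}

def resolve_segments(hits):
    # hits: (depth_mm, material) surface crossings along one detector ray,
    # possibly unordered (as collected by an append buffer). After sorting
    # by depth, each interval takes the material entered at its left edge;
    # the material tag of the final hit is never traversed.
    ordered = sorted(hits)
    return [(b[0] - a[0], a[1]) for a, b in zip(ordered, ordered[1:])]

def pixel_intensity(hits, i0=1.0):
    # Beer-Lambert absorption model: I = I0 * exp(-sum_m mu_m * length_m).
    line_integral = sum(MU[mat] * length for length, mat in resolve_segments(hits))
    return i0 * math.exp(-line_integral)
```

Sorting then resolving intersections to materials mirrors the two steps the abstract describes between ray casting and the absorption model.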
ERIC Educational Resources Information Center
Fung, Maria G.; Latulippe, Christine L.
2010-01-01
Elementary school teachers are responsible for constructing the foundation of number sense in youngsters, and so it is recommended that teacher-training programs include an emphasis on number sense to ensure the development of dynamic, productive computation and estimation skills in students. To better prepare preservice elementary school teachers…
ERIC Educational Resources Information Center
Threewit, Fran
This book leads students through a journey of hands-on investigations of skill-based estimation. The 30 lessons in the book are grouped into four units: Holding Hands, The Real Scoop, Container Calculations, and Estimeasurements. In each unit children work with unique, real materials intended to build an awareness of number, quantity, and…
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models
He, Liang; Sillanpää, Mikko J.; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-01-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene–environment (G × E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G × E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator that can, however, be overly restricted in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides
NASA Astrophysics Data System (ADS)
Swinbank, A. M.; Harrison, C. M.; Trayford, J.; Schaller, M.; Smail, Ian; Schaye, J.; Theuns, T.; Smit, R.; Alexander, D. M.; Bacon, R.; Bower, R. G.; Contini, T.; Crain, R. A.; de Breuck, C.; Decarli, R.; Epinat, B.; Fumagalli, M.; Furlong, M.; Galametz, A.; Johnson, H. L.; Lagos, C.; Richard, J.; Vernet, J.; Sharples, R. M.; Sobral, D.; Stott, J. P.
2017-01-01
We present a MUSE and KMOS dynamical study of 405 star-forming galaxies at redshift z = 0.28-1.65 (median redshift z̄ = 0.84). Our sample is representative of the star-forming "main sequence", with star-formation rates of SFR = 0.1-30 M⊙ yr⁻¹ and stellar masses M⋆ = 10⁸-10¹¹ M⊙. For 49 ± 4% of our sample, the dynamics suggest rotational support; 24 ± 3% are unresolved systems; and 5 ± 2% appear to be early-stage major mergers with components on 8-30 kpc scales. The remaining 22 ± 5% appear to be dynamically complex, irregular (or face-on) systems. For galaxies whose dynamics suggest rotational support, we derive inclination-corrected rotational velocities and show these systems lie on a similar scaling between stellar mass and specific angular momentum as local spirals, with j⋆ = J/M⋆ ∝ M⋆^(2/3), but with a redshift evolution that scales as j⋆ ∝ M⋆^(2/3)(1+z)⁻¹. We also identify a correlation between specific angular momentum and disk stability, such that galaxies with the highest specific angular momentum (log(j⋆/M⋆^(2/3)) > 2.5) are the most stable, with Toomre Q = 1.10 ± 0.18, compared to Q = 0.53 ± 0.22 for galaxies with log(j⋆/M⋆^(2/3)) < 2.5. At a fixed mass, the HST morphologies of galaxies with the highest specific angular momentum resemble spiral galaxies, whilst those with low specific angular momentum are morphologically complex and dominated by several bright star-forming regions. This suggests that angular momentum plays a major role in defining the stability of gas disks: at z ∼ 1, massive galaxies that have disks with low specific angular momentum are globally unstable, clumpy, and turbulent systems. In contrast, galaxies with high specific angular momentum have evolved into stable disks with spiral structure, where star formation is a local (rather than global) process.
Marinetto, Eugenio; Pascau, Javier; Desco, Manuel
2014-01-01
Purpose: Compressed sensing (CS) has been widely applied to prospective cardiac cine MRI. The aim of this work is to study the benefits obtained by including motion estimation in the CS framework for small-animal retrospective cardiac cine. Methods: We propose a novel B-spline-based compressed sensing method (SPLICS) that includes motion estimation and generalizes previous spatiotemporal total variation (ST-TV) methods by taking into account motion between frames. In addition, we assess the effect of an optimum weighting between spatial and temporal sparsity to further improve results. Both methods were implemented using the efficient Split Bregman methodology and were evaluated on rat data, comparing animals with myocardial infarction with controls for several acceleration factors. Results: ST-TV with optimum selection of the weighting sparsity parameter led to results similar to those of SPLICS; ST-TV with large relative temporal sparsity led to temporal blurring effects. However, SPLICS always properly corrected temporal blurring, independently of the weighting parameter. At acceleration factors of 15, SPLICS did not distort temporal intensity information but led to some artefacts and slight over-smoothing. At an acceleration factor of 7, images were reconstructed without significant loss of quality. Conclusion: We have validated SPLICS for retrospective cardiac cine in small animals, achieving high acceleration factors. In addition, we have shown that motion modelling may not be essential for retrospective cine and that similar results can be obtained by using ST-TV, provided that an optimum selection of the spatiotemporal sparsity weighting parameter is performed. PMID:25350290
Variance estimation for stratified propensity score estimators.
Williamson, E J; Morley, R; Lucas, A; Carpenter, J R
2012-07-10
Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
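The closing recommendation, bootstrapping the variance of the stratified estimator, can be sketched as follows. The data layout, stratum count, and data-generating model are illustrative assumptions, and the score is taken as given rather than estimated, so this is a toy illustration rather than the paper's derivation.

```python
import random
import statistics

def stratified_effect(data, n_strata=5):
    # data: (score, treated, outcome) triples. Stratify on score quantiles
    # and average within-stratum differences in mean outcome, weighting
    # each stratum by its size; strata lacking either group are skipped.
    ranked = sorted(data, key=lambda r: r[0])
    n = len(ranked)
    weighted_sum, used = 0.0, 0
    for s in range(n_strata):
        stratum = ranked[s * n // n_strata:(s + 1) * n // n_strata]
        treated = [y for _, d, y in stratum if d == 1]
        control = [y for _, d, y in stratum if d == 0]
        if treated and control:
            weighted_sum += len(stratum) * (statistics.fmean(treated)
                                            - statistics.fmean(control))
            used += len(stratum)
    return weighted_sum / used

def bootstrap_se(data, reps=200, seed=1):
    # Resample whole records with replacement and recompute the stratified
    # estimate; the spread across replicates estimates the standard error.
    rng = random.Random(seed)
    estimates = [stratified_effect([rng.choice(data) for _ in data])
                 for _ in range(reps)]
    return statistics.stdev(estimates)
```

Because each replicate re-stratifies the resampled data, the bootstrap captures the stratification step's contribution to variance, which the routine plug-in estimators discussed above ignore.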
Estimation of Temporally and Spatially Varying Coefficients in Models for Insect Dispersal.
1983-06-01
… movements of marked flea beetles in cultivated arrays of the cole crop, collards (Brassica oleraceae). I. INTRODUCTION. Transport equations appropriately model numerous biological systems and have … between model predictions and observed data). For the past several years we have been developing and testing spline-based algorithms for identifying
Estimating potential evapotranspiration with improved radiation estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...
Ensemble estimators for multivariate entropy estimation.
Sricharan, Kumar; Wei, Dennis; Hero, Alfred O
2013-07-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, intrinsic dimension estimators, and other examples. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
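A minimal one-dimensional sketch of the ensemble idea, using a Kozachenko-Leonenko-style k-NN entropy estimator and uniform weights (the paper instead solves a convex program for bias-cancelling weights); the brute-force O(N²) neighbour search is for clarity only.

```python
import math
import random

def knn_entropy_1d(samples, k):
    # Kozachenko-Leonenko-style k-NN differential entropy estimate (d = 1):
    #   H ~= psi(N) - psi(k) + log(2) + (1/N) * sum_i log(eps_i),
    # with eps_i the distance from sample i to its k-th nearest neighbour.
    n = len(samples)
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    psi = lambda m: -gamma + sum(1.0 / j for j in range(1, m))  # digamma at integers
    log_eps = 0.0
    for i, x in enumerate(samples):
        dists = sorted(abs(x - y) for j, y in enumerate(samples) if j != i)
        log_eps += math.log(dists[k - 1])
    return psi(n) - psi(k) + math.log(2.0) + log_eps / n

def ensemble_entropy(samples, ks=(1, 2, 3)):
    # Uniform-weight affine combination of the base estimators; the paper
    # chooses these weights by convex optimization instead.
    estimates = [knn_entropy_1d(samples, k) for k in ks]
    return sum(estimates) / len(estimates)
```

Each base estimator has its own bias profile in k; the paper's contribution is that a suitably weighted combination cancels the slow O(T^(-γ/d)) bias terms.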
NASA Technical Reports Server (NTRS)
Stewart, R. D.
1979-01-01
Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. Versatile and flexible tool significantly reduces computation time and errors and reduces typing and reproduction time involved in preparation of cost estimates.
Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
Estimating avian population size using Bowden's estimator
Diefenbach, D.R.
2009-01-01
Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N ≤ 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates. © 2009 by The American Ornithologists' Union. All rights reserved.
Flickner, M; Hafner, J; Rodriguez, E J; Sanz, J C
1996-01-01
Presents a new covariant basis, dubbed the quasi-orthogonal Q-spline basis, for the space of n-degree periodic uniform splines with k knots. This basis is obtained analogously to the B-spline basis by scaling and periodically translating a single spline function of bounded support. The construction hinges on an important theorem involving the asymptotic behavior (in the dimension) of the inverse of banded Toeplitz matrices. The authors show that the Gram matrix for this basis is nearly diagonal, hence, the name "quasi-orthogonal". The new basis is applied to the problem of approximating closed digital curves in 2D images by least-squares fitting. Since the new spline basis is almost orthogonal, the least-squares solution can be approximated by decimating a convolution between a resolution-dependent kernel and the given data. The approximating curve is expressed as a linear combination of the new spline functions and new "control points". Another convolution maps these control points to the classical B-spline control points. A generalization of the result has relevance to the solution of regularized fitting problems.
An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images
Thévenaz, P.; Zheng, G.; Nolte, L. -P.; Unser, M.
2006-01-01
We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers. PMID:23165033
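The continuous cubic B-spline image model that this registration relies on can be sketched in one dimension: a centred cubic B-spline kernel whose integer shifts form a partition of unity, so a constant coefficient field reproduces a constant signal exactly. Names are illustrative; this is the interpolation model, not the registration algorithm.

```python
def bspline3(x):
    # Centred uniform cubic B-spline kernel beta^3; support is (-2, 2).
    x = abs(x)
    if x < 1.0:
        return 2.0 / 3.0 - x * x + 0.5 * x * x * x
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def spline_value(coeffs, x):
    # Continuous model f(x) = sum_k coeffs[k] * beta^3(x - k); only the
    # four kernels whose supports overlap x can contribute.
    k0 = int(x) - 1
    return sum(coeffs[k] * bspline3(x - k)
               for k in range(max(k0, 0), min(k0 + 4, len(coeffs))))
```

The kernel is C² everywhere, which is what makes the dissimilarity gradient "well defined everywhere" in the abstract's sense.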
NASA Astrophysics Data System (ADS)
Wen, W. B.; Duan, S. Y.; Yan, J.; Ma, Y. B.; Wei, K.; Fang, D. N.
2017-03-01
An explicit time integration scheme based on quartic B-splines is presented for solving linear structural dynamics problems. The scheme belongs to a one-parameter family of schemes in which a free algorithmic parameter controls stability, accuracy, and numerical dispersion. The proposed scheme possesses at least second-order accuracy and at most third-order accuracy. A 2D wave problem is analyzed to demonstrate the effectiveness of the proposed scheme in reducing high-frequency modes and retaining low-frequency modes. Beyond general structural dynamics, the proposed scheme can be used effectively for wave propagation problems in which numerical dissipation is needed to reduce spurious oscillations.
Molecular musings in microbial ecology and evolution
2011-01-01
A few major discoveries have influenced how ecologists and evolutionists study microbes. Here, in the format of an interview, we answer questions that directly relate to how these discoveries are perceived in these two branches of microbiology, and how they have impacted on both scientific thinking and methodology. The first question is "What has been the influence of the 'Universal Tree of Life' based on molecular markers?" For evolutionists, the tree was a tool to understand the past of known (cultured) organisms, mapping the invention of various physiologies on the evolutionary history of microbes. For ecologists the tree was a guide to discover the current diversity of unknown (uncultured) organisms, without much knowledge of their physiology. The second question we ask is "What was the impact of discovering frequent lateral gene transfer among microbes?" In evolutionary microbiology, frequent lateral gene transfer (LGT) made a simple description of relationships between organisms impossible, and for microbial ecologists, functions could not be easily linked to specific genotypes. Both fields initially resisted LGT, but methods or topics of inquiry were eventually changed in one to incorporate LGT in its theoretical models (evolution) and in the other to achieve its goals despite that phenomenon (ecology). The third and last question we ask is "What are the implications of the unexpected extent of diversity?" The variation in the extent of diversity between organisms invalidated the universality of species definitions based on molecular criteria, a major obstacle to the adaptation of models developed for the study of macroscopic eukaryotes to evolutionary microbiology. This issue has not overtly affected microbial ecology, as it had already abandoned species in favor of the more flexible operational taxonomic units. 
This field is nonetheless moving away from traditional methods to measure diversity, as they do not provide enough resolution to uncover what lies below the species level. The answers of the evolutionary microbiologist and microbial ecologist to these three questions illustrate differences in their theoretical frameworks. These differences mean that both fields can react quite distinctly to the same discovery, incorporating it with more or less difficulty in their scientific practice. Reviewers: This article was reviewed by W. Ford Doolittle, Eugene V. Koonin and Maureen A. O'Malley. PMID:22074255
The conformational musings of a medicinal chemist.
Finch, Harry
2014-03-01
Structure-based drug design strategies based on X-ray crystallographic data of ligands bound to biological targets or computationally derived pharmacophore models have been introduced over the past 25 years or so. These have now matured and are deeply embedded in the drug discovery process in most pharmaceutical and biotechnology companies where they continue to play a major part in the discovery of new medicines and drug candidates. Newly developed NMR methods can now provide a full description of the conformations in which ligands exist in free solution, crucially allowing those that are dominant to be identified. Integrating experimentally determined conformational information on active and inactive molecules in drug discovery programmes, alongside the existing techniques, should have a major impact on the success of drug discovery.
Musings on Willower's "Fog": A Response.
ERIC Educational Resources Information Center
English, Fenwick
1998-01-01
Professor Willower complains about the "fog" encountered in postmodernist literature and the author's two articles in "Journal of School Leadership." On closer examination, this miasma is simply the mildew on Willower's Cartesian glasses. Educational administration continues to substitute management and business fads for any…
Transits of Venus and Mercury as muses
NASA Astrophysics Data System (ADS)
Tobin, William
2013-11-01
Transits of Venus and Mercury have inspired artistic creation of all kinds. After having been the first to witness a Venusian transit, in 1639, Jeremiah Horrocks expressed his feelings in poetry. Production has subsequently widened to include songs, short stories, novels, novellas, sermons, theatre, film, engravings, paintings, photography, medals, sculpture, stained glass, cartoons, stamps, music, opera, flower arrangements, and food and drink. Transit creations are reviewed, with emphasis on the English- and French-speaking worlds. It is found that transits of Mercury inspire much less creation than those of Venus, despite being much more frequent, and arguably of no less astronomical significance. It is suggested that this is primarily due to the mythological associations of Venus with sex and love, which are more powerful and gripping than Mercury's mythological role as a messenger and protector of traders and thieves. The lesson for those presenting the night sky to the public is that sex sells.
Musings: "Hasten Slowly:" Thoughtfully Planned Acceleration
ERIC Educational Resources Information Center
Gross, Miraca U. M.
2008-01-01
Acceleration is one of the best researched interventions for gifted students. The author is an advocate of acceleration. However, advocating for the thoughtful, carefully judged employment of a procedure with well researched effectiveness does not imply approval of cases where the procedure is used without sufficient thought--especially where it…
Musings on the Internet, Part 2
ERIC Educational Resources Information Center
Cerf, Vinton G.
2004-01-01
In this article, the author discusses the role of higher education research and development (R&D)--particularly R&D into the issues and problems that industry is less able to explore. In addition to high-speed computer communication, broadband networking efforts, and the use of fiber, a rich service environment is equally important and is…
Zen Musings on Bion's "O" and "K".
Cooper, Paul C
2016-08-01
The author defines Bion's use of "O" and "K" and discusses both from the radical nondualist realizational perspective available through the lens of Eihei Dogen's (1200-1253) Soto Zen Buddhist orientation. Fundamental differences in core foundational principles are discussed as well as similarities and their relevance to clinical practice. A case example exemplifies and explicates the abstract aspects of the discussion, which draws from Zen teaching stories, reference to Dogen's original writings, and the scholarly commentarial literature as well as from contemporary writers who integrate Zen Buddhist study and practice with Bion's psychoanalytic writings on theory and technique.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
Information geometric density estimation
NASA Astrophysics Data System (ADS)
Sun, Ke; Marchand-Maillet, Stéphane
2015-01-01
We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space means to find a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.
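The idea of a kernel that varies from point to point can be illustrated with a generic sample-point adaptive estimator, in which each sample carries its own bandwidth set by its distance to its k-th nearest neighbour. This is only a sketch of a varying-kernel density estimator, not the information-geometric construction of the paper; the function name and the k-NN bandwidth rule are illustrative choices.

```python
import numpy as np

def adaptive_kde(samples, query, k=5):
    """Sample-point adaptive KDE in 1-D: each sample point gets its own
    Gaussian bandwidth equal to the distance to its k-th nearest
    neighbour.  Illustrative only, not the paper's method."""
    samples = np.asarray(samples, dtype=float)
    # per-point bandwidth: distance to the k-th nearest neighbour
    d = np.abs(samples[:, None] - samples[None, :])
    h = np.sort(d, axis=1)[:, k]          # column 0 is the point itself
    h = np.maximum(h, 1e-12)              # guard against duplicate points
    # evaluate the mixture of per-point Gaussians at the query points
    q = np.asarray(query, dtype=float)[:, None]
    z = (q - samples[None, :]) / h[None, :]
    dens = np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * h[None, :])
    return dens.mean(axis=1)
```

Because the bandwidth shrinks where samples are dense, resolution adapts to the local data density, which is the qualitative behaviour varying-kernel estimators aim for.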
NASA Technical Reports Server (NTRS)
Chatterji, Gano
2011-01-01
Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. Error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
Making Connections with Estimation.
ERIC Educational Resources Information Center
Lobato, Joanne E.
1993-01-01
Describes four methods to structure estimation activities that enable students to make connections between their understanding of numbers and extensions of those concepts to estimating. Presents activities that connect estimation with other curricular areas, other mathematical topics, and real-world applications. (MDH)
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
Estimation in satellite control.
NASA Technical Reports Server (NTRS)
Debra, D. B.
1971-01-01
The use of estimators or observers is discussed as applied to satellite attitude control and the control of drag-free satellites. The practical problems of implementation are discussed, and the relative advantages of full and reduced state estimators are compared, particularly in terms of their effectiveness and bandwidth as filters. Three applications are used to illustrate the principles. They are: (1) a reaction wheel control system, (2) a spinning attitude control system, and (3) a drag-free satellite translational control system. Fixed estimator gains are shown to be adequate for these (and many other) applications. Our experience in the hardware realization of estimators has led us to categorize the error sources in terms of those that improve with increased estimator gains and those that get worse with increased estimator gains.
NASA Astrophysics Data System (ADS)
Fodor, I. K.; Stark, P. B.
Multitapering is a statistical technique developed to improve on the notorious periodogram estimate of the power spectrum (Thomson 1982; Percival & Walden 1993). We show how to obtain orthogonal tapers for time series observed with gaps, and how to use statistical resampling techniques (Efron & Tibshirani 1993) to calculate realistic uncertainty estimates for multitaper estimates. We introduce multisegment multitapering. Multitapering can also be extended to the 2D case. We indicate how to construct tapers that minimize the spatial leakage in estimates of the spherical harmonic decomposition of the velocity images. Spatial multitapering followed by the temporal tapering of the estimated spherical harmonic time series is expected to result in improved spectrum and subsequent solar oscillation mode parameter estimates.
Estimating Airline Operating Costs
NASA Technical Reports Server (NTRS)
Maddalon, D. V.
1978-01-01
The factors affecting commercial aircraft operating and delay costs were used to develop an airline operating cost model which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model permits estimates of aircraft related costs, i.e., aircraft service, landing fees, flight attendants, and control fees. A method for estimating the costs of certain types of airline delay is also described.
NASA Technical Reports Server (NTRS)
Aster, R. W.; Chamberlain, R. G.; Zendejas, S. C.; Lee, T. S.; Malhotra, S.
1986-01-01
Company-wide or process-wide production simulated. Price Estimation Guidelines (IPEG) program provides simple, accurate estimates of prices of manufactured products. Simplification of SAMIS allows analyst with limited time and computing resources to perform greater number of sensitivity studies. Although developed for photovoltaic industry, readily adaptable to standard assembly-line type of manufacturing industry. IPEG program estimates annual production price per unit. IPEG/PC program written in TURBO PASCAL.
Reservoir Temperature Estimator
Palmer, Carl D.
2014-12-08
The Reservoir Temperature Estimator (RTEst) is a program that can be used to estimate deep geothermal reservoir temperature and chemical parameters such as CO2 fugacity based on the water chemistry of shallower, cooler reservoir fluids. This code uses the plugin features provided in The Geochemists Workbench (Bethke and Yeakel, 2011) and interfaces with the model-independent parameter estimation code Pest (Doherty, 2005) to provide for optimization of the estimated parameters based on the minimization of the weighted sum of squares of a set of saturation indexes from a user-provided mineral assemblage.
Parameter estimating state reconstruction
NASA Technical Reports Server (NTRS)
George, E. B.
1976-01-01
Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
Estimating Latent Distributions.
ERIC Educational Resources Information Center
Mislevy, Robert J.
1984-01-01
Assuming vectors of item responses depend on ability through a fully specified item response model, this paper presents maximum likelihood equations for estimating the population parameters without estimating an ability parameter for each subject. Asymptotic standard errors, tests of fit, computing approximations, and details of four special cases…
NASA Astrophysics Data System (ADS)
Kraskov, Alexander; Stögbauer, Harald; Grassberger, Peter
2004-06-01
We present two classes of improved estimators for mutual information M(X,Y), from samples of random points distributed according to some joint probability density μ(x,y). In contrast to conventional estimators based on binnings, they are based on entropy estimates from k-nearest neighbor distances. This means that they are data efficient (with k=1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to nonuniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e., the estimator M̂(X,Y) vanishes (up to statistical fluctuations) if μ(x,y)=μ(x)μ(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.
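The k-nearest-neighbor construction described above can be sketched directly. The following is a minimal brute-force implementation of the first of the two estimator families (fixed k, max-norm joint distances) for one-dimensional X and Y; the integer-digamma helper is an implementation convenience, and the function names are ours.

```python
import numpy as np

def _digamma_int(n):
    # digamma at a positive integer: psi(n) = -gamma + H_{n-1}
    return -0.5772156649015329 + np.sum(1.0 / np.arange(1, n))

def ksg_mi(x, y, k=3):
    """Kraskov-Stoegbauer-Grassberger estimator (first variant) of the
    mutual information between two 1-D samples, using max-norm joint
    distances; brute-force O(N^2), written for clarity, not speed."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = x.size
    dx = np.abs(x[:, None] - x[None, :])
    dy = np.abs(y[:, None] - y[None, :])
    dz = np.maximum(dx, dy)              # max-norm distance in joint space
    np.fill_diagonal(dz, np.inf)         # a point is not its own neighbour
    eps = np.sort(dz, axis=1)[:, k - 1]  # distance to the k-th neighbour
    # marginal neighbour counts strictly inside eps (self excluded)
    nx = (dx < eps[:, None]).sum(axis=1) - 1
    ny = (dy < eps[:, None]).sum(axis=1) - 1
    psi_terms = np.mean([_digamma_int(a + 1) + _digamma_int(b + 1)
                         for a, b in zip(nx, ny)])
    return _digamma_int(k) + _digamma_int(n) - psi_terms
```

On independent samples the estimate fluctuates around zero, as the abstract states; on strongly correlated Gaussians it tracks the analytic value -0.5 log(1-ρ²).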
Rajdl, Kamil; Lansky, Petr
2014-02-01
The Fano factor is one of the most widely used measures of variability of spike trains. Its standard estimator is the ratio of sample variance to sample mean of spike counts observed in a time window, and the quality of the estimator strongly depends on the length of the window. We investigate this dependence under the assumption that the spike train behaves as an equilibrium renewal process. It is shown which characteristics of the spike train have a large effect on the estimator bias. Namely, the effect of the refractory period is analytically evaluated. Next, we create an approximate asymptotic formula for the mean square error of the estimator, which can also be used to find the minimum of the error in estimation from single spike trains. The accuracy of the Fano factor estimator is compared with the accuracy of the estimator based on the squared coefficient of variation. All the results are illustrated for spike trains with gamma and inverse Gaussian probability distributions of interspike intervals. Finally, we discuss possibilities of how to select a suitable observation window for the Fano factor estimation.
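The standard estimator discussed above is simple to state in code: partition the recording into windows of the chosen length, count spikes per window, and take the ratio of sample variance to sample mean. A minimal sketch (function name ours):

```python
import numpy as np

def fano_factor(spike_times, window, t_max):
    """Standard Fano factor estimator: sample variance over sample mean
    of spike counts in consecutive windows of the given length."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var(ddof=1) / counts.mean()
```

For a Poisson process the Fano factor is 1, while a perfectly regular train gives 0; the window-length dependence of the estimate for renewal processes is exactly what the paper analyzes.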
2006-01-01
…investigate the possibility of exploiting the properties of a detected Low Probability of Intercept (LPI) signal waveform to estimate time delay, and by … ratios, namely 10 dB and less. We also examine the minimum time-delay estimation error, the Cramer-Rao bound. The results indicate that the method…
Robust incremental condition estimation
Bischof, C.H.; Tang, P.T.P.
1991-03-29
This paper presents an improved version of incremental condition estimation, a technique for tracking the extremal singular values of a triangular matrix as it is being constructed one column at a time. We present a new motivation for this estimation technique using orthogonal projections. The paper focuses on an implementation of this estimation scheme in an accurate and consistent fashion. In particular, we address the subtle numerical issues arising in the computation of the eigensystem of a symmetric rank-one perturbed diagonal 2 × 2 matrix. Experimental results show that the resulting scheme does a good job in estimating the extremal singular values of triangular matrices, independent of matrix size and matrix condition number, and that it performs qualitatively in the same fashion as some of the commonly used nonincremental condition estimation schemes.
Estimating airline operating costs
NASA Technical Reports Server (NTRS)
Maddalon, D. V.
1978-01-01
A review was made of the factors affecting commercial aircraft operating and delay costs. From this work, an airline operating cost model was developed which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model, similar in some respects to the standard Air Transport Association of America (ATA) Direct Operating Cost Model, permits estimates of aircraft-related costs not now included in the standard ATA model (e.g., aircraft service, landing fees, flight attendants, and control fees). A study of the cost of aircraft delay was also made and a method for estimating the cost of certain types of airline delay is described.
NASA Technical Reports Server (NTRS)
White, B. S.; Castleman, K. R.
1981-01-01
An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
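The correction described above amounts to solving a linear system: if C holds the classifier's conditional error rates (rows indexed by true class, columns by assigned class) and q is the vector of observed class fractions, then q = Cᵀp, so the unbiased estimate of the true proportions p solves Cᵀp = q. A minimal sketch, with illustrative names:

```python
import numpy as np

def unbiased_proportions(counts, confusion):
    """Correct raw classifier counts for misclassification.
    confusion[i, j] = P(classifier assigns class j | true class i),
    so each row sums to 1.  Observed fractions q satisfy q = C^T p;
    solving C^T p = q gives an unbiased estimate of the true mixture."""
    q = np.asarray(counts, float)
    q = q / q.sum()                      # observed class fractions
    C = np.asarray(confusion, float)
    return np.linalg.solve(C.T, q)
```

In the two-class case this reduces to the closed-form expressions the paper works out in detail; the variance of the corrected estimate grows as C approaches singularity, which is the intuition behind the paper's "befuddlement matrix" figure of merit.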
Estimating Radiogenic Cancer Risks
This document presents a revised methodology for EPA's estimation of cancer risks due to low-LET radiation exposures developed in light of information that has become available, especially new information on the Japanese atomic bomb survivors.
Estimation of food consumption
Callaway, J.M. Jr.
1992-04-01
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Tsvetkov, D.Y.
1983-01-01
Estimates of the frequency of type I and II supernovae occurring in galaxies of different types are derived from observational material acquired by the supernova patrol of the Shternberg Astronomical Institute.
Early Training Estimation System
1980-08-01
… are needed. First, by developing earlier and more accurate estimates of training requirements, the training planning process can begin earlier, and … this period and these questions require training input data, and (2) the early training planning process requires a solid foundation on which to … development of initial design, task, skill, and training estimates; provision of input into training planning and acquisition documents; provision …
Nonparametric conditional estimation
Owen, A.B.
1987-01-01
Many nonparametric regression techniques (such as kernels, nearest neighbors, and smoothing splines) estimate the conditional mean of Y given X = chi by a weighted sum of observed Y values, where observations with X values near chi tend to have larger weights. In this report the weights are taken to represent a finite signed measure on the space of Y values. This measure is studied as an estimate of the conditional distribution of Y given X = chi. From estimates of the conditional distribution, estimates of conditional means, standard deviations, quantiles and other statistical functionals may be computed. Chapter 1 illustrates the computation of conditional quantiles and conditional survival probabilities on the Stanford Heart Transplant data. Chapter 2 contains a survey of nonparametric regression methods and introduces statistical metrics and von Mises' method for later use. Chapter 3 proves some consistency results. Chapter 4 provides conditions under which the suitably normalized errors in estimating the conditional distribution of Y have a Brownian limit. Using von Mises' method, asymptotic normality is obtained for nonparametric conditional estimates of compactly differentiable statistical functionals.
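The weighted-sum construction described above can be sketched with Gaussian (Nadaraya-Watson) weights: normalized, the weights form a discrete probability measure on the observed Y values, from which conditional means, quantiles, and other functionals follow directly. The bandwidth h and the function names below are illustrative choices, not the report's notation:

```python
import numpy as np

def conditional_weights(x_obs, x0, h):
    """Gaussian kernel weights, normalized to a discrete probability
    measure on the observed Y values: an estimate of the law of Y | X=x0."""
    w = np.exp(-0.5 * ((np.asarray(x_obs, float) - x0) / h) ** 2)
    return w / w.sum()

def conditional_mean(x_obs, y_obs, x0, h):
    # weighted sum of observed Y values = Nadaraya-Watson regression
    return np.dot(conditional_weights(x_obs, x0, h), y_obs)

def conditional_quantile(x_obs, y_obs, x0, h, q):
    """Weighted empirical quantile of Y under the conditional measure."""
    y = np.asarray(y_obs, float)
    order = np.argsort(y)
    cdf = np.cumsum(conditional_weights(x_obs, x0, h)[order])
    idx = np.searchsorted(cdf, q)
    return y[order][min(idx, y.size - 1)]
```

The same weighted measure yields conditional standard deviations or survival probabilities by applying the corresponding functional, which is the report's central point.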
Estimating networks with jumps
Kolar, Mladen; Xing, Eric P.
2013-01-01
We study the problem of estimating a temporally varying coefficient and varying structure (VCVS) graphical model underlying data collected over a period of time, such as social states of interacting individuals or microarray expression profiles of gene networks, as opposed to i.i.d. data from an invariant model widely considered in current literature of structural estimation. In particular, we consider the scenario in which the model evolves in a piece-wise constant fashion. We propose a procedure that estimates the structure of a graphical model by minimizing the temporally smoothed L1 penalized regression, which allows jointly estimating the partition boundaries of the VCVS model and the coefficient of the sparse precision matrix on each block of the partition. A highly scalable proximal gradient method is proposed to solve the resultant convex optimization problem; and the conditions for sparsistent estimation and the convergence rate of both the partition boundaries and the network structure are established for the first time for such estimators. PMID:25013533
Adaptive spectral doppler estimation.
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-04-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set of matched filters (one for each velocity component of interest) and filtering the blood process over slow-time and averaging over depth to find the PSD. The methods are tested using various experiments and simulations. First, controlled flow-rig experiments with steady laminar flow are carried out. Simulations in Field II for pulsating flow resembling the femoral artery are also analyzed. The simulations are followed by in vivo measurement on the common carotid artery. In all simulations and experiments it was concluded that the adaptive methods display superior performance for short observation windows compared with the averaged periodogram. Computational costs and implementation details are also discussed.
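The baseline the two adaptive methods are compared against, the averaged periodogram (Welch's method), splits the signal into overlapping windowed segments and averages their periodograms. A minimal sketch of that baseline, not of the authors' ultrasound processing chain:

```python
import numpy as np

def welch_psd(x, seg_len, overlap=0.5, fs=1.0):
    """Welch's method: average windowed periodograms of overlapping
    segments.  Returns frequencies and the PSD estimate."""
    x = np.asarray(x, float)
    step = int(seg_len * (1 - overlap))
    win = np.hanning(seg_len)
    norm = fs * (win ** 2).sum()          # window power normalization
    psds = []
    for start in range(0, x.size - seg_len + 1, step):
        seg = x[start:start + seg_len] * win
        psds.append(np.abs(np.fft.rfft(seg)) ** 2 / norm)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, np.mean(psds, axis=0)
```

The trade-off the paper addresses is visible here: a short `seg_len` improves temporal resolution but degrades spectral resolution, which is what motivates the adaptive (Capon- and APES-style) alternatives.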
Subelliptic Estimates for Complexes
Guillemin, Victor; Sternberg, Shlomo
1970-01-01
New results are announced linking properties of the symbol module and characteristic variety of a differential complex with test estimates near the characteristic variety of the type considered by Hörmander (½-estimate). The first result is the invariance of the test estimates under pseudo-differential change of coordinates, and this leads to the introduction of a normal form for the complex in the neighborhood of a Cohen-Macaulay point of the symbol module. If the characteristic variety V is a manifold near the Cohen-Macaulay point (x0,ζ0) with parametrizing functions p1,...,pq, where q is the codimension of the characteristic variety in the complexified cotangent bundle, the matrix [Formula: see text] of Poisson brackets defines invariantly a Hermitian form Q on the normal space to V at (x0,ζ0) when the dp_i(x0,ζ0) are used as a basis, and the test estimates are satisfied at the ith stage of the complex if sig. Q (signature of Q) is ≥ n - i + 1 (n the dimension of the base manifold) or rank Q - sig. Q ≥ i + 1. Finally, conditions are given in order that, on a manifold with smooth boundary, the associated boundary complexes satisfy the ½-estimate. PMID:16591855
Estimating Commit Sizes Efficiently
NASA Astrophysics Data System (ADS)
Hofmann, Philipp; Riehle, Dirk
The quantitative analysis of software projects can provide insights that let us better understand open source and other software development projects. An important variable used in the analysis of software projects is the amount of work being contributed, the commit size. Unfortunately, post-facto, the commit size can only be estimated, not measured. This paper presents several algorithms for estimating the commit size. Our performance evaluation shows that simple, straightforward heuristics are superior to the more complex text-analysis-based algorithms. Not only are the heuristics significantly faster to compute, they also deliver more accurate results when estimating commit sizes. Based on this experience, we design and present an algorithm that improves on the heuristics, can be computed equally fast, and is more accurate than any of the prior approaches.
Thermodynamic estimation: Ionic materials
Glasser, Leslie
2013-10-15
Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
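The single-ion additivity idea described above reduces, in code, to a weighted sum over the formula unit. A trivial sketch; any per-ion contribution values supplied by a user would come from published single-ion tables, and the numbers used in any example are made up for illustration:

```python
def additive_property(formula_ions, contributions):
    """Estimate a thermodynamic property of an ionic solid as the sum of
    single-ion contributions (the additivity scheme described above).
    formula_ions maps ion -> count in the formula unit; contributions
    maps ion -> its per-ion contribution (user-supplied, from tables)."""
    return sum(n * contributions[ion] for ion, n in formula_ions.items())
```

The same function covers the "double salt" variant by passing neutral salt fragments instead of single ions as the keys.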
Ability Estimation for Conventional Tests.
ERIC Educational Resources Information Center
Kim, Jwa K.; Nicewander, W. Alan
1993-01-01
Bias, standard error, and reliability of five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. Results indicate that estimates based on Bayesian modal, expected a posteriori, and weighted likelihood estimators were reasonably unbiased with relatively small standard…
Quantifying surface normal estimation
NASA Astrophysics Data System (ADS)
Reid, Robert B.; Oxley, Mark E.; Eismann, Michael T.; Goda, Matthew E.
2006-05-01
An inverse algorithm for surface normal estimation from thermal polarimetric imagery was developed and used to quantify the requirements on a priori information. Building on existing knowledge that calculates the degree of linear polarization (DOLP) and the angle of polarization (AOP) for a given surface normal in a forward model (from an object's characteristics to calculation of the DOLP and AOP), this research quantifies the impact of a priori information with the development of an inverse algorithm to estimate surface normals from thermal polarimetric emissions in long-wave infrared (LWIR). The inverse algorithm assumes a polarized infrared focal plane array capturing LWIR intensity images which are then converted to Stokes vectors. Next, the DOLP and AOP are calculated from the Stokes vectors. Last, the viewing angles, θv, to the surface normals are estimated assuming perfect material information about the imaged scene. A sensitivity analysis is presented to quantitatively describe the a priori information's impact on the amount of error in the estimation of surface normals, and a bound is determined given perfect information about an object. Simulations explored the impact of surface roughness (σ) and the real component (n) of a dielectric's complex index of refraction across a range of viewing angles (θv) for a given wavelength of observation.
Numerical Estimation in Preschoolers
ERIC Educational Resources Information Center
Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco
2010-01-01
Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…
Thermodynamically Correct Bioavailability Estimations
1992-04-30
Approved for public release; distribution unlimited. … research is to develop thermodynamically correct bioavailability estimations using chromatographic stationary phases as a model of the "interphase
Activities: Visualization, Estimation, Computation.
ERIC Educational Resources Information Center
Maletsky, Evan M.
1982-01-01
The material is designed to help students build a cone model, visualize how its dimensions change as its shape changes, estimate maximum volume position, and develop problem-solving skills. Worksheets designed for duplication for classroom use are included. Part of the activity involves student analysis of a BASIC program. (MP)
ERIC Educational Resources Information Center
Landy, David; Silbert, Noah; Goldin, Aleah
2013-01-01
Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…
NASA Technical Reports Server (NTRS)
Chung, W.; Meng, S. Y.; Meng, C. Y.
1984-01-01
Blockage predicted for all components including inducers, impellers and diffusers. Pump performance predicted by semiempirical method shows excellent agreement with test results in Space Shuttle main-engine high-pressure fuel turbopump. Comparisons of pump efficiency show equally good agreement of calculated values with experimental ones. Method improves current estimation methods based solely on subjective engineering judgment.
ERIC Educational Resources Information Center
Moseley, Christine
2007-01-01
The purpose of this activity was to help students understand the percentage of cloud cover and make more accurate cloud cover observations. Students estimated the percentage of cloud cover represented by simulated clouds and assigned a cloud cover classification to those simulations. (Contains 2 notes and 3 tables.)
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
Interval estimations in metrology
NASA Astrophysics Data System (ADS)
Mana, G.; Palmisano, C.
2014-06-01
This paper investigates interval estimation for a measurand that is known to be positive. Both the Neyman and Bayesian procedures are considered and the difference between the two, not always perceived, is discussed in detail. A solution is proposed to a paradox originated by the frequentist assessment of the long-run success rate of Bayesian intervals.
ERIC Educational Resources Information Center
McDonald, Judith A.; Thornton, Robert J.
2011-01-01
Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…
Numerical estimation of densities
NASA Astrophysics Data System (ADS)
Ascasibar, Y.; Binney, J.
2005-01-01
We present a novel technique, dubbed FIESTAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FIESTAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor of ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. The density range over which the estimates are unbiased widens as the particle number increases. Our tests show that real-space densities obtained with an SPH kernel are significantly more biased than those yielded by FIESTAS. In phase space, about 10 times more particles are required in order to achieve a similar accuracy. As a second application we have estimated phase-space densities in a dark matter halo from a cosmological simulation. We confirm the results of Arad, Dekel & Klypin that the highest values of f are all associated with substructure rather than the main halo, and that the volume function v(f) ~ f^(-2.5) over about four orders of magnitude in f. We show that a modified version of the toy model proposed by Arad et al. explains this result and suggests that the departures of v(f) from power-law form are not mere numerical artefacts. We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FIESTAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in six dimensions.
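FIESTAS itself cannot be reconstructed from the abstract; but its underlying idea, assigning each sample a volume from a space partition and taking density ≈ counts/volume, is close in spirit to a k-nearest-neighbour density estimate, sketched here with a k-d tree (the choice of k and the test sample are illustrative assumptions, not the paper's algorithm):

```python
from math import gamma, pi

import numpy as np
from scipy.spatial import cKDTree

def knn_density(points, queries, k=32):
    """Estimate density at `queries` as k / (N * volume of the ball
    reaching the k-th nearest neighbour), in arbitrary dimension d."""
    points = np.atleast_2d(points)
    n, d = points.shape
    tree = cKDTree(points)
    dist, _ = tree.query(np.atleast_2d(queries), k=k)
    r_k = dist[..., -1]                          # radius to k-th neighbour
    vol = pi ** (d / 2) / gamma(d / 2 + 1) * r_k ** d   # d-ball volume
    return k / (n * vol)

# Uniform points in the unit square have true density 1 everywhere.
rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=(4000, 2))
est = knn_density(sample, [[0.5, 0.5]])[0]
```

The estimate fluctuates at the ~1/sqrt(k) level, echoing the Poisson-noise spread the abstract describes before smoothing.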
J-adaptive estimation with estimated noise statistics
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Hipkins, C.
1973-01-01
The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
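The FORTRAN 77 implementation described above is not reproduced here; for Gaussian processes the maximum-entropy spectrum coincides with an autoregressive fit, which can be sketched via the Yule-Walker equations (the model order and test signal are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def mem_psd(x, order, nfreq=512):
    """Maximum-entropy (autoregressive) power spectrum via the
    Yule-Walker equations; frequencies returned in cycles/sample."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    a = solve_toeplitz(r[:order], r[1:])     # AR coefficients
    sigma2 = r[0] - a @ r[1:]                # prediction-error power
    f = np.linspace(0.0, 0.5, nfreq)
    A = 1.0 - np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1))) @ a
    return f, sigma2 / np.abs(A) ** 2

# A noisy tone at 0.1 cycles/sample should produce a sharp spectral peak there.
rng = np.random.default_rng(1)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)
f, p = mem_psd(x, order=8)
```

The sharp AR peak illustrates the resolution advantage of maximum-entropy methods that the abstract evaluates.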
Optimal Centroid Position Estimation
Candy, J V; McClay, W A; Awwal, A S; Ferguson, S W
2004-07-23
The alignment of high energy laser beams for potential fusion experiments demands high precision and accuracy from the underlying positioning algorithms. This paper discusses the feasibility of employing online optimal position estimators in the form of model-based processors to achieve the desired results. Here we discuss the modeling, development, implementation and processing of model-based processors applied to both simulated and actual beam line data.
Nonparametric Conditional Estimation
1987-02-01
have a Brownian limit. Using von Mises' method, asymptotic normality is obtained for nonparametric conditional estimates of compactly differentiable statistical functionals. This research was supported by Office of Naval Research Contract N00014-83-K-0472 and by the National Science Foundation. (Partial table of contents: 2.5 Models for F; 2.6 Compact Differentiability and von Mises' Method; 3. Consistency; 3.1 Introduction and Definitions; 3.2 Prohorov Consistency.)
Airborne Crowd Density Estimation
NASA Astrophysics Data System (ADS)
Meynberg, O.; Kuschk, G.
2013-10-01
This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done with in-situ camera systems mounted at elevated locations, although this is not appropriate for very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging realistic scenarios.
Estimating directional epistasis
Le Rouzic, Arnaud
2014-01-01
Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828
Bayesian Error Estimation Functionals
NASA Astrophysics Data System (ADS)
Jacobsen, Karsten W.
The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
Injury Risk Estimation Expertise
Petushek, Erich J.; Ward, Paul; Cokely, Edward T.; Myer, Gregory D.
2015-01-01
Background: Simple observational assessment of movement is a potentially low-cost method for anterior cruciate ligament (ACL) injury screening and prevention. Although many individuals utilize some form of observational assessment of movement, there are currently no substantial data on group skill differences in observational screening of ACL injury risk. Purpose/Hypothesis: The purpose of this study was to compare various groups’ abilities to visually assess ACL injury risk as well as the associated strategies and ACL knowledge levels. The hypothesis was that sports medicine professionals would perform better than coaches and exercise science academics/students and that these subgroups would all perform better than parents and other general population members. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A total of 428 individuals, including physicians, physical therapists, athletic trainers, strength and conditioning coaches, exercise science researchers/students, athletes, parents, and members of the general public participated in the study. Participants completed the ACL Injury Risk Estimation Quiz (ACL-IQ) and answered questions related to assessment strategy and ACL knowledge. Results: Strength and conditioning coaches, athletic trainers, physical therapists, and exercise science students exhibited consistently superior ACL injury risk estimation ability (+2 SD) as compared with sport coaches, parents of athletes, and members of the general public. The performance of a substantial number of individuals in the exercise sciences/sports medicines (approximately 40%) was similar to or exceeded clinical instrument-based biomechanical assessment methods (eg, ACL nomogram). Parents, sport coaches, and the general public had lower ACL-IQ, likely due to their lower ACL knowledge and to rating the importance of knee/thigh motion lower and weight and jump height higher. Conclusion: Substantial cross-professional/group differences in visual ACL
Estimation for bilinear stochastic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Marcus, S. I.
1974-01-01
Three techniques for the solution of bilinear estimation problems are presented. First, finite dimensional optimal nonlinear estimators are presented for certain bilinear systems evolving on solvable and nilpotent Lie groups. Then the use of harmonic analysis for estimation problems evolving on spheres and other compact manifolds is investigated. Finally, an approximate estimation technique utilizing cumulants is discussed.
Los Alamos PC estimating system
Stutz, R.A.; Lemon, G.D.
1987-01-01
The Los Alamos Cost Estimating System (QUEST) is being converted to run on IBM personal computers. This very extensive estimating system is capable of supporting cost estimators from many different and varied fields. QUEST does not dictate any fixed method for estimating. QUEST supports many styles and levels of detail estimating. QUEST can be used with or without data bases. This system allows the estimator to provide reports based on levels of detail defined by combining work breakdown structures. QUEST provides a set of tools for doing any type of estimate without forcing the estimator to use any given method. The level of detail in the estimate can be mixed based on the amount of information known about different parts of the project. The system can support many different data bases simultaneously. Estimators can modify any cost in any data base.
Estimation of Lung Ventilation
NASA Astrophysics Data System (ADS)
Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.
Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to this data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures to estimate regional ventilation from image registration of CT images, the comparison of them to the existing golden standard and the application in radiation therapy.
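The registration-based expansion measure described above is, at each voxel, the determinant of the Jacobian of the deformation; a minimal sketch on a synthetic displacement field (the grid size and the uniform-expansion field are illustrative assumptions):

```python
import numpy as np

def jacobian_determinant(u, spacing=(1.0, 1.0, 1.0)):
    """Local volume change det(I + grad u) per voxel for a 3-D
    displacement field u with shape (3, nx, ny, nz)."""
    jac = np.empty(u.shape[1:] + (3, 3))
    for i in range(3):
        g = np.gradient(u[i], *spacing)      # du_i/dx_j for j = 0, 1, 2
        for j in range(3):
            jac[..., i, j] = g[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)

# Synthetic uniform 10% expansion: u(x) = 0.1 x, so det J = 1.1**3 everywhere.
ax = np.arange(8, dtype=float)
X = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"))
detj = jacobian_determinant(0.1 * X)
```

Values above 1 indicate local tissue expansion (inspiration), below 1 local compression, which is what the ventilation maps in this chapter quantify.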
1976-04-09
summarizes two methods of spectral estimation given in Carter, Knapp, and Nuttall (1973a) and Carter and Knapp (1975). Appendix B gives important results from Carter and Knapp (1975): R_xy(τ) = K R_xx(τ), (2-22) where K is a constant given by an integral of the nonlinearity n(x) against a Gaussian weight exp(-x²/2σ²) (2-23). Therefore, ... the Fourier transform of a convolution is the multiplication (Oppenheim and Schafer (1975)) Y(f) = H(f)X(f), (2-25) where X, H, and Y are the Fourier transforms of x, h, and y.
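The relation Y(f) = H(f)X(f) in the excerpt is the convolution theorem; a quick numerical check with zero-padded DFTs (the signal lengths are arbitrary):

```python
import numpy as np

# Convolution theorem: the DFT of a (zero-padded) linear convolution
# equals the product of the signals' DFTs.
rng = np.random.default_rng(0)
x = rng.standard_normal(32)
h = rng.standard_normal(16)

n = len(x) + len(h) - 1                       # length of the linear convolution
y = np.convolve(x, h)                         # time-domain convolution
Y = np.fft.fft(x, n) * np.fft.fft(h, n)       # frequency-domain product
```

Inverting Y recovers y to machine precision, which is why spectral relations like (2-25) can replace time-domain convolutions.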
Estimating carnivore community structures.
Jiménez, José; Nuñez-Arjona, Juan Carlos; Rueda, Carmen; González, Luis Mariano; García-Domínguez, Francisco; Muñoz-Igualada, Jaime; López-Bao, José Vicente
2017-01-25
Obtaining reliable estimates of the structure of carnivore communities is of paramount importance because of their ecological roles, ecosystem services and impact on biodiversity conservation, but they are still scarce. This information is key for carnivore management: to build support for and acceptance of management decisions and policies it is crucial that those decisions are based on robust and high quality information. Here, we combined camera and live-trapping surveys, as well as telemetry data, with spatially-explicit Bayesian models to show the usefulness of an integrated multi-method and multi-model approach to monitor carnivore community structures. Our methods account for imperfect detection and effectively deal with species with non-recognizable individuals. In our Mediterranean study system, the terrestrial carnivore community was dominated by red foxes (0.410 individuals/km²); Egyptian mongooses, feral cats and stone martens were similarly abundant (0.252, 0.249 and 0.240 individuals/km², respectively), whereas badgers and common genets were the least common (0.130 and 0.087 individuals/km², respectively). The precision of density estimates improved by incorporating multiple covariates, device operation, and accounting for the removal of individuals. The approach presented here has substantial implications for decision-making since it allows, for instance, the evaluation, in a standard and comparable way, of community responses to interventions.
Phenological Parameters Estimation Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.
2010-01-01
The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE
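The vegetation index underlying both tools is typically NDVI, a simple band ratio; a minimal sketch (the reflectance values are illustrative stand-ins for MODIS bands):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red light,
# so it scores much higher than bare soil.
v = ndvi(0.50, 0.08)    # vegetated pixel
b = ndvi(0.25, 0.20)    # bare-soil pixel
```

Per-pixel time series of this quantity are what the TSPT smooths and the PPET mines for phenological parameters.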
Estimating sparse precision matrices
NASA Astrophysics Data System (ADS)
Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross
2016-08-01
We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
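The abstract does not name the estimator it adapts, so the sketch below substitutes a different but related sparsity-exploiting technique, the graphical lasso (the dimension, sample size, and penalty are arbitrary assumptions):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 8
# Sparse tridiagonal precision matrix (positive definite by construction).
prec_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov_true = np.linalg.inv(prec_true)
X = rng.multivariate_normal(np.zeros(p), cov_true, size=500)

# L1-penalized maximum likelihood drives small precision entries to zero.
model = GraphicalLasso(alpha=0.05).fit(X)
prec_hat = model.precision_
err = np.linalg.norm(prec_hat - prec_true)    # Frobenius error vs truth
```

As in the paper's setting, the payoff of the sparsity assumption is a usable precision estimate from far fewer samples than inverting the sample covariance would require.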
Earthquake Loss Estimation Uncertainties
NASA Astrophysics Data System (ADS)
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander
2013-04-01
The paper addresses the reliability of loss assessment following strong earthquakes when worldwide systems are applied in emergency mode. Timely and correct action just after an event can yield significant benefits in saving lives, so information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of a disaster. The uncertainties in the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties in the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of the shaking; and so on. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not out of the reach of engineers for a large portion of the building stock); if a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage to the buildings is, by far, not precisely known. The paper analyzes the influence of uncertainties in the determination of strong-event parameters by Alert Seismological Surveys and in the simulation models used at all stages, from estimating shaking intensity
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
Precipitation Estimates for Hydroelectricity
NASA Technical Reports Server (NTRS)
Tapiador, Francisco J.; Hou, Arthur Y.; de Castro, Manuel; Checa, Ramiro; Cuartero, Fernando; Barros, Ana P.
2011-01-01
Hydroelectric plants require precise and timely estimates of rain, snow and other hydrometeors for operations. However, it is far from being a trivial task to measure and predict precipitation. This paper presents the linkages between precipitation science and hydroelectricity, and in doing so it provides insight into current research directions that are relevant for this renewable energy. Methods described include radars, disdrometers, satellites and numerical models. Two recent advances that have the potential of being highly beneficial for hydropower operations are featured: the Global Precipitation Measuring (GPM) mission, which represents an important leap forward in precipitation observations from space, and high performance computing (HPC) and grid technology, that allows building ensembles of numerical weather and climate models.
Robbins, H.
1981-01-01
Suppose that an unknown random parameter θ with distribution function G is such that, given θ, an observable random variable x has conditional probability density f(x|θ) of known form. If a function t = t(x) is used to estimate θ, then the expected squared error with respect to the random variation of both θ and x is E(t − θ)² = ∫∫ (t(x) − θ)² f(x|θ) dx dG(θ). For fixed G we can seek to minimize this quantity within any desired class of functions t, such as the class of all linear functions A + Bx, or the class of all Borel functions whatsoever.
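Within the class of linear functions A + Bx mentioned above, the minimizer of the expected squared error has a standard closed form (a routine least-squares computation, not spelled out in the abstract):

```latex
% Minimize E(A + Bx - \theta)^2 over A and B.
% Setting the derivatives to zero:
%   \partial_A : \; E(A + Bx - \theta) = 0
%   \partial_B : \; E\,x(A + Bx - \theta) = 0
% gives
B = \frac{\operatorname{Cov}(x,\theta)}{\operatorname{Var}(x)},
\qquad
A = E\theta - B\,Ex,
% so the best linear estimator is the linear Bayes (credibility) rule
t(x) = E\theta + \frac{\operatorname{Cov}(x,\theta)}{\operatorname{Var}(x)}\,\bigl(x - Ex\bigr).
```

Minimizing over all Borel functions instead yields the posterior mean E(θ | x), of which this is the best linear approximation.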
Uncertainties in transpiration estimates.
Coenders-Gerrits, A M J; van der Ent, R J; Bogaard, T A; Wang-Erlandsson, L; Hrachowitz, M; Savenije, H H G
2014-02-13
Arising from S. Jasechko et al., Nature 496, 347-350 (2013); doi:10.1038/nature11983. How best to assess the respective importance of plant transpiration over evaporation from open waters, soils and short-term storage such as tree canopies and understories (interception) has long been debated. On the basis of data from lake catchments, Jasechko et al. conclude that transpiration accounts for 80-90% of total land evaporation globally (Fig. 1a). However, another choice of input data, together with more conservative accounting of the related uncertainties, reduces and widens the transpiration ratio estimation to 35-80%. Hence, climate models do not necessarily conflict with observations, but more measurements on the catchment scale are needed to reduce the uncertainty range. There is a Reply to this Brief Communications Arising by Jasechko, S. et al., Nature 506, http://dx.doi.org/10.1038/nature12926 (2014).
Estimating earthquake potential
Page, R.A.
1980-01-01
The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential.
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
Estimation Strategies of Four Groups.
ERIC Educational Resources Information Center
Dowker, Ann; And Others
1996-01-01
Describes a study of the estimation skills of mathematicians (N=44), accountants (N=44), psychology students (N=44), and English students (N=44). Explores their methods of estimating the products and quotients of 20 problems. Contains 49 references. (DDR)
Model optimization using statistical estimation
NASA Technical Reports Server (NTRS)
Collins, J. D.; Hart, G. C.; Hasselman, T. K.; Kennedy, B.; Pack, H., Jr.
1974-01-01
Program revises initial or prior estimate of stiffness and mass parameters to parameters yielding frequency and mode characteristics in agreement with test data. Variances are also calculated and consequently define uncertainties of final estimates.
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
Error Estimates for Mixed Methods.
1979-03-01
This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)
Bayes' estimators of generalized entropies
NASA Astrophysics Data System (ADS)
Holste, D.; Große, I.; Herzel, H.
1998-03-01
The order-q Tsallis entropy S_q and Rényi entropy R_q receive broad applications in the statistical analysis of complex phenomena. A generic problem arises, however, when these entropies need to be estimated from observed data. The finite size of data sets can lead to serious systematic and statistical errors in numerical estimates. In this paper, we focus upon the problem of estimating generalized entropies from finite samples and derive the Bayes estimator of the order-q Tsallis entropy, including the order-1 (i.e. the Shannon) entropy, under the assumption of a uniform prior probability density. The Bayes estimator yields, in general, the smallest mean-quadratic deviation from the true parameter as compared with any other estimator. Exploiting the functional relationship between S_q and R_q, we use the Bayes estimator of S_q to estimate the Rényi entropy R_q. We compare these novel estimators with the frequency-count estimators for S_q and R_q. We find by numerical simulations that the Bayes estimator reduces statistical errors of order-q entropy estimates for Bernoulli as well as for higher-order Markov processes derived from the complete genome of the prokaryote Haemophilus influenzae.
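The paper's closed-form estimators are not given in the abstract; as an illustrative sketch of the order-1 (Shannon) case, the posterior-mean entropy under a uniform Dirichlet prior has a standard digamma expression (the Dirichlet-expectation formula, used here as an assumption rather than the paper's own derivation), shown next to the frequency-count (plug-in) estimator:

```python
import numpy as np
from scipy.special import digamma

def entropy_plugin(counts):
    """Frequency-count (maximum-likelihood) Shannon entropy, in nats."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_bayes(counts):
    """Posterior-mean Shannon entropy under a uniform Dirichlet prior:
    with a_i = n_i + 1 and A = sum(a), E[H] = psi(A+1) - sum (a_i/A) psi(a_i+1)."""
    a = counts + 1.0
    A = a.sum()
    return float(digamma(A + 1) - np.sum(a / A * digamma(a + 1)))

rng = np.random.default_rng(2)
k = 4                                    # alphabet size; true entropy is ln(k)
counts = np.bincount(rng.integers(0, k, 5000), minlength=k).astype(float)
h_plugin, h_bayes = entropy_plugin(counts), entropy_bayes(counts)
```

On small samples the two estimators differ noticeably (the plug-in is biased low); on a large sample, as here, both converge toward ln(4).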
Estimating concurrence via entanglement witnesses
Jurkowski, Jacek; Chruscinski, Dariusz
2010-05-15
We show that each entanglement witness detecting a given bipartite entangled state provides an estimation of its concurrence. We illustrate our result with several well-known examples of entanglement witnesses and compare the corresponding concurrence estimates with those provided by the trace norm of the partial transposition and by realignment.
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
AN ESTIMATE OF THE DETECTABILITY OF RISING FLUX TUBES
Birch, A. C.; Braun, D. C.; Fan, Y.
2010-11-10
The physics of the formation of magnetic active regions (ARs) is one of the most important problems in solar physics. One main class of theories suggests that ARs are the result of magnetic flux that rises from the tachocline. Time-distance helioseismology, which is based on measurements of wave propagation, promises to allow the study of the subsurface behavior of this magnetic flux. Here, we use a model for a buoyant magnetic flux concentration together with the ray approximation to show that the dominant effect on the wave propagation is expected to be from the roughly 100 m s⁻¹ retrograde flow associated with the rising flux. Using a B-spline-based method for carrying out inversions of wave travel times for flows in spherical geometry, we show that at 3 days before emergence the detection of this retrograde flow at a depth of 30 Mm should be possible with a signal-to-noise level of about 8 with a sample of 150 emerging ARs.
Uveal melanoma: estimating prognosis.
Kaliki, Swathi; Shields, Carol L; Shields, Jerry A
2015-02-01
Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic, and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, as well as tumors classified as Class II by gene expression profiling, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.
Estrada, Rolando; Tomasi, Carlo; Schmidler, Scott C.; Farsiu, Sina
2015-01-01
Tree-like structures are fundamental in nature, and it is often useful to reconstruct the topology of a tree—what connects to what—from a two-dimensional image of it. However, the projected branches often cross in the image: the tree projects to a planar graph, and the inverse problem of reconstructing the topology of the tree from that of the graph is ill-posed. We regularize this problem with a generative, parametric tree-growth model. Under this model, reconstruction is possible in linear time if one knows the direction of each edge in the graph—which edge endpoint is closer to the root of the tree—but becomes NP-hard if the directions are not known. For the latter case, we present a heuristic search algorithm to estimate the most likely topology of a rooted, three-dimensional tree from a single two-dimensional image. Experimental results on retinal vessel, plant root, and synthetic tree datasets show that our methodology is both accurate and efficient. PMID:26353004
NASA Technical Reports Server (NTRS)
Everett, L.
1992-01-01
This report documents the performance characteristics of a Targeting Reflective Alignment Concept (TRAC) sensor. The performance is documented for both short and long ranges. For long ranges, the sensor is used without the flat mirror attached to the target. To better understand the capabilities of TRAC-based sensors, an engineering model is required. The model can be used to better design the system for a particular application. This is necessary because there are many interrelated design variables in an application, including lens, camera, and target configuration. The report presents first an analytical development of the performance, and second an experimental verification of the equations. The analytical presentation assumes that the best vision resolution is a single pixel element. The experimental results suggest, however, that the resolution is better than one pixel; hence the analytical results should be considered worst-case conditions. The report also discusses advantages and limitations of the TRAC sensor in light of the performance estimates. Finally, the report discusses potential improvements.
Precision cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Fendt, William Ashton, Jr.
2009-09-01
methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
A priori SNR estimation and noise estimation for speech enhancement.
Yao, Rui; Zeng, ZeQing; Zhu, Ping
2016-01-01
A priori signal-to-noise ratio (SNR) estimation and noise estimation are important for speech enhancement. In this paper, a novel modified decision-directed (DD) a priori SNR estimation approach based on single-frequency entropy, named DDBSE, is proposed. DDBSE replaces the fixed weighting factor in the DD approach with an adaptive one calculated according to change of single-frequency entropy. Simultaneously, a new noise power estimation approach based on unbiased minimum mean square error (MMSE) and voice activity detection (VAD), named UMVAD, is proposed. UMVAD adopts different strategies to estimate noise in order to reduce over-estimation and under-estimation of noise. UMVAD improves the classical statistical model-based VAD by utilizing an adaptive threshold to replace the original fixed one and modifies the unbiased MMSE-based noise estimation approach using an adaptive a priori speech presence probability calculated by entropy instead of the original fixed one. Experimental results show that DDBSE can provide greater noise suppression than DD and UMVAD can improve the accuracy of noise estimation. Compared to existing approaches, speech enhancement based on UMVAD and DDBSE can obtain a better segment SNR score and composite measure covl score, especially in adverse environments such as non-stationary noise and low-SNR.
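The baseline that DDBSE modifies is the classic decision-directed recursion, which blends the previous frame's clean-speech estimate with the current maximum-likelihood term through a fixed weight alpha. A minimal single-frequency-bin sketch of that classic baseline (the adaptive, entropy-driven weighting of DDBSE is not reproduced here):

```python
def dd_a_priori_snr(noisy_power, noise_power, alpha=0.98):
    """Decision-directed a priori SNR for one frequency bin across frames.

    noisy_power: |Y(l)|^2 per frame; noise_power: estimated noise PSD per frame.
    Returns the xi(l) sequence; a Wiener gain supplies the clean-speech estimate.
    """
    xi_seq, prev_clean = [], 0.0
    for y2, d2 in zip(noisy_power, noise_power):
        gamma = y2 / d2                      # a posteriori SNR
        ml = max(gamma - 1.0, 0.0)           # maximum-likelihood term
        xi = alpha * prev_clean / d2 + (1.0 - alpha) * ml
        gain = xi / (1.0 + xi)               # Wiener gain from the a priori SNR
        prev_clean = (gain ** 2) * y2        # |A_hat(l)|^2 carried to the next frame
        xi_seq.append(xi)
    return xi_seq
```

Replacing the fixed `alpha` with a value driven by single-frequency entropy, as the abstract describes, is the DDBSE refinement.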
Quantum Estimation, meet Computational Statistics; Computational Statistics, meet Quantum Estimation
NASA Astrophysics Data System (ADS)
Ferrie, Chris; Granade, Chris; Combes, Joshua
2013-03-01
Quantum estimation, that is, post processing data to obtain classical descriptions of quantum states and processes, is an intractable problem--scaling exponentially with the number of interacting systems. Thankfully there is an entire field, Computational Statistics, devoted to designing algorithms to estimate probabilities for seemingly intractable problems. So, why not look to the most advanced machine learning algorithms for quantum estimation tasks? We did. I'll describe how we adapted and combined machine learning methodologies to obtain an online learning algorithm designed to estimate quantum states and processes.
Estimators for the Cauchy distribution
Hanson, K.M.; Wolf, D.R.
1993-12-31
We discuss the properties of various estimators of the central position of the Cauchy distribution. The performance of these estimators is evaluated for a set of simulated experiments. Estimators based on the maximum and mean of the posterior probability density function are empirically found to be well behaved when more than two measurements are available. In contrast, because of the infinite variance of the Cauchy distribution, the average of the measured positions is an extremely poor estimator of the location of the source. However, the median of the measured positions is well behaved. The rms errors for the various estimators are compared to the Fisher-Cramer-Rao lower bound. We find that the square root of the variance of the posterior density function is predictive of the rms error in the mean posterior estimator.
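The mean-versus-median contrast described above is easy to reproduce with a small simulation; the sample size, trial count, and seed below are illustrative choices of mine, not the paper's experiments:

```python
import math
import random
import statistics

def cauchy_location_errors(loc=2.0, n=101, trials=200, seed=7):
    """RMS error of the sample mean vs. the sample median for Cauchy data."""
    rng = random.Random(seed)
    se_mean = se_median = 0.0
    for _ in range(trials):
        # Inverse-CDF sampling: loc + tan(pi*(U - 1/2)) is Cauchy centered at loc.
        x = [loc + math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
        se_mean += (statistics.fmean(x) - loc) ** 2
        se_median += (statistics.median(x) - loc) ** 2
    return (se_mean / trials) ** 0.5, (se_median / trials) ** 0.5
```

The mean's RMS error does not shrink with n (the mean of Cauchy samples is itself Cauchy-distributed), while the median's falls roughly like pi/(2*sqrt(n)).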
State Estimation for Humanoid Robots
2015-07-01
It is natural for a controller to produce force commands to the robot using inverse dynamics. Model-based control and state estimation rely on the accuracy of the model. We address the...
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
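The MVAR quantity itself (though not the paper's edf analysis) can be computed directly from evenly spaced phase data: average m second differences of the phase, square, and normalize. A compact sketch of the standard phase-data estimator, not the authors' code:

```python
def mod_allan_var(x, m, tau0=1.0):
    """Modified Allan variance at averaging time m*tau0 from phase data x (seconds)."""
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    tau = m * tau0
    total = 0.0
    for j in range(n - 3 * m + 1):
        # average of m second differences of the phase ("third-difference" structure
        # when viewed through the integrated phase)
        s = sum(x[i + 2 * m] - 2.0 * x[i + m] + x[i] for i in range(j, j + m))
        total += (s / m) ** 2
    return total / (2.0 * tau * tau * (n - 3 * m + 1))
```

A pure frequency offset (linear phase ramp) gives zero, as it should; noisy phase data give a positive value whose confidence interval is what the edf quantifies.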
Robust and intelligent bearing estimation
Claassen, John P.
2000-01-01
A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.
1982-05-01
correlation function and is equivalent to an e_n-transformation [11] of the same function. Gray, Houston, and Morgan (GHM) noted the estimator to have some... satisfactory way of selecting the proper value of n in the e_n-transform. GHM went on to conclude that an ARMA spectral estimator would probably have... which will be seen to avoid the difficulties noted by GHM and will, in fact, be shown to be equivalent to a method-of-moments ARMA spectral estimator
Spring Small Grains Area Estimation
NASA Technical Reports Server (NTRS)
Palmer, W. F.; Mohler, R. J.
1986-01-01
SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes the development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.
ALTERNATIVE APPROACH TO ESTIMATING CANCER ...
The alternative approach for estimating cancer potency from inhalation exposure to asbestos seeks to improve the methods developed by USEPA (1986). This effort seeks to modify the current approach for estimating cancer potency for lung cancer and mesothelioma to account for the current scientific consensus that cancer risk from asbestos depends both on mineral type and on particle size distribution. In brief, epidemiological exposure-response data for lung cancer and mesothelioma in asbestos workers are combined with estimates of the mineral type(s) and particle size distribution at each exposure location in order to estimate potency factors that are specific to a selected set of mineral type and size
Asymptotic Normality of Quadratic Estimators.
Robins, James; Li, Lingling; Tchetgen, Eric; van der Vaart, Aad
2016-12-01
We prove conditional asymptotic normality of a class of quadratic U-statistics that are dominated by their degenerate second order part and have kernels that change with the number of observations. These statistics arise in the construction of estimators in high-dimensional semi- and non-parametric models, and in the construction of nonparametric confidence sets. This is illustrated by estimation of the integral of a square of a density or regression function, and estimation of the mean response with missing data. We show that estimators are asymptotically normal even in the case that the rate is slower than the square root of the observations.
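One of the examples named above, estimating the integral of a squared density, is a classic quadratic U-statistic: average a kernel over all pairs of observations. A hedged sketch using a box kernel and a hand-picked bandwidth (my choices for illustration, not the paper's construction):

```python
import random

def integral_f_squared(sample, h=0.1):
    """Pairwise U-statistic estimate of the integral of f^2 for a density f.

    Box kernel of bandwidth h: each unordered pair contributes 1/(2h)
    when |X_i - X_j| < h, so the estimate is (# close pairs) / (n*(n-1)*h).
    """
    n = len(sample)
    hits = sum(
        1 for i in range(n) for j in range(i + 1, n)
        if abs(sample[i] - sample[j]) < h
    )
    return 2.0 * hits / (n * (n - 1) * 2.0 * h)
```

For the uniform density on [0, 1] the target value is 1; the kernel introduces an O(h) boundary bias, which is the kind of second-order behavior the paper's asymptotic analysis addresses.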
Estimating the Cost of Doing a Cost Estimate
NASA Technical Reports Server (NTRS)
Remer, D. S.; Buchanan, H. R.
1996-01-01
This article provides a model for estimating the cost required to do a cost estimate... Our earlier work provided data for high-technology projects. This article adds data from the construction industry, which validates the model over a wider range of technology.
Estimating Absolute Site Effects
Malagnini, L; Mayeda, K M; Akinci, A; Bragato, P L
2004-07-15
The authors use previously determined direct-wave attenuation functions as well as stable, coda-derived source excitation spectra to isolate the absolute S-wave site effect for the horizontal and vertical components of weak ground motion. They used selected stations in the seismic network of the eastern Alps, and find the following: (1) all "hard rock" sites exhibited deamplification phenomena due to absorption at frequencies ranging between 0.5 and 12 Hz (the available bandwidth), on both the horizontal and vertical components; (2) "hard rock" site transfer functions showed large variability at high frequency; (3) vertical-motion site transfer functions show strong frequency dependence; (4) H/V spectral ratios do not reproduce the characteristics of the true horizontal site transfer functions; (5) traditional, relative site terms obtained by using reference "rock sites" can be misleading in inferring the behaviors of true site transfer functions, since most rock sites have non-flat responses due to shallow heterogeneities resulting from varying degrees of weathering. They also use their stable source spectra to estimate total radiated seismic energy and compare against previous results. They find that the earthquakes in this region exhibit non-constant dynamic stress drop scaling, which gives further support for a fundamental difference in rupture dynamics between small and large earthquakes. To correct the vertical and horizontal S-wave spectra for attenuation, they used detailed regional attenuation functions derived by Malagnini et al. (2002), who determined frequency-dependent geometrical spreading and Q for the region. These corrections account for the gross path effects (i.e., all distance-dependent effects), although the source and site effects are still present in the distance-corrected spectra. The main goal of this study is to isolate the absolute site effect (as a function of frequency) by removing the source spectrum (moment-rate spectrum) from
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Space vehicle pose estimation via optical correlation and nonlinear estimation
NASA Astrophysics Data System (ADS)
Rakoczy, John M.; Herren, Kenneth A.
2008-03-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
Frequency tracking and parameter estimation for robust quantum state estimation
Ralph, Jason F.; Jacobs, Kurt; Hill, Charles D.
2011-11-15
In this paper we consider the problem of tracking the state of a quantum system via a continuous weak measurement. If the system Hamiltonian is known precisely, this merely requires integrating the appropriate stochastic master equation. However, even a small error in the assumed Hamiltonian can render this approach useless. The natural answer to this problem is to include the parameters of the Hamiltonian as part of the estimation problem, and the full Bayesian solution to this task provides a state estimate that is robust against uncertainties. However, this approach requires considerable computational overhead. Here we consider a single qubit in which the Hamiltonian contains a single unknown parameter. We show that classical frequency estimation techniques greatly reduce the computational overhead associated with Bayesian estimation and provide accurate estimates for the qubit frequency.
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various linear, nonlinear, lossless, and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and with the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only more accurate and computationally efficient than the conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
ARSENIC REMOVAL COST ESTIMATING PROGRAM
The Arsenic Removal Cost Estimating program (Excel) calculates the costs for using adsorptive media and anion exchange treatment systems to remove arsenic from drinking water. The program is an easy-to-use tool to estimate capital and operating costs for three types of arsenic re...
Estimating Bottleneck Bandwidth using TCP
NASA Technical Reports Server (NTRS)
Allman, Mark
1998-01-01
Various issues associated with estimating bottleneck bandwidth using TCP are presented in viewgraph form. Specific topics include: 1) Why TCP is wanted to estimate the bottleneck bandwidth; 2) Setting ssthresh to an appropriate value to reduce loss; 3) Possible packet-pair solutions; and 4) Preliminary results: ACTS and the Internet.
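The packet-pair idea behind such TCP-based estimates infers the bottleneck bandwidth from the dispersion that back-to-back segments acquire at the narrow link: bandwidth is roughly packet size divided by inter-arrival spacing. A toy sketch with made-up numbers, not code from the presentation:

```python
def packet_pair_bandwidth(size_bytes, t_first, t_second):
    """Bottleneck bandwidth (bits/s) from the arrival spacing of a packet pair."""
    dispersion = t_second - t_first  # seconds between back-to-back arrivals
    if dispersion <= 0:
        raise ValueError("arrivals must be strictly ordered")
    return size_bytes * 8 / dispersion
```

For example, 1500-byte segments whose ACK-measured arrivals are 1 ms apart imply a roughly 12 Mbit/s bottleneck; real measurements must filter out queueing noise, which is where much of the difficulty lies.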
Estimation of Unattenuated Factor Loadings.
ERIC Educational Resources Information Center
Woodward, Todd S.; Hunter, Michael A.
1999-01-01
Demonstrates that traditional exploratory factor analytic methods, when applied to correlation matrices, cannot be used to estimate unattenuated factor loadings. Presents a mathematical basis for the accurate estimation of such values when the disattenuated correlation matrix or the covariance matrix is used as input. Explains how the equations…
Computer-Aided Reliability Estimation
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.
1986-01-01
CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III general enough for use in evaluation of other systems as well.
Estimation in Latent Trait Models.
ERIC Educational Resources Information Center
Rigdon, Steven E.; Tsutakawa, Robert K.
Estimation of ability and item parameters in latent trait models is discussed. When both ability and item parameters are considered fixed but unknown, the method of maximum likelihood for the logistic or probit models is well known. Discussed are techniques for estimating ability and item parameters when the ability parameters or item parameters…
Glascoe, E
2008-08-11
It is estimated that PBXN-110 will burn laminarly with a burn function of B = (0.6-1.3)*P^1.0, where B is the burn rate in mm/s and P is the pressure in MPa. This paper provides a brief discussion of how this burn behavior was estimated.
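The quoted burn function is directly usable as stated; a trivial evaluation of the reported coefficient bounds at an illustrative pressure (the 10 MPa example is mine, not from the paper):

```python
def burn_rate_bounds(p_mpa, c_low=0.6, c_high=1.3, exponent=1.0):
    """Laminar burn-rate range (mm/s) for B = c * P^n with the reported c in [0.6, 1.3]."""
    return c_low * p_mpa ** exponent, c_high * p_mpa ** exponent
```

At 10 MPa this gives a burn-rate band of roughly 6 to 13 mm/s.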
Schürmann, Thomas
2015-10-01
We compare an entropy estimator H(z) recently discussed by Zhang (2012) with two estimators, H(1) and H(2), introduced by Grassberger (2003) and Schürmann (2004). We prove the identity H(z) ≡ H(1), which has not been taken into account by Zhang (2012). Then we prove that the systematic error (bias) of H(1) is less than or equal to the bias of the ordinary likelihood (or plug-in) estimator of entropy. Finally, by numerical simulation, we verify that for the most interesting regime of small sample estimation and large event spaces, the estimator H(2) has a significantly smaller statistical error than H(z).
Quantum estimation of unknown parameters
NASA Astrophysics Data System (ADS)
Martínez-Vargas, Esteban; Pineda, Carlos; Leyvraz, François; Barberis-Blostein, Pablo
2017-01-01
We discuss the problem of finding the best measurement strategy for estimating the value of a quantum system parameter. In general the optimum quantum measurement, in the sense that it maximizes the quantum Fisher information and hence allows one to minimize the estimation error, can only be determined if the value of the parameter is already known. A modification of the quantum Van Trees inequality, which gives a lower bound on the error in the estimation of a random parameter, is proposed. The suggested inequality allows us to assert if a particular quantum measurement, together with an appropriate estimator, is optimal. An adaptive strategy to estimate the value of a parameter, based on our modified inequality, is proposed.
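For context, the classical (unmodified) Van Trees inequality bounds the Bayes risk of any estimator of a random parameter theta with prior density pi and Fisher information F(theta); the paper's quantum modification of this bound is not reproduced here:

```latex
\mathbb{E}\!\left[(\hat\theta-\theta)^{2}\right] \;\ge\;
\frac{1}{\mathbb{E}_{\pi}\!\left[F(\theta)\right] + I(\pi)},
\qquad
I(\pi) = \int \frac{\pi'(\theta)^{2}}{\pi(\theta)}\,\mathrm{d}\theta .
```

Here the outer expectation runs over both the data and the prior, so the bound holds without knowing the parameter value in advance, which is exactly the feature the abstract exploits.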
Coverage-adjusted entropy estimation.
Vu, Vincent Q; Yu, Bin; Kass, Robert E
2007-09-20
Data on 'neural coding' have frequently been analyzed using information-theoretic measures. These formulations involve the fundamental and generally difficult statistical problem of estimating entropy. We review briefly several methods that have been advanced to estimate entropy and highlight a method, the coverage-adjusted entropy estimator (CAE), due to Chao and Shen that appeared recently in the environmental statistics literature. This method begins with the elementary Horvitz-Thompson estimator, developed for sampling from a finite population, and adjusts for the potential new species that have not yet been observed in the sample; these become the new patterns or 'words' in a spike train that have not yet been observed. The adjustment is due to I. J. Good, and is called the Good-Turing coverage estimate. We provide a new empirical regularization derivation of the coverage-adjusted probability estimator, which shrinks the maximum likelihood estimate. We prove that the CAE is consistent and first-order optimal, with rate O_P(1/log n), in the class of distributions with finite entropy variance and that, within the class of distributions with finite qth moment of the log-likelihood, the Good-Turing coverage estimate and the total probability of unobserved words converge at rate O_P(1/(log n)^q). We then provide a simulation study of the estimator with standard distributions and examples from neuronal data, where observations are dependent. The results show that, with a minor modification, the CAE performs much better than the MLE and is better than the best upper bound estimator, due to Paninski, when the number of possible words m is unknown or infinite.
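A minimal sketch of the coverage-adjusted construction, as commonly stated for the Chao-Shen estimator, is below (assuming NumPy). The undersampled uniform alphabet used in the demo is an illustrative choice, not one of the paper's neuronal examples.

```python
import numpy as np

def plugin_entropy(counts):
    """Ordinary plug-in (MLE) entropy estimate, in nats."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def cae_entropy(counts):
    """Coverage-adjusted (Chao-Shen) entropy estimate, in nats.
    Good-Turing coverage C = 1 - f1/n shrinks the MLE probabilities,
    and a Horvitz-Thompson factor corrects for unobserved words."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    n = counts.sum()
    f1 = np.sum(counts == 1.0)         # number of singletons
    if f1 == n:                        # guard: every word seen exactly once
        f1 = n - 1
    p = (1.0 - f1 / n) * counts / n    # coverage-adjusted probabilities
    inclusion = 1.0 - (1.0 - p) ** n   # prob. a word appears in a sample
    return -np.sum(p * np.log(p) / inclusion)

# Undersampled regime: n = 50 draws from a uniform over m = 100 words
rng = np.random.default_rng(0)
true_H = np.log(100)
err_mle, err_cae = [], []
for _ in range(300):
    c = np.bincount(rng.integers(0, 100, 50), minlength=100)
    err_mle.append(plugin_entropy(c) - true_H)
    err_cae.append(cae_entropy(c) - true_H)
bias_mle = np.mean(err_mle)            # strongly negative
bias_cae = np.mean(err_cae)            # much closer to zero
```

In this severely undersampled regime the plug-in estimate misses roughly a quarter of the entropy, while the coverage-adjusted estimate recovers nearly all of it, consistent with the comparison described above.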
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
Regression analyses were performed on tabular aerodynamic data to provide a representative aerodynamic model for coefficient estimation. This also reduced the storage requirements for the 'normal' model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.
Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)
Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
Entropy estimation in Turing's perspective.
Zhang, Zhiyi
2012-05-01
A new nonparametric estimator of Shannon's entropy on a countable alphabet is proposed and analyzed against the well-known plug-in estimator. The proposed estimator is developed based on Turing's formula, which recovers distributional characteristics on the subset of the alphabet not covered by a size-n sample. The fundamental switch in perspective brings about substantial gain in estimation accuracy for every distribution with finite entropy. In general, a uniform variance upper bound is established for the entire class of distributions with finite entropy that decays at a rate of O(ln(n)/n), compared to O([ln(n)]²/n) for the plug-in. In a wide range of subclasses, the variance of the proposed estimator converges at a rate of O(1/n), and this rate of convergence carries over to the convergence rates in mean squared errors in many subclasses. Specifically, for any finite alphabet, the proposed estimator has a bias decaying exponentially in n. Several new bias-adjusted estimators are also discussed.
Radiation dose estimates for radiopharmaceuticals
Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.
1996-04-01
Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series is different from the MIRD 5, or Reference Man, phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions that support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
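The core idea, bounding the condition number by truncating sample eigenvalues into an interval, can be illustrated with a simplified sketch. The truncation floor used here (lam_max / kappa_max) is a naive choice for illustration only, not the paper's maximum-likelihood solution for the truncation level.

```python
import numpy as np

def condreg_covariance(S, kappa_max):
    """Return a well-conditioned covariance estimate with cond <= kappa_max.

    Simplified sketch: eigenvalues of the sample covariance S are
    truncated into [lam_max / kappa_max, lam_max]. (The paper derives
    the maximum-likelihood truncation level; this floor is a stand-in.)
    """
    vals, vecs = np.linalg.eigh(S)
    lo = vals.max() / kappa_max
    clipped = np.clip(vals, lo, vals.max())
    return (vecs * clipped) @ vecs.T       # vecs @ diag(clipped) @ vecs.T

# "Large p, small n" demo: p = 40 variables, n = 20 samples
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 40))
S = np.cov(X, rowvar=False)                # rank-deficient: cond is infinite
Sigma = condreg_covariance(S, kappa_max=100.0)
cond = np.linalg.cond(Sigma)               # bounded by kappa_max
min_eig = np.linalg.eigvalsh(Sigma).min()  # strictly positive
```

Even though the sample covariance is singular (more variables than samples), the truncated estimate is invertible with its condition number capped at the requested level.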
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
Space Station Facility government estimating
NASA Technical Reports Server (NTRS)
Brown, Joseph A.
1993-01-01
This new, unique Cost Engineering Report introduces the 800-page, C-100 government estimate for the Space Station Processing Facility (SSPF) and the Volume IV Aerospace Construction Price Book. At the January 23, 1991, bid opening for the SSPF, the government cost estimate was right on target: the low bid, from prime contractor Metric, Inc., was 1.2 percent below the government estimate. This project contains many different and complex systems. Volume IV is a summary of the costs associated with construction, activation, and Ground Support Equipment (GSE) design, estimating, fabrication, installation, testing, termination, and verification for this project. Included are 13 reasons the government estimate was so accurate; an abstract of bids for the 8 bidders and the government estimate with additive alternates; special labor and materials; budget comparisons and system summaries; and comments on the energy credit from the local electrical utility. This report adds another project to our continuing study of 'How Does the Low Bidder Get Low and Make Money?', which was started in 1967 and first published in the 1973 AACE Transactions with 18 ways the low bidders get low. The accuracy of this estimate proves the benefits of our Kennedy Space Center (KSC) teamwork efforts and KSC Cost Engineer Tools, which are contributing toward our goals for the Space Station.
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
State estimation of the power system using robust estimator
NASA Astrophysics Data System (ADS)
Khan, Zahid; Razali, Radzuan B.; Daud, Hanita; Nor, Nursyarizal Mohd; Firuzabad, Mahmud Fotuhi
2016-11-01
The presence of gross errors in the process data is very crucial for the power system state estimation (PSSE) algorithm, as they may severely degrade its results. The conventional state estimator is based on the method of weighted least squares (WLS), which is not robust against bad measurements, resulting in larger deviations in the output estimates. In this study, a new robust algorithm based on the quasi weighted least squares (QWLS) estimator is presented. The robustness of the QWLS approach is achieved by reducing the impact of bad measurements on the objective function. In the presence of gross errors, the proposed algorithm provides estimates as good as those achieved by the conventional WLS method when no gross errors exist in the process data. The implementation of the proposed algorithm is illustrated with case studies on the 6-bus and IEEE 14-bus power networks. The numerical results validate the performance of the proposed estimator in the PSSE algorithm.
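The conventional WLS baseline that the proposed QWLS estimator modifies can be sketched for a generic linearized measurement model. The toy two-state system below is purely illustrative, not one of the 6-bus or IEEE 14-bus case studies.

```python
import numpy as np

def wls_state_estimate(H, z, R_diag):
    """Classical WLS state estimate for a linear(ized) model z = H x + e.

    W = R^{-1} weights each measurement by its inverse error variance;
    x_hat = (H^T W H)^{-1} H^T W z solves the normal equations.
    """
    W = np.diag(1.0 / np.asarray(R_diag, dtype=float))
    G = H.T @ W @ H                        # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy linear system: 2 states, 4 redundant measurements
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
x_true = np.array([0.3, -0.1])
rng = np.random.default_rng(2)
sigma = np.array([0.01, 0.01, 0.02, 0.02])   # measurement noise std devs
z = H @ x_true + rng.normal(0, sigma)
x_hat = wls_state_estimate(H, z, sigma**2)
# A gross error injected into z would pull x_hat far from x_true; that
# sensitivity is exactly what a robust (e.g. QWLS-style) estimator targets.
```

With clean measurements the WLS solution recovers the true state to within the noise level; the abstract's point is that a single bad measurement breaks this, which motivates the robust objective.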
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf(V) = 2(EV)^2/var(V). Confidence intervals for mvar can then be constructed from levels of the appropriate chi-squared distribution.
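The edf-based confidence interval construction can be sketched as follows, treating edf·V̂/V as approximately chi-squared with edf degrees of freedom (assuming SciPy; the numeric inputs below are placeholders, not values from the paper).

```python
from scipy.stats import chi2

def edf_confidence_interval(v_hat, edf, alpha=0.05):
    """Two-sided CI for a variance-type quantity from its estimate and edf.

    Treats edf * v_hat / v_true as approximately chi-squared with edf
    degrees of freedom, the standard mvar confidence-interval recipe.
    """
    lo = edf * v_hat / chi2.ppf(1 - alpha / 2, edf)
    hi = edf * v_hat / chi2.ppf(alpha / 2, edf)
    return lo, hi

# Placeholder mvar estimate with 10 equivalent degrees of freedom
lo, hi = edf_confidence_interval(v_hat=1.0e-24, edf=10.0)
```

The interval is asymmetric about the point estimate, as expected for a chi-squared-distributed variance estimator with modest edf.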
Age estimation from canine volumes.
De Angelis, Danilo; Gaudio, Daniel; Guercini, Nicola; Cipriani, Filippo; Gibelli, Daniele; Caputi, Sergio; Cattaneo, Cristina
2015-08-01
Techniques for the estimation of biological age are constantly evolving and find daily application in the forensic radiology field, in cases concerning the estimation of the chronological age of a corpse in order to reconstruct the biological profile, or of a living subject, for example in cases of immigration of people without identity papers from a civil registry. The deposition of secondary dentine in teeth and the consequent decrease in pulp chamber size are well-known aging phenomena, and they have been applied in the forensic context through the development of age estimation procedures such as the Kvaal-Solheim and Cameriere methods. The present study considers the canine pulp chamber volume relative to the entire tooth volume, with the aim of proposing new regression formulae for age estimation, using 91 cone beam computed tomography scans and freeware open-source software, in order to permit affordable reproducibility of the volume calculations.
ESTIMATING REPRODUCTIVE SUCCESS IN BIRDS
This presentation will focus on the statistical issues surrounding estimation of avian nest-survival. I first describe the natural history and breeding ecology of two North American songbirds, the Loggerhead Shrike (Lanius ludovicianus) and the Wood Thrush (Hylocichla mustelina)....
Bayesian estimation of turbulent motion.
Héas, Patrick; Herzet, Cédric; Mémin, Etienne; Heitz, Dominique; Mininni, Pablo D
2013-06-01
Based on physical laws describing the multiscale structure of turbulent flows, this paper proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyperparameters, and to select the most likely physical prior among a set of models. Hyperparameter and model inference are conducted by posterior maximization, obtained by marginalizing out non--Gaussian motion variables. The Bayesian estimator is assessed on several image sequences depicting synthetic and real turbulent fluid flows. Results obtained with the proposed approach exceed the state-of-the-art results in fluid flow estimation.
[Medical insurance estimation of risks].
Dunér, H
1975-11-01
The purpose of insurance medicine is to make a prognostic estimate of medical risk-factors in persons who apply for life, health, or accident insurance. Established risk-groups with a calculated average mortality and morbidity form the basis for premium rates and insurance terms. In most cases the applicant is accepted for insurance after a self-assessment of his health. Only around one per cent of the applications are refused, but there are cases in which the premium is raised, temporarily or permanently. It is often a matter of rough estimate, since the knowledge of the long-term prognosis for many diseases is incomplete. The insurance companies' rules for the estimation of risk are revised at intervals of three or four years. The estimate of risk as regards life insurance has been gradually liberalised, while the medical conditions for health insurance have become stricter owing to an increase in the claims rate.
Manned Mars mission cost estimate
NASA Technical Reports Server (NTRS)
Hamaker, Joseph; Smith, Keith
1986-01-01
The potential costs of several options for a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model, as well as other models and techniques. The ground rules and assumptions of the cost estimating methodology are discussed, and cost estimates are presented for the six potential mission options that were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars, excluding launch vehicle development and mission operations.
Estimate product quality with ANNs
Brambilla, A.; Trivella, F.
1996-09-01
Artificial neural networks (ANNs) have been applied to predict catalytic reformer octane number (ON) and gasoline splitter product qualities. Results show that ANNs are a valuable tool to derive fast and accurate product quality measurements, and offer a low-cost alternative to online analyzers or rigorous mathematical models. The paper describes product quality measurements, artificial neural networks, ANN structure, estimating gasoline octane numbers, and estimating naphtha splitter product qualities.
Acquisition Cost/Price Estimating
1981-01-01
…review and validation, (3) research and methodology, and (4) data analysis. These functional thrusts are in turn focused on estimating and analysis… relationships (CERs) is largely a function of the quantity and quality of data that is available at the time of formulation. In order to ensure that such cost… have been discussed previously as good sources of data for cost estimating. Their primary function, however, is to provide the Army with early…
Some Topics in Linear Estimation,
1981-01-01
…outstanding symposium. Table of contents: 1. The Integral Equations of Smoothing and Filtering; 1a. The Smoothing…; 2. Some Examples: Stationary Processes; 2a. Scalar Stationary Processes over Infinite Intervals; 2b. Finite Intervals: The…; 3. …Stationary; 4. A Concluding Remark. (T. Kailath) 1. The Integral Equations of Smoothing and Filtering: Our estimation problems will be discussed in the context…
PDV Uncertainty Estimation & Methods Comparison
Machorro, E.
2011-11-01
Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
The Psychology of Cost Estimating
NASA Technical Reports Server (NTRS)
Price, Andy
2016-01-01
Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded remarkable findings about how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception, rather than facts and data. These built-in biases to our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.
Estimating beta-mixing coefficients
McDonald, Daniel J.; Shalizi, Cosma Rohilla; Schervish, Mark
2015-01-01
The literature on statistical learning for time series assumes the asymptotic independence or “mixing” of the data-generating process. These mixing assumptions are never tested, and there are no methods for estimating mixing rates from data. We give an estimator for the beta-mixing rate based on a single stationary sample path and show it is L1-risk consistent. PMID:26279742
Nonlinear Regression Methods for Estimation
2005-09-01
…accuracy when the geometric dilution of precision (GDOP) causes collinearity, which in turn brings about poor position estimates. The main goal is… measurements are needed to wash out the measurement noise. Furthermore, the measurement arrangement's geometry (GDOP) strongly impacts the achievable…
PHAZE. Parametric Hazard Function Estimation
Atwood, C.L.
1990-09-01
PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
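For the exponential (constant-rate) hazard model, the maximum likelihood estimator and an exact chi-squared confidence interval take a simple closed form. This is a generic sketch of that standard result (assuming SciPy), not PHAZE's actual implementation; the failure counts and exposure time are placeholders.

```python
from scipy.stats import chi2

def exponential_hazard_mle(n_failures, total_time):
    """MLE of a constant (exponential-model) failure rate: lambda = k / T."""
    return n_failures / total_time

def exponential_hazard_ci(n_failures, total_time, alpha=0.05):
    """Exact two-sided CI for a Poisson failure rate (chi-squared form)."""
    k, T = n_failures, total_time
    lo = chi2.ppf(alpha / 2, 2 * k) / (2 * T) if k > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * k + 2) / (2 * T)
    return lo, hi

lam = exponential_hazard_mle(4, 1000.0)   # 4 failures in 1000 h of service
lo, hi = exponential_hazard_ci(4, 1000.0)
```

The linear and Weibull hazard models mentioned above require numerical maximization of the likelihood; the constant-rate case is the one with a closed-form answer.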
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem, computing y(t) given known parameters, has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem, estimation of parameters given measured y(t), is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
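Scaled sensitivity coefficients X'_i(t) = θ_i ∂y/∂θ_i can be approximated by finite differences. The first-order log-reduction inactivation model and the parameter values below are hypothetical, chosen only to illustrate the computation, not taken from the article.

```python
import numpy as np

def scaled_sensitivity(model, theta, i, t, rel_step=1e-6):
    """Scaled sensitivity coefficient X'_i(t) = theta_i * dy/dtheta_i,
    by central finite differences. Parameters whose scaled sensitivities
    are proportional over t are hard to estimate simultaneously."""
    h = rel_step * theta[i]
    up, dn = theta.copy(), theta.copy()
    up[i] += h
    dn[i] -= h
    return theta[i] * (model(up, t) - model(dn, t)) / (2 * h)

# Hypothetical first-order thermal inactivation model (log-reduction form):
# y(t) = log10 N0 - t / D, with theta = [log10 N0, D-value]
def inactivation(theta, t):
    logN0, D = theta
    return logN0 - t / D

t = np.linspace(0, 10, 11)
theta = np.array([6.0, 2.5])              # assumed values for illustration
X_logN0 = scaled_sensitivity(inactivation, theta, 0, t)   # = logN0, constant
X_D = scaled_sensitivity(inactivation, theta, 1, t)       # = t / D, grows in t
```

Here the two scaled sensitivities have different shapes (one constant, one linear in t), so both parameters are identifiable from y(t); if they had the same shape, only a combination of them could be estimated.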
Blind estimation of reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.
2003-11-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
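A much-simplified version of the idea, modeling the reverberant tail as exponentially damped Gaussian white noise and recovering the decay constant (here by straight-line regression on the log-RMS envelope rather than the paper's maximum-likelihood and order-statistics machinery), can be sketched as follows; all signal parameters are illustrative.

```python
import numpy as np

fs = 8000.0                                # sample rate (Hz), illustrative
tau = 0.05                                 # decay time constant (s)
rt60_true = 3.0 * np.log(10.0) * tau       # 60 dB decay time = tau * ln(10^3)

rng = np.random.default_rng(3)
t = np.arange(int(0.4 * fs)) / fs
x = np.exp(-t / tau) * rng.standard_normal(t.size)   # damped Gaussian noise

# Log-RMS envelope in short frames, then a straight-line fit to its decay
frame = 80                                 # 10 ms frames
n_frames = x.size // frame
rms = np.sqrt(np.mean(x[: n_frames * frame].reshape(n_frames, frame) ** 2,
                      axis=1))
tc = (np.arange(n_frames) + 0.5) * frame / fs
slope, _ = np.polyfit(tc, np.log(rms), 1)  # log-envelope slope = -1/tau
tau_hat = -1.0 / slope
rt60_hat = 3.0 * np.log(10.0) * tau_hat    # blind RT estimate
```

On synthetic data matching the model, this simple regression recovers the RT closely; the paper's ML procedure plus order-statistics filtering is what makes the estimate usable on running speech, where only some segments exhibit free decay.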
Ratio estimation in SIMS analysis
NASA Astrophysics Data System (ADS)
Ogliore, R. C.; Huss, G. R.; Nagashima, K.
2011-09-01
The determination of an isotope ratio by secondary ion mass spectrometry (SIMS) traditionally involves averaging a number of ratios collected over the course of a measurement. We show that this method leads to an additive positive bias in the expectation value of the estimated ratio that is approximately equal to the true ratio divided by the counts of the denominator isotope of an individual ratio. This bias does not decrease as the number of ratios used in the average increases. By summing all counts in the numerator isotope, then dividing by the sum of counts in the denominator isotope, the estimated ratio is less biased: the bias is approximately equal to the ratio divided by the summed counts of the denominator isotope over the entire measurement. We propose a third ratio estimator (Beale's estimator) that can be used when the bias from the summed counts is unacceptably large for the hypothesis being tested. We derive expressions for the variance of these ratio estimators as well as the conditions under which they are normally distributed. Finally, we investigate a SIMS dataset showing the effects of ratio bias, and discuss proper ratio estimation for SIMS analysis.
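The bias comparison among the estimators can be reproduced with a small Poisson counting simulation. The count rates are arbitrary, and the Beale formula shown is the commonly quoted second-order form, included as an assumption rather than taken from the paper.

```python
import numpy as np

def mean_of_ratios(x, y):
    """Average of cycle-by-cycle ratios (the traditional practice)."""
    return np.mean(y / x)

def ratio_of_sums(x, y):
    """Total numerator counts over total denominator counts."""
    return y.sum() / x.sum()

def beale_ratio(x, y):
    """Beale's bias-corrected ratio estimator (assumed standard form:
    (ybar/xbar) * (1 + s_xy/(n xbar ybar)) / (1 + s_xx/(n xbar^2)))."""
    n = x.size
    xbar, ybar = x.mean(), y.mean()
    sxy = np.cov(x, y, ddof=1)[0, 1]
    sxx = np.var(x, ddof=1)
    return (ybar / xbar) * (1 + sxy / (n * xbar * ybar)) \
                         / (1 + sxx / (n * xbar**2))

# Poisson counting simulation: true ratio R = 0.5, 50 ratios per measurement
rng = np.random.default_rng(4)
R_true, lam_x, n_ratios, n_meas = 0.5, 100.0, 50, 4000
mor, ros = [], []
for _ in range(n_meas):
    x = rng.poisson(lam_x, n_ratios).astype(float)
    y = rng.poisson(R_true * lam_x, n_ratios).astype(float)
    mor.append(mean_of_ratios(x, y))
    ros.append(ratio_of_sums(x, y))
R_beale = beale_ratio(x, y)               # one-measurement illustration
# Mean-of-ratios is biased high by roughly R / lam_x; ratio-of-sums is
# biased by roughly R / (n_ratios * lam_x), smaller by the cycle count.
bias_mor = np.mean(mor) - R_true
bias_ros = np.mean(ros) - R_true
```

With 100 denominator counts per cycle, the mean-of-ratios bias lands near R/100 while the ratio-of-sums bias is two orders of magnitude smaller, matching the additive-bias argument in the abstract.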
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
Presence of Mind... A Reaction to Sheridan's "Musing on Telepresence"
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Null, Cynthia H. (Technical Monitor)
1995-01-01
What are the benefits and significance of developing a scientifically useful measure of the human sense of presence in an environment? Such a scale could be conceived to measure the extent to which users of telerobotics interfaces feel or behave as if they were present at the site of a remotely controlled robot. The essay examines some of the issues raised in order to identify characteristics a scale of 'presence' ought to have to be useful as an explanatory scientific concept. It also addresses the utility of worrying about developing such a scale at all. To be useful in the same manner as a traditional scientific concept such as mass, for example, it is argued that such a scale not only needs to be precisely defined and to co-vary with determinative factors, but also needs to establish equivalence classes of its independent constituents. This simplifying property is important for either subjective or objective scales of presence and arises if the constituents of presence are truly independent.
Awaken the Muse--Teaching Music to Young Children.
ERIC Educational Resources Information Center
Tusnady, Monika
2001-01-01
Presents ways educators can make music an integral part of early childhood education and give every child a quality music experience. Discusses five ways children experience musical play: singing, rhymes and fingerplays, movement, listening, and instruments. Emphasizes the importance of musical goals rather than spatial-temporal reasoning or other…
Musings in the Wake of Columbine: What Can Schools Do?
ERIC Educational Resources Information Center
Raywid, Mary Anne; Oshiyama, Libby
2000-01-01
As suggested by standard indicators--truancy, dropout rates, graffiti, vandalism, violence--youngsters in small schools rarely display the anger at the institution and its inhabitants that typifies Columbine and many other comprehensive high schools. Educators must cultivate learning communities and qualities (like empathy and compassion)…
Marrying the "Muse" and the Thinker "Poetry as Scientific Writing"
ERIC Educational Resources Information Center
Marcum-Dietrich, Nanette I.; Byrne, Eileen; O'Hern, Brenda
2009-01-01
This article describes an unlikely collaboration between a high school chemistry teacher and a high school English teacher who attempted to teach scientific concepts through poetry. Inspired by poet John Updike's (1960) "Cosmic Gall," these two teachers crafted writing tasks aimed at teaching science content through literary devices. The result…
Looking for the Muse in Some of the Right Places.
ERIC Educational Resources Information Center
Pariser, David A.
1999-01-01
Discusses C. Milbrath's thesis that artistically talented and less talented children follow different developmental paths because they rely on different ways of responding to the world. Relates this thesis to studies of the childhood work of Paul Klee, Henri Toulouse Lautrec, and Pablo Picasso. (SLD)
Traceability of radiation measurements: musings of a user
Kathren, R.L.
1980-04-01
Although users of radiation desire measurement traceability for a number of reasons, including legal, regulatory, contractual, and quality assurance requirements, there exists no real definition of the term in the technical literature. Definitions are proposed for both traceability and traceability to the National Bureau of Standards. The hierarchy of radiation standards is discussed and allowable uncertainties are given for each level. Areas of need with respect to radiation standards are identified, and a system of secondary radiation calibration laboratories is proposed as a means of providing quality calibrations and traceability on a routine basis.
The painful muse: migrainous artistic archetypes from visual cortex.
Aguggia, Marco; Grassi, Enrico
2014-05-01
Neurological diseases that have traditionally constituted obstacles to artistic creation can, in the case of migraine, be transformed by artists into a source of inspiration and artistic production. These phenomena represent a chapter of a broader, still embryonic, neurobiology of painting.
Musing on the Memes of Open and Distance Education
ERIC Educational Resources Information Center
Latchem, Colin
2014-01-01
Just as genes propagate themselves in the gene pool by leaping from body to body, so memes (ideas, behaviours, and actions) transmit cultural ideas or practices from one mind to another through writing, speech, or other imitable phenomena. This paper considers the memes that influence the evolution of open and distance education. If the…
The debilitated muse: poetry in the face of illness.
Ofri, Danielle
2010-12-01
Poetry is a supremely sensory art, both in the imagining and in the writing. What happens when the poet faces illness? How is the poetry affected by alterations of the body and mind? This paper examines the poetry of several writers afflicted by physical illness-poets of great renown and poets who might be classified as "emerging voices," in order to explore the interplay between creativity and corporeal vulnerability.
Musings on genome medicine: the Obama effect redux.
Nathan, David G; Orkin, Stuart H
2009-09-11
From the point of view of genome medicine, Barack Obama has made two vital policy decisions: he has chosen a new director of the National Institutes of Health, and his proposed change in United States healthcare policy will have profound effects on genome medicine and, indeed, all of academic medicine.
In Pursuit of the Muse: Librarians Who Write.
ERIC Educational Resources Information Center
Chepesiuk, Ron
1991-01-01
This article profiles six librarians in academic and public libraries who discuss how they balance their dual careers as authors and librarians. The influence of librarianship on their writing is described, the influence of writing on their careers in librarianship is considered, and the problems of finding time for both careers are discussed. (LRW)
Musings on the State of the ILS in 2006
ERIC Educational Resources Information Center
Breeding, Marshall
2006-01-01
It is hard to imagine operating a library today without the assistance of an integrated library system (ILS). Without help from it, library work would be tedious, labor would be intensive, and patrons would be underserved in almost all respects. Given the importance of these automation systems, it is essential that they work well and deliver…
The muse in the machine: computers can help us compose
NASA Astrophysics Data System (ADS)
Greenhough, M.
1990-01-01
A method of producing musical structures by means of a constrained random process is described. Real-time operation allows intuitive control. Musical samples from a computer system can 'evolve' Darwinian-style in the environment provided by the operator's ear-brain.
Mentors, Muses, and Mutuality: Honoring Barbara Snell Dohrenwend
ERIC Educational Resources Information Center
Mulvey, Anne
2012-01-01
I describe feminist community psychology principles that have the potential to expand and enrich mentoring and that honor Barbara Snell Dohrenwend, a leader who contributed to the research, theory, and profession of community psychology. I reflect on the effect that Barbara Dohrenwend had on my life and on the development of feminist community…
End of the Line: A Poet's Postmodern Musings on Writing
ERIC Educational Resources Information Center
Leggo, Carl
2006-01-01
I invite and encourage students to take risks in their writing, to engage innovatively with a wide range of genre, to push limits in order to explore creatively how language and discourse are never ossified, but always organic, how language use is integrally and inextricably connected to identity, knowledge, subjectivity, and living. Informed by…
Sing, muse: songs in Homer and in hospital.
Marshall, Robert; Bleakley, Alan
2011-06-01
This paper progresses the original argument of Richard Ratzan that formal presentation of the medical case history follows a Homeric oral-formulaic tradition. The everyday work routines of doctors involve a ritual poetics, where the language of recounting the patient’s ‘history’ offers an explicitly aesthetic enactment or performance that can be appreciated and given meaning within the historical tradition of Homeric oral poetry and the modernist aesthetic of Minimalism. This ritual poetics shows a reliance on traditional word usages that crucially act as tools for memorisation and performance and can be linked to forms of clinical reasoning; both contain a tension between the oral and the written record, questioning the priority of the latter; and the performance of both helps to create the Janus-faced identity of the doctor as a ‘performance artist’ or ‘medical bard’ in identifying with medical culture and maintaining a positive difference from the patient as audience, offering a valid form of patient-centredness.
The Unembarrassed Muse: The Popular Arts in America.
ERIC Educational Resources Information Center
Nye, Russel
This book is a study of certain of the popular arts in American society, that is, the arts in their customarily accepted genres. "Popular" is interpreted to mean "generally dispersed and approved"--descriptive of those artistic productions which express the taste and understanding of the majority and which are free of control, in content and…
Supersymmetric musings on the predictivity of family symmetries
Kadota, Kenji; Kersten, Joern; Velasco-Sevilla, Liliana
2010-10-15
We discuss the predictivity of family symmetries for the soft supersymmetry breaking parameters in the framework of supergravity. We show that unknown details of the messenger sector and the supersymmetry breaking hidden sector enter into the soft parameters, making it difficult to obtain robust predictions. We find that there are specific choices of messenger fields which can improve the predictivity for the soft parameters.
Research using blogs for data: public documents or private musings?
Eastham, Linda A
2011-08-01
Nursing and other health sciences researchers increasingly find blogs to be valuable sources of information for investigating illness and other human health experiences. When researchers use blogs as their exclusive data source, they must discern the public/private aspects inherent in the nature of blogs in order to plan for appropriate protection of the bloggers' identities. Approaches to the protection of human subjects are poorly addressed when the human subject is a blogger and the blog is used as an exclusive source of data. Researchers may be assisted to protect human subjects via a decisional framework for assessing a blog author's intended position on the public/private continuum.
Musing on the Use of Dynamic Software and Mathematics Epistemology
ERIC Educational Resources Information Center
Santos-Trigo, Manuel; Reyes-Rodriguez, Aaron; Espinosa-Perez, Hugo
2007-01-01
Different computational tools may offer teachers and students distinct opportunities in representing, exploring and solving mathematical tasks. In this context, we illustrate that the use of dynamic software (Cabri Geometry) helped high school teachers to think of and represent a particular task dynamically. In this process, the teachers had the…
Weldon Spring historical dose estimate
Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.
1986-07-01
This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.
Estimating preselected and postselected ensembles
Massar, Serge; Popescu, Sandu
2011-11-15
In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.
Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach
ERIC Educational Resources Information Center
Stevenson, Glenn A.
2012-01-01
For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.
Estimations of uncertainties of frequencies
NASA Astrophysics Data System (ADS)
Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan
2016-10-01
Diverse variable phenomena in the Universe are periodic. Astonishingly, many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty of frequency estimation is reviewed. We explore the astronomical and statistical literature for both regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and the rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty of the frequency, which are subsequently tested against simulated data to assess their performance.
Estimation of spontaneous mutation rates.
Natarajan, Loki; Berry, Charles C; Gasche, Christoph
2003-09-01
Spontaneous or randomly occurring mutations play a key role in cancer progression. Estimation of the mutation rate of cancer cells can provide useful information about the disease. To ascertain these mutation rates, we need mathematical models that describe the distribution of mutant cells. In this investigation, we develop a discrete time stochastic model for a mutational birth process. We assume that mutations occur concurrently with mitosis so that when a nonmutant parent cell splits into two progeny, one of these daughter cells could carry a mutation. We propose an estimator for the mutation rate and investigate its statistical properties via theory and simulations. A salient feature of this estimator is the ease with which it can be computed. The methods developed herein are applied to a human colorectal cancer cell line and compared to existing continuous time models.
Automated Estimation Of Lesion Size
NASA Astrophysics Data System (ADS)
Ruttimann, Urs E.; Webber, Richard L.; Groenhuis, Roelf A. J.; Troullos, Emanuel; Rethman, Michael T.
1985-06-01
Two methods of automatically estimating the relative volume of local lesions in digital subtraction radiographs were studied. The first method approximates the projected lesion area by an equivalent circular area, and the second by an equivalent polygonal area. Lesion volume is estimated in both methods as the equivalent area times the average gray-level difference between the detected area and the surrounding background. Regression results of the estimated relative volume versus the calibrated size of lesions induced in dry human mandibles showed the polygonal approximation to be superior. This method also permitted successful monitoring of bone remodelling during the healing process of surgically induced lesions in dogs. The quantitative results, as well as the examples from in vivo lesions, demonstrate the feasibility and clinical relevance of the methodology.
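The volume index described above (equivalent area times the average gray-level difference over the background) reduces to simple arithmetic. The sketch below illustrates it on a synthetic subtraction image; the array, threshold, and values are invented for illustration and are not from the study.

```python
import numpy as np

# Toy illustration of the volume estimate: equivalent area times the
# average gray-level difference between lesion and background.
# The 'subtraction image' here is synthetic, not radiographic data.
sub = np.zeros((64, 64))
sub[20:30, 20:30] = 5.0   # a 10x10 "lesion" 5 gray levels above background
mask = sub > 1.0          # detected lesion area
area = mask.sum()         # area in pixels
mean_diff = sub[mask].mean() - sub[~mask].mean()
volume_index = area * mean_diff
print(area, mean_diff, volume_index)  # 100 5.0 500.0
```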
Highly Automated Dipole EStimation (HADES)
Campi, C.; Pascarella, A.; Sorrentino, A.; Piana, M.
2011-01-01
Automatic estimation of current dipoles from biomagnetic data is still a problematic task. This is due not only to the ill-posedness of the inverse problem but also to two intrinsic difficulties introduced by the dipolar model: the unknown number of sources and the nonlinear relationship between the source locations and the data. Recently, we have developed a new Bayesian approach, particle filtering, based on dynamical tracking of the dipole constellation. Contrary to many dipole-based methods, particle filtering does not assume stationarity of the source configuration: the number of dipoles and their positions are estimated and updated dynamically during the course of the MEG sequence. We have now developed a Matlab-based graphical user interface, which allows nonexpert users to do automatic dipole estimation from MEG data with particle filtering. In the present paper, we describe the main features of the software and show the analysis of both a synthetic data set and an experimental dataset. PMID:21437232
Oil and gas reserves estimates
Harrell, R.; Gajdica, R.; Elliot, D.; Ahlbrandt, T.S.; Khurana, S.
2005-01-01
This article is a summary of a panel session at the 2005 Offshore Technology Conference. Oil and gas reserves estimates are further complicated with the expanding importance of the worldwide deepwater arena. These deepwater reserves can be analyzed, interpreted, and conveyed in a consistent, reliable way to investors and other stakeholders. Continually improving technologies can lead to improved estimates of production and reserves, but the estimates are not necessarily recognized by regulatory authorities as an indicator of "reasonable certainty," a term used since 1964 to describe proved reserves in several venues. Solutions are being debated in the industry to arrive at a reporting mechanism that generates consistency and at the same time leads to useful parameters in assessing a company's value without compromising confidentiality. Copyright 2005 Offshore Technology Conference.
Estimating risks of perinatal death.
Smith, Gordon C S
2005-01-01
The relative and absolute risks of perinatal death that are estimated from observational studies are used frequently in counseling about obstetric intervention. The statistical basis for these estimates therefore is crucial, but many studies are seriously flawed. In this review, a number of aspects of the approach to the estimation of the risk of perinatal death are addressed. Key factors in the analysis include (1) the definition of the cause of the death, (2) differentiation between antepartum and intrapartum events, (3) the use of the appropriate denominator for the given cause of death, (4) the assessment of the cumulative risk where appropriate, (5) the use of appropriate statistical tests, (6) the stratification of analysis of delivery-related deaths by gestational age, and (7) the specific features of multiple pregnancy, which include the correct determination of the timing of antepartum stillbirth and the use of paired statistical tests when outcomes are compared in relation to the birth order of twin pairs.
Integral Criticality Estimators in MCATK
Nolen, Steven Douglas; Adams, Terry R.; Sweezy, Jeremy Ed
2016-06-14
The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.
System for estimating fatigue damage
LeMonds, Jeffrey; Guzzo, Judith Ann; Liu, Shaopeng; Dani, Uttara Ashwin
2017-03-14
In one aspect, a system for estimating fatigue damage in a riser string is provided. The system includes a plurality of accelerometers which can be deployed along a riser string and a communications link to transmit accelerometer data from the plurality of accelerometers to one or more data processors in real time. With data from a limited number of accelerometers located at sensor locations, the system estimates an optimized current profile along the entire length of the riser including riser locations where no accelerometer is present. The optimized current profile is then used to estimate damage rates to individual riser components and to update a total accumulated damage to individual riser components. The number of sensor locations is small relative to the length of a deepwater riser string, and a riser string several miles long can be reliably monitored along its entire length by fewer than twenty sensor locations.
Motion models in attitude estimation
NASA Technical Reports Server (NTRS)
Chu, D.; Wheeler, Z.; Sedlak, J.
1994-01-01
Attitude estimators use observations from different times to reduce the effects of noise. If the vehicle is rotating, the attitude at one time needs to be propagated to that at another time. If the vehicle measures its angular velocity, attitude propagation entails integrating a rotational kinematics equation only. If a measured angular velocity is not available, torques can be computed and an additional rotational dynamics equation integrated to give the angular velocity. Initial conditions for either of these integrations come from the estimation process. Sometimes additional quantities, such as gyro and torque parameters, are also solved for. Although the partial derivatives of attitude with respect to initial attitude and gyro parameters are well known, the corresponding partial derivatives with respect to initial angular velocity and torque parameters are less familiar. They can be derived and computed numerically in a way that is analogous to that used for the initial attitude and gyro parameters. Previous papers have demonstrated the feasibility of using dynamics models for attitude estimation but have not provided details of how the angular velocity and torque parameters can be estimated. This tutorial paper provides some of that detail, notably how to compute the state transition matrix when closed-form expressions are not available. It also attempts to put dynamics estimation in perspective by showing the progression from constant to gyro-propagated to dynamics-propagated attitude motion models. Readers not already familiar with attitude estimation will find this paper an introduction to the subject, and attitude specialists may appreciate the collection of heretofore scattered results brought together in a single place.
Point estimates for probability moments
Rosenblueth, Emilio
1975-01-01
Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
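The two-point scheme described in this abstract is easy to state concretely: in the simplest symmetric case, the sampling points are x± = μ ± σ with equal weights of 1/2. The function name and example below are illustrative, not taken from the paper.

```python
def two_point_moments(y, mu, sigma):
    """Rosenblueth-style two-point estimate for a symmetric random variable X.

    Evaluates y at x+ = mu + sigma and x- = mu - sigma and combines the
    results with equal weights of 1/2 to approximate the mean and variance of Y.
    """
    y_plus, y_minus = y(mu + sigma), y(mu - sigma)
    mean_y = 0.5 * (y_plus + y_minus)
    var_y = 0.5 * (y_plus**2 + y_minus**2) - mean_y**2
    return mean_y, var_y

# Example: Y = X^2 with X having mean 0 and standard deviation 1.
mean_y, var_y = two_point_moments(lambda x: x * x, 0.0, 1.0)
print(mean_y)  # 1.0, matching the exact second moment of X
```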
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
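The standard EM algorithm for Gaussian mixtures that the abstract says is modified can be sketched in a few lines. This minimal 1-D version works in data space only (no Mercer kernel or feature-space step); all names, the initialization, and the test data are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Minimal EM for a two-component 1-D Gaussian mixture (the standard
    algorithm the Mercer-kernel approach builds on; data space only)."""
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])  # deterministic init
    sigma = np.full(2, x.std())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted means, standard deviations, and mixing weights
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
        pi = n_k / len(x)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
pi, mu, sigma = em_gmm_1d(x)
print(np.round(np.sort(mu), 1))  # component means recovered near -3 and 3
```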
Mars gravitational field estimation error
NASA Technical Reports Server (NTRS)
Compton, H. R.; Daniels, E. F.
1972-01-01
The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.
Spacecraft platform cost estimating relationships
NASA Technical Reports Server (NTRS)
Gruhl, W. M.
1972-01-01
The three main cost areas of unmanned satellite development are discussed. The areas are identified as: (1) the spacecraft platform (SCP), (2) the payload or experiments, and (3) the postlaunch ground equipment and operations. The SCP normally accounts for over half of the total project cost and accurate estimates of SCP costs are required early in project planning as a basis for determining total project budget requirements. The development of single formula SCP cost estimating relationships (CER) from readily available data by statistical linear regression analysis is described. The advantages of single formula CER are presented.
LANDSAT (MSS): Image demographic estimations
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Foresti, C.
1977-01-01
The author has identified the following significant results. Two sets of urban test sites, one with 35 cities and one with 70 cities, were selected in the state of Sao Paulo. A high degree of collinearity (0.96) was found between urban area measurements taken from aerial photographs and those from LANDSAT MSS imagery. High coefficients were observed when census data were regressed against aerial information (0.95) and LANDSAT data (0.92). The validity of population estimations was tested by regressing three urban variables against three classes of cities. Results supported the effectiveness of LANDSAT for estimating large city populations, with diminishing effectiveness as urban areas decrease in size.
State Estimation for Tensegrity Robots
NASA Technical Reports Server (NTRS)
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
Fuel Estimation Using Dynamic Response
2007-03-01
estimates from the bookkeeping method [17]. Another method used on the ASTRIUM-SAS communication satellites is the “Thermal Propellant Gauging...Applied on ASTRIUM-SAS Telecommunication Satellite.” 3rd International Conference on Spacecraft Propulsion. 131–138. Cannes, France: European Space
Estimating Supplies Program: Evaluation Report
2007-11-02
214 Ingrown Toenails Bilateral With Secondary Infections Unresolvable At Echelon 2; 215 Ingrown Toenails Without Secondary Infection; 216 Herpes...Select functional areas that provide treatment for the patient stream you input. For example, if you select the Surgical Company Operating Room...readiness. (U) The primary application of supply estimation is to determine logistical requirements for medical treatment. Traditionally, medical
Helicopter Toy and Lift Estimation
ERIC Educational Resources Information Center
Shakerin, Said
2013-01-01
A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
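The estimate described above amounts to back-of-the-envelope arithmetic with the standard lift formula L = ½ρv²AC_L. Every number below is an assumed illustrative value, not a measurement from the article.

```python
# Back-of-envelope lift check for a small rotor toy using L = 0.5*rho*v^2*A*C_L.
# All numbers below are illustrative assumptions, not data from the article.
rho = 1.2       # air density, kg/m^3
radius = 0.06   # rotor radius, m (assumed)
area = 3.14159 * radius**2   # disk area swept by the blades, m^2
v = 6.0         # representative blade airspeed, m/s (assumed)
c_l = 0.8       # lift coefficient (assumed)

lift = 0.5 * rho * v**2 * area * c_l   # newtons
weight = 0.010 * 9.81                  # a 10 g toy, newtons

print(f"lift ~ {lift:.3f} N, weight ~ {weight:.3f} N, flies: {lift > weight}")
```

With these assumed values the computed lift (about 0.2 N) comfortably exceeds the toy's weight (about 0.1 N), which is the kind of consistency check the activity asks students to make.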
An Improved Cluster Richness Estimator
Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; McKay, Timothy; Hao, Jiangang; Evrard, August; Wechsler, Risa H.; Hansen, Sarah; Sheldon, Erin; Johnston, David; Becker, Matthew R.; Annis, James T.; Bleem, Lindsey; Scranton, Ryan
2009-08-03
Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
Precipitation Estimation for Military Hydrology.
1980-04-01
Imagery," NOAA Tech Memo NESS 86, NOAA-NESS, p 47, Washington DC. 13. C. G. Griffith, W. L. Woodley, P. G. Grube, D. W. Martin, J. Stout, and D. Sikdar... Sikdar, 1978, "Rain Estimation from Geosynchronous Satellite Imagery-Visible and Infrared Studies," Mon Wea Rev, 106:1153-1171. 14. Reynolds, D. W
Empirical equation estimates geothermal gradients
Kutasov, I.M.
1995-01-02
An empirical equation can estimate geothermal (natural) temperature profiles in new exploration areas. These gradients are useful for cement slurry and mud design and for improving electrical and temperature log interpretation. Downhole circulating temperature logs and surface outlet temperatures are used for predicting the geothermal gradients.
ESTIMATING IMPERVIOUS COVER FROM REGIONALLY AVAILABLE DATA
The objective of this study is to compare and evaluate the reliability of different approaches for estimating impervious cover including three empirical formulations for estimating impervious cover from population density data, estimation from categorized land cover data, and to ...
Shrinkage approach for EEG covariance matrix estimation.
Beltrachini, Leandro; von Ellenrieder, Nicolas; Muravchik, Carlos H
2010-01-01
We present a shrinkage estimator for the EEG spatial covariance matrix of the background activity. We show that such an estimator has some advantages over the maximum likelihood and sample covariance estimators when the number of available data to carry out the estimation is low. We find sufficient conditions for the consistency of the shrinkage estimators and results concerning their numerical stability. We compare several shrinkage schemes and show how to improve the estimator by incorporating known structure of the covariance matrix.
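A minimal shrinkage estimator of the general kind the abstract describes can be sketched as a convex combination of the sample covariance and a scaled-identity target. The shrinkage-intensity heuristic below is an assumption for illustration, not the authors' rule (the paper derives conditions for consistency that this sketch does not implement).

```python
import numpy as np

def shrinkage_cov(X, lam=None):
    """Shrink the sample covariance toward a scaled identity target.

    X: (n_samples, n_channels) data matrix.
    Returns Sigma_hat = (1 - lam) * S + lam * mu * I, with mu = trace(S)/p,
    which is well-conditioned even when n_samples is small relative to p.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / p
    if lam is None:
        lam = min(1.0, p / n)  # crude heuristic: shrink harder when data are scarce
    return (1 - lam) * S + lam * mu * np.eye(p)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))  # 100 samples of a 64-channel "EEG" signal
Sigma = shrinkage_cov(X)
```

The point of the construction is numerical stability: the shrunk estimate is guaranteed positive definite, whereas the raw sample covariance can be singular or badly conditioned when the number of samples is low.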
Bayesian Estimation of Conditional Independence Graphs Improves Functional Connectivity Estimates
Hinne, Max; Janssen, Ronald J.; Heskes, Tom; van Gerven, Marcel A.J.
2015-01-01
Functional connectivity concerns the correlated activity between neuronal populations in spatially segregated regions of the brain, which may be studied using functional magnetic resonance imaging (fMRI). This coupled activity is conveniently expressed using covariance, but this measure fails to distinguish between direct and indirect effects. A popular alternative that addresses this issue is partial correlation, which regresses out the signal of potentially confounding variables, resulting in a measure that reveals only direct connections. Importantly, provided the data are normally distributed, if two variables are conditionally independent given all other variables, their respective partial correlation is zero. In this paper, we propose a probabilistic generative model that allows us to estimate functional connectivity in terms of both partial correlations and a graph representing conditional independencies. Simulation results show that this methodology is able to outperform the graphical LASSO, which is the de facto standard for estimating partial correlations. Furthermore, we apply the model to estimate functional connectivity for twenty subjects using resting-state fMRI data. Results show that our model provides a richer representation of functional connectivity as compared to considering partial correlations alone. Finally, we demonstrate how our approach can be extended in several ways, for instance to achieve data fusion by informing the conditional independence graph with data from probabilistic tractography. As our Bayesian formulation of functional connectivity provides access to the posterior distribution instead of only to point estimates, we are able to quantify the uncertainty associated with our results. This reveals that while we are able to infer a clear backbone of connectivity in our empirical results, the data are not accurately described by simply looking at the mode of the distribution over connectivity. The implication of this is that
Estimation of a discrete monotone distribution
Jankowski, Hanna K.; Wellner, Jon A.
2010-01-01
We study and compare three estimators of a discrete monotone distribution: (a) the (raw) empirical estimator; (b) the “method of rearrangements” estimator; and (c) the maximum likelihood estimator. We show that the maximum likelihood estimator strictly dominates both the rearrangement and empirical estimators in cases when the distribution has intervals of constancy. For example, when the distribution is uniform on {0, … , y}, the asymptotic risk of the method of rearrangements estimator (in squared ℓ2 norm) is y/(y + 1), while the asymptotic risk of the MLE is of order (log y)/(y + 1). For strictly decreasing distributions, the estimators are asymptotically equivalent. PMID:20419057
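Of the three estimators compared, the "method of rearrangements" estimator is simple enough to sketch directly: sort the empirical pmf into decreasing order. (The MLE, which requires the least concave majorant of the empirical cdf, is not shown; the counts below are invented.)

```python
import numpy as np

def rearrangement_estimator(counts):
    """Empirical pmf sorted into decreasing order (the 'rearrangement' estimator)."""
    p_hat = np.asarray(counts, dtype=float)
    p_hat /= p_hat.sum()          # empirical pmf
    return np.sort(p_hat)[::-1]   # enforce monotonicity by sorting

counts = [5, 9, 3, 2, 1]          # empirical counts on {0, ..., 4}
p_mono = rearrangement_estimator(counts)
# p_mono is nonincreasing: [0.45, 0.25, 0.15, 0.10, 0.05]
```

Note that sorting discards the support ordering, which is exactly why this estimator is inefficient on flat regions: on an interval of constancy it preserves sampling noise as a spurious decreasing staircase, whereas the MLE averages it away.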
Estimating sediment discharge: Appendix D
Gray, John R.; Simões, Francisco J. M.
2008-01-01
Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed- material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment- discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment- discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment dis- charges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, M. W.
1978-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
Methods for Cloud Cover Estimation
NASA Technical Reports Server (NTRS)
Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.
1984-01-01
Several methods for cloud cover estimation are described relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing this duty cycle are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.
Estimating diversity via frequency ratios.
Willis, Amy; Bunge, John
2015-12-01
We wish to estimate the total number of classes in a population based on sample counts, especially in the presence of high latent diversity. Drawing on probability theory that characterizes distributions on the integers by ratios of consecutive probabilities, we construct a nonlinear regression model for the ratios of consecutive frequency counts. This allows us to predict the unobserved count and hence estimate the total diversity. We believe that this is the first approach to depart from the classical mixed Poisson model in this problem. Our method is geometrically intuitive and yields good fits to data with reasonable standard errors. It is especially well-suited to analyzing high diversity datasets derived from next-generation sequencing in microbial ecology. We demonstrate the method's performance in this context and via simulation, and we present a dataset for which our method outperforms all competitors.
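The extrapolation idea can be sketched as follows. This is a deliberately simplified version: the published method fits a nonlinear (ratio-of-polynomials) regression to the ratios of consecutive frequency counts, whereas this sketch fits a straight line; the frequency counts are invented.

```python
import numpy as np

# Frequency counts: f[j] = number of classes observed exactly j times.
f = {1: 120, 2: 45, 3: 22, 4: 13, 5: 8}

js = np.array(sorted(f)[:-1])                     # j = 1..4
ratios = np.array([f[j + 1] / f[j] for j in js])  # consecutive ratios f_{j+1}/f_j

# Fit ratio(j) = a + b*j (stand-in for the paper's nonlinear regression model).
b, a = np.polyfit(js, ratios, 1)

# Extrapolate the fitted curve to j = 0: its value there predicts f_1/f_0,
# so the unobserved count is estimated as f_0_hat = f_1 / ratio(0).
r0 = a
f0_hat = f[1] / r0
observed = sum(f.values())
total_hat = observed + f0_hat     # estimated total diversity
```

The essential trick survives the simplification: characterize the count distribution through its consecutive probability ratios, predict the ratio at zero, and recover the number of never-observed classes from the singletons.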
Estimation of urban stormwater quality
Jennings, Marshall E.; Tasker, Gary D.
1988-01-01
Two data-based methods for estimating urban stormwater quality have recently been made available - a planning level method developed by the U.S. Environmental Protection Agency (EPA), and a nationwide regression method developed by the U.S. Geological Survey. Each method uses urban stormwater water-quality constituent data collected for the Nationwide Urban Runoff Program (NURP) during 1979-83. The constituents analyzed include 10 chemical constituents - chemical oxygen demand (COD), total suspended solids (TSS), dissolved solids (DS), total nitrogen (TN), total ammonia plus nitrogen (AN), total phosphorus (TP), dissolved phosphorus (DP), total copper (CU), total lead (PB), and total zinc (ZN). The purpose of this report is to briefly compare features of the two estimation methods.
Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters
1975-12-31
AD-A021 208: Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters. Ramesh Chandra, Union College. Report AES-7514, 1976.
Improved Event Location Uncertainty Estimates
2008-06-30
model (such as Gaussian, spherical or exponential) typically used in geostatistics, we define the robust variogram model as the median regression curve of the residual difference squares for station pairs of...develop methodologies that improve location uncertainties in the presence of correlated, systematic model errors and non-Gaussian measurement errors.
ATR Performance Estimation Seed Program
2015-09-28
...to produce simulated MCM sonar data and demonstrate the impact of system, environmental, and target scattering effects on ATR detection
Position Estimation Using Image Derivative
NASA Technical Reports Server (NTRS)
Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato
2015-01-01
This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
Software Development Cost Estimating Handbook
2009-04-21
Hill AFB, UT. Researching, blueprinting, technical writing, internal reviewing/editing. Naval Center for Cost Analysis (NCCA), Arlington, VA...development processes, software estimating models, Defense Acquisition Framework, data collection, acronyms, terminology, references. Designed for readability and comprehension; large right margin for notes. Systems & Software Technology Conference, 21 April 2009. Part I - Basics
Estimation of coastal density gradients
NASA Astrophysics Data System (ADS)
Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.
2012-04-01
Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties because of a variety of factors - spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002 with regular (quasi monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone, density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models are run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority. (Here gliders are an exciting prospect for
Estimating Phytoplankton Biomass and Productivity.
1981-06-01
Estimates of phytoplankton biomass and rates of production can provide a manager with some insight into questions concerning...and growth. Phytoplankton biomass is the amount of algal material present, whereas productivity is the rate at which algal cell material is produced...biomass and productivity parameters. Munawar et al. (1974) reported that cell volume was better correlated to chlorophyll a and photosynthesis rates
Entropy estimation and Fibonacci numbers
NASA Astrophysics Data System (ADS)
Timofeev, Evgeniy A.; Kaltchenko, Alexei
2013-05-01
We introduce a new metric on a space of right-sided infinite sequences drawn from a finite alphabet. Emerging from a problem of entropy estimation of a discrete stationary ergodic process, the metric is important on its own part and exhibits some interesting properties. Notably, the number of distinct metric values for a set of sequences of length m is equal to Fm+3 - 1, where Fm is a Fibonacci number.
Estimation of local spatial scale
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1987-01-01
The concept of local scale asserts that for a given class of psychophysical measurements, performance at any two visual field locations is equated by magnifying the targets by the local scale associated with each location. Local scale has been hypothesized to be equal to cortical magnification or alternatively to the linear density of receptors or ganglion cells. Here, it is shown that it is possible to estimate local scale without prior knowledge about the scale or its physiological basis.
Operator estimates in homogenization theory
NASA Astrophysics Data System (ADS)
Zhikov, V. V.; Pastukhova, S. E.
2016-06-01
This paper gives a systematic treatment of two methods for obtaining operator estimates: the shift method and the spectral method. Though substantially different in mathematical technique and physical motivation, these methods produce basically the same results. Besides the classical formulation of the homogenization problem, other formulations of the problem are also considered: homogenization in perforated domains, the case of an unbounded diffusion matrix, non-self-adjoint evolution equations, and higher-order elliptic operators. Bibliography: 62 titles.
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation, applied to the lateral deflection of an elastic beam using on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance needs to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arises from local linearization about the current conditional mean. Processing refinements for nonlinear estimation based on optimal nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
Estimating the coherence of noise
NASA Astrophysics Data System (ADS)
Wallman, Joel
To harness the advantages of quantum information processing, quantum systems have to be controlled to within some maximum threshold error. Certifying whether the error is below the threshold is possible by performing full quantum process tomography, however, quantum process tomography is inefficient in the number of qubits and is sensitive to state-preparation and measurement errors (SPAM). Randomized benchmarking has been developed as an efficient method for estimating the average infidelity of noise to the identity. However, the worst-case error, as quantified by the diamond distance from the identity, can be more relevant to determining whether an experimental implementation is at the threshold for fault-tolerant quantum computation. The best possible bound on the worst-case error (without further assumptions on the noise) scales as the square root of the infidelity and can be orders of magnitude greater than the reported average error. We define a new quantification of the coherence of a general noise channel, the unitarity, and show that it can be estimated using an efficient protocol that is robust to SPAM. Furthermore, we also show how the unitarity can be used with the infidelity obtained from randomized benchmarking to obtain improved estimates of the diamond distance and to efficiently determine whether experimental noise is close to stochastic Pauli noise.
Measurement campaigns for holdup estimation
Picard, R.R.
1988-07-01
The derivation of technically defensible holdup estimates is described. Considerations important in planning measurement campaigns to provide the necessary data are reviewed, and the role of statistical sampling is discussed. By design, the presentation is nonmathematical and intended for a general audience. Though clearly important, the use of sampling principles in the planning of holdup-related activities is sometimes viewed with apprehension. Holdup is often poorly understood to begin with, and the incorporation of esoteric matters only adds to an image problem. Unfortunately, there are no painless options. In many operating facilities, surface areas on which holdup has accumulated amount to many square miles. It is not practical to pursue 100% measurement of all such surface areas. Thus, some portion is measured, constituting a "sample," whether obtained by a formal procedure or not. Understanding the principles behind sampling is important in planning and in developing legitimate holdup estimates. Although derivation of legitimate, facility-wide holdup estimates is not currently mandated by Department of Energy regulatory requirements, the related activities would greatly advance the present state of knowledge.
Loss estimation of Membramo earthquake
NASA Astrophysics Data System (ADS)
Damanik, R.; Sedayo, H.
2016-05-01
Papua tectonics are dominated by the oblique collision of the Pacific plate along the north side of the island. The very high relative plate motion (about 120 mm/year) between the Pacific and Papua-Australian plates gives this region a very high earthquake production rate, about twice that of Sumatra, the western margin of Indonesia. Most of the seismicity beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9) on July 28th, 2015, a large earthquake of Mw = 7.0 occurred on the West Mamberamo Fault System. The earthquake focal mechanisms are dominated by northwest-trending thrust mechanisms. A GMPE and an ATC vulnerability curve were used to estimate the distribution of damage. The mean loss estimated for this earthquake is IDR 78.6 billion. We estimate that the insured loss will be only a small portion of the total, due to deductibles.
NASA Technical Reports Server (NTRS)
Xu, Ru-Gang; Koga, Dennis (Technical Monitor)
2001-01-01
The goal of 'Estimate' is to take advantage of attitude information to produce better pose while staying flexible and robust. Currently there are several instruments that are used for attitude: gyros, inclinometers, and compasses. However, precise and useful attitude information cannot come from one instrument. Integration of rotational rates, from gyro data for example, would result in drift. Therefore, although gyros are accurate in the short-term, accuracy in the long term is unlikely. Using absolute instruments such as compasses and inclinometers can result in an accurate measurement of attitude in the long term. However, in the short term, the physical nature of compasses and inclinometers, and the dynamic nature of a mobile platform result in highly volatile and therefore useless data. The solution then is to use both absolute and relative data. Kalman Filtering is known to be able to combine gyro and compass/inclinometer data to produce stable and accurate attitude information. Since the model of motion is linear and the data comes in as discrete samples, a Discrete Kalman Filter was selected as the core of the new estimator. Therefore, 'Estimate' can be divided into two parts: the Discrete Kalman Filter and the code framework.
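The gyro/inclinometer fusion described can be illustrated with a one-dimensional discrete Kalman filter: integrate the relative (gyro) rate in the predict step, then correct toward the absolute (inclinometer) angle in the update step. This is a toy sketch, not the 'Estimate' code; the noise parameters and simulated signals are assumptions.

```python
import numpy as np

def fuse_attitude(gyro_rates, incl_angles, dt=0.01, q=1e-4, r=0.05):
    """1-D discrete Kalman filter fusing gyro rates with inclinometer angles.

    gyro_rates: angular-rate samples (rad/s); incl_angles: absolute angle
    samples (rad). q models process noise (gyro drift), r inclinometer noise.
    """
    theta, P = incl_angles[0], 1.0
    out = []
    for w, z in zip(gyro_rates, incl_angles):
        theta += w * dt              # predict: integrate the relative rate
        P += q
        K = P / (P + r)              # update: blend in the absolute measurement
        theta += K * (z - theta)
        P *= (1 - K)
        out.append(theta)
    return np.array(out)

# Constant true angle of 0.5 rad; a biased gyro and a noisy inclinometer.
rng = np.random.default_rng(1)
n = 2000
gyro = 0.02 + rng.normal(0.0, 0.01, n)   # bias alone would integrate to ~0.4 rad of drift
incl = 0.5 + rng.normal(0.0, 0.2, n)
est = fuse_attitude(gyro, incl)
```

The result captures the abstract's point: pure integration drifts with the gyro bias, the raw inclinometer is too volatile to use directly, and the filter tracks the true angle by leaning on each source at the timescale where it is trustworthy.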
Abundance estimation and conservation biology
Nichols, J.D.; MacKenzie, D.I.
2004-01-01
Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention
Statistical Aspects of Effect Size Estimation.
ERIC Educational Resources Information Center
Hedges, Larry V.
When the results of a series of independent studies are combined, it is useful to quantitatively estimate the magnitude of the effects. Several methods for estimating effect size are compared in this paper. Glass' estimator and the uniformly minimum variance unbiased estimator are based on the ratio of the sample mean difference and the pooled…
Psychometric Properties of IRT Proficiency Estimates
ERIC Educational Resources Information Center
Kolen, Michael J.; Tong, Ye
2010-01-01
Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…
Estimating the Costs of Preventive Interventions
ERIC Educational Resources Information Center
Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin
2007-01-01
The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
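For two independent estimates, inverse-variance weighting can be sketched directly: the combined estimate is (x₁/V₁ + x₂/V₂)/(1/V₁ + 1/V₂) and its variance 1/(1/V₁ + 1/V₂) is smaller than either input variance. The log-space flow values and variances below are invented for illustration.

```python
def weighted_estimate(x_site, var_site, x_reg, var_reg):
    """Variance-weighted combination of two independent flow-statistic estimates."""
    w_site = 1.0 / var_site
    w_reg = 1.0 / var_reg
    x_w = (w_site * x_site + w_reg * x_reg) / (w_site + w_reg)
    var_w = 1.0 / (w_site + w_reg)
    return x_w, var_w

# At-site 1-percent AEP estimate of 3.08 (log10 cfs, variance 0.09) combined
# with a regional-regression estimate of 3.18 (variance 0.03).
x, v = weighted_estimate(3.08, 0.09, 3.18, 0.03)
# x = 3.155 (the 1:3 weights favor the lower-variance regional estimate), v = 0.0225
```

The combined variance (0.0225) is below both inputs, which is the reduction in uncertainty the guidelines are after; the independence assumption is what licenses simply adding the inverse variances.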
Wheat productivity estimates using LANDSAT data
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Colwell, J. E. (Principal Investigator); Rice, D. P.; Bresnahan, P. A.
1977-01-01
The author has identified the following significant results. Large area LANDSAT yield estimates were generated. These results were compared with estimates computed using a meteorological yield model (CCEA). Both of these estimates were compared with Kansas Crop and Livestock Reporting Service (KCLRS) estimates of yield, in an attempt to assess the relative and absolute accuracy of the LANDSAT and CCEA estimates. Results were inconclusive. A large area direct wheat prediction procedure was implemented. Initial results have produced a wheat production estimate comparable with the KCLRS estimate.
Use of robust estimators in parametric classifiers
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David A.
1989-01-01
The parametric approach to density estimation and classifier design is a well studied subject. The parametric approach is desirable because basically it reduces the problem of classifier design to that of estimating a few parameters for each of the pattern classes. The class parameters are usually estimated using maximum-likelihood (ML) estimators. ML estimators are, however, very sensitive to the presence of outliers. Several robust estimators of mean and covariance matrix and their effect on the probability of error in classification are examined. Comments are made about alpha-ranked (alpha-trimmed) estimators.
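An alpha-trimmed mean, one of the robust estimators mentioned, can be sketched as follows; the data and trimming fraction are invented for illustration.

```python
import numpy as np

def trimmed_mean(x, alpha=0.1):
    """Alpha-trimmed mean: discard the lowest and highest alpha fraction, then average."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(alpha * len(x))
    return x[k:len(x) - k].mean()

# A tight cluster of inliers near 5 contaminated by two gross outliers.
data = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.95, 5.05, 50.0, 60.0]
plain = np.mean(data)                    # dragged far from 5 by the outliers
robust = trimmed_mean(data, alpha=0.2)   # stays near the inlier center
```

This is the one-dimensional analogue of the alpha-trimmed class-mean estimators discussed: the ML (plain-mean) estimate is arbitrarily corruptible by a few outliers, while the trimmed version bounds their influence at the cost of some efficiency on clean data.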
Treelength Optimization for Phylogeny Estimation
Liu, Kevin; Warnow, Tandy
2012-01-01
The standard approach to phylogeny estimation uses two phases, in which the first phase produces an alignment on a set of homologous sequences, and the second phase estimates a tree on the multiple sequence alignment. POY, a method which seeks a tree/alignment pair minimizing the total treelength, is the most widely used alternative to this two-phase approach. The topological accuracy of trees computed under treelength optimization is, however, controversial. In particular, one study showed that treelength optimization using simple gap penalties produced poor trees and alignments, and suggested the possibility that if POY were used with an affine gap penalty, it might be able to be competitive with the best two-phase methods. In this paper we report on a study addressing this possibility. We present a new heuristic for treelength, called BeeTLe (Better Treelength), that is guaranteed to produce trees at least as short as POY. We then use this heuristic to analyze a large number of simulated and biological datasets, and compare the resultant trees and alignments to those produced using POY and also maximum likelihood (ML) and maximum parsimony (MP) trees computed on a number of alignments. In general, we find that trees produced by BeeTLe are shorter and more topologically accurate than POY trees, but that neither POY nor BeeTLe produces trees as topologically accurate as ML trees produced on standard alignments. These findings, taken as a whole, suggest that treelength optimization is not as good an approach to phylogenetic tree estimation as maximum likelihood based upon good alignment methods. PMID:22442677
Solar power satellite cost estimate
NASA Technical Reports Server (NTRS)
Harron, R. J.; Wadle, R. C.
1981-01-01
The solar power configuration costed is the 5 GW silicon solar cell reference system. The subsystems are identified by work breakdown structure elements down to the lowest level for which cost information was generated. This breakdown divides into five sections: the satellite, construction, transportation, the ground receiving station and maintenance. For each work breakdown structure element, a definition, design description and cost estimate were included. An effort was made to include for each element a reference that more thoroughly describes the element and the method of costing used. All costs are in 1977 dollars.
Estimating dome seeing for LSST
NASA Astrophysics Data System (ADS)
Sebag, Jacques; Vogiatzis, Konstantinos
2014-08-01
Dome seeing is a critical effect influencing the optical performance of ground-based telescopes. A previously reported combination of Computational Fluid Dynamics (CFD) and optical simulations to model dome seeing was implemented for the latest LSST enclosure geometry. To this end, high spatial resolution thermal unsteady CFD simulations were performed for three different telescope zenith angles and four azimuth angles. These simulations generate time records of refractive index values along the optical path, which are post-processed to estimate the image degradation due to dome seeing. This method allows us to derive the distribution of the seeing contribution along the different optical path segments that compose the overall light path from the entrance of the dome to the LSST science camera. These results are used to recognize potential problems and to guide the observatory design. In this paper, the modeling estimates are reviewed and assessed relative to the corresponding performance allocation, and combined with other simulator outputs to model the dome seeing impact during LSST operations.
Probabilistic elastography: estimating lung elasticity.
Risholm, Petter; Ross, James; Washko, George R; Wells, William M
2011-01-01
We formulate registration-based elastography in a probabilistic framework and apply it to study lung elasticity in the presence of emphysematous and fibrotic tissue. The elasticity calculations are based on a Finite Element discretization of a linear elastic biomechanical model. We marginalize over the boundary conditions (deformation) of the biomechanical model to determine the posterior distribution over elasticity parameters. Image similarity is included in the likelihood, an elastic prior is included to constrain the boundary conditions, while a Markov model is used to spatially smooth the inhomogeneous elasticity. We use a Markov Chain Monte Carlo (MCMC) technique to characterize the posterior distribution over elasticity from which we extract the most probable elasticity as well as the uncertainty of this estimate. Even though registration-based lung elastography with inhomogeneous elasticity is challenging due to the problem's highly underdetermined nature and the sparse image information available in lung CT, we show promising preliminary results on estimating lung elasticity contrast in the presence of emphysematous and fibrotic tissue.
Estimating Bayesian Phylogenetic Information Content
Lewis, Paul O.; Chen, Ming-Hui; Kuo, Lynn; Lewis, Louise A.; Fučíková, Karolina; Neupane, Suman; Wang, Yu-Bo; Shi, Daoyuan
2016-01-01
Measuring the phylogenetic information content of data has a long history in systematics. Here we explore a Bayesian approach to information content estimation. The entropy of the posterior distribution compared with the entropy of the prior distribution provides a natural way to measure information content. If the data have no information relevant to ranking tree topologies beyond the information supplied by the prior, the posterior and prior will be identical. Information in data discourages consideration of some hypotheses allowed by the prior, resulting in a posterior distribution that is more concentrated (has lower entropy) than the prior. We focus on measuring information about tree topology using marginal posterior distributions of tree topologies. We show that both the accuracy and the computational efficiency of topological information content estimation improve with use of the conditional clade distribution, which also allows topological information content to be partitioned by clade. We explore two important applications of our method: providing a compelling definition of saturation and detecting conflict among data partitions that can negatively affect analyses of concatenated data. [Bayesian; concatenation; conditional clade distribution; entropy; information; phylogenetics; saturation.] PMID:27155008
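The core quantity is easy to sketch (with hypothetical numbers; the paper works with marginal and conditional clade distributions, but the entropy difference is the same idea): information supplied by the data is the entropy of the prior over tree topologies minus the entropy of the posterior.

```python
import math

def entropy(probs):
    """Shannon entropy in nats, skipping zero-probability topologies."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical example: a uniform prior over the 15 unrooted topologies
# of 5 taxa, and a posterior concentrated on 3 of them.
prior = [1.0 / 15] * 15
posterior = [0.7, 0.2, 0.1] + [0.0] * 12
info = entropy(prior) - entropy(posterior)   # information the data supplied
```

If the data were uninformative, the posterior would equal the prior and `info` would be zero; here the concentration of the posterior yields roughly 1.9 nats of topological information.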
Metal detector depth estimation algorithms
NASA Astrophysics Data System (ADS)
Marble, Jay; McMichael, Ian
2009-05-01
This paper looks at depth estimation techniques using electromagnetic induction (EMI) metal detectors. Four algorithms are considered. The first utilizes a vertical gradient sensor configuration. The second is a dual frequency approach. The third makes use of dipole and quadrupole receiver configurations. The fourth looks at coils of different sizes. Each algorithm is described along with its associated sensor. Two figures of merit ultimately define algorithm/sensor performance. The first is the depth of penetration obtainable (that is, the maximum detection depth obtainable). This describes the ability of the method to detect deep targets. The second is the achievable statistical depth resolution. This resolution describes the precision with which depth can be estimated. In this paper depth of penetration and statistical depth resolution are qualitatively determined for each sensor/algorithm. A field test was conducted using two lanes with emplaced UXO. The first lane contains 155 shells at increasing depths from 0" to 48". The second is more realistic, containing objects of varying size. The first lane is used for algorithm training purposes, while the second is used for testing. The metal detectors used in this study are the Geonics EM61, Geophex GEM5, Minelab STMR II, and Vallon VMV16.
LOD estimation from DORIS observations
NASA Astrophysics Data System (ADS)
Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs
2016-04-01
The difference between the astronomically determined duration of the day and 86400 seconds is called length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching only a precision of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at Geodetic Observatory Pecny show that an accuracy of around 0.1 ms per day can be reached when the cross-track harmonics in the satellite orbit model are not adjusted. The paper presents the long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, LOD series from the individual DORIS satellite solutions are also analysed.
Organ volume estimation using SPECT
Zaidi, H.
1996-06-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual window method was used for scatter subtraction. The author used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of (1) fixed thresholding, (2) automatic thresholding, (3) attenuation, (4) scatter, and (5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed. The relative error is within 7% for the GLH method combined with attenuation and scatter corrections.
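The fixed-threshold voxel-counting step can be sketched as follows (a schematic version: the threshold fraction and voxel size are made-up values, and the GLH method, attenuation correction, and scatter subtraction are not shown).

```python
import numpy as np

def threshold_volume(image, threshold_frac, voxel_ml):
    """Estimate organ volume by counting voxels at or above a fixed
    fraction of the maximum reconstructed intensity, then scaling by
    the physical volume of one voxel (in ml)."""
    mask = image >= threshold_frac * image.max()
    return mask.sum() * voxel_ml
```

For a reconstructed SPECT volume, the estimate is simply (number of voxels inside the contour) x (voxel volume); the whole difficulty, as the abstract notes, lies in choosing the threshold and correcting the intensities first.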
Estimation of continental precipitation recycling
NASA Technical Reports Server (NTRS)
Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, P. S.
1993-01-01
The total amount of water that precipitates on large continental regions is supplied by two mechanisms: 1) advection from the surrounding areas external to the region and 2) evaporation and transpiration from the land surface within the region. The latter supply mechanism is tantamount to the recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. Gridded data on observed wind and humidity in the global atmosphere are used to determine the convergence of atmospheric water vapor over continental regions. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. The results indicate that the contribution of regional evaporation to regional precipitation varies substantially with location and season. For the regions studied, the ratio of locally contributed to total monthly precipitation generally lies between 0.10 and 0.30 but is as high as 0.40 in several cases.
Aging persons' estimates of vehicular motion.
Schiff, W; Oldak, R; Shah, V
1992-12-01
Estimated arrival times of moving autos were examined in relation to viewer age, gender, motion trajectory, and velocity. Direct push-button judgments were compared with verbal estimates derived from velocity and distance, which were based on assumptions that perceivers compute arrival time from perceived distance and velocity. Experiment 1 showed that direct estimates of younger Ss were most accurate. Older women made the shortest (highly cautious) estimates of when cars would arrive. Verbal estimates were much lower than direct estimates, with little correlation between them. Experiment 2 extended target distances and velocities of targets, with the results replicating the main findings of Experiment 1. Judgment accuracy increased with target velocity, and verbal estimates were again poorer estimates of arrival time than direct ones, with different patterns of findings. Using verbal estimates to approximate judgments in traffic situations appears questionable.
Kalman filter estimation model in flood forecasting
NASA Astrophysics Data System (ADS)
Husain, Tahir
Elementary precipitation and runoff estimation problems associated with hydrologic data collection networks are formulated in conjunction with the Kalman Filter Estimation Model. Examples involve the estimation of runoff using data from a single precipitation station and also from a number of precipitation stations. The formulations demonstrate the role of state-space, measurement, and estimation equations of the Kalman Filter Model in flood forecasting. To facilitate the formulation, the unit hydrograph concept and the antecedent precipitation index are adopted in the estimation model. The methodology is then applied to estimate various flood events in the Carnation Creek of British Columbia.
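The predict/update cycle that such a formulation relies on can be sketched with a scalar toy filter (the F, H, Q, R values here are made up; the paper's state-space model built on the unit hydrograph is multivariate).

```python
def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and its variance
    z:    new measurement; F, H, Q, R: state transition, measurement
    map, process noise, and measurement noise (hypothetical values)."""
    # predict: propagate the state and inflate uncertainty by Q
    x_pred = F * x
    P_pred = F * P * F + Q
    # update: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Each new gauge reading moves the runoff estimate toward the measurement by the gain K and shrinks the estimation variance, which is the mechanism the abstract's "estimation equations" refer to.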
Variance estimation for nucleotide substitution models.
Chen, Weishan; Wang, Hsiuying
2015-09-01
The current variance estimators for most evolutionary models were derived by approximating a nucleotide substitution number estimator with a simple first order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second order Taylor expansion of the substitution number estimator, the first order Taylor expansion of a squared deviation, and the second order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study, which shows that the variance estimator derived using the second order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
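The Taylor-expansion machinery behind such estimators can be sketched generically (this is the textbook delta-method form for a smooth transform g of an estimator with mean theta and variance sigma^2, not the paper's model-specific F81/F84/HKY85/TN93 formulas):

```latex
% first-order delta method (the basis of the existing estimator)
\operatorname{Var}\bigl[g(\hat\theta)\bigr] \approx \bigl[g'(\theta)\bigr]^{2}\,\sigma^{2}

% second-order correction (for approximately normal \hat\theta)
\operatorname{Var}\bigl[g(\hat\theta)\bigr] \approx
  \bigl[g'(\theta)\bigr]^{2}\,\sigma^{2}
  + \tfrac{1}{2}\bigl[g''(\theta)\bigr]^{2}\,\sigma^{4}
```

The second-order term is what expansions like those in the abstract add; when the transform is strongly curved (large g''), the first-order formula underestimates the variance.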
Growth stage estimation. [crop calendars
NASA Technical Reports Server (NTRS)
Whitehead, V. S.; Phinney, D. E.; Crea, W. E. (Principal Investigator)
1979-01-01
Of the three candidate approaches to adjustment of the crop calendar to account for year-to-year weather differences, the Robertson triquadratic unit, a nonlinear function of maximum and minimum temperature and day length, best described the rate of phenological development of wheat. The adjustable crop calendar (ACC) as implemented for LACIE is used to calculate the daily increment of development through six physiological stages of growth. Topics covered include dormancy modeling, the spring restart model, spring wheat starter model, winter starter model, winter wheat starter model, inclusion of the moisture variable, and display of crop stage estimation results. Assessment of the ACC accuracy over the period of LACIE operation indicates that the adjustable crop calendars used provided more accurate information than would have been available using historical norms. The models performed best under the conditions from which they were derived (Canadian spring wheat) and most poorly for the dwarf varieties and Southern Hemisphere applications.
Overdiagnosis: epidemiologic concepts and estimation.
Bae, Jong-Myon
2015-01-01
Overdiagnosis of thyroid cancer has been proposed as an explanation for the rapidly increasing incidence in South Korea. Overdiagnosis is defined as 'the detection of cancers that would never have been found were it not for the screening test', and may be an extreme form of lead-time bias due to indolent cancers, which is inevitable when conducting a cancer screening programme. Because it is solely an epidemiological concept, it can be estimated only indirectly, through phenomena such as the lack of a compensatory drop in incidence in post-screening periods, or discrepancies between incidence and mortality. Previous attempts to quantify overdiagnosis in screening mammography were reviewed in order to secure the data needed to establish its prevalence in South Korea.
2007 Estimated International Energy Flows
Smith, C A; Belles, R D; Simon, A J
2011-03-10
An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.
Supplemental report on cost estimates
1992-04-29
The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior-Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled, ''Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program.'' This Corps supplemental report provides greater detail on the cost analysis.
Comparison of space debris estimates
Canavan, G.H.; Judd, O.P.; Naka, R.F.
1996-10-01
Debris is thought to be a hazard to space systems through impact and cascading. The current environment is assessed as not threatening to defense systems. Projected reductions in launch rates to LEO should delay concerns for centuries. There is agreement between AFSPC and NASA analyses on catalogs and collision rates, but not on fragmentation rates. Experiments in the laboratory, field, and space are consistent with AFSPC estimates of the number of fragments per collision. A more careful treatment of growth rates greatly reduces long-term stability issues. Space debris has not been shown to be an issue in coming centuries; thus, it does not appear necessary for the Air Force to take additional steps to mitigate it.
Age Estimation in Forensic Sciences
Alkass, Kanar; Buchholz, Bruce A.; Ohtani, Susumu; Yamamoto, Toshiharu; Druid, Henrik; Spalding, Kirsty L.
2010-01-01
Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster because the age at death, birth date, and year of death as well as gender can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this study, we analyzed teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that aboveground testing of nuclear weapons during the cold war (1955–1963) caused an extreme increase in global levels of carbon-14 (14C), which has been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and 10 of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R2 = 0.66, p < 0.05). Radiocarbon analysis showed an excellent precision with an overall absolute error of 1.0 ± 0.6 years. Aspartic acid racemization also showed a good precision with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification. PMID:19965905
Estimating location without external cues.
Cheung, Allen
2014-10-01
The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.
Efficient Estimation of the Standardized Value
ERIC Educational Resources Information Center
Longford, Nicholas T.
2009-01-01
We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)
IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY
This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...
Flight Mechanics/Estimation Theory Symposium, 1989
NASA Technical Reports Server (NTRS)
Stengle, Thomas (Editor)
1989-01-01
Numerous topics in flight mechanics and estimation were discussed. Satellite attitude control, quaternion estimation, orbit and attitude determination, spacecraft maneuvers, spacecraft navigation, gyroscope calibration, spacecraft rendezvous, and atmospheric drag model calculations for spacecraft lifetime prediction are among the topics covered.
Efficient resampling methods for nonsmooth estimating functions
ZENG, DONGLIN
2009-01-01
We propose a simple and general resampling strategy to estimate variances for parameter estimators derived from nonsmooth estimating functions. This approach applies to a wide variety of semiparametric and nonparametric problems in biostatistics. It does not require solving estimating equations and is thus much faster than the existing resampling procedures. Its usefulness is illustrated with heteroscedastic quantile regression and censored data rank regression. Numerical results based on simulated and real data are provided. PMID:17925303
Estimating OSNR of equalised QPSK signals.
Ives, David J; Thomsen, Benn C; Maher, Robert; Savory, Seb J
2011-12-12
We propose and demonstrate a technique to estimate the OSNR of an equalised QPSK signal based on the radial moments of the complex signal constellation. The technique is compared through simulation with maximum likelihood estimation and the effect of the block size used in the estimation is also assessed. The technique is verified experimentally and when combined with a single point calibration the OSNR of the input signal was estimated to within 0.5 dB.
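One standard moment-based approach can be sketched as follows (this is the M2M4 estimator for constant-modulus signals, a common baseline; the paper's radial-moment technique and its single-point calibration may differ).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical equalised QPSK constellation: unit-power symbols on the
# four points exp(j*(pi/4 + k*pi/2)).
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))

snr_lin = 10.0                                 # true SNR of 10 dB
noise = np.sqrt(1 / (2 * snr_lin)) * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n)
)
r = sym + noise

# M2M4 estimator: for a constant-modulus signal plus circular Gaussian
# noise, E|r|^2 = S + N and E|r|^4 = S^2 + 4SN + 2N^2, so the signal
# power is recoverable from the second and fourth radial moments.
m2 = np.mean(np.abs(r) ** 2)
m4 = np.mean(np.abs(r) ** 4)
s_hat = np.sqrt(2 * m2 ** 2 - m4)              # estimated signal power
n_hat = m2 - s_hat                             # estimated noise power
snr_db = 10 * np.log10(s_hat / n_hat)
```

Because only radial moments of the constellation are used, the estimate is insensitive to carrier phase, which is presumably why radial statistics are attractive after equalisation.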
Estimated freshwater withdrawals in Washington, 2010
Lane, Ron C.; Welch, Wendy B.
2015-03-18
The amount of public- and self-supplied water used for domestic, irrigation, livestock, aquaculture, industrial, mining, and thermoelectric power was estimated for state, county, and eastern and western regions of Washington during calendar year 2010. Withdrawals of freshwater for offstream uses were estimated to be about 4,885 million gallons per day. The total estimated freshwater withdrawals for 2010 were approximately 15 percent less than the 2005 estimate because of decreases in irrigation and thermoelectric power withdrawals.
Adaptive density estimator for galaxy surveys
NASA Astrophysics Data System (ADS)
Saar, Enn
2016-10-01
Galaxy number or luminosity density serves as a basis for many structure classification algorithms. Several methods are used to estimate this density. Among them, kernel methods probably have the best statistical properties and also allow estimation of the local sample errors of the estimate. We introduce a kernel density estimator with an adaptive data-driven anisotropic kernel, describe its properties and demonstrate the wealth of additional information it gives us about the local properties of the galaxy distribution.
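A minimal 1-D sketch of an adaptive-bandwidth kernel estimator is given below (an Abramson-style square-root law, in which the local bandwidth shrinks where a pilot estimate says the data are dense; the paper's anisotropic, data-driven kernel for galaxy surveys is more elaborate).

```python
import numpy as np

def adaptive_kde(data, grid, h0=0.3):
    """Adaptive kernel density estimate on `grid`.
    Step 1: fixed-bandwidth Gaussian pilot estimate at the sample points.
    Step 2: local bandwidths h_i = h0 * sqrt(g / pilot_i), where g is the
    geometric mean of the pilot values (Abramson's square-root law)."""
    data = np.asarray(data, float)

    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

    # pilot estimate at each sample point
    pilot = np.array([gauss((x - data) / h0).mean() / h0 for x in data])
    g = np.exp(np.log(pilot).mean())        # geometric mean normaliser
    h = h0 * np.sqrt(g / pilot)             # one bandwidth per data point
    # evaluate: each point contributes a kernel with its own bandwidth
    return np.array([(gauss((x - data) / h) / h).mean() for x in grid])
```

Dense regions get narrow kernels (preserving structure) and sparse regions get wide ones (suppressing noise), which is the basic trade-off any adaptive estimator for galaxy densities must make.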
Continuously differentiable sample-spacing entropy estimation.
Ozertem, Umut; Uysal, Ismail; Erdogmus, Deniz
2008-11-01
The insufficiency of using only second-order statistics, and the promise of exploiting higher-order statistics of the data, are now well understood, and more advanced objectives incorporating higher-order statistics, especially those stemming from information theory such as error-entropy minimization, are being studied and applied in many contexts of machine learning and signal processing. In the adaptive system training context, the main drawback of utilizing output error entropy, as compared to correlation-estimation-based second-order statistics, is the computational load of the entropy estimation, which is usually obtained via a plug-in kernel estimator. Sample-spacing estimates offer computationally inexpensive entropy estimators; however, the resulting estimates are not differentiable and hence not suitable for gradient-based adaptation. In this brief paper, we propose a nonparametric entropy estimator that captures the desirable properties of both approaches. The resulting estimator yields continuously differentiable estimates with a computational complexity on the order of that of the sample-spacing techniques. The proposed estimator is compared with the kernel density estimation (KDE)-based entropy estimator in the supervised neural network training framework, with computation time and performance comparisons.
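The sample-spacing baseline can be sketched as the classic m-spacing (Vasicek-type) estimator (a generic version of the non-differentiable starting point; the paper's contribution is a smoothed, continuously differentiable variant of this idea).

```python
import math

def m_spacing_entropy(sample, m=3):
    """Vasicek-type m-spacing entropy estimate (in nats).
    Sort the sample and average log((n+1)/m * (x_(i+m) - x_(i))) over
    the m-spacings of the order statistics. Cheap (one sort), but not
    differentiable in the sample values because of the sorting."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i in range(n - m):
        gap = xs[i + m] - xs[i]
        total += math.log((n + 1) / m * gap)
    return total / (n - m)
```

For a sample spread evenly over [0, 1] the estimate is near 0, the true entropy of the uniform distribution, which is a quick sanity check of the formula.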
New Methodology for Natural Gas Production Estimates
2010-01-01
A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.
Cognitive Processes of Numerical Estimation in Children
ERIC Educational Resources Information Center
Ashcraft, Mark H.; Moore, Alex M.
2012-01-01
We tested children in Grades 1 to 5, as well as college students, on a number line estimation task and examined latencies and errors to explore the cognitive processes involved in estimation. The developmental trends in estimation were more consistent with the hypothesized shift from logarithmic to linear representation than with an account based…
Current Term Enrollment Estimates: Spring 2014
ERIC Educational Resources Information Center
National Student Clearinghouse, 2014
2014-01-01
Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center, include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state, and…
Pseudolikelihood Estimation of the Rasch Model.
ERIC Educational Resources Information Center
Smit, Arnold; Kelderman, Henk
2000-01-01
Proposes an estimation method for the Rasch model that is based on the pseudolikelihood theory of B. Arnold and D. Strauss (1988). Simulation results show great similarity between estimates from this method and those from conditional maximum likelihood and unconditional maximum likelihood estimates for the item parameters of the Rasch model. (SLD)
Stability constant estimator user's guide
Hay, B.P.; Castleton, K.J.; Rustad, J.R.
1996-12-01
The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.
Estimating Canopy Dark Respiration for Crop Models
NASA Technical Reports Server (NTRS)
Monje Mejia, Oscar Alberto
2014-01-01
Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
Missing Data and IRT Item Parameter Estimation.
ERIC Educational Resources Information Center
DeMars, Christine
The situation of nonrandomly missing data has theoretically different implications for item parameter estimation depending on whether joint maximum likelihood or marginal maximum likelihood methods are used in the estimation. The objective of this paper is to illustrate what potentially can happen, under these estimation procedures, when there is…
The Mayfield method of estimating nesting success: A model, estimators and simulation results
Hensler, G.L.; Nichols, J.D.
1981-01-01
Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
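The estimator itself is simple enough to sketch (a minimal illustration with hypothetical numbers, not the paper's data): daily survival is one minus the ratio of nest losses to total exposure in nest-days, and overall nesting success is that daily rate raised to the length of the nesting period.

```python
def mayfield_daily_survival(losses, exposure_days):
    """Mayfield estimator of daily nest survival: one minus the number
    of nest losses divided by the total nest-days of exposure."""
    return 1.0 - losses / exposure_days

def nesting_success(daily_survival, nesting_period_days):
    """Probability a nest survives the whole nesting period, assuming a
    constant daily survival rate."""
    return daily_survival ** nesting_period_days
```

For example, 10 losses over 500 nest-days of exposure give a daily survival of 0.98; over a hypothetical 25-day nesting period that compounds to roughly 60% nest success, far below the naive ratio of successful to observed nests that the traditional estimator would report.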
Estimating the NIH Efficient Frontier
2012-01-01
Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
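The portfolio machinery involved can be sketched with a toy three-asset example (hypothetical expected returns and covariances, not the NIH/YLL data): the global minimum-variance portfolio, the left tip of the efficient frontier, has weights proportional to the covariance matrix's inverse applied to a vector of ones.

```python
import numpy as np

# Hypothetical expected "returns" and covariance of three funding groups
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])

# Global minimum-variance weights: solve cov @ w = 1, then normalise
ones = np.ones(3)
w_mv = np.linalg.solve(cov, ones)
w_mv /= w_mv.sum()

risk = np.sqrt(w_mv @ cov @ w_mv)      # portfolio standard deviation
ret = mu @ w_mv                        # portfolio expected return
```

Tracing out the frontier then amounts to repeating this minimisation subject to successively higher target returns; the paper's claim that NIH's allocation has lower volatility than an equal-weight portfolio with similar expected return is a statement about where the current allocation sits relative to this curve.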
Mars cratering chronology: new estimates
NASA Astrophysics Data System (ADS)
Ivanov, B.
Many interpretations of Mars geologic evolution are made with the cratering chronology technique (e.g. Hartmann and Neukum, Space Sci. Rev. 96, 165-194, 2001). The core idea of the technique is that older planetary surfaces accumulate more impact craters of a given size than younger surfaces. Two issues are important for the cratering chronology: (1) the estimate of the Moon/Mars cratering ratio needed to transfer the absolute time scale from the Moon, studied with sample-return missions, and (2) the relative importance of secondary impact craters in the interpretation of the available crater counts. In this presentation I describe progress on both topics listed above. Modern impact rates on planets are defined by the orbital evolution of small bodies under weak gravitational and non-gravitational forces, including resonances with large planets and effects of solar irradiation. In parallel with the celestial mechanics modeling we use the database of observed asteroids, converted into a planetary impact rate. The test of this technique is done by comparing Earth/Moon cratering rates, with an independent verification against observed terrestrial atmospheric bursts of bolides and fireballs. For small craters (D < 300 m) and young lunar surfaces (age < 100 Ma) the independent measurements of the lunar cratering rate and the modern terrestrial bolide/fireball flux match well, giving more confidence in the approach. However, for larger craters (300 m < D < 3 km) one should assume a porous-like scaling law for lunar craters to match the astronomically estimated impact rate. This fact demands a reconsideration of the Mars/Moon cratering rate ratio, as the porosity of the upper 1 km beneath the Martian surface may be quite different from the lunar one due to the larger Martian gravity and possible filling of pore space with ice/brine. The problem of the secondary-crater share among crater counts used for surface dating is analyzed via the size-frequency distributions (SFD) of secondary and primary craters. The
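The core counting idea can be illustrated with a minimal crater-count fit. Assuming, as a simplification, that the cumulative size-frequency distribution follows a power law N(>D) = a·D^(-b), a log-log least-squares fit recovers the slope, and the density ratio of two surfaces at a reference diameter is a proxy for their relative crater-retention ages. All counts below are invented for illustration:

```python
import math

# Hypothetical cumulative crater densities N(>D) per km^2 for two surfaces.
diameters = [0.5, 1.0, 2.0, 4.0]             # crater diameter D, km
n_old = [4.0e-2, 1.0e-2, 2.5e-3, 6.25e-4]    # older surface
n_young = [8.0e-3, 2.0e-3, 5.0e-4, 1.25e-4]  # younger surface

def powerlaw_fit(d, n):
    """Least-squares fit of log10 N = log10 a - b log10 D; returns (b, a)."""
    x = [math.log10(v) for v in d]
    y = [math.log10(v) for v in n]
    k = len(x)
    xbar, ybar = sum(x) / k, sum(y) / k
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
    return -slope, 10 ** (ybar - slope * xbar)

b_old, a_old = powerlaw_fit(diameters, n_old)
b_young, a_young = powerlaw_fit(diameters, n_young)
# With the same production SFD and impact rate, the density ratio at a
# reference diameter tracks the ratio of accumulated exposure time.
ratio = a_old / a_young
print(b_old, b_young, ratio)
```

The real technique layers a calibrated, time-varying production function on top of this; secondary craters bias the fit by steepening the apparent SFD slope at small diameters.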
Site characterization: a spatial estimation approach
Candy, J.V.; Mao, N.
1980-10-01
In this report the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically, in a simplified fashion, using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and also used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
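A minimal sketch of the estimator's core computation, ordinary kriging with an assumed exponential covariance model (pure Python; the covariance range and the data points are illustrative, not from the report):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(pts, vals, target, range_=2.0):
    """Predict at `target` from observed (x, y) -> value pairs."""
    cov = lambda p, q: math.exp(-math.dist(p, q) / range_)
    n = len(pts)
    # Ordinary kriging system: [C 1; 1' 0] [w; mu] = [c0; 1]
    A = [[cov(pts[i], pts[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(p, target) for p in pts] + [1.0]
    sol = solve(A, b)
    w = sol[:n]
    return sum(wi * zi for wi, zi in zip(w, vals)), w

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 12.0, 14.0]   # e.g. hydraulic head observations
est, w = ordinary_kriging(pts, vals, (0.5, 0.5))
print(est, sum(w))
```

The weights sum to one (unbiasedness constraint), and with no nugget effect the predictor interpolates the data exactly; contour maps are produced by evaluating the predictor on a grid of targets.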
ESTIMATING THE DISTRIBUTION OF HARVESTED ...
Habitat suitability models are used to forecast how environmental change may affect the abundance or distribution of species of interest. The development of habitat suitability models may be used to estimate the vulnerability of this valued ecosystem good to natural or anthropogenic stressors. Using natural history information, rule-based habitat suitability models were constructed in a GIS for two recreationally harvested bivalve species (cockles Clinocardium nuttallii; softshells Mya arenaria) common to NE Pacific estuaries (N. California to British Columbia). Tolerance limits of each species, determined through literature review, were evaluated with respect to four parameters that are easy to sample: salinity, depth, sediment grain size, and the presence of bioturbating burrowing shrimp. Spatially explicit habitat maps were produced for Yaquina and Tillamook estuaries (Oregon) using environmental data from multiple studies ranging from 1960 to 2012. Suitability of a given location was ranked on a scale of 1-4 (lowest to highest) depending on the number of variables that fell within a bivalve’s tolerance limits. The models were tested by comparing the distribution of each suitability class to the observed distribution of bivalves reported in benthic community studies (1996-2012). Results showed that the areas of highest habitat suitability (value=4) within our model contained the greatest proportion of bivalve observations and highest popula
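The rule-based ranking is straightforward to sketch. The tolerance limits and the exact 1-4 mapping below are illustrative assumptions, not the study's actual values; the structure (count of variables within tolerance) follows the abstract:

```python
# Illustrative tolerance limits (NOT the study's actual values).
TOLERANCES = {
    "salinity_psu": (20.0, 35.0),
    "depth_m": (-2.0, 1.0),        # e.g. relative to mean lower low water
    "grain_size_mm": (0.05, 0.5),
}

def suitability(cell):
    """Rank 1 (lowest) to 4 (highest): the number of the four sampled
    variables that fall within the bivalve's assumed tolerance limits,
    floored at 1 so the scale stays 1-4 (mapping assumed)."""
    score = sum(lo <= cell[k] <= hi for k, (lo, hi) in TOLERANCES.items())
    score += 0 if cell["burrowing_shrimp"] else 1  # shrimp presence unsuitable
    return max(score, 1)

good = {"salinity_psu": 28.0, "depth_m": 0.3,
        "grain_size_mm": 0.2, "burrowing_shrimp": False}
poor = {"salinity_psu": 5.0, "depth_m": 4.0,
        "grain_size_mm": 2.0, "burrowing_shrimp": True}
print(suitability(good), suitability(poor))
```

In a GIS this function would be applied cell by cell to the rasterized environmental layers to produce the habitat map.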
Hydroacoustic estimates of fish abundance
Wilson, W.K.
1991-03-01
Hydroacoustics, as defined in the context of this report, is the use of a scientific sonar system to determine fish densities with respect to numbers and biomass. These two parameters provide a method of monitoring reservoir fish populations and detecting gross changes in the ecosystem. With respect to southeastern reservoirs, hydroacoustic surveys represent a new method of sampling open water areas and the best technology available. The advantages of this technology are that large amounts of data can be collected in a relatively short period of time, allowing improved statistical interpretation and data comparison; that the pelagic (open-water) zone can be sampled efficiently regardless of depth; and that sampling is nondestructive and noninvasive, with neither injury to the fish nor alteration of the environment. Hydroacoustics cannot provide species identification or related information on species composition or length/weight relationships. Also, sampling is limited to a minimum depth of ten feet, which precludes the use of this equipment for sampling shallow shoreline areas. The objective of this study is to use hydroacoustic techniques to estimate fish standing stocks (i.e., numbers and biomass) in several areas of selected Tennessee Valley Reservoirs as part of a base-level monitoring program to assess long-term changes in reservoir water quality.
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
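The Gaussian/Cauchy offspring-generation step can be sketched compactly. The even/odd alternation schedule, step scales, and niche seed below are assumptions for illustration; the point is that the heavy-tailed Cauchy occasionally produces large exploratory jumps while the Gaussian keeps offspring near the niche seed:

```python
import math
import random

def sample_offspring(seed, sigma, generation):
    """Generate one offspring around a niche seed, alternating between a
    Gaussian step (exploitation) and a heavy-tailed Cauchy step (exploration).
    The alternation schedule is an illustrative assumption."""
    if generation % 2 == 0:
        step = [random.gauss(0.0, s) for s in sigma]
    else:
        # Standard Cauchy sample via the inverse CDF, scaled per dimension.
        step = [s * math.tan(math.pi * (random.random() - 0.5)) for s in sigma]
    return [x + dx for x, dx in zip(seed, step)]

random.seed(1)
seed_point = [0.0, 0.0]               # seed (best solution) of one niche
offspring = [sample_offspring(seed_point, [0.1, 0.1], g) for g in range(6)]
print(offspring)
```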
Quantum rewinding via phase estimation
NASA Astrophysics Data System (ADS)
Tabia, Gelo Noel
2015-03-01
In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to show the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewound to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input, but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique in the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.
Bayesian estimation of dose thresholds
NASA Technical Reports Server (NTRS)
Groer, P. G.; Carnes, B. A.
2003-01-01
An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull-based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival times, censoring times, and cause-of-death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
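The grid-based numerical integration used for a marginal posterior can be sketched with a much simpler model. Below, a hinge ("threshold") dose-response with binomial mortality stands in for the paper's Weibull relative-risk survival model; the data, background rate, and slope are invented, and the prior on the threshold is flat:

```python
import math

# Toy data: (dose in Gy, deaths, animals at risk). Values are illustrative.
data = [(0.0, 10, 100), (0.5, 11, 100), (1.0, 25, 100), (1.5, 40, 100)]
P0, SLOPE = 0.10, 0.30   # assumed background mortality and excess slope

def loglik(t):
    """Binomial log-likelihood for the hinge model p(d) = p0 + slope*max(d-t, 0)."""
    ll = 0.0
    for d, deaths, n in data:
        p = min(max(P0 + SLOPE * max(d - t, 0.0), 1e-9), 1.0 - 1e-9)
        ll += deaths * math.log(p) + (n - deaths) * math.log(1.0 - p)
    return ll

grid = [i / 100.0 for i in range(151)]       # candidate thresholds, 0-1.5 Gy
lls = [loglik(t) for t in grid]
m = max(lls)                                  # subtract max for stability
weights = [math.exp(ll - m) for ll in lls]
total = sum(weights)
posterior = [w / total for w in weights]      # flat prior -> just normalize
mode = grid[posterior.index(max(posterior))]
print(f"posterior mode of the threshold: {mode:.2f} Gy")
```

With more parameters, the same normalization is done by integrating the joint posterior over the nuisance parameters, which is the marginalization step the abstract refers to.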
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert, Jr.
1999-01-01
In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.
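The trend statement at the end can be reproduced mechanically with an ordinary least-squares fit. The series below is synthetic (a known 0.13 K/decade trend plus alternating noise), not MSU data, and note that the paper's quoted 0.06 K/decade uncertainty comes from its orbital-drift error analysis, not from the regression standard error alone:

```python
import math

def trend_per_decade(years, temps):
    """OLS slope (K/decade) and its standard error for an annual series."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(temps) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx
    resid = [y - ybar - slope * (x - xbar) for x, y in zip(years, temps)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope * 10, se * 10   # convert per year -> per decade

# Synthetic anomaly series with a known 0.13 K/decade trend (illustrative).
years = list(range(1980, 1998))
temps = [0.013 * (y - 1980) + 0.02 * ((-1) ** y) for y in years]
slope, se = trend_per_decade(years, temps)
print(f"{slope:.3f} +/- {se:.3f} K/decade")
```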
Local Estimators for Spacecraft Formation Flying
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh
2011-01-01
A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained from a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimate for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated into the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.
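The agreement-type filtering step can be illustrated with a plain consensus iteration: each node repeatedly nudges its local estimate toward its neighbors' values, and on a connected undirected graph the estimates converge to the network average. The ring topology, gain, and initial estimates below are illustrative assumptions, not the paper's architecture:

```python
# Each spacecraft holds a local estimate of one shared state component;
# neighbors exchange values and a consensus iteration drives all estimates
# toward the network average without any centralized fusion node.
estimates = [10.0, 12.0, 11.0, 13.0]                       # local estimates
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring topology
eps = 0.2                                                  # gain (< 1/deg_max)

for _ in range(200):
    estimates = [
        x + eps * sum(estimates[j] - x for j in neighbors[i])
        for i, x in enumerate(estimates)
    ]

print(estimates)   # all entries approach the average, 11.5
```

In the full architecture each node runs a local Kalman filter on its own measurements, and iterations like this one diffuse the local estimates across the exchange network between measurement updates.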
Weighted conditional least-squares estimation
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, better-known estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.
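A minimal two-stage illustration of the idea (not the branching-process example in the abstract): a regression through the origin whose conditional variance grows with x. Stage one is ordinary (unweighted) conditional least squares; its squared residuals estimate the conditional variance, whose inverse supplies the stage-two weights. All model values are assumed for the sketch:

```python
import random

random.seed(7)
# Heteroscedastic model: y = a*x + e with Var(e | x) = s2 * x (a, s2 assumed).
a_true = 2.0
xs = [random.uniform(1.0, 10.0) for _ in range(2000)]
ys = [a_true * x + random.gauss(0.0, (0.5 * x) ** 0.5) for x in xs]

def wls(xs, ys, ws):
    """Weighted least squares through the origin."""
    return sum(w * x * y for w, x, y in zip(ws, xs, ys)) \
         / sum(w * x * x for w, x in zip(ws, xs))

# Stage 1: ordinary conditional least squares (unit weights).
a1 = wls(xs, ys, [1.0] * len(xs))
# Estimate the conditional variance model from squared residuals:
# regress e^2 on x through the origin to get Var(e|x) ~ s2_hat * x.
res2 = [(y - a1 * x) ** 2 for x, y in zip(xs, ys)]
s2_hat = sum(r * x for r, x in zip(res2, xs)) / sum(x * x for x in xs)
# Stage 2: reweight by the inverse estimated conditional variance.
a2 = wls(xs, ys, [1.0 / (s2_hat * x) for x in xs])
print(a1, a2)
```

Both stages are consistent here; the payoff of the second stage is efficiency, which is the sense in which the procedure recovers estimated generalized least squares.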
Numerical Estimation in Deaf and Hearing Adults.
Bull, Rebecca; Marschark, Marc; Sapere, Patty; Davidson, Wendy A; Murphy, Derek; Nordmann, Emily
2011-08-01
Deaf students often lag behind hearing peers in numerical and mathematical abilities. Studies of hearing children with mathematical difficulties highlight the importance of estimation skills as the foundation for formal mathematical abilities, but research with adults is limited. Deaf and hearing college students were assessed on the Number-to-Position task as a measure of estimation, and completed standardised assessments of arithmetical and mathematical reasoning. Deaf students performed significantly more poorly on all measures, including making less accurate number-line estimates. For deaf students, there was also a strong relationship showing that those more accurate in making number-line estimates achieved higher scores on the math achievement tests. No such relationship was apparent for hearing students. Further investigation of the estimation abilities of deaf individuals is needed, including tasks that require symbolic and non-symbolic estimation and that address the quality of the estimation strategies being used.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
A comparison of marginal odds ratio estimators.
Loux, Travis M; Drake, Christiana; Smith-Gagen, Julie
2017-02-01
Uses of the propensity score to obtain estimates of causal effect have been investigated thoroughly under assumptions of linearity and additivity of exposure effect. When the outcome variable is binary, relationships such as collapsibility, valid for the linear model, do not always hold. This article examines uses of the propensity score when both exposure and outcome are binary variables and the parameter of interest is the marginal odds ratio. We review stratification and matching by the propensity score when calculating the Mantel-Haenszel estimator and show that it is consistent for neither the marginal nor the conditional odds ratio. We also investigate a marginal odds ratio estimator based on doubly robust estimators and summarize its performance relative to other recently proposed estimators under various conditions, including low exposure prevalence and model misspecification. Finally, we apply all estimators to a case study estimating the effect of Medicare plan type on the quality of care received by African-American breast cancer patients.
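Noncollapsibility of the odds ratio is easy to demonstrate with made-up counts: two strata with identical conditional OR and balanced exposure (so no confounding) still yield a different marginal OR when collapsed. A sketch:

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: exposed events a, exposed non-events b,
    unexposed events c, unexposed non-events d."""
    return (a * d) / (b * c)

# Two strata, each with conditional OR = 9 and 50/50 exposure (no
# confounding); counts are constructed purely for illustration.
strata = [(90, 10, 50, 50), (50, 50, 10, 90)]

# Mantel-Haenszel estimator: sum(a*d/n) / sum(b*c/n) over strata.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den

# Marginal OR from the collapsed (pooled) table.
A, B, C, D = (sum(t[i] for t in strata) for i in range(4))
or_marginal = odds_ratio(A, B, C, D)
print(or_mh, or_marginal)
```

Here Mantel-Haenszel recovers the common conditional OR (9) while the collapsed-table marginal OR is about 5.44. The article's sharper point is that under propensity-score stratification or matching, the MH estimator can be consistent for neither target.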
Budget estimates. Fiscal year 1998
1997-02-01
The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is the NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000 shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and the NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury.
Reliability Estimates for Power Supplies
Lee C. Cadwallader; Peter I. Petersen
2005-09-01
Failure rates for large power supplies at a fusion facility are critical knowledge needed to estimate availability of the facility or to set priorities for repairs and spare components. A study of the "failure to operate on demand" and "failure to continue to operate" failure rates has been performed for the large power supplies at DIII-D, which provide power to the magnet coils, the neutral beam injectors, the electron cyclotron heating systems, and the fast wave systems. When one of the power supplies fails to operate, the research program has to be either temporarily changed or halted. If one of the power supplies for the toroidal or ohmic heating coils fails, the operations have to be suspended or the research is continued at de-rated parameters until a repair is completed. If one of the power supplies used in the auxiliary plasma heating systems fails, the research is often temporarily changed until a repair is completed. The power supplies are operated remotely and repairs are only performed when the power supplies are off line, so that failure of a power supply does not cause any risk to personnel. The DIII-D Trouble Report database was used to determine the number of power supply faults (over 1,700 reports), and tokamak annual operations data supplied the number of shots, operating times, and power supply usage for the DIII-D operating campaigns between mid-1987 and 2004. Where possible, these power supply failure rates from DIII-D will be compared to similar work that has been performed for the Joint European Torus equipment. These independent data sets support validation of the fusion-specific failure rate values.
The Sherpa Maximum Likelihood Estimator
NASA Astrophysics Data System (ADS)
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
75 FR 44 - Temporary Suspension of the Population Estimates and Income Estimates Challenge Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-04
... 0908171239-91412-02] RIN 0607-AA49 Temporary Suspension of the Population Estimates and Income Estimates... suspend the Population Estimates Challenge Program during both the decennial census year and the following... Procedure for Challenging Certain Population and Income Estimates) to accommodate the taking of the...
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
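The two hazard functions named above have simple point-estimate forms under the Poisson and Gutenberg-Richter assumptions; the paper's contribution is propagating the uncertainty of the activity rate and of the magnitude CDF into interval estimates of these quantities. A point-estimate sketch (parameters illustrative):

```python
import math

def hazard(rate, b, m_min, m, t_years):
    """Exceedance probability and mean return period for magnitude >= m,
    assuming Poisson occurrence (rate = mean activity rate above m_min)
    and an unbounded Gutenberg-Richter magnitude distribution."""
    p_mag = 10.0 ** (-b * (m - m_min))   # P(M >= m | an event occurs)
    lam = rate * p_mag                   # rate of events with M >= m
    return_period = 1.0 / lam            # mean return period, years
    p_exceed = 1.0 - math.exp(-lam * t_years)
    return p_exceed, return_period

# Illustrative parameters: 2 events/yr above M3 with b-value 1.
p, T = hazard(rate=2.0, b=1.0, m_min=3.0, m=5.0, t_years=50.0)
print(f"P(M>=5 in 50 yr) = {p:.3f}, mean return period = {T:.0f} yr")
```

The interval estimation in the paper replaces the plug-in rate and magnitude-CDF values with their sampling distributions (asymptotic normal or bootstrap) and carries those intervals through these same two formulas.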
Fast, Continuous Audiogram Estimation using Machine Learning
Song, Xinyu D.; Wallace, Brittany M.; Gardner, Jacob R.; Ledbetter, Noah M.; Weinberger, Kilian Q.; Barbour, Dennis L.
2016-01-01
Objectives Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study is to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. Design The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and 1 repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). Results The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably to those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. Conclusions The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry. PMID
Estimating equations for biomarker based exposure estimation under non-steady-state conditions.
Bartell, Scott M; Johnson, Wesley O
2011-06-13
Unrealistic steady-state assumptions are often used to estimate toxicant exposure rates from biomarkers. A biomarker may instead be modeled as a weighted sum of historical time-varying exposures. Estimating equations are derived for a zero-inflated gamma distribution for daily exposures with a known exposure frequency. Simulation studies suggest that the estimating equations can provide accurate estimates of exposure magnitude at any reasonable sample size, and reasonable estimates of the exposure variance at larger sample sizes.
Quantiles, Parametric-Select Density Estimations, and Bi-Information Parameter Estimators.
1982-06-01
A non-parametric estimation method forms estimators which are not based on parametric models. Important examples of non-parametric estimators of a...raw descriptive functions F, f, Q, q, fQ. One distinguishes between parametric and non-parametric methods of estimating smooth functions. A parametric estimation method: (1) assumes a family Fθ, fθ, Qθ, qθ, fθQθ of functions, called parametric models, which are indexed by a parameter θ = (
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = X β + z, where X = Xn,p and z ~ N(0, In ). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
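The sparsification step that CASE relies on can be illustrated numerically. This is a sketch under assumed dimensions, not the authors' code: for the change-point design (a lower-triangular matrix of ones), a first-order difference filter turns the completely dense Gram matrix into a sparse one.

```python
import numpy as np

n = 50
# Change-point design: X is lower-triangular ones, so every entry of the
# Gram matrix G = X'X is nonzero (G[i, j] = n - max(i, j)).
X = np.tril(np.ones((n, n)))
G = X.T @ X
dense_frac = np.mean(np.abs(G) > 1e-12)   # fraction of nonzero entries: 1.0

# First-order difference filter D: (Dy)[i] = y[i] - y[i-1], first row kept.
D = np.eye(n) - np.eye(n, k=-1)
Xf = D @ X           # filtered design; for this design D @ X is the identity
Gf = Xf.T @ Xf       # filtered Gram matrix is sparse (diagonal)
sparse_frac = np.mean(np.abs(Gf) > 1e-12)
```

Applying the same filter to Y keeps the regression model valid (the filtered noise is no longer white, which is part of what the paper's patching step addresses).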
Crop acreage estimation using a Landsat-based estimator as an auxiliary variable
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Houston, A. G.; Lundgren, J. C.
1986-01-01
The problem of improving upon the ground survey estimates of crop acreages by utilizing Landsat data is addressed. Three estimators, called regression, ratio, and stratified ratio, are studied for bias and variance, and their relative efficiencies are compared. The approach is to formulate analytically the estimation problem that utilizes ground survey data, as collected by the U.S. Department of Agriculture, and Landsat data, which provide complete coverage for an area of interest, and then to conduct simulation studies. It is shown over a wide range of parametric conditions that the regression estimator is the most efficient unless there is a low correlation between the actual and estimated crop acreages in the sampled area segments, in which case the ratio and stratified ratio estimators are better. Furthermore, it is seen that the regression estimator is potentially biased due to estimating the regression coefficient from the training sample segments. Estimation of the variance of the regression estimator is also investigated. Two variance estimators are considered, the large sample variance estimator and an alternative estimator suggested by Cochran. The large sample estimate of variance is found to be biased and inferior to the Cochran estimate for small sample sizes.
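The regression and ratio estimators compared in this study have standard survey-sampling forms: with the Landsat estimate as auxiliary variable x (whose population mean is known from full coverage) and ground-truth acreage y observed only on sampled segments, the regression estimator is ȳ + b(X̄ − x̄) and the ratio estimator is ȳ·X̄/x̄. A minimal numerical sketch, with synthetic segment data and sample sizes assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 1000, 40
landsat = rng.gamma(5.0, 20.0, size=N)         # Landsat acreage estimate, all N segments
ground = 0.9 * landsat + rng.normal(0, 8, N)   # true acreage, correlated with Landsat

idx = rng.choice(N, size=n, replace=False)     # ground-surveyed sample segments
y, x = ground[idx], landsat[idx]
X_bar = landsat.mean()                         # known: Landsat covers the whole area

b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # estimated regression coefficient
reg_est = y.mean() + b * (X_bar - x.mean())          # regression estimator
ratio_est = y.mean() * X_bar / x.mean()              # ratio estimator
```

As the abstract notes, estimating b from the sample is what introduces the small bias of the regression estimator.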
Power system operations: State estimation distributed processing
NASA Astrophysics Data System (ADS)
Ebrahimian, Mohammad Reza
We present an application of a robust and fast parallel algorithm, based on the Auxiliary Problem Principle, to power system state estimation that requires minimal modifications to existing state estimators presently in place. We demonstrate its effectiveness on IEEE test systems, the Electric Reliability Council of Texas (ERCOT), and the Southwest Power Pool (SPP) systems. Since the state estimation formulation may lead to an ill-conditioned system, we provide analytical explanations of the effects of mixtures of measurements on the condition of the state estimation information matrix, and we demonstrate the closeness of the analytical equations to the observed condition of several test systems, including the IEEE RTS-96 and IEEE 118-bus systems. The research on the conditioning of the state estimation problem covers centralized as well as distributed state estimation.
Correlation estimation with singly truncated bivariate data.
Im, Jongho; Ahn, Eunyong; Beck, Namseon; Kim, Jae Kwang; Park, Taesung
2017-02-27
Correlation coefficient estimates are often attenuated for truncated samples in the sense that the estimates are biased towards zero. Motivated by real data collected in South Sudan, we consider correlation coefficient estimation with singly truncated bivariate data. By considering a linear regression model in which a truncated variable is used as an explanatory variable, a consistent estimator for the regression slope can be obtained from the ordinary least squares method. A consistent estimator of the correlation coefficient is then obtained by multiplying the regression slope estimator by the variance ratio of the two variables. Results from two limited simulation studies confirm the validity and robustness of the proposed method. The proposed method is applied to the South Sudanese children's anthropometric and nutritional data collected by World Vision. Copyright © 2017 John Wiley & Sons, Ltd.
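The identity behind the proposed estimator, that the correlation coefficient equals the y-on-x regression slope multiplied by the ratio of the standard deviations of the two variables, can be checked numerically. This sketch uses complete, untruncated data; the paper's contribution is showing which pieces of this identity remain consistently estimable when x is truncated.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=0.8, size=500)

# OLS slope of y on x: b = cov(x, y) / var(x)
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# correlation recovered from the slope and the two standard deviations
r_via_slope = b * np.std(x, ddof=1) / np.std(y, ddof=1)
r_direct = np.corrcoef(x, y)[0, 1]
```

The two quantities agree exactly, since r = cov/(s_x s_y) and b = cov/s_x².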
Illuminant spectrum estimation at a pixel.
Ratnasingam, Sivalogeswaran; Hernández-Andrés, Javier
2011-04-01
In this paper, an algorithm is proposed to estimate the spectral power distribution of a light source at a pixel. The first step of the algorithm is forming a two-dimensional illuminant invariant chromaticity space. In estimating the illuminant spectrum, generalized inverse estimation and Wiener estimation methods were applied. The chromaticity space was divided into small grids and a weight matrix was used to estimate the illuminant spectrum illuminating the pixels that fall within a grid. The algorithm was tested using a different number of sensor responses to determine the optimum number of sensors for accurate colorimetric and spectral reproduction. To investigate the performance of the algorithm realistically, the responses were multiplied with Gaussian noise and then quantized to 10 bits. The algorithm was tested with standard and measured data. Based on the results presented, the algorithm can be used with six sensors to obtain a colorimetrically good estimate of the illuminant spectrum at a pixel.
Standardization in software conversion of (ROM) estimating
NASA Technical Reports Server (NTRS)
Roat, G. H.
1984-01-01
Technical problems and their solutions comprise by far the majority of work involved in space simulation engineering. Fixed price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to maintain costs within limits and to predict realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now computerized estimating has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desk top computer capable of providing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number which can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.
Notes on a New Coherence Estimator
Bickel, Douglas L.
2016-01-01
This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including presenting the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to the knowledge of the noise value. ACKNOWLEDGEMENTS: The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
An estimate of global absolute dynamic topography
NASA Technical Reports Server (NTRS)
Tai, C.-K.; Wunsch, C.
1984-01-01
The absolute dynamic topography of the world ocean is estimated from the largest scales to a short-wavelength cutoff of about 6700 km for the period July through September, 1978. The data base consisted of the time-averaged sea-surface topography determined by Seasat and geoid estimates made at the Goddard Space Flight Center. The issues are those of accuracy and resolution. Use of the altimetric surface as a geoid estimate beyond the short-wavelength cutoff reduces the spectral leakage in the estimated dynamic topography from erroneous small-scale geoid estimates without contaminating the low wavenumbers. Comparison of the result with a similarly filtered version of Levitus' (1982) historical average dynamic topography shows good qualitative agreement. There is quantitative disagreement, but it is within the estimated errors of both methods of calculation.
Estimation of Location Difference for Fragmentary Samples.
1980-12-01
received a great deal of attention in recent statistical literature (Wilks, 1932; Anderson, 1957; Hocking and Smith, 1968; Mehta and Gurland, 1969)...to use the fragmentary sample in the most efficient way to estimate the shift parameter δ. Gupta and Rohatgi (1981) considered the case that X and Y...from the incomplete pairs is a Hodges-Lehmann estimator of δ based on the Wilcoxon signed rank statistic. The estimator e(1,0) which uses all the data
On estimating the Venus spin vector
NASA Technical Reports Server (NTRS)
Argentiero, P. D.
1972-01-01
The improvement in spin vector and probe position estimates one may reasonably expect from the processing of such data is indicated. This was done by duplicating the ensemble calculations associated with a weighted least squares with a priori estimation technique applied to range rate data that were assumed to be unbiased and uncorrelated. The weighting matrix was assumed to be the inverse of the covariance matrix of the noise on the data. Attention is focused primarily on the spin vector estimation.
A Physical Model for Estimating Body Fat
1976-11-01
A Physical Model for Estimating Body Fat (AD-A034 111), School of Aerospace Medicine; interim report, May 1972-May 1976. ...human subjects. The fat mass of seven body compartments is estimated and summed to obtain an estimate of the total body fat. Measurements were made
Antenna Axis Offset Estimation from VLBI
NASA Technical Reports Server (NTRS)
Kurdubov, Sergey; Skurikhina, Elena
2010-01-01
The antenna axis offsets were estimated from global solutions and single sessions. We have built a set of global solutions from R1 and R4 sessions and from the sets of sessions between SVETLOE repairs. We compared our estimates with local survey data for the stations of the QUASAR network. Svetloe station axis offset values have changed after repairs. For non-global networks, the axis offset value of a single station can significantly affect the EOP estimations.
State energy data report 1994: Consumption estimates
1996-10-01
This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.
Robust, Adaptive Radar Detection and Estimation
2015-07-21
Robust, Adaptive Radar Detection and Estimation (AFRL-OSR-VA-TR-2015-0208), Vishal Monga, Pennsylvania State University; final report, 07/21/2015; grant FA9550-12-1-0333. ...we develop robust estimators that can adapt to imperfect knowledge of physical constraints using an expected likelihood (EL) approach. We analyze
Communications availability: Estimation studies at AMSC
NASA Technical Reports Server (NTRS)
Sigler, C. Edward, Jr.
1994-01-01
The results of L-band communications availability work performed to date are presented. Results include an L-band communications availability estimate model and field propagation trials using an INMARSAT-M terminal. American Mobile Satellite Corporation's (AMSC's) primary concern centers on the availability of intelligible voice communications, with secondary concerns for circuit-switched data and fax. The model estimates for representative terrain/vegetation areas are applied to the contiguous U.S. for overall L-band communications availability estimates.
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Least Squares Estimation Without Priors or Supervision
Raphan, Martin; Simoncelli, Eero P.
2011-01-01
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive gaussian noise case. This error expression may then be optimized over noisy measurement samples, in the absence of supervised training data, yielding a generalized SURE-optimized parametric least squares (SURE2PLS) estimator. In the special case of a linear parameterization (i.e., a sum of nonlinear kernel functions), the objective function is quadratic, and we derive an incremental form for learning this estimator from data. We also show that combining the NEBLS form with its corresponding generalized SURE expression produces a generalization of the score-matching procedure for parametric density estimation. Finally, we have implemented several examples of such estimators, and we show that their performance is comparable to their optimal Bayesian or supervised regression counterparts for moderate to large amounts of data. PMID:21105827
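Stein's unbiased risk estimator (SURE), which this work generalizes beyond the additive Gaussian case, can be illustrated for that base case with a soft-thresholding estimator. This is a sketch: the noise level, threshold, and sparse signal below are assumptions chosen for the demo, and the SURE formula used is the classical Gaussian one, not the paper's generalization.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, lam = 10000, 1.0, 1.5
theta = np.zeros(n)
theta[:500] = 4.0                      # sparse true means
y = theta + rng.normal(scale=sigma, size=n)

def soft(y, lam):
    """Soft-thresholding estimator of the mean vector."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

est = soft(y, lam)

# SURE for soft thresholding under y ~ N(theta, sigma^2 I):
#   -n*sigma^2 + sum(min(|y|, lam)^2) + 2*sigma^2 * #{|y| > lam}
sure = (-n * sigma**2
        + np.sum(np.minimum(np.abs(y), lam)**2)
        + 2 * sigma**2 * np.sum(np.abs(y) > lam))

true_mse = np.sum((est - theta)**2)    # unobservable in practice
```

SURE depends only on the observed y, yet tracks the true squared error closely; minimizing it over lam gives a data-driven threshold without supervision, which is the idea the paper extends.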
Almost efficient estimation of relative risk regression
Fitzmaurice, Garrett M.; Lipsitz, Stuart R.; Arriaga, Alex; Sinha, Debajyoti; Greenberg, Caprice; Gawande, Atul A.
2014-01-01
Relative risks (RRs) are often considered the preferred measures of association in prospective studies, especially when the binary outcome of interest is common. In particular, many researchers regard RRs to be more intuitively interpretable than odds ratios. Although RR regression is a special case of generalized linear models, specifically with a log link function for the binomial (or Bernoulli) outcome, the resulting log-binomial regression does not respect the natural parameter constraints. Because log-binomial regression does not ensure that predicted probabilities are mapped to the [0,1] range, maximum likelihood (ML) estimation is often subject to numerical instability that leads to convergence problems. To circumvent these problems, a number of alternative approaches for estimating RR regression parameters have been proposed. One approach that has been widely studied is the use of Poisson regression estimating equations. The estimating equations for Poisson regression yield consistent, albeit inefficient, estimators of the RR regression parameters. We consider the relative efficiency of the Poisson regression estimator and develop an alternative, almost efficient estimator for the RR regression parameters. The proposed method uses near-optimal weights based on a Maclaurin series (Taylor series expanded around zero) approximation to the true Bernoulli or binomial weight function. This yields an almost efficient estimator while avoiding convergence problems. We examine the asymptotic relative efficiency of the proposed estimator for an increase in the number of terms in the series. Using simulations, we demonstrate the potential for convergence problems with standard ML estimation of the log-binomial regression model and illustrate how this is overcome using the proposed estimator. We apply the proposed estimator to a study of predictors of pre-operative use of beta blockers among patients undergoing colorectal surgery after diagnosis of colon cancer. PMID
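A minimal sketch of the Poisson estimating-equations approach discussed above (not the authors' almost-efficient Maclaurin-series weighting): for a single binary exposure, solving the Poisson score equations on binary outcome data reproduces the ratio of group proportions, i.e., a consistent RR estimate. All data and parameter values below are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.integers(0, 2, size=n)              # binary exposure
p = np.exp(-2.0 + np.log(2.0) * x)          # log-binomial model: true RR = 2
y = rng.binomial(1, p)

# Newton/IRLS iterations for the Poisson score equations X'(y - exp(X b)) = 0
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

rr_poisson = np.exp(beta[1])
# For one binary covariate the solution has a closed form: the risk ratio
rr_direct = y[x == 1].mean() / y[x == 0].mean()
```

Unlike log-binomial maximum likelihood, these iterations have no boundary constraint to violate, which is why the Poisson approach avoids the convergence problems described above (at some cost in efficiency).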
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2012-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Population estimates of North American shorebirds, 2006
Morrison, R.I. Guy; McCaffery, Brian J.; Gill, Robert E.; Skagen, Susan K.; Jones, Stephanie L.; Page, Gary W.; Gratto-Trevor, Cheri L.; Andres, Brad A.
2006-01-01
This paper provides updates on population estimates for 52 species of shorebirds, involving 75 taxa, occurring in North America. New information resulting in a changed estimate is available for 39 of the 75 taxa (52%), involving 24 increases and 15 decreases. The preponderance of increased estimates is likely the result of improved estimates rather than actual increases in numbers. Many shorebird species/taxa are considered to be declining: current information on trends indicates negative trends outnumbered increasing trends by 42 to 2, with unknown or stable trends for 31 taxa.
Improved diagnostic model for estimating wind energy
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
A new algorithm for estimating gillnet selectivity
NASA Astrophysics Data System (ADS)
Tang, Yanli; Huang, Liuyi; Ge, Changzi; Liang, Zhenlin; Sun, Peng
2010-03-01
The estimation of gear selectivity is a critical issue in fishery stock assessment and management. Several methods have been developed for estimating gillnet selectivity, but they all have their limitations, such as inappropriate objective function in data fitting, lack of unique estimates due to the difficulty in finding global minima in minimization, biased estimates due to outliers, and estimations of selectivity being influenced by the predetermined selectivity functions. In this study, we develop a new algorithm that can overcome the above-mentioned problems in estimating the gillnet selectivity. The proposed algorithms include minimizing the sum of squared vertical distances between two adjacent points and minimizing the weighted sum of squared vertical distances between two adjacent points in the presence of outliers. According to the estimated gillnet selectivity curve, the selectivity function can also be determined. This study suggests that the proposed algorithm is not sensitive to outliers in selectivity data and improves on the previous methods in estimating gillnet selectivity and relative population density of fish when a gillnet is used as a sampling tool. We suggest the proposed approach be used in estimating gillnet selectivity.
Array algebra estimation in signal processing
NASA Astrophysics Data System (ADS)
Rauhala, U. A.
A general theory of linear estimators called array algebra estimation is interpreted in terms of multidimensional digital signal processing, mathematical statistics, and numerical analysis. The theory has emerged during the past decade from the new field of a unified vector, matrix and tensor algebra called array algebra. The broad concepts of array algebra and its estimation theory cover several modern computerized sciences and technologies, converting their established notations and terminology into one common language. Some concepts of digital signal processing are adopted into this language after a review of the principles of array algebra estimation and its predecessors in mathematical surveying sciences.
A posteriori error estimates for Maxwell equations
NASA Astrophysics Data System (ADS)
Schoeberl, Joachim
2008-06-01
Maxwell equations are posed as variational boundary value problems in the function space H(curl) and are discretized by Nedelec finite elements. In Beck et al., 2000, a residual type a posteriori error estimator was proposed and analyzed under certain conditions on the domain. In the present paper, we prove the reliability of that error estimator on Lipschitz domains. The key is to establish new error estimates for the commuting quasi-interpolation operators recently introduced in J. Schoeberl, Commuting quasi-interpolation operators for mixed finite elements. Similar estimates are required for additive Schwarz preconditioning. To incorporate boundary conditions, we establish a new extension result.
Estimation and Accuracy after Model Selection
Efron, Bradley
2013-01-01
Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, …) is determined by the Cp criterion, and a Lasso-based estimation problem. PMID:25346558
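The bagging idea can be sketched as follows. This is a toy example, with a Cp-based choice of polynomial degree and the naive bootstrap spread of the smoothed estimate, not Efron's refined accuracy formula; the model, sample size, and evaluation point are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
x = np.linspace(-2, 2, n)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(scale=1.0, size=n)
x0 = 1.5                                    # point at which to estimate the regression

def cp_fit_predict(x, y, x0, max_deg=3):
    """Pick polynomial degree by a Cp-type criterion, then predict at x0."""
    res_full = y - np.polyval(np.polyfit(x, y, max_deg), x)
    s2 = res_full @ res_full / (len(x) - max_deg - 1)   # sigma^2 from largest model
    best, best_cp = None, np.inf
    for d in range(1, max_deg + 1):
        c = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(c, x))**2)
        cp = rss + 2 * (d + 1) * s2                      # Cp criterion (up to constants)
        if cp < best_cp:
            best, best_cp = c, cp
    return np.polyval(best, x0)

t_hat = cp_fit_predict(x, y, x0)            # selection-based (discontinuous) estimate

B = 200                                     # bootstrap smoothing ("bagging")
boot = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)
    boot[b] = cp_fit_predict(x[i], y[i], x0)
t_bagged = boot.mean()                      # smoothed estimate
sd_bagged = boot.std(ddof=1)                # naive bootstrap spread
```

Averaging over bootstrap replications smooths out the jumps caused by the degree choice flipping between resamples, which is exactly the discontinuity the paper's standard-error formula accounts for.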
How EIA Estimates Natural Gas Production
2004-01-01
The Energy Information Administration (EIA) publishes estimates monthly and annually of the production of natural gas in the United States. The estimates are based on data EIA collects from gas producing states and data collected by the U. S. Minerals Management Service (MMS) in the Department of Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies to generate monthly production estimates that are described in this document.
Early season spring small grains proportion estimation
NASA Technical Reports Server (NTRS)
Phinney, D. E.; Trichel, M. C.
1984-01-01
An accurate, automated method for estimating early season spring small grains from Landsat MSS data is discussed. The method is described, and the results of its application to 100 sample segment-years of data from the US Northern Great Plains in 1976, 1977, 1978, and 1979 are summarized. The results show that this estimator provides accurate estimates earlier in the growing season than previous methods. Ground truth is required only in the estimator development, and data storage, transmission, preprocessing, and processing requirements are minimal.
Fatality estimator user’s guide
Huso, Manuela M.; Som, Nicholas; Ladd, Lew
2012-12-11
Only carcasses judged to have been killed after the previous search should be included in the fatality data set submitted to this estimator software. This estimator already corrects for carcasses missed in previous searches, so carcasses judged to have been missed at least once should be considered “incidental” and not included in the fatality data set used to estimate fatality. Note: When observed carcass count is <5 (including 0 for species known to be at risk, but not observed), USGS Data Series 881 (http://pubs.usgs.gov/ds/0881/) is recommended for fatality estimation.
[Modern spectral estimation of ICP-AES].
Zhang, Z; Jia, Q; Liu, S; Guo, L; Chen, H; Zeng, X
2000-06-01
The inductively coupled plasma atomic emission spectrometry (ICP-AES) signal and its characteristics were discussed using modern spectral estimation techniques. The power spectral density (PSD) was calculated using the auto-regression (AR) model of modern spectral estimation. The Levinson-Durbin recursion was used to estimate the model parameters, which were then used for the PSD computation. The results obtained with actual ICP-AES spectra and measurements showed that the spectral estimation technique is helpful for a better understanding of spectral composition and signal characteristics.
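The AR-model/Levinson-Durbin pipeline mentioned above can be sketched generically (this is the textbook recursion applied to a simulated AR(2) series, not the paper's code; the test process and model order are assumptions):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations via the Levinson-Durbin recursion.
    r: autocorrelation sequence r[0..order]. Returns (a, e) for the AR model
    x[t] = a[0]*x[t-1] + ... + a[order-1]*x[t-order] + eps[t], Var(eps) = e."""
    a = np.zeros(order)
    e = r[0]
    for k in range(order):
        kappa = (r[k + 1] - a[:k] @ r[k:0:-1]) / e   # reflection coefficient
        a_prev = a[:k].copy()
        a[k] = kappa
        a[:k] = a_prev - kappa * a_prev[::-1]        # update lower-order coefficients
        e *= 1.0 - kappa**2                          # innovation variance
    return a, e

# Demo: recover AR(2) coefficients (0.75, -0.5) from a simulated series.
rng = np.random.default_rng(5)
N = 20000
x = np.zeros(N)
for t in range(2, N):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

r = np.array([x[:N - k] @ x[k:] / N for k in range(3)])  # sample autocorrelations
a, sigma2 = levinson_durbin(r, 2)

# AR power spectral density on a normalized-frequency grid [0, 0.5]
f = np.linspace(0.0, 0.5, 256)
denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(f, np.arange(1, 3))) @ a) ** 2
psd = sigma2 / denom
```

The PSD follows directly from the fitted AR parameters, which is what makes the AR approach attractive for short or noisy spectra.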
Quantum Enhanced Estimation of a Multidimensional Field.
Baumgratz, Tillmann; Datta, Animesh
2016-01-22
We present a framework for the quantum enhanced estimation of multiple parameters corresponding to noncommuting unitary generators. Our formalism provides a recipe for the simultaneous estimation of all three components of a magnetic field. We propose a probe state that surpasses the precision of estimating the three components individually, and we discuss measurements that come close to attaining the quantum limit. Our study also reveals that too much quantum entanglement may be detrimental to attaining the Heisenberg scaling in the estimation of unitarily generated parameters.
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
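For context, here is a sketch of the standard unconstrained least-squares baseline that the paper improves upon, assuming a rigid-body torque model tau = I*wdot + w x (I*w) and simulated test data (the paper's LMI-constrained formulation additionally requires a semidefinite-programming solver):

```python
import numpy as np

def inertia_from_params(theta):
    """Symmetric inertia matrix from its 6 free parameters."""
    ixx, iyy, izz, ixy, ixz, iyz = theta
    return np.array([[ixx, ixy, ixz],
                     [ixy, iyy, iyz],
                     [ixz, iyz, izz]])

def regressor(w, wdot):
    """3x6 matrix A with A @ theta = I(theta) @ wdot + w x (I(theta) @ w).
    Built column-by-column, exploiting linearity of the torque in theta."""
    cols = []
    for k in range(6):
        e = np.zeros(6)
        e[k] = 1.0
        I = inertia_from_params(e)
        cols.append(I @ wdot + np.cross(w, I @ w))
    return np.column_stack(cols)

# Simulated test data: known inertia, random motions, noisy torque measurements
rng = np.random.default_rng(6)
theta_true = np.array([10.0, 12.0, 8.0, 0.5, -0.3, 0.2])
I_true = inertia_from_params(theta_true)

A_rows, tau_rows = [], []
for _ in range(100):
    w, wdot = rng.normal(size=3), rng.normal(size=3)
    tau = I_true @ wdot + np.cross(w, I_true @ w) + rng.normal(scale=0.01, size=3)
    A_rows.append(regressor(w, wdot))
    tau_rows.append(tau)

A, tau = np.vstack(A_rows), np.hstack(tau_rows)
theta_hat, *_ = np.linalg.lstsq(A, tau, rcond=None)   # unconstrained estimate
```

Nothing in this baseline forces the estimate to be physically realizable (e.g., positive definite with valid triangle inequalities on the principal moments), which is precisely what the paper's LMI constraints add.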
Cosmochemical Estimates of Mantle Composition
NASA Astrophysics Data System (ADS)
Palme, H.; O'Neill, H. St. C.
2003-12-01
, and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996).At the beginning of the twentieth century Harkins at the University of Chicago thought that meteorites would provide a better estimate for the bulk composition of the Earth than the terrestrial rocks collected at the surface as we have only access to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant and therefore more stable than those with odd atomic numbers and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI-meteorites (see Burke, 1986).If the Earth were similar compositionally to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron is concentrated in the interior of the Earth. The presence of a central metallic core to the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established using the study of seismic wave propagation by Oldham in 1906 with the outer boundary of the core accurately located at a depth of 2,900km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted. 
The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model is then used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and/or seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
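The calibration/estimation split that LOADEST automates can be sketched with a plain log-linear rating-curve regression, ln(load) = a0 + a1*ln(Q), fit by ordinary least squares on synthetic data. This is only an illustration of the workflow: the coefficients and flows below are invented, and LOADEST's actual AMLE/MLE/LAD machinery, censored-data handling, and retransformation bias correction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration data: streamflow Q and log-load derived from sampled
# concentrations. Synthetic truth: ln(L) = a0 + a1*ln(Q) + noise.
a0_true, a1_true = 1.5, 0.8
Q_cal = rng.uniform(10.0, 200.0, size=60)
lnL_cal = a0_true + a1_true * np.log(Q_cal) + 0.1 * rng.standard_normal(60)

# Calibration step: ordinary least squares on the log-linear model.
X = np.column_stack([np.ones_like(Q_cal), np.log(Q_cal)])
coef, *_ = np.linalg.lstsq(X, lnL_cal, rcond=None)
a0_hat, a1_hat = coef

# Estimation step: apply the fitted model to a daily streamflow series
# and sum to a period total (ignoring the retransformation bias that
# AMLE/MLE handle properly when exponentiating back from log space).
Q_daily = rng.uniform(10.0, 200.0, size=30)
load_daily = np.exp(a0_hat + a1_hat * np.log(Q_daily))
print(round(a0_hat, 2), round(a1_hat, 2), round(load_daily.sum(), 1))
```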
Cost Estimating Handbook for Environmental Restoration
1990-09-01
Environmental restoration (ER) projects have presented DOE and its cost estimators with a number of characteristics that are not comparable to the normal estimating climate within DOE. These include: an entirely new set of specialized expressions and terminology; a higher than normal exposure to cost and schedule risk, compared with most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work; a higher than normal percentage of indirect costs in the total estimated cost, due primarily to record keeping, special training, liability, and indemnification; and more than one estimate per project, particularly in the assessment phase, to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices, and in the coming years the EM program will be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements, since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE by providing the basis for accurate, consistent, and traceable baselines; sound methodologies, guidelines, and estimating formats; and sources of cost data, databases, and estimating tools and techniques available to DOE cost professionals.
Code of Federal Regulations, 2014 CFR
2014-01-01
...)(i) of this section only if the exchange rate is also estimated under paragraph (b)(2)(i) of this section and the estimated exchange rate affects the amount of such fees. (iii) Fees and taxes described in...)(vii). (1) Exchange rate. In disclosing the exchange rate as required under § 1005.31(b)(1)(iv),...
Z-Estimation and Stratified Samples
Breslow, Norman E.; Hu, Jie; Wellner, Jon A.
2015-01-01
The infinite dimensional Z-estimation theorem offers a systematic approach to joint estimation of both Euclidean and non-Euclidean parameters in probability models for data. It is easily adapted for stratified sampling designs. This is important in applications to censored survival data because the inverse probability weights that modify the standard estimating equations often depend on the entire follow-up history. Since the weights are not predictable, they complicate the usual theory based on martingales. This paper considers joint estimation of regression coefficients and baseline hazard functions in the Cox proportional and Lin-Ying additive hazards models. Weighted likelihood equations are used for the former and weighted estimating equations for the latter. Regression coefficients and baseline hazards may be combined to estimate individual survival probabilities. Efficiency is improved by calibrating or estimating the weights using information available for all subjects. Although inefficient in comparison with likelihood inference for incomplete data, which is often difficult to implement, the approach provides consistent estimates of desired population parameters even under model misspecification. PMID:25588605
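The inverse probability weighting at the heart of the abstract's estimating equations can be illustrated in a toy setting: estimating a population mean from a stratified sample with unequal, known inclusion probabilities. This Horvitz-Thompson sketch on synthetic data is only an analogy for the idea; the paper's weighted Cox and Lin-Ying estimating equations for censored survival data are far more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite population in two strata sampled at very different rates.
y = np.concatenate([rng.normal(10.0, 2.0, size=8000),   # stratum 0
                    rng.normal(20.0, 2.0, size=2000)])  # stratum 1
stratum = np.concatenate([np.zeros(8000, int), np.ones(2000, int)])
pi = np.where(stratum == 0, 0.05, 0.50)  # known inclusion probabilities

# Draw the stratified sample (each unit included with probability pi).
sampled = rng.random(y.size) < pi
y_s, pi_s = y[sampled], pi[sampled]

# The naive sample mean over-represents the heavily sampled stratum;
# weighting each unit by 1/pi (Horvitz-Thompson) corrects the bias.
naive = y_s.mean()
ht_mean = np.sum(y_s / pi_s) / y.size
print(round(naive, 2), round(ht_mean, 2))
```

The true population mean is 12 here; the unweighted mean lands near 17 because stratum 1 is sampled ten times as intensively, while the inverse-probability-weighted estimate recovers roughly 12.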
APPROACH FOR ESTIMATING GLOBAL LANDFILL METHANE EMISSIONS
The report is an overview of available country-specific data and modeling approaches for estimating global landfill methane. Current estimates of global landfill methane indicate that landfills account for between 4 and 15% of the global methane budget. The report describes an ap...