-
Leukocyte Recognition Using EM-Algorithm
NASA Astrophysics Data System (ADS)
Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.
This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils, and Lymphocytes. Each image pattern is projected onto a lower-dimensional subspace using PCA; the probability density function of each class is modeled with a Gaussian mixture fitted by the EM algorithm. A new cell image is classified using the maximum a posteriori decision rule.
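A minimal sketch of this kind of pipeline (PCA projection, one EM-fitted Gaussian mixture per class, MAP decision rule), written with scikit-learn; the component counts, function names, and data shapes are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: PCA + per-class Gaussian mixture (EM) + maximum a posteriori classification.
# Illustrative only; component counts and shapes are assumptions, not the paper's settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_classifier(X_train, y_train, n_components_pca=20, n_mix=3):
    pca = PCA(n_components=n_components_pca).fit(X_train)
    Z = pca.transform(X_train)
    classes = np.unique(y_train)
    models, priors = {}, {}
    for c in classes:
        Zc = Z[y_train == c]
        models[c] = GaussianMixture(n_components=n_mix).fit(Zc)  # EM fit per class
        priors[c] = len(Zc) / len(Z)                             # empirical class prior
    return pca, models, priors, classes

def classify(X, pca, models, priors, classes):
    Z = pca.transform(X)
    # log posterior ∝ log likelihood + log prior; pick the maximizing class (MAP rule)
    scores = np.column_stack([models[c].score_samples(Z) + np.log(priors[c]) for c in classes])
    return classes[np.argmax(scores, axis=1)]
```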
-
Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features
DTIC Science & Technology
2013-03-01
intermediate frequency; LFM: linear frequency modulation; MAP: maximum a posteriori; MATLAB®: matrix laboratory; ML: maximum likelihood; OFDM: orthogonal frequency... spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to... determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or
-
Total protein measurement in canine cerebrospinal fluid: agreement between a turbidimetric assay and 2 dye-binding methods and determination of reference intervals using an indirect a posteriori method.
PubMed
Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H
2014-03-01
In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (the Pyrogallol Red-Molybdate [PRM] method and the Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26-0.27]). The CBB method generally showed higher total protein values than the turbidimetric assay and the PRM assay (mean bias -0.14 g/L for both the turbidimetric and PRM assays). From 90 of the 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantiles) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L [90% CI: 0.07-0.08/0.33-0.39]; CBB method: 0.17-0.55 g/L [90% CI: 0.16-0.18/0.52-0.61]). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish laboratory-specific RIs. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
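As a rough illustration of the nonparametric reference-interval calculation (2.5% and 97.5% quantiles with 90% confidence intervals on each limit), the sketch below uses a simple bootstrap; the simulated data and bootstrap settings are illustrative assumptions, not the study's actual procedure.

```python
# Sketch: nonparametric reference interval with bootstrap CIs on each limit.
# The sample values are simulated stand-ins for CSF total protein (g/L).
import numpy as np

rng = np.random.default_rng(0)

def reference_interval(values, n_boot=2000):
    values = np.asarray(values, dtype=float)
    lower, upper = np.percentile(values, [2.5, 97.5])
    boot = np.array([np.percentile(rng.choice(values, size=len(values), replace=True),
                                   [2.5, 97.5]) for _ in range(n_boot)])
    lo_ci = np.percentile(boot[:, 0], [5, 95])   # 90% CI of the lower limit
    hi_ci = np.percentile(boot[:, 1], [5, 95])   # 90% CI of the upper limit
    return (lower, upper), lo_ci, hi_ci

csf_tp = rng.gamma(shape=4.0, scale=0.05, size=90)   # simulated values from 90 patients
(ri, lo_ci, hi_ci) = reference_interval(csf_tp)
```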
-
Image-based topology for sensor gridlocking and association
NASA Astrophysics Data System (ADS)
Stanek, Clay J.; Javidi, Bahram; Yanni, Philip
2002-07-01
Correlation engines have been evolving since the implementation of radar. In modern sensor fusion architectures, correlation and gridlock filtering are required to produce common, continuous, and unambiguous tracks of all objects in the surveillance area. The objective is to provide a unified picture of the theatre or area of interest to battlefield decision makers, ultimately enabling them to make better inferences for future action and to eliminate fratricide by reducing ambiguities. Here, correlation refers to association, which in this context is track-to-track association. A related process, gridlock filtering or gridlocking, refers to the reduction of navigation errors and sensor misalignment errors so that one sensor's track data can be accurately transformed into another sensor's coordinate system. As platforms gain multiple sensors, the correlation and gridlocking of tracks become significantly more difficult. Much of the existing correlation technology revolves around various interpretations of the generalized Bayesian decision rule: choose the action that minimizes conditional risk. One implementation of this principle reduces the risk-minimization statement to the comparison of likelihood ratios, weighted by a priori probabilities, against thresholds. The binary decision problem phrased in terms of likelihood ratios is also known as the Neyman-Pearson hypothesis test. Under another restatement of the principle, for a symmetric loss function, risk minimization leads to a decision that maximizes the a posteriori probability distribution. Even for deterministic decision rules, situations can arise in correlation where there are ambiguities. For these situations, a common remedy is a sparse assignment technique such as the Munkres or JVC algorithm. Furthermore, associated tracks may be combined in the hope of reducing the positional uncertainty of a target or object identified by an existing track using the information of several fused/correlated tracks. Gridlocking is typically accomplished with some type of least-squares algorithm, such as the Kalman filtering technique, which attempts to find the best estimate of the bias error vector from a set of correlated/fused track pairs. Here, we introduce a new approach to this longstanding problem by adapting many of the familiar concepts from pattern recognition, ones certainly familiar to target recognition applications. Furthermore, we show how this technique lends itself to specialized processing, such as that available through an optical or hybrid correlator.
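As a small illustration of the assignment step mentioned above, the sketch below resolves ambiguous track-to-track pairings with an optimal assignment solver (the Munkres/Jonker-Volgenant family) applied to a Mahalanobis-distance cost matrix; the tracks, covariance, and gating threshold are invented for illustration and are not from the paper.

```python
# Sketch: track-to-track association via optimal assignment on a Mahalanobis cost matrix.
# Toy tracks and covariance; the gate (~chi-square 99% for 2 dof) is an illustrative choice.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_a, tracks_b, cov, gate=9.21):
    # cost[i, j] = squared Mahalanobis distance between track i (sensor A) and track j (sensor B)
    diff = tracks_a[:, None, :] - tracks_b[None, :, :]
    inv_cov = np.linalg.inv(cov)
    cost = np.einsum('ijk,kl,ijl->ij', diff, inv_cov, diff)
    rows, cols = linear_sum_assignment(cost)          # globally optimal pairing
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]  # drop pairs outside gate

tracks_a = np.array([[0.0, 0.0], [10.0, 5.0]])
tracks_b = np.array([[0.3, -0.2], [9.8, 5.4]])
pairs = associate(tracks_a, tracks_b, cov=np.eye(2))
```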
-
Pointwise mutual information quantifies intratumor heterogeneity in tissue sections labeled with multiple fluorescent biomarkers.
PubMed
Spagnolo, Daniel M; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M; Lezon, Timothy R; Gough, Albert; Meyer, Dan E; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra
2016-01-01
Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression.
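As a minimal illustration of the association statistic itself, the sketch below computes PMI for all pairs of biomarker patterns from a co-occurrence count matrix; the counts and the smoothing constant are illustrative assumptions, not the paper's exact estimator.

```python
# Sketch: pointwise mutual information (PMI) between biomarker patterns
# from spatial co-occurrence counts. Counts here are random, for illustration only.
import numpy as np

def pmi_matrix(cooccurrence, eps=1e-12):
    joint = cooccurrence / cooccurrence.sum()            # p(x, y)
    px = joint.sum(axis=1, keepdims=True)                # p(x)
    py = joint.sum(axis=0, keepdims=True)                # p(y)
    return np.log((joint + eps) / (px @ py + eps))       # PMI(x, y) = log p(x,y) / (p(x) p(y))

# 8 dominant biomarker patterns, as in the paper; the counts are invented
counts = np.random.default_rng(1).integers(1, 100, size=(8, 8))
counts = (counts + counts.T) // 2                        # symmetrize co-occurrences
pmi = pmi_matrix(counts.astype(float))
```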
-
Preictal dynamics of EEG complexity in intracranially recorded epileptic seizure: a case report.
PubMed
Bob, Petr; Roman, Robert; Svetlak, Miroslav; Kukleta, Miloslav; Chladek, Jan; Brazdil, Milan
2014-11-01
Recent findings suggest that neural complexity, reflecting the number of independent processes in the brain, may characterize typical changes during epileptic seizures and may enable description of preictal dynamics. With respect to previously reported findings suggesting specific changes in neural complexity during the preictal period, we have used the pointwise correlation dimension (PD2) as a sensitive indicator of nonstationary changes in the complexity of the electroencephalogram (EEG) signal. Although this measure of complexity in epileptic patients was previously reported by Feucht et al (Applications of correlation dimension and pointwise dimension for non-linear topographical analysis of focal onset seizures. Med Biol Comput. 1999;37:208-217), it was not used to study changes in preictal dynamics. With the aim of studying preictal changes in EEG complexity, we examined signals from 11 multicontact depth (intracerebral) EEG electrodes located in 108 cortical and subcortical brain sites, and from 3 scalp EEG electrodes, in a patient with intractable epilepsy who underwent preoperative evaluation before epilepsy surgery. Of those 108 EEG contacts, records related to 44 electrode contacts implanted into lesional structures and white matter were not included in the analysis. The results show that, in comparison to the interictal period (at about 8-6 minutes before seizure onset), there was a statistically significant decrease in PD2 complexity in the preictal period, at about 2 minutes before seizure onset, in all 64 intracranial channels localized in the various brain sites included in the analysis, as well as in the 3 scalp EEG channels. These results suggest that using PD2 in EEG analysis may have significant implications for research on preictal dynamics and the prediction of epileptic seizures.
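For readers unfamiliar with dimension-based complexity measures, the sketch below estimates the classical Grassberger-Procaccia correlation dimension (a global relative of the pointwise PD2 used in the study) from a delay-embedded signal; the embedding dimension, delay, and radius range are arbitrary illustrative choices, not the authors' settings.

```python
# Sketch: correlation-dimension style complexity estimate from a delay-embedded signal.
# This is the global Grassberger-Procaccia D2, used here only as a simpler stand-in for PD2.
import numpy as np

def delay_embed(x, dim=5, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_dimension(x, radii):
    emb = delay_embed(x)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pair_d = dists[np.triu_indices(len(emb), k=1)]
    c = np.array([(pair_d < r).mean() for r in radii])    # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c + 1e-12), 1)
    return slope                                          # D2 ~ slope of log C(r) vs log r

eeg = np.random.default_rng(2).standard_normal(1000)      # stand-in for a short EEG segment
d2 = correlation_dimension(eeg, radii=np.linspace(0.5, 3.0, 10))
```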
-
Frequency doubling technology perimetry for detection of visual field progression in glaucoma: a pointwise linear regression analysis.
PubMed
Liu, Shu; Yu, Marco; Weinreb, Robert N; Lai, Gilda; Lam, Dennis Shun-Chiu; Leung, Christopher Kai-Shun
2014-05-02
We compared the detection of visual field progression and its rate of change between standard automated perimetry (SAP) and Matrix frequency doubling technology perimetry (FDTP) in glaucoma. We prospectively followed 217 eyes (179 glaucoma and 38 normal eyes) with SAP and FDTP testing at 4-month intervals for ≥36 months. Pointwise linear regression analysis was performed. A test location was considered progressing when the rate of change of visual sensitivity was ≤-1 dB/y for nonedge locations and ≤-2 dB/y for edge locations. Three criteria were used to define progression in an eye: progression at ≥3 adjacent nonedge test locations (conservative), at any three locations (moderate), or at any two locations (liberal). The rate of change of visual sensitivity was calculated with linear mixed models. Of the 217 eyes, 6.1% and 3.9% progressed with the conservative criteria, 14.5% and 5.6% with the moderate criteria, and 20.1% and 11.7% with the liberal criteria by FDTP and SAP, respectively. Taking all test locations into consideration (total, 54 × 179 locations), FDTP detected more progressing locations (176) than SAP (103, P < 0.001). The rate of change of visual field mean deviation (MD) was significantly faster for FDTP (all with P < 0.001). No eyes in the normal group showed progression using the conservative or the moderate criteria. With a faster rate of change of visual sensitivity, FDTP detected more progressing eyes than SAP at a comparable level of specificity. Frequency doubling technology perimetry can provide a useful alternative for monitoring glaucoma progression.
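A schematic version of the pointwise linear regression criterion described above: fit sensitivity against time at each test location and flag locations whose slope falls below the quoted cutoffs; the visit schedule and sensitivities are simulated, not study data.

```python
# Sketch: pointwise linear regression (PLR) progression flagging per test location.
# Simulated visits and sensitivities; slope cutoffs follow the abstract's criteria.
import numpy as np

def progressing_locations(times, sens, edge_mask, slope_nonedge=-1.0, slope_edge=-2.0):
    # sens: (n_visits, n_locations) dB sensitivities; times: visit times in years
    flags = np.zeros(sens.shape[1], dtype=bool)
    for j in range(sens.shape[1]):
        slope = np.polyfit(times, sens[:, j], 1)[0]        # dB per year at location j
        cutoff = slope_edge if edge_mask[j] else slope_nonedge
        flags[j] = slope <= cutoff
    return flags

rng = np.random.default_rng(3)
times = np.arange(0, 3.0, 1 / 3)                           # 4-month intervals over 3 years
sens = 30 + rng.normal(0, 1, size=(len(times), 54)) \
          - 1.5 * times[:, None] * (rng.random(54) < 0.1)  # a few truly progressing locations
flags = progressing_locations(times, sens, edge_mask=np.zeros(54, dtype=bool))
```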
-
Geophysical approaches to inverse problems: A methodological comparison. Part 1: A Posteriori approach
NASA Technical Reports Server (NTRS)
Seidman, T. I.; Munteanu, M. J.
1979-01-01
The relationships of a variety of general computational methods (and variants) for treating ill-posed problems, such as geophysical inverse problems, are considered. Differences in approach and interpretation based on varying assumptions as to, e.g., the nature of measurement uncertainties are discussed, along with the factors to be considered in selecting an approach. The reliability of the results of such computations is addressed.
-
Practical Considerations about Expected A Posteriori Estimation in Adaptive Testing: Adaptive A Priori, Adaptive Correction for Bias, and Adaptive Integration Interval.
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
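For context, expected a posteriori (EAP) estimation of proficiency is typically computed by numerical integration of the posterior over a quadrature grid; the sketch below does this for a two-parameter logistic model with an assumed normal prior, with item parameters chosen purely for illustration.

```python
# Sketch: EAP proficiency estimate for a 2PL item response model by quadrature.
# Item parameters, prior, and grid are illustrative assumptions, not from the paper.
import numpy as np

def eap_estimate(responses, a, b, prior_mean=0.0, prior_sd=1.0, n_quad=61):
    theta = np.linspace(-4, 4, n_quad)
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))  # P(correct | theta)
    lik = np.prod(np.where(responses[:, None] == 1, p, 1 - p), axis=0)
    post = lik * prior
    post /= post.sum()
    eap = np.sum(theta * post)                        # posterior mean (EAP)
    psd = np.sqrt(np.sum((theta - eap) ** 2 * post))  # posterior standard deviation
    return eap, psd

a = np.array([1.2, 0.8, 1.5, 1.0])    # discriminations (illustrative)
b = np.array([-0.5, 0.0, 0.5, 1.0])   # difficulties (illustrative)
eap, psd = eap_estimate(np.array([1, 1, 0, 0]), a, b)
```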
-
On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
1996-10-01
One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit - that of an average as obtained from a global fit vs a two-step fit of partitioned data - is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ2 distributions for variances.
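A small numerical illustration of the distinction at issue, for the simplest merge (a weighted average): the a priori ("internal consistency") uncertainty uses the assumed variances directly, while the a posteriori ("external consistency") uncertainty rescales it by the observed scatter (the Birge ratio); the data values are invented.

```python
# Sketch: internal vs external consistency uncertainty of a weighted mean (Birge ratio).
import numpy as np

x = np.array([10.2, 9.8, 10.5, 10.1])        # subset estimates (toy values)
sigma = np.array([0.2, 0.3, 0.25, 0.2])      # their a priori standard errors

w = 1.0 / sigma**2
mean = np.sum(w * x) / np.sum(w)

# a priori (internal consistency): uncertainty from the assumed sigmas alone
u_internal = np.sqrt(1.0 / np.sum(w))

# a posteriori (external consistency): internal uncertainty rescaled by the observed scatter
chi2_red = np.sum(w * (x - mean) ** 2) / (len(x) - 1)
u_external = u_internal * np.sqrt(chi2_red)
```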
-
Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States
NASA Astrophysics Data System (ADS)
Mao, Y.; Li, Q.; Randerson, J. T.; Liou, K.
2011-12-01
We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (global) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and the seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in North America are adjusted for three zones: boreal North America, temperate North America, and Mexico plus Central America. The resulting emissions are then used as the a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate show better agreement with IMPROVE observations (~20% increase in the Taylor skill score), including improved ability to capture the observed variability, especially during June-July. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) inventory are included.
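Schematically, the Bayesian linear inversion referred to here is the standard Gaussian analytic update of a priori emissions given observations through a linearized transport operator; the sketch below uses toy dimensions and error statistics, not the actual GEOS-Chem/IMPROVE setup.

```python
# Sketch: Bayesian linear inversion (Gaussian analytic update) of a priori emissions.
# Jacobian, covariances, and dimensions are toy values for illustration only.
import numpy as np

def bayesian_inversion(x_a, S_a, y, H, S_o):
    # Posterior mean: x_hat = x_a + G (y - H x_a), with gain G = S_a H^T (H S_a H^T + S_o)^-1
    G = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_o)
    x_hat = x_a + G @ (y - H @ x_a)
    S_hat = S_a - G @ H @ S_a                      # posterior error covariance
    return x_hat, S_hat

x_a = np.array([1.0, 1.0, 1.0])                    # a priori emissions (3 source regions)
S_a = np.diag([0.5, 0.5, 0.5]) ** 2                # a priori error covariance
H = np.array([[0.8, 0.1, 0.0],                     # toy Jacobian: sites vs. source regions
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.6]])
y = np.array([2.1, 2.6, 1.9])                      # observed surface concentrations
x_hat, S_hat = bayesian_inversion(x_a, S_a, y, H, S_o=np.eye(3) * 0.1)
```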
-
Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States
NASA Astrophysics Data System (ADS)
Mao, Y.; Li, Q.; Randerson, J. T.; CHEN, D.; Zhang, L.; Liou, K.
2012-12-01
We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (global) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and the seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in North America are adjusted for three zones: boreal North America, temperate North America, and Mexico plus Central America. The resulting emissions are then used as the a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate show better agreement with IMPROVE observations (~50% increase in the Taylor skill score), including improved ability to capture the observed variability, especially during June-September. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) inventory are included.
-
A meta-learning system based on genetic algorithms
NASA Astrophysics Data System (ADS)
Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain
2004-04-01
The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experience or meta-knowledge. We suggest using genetic algorithms as the basis of an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimating the future states of parameters, and is also applied to finding optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research on genetic algorithms and their capability to learn to learn.
-
Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space from a noisy observation vector y of its image through a known, possibly non-linear, map G. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al 2009 Inverse Problems and Imaging 3 87-122), which are well known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention, especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which, as we show, coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high-dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
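To give a concrete, finite-dimensional flavour of such MAP estimates: with a linear forward map and an l1-type (sparsity-promoting) prior on coefficients, the MAP estimate minimizes a data-misfit plus l1 functional, which the sketch below solves with a plain proximal-gradient (ISTA) loop; the operator, noise level, and penalty weight are toy choices, and this is not the paper's Besov-prior construction.

```python
# Sketch: finite-dimensional MAP estimate with an l1 (sparsity-promoting) penalty via ISTA.
# minimize 0.5 * ||A u - y||^2 + lam * ||u||_1  (toy stand-in for a Besov-type prior term)
import numpy as np

def map_estimate_l1(A, y, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ u - y)
        v = u - step * grad
        u = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)   # soft-thresholding prox
    return u

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100))                   # underdetermined toy forward map
u_true = np.zeros(100)
u_true[[5, 37, 80]] = [1.0, -2.0, 1.5]               # sparse "true" parameter
y = A @ u_true + 0.05 * rng.standard_normal(40)
u_map = map_estimate_l1(A, y, lam=0.1)
```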
-
DOE Office of Scientific and Technical Information (OSTI.GOV)
JOHNSON, A.R.
Biological control is any activity taken to prevent, limit, clean up, or remediate potential environmental, health and safety, or workplace quality impacts from plants, animals, or microorganisms. At Hanford, the principal emphasis of biological control is to prevent the transport of radioactive contamination by biological vectors (plants, animals, or microorganisms) and, where necessary, to control and clean up resulting contamination. Other aspects of biological control at Hanford include industrial weed control (e.g., tumbleweeds), noxious weed control (invasive, non-native plant species), and pest control (undesirable animals such as rodents and stinging insects, and microorganisms such as molds that adversely affect the quality of the workplace environment). Biological control activities may be either preventive (a priori) or in response to existing contamination spread (a posteriori). Surveillance activities, including ground, vegetation, flying insect, and other surveys, and a priori control actions, such as herbicide spraying and placing biological barriers, are important in preventing radioactive contamination spread. If surveillance discovers that biological vectors have spread radioactive contamination, a posteriori control measures, such as fixing contamination, followed by cleanup and removal of the contamination to an approved disposal location, are typical response functions. In some cases remediation following the contamination cleanup and removal is necessary. Biological control activities for industrial weeds, noxious weeds, and pests have similar modes of prevention and response.
-
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light, which contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of the class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of these algorithms for target detection from polarimetric data.
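A simplified sketch of the two nonparametric classifiers mentioned: a probabilistic neural network (per-class Parzen/Gaussian kernel density combined with class priors to pick the maximum a posteriori class) alongside a k-nearest-neighbor classifier from scikit-learn; the feature data, kernel width, and k are illustrative assumptions.

```python
# Sketch: PNN (Parzen-window class densities + MAP rule) and a KNN baseline.
# Synthetic 2-D features; sigma and k are illustrative choices, not from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    posteriors = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        lik = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)           # Parzen density estimate
        prior = len(Xc) / len(X_train)
        posteriors.append(prior * lik)
    return classes[np.argmax(np.column_stack(posteriors), axis=1)]  # maximum a posteriori class

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
X_new = rng.normal(1.5, 1, (5, 2))
pnn_labels = pnn_predict(X, y, X_new)
knn_labels = KNeighborsClassifier(n_neighbors=5).fit(X, y).predict(X_new)
```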
-
Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete vs. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.
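For orientation only, the sketch below shows a generic proper orthogonal decomposition (POD) basis and Galerkin projection, i.e., the bare skeleton of a projection-based reduced-order model; it is not the GNAT method or any of the specific techniques developed in the report, and the operator and snapshots are toy data.

```python
# Sketch: generic POD basis from snapshots and Galerkin projection of a linear operator.
# Toy dimensions; included only to illustrate what "projection-based ROM" means here.
import numpy as np

def pod_basis(snapshots, r):
    # snapshots: (n_dof, n_snapshots) matrix of full-order solution states
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                                  # r dominant modes

def reduced_operator(A, V):
    return V.T @ A @ V                               # Galerkin projection of the full operator

rng = np.random.default_rng(6)
A = -np.eye(200) + 0.01 * rng.standard_normal((200, 200))   # toy full-order linear operator
X = rng.standard_normal((200, 30))                           # toy snapshot matrix
V = pod_basis(X, r=10)
A_r = reduced_operator(A, V)                                 # 10x10 reduced-order operator
```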
-
The effect of muscle contraction level on the cervical vestibular evoked myogenic potential (cVEMP): usefulness of amplitude normalization.
PubMed
Bogle, Jamie M; Zapala, David A; Criter, Robin; Burkard, Robert
2013-02-01
The cervical vestibular evoked myogenic potential (cVEMP) is a reflexive change in sternocleidomastoid (SCM) muscle contraction activity thought to be mediated by a saccular vestibulo-collic reflex. CVEMP amplitude varies with the state of the afferent (vestibular) limb of the vestibulo-collic reflex pathway, as well as with the level of SCM muscle contraction. It follows that in order for cVEMP amplitude to reflect the status of the afferent portion of the reflex pathway, muscle contraction level must be controlled. Historically, this has been accomplished by volitionally controlling muscle contraction level either with the aid of a biofeedback method, or by an a posteriori method that normalizes cVEMP amplitude by the level of muscle contraction. A posteriori normalization methods make the implicit assumption that mathematical normalization precisely removes the influence of the efferent limb of the vestibulo-collic pathway. With the cVEMP, however, we are violating basic assumptions of signal averaging: specifically, the background noise and the response are not independent. The influence of this signal-averaging violation on our ability to normalize cVEMP amplitude using a posteriori methods is not well understood. The aims of this investigation were to describe the effect of muscle contraction, as measured by a prestimulus electromyogenic estimate, on cVEMP amplitude and interaural amplitude asymmetry ratio, and to evaluate the benefit of using a commonly advocated a posteriori normalization method on cVEMP amplitude and asymmetry ratio variability. Prospective, repeated-measures design using a convenience sample. Ten healthy adult participants between 25 and 61 yr of age. cVEMP responses to 500 Hz tone bursts (120 dB pSPL) for three conditions describing maximum, moderate, and minimal muscle contraction. Mean (standard deviation) cVEMP amplitude and asymmetry ratios were calculated for each muscle-contraction condition. Repeated measures analysis of variance and t-tests compared the variability in cVEMP amplitude between sides and conditions. Linear regression analyses compared asymmetry ratios. Polynomial regression analyses described the corrected and uncorrected cVEMP amplitude growth functions. While cVEMP amplitude increased with increased muscle contraction, the relationship was not linear or even proportionate. In the majority of cases, once muscle contraction reached a certain "threshold" level, cVEMP amplitude increased rapidly and then saturated. Normalizing cVEMP amplitudes did not remove the relationship between cVEMP amplitude and muscle contraction level. As muscle contraction increased, the normalized amplitude increased, and then decreased, corresponding with the observed amplitude saturation. Abnormal asymmetry ratios (based on values reported in the literature) were noted for four instances of uncorrected amplitude asymmetry at less than maximum muscle contraction levels. Amplitude normalization did not substantially change the number of observed asymmetry ratios. Because cVEMP amplitude did not typically grow proportionally with muscle contraction level, amplitude normalization did not lead to stable cVEMP amplitudes or asymmetry ratios across varying muscle contraction levels. Until we better understand the relationships between muscle contraction level, surface electromyography (EMG) estimates of muscle contraction level, and cVEMP amplitude, the application of normalization methods to correct cVEMP amplitude appears unjustified. American Academy of Audiology.
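As a toy illustration of the a posteriori normalization and asymmetry-ratio calculations discussed here: dividing the raw peak-to-peak amplitude by the mean rectified prestimulus EMG is one common normalization convention, assumed here for illustration rather than taken from the study, and the amplitude and EMG values are invented.

```python
# Sketch: cVEMP amplitude normalization by prestimulus EMG and interaural asymmetry ratio.
# All values are invented; the normalization convention is an assumption, not the study's.
import numpy as np

def normalized_amplitude(p1_n1_uv, prestim_emg_uv):
    return p1_n1_uv / np.mean(np.abs(prestim_emg_uv))   # dimensionless corrected amplitude

def asymmetry_ratio(left_amp, right_amp):
    return 100.0 * abs(left_amp - right_amp) / (left_amp + right_amp)

rng = np.random.default_rng(7)
left = normalized_amplitude(85.0, rng.normal(0, 60, 2000))   # µV amplitude, µV EMG samples
right = normalized_amplitude(60.0, rng.normal(0, 55, 2000))
ar = asymmetry_ratio(left, right)
```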
-
Exploring image data assimilation in the prospect of high-resolution satellite data
NASA Astrophysics Data System (ADS)
Verron, J. A.; Duran, M.; Gaultier, L.; Brankart, J. M.; Brasseur, P.
2016-02-01
Many recent works show the key importance of studying the ocean at fine scales, including the meso- and submesoscales. Satellite observations such as ocean color data provide information on a wide range of scales but do not directly provide information on ocean dynamics. Satellite altimetry provides information on the ocean dynamic topography (SSH), but so far with limited resolution in space and, even more so, in time. In the near future, however, high-resolution SSH data (e.g., SWOT) will give a vision of the dynamic topography at fine spatial resolution. This raises some challenging issues for data assimilation in physical oceanography: developing reliable methodologies to assimilate high-resolution data, making integrated use of various data sets including biogeochemical data, and, more simply, handling large amounts of data and huge state vectors. In this work, we propose to consider structured information rather than pointwise data. First, we take an image data assimilation approach in studying the feasibility of inverting tracer observations from Sea Surface Temperature and/or Ocean Color datasets to improve the description of mesoscale dynamics provided by altimetric observations. Finite-Size Lyapunov Exponents are used as an image proxy. The inverse problem is formulated in a Bayesian framework and expressed in terms of a cost function measuring the misfits between the two images. Second, we explore the inversion of SWOT-like high-resolution SSH data and, more specifically, the various possible proxies of the actual SSH that could be used to control the ocean circulation at various scales. One focus is on controlling the subsurface ocean from surface-only data. A key point lies in the errors and uncertainties associated with SWOT data.
-
Uncertainties for two-dimensional models of solar rotation from helioseismic eigenfrequency splitting
NASA Technical Reports Server (NTRS)
Genovese, Christopher R.; Stark, Philip B.; Thompson, Michael J.
1995-01-01
Observed solar p-mode frequency splittings can be used to estimate angular velocity as a function of position in the solar interior. Formal uncertainties of such estimates depend on the method of estimation (e.g., least-squares), the distribution of errors in the observations, and the parameterization imposed on the angular velocity. We obtain lower bounds on the uncertainties that do not depend on the method of estimation; the bounds depend on an assumed parameterization, but the fact that they are lower bounds for the 'true' uncertainty does not. Ninety-five percent confidence intervals for estimates of the angular velocity from 1986 Big Bear Solar Observatory (BBSO) data, based on a 3659 element tensor-product cubic-spline parameterization, are everywhere wider than 120 nHz, and exceed 60,000 nHz near the core. When compared with estimates of the solar rotation, these bounds reveal that useful inferences based on pointwise estimates of the angular velocity using 1986 BBSO splitting data are not feasible over most of the Sun's volume. The discouraging size of the uncertainties is due principally to the fact that helioseismic measurements are insensitive to changes in the angular velocity at individual points, so estimates of point values based on splittings are extremely uncertain. Functionals that measure distributed 'smooth' properties are, in general, better constrained than estimates of the rotation at a point. For example, the uncertainties in estimated differences of average rotation between adjacent blocks of about 0.001 solar volumes across the base of the convective zone are much smaller, and one of several estimated differences we compute appears significant at the 95% level.