NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Zupanski, Milija
2018-01-01
In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited a high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multilayer snow model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change relative to the Noah LSM, but the uncertainty in daily solid precipitation differed minimally between the two LSMs. The impact of LSM choice on reducing temperature uncertainty was limited to the surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces uncertainties associated with land surface processes compared with the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.
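The spread-skill check this abstract describes is easy to illustrate with NumPy. The sketch below uses synthetic numbers (the 32-member size matches the abstract, but the grid size, snow depths, and error magnitudes are invented, not the study's data): a reliable ensemble shows high correlation between its spread and the error of its mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 32-member ensemble forecasts of snow depth (cm) at 50
# grid points, plus a verifying observation-based value at each point.
n_members, n_points = 32, 50
truth = rng.uniform(20, 80, n_points)
point_noise = rng.uniform(1, 6, n_points)        # per-point error magnitude
ensemble = truth + rng.normal(0, point_noise, (n_members, n_points))

# Ensemble spread: standard deviation across members at each point.
spread = ensemble.std(axis=0, ddof=1)

# Prediction-skill proxy: absolute error of the ensemble mean.
error = np.abs(ensemble.mean(axis=0) - truth)

# A realistic uncertainty estimate shows high spread-skill correlation.
corr = float(np.corrcoef(spread, error)[0, 1])
print(f"spread-skill correlation: {corr:.2f}")
```
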
NASA Astrophysics Data System (ADS)
Lim, S.; Park, S. K.; Zupanski, M.
2015-09-01
Ozone (O3) plays an important role in chemical reactions and is usually incorporated in chemical data assimilation (DA). In tropical cyclones (TCs), O3 usually shows a lower concentration inside the eyewall and an elevated concentration around the eye, impacting meteorological as well as chemical variables. To identify the impact of O3 observations on TC structure, including meteorological and chemical information, we developed a coupled meteorology-chemistry DA system by employing the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and an ensemble-based DA algorithm, the maximum likelihood ensemble filter (MLEF). For a TC case that occurred over East Asia, Typhoon Nabi (2005), our results indicate that the ensemble forecast is reasonable, accompanied by larger background state uncertainty over the TC and also over eastern China. Similarly, the assimilation of O3 observations impacts meteorological and chemical variables near the TC and over eastern China. The strongest impact on air quality in the lower troposphere was over China, likely due to pollution advection. In the vicinity of the TC, however, the strongest impact on chemical variable adjustments was at higher levels. The impact on meteorological variables was similar both over China and near the TC. The analysis results are verified using several measures, including the cost function, root mean square (RMS) error with respect to observations, and degrees of freedom for signal (DFS). All measures indicate a positive impact of DA on the analysis: the cost function and RMS error decreased by 16.9% and 8.87%, respectively. In particular, the DFS indicates a strong positive impact of observations in the TC area, with a weaker maximum over northeastern China.
NASA Astrophysics Data System (ADS)
Lim, S.; Park, S. K.; Zupanski, M.
2015-04-01
Since air quality forecasting is related to both chemistry and meteorology, a coupled atmosphere-chemistry data assimilation (DA) system is essential to air quality forecasting. Ozone (O3) plays an important role in chemical reactions and is usually assimilated in chemical DA. In tropical cyclones (TCs), O3 usually shows a lower concentration inside the eyewall and an elevated concentration around the eye, impacting atmospheric as well as chemical variables. To identify the impact of O3 observations on TC structure, including atmospheric and chemical information, we employed the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) with an ensemble-based DA algorithm, the maximum likelihood ensemble filter (MLEF). For a TC case that occurred over East Asia, our results indicate that the ensemble forecast is reasonable, accompanied by larger background state uncertainty over the TC and also over eastern China. Similarly, the assimilation of O3 observations impacts atmospheric and chemical variables near the TC and over eastern China. The strongest impact on air quality in the lower troposphere was over China, likely due to pollution advection. In the vicinity of the TC, however, the strongest impact on chemical variable adjustments was at higher levels. The impact on atmospheric variables was similar both over China and near the TC. The analysis results are validated using several measures, including the cost function, root-mean-square (RMS) error with respect to observations, and degrees of freedom for signal (DFS). All measures indicate a positive impact of DA on the analysis: the cost function and RMS error decreased by 16.9% and 8.87%, respectively. In particular, the DFS indicates a strong positive impact of observations in the TC area, with a weaker maximum over northeast China.
Liu, Yan; Li, Xuemei; Xie, Chen; Luo, Xiuzhen; Bao, Yonggang; Wu, Bin; Hu, Yuchi; Zhong, Zhong; Liu, Chang; Li, MinJie
2016-01-01
For centuries, mulberry leaf has been used in traditional Chinese medicine for the treatment of diabetes. This study aims to test the preventive effects of a proprietary mulberry leaf extract (MLE) and a formula consisting of MLE, fenugreek seed extract, and cinnamon cassia extract (MLEF) on insulin resistance development in animals. MLE was refined to contain 5% 1-deoxynojirimycin by weight. MLEF was formulated by mixing MLE with cinnamon cassia extract and fenugreek seed extract at a 6:5:3 ratio (by weight). First, the acute toxicity effects of MLE on ICR mice were examined at a 5 g/kg BW dose. Second, two groups of normal rats were administered water or 150 mg/kg BW MLE per day for 29 days to evaluate MLE's effect on normal animals. Third, to examine the effects of MLE and MLEF on model animals, sixty SD rats were divided into five groups, namely, (1) normal, (2) model, (3) high-dose MLE (75 mg/kg BW) treatment, (4) low-dose MLE (15 mg/kg BW) treatment, and (5) MLEF (35 mg/kg BW) treatment. In the second week, rats in groups (2)-(5) were switched to a high-energy diet for three weeks. Afterward, the rats were injected (ip) with a single dose of 105 mg/kg BW alloxan. After four more days, fasting blood glucose, post-prandial blood glucose, serum insulin, cholesterol, and triglyceride levels were measured. Last, liver lysates from the animals were screened with 650 antibodies for changes in the expression or phosphorylation levels of signaling proteins. The results were further validated by Western blot analysis. We found that the maximum tolerated dose of MLE was greater than 5 g/kg in mice. MLE at a 150 mg/kg BW dose showed no effect on fasting blood glucose levels in normal rats. MLE at a 75 mg/kg BW dose and MLEF at a 35 mg/kg BW dose significantly (p < 0.05) reduced fasting blood glucose levels in rats with impaired glucose and lipid metabolism. In total, 34 proteins with significant changes in expression and phosphorylation levels were identified.
Changes in JNK, IRS1, and PDK1 were confirmed by Western blot analysis. In conclusion, this study demonstrated for the first time the potential protective effects of MLE and MLEF against hyperglycemia induced by a high-energy diet and toxic chemicals in rats. The most likely mechanism is the promotion of IRS1 phosphorylation, which leads to restoration of insulin sensitivity.
Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin
2014-01-01
This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but contain negative values, even though the actual states must be nonnegative. In such situations, if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
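The constrained update described above can be illustrated as a small quadratic program. The sketch below is a minimal stand-in, not the paper's formulation: after a standard EnKF update leaves a member with a negative value, we find the nearest state (in least squares) that is nonnegative and keeps the same mass integral, using SciPy's SLSQP solver on an invented four-component state.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_update(member, total_mass):
    """Project an updated ensemble member onto {x >= 0, sum(x) = total_mass}."""
    n = member.size
    res = minimize(
        lambda x: 0.5 * np.sum((x - member) ** 2),  # distance to raw update
        x0=np.clip(member, 0.0, None),              # feasible-ish start
        jac=lambda x: x - member,
        bounds=[(0.0, None)] * n,                   # positivity constraint
        constraints=[{"type": "eq",                 # mass conservation
                      "fun": lambda x: x.sum() - total_mass}],
        method="SLSQP",
    )
    return res.x

raw = np.array([0.4, -0.1, 0.3, 0.4])   # raw EnKF update with a negative value
fixed = constrained_update(raw, raw.sum())
print(fixed, fixed.sum())
```

Simply clipping the negative entry to zero would inflate the total mass to 1.1; the quadratic program instead redistributes the deficit across the positive entries so the mass integral stays exactly 1.0.
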
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence
Kelly, David; Majda, Andrew J.; Tong, Xin T.
2015-01-01
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing high-order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are the number of ensemble members necessary to properly represent the error statistics and the modifications to the usual filter equations necessary to allow for a correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem quite puzzling in that the resulting state estimates are worse than for their filter analogue (Evensen 1997). In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
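A minimal version of the kind of ensemble filtering experiment described above can be sketched in a few lines. This is an illustrative setup, not the study's: a perturbed-observation EnKF (in the spirit of Burgers 1998) on the Lorenz (1963) model, with an invented ensemble size, observation network, and assimilation interval, and a simple forward-Euler integrator.

```python
import numpy as np

rng = np.random.default_rng(1)

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system (sketch only)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

# Observe only the first state component, with unit observation error.
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[1.0]])

def enkf_update(ens, y):
    anom = ens - ens.mean(axis=0)
    P = anom.T @ anom / (len(ens) - 1)           # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Perturb observations so the updated spread is statistically consistent.
    y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]), (len(ens), 1))
    return ens + (y_pert - ens @ H.T) @ K.T

truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0.0, 1.0, (20, 3))      # 20-member initial ensemble
for t in range(200):
    truth = lorenz63(truth)
    ens = np.array([lorenz63(m) for m in ens])
    if t % 10 == 0:                              # assimilate every 10 steps
        y = H @ truth + rng.normal(0.0, 1.0, 1)
        ens = enkf_update(ens, y)
err = float(np.linalg.norm(ens.mean(axis=0) - truth))
print("final ensemble-mean error:", err)
```
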
Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin
2013-01-01
Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and the ensemble Kalman filter (EnKF) algorithms can conserve the mass integral but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
A simple new filter for nonlinear high-dimensional data assimilation
NASA Astrophysics Data System (ADS)
Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo
2015-04-01
The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. 
The NETF is stable, behaves reasonably, and shows good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully and as simply as the ETKF in high-dimensional problems without further modification of the algorithm, even though it is based only on the particle weights. This demonstrates that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
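The core of the analysis step described above (Bayesian particle weights plus a matrix-square-root transform of the forecast anomalies) can be sketched as follows. This is an illustrative reduction, not the authors' code: the random rotation is omitted for brevity, and the ensemble size, observation operator, and state dimension are invented.

```python
import numpy as np

def netf_analysis(ens, y, H, R):
    """NETF-style analysis (sketch): match the Bayesian mean and covariance."""
    m = len(ens)
    innov = y - ens @ H.T                        # per-member innovations
    logw = -0.5 * np.sum(innov @ np.linalg.inv(R) * innov, axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                 # normalized particle weights

    mean_a = w @ ens                             # Bayesian posterior mean
    A = ens - ens.mean(axis=0)                   # forecast anomalies (m x n)
    W = np.diag(w) - np.outer(w, w)              # weight matrix; W @ 1 = 0
    # Symmetric square root of the PSD matrix W via eigendecomposition.
    vals, vecs = np.linalg.eigh(W)
    T = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    # Transformed anomalies have zero mean (T annihilates the ones vector),
    # so the analysis ensemble is centered exactly on the Bayesian mean.
    return mean_a + np.sqrt(m) * T @ A

rng = np.random.default_rng(2)
ens = rng.normal(0.0, 1.0, (40, 3))              # 40-member toy ensemble
H, R = np.eye(3)[:1], np.eye(1)                  # observe first component
ens_a = netf_analysis(ens, np.array([0.5]), H, R)
print(ens_a.mean(axis=0))
```
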
Hybrid Data Assimilation without Ensemble Filtering
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Akkraoui, Amal El
2014-01-01
The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations to compensate for sampling and modeling errors, respectively, and, to keep the small-member ensemble solution close to the variational solution, we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at larger resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
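The filter-free ensemble generation described above can be sketched in a few lines: each member is the resolution-reduced variational analysis plus an additive-inflation perturbation. Everything concrete below is an illustrative assumption, not the GMAO implementation: the simple block-averaging stands in for a real regridder, and the field, sizes, and inflation amplitude are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def coarsen(field, factor):
    """Reduce resolution by block-averaging (stand-in for a real regridder)."""
    n = field.size // factor
    return field[: n * factor].reshape(n, factor).mean(axis=1)

def filter_free_ensemble(var_analysis, n_members, factor, infl_std):
    """Generate members by re-centering on the analysis + additive inflation."""
    center = coarsen(var_analysis, factor)
    perts = rng.normal(0.0, infl_std, (n_members, center.size))
    perts -= perts.mean(axis=0)          # keep the ensemble exactly centered
    return center + perts

analysis = rng.normal(280.0, 5.0, 128)   # hypothetical high-resolution analysis
ens = filter_free_ensemble(analysis, n_members=32, factor=4, infl_std=0.5)
print(ens.shape, abs(ens.mean(axis=0) - coarsen(analysis, 4)).max())
```
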
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.
2011-12-01
We present results of assimilating ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
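The contrast drawn above (the EnKF shifts the states, the PF reweights them) can be shown on a scalar toy problem. All numbers below are illustrative assumptions, not SACSMA values: a soil moisture prior around 0.25 and an observation of 0.30.

```python
import numpy as np

rng = np.random.default_rng(4)

obs, obs_var = 0.30, 0.02 ** 2
states = rng.normal(0.25, 0.05, 100)            # prior ensemble / particles

# EnKF-style update: move each member toward the (perturbed) observation.
prior_var = states.var(ddof=1)
gain = prior_var / (prior_var + obs_var)        # scalar Kalman gain
obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), states.size)
m_enkf = float((states + gain * (obs_pert - states)).mean())

# PF-style update: keep states fixed, reweight by observation likelihood.
logw = -0.5 * (obs - states) ** 2 / obs_var
w = np.exp(logw - logw.max())
w /= w.sum()
m_pf = float(np.sum(w * states))

print(m_enkf, m_pf)                             # both pulled toward 0.30
```
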
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model, which correspond to short-term growth, and Lyapunov vectors, which correspond to long-term growth. Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as nonlinearity in the system increases. Nonlinear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long- and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
Comparison of different filter methods for data assimilation in the unsaturated zone
NASA Astrophysics Data System (ADS)
Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa
2016-04-01
The unsaturated zone is an important compartment, which plays a role in the division of terrestrial water fluxes into surface runoff, groundwater recharge, and evapotranspiration. For data assimilation in coupled systems, it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: processes can have long memory, and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly nonlinear. For these reasons, the use of the popular ensemble Kalman filter as a data assimilation method to estimate states and parameters in unsaturated zone models could be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular, we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013), as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014), to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We also consider test cases with a layered structure, as layering is often found in natural soils. We assume that the observations are water contents, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found.
Therefore, we compare computational costs of the different methods as well as the quality of open loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.
2015-09-30
Toward the Development of a Coupled COAMPS-ROMS Ensemble Kalman Filter and Adjoint...system at NCAR. (2) Compare the performance of the Ensemble Kalman Filter (EnKF) using the Data Assimilation Research Testbed (DART) and 4...undercurrent is clearly visible. Figure 2 shows the horizontal temperature structure and circulation at a depth of 50 m within the surface mixed layer
Yang, Wan; Karspeck, Alicia; Shaman, Jeffrey
2014-01-01
A variety of filtering methods enable the recursive estimation of system state variables and inference of model parameters. These methods have found application in a range of disciplines and settings, including engineering design and forecasting, and, over the last two decades, have been applied to infectious disease epidemiology. For any system of interest, the ideal filter depends on the nonlinearity and complexity of the model to which it is applied, the quality and abundance of observations being entrained, and the ultimate application (e.g. forecast, parameter estimation, etc.). Here, we compare the performance of six state-of-the-art filter methods when used to model and forecast influenza activity. Three particle filters—a basic particle filter (PF) with resampling and regularization, maximum likelihood estimation via iterated filtering (MIF), and particle Markov chain Monte Carlo (pMCMC)—and three ensemble filters—the ensemble Kalman filter (EnKF), the ensemble adjustment Kalman filter (EAKF), and the rank histogram filter (RHF)—were used in conjunction with a humidity-forced susceptible-infectious-recovered-susceptible (SIRS) model and weekly estimates of influenza incidence. The modeling frameworks, first validated with synthetic influenza epidemic data, were then applied to fit and retrospectively forecast the historical incidence time series of seven influenza epidemics during 2003–2012, for 115 cities in the United States. Results suggest that, when using the SIRS model, the ensemble filters and the basic PF are more capable of faithfully recreating historical influenza incidence time series, while the MIF and pMCMC do not perform as well for multimodal outbreaks. For forecasts of the week with the highest influenza activity, the accuracies of the six model-filter frameworks are comparable; the three particle filters perform slightly better when predicting peaks 1–5 weeks in the future, while the ensemble filters are more accurate when predicting peaks in the past.
PMID:24762780
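The basic particle filter named in the abstract above can be illustrated in a few lines. This is a generic bootstrap-filter sketch, not the authors' implementation: the propagation and likelihood functions are user-supplied stand-ins, and the regularization step is omitted.

```python
import numpy as np

def particle_filter_step(particles, y_obs, propagate, log_likelihood, rng):
    """One cycle of a basic particle filter with multinomial resampling.

    particles      : (N, d) array of state samples
    y_obs          : observation for this cycle
    propagate      : f(particles, rng) -> (N, d), samples the transition density
    log_likelihood : f(particles, y_obs) -> (N,), log p(y | x) per particle
    """
    particles = propagate(particles, rng)     # forecast each particle forward
    logw = log_likelihood(particles, y_obs)
    w = np.exp(logw - logw.max())             # subtract max for numerical stability
    w /= w.sum()                              # normalized importance weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                     # resampled (equal-weight) posterior
```

With a toy random-walk model and a Gaussian likelihood, one filter step pulls the ensemble mean toward the observation, as expected of the analysis update.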
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
NASA Astrophysics Data System (ADS)
Li, Chengwei; Zhan, Liwei
2015-12-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Selection of the relevant modes is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter simulated and friction signals. The friction signal between an airplane tire and the runway, recorded during a simulated airplane touchdown, features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
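A minimal sketch of the NIMF-based mode selection, under stated assumptions: the IMFs are taken as given (EMD itself is not implemented), plain correlation stands in for the paper's modified Hausdorff distance, and modes whose NIMF resembles the first NIMF are treated as noise-dominated and discarded. The threshold is illustrative.

```python
import numpy as np

def nimf_filter(x, imfs, threshold=0.5):
    """Filter a noisy signal x given its IMFs (one per row).

    NIMF_k = x - IMF_k.  Modes whose NIMF is similar to the first NIMF
    (correlation stand-in for the modified Hausdorff distance) are taken
    as noise-dominated and excluded from the reconstruction.
    """
    nimfs = x[None, :] - imfs                          # NIMF_k = x - IMF_k
    ref = nimfs[0]
    sims = np.array([abs(np.corrcoef(ref, n)[0, 1]) for n in nimfs])
    noisy = sims > threshold                           # modes grouped with the first
    return imfs[~noisy].sum(axis=0)                    # sum of retained modes
```

In the idealized two-mode case (one noise IMF, one signal IMF), the reconstruction recovers the clean signal exactly.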
An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems
NASA Technical Reports Server (NTRS)
Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.
2006-01-01
Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
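The variance-based targeting criterion described above is simple to sketch: observe where the ensemble disagrees most. This is an illustrative reading, not the LETKF code used in the study.

```python
import numpy as np

def target_observation_sites(ensemble, k):
    """Return the k grid indices with the largest ensemble variance.

    ensemble : (N, d) array, N members over d grid points.
    """
    var = ensemble.var(axis=0, ddof=1)     # sample variance per grid point
    return np.argsort(var)[::-1][:k]       # indices sorted by descending variance
```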
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
NASA Astrophysics Data System (ADS)
Iglesias, Marco; Sawlan, Zaid; Scavino, Marco; Tempone, Raúl; Wood, Christopher
2018-07-01
In this work, we present the ensemble-marginalized Kalman filter (EnMKF), a sequential algorithm analogous to our previously proposed approach (Ruggeri et al 2017 Bayesian Anal. 12 407–33, Iglesias et al 2018 Int. J. Heat Mass Transfer 116 417–31), for estimating the state and parameters of linear parabolic partial differential equations in initial-boundary value problems when the boundary data are noisy. We apply EnMKF to infer the thermal properties of building walls and to estimate the corresponding heat flux from real and synthetic data. Compared with a modified ensemble Kalman filter (EnKF) that is not marginalized, EnMKF reduces the bias error, avoids the collapse of the ensemble without needing to add inflation, and converges to the mean field posterior using or less of the ensemble size required by EnKF. According to our results, the marginalization technique in EnMKF is key to performance improvement with smaller ensembles at any fixed time.
NASA Astrophysics Data System (ADS)
Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick
2015-12-01
Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant on the Ensemble Kalman Filter (EnKF). RumEnKF differs from EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory, and mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current generation super computer architecture, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the used memory compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low dimensional Lorenz and high dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
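The memory-saving idea above, keeping only the first two moments while members stream through one at a time, can be sketched with a Welford-style accumulator. This is a simplified stand-in for RumEnKF's actual bookkeeping, not the algorithm itself.

```python
import numpy as np

class MomentAccumulator:
    """Running mean and covariance of ensemble members, so each member can
    be generated, absorbed, and discarded without storing the ensemble."""

    def __init__(self, d):
        self.n = 0
        self.mean = np.zeros(d)
        self._m2 = np.zeros((d, d))   # accumulated outer products of deviations

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += np.outer(delta, x - self.mean)   # Welford-style update

    @property
    def cov(self):
        return self._m2 / (self.n - 1)               # unbiased sample covariance
```

The accumulated moments match the batch sample mean and covariance, so downstream Kalman-type updates can proceed from them alone.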
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties, providing critical information for decision makers in water resources management. This study evaluates a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D), using an Unbiased Ensemble Square Root Filter (UnEnSRF), a variant of the Ensemble Kalman Filter (EnKF), to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. To keep the ensemble filter working well during the data assimilation, two factors are introduced. One, a damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, an inflation factor, relaxes the posterior ensemble perturbations toward the prior to avoid filter inbreeding. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. Appropriate observation error and ensemble size were also determined to facilitate further analysis. This study demonstrated that assimilating both model parameters and states gives a smaller model prediction error but larger uncertainty, while assimilating only model states provides a smaller predictive uncertainty but a larger model prediction error.
Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a successful base for applications in real time modelling or real time controlling strategies in groundwater resources management.
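The two stabilizing factors described above can be sketched as follows: the damping factor scales the mean increment, and the relaxation blends posterior perturbations back toward the prior. The factor values and the exact blending form are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def damped_relaxed_update(prior, analysis, alpha=0.5, r=0.5):
    """Apply damping and relaxation-to-prior to an ensemble update.

    prior, analysis : (N, d) ensembles before/after the filter update
    alpha : damping factor in (0, 1] scaling the mean increment
    r     : relaxation factor in [0, 1] blending posterior perturbations
            back toward the prior perturbations
    """
    prior_mean = prior.mean(axis=0)
    ana_mean = analysis.mean(axis=0)
    new_mean = prior_mean + alpha * (ana_mean - prior_mean)   # damped mean update
    prior_perts = prior - prior_mean
    ana_perts = analysis - ana_mean
    perts = (1.0 - r) * ana_perts + r * prior_perts           # relaxed perturbations
    return new_mean + perts
```

The limiting cases behave as expected: alpha=0 with r=1 returns the prior ensemble unchanged, while alpha=1 with r=0 returns the raw analysis.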
Fire spread estimation on forest wildfire using ensemble kalman filter
NASA Astrophysics Data System (ADS)
Syarifah, Wardatus; Apriliani, Erna
2018-04-01
Wildfire is one of the most frequent disasters in the world; forest wildfires, for example, reduce forest populations. Forest wildfires, whether naturally occurring or prescribed, pose risks to ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. A model of fire spread in forest wildfire, based on a diffusion-reaction equation, was established to determine the fire properties. There are many methods to estimate the spread of fire. The Ensemble Kalman Filter (EnKF) is a modification of the Kalman Filter algorithm that can be used to estimate linear and non-linear system models. In this research we apply the EnKF to estimate the spread of fire in a forest wildfire. Before applying the EnKF, the fire spread model is discretized using the finite difference method. Finally, the analysis is illustrated by numerical simulation. The simulation results show that the EnKF estimate is closer to the system model when the ensemble size is larger and when the covariances of the system model and the measurement are smaller.
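A sketch of the two ingredients named above: an explicit finite-difference step for a generic 1-D diffusion-reaction model, and a perturbed-observation EnKF analysis. The reaction term, boundary treatment, and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def fire_step(u, dt, dx, D, r):
    """One explicit finite-difference step of u_t = D u_xx + r u (1 - u),
    a common diffusion-reaction prototype for fire spread.  Periodic
    boundaries via np.roll; stable for D*dt/dx**2 < 0.5."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + dt * (D * lap + r * u * (1.0 - u))

def enkf_update(ens, H, y, obs_std, rng):
    """Perturbed-observation EnKF analysis.  ens: (N, d); H: (m, d); y: (m,)."""
    N = ens.shape[0]
    X = ens - ens.mean(axis=0)                    # state perturbations
    Y = X @ H.T                                   # projected perturbations
    Pyy = Y.T @ Y / (N - 1) + obs_std**2 * np.eye(H.shape[0])
    Pxy = X.T @ Y / (N - 1)
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    y_pert = y + rng.normal(0.0, obs_std, size=(N, H.shape[0]))
    return ens + (y_pert - ens @ H.T) @ K.T       # per-member analysis
```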
NASA Astrophysics Data System (ADS)
Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.
2005-08-01
The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.
An Ensemble Framework Coping with Instability in the Gene Selection Process.
Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M
2018-03-01
This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces different instability problems from the informative gene subsets found by different filter methods. This makes the identification of significant genes by the experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease and multiple valid groups of biomarkers. Even though there is a wide number of proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. This framework performs a process of stable feature selection, facing the problems above and, thus, providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection were added in such a way that different classifiers simultaneously assess gene subsets to face instability. Firstly, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability according to filter methods). Next, we apply an ensemble of known classifiers to filter genes relevant to all classifiers at a time (stability according to classification methods). The achieved results were evaluated in two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability according to the disease, for which promising results were achieved.
Ocean Predictability and Uncertainty Forecasts Using Local Ensemble Transform Kalman Filter (LETKF)
NASA Astrophysics Data System (ADS)
Wei, M.; Hogan, P. J.; Rowley, C. D.; Smedstad, O. M.; Wallcraft, A. J.; Penny, S. G.
2017-12-01
Ocean predictability and uncertainty are studied with an ensemble system developed from the US Navy's operational HYCOM using Local Ensemble Transform Kalman Filter (LETKF) technology. One advantage of this method is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates operational observations using an ensemble method. The background covariance during this assimilation process is supplied implicitly by the ensemble, avoiding the difficult task of developing tangent linear and adjoint models out of HYCOM, with its complicated hybrid isopycnal vertical coordinate, for 4D-Var. The flow-dependent background covariance from the ensemble will be an indispensable part of the next-generation hybrid 4D-Var/ensemble data assimilation system. The predictability and uncertainty of the ocean forecasts are studied initially for the Gulf of Mexico. The results are compared with another ensemble system using the Ensemble Transform (ET) method, which has been used in the Navy's operational center. The advantages and disadvantages are discussed.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Welch, Greg
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
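The measurement-subspace idea above can be sketched as a generalized symmetric eigenvalue problem, here whitened with a Cholesky factor so that plain `numpy.linalg.eigh` suffices. This is a plausible reading of the method (ranking observation directions by signal-to-noise), not the authors' exact formulation.

```python
import numpy as np

def informative_subspace(P, R, H, k):
    """Solve H P H^T v = lam R v and keep the k leading eigenpairs.

    P : (d, d) state covariance;  R : (m, m) observation-error covariance
    H : (m, d) observation operator;  k : subspace dimension
    Returns the k largest generalized eigenvalues and their directions.
    """
    S = H @ P @ H.T                        # state covariance in observation space
    L = np.linalg.cholesky(R)              # R = L L^T
    Li = np.linalg.inv(L)
    M = Li @ S @ Li.T                      # whitened, ordinary symmetric problem
    lam, W = np.linalg.eigh(M)             # ascending eigenvalues
    order = np.argsort(lam)[::-1][:k]
    return lam[order], Li.T @ W[:, order]  # map back to generalized eigenvectors
```

With diagonal P and identity H and R, the selected eigenvalues are simply the largest state variances, as one would expect.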
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.
2013-01-01
A two-step ensemble recentering Kalman filter (ERKF) analysis scheme is introduced. The algorithm consists of a recentering step followed by an ensemble Kalman filter (EnKF) analysis step. The recentering step is formulated such as to adjust the prior distribution of an ensemble of model states so that the deviations of individual samples from the sample mean are unchanged but the original sample mean is shifted to the prior position of the most likely particle, where the likelihood of each particle is measured in terms of closeness to a chosen subset of the observations. The computational cost of the ERKF is essentially the same as that of a same size EnKF. The ERKF is applied to the assimilation of Argo temperature profiles into the OGCM component of an ensemble of NASA GEOS-5 coupled models. Unassimilated Argo salt data are used for validation. A surprisingly small number (16) of model trajectories is sufficient to significantly improve model estimates of salinity over estimates from an ensemble run without assimilation. The two-step algorithm also performs better than the EnKF although its performance is degraded in poorly observed regions.
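The recentering step described above reduces to a one-line ensemble operation: shift the sample mean to the most likely member while leaving the deviations from the mean untouched. The likelihood function here is a user-supplied stand-in for "closeness to a chosen subset of the observations".

```python
import numpy as np

def recenter(ens, log_likelihood):
    """Recentering step of the ERKF: same spread, mean moved to the most
    likely particle.

    ens            : (N, d) ensemble of model states
    log_likelihood : f(ens) -> (N,), score of each member against the
                     chosen observation subset
    """
    mean = ens.mean(axis=0)
    best = ens[np.argmax(log_likelihood(ens))]   # most likely particle
    return ens - mean + best                     # deviations unchanged, mean shifted
```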
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses of sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset: a collection of different operational forecasting analyses together with ad hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques can prevent overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure by a simple spatial filter applied after the linear regression. Our results show that super-ensemble performance depends on the selection of an unbiased operator and on the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the Root Mean Square Error (RMSE) of the MMSE analysis evaluated against observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.
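The regression core of the super-ensemble can be sketched with ordinary least squares over the training period; the EOF filtering and the a posteriori spatial filter from the paper are omitted, and the bias term is an assumption of this sketch.

```python
import numpy as np

def superensemble_weights(members, truth):
    """Least-squares combination weights for super-ensemble training.

    members : (n_members, n_samples) hindcasts over the learning period
    truth   : (n_samples,) observed values (e.g. satellite SST)
    Returns n_members weights plus a trailing bias term.
    """
    A = np.vstack([members, np.ones(members.shape[1])])  # design matrix with bias row
    w, *_ = np.linalg.lstsq(A.T, truth, rcond=None)
    return w

def superensemble_predict(members, w):
    """Apply trained weights to a new set of member forecasts."""
    A = np.vstack([members, np.ones(members.shape[1])])
    return w @ A
```

When the truth is an exact linear combination of the members, the fit recovers it, which is the idealized best case for the technique.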
Assimilating Remotely Sensed Surface Soil Moisture into SWAT using Ensemble Kalman Filter
USDA-ARS?s Scientific Manuscript database
In this study, a 1-D Ensemble Kalman Filter has been used to update the soil moisture states of the Soil and Water Assessment Tool (SWAT) model. Experiments were conducted for the Cobb Creek Watershed in southeastern Oklahoma for 2006-2008. Assimilation of in situ data proved limited success in the ...
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
NASA Astrophysics Data System (ADS)
Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.
2012-12-01
The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
Ocean state and uncertainty forecasts using HYCOM with Local Ensemble Transform Kalman Filter (LETKF)
NASA Astrophysics Data System (ADS)
Wei, Mozheng; Hogan, Pat; Rowley, Clark; Smedstad, Ole-Martin; Wallcraft, Alan; Penny, Steve
2017-04-01
An ensemble forecast system based on the US Navy's operational HYCOM using Local Ensemble Transform Kalman Filter (LETKF) technology has been developed for ocean state and uncertainty forecasts. One advantage is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates the operational observations using an ensemble method. The background covariance during this assimilation process is supplied by the ensemble, thus avoiding the difficulty of developing tangent linear and adjoint models for 4D-Var from the complicated hybrid isopycnal vertical coordinate in HYCOM. Another advantage is that the ensemble system provides a valuable uncertainty estimate corresponding to every state forecast from HYCOM. Uncertainty forecasts have proven critical for downstream users and managers in making more scientifically sound decisions in the numerical prediction community. In addition, the ensemble mean is generally more accurate and skilful than a single traditional deterministic forecast at the same resolution. We will introduce the ensemble system design and setup, present results from a 30-member ensemble experiment, and discuss scientific, technical and computational issues and challenges, such as covariance localization, inflation, model-related uncertainties and sensitivity to the ensemble size.
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
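In its simplest form, the ensemble dispersion filter described above reduces to a nudge of each member's ocean state toward the ensemble mean. The `strength` parameter is an illustrative generalization of this sketch, not the operator used in the paper.

```python
import numpy as np

def dispersion_filter(ocean_states, strength=1.0):
    """Shift each ensemble member toward the ensemble mean.

    ocean_states : (N, d) array, one ocean state vector per member
    strength     : 1.0 collapses the spread onto the mean entirely;
                   values in (0, 1) apply a partial nudge
    """
    mean = ocean_states.mean(axis=0)
    return ocean_states + strength * (mean - ocean_states)
```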
NASA Astrophysics Data System (ADS)
Parazoo, Nicholas C.
Mass transport along moist isentropic surfaces on baroclinic waves represents an important component of the atmospheric heat engine that operates between the equator and poles. This is also an important vehicle for tracer transport, and is correlated with ecosystem metabolism because large-scale baroclinicity and photosynthesis are both driven seasonally by variations in solar radiation. In this research, I pursue a dynamical framework for explaining atmospheric transport of CO2 by synoptic weather systems at middle and high latitudes. A global model of atmospheric tracer transport, driven by meteorological analysis in combination with a detailed description of surface fluxes, is used to create time varying CO2 distributions in the atmosphere. Simulated mass fluxes of CO2 are then decomposed into a zonal monthly mean component and deviations from the monthly mean in space and time. Mass fluxes of CO2 are described on moist isentropic surfaces to represent frontal transport along storm tracks. Forward simulations suggest that synoptic weather systems transport large amounts of CO2 north and south in northern mid-latitudes, up to 1 PgC month-1 during winter when baroclinic wave activity peaks. During boreal winter when northern plants respire, warm moist air, high in CO2, is swept upward and poleward along the east side of baroclinic waves and injected into the polar vortex, while cold dry air, low in CO2, that had been transported into the polar vortex earlier in the year is advected equatorward. These synoptic eddies act to strongly reduce seasonality of CO2 in the biologically active mid-latitudes by 50% of that implied by local net ecosystem exchange while correspondingly amplifying seasonality in the Arctic. Transport along storm tracks is correlated with rising, moist, cloudy air, which systematically hides this CO2 transport from satellite observing systems.
Meridional fluxes of CO2 are comparable in magnitude to surface exchange of CO2 in mid-latitudes, and thus require careful consideration in (inverse) modeling of the carbon cycle. Because synoptic transport of CO2 by frontal systems and moist processes is generally unobserved and poorly represented in global models, it may be a source of error for inverse flux estimates. Uncertainty in CO2 transport by synoptic eddies is investigated using a global model driven by four reanalysis products from the Goddard EOS Data Assimilation System for 2005. Eddy transport is found to be highly variable between model analyses, with significant seasonal differences of up to 0.2 PgC, which represents up to 50% of fossil fuel emissions. The variations are caused primarily by differences in grid spacing and vertical mixing by moist convection and PBL turbulence. To test for aliasing of transport bias into inverse flux estimates, synthetic satellite data are generated using a model at 50 km global resolution and inverted using a global model run with coarse grid transport. An ensemble filtering method called the Maximum Likelihood Ensemble Filter (MLEF) is used to optimize fluxes. Flux estimates are found to be highly sensitive to transport biases at pixel and continental scale, with errors of up to 0.5 PgC year-1 in Europe and North America.
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates.
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
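The ensemble Kalman filter update that schemes like this build on can be sketched in a few lines of NumPy. This is a minimal stochastic EnKF with perturbed observations, not the localized, GMM-clustered variant the study develops; all sizes and variable names are illustrative:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    X : (n, N) forecast ensemble (columns are members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf = A @ A.T / (N - 1)                         # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R) # Kalman gain
    # perturb the observation once per member to keep analysis spread consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(3, 50))             # 3-dim state, 50 members
H = np.eye(3)[:2]                                  # observe the first two components
R = 0.1 * np.eye(2)
Xa = enkf_update(X, np.array([1.0, -1.0]), H, R, rng)
```

After the update, the analysis mean of observed components moves toward the observations and the ensemble spread contracts; the paper's GMM clustering would replace the single sample covariance `Pf` with per-cluster statistics.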
NASA Technical Reports Server (NTRS)
Buchard, Virginie; Da Silva, Arlindo; Todling, Ricardo
2017-01-01
In the GEOS near-real-time system, as well as in MERRA-2, the latest reanalysis produced at NASA's Global Modeling and Assimilation Office (GMAO), the assimilation of aerosol observations is performed by means of a so-called analysis splitting method. In line with the transition of the GEOS meteorological data assimilation system to a hybrid ensemble-variational formulation, we are updating the aerosol component of our assimilation system to an ensemble square root filter (EnSRF; Whitaker and Hamill (2002)) type of scheme. We present a summary of our preliminary results of the assimilation of column-integrated aerosol observations (aerosol optical depth; AOD) using the EnSRF scheme and the ensemble members produced routinely by the meteorological assimilation.
NASA Astrophysics Data System (ADS)
Miyoshi, Takemasa; Kunii, Masaru
2012-03-01
The local ensemble transform Kalman filter (LETKF) is implemented with the Weather Research and Forecasting (WRF) model, and real observations are assimilated to assess the newly developed WRF-LETKF system. The WRF model is a widely used mesoscale numerical weather prediction model, and the LETKF is an ensemble Kalman filter (EnKF) algorithm that is particularly efficient on parallel computer architectures. This study aims to provide the basis for future research on mesoscale data assimilation using the WRF-LETKF system, an additional testbed to the existing EnKF systems with the WRF model used in previous studies. The particular LETKF system adopted in this study is based on the system initially developed in 2004 and has been continuously improved through theoretical studies and wide application to many kinds of dynamical models, including realistic geophysical models. The most recent and important improvements include an adaptive covariance inflation scheme which considers the spatial and temporal inhomogeneity of the inflation parameters. Experiments show that the LETKF successfully assimilates real observations and that adaptive inflation is advantageous. Additional experiments with various ensemble sizes show that using more ensemble members consistently improves the analyses.
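The LETKF analysis for a single local region can be sketched compactly following the standard ensemble-transform formulation (Hunt et al., 2007). This is illustrative NumPy, not the operational WRF-LETKF code; sizes and values below are made up:

```python
import numpy as np

def letkf_update(X, y, H, R_inv):
    """LETKF analysis for one local region (ensemble-transform formulation).

    X : (n, N) forecast ensemble, y : (m,) local observations,
    H : (m, n) observation operator, R_inv : (m, m) inverse obs covariance.
    """
    n, N = X.shape
    xb = X.mean(axis=1, keepdims=True)
    A = X - xb                                    # state anomalies
    Yb = H @ A                                    # observation-space anomalies
    C = Yb.T @ R_inv
    Pa_tilde = np.linalg.inv((N - 1) * np.eye(N) + C @ Yb)
    w_mean = Pa_tilde @ C @ (y - (H @ xb).ravel())
    # symmetric square root gives the deterministic ensemble transform
    vals, vecs = np.linalg.eigh((N - 1) * Pa_tilde)
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return xb + A @ (w_mean[:, None] + W)

rng = np.random.default_rng(1)
X = 0.5 + rng.normal(0.0, 1.0, size=(4, 20))      # 4-dim local state, 20 members
H = np.eye(4)[:2]                                 # observe the first two components
R_inv = np.linalg.inv(0.25 * np.eye(2))
Xa = letkf_update(X, np.array([2.0, 2.0]), H, R_inv)
```

In the full LETKF this update runs independently at every grid point using only nearby observations, which is what makes the algorithm parallelize so well.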
Efficient data assimilation algorithm for bathymetry application
NASA Astrophysics Data System (ADS)
Ghorbanidehno, H.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.
2017-12-01
Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and nearshore hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques, such as ensemble-based Kalman filters, with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. Here, the Compressed State Kalman Filter (CSKF) method is used to estimate the bathymetry based on observed wave celerity. In order to demonstrate the accuracy and robustness of the CSKF method, we consider twin tests with synthetic observations of wave celerity, while the bathymetry profiles are chosen based on surveys taken by the U.S. Army Corps of Engineers Field Research Facility (FRF) in Duck, NC. The first test case is a bathymetry estimation problem for a spatially smooth and temporally constant bathymetry profile. The second test case is a bathymetry estimation problem for a bathymetry evolving in time from a smooth to a non-smooth profile. For both problems, we compare the results of the CSKF with those obtained by the local ensemble transform Kalman filter (LETKF), a popular ensemble-based Kalman filter method.
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.
2016-12-01
The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to forecasts based on 1) a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) the full IHC ensemble, without filtering, paired with the full GEFS ensemble.
In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1- to 7-day lead times.
Assimilation of water temperature and discharge data for ensemble water temperature forecasting
NASA Astrophysics Data System (ADS)
Ouellet-Proulx, Sébastien; Chimi Chiadjeu, Olivier; Boucher, Marie-Amélie; St-Hilaire, André
2017-11-01
Recent work demonstrated the value of water temperature forecasts for improving water resources allocation and highlighted the importance of quantifying their uncertainty adequately. In this study, we perform a multisite cascading ensemble assimilation of discharge and water temperature on the Nechako River (Canada) using particle filters. Hydrological and thermal initial conditions were provided to a rainfall-runoff model, coupled to a thermal module, using ensemble meteorological forecasts as inputs to produce 5-day ensemble thermal forecasts. Results show good performance of the particle filters, with improvements in the accuracy of initial conditions of more than 65% compared to simulations without data assimilation, for both the hydrological and the thermal component. All thermal forecasts returned continuous ranked probability scores under 0.8 °C when using a set of 40 initial conditions and meteorological forecasts comprising 20 members. A greater contribution of the initial conditions to the total uncertainty of the system is observed for 1-day forecasts (mean ensemble spread = 1.1 °C) compared to meteorological forcings (mean ensemble spread = 0.6 °C). The inclusion of meteorological uncertainty is critical to maintaining reliable forecasts and proper ensemble spread for lead times of 2 days and more. This work demonstrates the ability of the particle filters to properly update the initial conditions of a coupled hydrological and thermal model and offers insights regarding the contribution of two major sources of uncertainty to the overall uncertainty in thermal forecasts.
USDA-ARS?s Scientific Manuscript database
Data from modern soil water contents probes can be used for data assimilation in soil water flow modeling, i.e. continual correction of the flow model performance based on observations. The ensemble Kalman filter appears to be an appropriate method for that. The method requires estimates of the unce...
An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring
NASA Astrophysics Data System (ADS)
Li, J. Y.; Kitanidis, P. K.
2013-12-01
Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
A Sequential Ensemble Prediction System at Convection Permitting Scales
NASA Astrophysics Data System (ADS)
Milan, M.; Simmer, C.
2012-04-01
A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasts. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state space evolution due to convectively driven processes. One way to take full account of nonlinear state developments is provided by particle filter methods, whose basic idea is to represent the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, the particle filter with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we substitute the likelihood-based definition of the weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of resampling that takes account of simulated state space clustering, and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space region with higher likelihood at a subsequent filter time, thus mimicking nonlinear system state developments (e.g. sudden convection initiation), which remedies timing errors for convection due to model errors and/or imperfect initial conditions.
We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated. During the model evolution, one particle of each pair evolves using the forward model alone; the second particle is additionally nudged toward the radar and satellite observations during its evolution based on the forward model.
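The plain SIR step that the scheme modifies can be sketched as follows. Here the weights come from a Gaussian observation likelihood, whereas the abstract replaces the likelihood with a distance metric and adds cluster-aware resampling and nudging; this is a minimal 1-D illustration with made-up numbers:

```python
import numpy as np

def sir_step(particles, obs, obs_std, rng):
    """One SIR cycle: weight each particle by the Gaussian likelihood of
    the observation, then resample with replacement so that high-weight
    particles are duplicated and low-weight particles are abandoned."""
    w = np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 5.0, size=200)   # broad prior, far from the obs
posterior = sir_step(particles, obs=1.0, obs_std=0.5, rng=rng)
```

After resampling, the ensemble concentrates around the observation, which is exactly the degeneracy risk the clustering strategy above is designed to counteract.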
Banerjee, Biswanath; Roy, Debasish; Vasu, Ram Mohan
2009-08-01
A computationally efficient pseudodynamical filtering setup is established for elasticity imaging (i.e., reconstruction of shear modulus distribution) in soft-tissue organs given statically recorded and partially measured displacement data. Unlike a regularized quasi-Newton method (QNM) that needs inversion of ill-conditioned matrices, the authors explore pseudodynamic extended and ensemble Kalman filters (PD-EKF and PD-EnKF) that use a parsimonious representation of states and bypass explicit regularization by recursion over pseudotime. Numerical experiments with QNM and the two filters suggest that the PD-EnKF is the most robust performer as it exhibits no sensitivity to process noise covariance and yields good reconstruction even with small ensemble sizes.
Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing
NASA Astrophysics Data System (ADS)
Toye, Habib; Zhan, Peng; Gopalakrishnan, Ganesh; Kartadikaria, Aditya R.; Huang, Huang; Knio, Omar; Hoteit, Ibrahim
2017-07-01
We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Assimilation Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel, based on which we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble optimal interpolation (EnOI) algorithm. This approach requires only a single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is seasonally selected from a climatology of long-term model outputs. Observations of remotely sensed sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal) EnOI and compare their performance for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and to the atmospheric forcing.
Information flow in an atmospheric model and data assimilation
NASA Astrophysics Data System (ADS)
Yoon, Young-noh
2011-12-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. 
Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.
Valdes-Abellan, Javier; Pachepsky, Yakov; Martinez, Gonzalo
2018-01-01
Data assimilation is becoming a promising technique in hydrologic modelling, used not only to update model states but also to infer model parameters, specifically soil hydraulic properties in Richards-equation-based soil water models. The ensemble Kalman filter (EnKF) is one of the most widely employed of the available data assimilation methods. In this study the complete Matlab© code used to study soil data assimilation efficiency under different soil and climatic conditions is presented. The code shows how data assimilation through the EnKF was implemented. The Richards equation was solved with the Hydrus-1D software, which was run from Matlab.
• MATLAB routines are released to be used/modified without restrictions by other researchers.
• Data assimilation Ensemble Kalman Filter method code.
• Soil water Richards equation flow solved by Hydrus-1D.
2011-09-01
the ensemble perturbations {X^b(k): k = 1, …, K} are from the same distribution; thus P̂^bc ≈ (1/(K − 1)) Σ_{k=1}^{K−1} P^btc ≈ P^btc, (18) and p̂^bc_u ≈ p^btc_u, (19) where p̂^bc_u and p^btc_u are the u-th columns of P̂^bc and P^btc, respectively. Similar arguments can be made to show that the filtered estimate should also satisfy p̃^bc_u ≈ p^btc_u, (20) where p̃^bc_u is the u-th column of P̃^bc. We emphasize that Eqs. (19) and (20) do not provide
NASA Astrophysics Data System (ADS)
Yongye, Austin B.; Bender, Andreas; Martínez-Mayorga, Karina
2010-08-01
Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining the biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean-square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 Å for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments.
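The greedy RMS-filtering idea used by such conformer generators can be sketched as follows. Note this toy version compares raw coordinates without the least-squares superposition a real tool such as OMEGA performs, and the function names and coordinates are purely illustrative:

```python
import numpy as np

def rmsd(a, b):
    """Coordinate RMSD (Å) between two (n_atoms, 3) conformers.
    No alignment is performed, unlike in real conformer tools."""
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

def rms_filter(conformers, threshold):
    """Greedy RMS filtering: keep a conformer only if it lies more than
    `threshold` from every conformer already kept."""
    kept = []
    for c in conformers:
        if all(rmsd(c, k) > threshold for k in kept):
            kept.append(c)
    return kept

base = np.zeros((10, 3))          # toy 10-atom conformer
near_dup = base + 0.05            # within a 0.8 Å threshold of `base`
distinct = base + 2.0             # well outside the threshold
kept = rms_filter([base, near_dup, distinct], threshold=0.8)
```

The rotor-dependent thresholds proposed in the abstract (0.8, 1.0, 1.4 Å) would be passed in as `threshold` depending on the number of rotatable bonds.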
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distribution of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous work [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools for combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized with the 1D coupled model GOTM-NORWECOM and Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele; Borovikov, Anna Y.; Suarez, Max
1999-01-01
A massively parallel ensemble Kalman filter (EnKF) is used to assimilate temperature data from the TOGA/TAO array and altimetry from TOPEX/POSEIDON into a Pacific basin version of the NASA Seasonal to Interannual Prediction Project (NSIPP)'s quasi-isopycnal ocean general circulation model. The EnKF is an approximate Kalman filter in which the error-covariance propagation step is modeled by the integration of multiple instances of a numerical model. An estimate of the true error covariances is then inferred from the distribution of the ensemble of model state vectors. This implementation of the filter takes advantage of the inherent parallelism in the EnKF algorithm by running all the model instances concurrently. The Kalman filter update step also occurs in parallel by having each processor handle the observations that occur in the region of physical space for which it is responsible. The massively parallel data assimilation system is validated by withholding some of the data and then quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The distributions of the forecast and analysis error covariances predicted by the EnKF are also examined.
Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin
2017-11-01
Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm (of complexity O(D)) based on a large ensemble of CFB trackers. The ensemble (of size O(2^D)) is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
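A single-channel correlation filter of the kind CFB trackers build on can be trained in closed form in the Fourier domain (MOSSE-style). This is a generic sketch of the underlying technique, not the tree-ensemble method of the paper; the patch and peak location below are arbitrary:

```python
import numpy as np

def train_correlation_filter(patch, desired_response, lam=1e-2):
    """Closed-form MOSSE-style filter: H* = (G · conj(F)) / (F · conj(F) + lam),
    where F and G are the FFTs of the training patch and desired response,
    and lam is a small regularizer."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(desired_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H_star, patch):
    """Correlate the filter with a patch; the response peak locates the target."""
    return np.real(np.fft.ifft2(H_star * np.fft.fft2(patch)))

rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16))                 # toy appearance patch
desired = np.zeros((16, 16))
desired[5, 9] = 1.0                               # peak at the target position
H_star = train_correlation_filter(patch, desired)
response = respond(H_star, patch)
```

Because everything runs through FFTs, training and evaluation cost O(n log n) per patch, which is the computational efficiency the abstract refers to.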
Application of Ensemble Kalman Filter in Power System State Tracking and Sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Huang, Zhenyu; Zhou, Ning
2012-05-01
The ensemble Kalman filter (EnKF) is proposed to track the dynamic states of generators. The algorithm of the EnKF and its application to generator state tracking are presented in detail. The accuracy and sensitivity of the method are analyzed with respect to initial state errors, measurement noise, unknown fault locations, time steps and parameter errors. It is demonstrated through simulation studies that even with some errors in the parameters, the developed EnKF can effectively track generator dynamic states using disturbance data.
Multilevel ensemble Kalman filtering
Hoel, Hakon; Law, Kody J. H.; Tempone, Raul
2016-06-14
This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
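The multilevel idea can be illustrated on a toy SDE, outside any filtering: estimate E[X_T] by a telescoping sum in which coupled coarse/fine Euler-Maruyama paths share Brownian increments, so the finer (more expensive) levels need few samples. This sketches the sampling strategy the paper embeds in the EnKF, not the multilevel EnKF itself; the SDE, level counts, and sample sizes are illustrative:

```python
import numpy as np

def euler_path(x0, T, steps, dW):
    """Euler-Maruyama for dX = -X dt + 0.5 dW over `steps` steps."""
    dt = T / steps
    x = x0
    for k in range(steps):
        x = x - x * dt + 0.5 * dW[k]
    return x

def mlmc_mean(x0=1.0, T=1.0, L=3, M0=4096, seed=0):
    """Telescoping MLMC estimate of E[X_T]: a coarse-level estimator plus
    fine-minus-coarse corrections computed from coupled paths."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        n_fine = 2 ** l                       # time grid for level l
        M = max(M0 // 4 ** l, 32)             # fewer samples on finer levels
        dt = T / n_fine
        dW = rng.normal(0.0, np.sqrt(dt), size=(M, n_fine))
        fine = np.array([euler_path(x0, T, n_fine, w) for w in dW])
        if l == 0:
            est += fine.mean()
        else:
            # coarse path driven by summed pairs of the SAME increments,
            # so the fine-coarse difference has small variance
            dWc = dW[:, 0::2] + dW[:, 1::2]
            coarse = np.array([euler_path(x0, T, n_fine // 2, w) for w in dWc])
            est += (fine - coarse).mean()
    return est

est = mlmc_mean()
```

For this linear SDE the exact answer is E[X_T] = x0·e^(-T) ≈ 0.368; the coupling is what lets the correction levels get away with far fewer samples than a single-level Monte Carlo of the same accuracy.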
xEMD procedures as a data-assisted filtering method
NASA Astrophysics Data System (ADS)
Machrowska, Anna; Jonak, Józef
2018-01-01
The article presents the possibility of using Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.
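The ensemble averaging at the heart of EEMD can be sketched as below. To stay dependency-free, the sifting step is replaced here by a crude moving-average detrend (a real EMD interpolates spline envelopes through the signal's extrema), so treat this purely as an illustration of the noise-assisted averaging idea:

```python
import numpy as np

def extract_first_imf(x, n_sift=2):
    """Stand-in sifting step: subtract a moving-average 'envelope mean'.
    Real EMD uses spline envelopes of extrema; this crude high-pass
    keeps the example free of external dependencies."""
    imf = x.copy()
    kernel = np.ones(5) / 5.0
    for _ in range(n_sift):
        trend = np.convolve(imf, kernel, mode="same")
        imf = imf - trend
    return imf

def eemd_first_imf(x, n_ensemble=50, noise_std=0.2, seed=0):
    """EEMD idea: add independent white noise to the signal, extract the
    first mode from each noisy copy, and average so the noise cancels."""
    rng = np.random.default_rng(seed)
    imfs = [extract_first_imf(x + rng.normal(0.0, noise_std, x.size))
            for _ in range(n_ensemble)]
    return np.mean(imfs, axis=0)

t = np.arange(256)
low = np.sin(2 * np.pi * t / 64)     # slow component (wear trend stand-in)
high = np.sin(2 * np.pi * t / 8)     # fast component (vibration stand-in)
imf = eemd_first_imf(low + high)
```

The added noise populates the whole time-frequency plane, which is what lets EEMD (and its CEEMDAN/ICEEMD refinements) avoid the mode-mixing problem of plain EMD.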
NASA Technical Reports Server (NTRS)
Keppenne, C. L.; Rienecker, M.; Borovikov, A. Y.
1999-01-01
Two massively parallel data assimilation systems, in which the model forecast-error covariances are estimated from the distribution of an ensemble of model integrations, are applied to the assimilation of 97-98 TOPEX/POSEIDON altimetry and TOGA/TAO temperature data into a Pacific basin version of the NASA Seasonal to Interannual Prediction Project (NSIPP)'s quasi-isopycnal ocean general circulation model. In the first system, an ensemble of model runs forced by an ensemble of atmospheric model simulations is used to calculate asymptotic error statistics. The data assimilation then occurs in the reduced phase space spanned by the corresponding leading empirical orthogonal functions. The second system is an ensemble Kalman filter in which new error statistics are computed during each assimilation cycle from the time-dependent ensemble distribution. The data assimilation experiments are conducted on NSIPP's 512-processor CRAY T3E. The two data assimilation systems are validated by withholding part of the data and quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The pros and cons of each system are discussed.
Can decadal climate predictions be improved by ocean ensemble dispersion filtering?
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-12-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. The ocean memory due to its heat capacity holds great potential skill on the decadal scale. In recent years, more precise initialization techniques for coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions results in an ensemble. Using and evaluating the whole ensemble or its ensemble average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model in its low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as used in the decadal climate forecast published in 2017: http://www.fona-miklip.de/decadal-forecast-2017-2026/decadal-forecast-for-2017-2026/ More information about this study is available in JAMES: DOI: 10.1002/2016MS000787
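Shifting the ocean state toward the ensemble mean can be sketched as a simple relaxation of each member. The `strength` parameter below is a hypothetical knob for how much spread is removed; the study itself applies the shift to full ocean states at seasonal intervals.

```python
import numpy as np

def dispersion_filter(members, strength=1.0):
    """Nudge every ensemble member toward the ensemble mean.
    members: array of shape (n_members, n_state).
    strength=1.0 removes the spread entirely; 0.0 leaves it untouched.
    The ensemble mean itself is preserved exactly."""
    mean = members.mean(axis=0)
    return mean + (1.0 - strength) * (members - mean)
```

In a prediction system this would be applied between integration segments (e.g. once per season), so that members keep exploring the attractor but their dispersion is repeatedly damped.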
Vision-based posture recognition using an ensemble classifier and a vote filter
NASA Astrophysics Data System (ADS)
Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun
2016-10-01
Posture recognition is a very important Human-Robot Interaction (HRI) modality. To segment an effective posture from an image, we propose an improved region-grow algorithm that combines region growing with a single Gaussian color model. Experiments show that the improved region-grow algorithm extracts a more complete and accurate posture than the traditional single Gaussian model and region-grow algorithm, while eliminating similar regions from the background at the same time. In the posture recognition part, to improve the recognition rate we propose a CNN ensemble classifier, and to reduce misjudgments during continuous gesture control we propose a vote filter applied to the sequence of recognition results. The CNN ensemble classifier yields a 96.27% recognition rate, better than that of a single CNN classifier, and the proposed vote filter improves the recognition result and reduces misjudgments during consecutive gesture switches.
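A vote filter over a stream of per-frame recognition labels can be sketched as a sliding-window majority vote; the window length below is an assumption for illustration, not the paper's setting.

```python
from collections import Counter

def vote_filter(labels, window=5):
    """Replace each frame's label by the majority vote over the most
    recent `window` recognitions, suppressing isolated misjudgments
    while still tracking genuine gesture switches."""
    out = []
    for i in range(len(labels)):
        recent = labels[max(0, i - window + 1): i + 1]
        out.append(Counter(recent).most_common(1)[0][0])
    return out
```

A single spurious classification inside the window is outvoted by its neighbors, which is the behavior the abstract describes for consecutive gesture switches.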
NASA Astrophysics Data System (ADS)
Akita, T.; Takaki, R.; Shima, E.
2012-04-01
An adaptive estimation method for spacecraft thermal mathematical models is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both temperature and uncertain thermal characteristic parameters are considered as state variables. In the method, the thermal characteristic parameters are automatically estimated as outputs of the filtered state variables, whereas in the usual thermal model correlation they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
Wang, Lutao; Xiao, Jun; Chai, Hua
2015-08-01
The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color blood imaging. Remaining clutter may cause bias in the mean blood frequency estimation and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design of three classes of filters, infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters, is reviewed and analyzed. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of mean blood flow speed under steady clutter conditions, and the clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low-speed blood flow signals. There is also no significant increase in computational complexity for eigen-based filters when the ensemble size is less than 10.
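The polynomial regression (Pol-Reg) wall filter amounts to projecting the slow-time Doppler ensemble onto the orthogonal complement of a low-order polynomial subspace, since clutter from stationary or slowly moving tissue varies slowly over the ensemble. A minimal real-valued sketch (actual Doppler ensembles are complex-valued; the same projection applies unchanged):

```python
import numpy as np

def polyreg_wall_filter(x, order=2):
    """Remove clutter from a slow-time ensemble x (length N) by
    projecting onto the orthogonal complement of the polynomial
    subspace of the given order."""
    n = len(x)
    t = np.arange(n)
    # Vandermonde basis: columns 1, t, t^2, ... up to `order`
    A = np.vander(t, order + 1, increasing=True).astype(float)
    Q, _ = np.linalg.qr(A)          # orthonormal clutter basis
    return x - Q @ (Q.conj().T @ x)
```

Any component of the ensemble that lies exactly in the polynomial subspace is removed entirely, while rapidly alternating (high-Doppler) components pass nearly untouched.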
Performance Analysis of Local Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
Tong, Xin T.
2018-03-01
Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will remain accurate over long times even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence model in two dynamical regimes.
Ensemble Kalman filter inference of spatially-varying Manning's n coefficients in the coastal ocean
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Knio, Omar; Dawson, Clint; Maître, Olivier Le; Hoteit, Ibrahim
2018-07-01
Ensemble Kalman filtering (EnKF) is an established framework for large-scale state estimation problems. EnKFs can also be used for state-parameter estimation, using the so-called "Joint-EnKF" approach. The idea is simply to augment the state vector with the parameters to be estimated and assign invariant dynamics for the time evolution of the parameters. In this contribution, we investigate the efficiency of the Joint-EnKF for estimating spatially-varying Manning's n coefficients used to define the bottom roughness in the Shallow Water Equations (SWEs) of a coastal ocean model. Observation System Simulation Experiments (OSSEs) are conducted using the ADvanced CIRCulation (ADCIRC) model, which solves a modified form of the Shallow Water Equations. A deterministic EnKF, the Singular Evolutive Interpolated Kalman (SEIK) filter, is used to estimate a vector of Manning's n coefficients defined at the model nodal points by assimilating synthetic water elevation data. It is found that with a reasonable ensemble size (O(10)), the filter's estimate converges to the reference Manning's field. To enhance performance, we further reduced the dimension of the parameter search space through a Karhunen-Loève (KL) expansion. We also iterated on the filter update step to better account for the nonlinearity of the parameter estimation problem. We study the sensitivity of the system to the ensemble size, localization scale, number of retained KL modes, and number of iterations. The performance of the proposed framework in terms of estimation accuracy suggests that a well-tuned Joint-EnKF provides a promising robust approach to infer spatially varying seabed roughness parameters in the context of coastal ocean modeling.
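The Joint-EnKF augmentation described above can be sketched generically: parameters are appended to the state vector, evolve with invariant dynamics, and are corrected only through their sampled correlation with the observed state. This is a stochastic (perturbed-observation) EnKF sketch under assumed shapes, not the SEIK filter used in the study.

```python
import numpy as np

def joint_enkf_update(states, params, H, y, R, rng):
    """One stochastic EnKF analysis on the augmented vector
    z = [state; parameters].  states: (n_x, n_ens), params: (n_p, n_ens),
    H: (m, n_x) observes the state only, y: (m,), R: (m, m)."""
    Z = np.vstack([states, params])              # augmented ensemble
    n_ens = Z.shape[1]
    Ha = np.hstack([H, np.zeros((H.shape[0], params.shape[0]))])
    Zm = Z - Z.mean(axis=1, keepdims=True)       # anomalies
    S = Ha @ Zm                                  # obs-space anomalies
    P_zy = Zm @ S.T / (n_ens - 1)                # cross covariance
    P_yy = S @ S.T / (n_ens - 1) + R             # innovation covariance
    K = P_zy @ np.linalg.inv(P_yy)
    y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, n_ens).T            # perturbed observations
    Za = Z + K @ (y_pert - Ha @ Z)
    n_x = states.shape[0]
    return Za[:n_x], Za[n_x:]
```

Because the observation operator never touches the parameter block, parameters move only through the sampled state-parameter covariance, which is exactly the mechanism the Joint-EnKF relies on.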
NASA Astrophysics Data System (ADS)
Erdal, Daniel; Cirpka, Olaf A.
2017-04-01
Regional groundwater flow strongly depends on groundwater recharge and hydraulic conductivity. While conductivity is a spatially variable field, recharge can vary in both space and time. Neither of the two fields can be reliably observed on larger scales, and their estimation from other sparse data sets is an open topic. Further, common hydraulic-head observations may not suffice to constrain both fields simultaneously. In the current work we use the ensemble Kalman filter to estimate spatially variable conductivity, spatiotemporally variable recharge, and porosity for a synthetic phreatic aquifer. We use transient hydraulic-head observations and one spatially distributed set of environmental-tracer observations to constrain the estimation. As environmental tracers generally reside in an aquifer for a long time, they require long simulation times and carry a long memory, which makes them highly unsuitable for use in a sequential framework. Therefore, in this work we use the environmental-tracer information to precondition the initial ensemble of recharge and conductivities before starting the sequential filter. Thereby, we aim at improving the performance of the sequential filter by limiting the range of the recharge to values similar to the long-term annual recharge means and by creating an initial ensemble of conductivities that shows patterns and values similar to the true field. The sequential filter is then used to further improve the parameters and to estimate the short-term temporal behavior as well as the temporally evolving head field needed for short-term predictions within the aquifer. For a virtual reality covering a subsection of the river Neckar it is shown that the use of environmental tracers can improve the performance of the filter. Results using the EnKF with and without this preconditioned initial ensemble are evaluated and discussed.
Preservation of physical properties with Ensemble-type Kalman Filter Algorithms
NASA Astrophysics Data System (ADS)
Janjic, T.
2017-12-01
We show the behavior of the localized Ensemble Kalman filter (EnKF) with respect to preservation of positivity and conservation of mass, energy and enstrophy in toy models that conserve these properties. In order to preserve physical properties in the analysis as well as to deal with non-Gaussianity in an EnKF framework, Janjic et al. (2014) proposed the use of physically based constraints in the analysis step to constrain the solution. In particular, constraints were used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In that study, mass and positivity were both preserved by formulating the filter update as a set of quadratic programming problems that incorporate nonnegativity constraints. Simple numerical experiments indicated that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that were more physically plausible both for individual ensemble members and for the ensemble mean. Moreover, in experiments designed to mimic the most important characteristics of convective motion, it is shown that the mass-conservation- and positivity-constrained rain significantly suppresses the noise seen in localized EnKF results. This is highly desirable in order to avoid spurious storms from appearing in the forecast starting from this initial condition (Lange and Craig 2014). In addition, the root mean square error is reduced for all fields and the total mass of rain is correctly simulated. Similarly, the enstrophy, divergence, and energy spectra can be strongly affected by the localization radius, thinning interval, and inflation, and depend on the variable that is observed (Zeng and Janjic, 2016). We constructed an ensemble data assimilation algorithm that conserves mass, total energy and enstrophy (Zeng et al., 2017).
With 2D shallow water model experiments, it is found that the conservation of enstrophy within the data assimilation effectively avoids the spurious energy cascade of rotational part and thereby successfully suppresses the noise generated by the data assimilation algorithm. The 14-day deterministic and ensemble free forecast, starting from the initial condition enforced by both total energy and enstrophy constraints, produces the best prediction.
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2016-04-01
In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histogram, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations, in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
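The incremental analysis update (IAU) idea, i.e. spreading the analysis increment over the model steps of an update window instead of adding it all at once, can be sketched as follows; the positioning of the window relative to the assimilation window (what distinguishes IAU 0, 50 and 100) is left out of this minimal sketch.

```python
import numpy as np

def iau_forecast(x0, step, increment, n_steps):
    """Incremental analysis update: instead of adding the full analysis
    increment to the state at once (intermittent update), add
    increment/n_steps at every model step of the update window, which
    suppresses initialization shocks.  `step` advances the model state
    by one time step."""
    x = x0.copy()
    for _ in range(n_steps):
        x = step(x) + increment / n_steps
    return x
```

With a trivial (identity) model the full increment is recovered at the end of the window, showing the scheme distributes rather than discards the correction.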
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
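Schur-product localization as described above can be sketched directly: the ensemble sample covariance is multiplied entry-wise by a distance-dependent taper. A triangular (Bartlett) taper stands in here for the Gaspari-Cohn function common in practice, and the 1-D positions are an illustrative assumption.

```python
import numpy as np

def localized_covariance(ensemble, positions, radius):
    """Schur (entry-wise) product of the sample covariance with a
    distance-based taper.  ensemble: (n_state, n_members),
    positions: 1-D coordinates of the state components."""
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (ensemble.shape[1] - 1)       # sample covariance
    d = np.abs(positions[:, None] - positions[None, :])
    taper = np.maximum(0.0, 1.0 - d / radius)   # triangular taper
    return P * taper
```

Entries between components farther apart than the radius are zeroed exactly, removing the spurious long-range correlations that a small ensemble would otherwise produce, while the diagonal (variances) is untouched.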
Data Assimilation in the ADAPT Photospheric Flux Transport Model
Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; ...
2015-03-17
Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
EMPIRE and pyenda: Two ensemble-based data assimilation systems written in Fortran and Python
NASA Astrophysics Data System (ADS)
Geppert, Gernot; Browne, Phil; van Leeuwen, Peter Jan; Merker, Claire
2017-04-01
We present and compare the features of two ensemble-based data assimilation frameworks, EMPIRE and pyenda. Both frameworks allow models to be coupled to the assimilation codes using the Message Passing Interface (MPI), leading to extremely efficient and fast coupling between models and the data assimilation codes. The Fortran-based system EMPIRE (Employing Message Passing Interface for Researching Ensembles) is optimized for parallel, high-performance computing. It currently includes a suite of data assimilation algorithms, including variants of the ensemble Kalman filter and several particle filters. EMPIRE is targeted at models of all levels of complexity and has been coupled to several geoscience models, e.g. the Lorenz-63 model, a barotropic vorticity model, the general circulation model HadCM3, the ocean model NEMO, and the land-surface model JULES. The Python-based system pyenda (Python Ensemble Data Assimilation) allows Fortran- and Python-based models to be used for data assimilation. Models can be coupled either using MPI or through a Python interface. Using Python allows quick prototyping, and pyenda is aimed at small- to medium-scale models. pyenda currently includes variants of the ensemble Kalman filter and has been coupled to the Lorenz-63 model, an advection-based precipitation nowcasting scheme, and the dynamic global vegetation model JSBACH.
2013-01-01
Background: Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods: This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima.
Results and conclusions: We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
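A Pareto-front filter retains exactly the non-dominated members of an ensemble. Below is a minimal sketch for minimization over arbitrary objective tuples; the paper's actual objectives (e.g. energy and structural distance) are not reproduced here.

```python
def pareto_filter(points):
    """Keep only non-dominated points under minimization: a point is
    dropped if some other point is <= in every objective and strictly
    < in at least one."""
    kept = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            kept.append(p)
    return kept
```

The result is the Pareto front of the sampled minima: no retained point can be improved in one objective without worsening another, which is why the filter yields a compact yet representative subset.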
NASA Astrophysics Data System (ADS)
Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.
2017-12-01
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc) and void ratio (e). The correlation structure, the mean and the variance for each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions. We further propose a data assimilation scheme by means of an ensemble Kalman filter to estimate the ensemble mean distribution of K, pore pressure and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. The ensemble Kalman filter method is then employed to estimate the ensemble mean distribution of K, pore pressure and total settlement based on the sequential assimilation of these pore-pressure measurements.
The ensemble-mean estimates from this procedure closely approximate those from the "true" solution. This procedure can be easily extended to other random variables such as compression index and void ratio.
Yongye, Austin B.; Bender, Andreas
2010-01-01
Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1–4), medium (5–9) and high (10–15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments. 
Electronic supplementary material The online version of this article (doi:10.1007/s10822-010-9365-1) contains supplementary material, which is available to authorized users. PMID:20499135
Stable time filtering of strongly unstable spatially extended systems
Grote, Marcus J.; Majda, Andrew J.
2006-01-01
Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant–Friedrichs–Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection–diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system. PMID:16682626
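The CFL-type constraint discussed above can be made concrete with the standard stability bounds for an explicit (forward-time) scheme for the convection-diffusion equation u_t + c u_x = D u_xx. The exact bounds depend on the discretization; the values below are the textbook ones, not the specific prototype scheme of the paper, whose point is precisely that filtering with observations can stabilize a scheme violating them.

```python
def cfl_stable(c, D, dx, dt):
    """Classical stability check for an explicit convection-diffusion
    scheme: Courant number |c|*dt/dx <= 1 and
    diffusion number D*dt/dx**2 <= 1/2."""
    courant = abs(c) * dt / dx
    diffusion = D * dt / dx ** 2
    return courant <= 1.0 and diffusion <= 0.5
```

For fixed physical parameters, the quadratic dx dependence of the diffusion bound is usually the binding one, which is why taking long time steps on a fine grid makes the unfiltered scheme strongly unstable.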
An optimal modification of a Kalman filter for time scales
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2003-01-01
The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
NASA Astrophysics Data System (ADS)
Noh, S.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-12-01
Applications of sequential data assimilation methods have been increasing in hydrology to reduce uncertainty in model predictions. In a distributed hydrologic model there are many types of state variables, and the variables interact with each other on different time scales. However, a framework to deal with the delayed response that originates from the different time scales of hydrologic processes has not been thoroughly addressed in hydrologic data assimilation. In this study, we propose a lagged filtering scheme to account for the lagged response of internal states in a distributed hydrologic model, using two filtering schemes: particle filtering (PF) and ensemble Kalman filtering (EnKF). The EnKF is one of the widely used sub-optimal filters, implementing an efficient computation with a limited number of ensemble members, but it is still based on a Gaussian approximation. PF can be an alternative, in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles, without any assumptions about the nature of the distributions involved. For PF, an advanced particle regularization scheme is also implemented to preserve the diversity of the particle system. For EnKF, the ensemble square root filter (EnSRF) is implemented. Each filtering method is parallelized and implemented on a high-performance computing system. A distributed hydrologic model, the water and energy transfer processes (WEP) model, is applied to the Katsura River catchment, Japan, to demonstrate the applicability of the proposed approaches. Forecast results via PF and EnKF are compared and analyzed in terms of prediction accuracy and probabilistic adequacy. Discussion focuses on the prospects and limitations of each data assimilation method.
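The resampling step at the core of particle filtering can be sketched with systematic resampling, a common low-variance scheme. This generic step stands in for, and is not, the regularized particle filter used in the study.

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Systematic resampling: draw one uniform offset and take evenly
    spaced points through the cumulative weights, which keeps particle
    diversity better than independent multinomial draws."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumsum = np.cumsum(weights / np.sum(weights))
    idx = np.searchsorted(cumsum, positions)
    return particles[idx]
```

Particles with negligible weight are discarded and high-weight particles duplicated, after which a regularization step (as in the study) can jitter the duplicates to restore diversity.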
Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models
1998-03-01
AFIT/DS/ENG/98-06. Dissertation by Stephen D. Ford, Captain.
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.
2016-12-01
Convective weather is known to be highly nonlinear and chaotic, and its location and timing are hard to predict precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussian probability density functions (PDFs) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can complete a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, a reduced-resolution BDA system at 1-km grid spacing showed a significantly non-Gaussian PDF even with 30-second updates. With a 10240-member ensemble Kalman filter and a global NWP model at 112-km grid spacing, we found roughly 1000 members sufficient to capture the non-Gaussian error structures. With these results in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the toy-model investigation of non-Gaussianity with dense and frequent observations.
A Global Carbon Assimilation System using a modified EnKF assimilation method
NASA Astrophysics Data System (ADS)
Zhang, S.; Zheng, X.; Chen, Z.; Dan, B.; Chen, J. M.; Yi, X.; Wang, L.; Wu, G.
2014-10-01
A Global Carbon Assimilation System based on the ensemble Kalman filter (GCAS-EK) is developed for assimilating atmospheric CO2 abundance data into an ecosystem model to simultaneously estimate the surface carbon fluxes and atmospheric CO2 distribution. This assimilation approach is based on the ensemble Kalman filter (EnKF), but with several new developments, including the use of analysis states to iteratively estimate ensemble forecast errors and a maximum likelihood estimation of the inflation factors of the forecast and observation errors. The proposed assimilation approach is tested in observing system simulation experiments and then used to estimate the terrestrial ecosystem carbon fluxes and atmospheric CO2 distributions from 2002 to 2008. The results show that this assimilation approach can effectively reduce the biases and uncertainties of the carbon fluxes simulated by the ecosystem model.
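A generic EnKF analysis step with a multiplicative inflation factor of the kind estimated above can be sketched as follows; the perturbed-observation form, the toy three-state problem, and the inflation value are illustrative assumptions, not the GCAS-EK implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, r_var, inflation=1.0):
    """Stochastic (perturbed-observation) EnKF analysis with multiplicative
    inflation of the forecast anomalies. X: (n_state, n_ens); H: (n_obs, n_state)."""
    n_ens = X.shape[1]
    xm = X.mean(axis=1, keepdims=True)
    A = inflation * (X - xm)              # inflated forecast anomalies
    X = xm + A
    Pf = A @ A.T / (n_ens - 1)            # sample forecast-error covariance
    S = H @ Pf @ H.T + r_var * np.eye(len(y))
    K = Pf @ H.T @ np.linalg.inv(S)       # ensemble Kalman gain
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r_var), (len(y), n_ens))
    return X + K @ (Y - H @ X)            # analysis ensemble

# Three-state toy problem observing only the first component
X = np.array([[5.0], [0.0], [0.0]]) + rng.normal(0.0, 1.0, (3, 50))
Xa = enkf_update(X, y=np.array([4.0]), H=np.array([[1.0, 0.0, 0.0]]),
                 r_var=0.25, inflation=1.05)
print(Xa.shape)
```

Inflation slightly widens the forecast spread before the update, compensating for the systematic underestimation of forecast error by a finite ensemble.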
Constraining a Coastal Ocean Model by Surface Observations Using an Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
De Mey, P. J.; Ayoub, N. K.
2016-02-01
We explore the impact of assimilating sea surface temperature (SST) and sea surface height (SSH) observations in the Bay of Biscay (North-East Atlantic). The study is conducted with the SYMPHONIE coastal circulation model (Marsaleix et al., 2009) on a 3 km × 3 km grid with 43 sigma levels. Ensembles are generated by perturbing the wind forcing to analyze the model error subspace spanned by its response to wind forcing uncertainties. The assimilation method is a 4D ensemble Kalman filter algorithm with localization. We use the SDAP code developed in the team (https://sourceforge.net/projects/sequoia-dap/). As a first step before the assimilation of real observations, we set up an ensemble twin experiment protocol in which a nature run, as well as noisy pseudo-observations of SST and SSH, are generated from an ensemble member (later discarded from the assimilative ensemble). Our objectives are to assess (1) the adequacy of the choice of error source and perturbation strategy and (2) how effective the surface observational constraint is at constraining the surface and subsurface fields. We first illustrate characteristics of the error subspace generated by the perturbation strategy. We then show that, while the EnKF solves a single seamless problem regardless of the region within our domain, the nature and effectiveness of the data constraint over the shelf differ from those over the abyssal plain.
The Principle of Energetic Consistency
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. 
The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability in the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of the EnKF, is localization of the covariance estimates. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in simulated-observation assimilation experiments with the bivariate Lorenz 95 model.
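The single-variable Schur-product localization described above can be illustrated with the widely used Gaspari-Cohn fifth-order compactly supported correlation function; the 1-D grid, ensemble size, and length scale below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order piecewise rational correlation function;
    equals 1 at r = 0 and has compact support (exactly 0 for r >= 2)."""
    r = np.abs(r)
    c = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    c[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
             - (5/3) * r[m1]**2 + 1.0)
    c[m2] = ((1/12) * r[m2]**5 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
             + (5/3) * r[m2]**2 - 5.0 * r[m2] + 4.0 - (2/3) / r[m2])
    return c

# Schur (element-wise) product localization of a noisy sample covariance
rng = np.random.default_rng(2)
n, n_ens, L = 40, 10, 5.0           # grid size, ensemble size, length scale
A = rng.normal(size=(n, n_ens))
A -= A.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)           # sample covariance, noisy at long range
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P_loc = P * gaspari_cohn(dist / L)  # taper spurious long-range covariances
print(P_loc[0, -1])                 # exactly zero: beyond the 2L support
```

The multivariate question the paper addresses is what to put in the off-diagonal blocks of the tapering matrix when several variables share the same grid points.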
A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.
Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H
2017-01-01
Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphics processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to those derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restrictive Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observing system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method, employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. 
A full analysis of both methods, in the context of a coastal ocean model, suggests that an ensemble Kalman filter with an appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of the Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
NASA Astrophysics Data System (ADS)
Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun; Olsen, Edward T.; Chen, Luke
2012-03-01
This study is our first step toward the generation of 6 hourly 3-D CO2 fields that can be used to validate CO2 forecast models by combining CO2 observations from multiple sources using ensemble Kalman filtering. We discuss a procedure to assimilate Atmospheric Infrared Sounder (AIRS) column-averaged dry-air mole fraction of CO2 (Xco2) in conjunction with meteorological observations with the coupled Local Ensemble Transform Kalman Filter (LETKF)-Community Atmospheric Model version 3.5. We examine the impact of assimilating AIRS Xco2 observations on CO2 fields by comparing the results from the AIRS-run, which assimilates both AIRS Xco2 and meteorological observations, to those from the meteor-run, which only assimilates meteorological observations. We find that assimilating AIRS Xco2 results in a surface CO2 seasonal cycle and the N-S surface gradient closer to the observations. When taking account of the CO2 uncertainty estimation from the LETKF, the CO2 analysis brackets the observed seasonal cycle. Verification against independent aircraft observations shows that assimilating AIRS Xco2 improves the accuracy of the CO2 vertical profiles by about 0.5-2 ppm depending on location and altitude. The results show that the CO2 analysis ensemble spread at AIRS Xco2 space is between 0.5 and 2 ppm, and the CO2 analysis ensemble spread around the peak level of the averaging kernels is between 1 and 2 ppm. This uncertainty estimation is consistent with the magnitude of the CO2 analysis error verified against AIRS Xco2 observations and the independent aircraft CO2 vertical profiles.
Ensemble streamflow assimilation with the National Water Model.
NASA Astrophysics Data System (ADS)
Rafieeinasab, A.; McCreight, J. L.; Noh, S.; Seo, D. J.; Gochis, D.
2017-12-01
Through case studies of flooding across the US, we compare the performance of the National Water Model (NWM) data assimilation (DA) scheme to that of a newly implemented ensemble Kalman filter approach. The NOAA National Water Model (NWM) is an operational implementation of the community WRF-Hydro modeling system. Since August 2016, NWM forecasts of distributed hydrologic states and fluxes (including soil moisture, snowpack, ET, and ponded water) over the contiguous United States have been publicly disseminated by the National Centers for Environmental Prediction (NCEP). It also provides streamflow forecasts at more than 2.7 million river reaches up to 30 days in advance. The NWM employs a nudging scheme to assimilate more than 6,000 USGS streamflow observations and provide initial conditions for its forecasts. A problem with nudging is that the forecasts relax quickly back to the open-loop bias. This has been partially addressed by an experimental bias-correction approach, which was found to have issues with phase errors during flooding events. In this work, we present an ensemble streamflow data assimilation approach combining the new channel-only capabilities of the NWM and HydroDART (a coupling of the offline WRF-Hydro model and NCAR's Data Assimilation Research Testbed, DART). Our approach focuses on the single model state of discharge and incorporates error distributions on channel influxes (overland and groundwater) in the assimilation via an ensemble Kalman filter (EnKF). In order to avoid the filter degeneracy associated with limited ensemble sizes at large scale, DART's covariance inflation (Anderson, 2009) and localization capabilities are implemented and evaluated. The current NWM data assimilation scheme is compared to preliminary results from the EnKF application for several flooding case studies across the US.
NASA Astrophysics Data System (ADS)
Reisner, J. M.; Dubey, M. K.
2010-12-01
To quantify and reduce uncertainty in ice activation parameterizations for stratus clouds occurring in the temperature range from -5 to -10 °C, ensemble simulations of an ISDAC golden case have been conducted. To formulate the ensemble, three parameters found within an ice activation model have been sampled using a Latin hypercube technique over a parameter range that induces large variability in both number and mass of ice. The ice activation model is contained within a Lagrangian cloud model that simulates particle number as a function of radius for cloud ice, snow, graupel, cloud, and rain particles. A unique aspect of this model is that it produces very low levels of numerical diffusion, which enables the model to accurately resolve the sharp cloud edges associated with the ISDAC stratus deck. Another important aspect of the model is that near the cloud edges the number of particles can be significantly increased to reduce sampling errors and accurately resolve physical processes, such as collision-coalescence, that occur in this region. Thus, given these relatively low numerical errors compared to traditional bin models, the sensitivity of a stratus deck to changes in the activation model's parameters can be examined without fear of numerical contamination. Likewise, once the ensemble has been completed, ISDAC observations can be incorporated into a Kalman filter to optimally estimate the ice activation parameters and reduce overall model uncertainty. Hence, this work will highlight the ability of an ensemble Kalman filter system coupled to a highly accurate numerical model to estimate important parameters found within microphysical parameterizations containing high uncertainty.
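The Latin hypercube sampling used to build such an ensemble can be sketched generically as follows; the three parameter ranges below are hypothetical placeholders, not the actual ice-activation parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(6)

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sample: one point per equal-probability stratum in
    each dimension, with the strata randomly paired across dimensions."""
    d = len(bounds)
    # Stratified uniforms: row i lies in stratum [i/n, (i+1)/n) per column
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        u[:, j] = u[rng.permutation(n_samples), j]  # decouple the columns
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Three hypothetical activation parameters with illustrative ranges
samples = latin_hypercube(32, [(0.1, 1.0), (1e-4, 1e-2), (0.5, 2.0)], rng)
print(samples.shape)
```

Compared with plain random sampling, this guarantees that each parameter's full range is covered evenly even with a small ensemble.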
Hybrid vs Adaptive Ensemble Kalman Filtering for Storm Surge Forecasting
NASA Astrophysics Data System (ADS)
Altaf, M. U.; Raboudi, N.; Gharamti, M. E.; Dawson, C.; McCabe, M. F.; Hoteit, I.
2014-12-01
Recent storm surge events caused by hurricanes in the Gulf of Mexico have motivated efforts to forecast water levels accurately. Toward this goal, a parallel architecture has been implemented based on a high-resolution storm surge model, ADCIRC. However, the accuracy of the model depends notably on the quality and recentness of the input data (mainly winds and bathymetry), the model parameters (e.g., wind and bottom drag coefficients), and the resolution of the model grid. Given all these uncertainties in the system, the challenge is to build an efficient prediction system capable of providing accurate forecasts sufficiently ahead of time for the authorities to evacuate the areas at risk. We have developed an ensemble-based data assimilation system to frequently assimilate available data into the ADCIRC model in order to improve its accuracy. In this contribution we study and analyze the performance of different ensemble Kalman filter methodologies for efficient short-range storm surge forecasting, the aim being to produce the most accurate forecasts at the lowest possible computing time. Using Hurricane Ike meteorological data to force the ADCIRC model over a domain including the Gulf of Mexico coastline, we implement and compare the forecasts of the standard EnKF, the hybrid EnKF, and an adaptive EnKF. The last two schemes have been introduced as efficient tools for enhancing the behavior of the EnKF when implemented with small ensembles, by exploiting information from a static background covariance matrix. Covariance inflation and localization are implemented in all these filters. Our results suggest that both the hybrid and the adaptive approach provide significantly better forecasts than the standard EnKF, even when implemented with much smaller ensembles.
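The hybrid filters discussed above blend the ensemble sample covariance with a static background covariance before computing the gain. Below is a minimal sketch assuming a simple convex blend with weight alpha; the static covariance, observation setup, and dimensions are illustrative, not the paper's ADCIRC configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def hybrid_gain(A, B_static, H, r_var, alpha=0.5):
    """Kalman gain from a hybrid covariance: a convex blend of the ensemble
    sample covariance with a static background covariance B_static."""
    n_ens = A.shape[1]
    P_ens = A @ A.T / (n_ens - 1)          # noisy for small ensembles
    P = (1.0 - alpha) * P_ens + alpha * B_static
    S = H @ P @ H.T + r_var * np.eye(H.shape[0])
    return P @ H.T @ np.linalg.inv(S)

n, n_ens = 20, 8                           # deliberately small ensemble
X = rng.normal(size=(n, n_ens))
A = X - X.mean(axis=1, keepdims=True)
B_static = np.eye(n)                       # stand-in climatological covariance
H = np.zeros((1, n))
H[0, 0] = 1.0                              # observe the first state only
K = hybrid_gain(A, B_static, H, r_var=1.0, alpha=0.5)
print(K.shape)
```

The static term keeps the gain well conditioned when the rank-deficient ensemble covariance alone would produce spurious corrections.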
Estimation of water level and steam temperature using ensemble Kalman filter square root (EnKF-SR)
NASA Astrophysics Data System (ADS)
Herlambang, T.; Mufarrikoh, Z.; Karya, D. F.; Rahmalia, D.
2018-04-01
The unit with the most vital role in a steam-powered electric power plant is the boiler. The steam drum boiler is a tank that separates the fluid into a gas phase and a liquid phase, and it plays a vital role in the boiler system. The controlled variables in the steam drum boiler are the water level and the steam temperature. If the water level is higher than the setpoint, the resulting gas phase will contain steam that endangers the downstream process, reduces the steam delivered to the turbine, and can damage the pipes in the boiler. Conversely, if the water level is lower than the setpoint, dry steam will result, which is likely to endanger the steam drum. This paper studied the implementation of the ensemble Kalman filter square root (EnKF-SR) method on a nonlinear model of the steam drum boiler. The water level and steam temperature were estimated by simulation using Matlab software, and the error between the setpoint water level and steam temperature and their estimated values was then computed. The simulation of EnKF-SR on the nonlinear steam drum boiler model showed an error of less than 2%. The implementation of EnKF-SR on the steam drum boiler model comprises three simulations, generating 200, 300 and 400 ensembles, respectively. The best simulation, with 400 ensembles, exhibited an error between the real condition and the estimate of about 0.00002145 m for the water level and about 0.00002121 K for the steam temperature.
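A deterministic square-root variant of the EnKF update (here the serial form for a single scalar observation, updating the mean with the full gain and the anomalies with a reduced gain) avoids perturbed observations entirely. The two-state toy below, standing in for water level and steam temperature, is an illustrative sketch and not the paper's boiler model or its specific EnKF-SR formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def ensrf_single_obs(X, y, h, r_var):
    """Serial ensemble square root filter update for one scalar observation:
    the mean gets the full Kalman gain, the anomalies a reduced gain, so no
    perturbed observations are needed."""
    n_ens = X.shape[1]
    xm = X.mean(axis=1, keepdims=True)
    A = X - xm
    Hx = h @ A                                    # observation-space anomalies
    s = (Hx @ Hx.T).item() / (n_ens - 1) + r_var  # innovation variance
    K = (A @ Hx.T) / (n_ens - 1) / s              # Kalman gain, shape (n, 1)
    beta = 1.0 / (1.0 + np.sqrt(r_var / s))       # square-root reduction factor
    xm_a = xm + K * (y - (h @ xm).item())         # mean update
    A_a = A - beta * K @ Hx                       # anomaly update
    return xm_a + A_a

# Two-state toy standing in for (water level [m], steam temperature [K])
X = (np.array([[1.0], [520.0]])
     + rng.normal(0.0, 1.0, (2, 40)) * np.array([[0.05], [2.0]]))
h = np.array([[1.0, 0.0]])                        # observe water level only
Xa = ensrf_single_obs(X, y=0.95, h=h, r_var=0.01 ** 2)
print(round(float(Xa[0].mean()), 3))
```

Because the temperature is correlated with the level only through the ensemble anomalies, the unobserved variable is also adjusted by the same update.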
A square root ensemble Kalman filter application to a motor-imagery brain-computer interface.
Kamrunnahar, M; Schiff, S J
2011-01-01
We here investigated a non-linear ensemble Kalman filter (SPKF) application to a motor imagery brain computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagery movements. Preliminary results indicate the feasibility of this approach with a decoding accuracy of 78%-90% for the hand movements and 70%-90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models.
NASA Astrophysics Data System (ADS)
Che, Yanqiu; Yang, Tingting; Li, Ruixue; Li, Huiyan; Han, Chunxiao; Wang, Jiang; Wei, Xile
2015-09-01
In this paper, we propose a dynamic delayed feedback control approach for desynchronization of chaotic-bursting synchronous activities in an ensemble of globally coupled neuronal oscillators. We demonstrate that the difference signal between an ensemble's mean field and its time-delayed state, filtered and fed back to the ensemble, can suppress self-synchronization in the ensemble. The individual units are decoupled and stabilized at the desired desynchronized states while the stimulation signal reduces to the noise level. The effectiveness of the method is illustrated by examples of two different populations of globally coupled chaotic-bursting neurons. The proposed method has potential for mild, effective and demand-controlled therapy of neurological diseases characterized by pathological synchronization.
Ensemble Deep Learning for Biomedical Time Series Classification
2016-01-01
Ensemble learning has been proven to improve generalization ability effectively in both theory and practice. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.
2017-11-01
The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed which is the most representative watershed in China's Three Gorges Reservoir region to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.
Online vegetation parameter estimation using passive microwave remote sensing observations
USDA-ARS?s Scientific Manuscript database
In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used; these sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. Such methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. 
We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
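The adaptive, sequentially updated correction described above can be illustrated with a minimal scalar Kalman filter that tracks a slowly varying forecast bias; the random-walk bias model, the noise variances, and the temperature series below are illustrative assumptions, not the proposed ensemble method itself.

```python
def kalman_bias_correction(forecasts, observations, q=0.01, r=1.0):
    """Scalar sequential Kalman filter tracking a slowly varying forecast
    bias b (random-walk model); corrected forecast = raw forecast - b."""
    b, p = 0.0, 1.0                # bias estimate and its error variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - b)    # correct with the current bias estimate
        p += q                     # bias drifts as a random walk
        k = p / (p + r)            # Kalman gain
        b += k * ((f - o) - b)     # innovation: the observed forecast error
        p *= 1.0 - k
    return corrected

fc = [21.0, 22.0, 20.5, 23.0, 21.5]  # hypothetical 2-m temperature forecasts
ob = [19.0, 20.1, 18.4, 21.0, 19.6]  # matching station observations
corr = kalman_bias_correction(fc, ob)
print([round(c, 2) for c in corr])
```

After a few updates the estimated bias converges toward the roughly 2-degree warm bias in the toy series, so later corrected forecasts sit much closer to the observations than the raw ones.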
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it was shown that hypotheses computed by the DTB model are more comprehensible than those computed by other ensemble methods. Hence, this paper aims to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)
2001-01-01
A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
A variational ensemble scheme for noisy image data assimilation
NASA Astrophysics Data System (ADS)
Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne
2014-05-01
Data assimilation techniques aim at recovering a system's state trajectory, denoted X, along time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple the dynamics and noisy measurements of the system, fulfill a twofold objective. On the one hand, they provide a denoising, or reconstruction, procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess:

J(η(x)) = (1/2) ‖X_b(x) − X(t_0, x)‖²_B + (1/2) ∫_{t_0}^{t_f} ‖H(X(t, x)) − Y(t, x)‖²_R dt,  (1)

where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: B ≡ <(Xb −
The Ensemble Kalman filter: a signal processing perspective
NASA Astrophysics Data System (ADS)
Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik
2017-12-01
The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions on the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever-increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge needed to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are discussed, as well as relations to sigma-point KFs and particle filters. The relevant EnKF literature is summarized in an extensive survey, and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and for high-dimensional nonlinear and non-Gaussian filtering in general.
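For readers coming from signal processing, the core of the EnKF is compact enough to sketch in a few lines. The following is an illustrative stochastic (perturbed-observation) EnKF analysis step, not code from the reviewed paper; the linear observation operator and scalar observation-error variance are simplifying assumptions:

```python
import numpy as np

def enkf_analysis(X, y, H, R_std, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.

    X     : (n, m) ensemble of state vectors (columns are members)
    y     : (p,)   observation vector
    H     : (p, n) linear observation operator
    R_std : scalar observation-error standard deviation, R = R_std^2 I
    """
    n, m = X.shape
    Xp = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    S = H @ Xp                                    # anomalies in obs space
    C = S @ S.T / (m - 1) + R_std**2 * np.eye(len(y))
    K = (Xp @ S.T / (m - 1)) @ np.linalg.inv(C)   # sampled Kalman gain
    # Perturbing the observations keeps the analysis spread consistent
    # with the Kalman filter posterior covariance.
    Y = y[:, None] + R_std * rng.standard_normal((len(y), m))
    return X + K @ (Y - H @ X)

# Toy example: a 2-D state with only its first component observed.
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 500)) + 5.0   # prior ensemble around 5
H = np.array([[1.0, 0.0]])
Xa = enkf_analysis(X, np.array([4.0]), H, R_std=0.1, rng=rng)
# The analyzed first component is drawn toward the observation (about 4),
# while the unobserved second component stays near its prior mean.
```

The same structure scales to very large n because only ensemble-sized matrices are ever inverted.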
NASA Technical Reports Server (NTRS)
Sippel, Jason A.; Zhang, Fuqing
2009-01-01
This study uses short-range ensemble forecasts initialized with an ensemble Kalman filter to study the dynamics and predictability of Hurricane Humberto, which made landfall along the Texas coast in 2007. Statistical correlation is used to determine why some ensemble members strengthen the incipient low into a hurricane and others do not. It is found that deep moisture and high convective available potential energy (CAPE) are two of the most important factors for the genesis of Humberto. Variations in CAPE result in as much difference (ensemble spread) in the final hurricane intensity as do variations in deep moisture. CAPE differences here are related to the interaction between the cyclone and a nearby front, which tends to stabilize the lower troposphere in the vicinity of the circulation center. This subsequently weakens convection and slows genesis. Eventually the wind-induced surface heat exchange mechanism and differences in landfall time result in even larger ensemble spread.
Data assimilation in the low noise regime
NASA Astrophysics Data System (ADS)
Weare, J.; Vanden-Eijnden, E.
2012-12-01
On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such an observation may be caused by an unusually large measurement error or may reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system, and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio, or Black Current, that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting, large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis, we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods. [Figure: a sequence of observations (circles) taken directly from one of our Kuroshio model's transition events from the small meander to the large meander. Two new algorithms (Algorithms 3 and 4) motivated by the large deviations analysis are compared against a standard particle filter and an ensemble Kalman filter, with the parameters of each algorithm chosen so that their costs are comparable. The particle filter and the ensemble Kalman filter fail to accurately track the transition, while Algorithms 3 and 4 maintain accuracy, and smaller-scale resolution, throughout.]
NASA Astrophysics Data System (ADS)
Liu, Di; Mishra, Ashok K.; Yu, Zhongbo
2016-07-01
This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved efficient at improving the SVM with observed data, either at each time step or at flexible time steps. The EnKF technique reaches its maximum efficiency when the updating ensemble size approaches a certain threshold. It was also observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
Rainfall estimation with TFR model using Ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Asyiqotur Rohmah, Nabila; Apriliani, Erna
2018-03-01
Rainfall fluctuation can affect the condition of other environmental systems and is correlated with economic activity and public health. The increase in global average temperature is driven by the increase of CO2 in the atmosphere, which causes climate change, while forests act as carbon sinks that help maintain the carbon cycle and mitigate climate change. Climate change caused by deviations in rainfall intensity can affect the economy of a region, or even of countries, which encourages research on rainfall in relation to forested areas. In this study, the mathematical model used is one that describes global temperature, forest cover, and seasonal rainfall, called the TFR (temperature, forest cover, and rainfall) model. The model is first discretized and then estimated with the Ensemble Kalman Filter (EnKF) method. The results show that the more ensemble members are used in the estimation, the better the result. The accuracy of the simulation is also influenced by which variables are measured: when a variable is directly observed, its estimate is better.
Stock price estimation using ensemble Kalman Filter square root method
NASA Astrophysics Data System (ADS)
Karya, D. F.; Katias, P.; Herlambang, T.
2018-04-01
Shares are securities providing evidence of an individual's or corporation's equity in an enterprise, especially in public companies whose activity is stock trading. Investment in stock trading is often the option of choice for investors, as it offers attractive profits. In determining a safe investment in stocks, investors require a way of assessing the prices of the stocks they buy so as to help optimize their profits. An effective method of analysis that reduces the risk investors may bear is predicting, or estimating, the stock price. Estimation is carried out because such a problem can sometimes be solved using previous information or data related to the problem. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be utilized in investors' decision making on investments. In this paper, stock price estimation was performed using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimation obtained by the EnKF method was more accurate than that by the EnKF-SR, with an estimation error of about 0.2% for the EnKF and 2.6% for the EnKF-SR.
Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y
2014-09-15
Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed, and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other, similar situations where hazardous material is released into the atmosphere.
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupled model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is performed, and only the first-order ANOVA components are retained in the PCE. As illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC-based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble size, the EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions are considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and the bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
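The hard, soft, and SCAD thresholding rules mentioned above have simple closed forms. The sketch below applies them elementwise to a toy covariance matrix; it illustrates the generic estimators (with the conventional SCAD constant a = 3.7), not the paper's adaptive threshold-selection procedure:

```python
import numpy as np

def hard_threshold(c, t):
    """Zero out entries with magnitude below the threshold t."""
    return np.where(np.abs(c) >= t, c, 0.0)

def soft_threshold(c, t):
    """Shrink every entry toward zero by t, clipping at zero."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def scad_threshold(c, t, a=3.7):
    """Smoothly Clipped Absolute Deviation rule: soft near zero,
    identity (unbiased) for large entries, blended in between."""
    ac = np.abs(c)
    return np.where(ac <= 2 * t,
                    soft_threshold(c, t),
                    np.where(ac <= a * t,
                             ((a - 1) * c - np.sign(c) * a * t) / (a - 2),
                             c))

# Applying a threshold to the off-diagonal entries of a sample
# forecast covariance suppresses small, likely spurious correlations
# while keeping the strong ones intact.
cov = np.array([[1.0, 0.05, -0.4],
                [0.05, 1.0, 0.02],
                [-0.4, 0.02, 1.0]])
mask = ~np.eye(3, dtype=bool)
cov_thr = cov.copy()
cov_thr[mask] = hard_threshold(cov[mask], 0.1)
```

Soft thresholding biases large entries, which is one motivation for SCAD: it matches soft thresholding for small magnitudes but leaves large entries untouched.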
Constraining the ensemble Kalman filter for improved streamflow forecasting
NASA Astrophysics Data System (ADS)
Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James
2018-05-01
Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. 
We also experiment with the observation error, which has a profound effect on filter performance. We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.
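The idea of combining mass and flux constraints can be sketched as a post-processing step on the raw EnKF analysis. The store capacity and flux limit below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

def constrain_update(x_prev, x_raw, capacity, max_flux, dt):
    """Apply mass and flux constraints to a raw EnKF state update.

    Mass: each store must stay within [0, capacity].
    Flux: the implied rate of change |x_new - x_prev| / dt may not
    exceed a physically plausible rate max_flux.
    """
    # Flux constraint: limit how far the update may move each store.
    max_step = max_flux * dt
    x = np.clip(x_raw, x_prev - max_step, x_prev + max_step)
    # Mass constraints: non-negativity and capacity thresholds.
    return np.clip(x, 0.0, capacity)

# Toy example: a soil store of capacity 100 mm; the raw EnKF analysis
# asks for a physically impossible jump within one hour.
x_prev = np.array([40.0])
x_raw = np.array([120.0])              # raw analysis overshoots
x_new = constrain_update(x_prev, x_raw, capacity=100.0,
                         max_flux=20.0, dt=1.0)   # 20 mm/h limit
```

The flux clip keeps the adjustment within a physically consistent rate of water movement, and the mass clip then enforces the store bounds; the unconstrained filter would have accepted the 120 mm state.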
NASA Astrophysics Data System (ADS)
Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7 day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). 
We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the utility of PF-based data assimilation for streamflow forecast skill. This presentation describes findings, including a comparison of sequential and non-sequential particle weighting methods.
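The PF's weighting and resampling of initial-condition particles against a streamflow observation can be sketched as follows. This is a generic bootstrap particle filter step with a Gaussian likelihood, not the SHARP implementation; the linear storage-to-flow map is a hypothetical stand-in for the watershed model:

```python
import numpy as np

def pf_weight_resample(particles, h, y, obs_std, rng):
    """Weight an ensemble of initial-condition particles by the
    likelihood of an observed flow, then resample.

    particles : (m, n) array, one state vector per row
    h         : maps a state vector to a simulated observation
    """
    sim = np.array([h(p) for p in particles])
    # Gaussian log-likelihood of the observation given each particle.
    logw = -0.5 * ((sim - y) / obs_std) ** 2
    w = np.exp(logw - logw.max())       # subtract max for stability
    w /= w.sum()
    # Multinomial resampling: duplicate likely particles.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy example: particles are scalar storages and the observed flow is
# proportional to storage (flow = k * storage), so resampling should
# concentrate the particles near y / k.
rng = np.random.default_rng(2)
k = 0.5
particles = rng.uniform(0.0, 100.0, size=(1000, 1))
post = pf_weight_resample(particles, lambda p: k * p[0], y=25.0,
                          obs_std=2.0, rng=rng)
```

Weighting before resampling (rather than simple selection) is what distinguishes a weighted PF update from nearest-neighbor state replacement.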
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right-hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
A mesoscale hybrid data assimilation system based on the JMA nonhydrostatic model
NASA Astrophysics Data System (ADS)
Ito, K.; Kunii, M.; Kawabata, T. T.; Saito, K. K.; Duc, L. L.
2015-12-01
This work evaluates the potential of a hybrid ensemble Kalman filter and four-dimensional variational (4D-Var) data assimilation system for predicting severe weather events from a deterministic point of view. This hybrid system is an adjoint-based 4D-Var system using a background error covariance matrix constructed from a mixture of the so-called NMC method and perturbations from a local ensemble transform Kalman filter data assimilation system, both of which are based on the Japan Meteorological Agency nonhydrostatic model. To construct the background error covariance matrix, we investigated two types of schemes. One is a spatial localization scheme and the other is a neighboring-ensemble approach, which regards the result at a horizontally shifted point in each ensemble member as one obtained from a different realization of the ensemble simulation. An assimilation of a pseudo single observation located to the north of a tropical cyclone (TC) yielded analysis increments of wind and temperature physically consistent with what is expected for a mature TC in both hybrid systems, whereas the analysis increment in a 4D-Var system using a static background error covariance distorted the structure of the mature TC. Real-data assimilation experiments applied to 4 TCs and 3 local heavy rainfall events showed that the hybrid systems and the EnKF provided better initial conditions than the NMC-based 4D-Var, both for the intensity and track forecasts of the TCs and for the location and amount of rainfall in the local heavy rainfall events.
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with that of the Equivalent Weights Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, such as a model solving a barotropic vorticity equation, but it is still unknown how its assimilation performance compares to that of ensemble Kalman filters in realistic situations. For the evaluation, twin assimilation experiments with a square-basin configuration of the NEMO model are performed. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using different statistical metrics, such as the continuous ranked probability score (CRPS) and rank histograms.
NASA Technical Reports Server (NTRS)
Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-01-01
In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observations and forecasts into state, forecast bias, and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and the resulting discharge compared with a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification of existing data assimilation software is needed to implement the method. The results suggest that better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
Rethinking the Default Construction of Multimodel Climate Ensembles
Rauser, Florian; Gleckler, Peter; Marotzke, Jochem
2015-07-21
Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.
The Cross-Entropy Based Multi-Filter Ensemble Method for Gene Selection.
Sun, Yingqiang; Lu, Chengbo; Li, Xiaobo
2018-05-17
Gene expression profiles are characterized by high dimensionality, small sample sizes, and continuous-valued features, and using gene expression profile data for the classification of tumor samples is a great challenge. This paper proposes a cross-entropy based multi-filter ensemble (CEMFE) method for microarray data classification. First, multiple filters are applied to the microarray data to obtain several pre-selected feature subsets with different classification abilities. The top N genes with the highest rank in each subset are integrated to form a new data set. Second, the cross-entropy algorithm is used to remove redundant data from this set. Finally, a wrapper method based on forward feature selection is used to select the best feature subset. The experimental results show that the proposed method is more efficient than other gene selection methods and that it can achieve higher classification accuracy with fewer characteristic genes.
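The first stage, taking the union of each filter's top-N genes, can be sketched generically. The two filter criteria below (a variance filter and a correlation filter) are illustrative stand-ins for whichever filters CEMFE actually uses, and the cross-entropy and wrapper stages are omitted:

```python
import numpy as np

def multi_filter_union(X, y, filters, top_n):
    """Rank features with several filter criteria and take the union
    of each filter's top-N genes, as in a multi-filter ensemble."""
    selected = set()
    for score_fn in filters:
        scores = score_fn(X, y)
        selected |= set(np.argsort(scores)[::-1][:top_n])
    return sorted(selected)

# Two simple, commonly used filter criteria (illustrative stand-ins).
def variance_score(X, y):
    """Unsupervised filter: prefer high-variance genes."""
    return X.var(axis=0)

def corr_score(X, y):
    """Supervised filter: absolute correlation with the class label."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0)
                                * np.linalg.norm(yc) + 1e-12)

# Toy data: 40 samples, 200 genes; the label depends on gene 7 only,
# so the correlation filter should rank that gene very highly.
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 200))
y = (X[:, 7] + 0.1 * rng.standard_normal(40) > 0).astype(float)
genes = multi_filter_union(X, y, [variance_score, corr_score], top_n=10)
```

Because different filters capture different notions of relevance, their union tends to be a more diverse candidate pool than any single ranking, which is the motivation for the ensemble of filters.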
Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume
2013-01-01
Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters are the intercept parameters for rain, snow, and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. OSS experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results being sensitive to the realization of the initial parameter perturbation. This is believed to be due to reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
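Simultaneous state and parameter estimation of this kind is commonly implemented by augmenting the state vector with the parameters, so that sampled state-observation correlations update the parameters as well. The sketch below shows that augmentation idea with a stochastic EnKF on a toy one-state, one-parameter model; it is a generic illustration, not the EnSRF used in the study:

```python
import numpy as np

def augmented_enkf_step(states, params, h, y, obs_std, rng):
    """One stochastic-EnKF analysis over the augmented vector
    [state; parameters], so that state-observation correlations
    update the parameters alongside the state.

    states : (n, m) ensemble, params : (q, m) ensemble (columns = members)
    h      : maps one state vector into observation space
    """
    n, m = states.shape
    Z = np.vstack([states, params])                  # augmented ensemble
    sim = np.column_stack([h(Z[:n, j]) for j in range(m)])
    Zp = Z - Z.mean(axis=1, keepdims=True)
    Sp = sim - sim.mean(axis=1, keepdims=True)
    C = Sp @ Sp.T / (m - 1) + obs_std**2 * np.eye(sim.shape[0])
    K = (Zp @ Sp.T / (m - 1)) @ np.linalg.inv(C)     # augmented gain
    Y = y[:, None] + obs_std * rng.standard_normal((len(y), m))
    Za = Z + K @ (Y - sim)                           # perturbed-obs update
    return Za[:n], Za[n:]

# Toy model: one state x = a * forcing with one unknown parameter a.
# The truth is a = 2 with forcing 3, so the observed x is near 6.
rng = np.random.default_rng(4)
m = 500
params = 1.0 + rng.standard_normal((1, m))   # prior guess a ~ N(1, 1)
states = params * 3.0                        # forecast x = a * forcing
xa, pa = augmented_enkf_step(states, params, lambda x: x,
                             np.array([6.0]), obs_std=0.1, rng=rng)
```

Because the forecast states are correlated with the parameter, assimilating the observation of x pulls the parameter ensemble toward the truth even though the parameter itself is never observed.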
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
NASA Astrophysics Data System (ADS)
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
Park, Sang-Hoon; Lee, David; Lee, Sang-Goog
2018-02-01
For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, its performance degrades in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Because the active frequency range differs across subjects, it is also inconvenient to set the frequency range separately for each subject. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, we select features according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, we classify using an ensemble based on the selected features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP, respectively. Compared with the filter bank R-CSP, which is a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
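The core of the first two steps, a filter bank followed by CSP, can be sketched as follows; the band edges, synthetic two-class "EEG", and channel count are illustrative assumptions, not the paper's R-CSP setup or the BCI competition data:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

# Filter-bank + CSP sketch on synthetic two-class "EEG". The two classes
# differ in the variance of channel 0, which is exactly the structure CSP
# extracts. Band edges, channel count, and trial sizes are illustrative.
rng = np.random.default_rng(2)
fs, n_ch, n_t = 100, 4, 400
bands = [(4, 8), (8, 12), (12, 16)]               # Hz, a tiny filter bank

def make_trials(scale, n_trials=20):
    out = rng.standard_normal((n_trials, n_ch, n_t))
    out[:, 0, :] *= scale                         # class signature on ch 0
    return out

X1, X2 = make_trials(3.0), make_trials(1.0)

def csp_filters(X1, X2, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    def avg_cov(X):
        Xf = filtfilt(b, a, X, axis=-1)           # band-pass each trial
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in Xf], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # spatial filters = generalized eigenvectors of (C1, C1 + C2)
    return eigh(C1, C1 + C2)

per_band = [csp_filters(X1, X2, lo, hi) for lo, hi in bands]
print(len(per_band), per_band[0][1].shape)
```

Eigenvalues near 1 or 0 mark spatial filters whose output variance discriminates the two classes; log-variances of the filtered signals would then serve as the per-band features.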
Extracting Drug-Drug Interactions with Word and Character-Level Recurrent Neural Networks
Kavuluru, Ramakanth; Rios, Anthony; Tran, Tung
2017-01-01
Drug-drug interactions (DDIs) are known to be responsible for nearly a third of all adverse drug reactions. Hence several current efforts focus on extracting signal from EMRs to prioritize DDIs that need further exploration. To this end, being able to extract explicit mentions of DDIs in free-text narratives is an important task. In this paper, we explore recurrent neural network (RNN) architectures to detect and classify DDIs from unstructured text using the DDIExtraction dataset from the SemEval 2013 (task 9) shared task. Our methods are in line with those used in other recent deep learning efforts for relation extraction, including DDI extraction. However, to our knowledge, we are the first to investigate the potential of character-level RNNs (Char-RNNs) for DDI extraction (and relation extraction in general). Furthermore, we explore a simple but effective model bootstrapping method to (a) build model-averaging ensembles, (b) derive confidence intervals around mean micro-F scores (MMF), and (c) assess the average behavior of our methods. Without any rule-based filtering of negative examples, a popular heuristic used by most earlier efforts, we achieve an MMF of 69.13. By adding simple replicable heuristics to filter negative instances we are able to achieve an MMF of 70.38. Furthermore, our best ensembles produce micro-F scores of 70.81 (without filtering) and 72.13 (with filtering), which are superior to metrics reported in published results. Although Char-RNNs turn out to be inferior to regular word-based RNN models in overall comparisons, we find that ensembling models from both architectures results in nontrivial gains over simply using either alone, indicating that they complement each other. PMID:29034375
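The bootstrapping idea, resampling the evaluation set to place a confidence interval around a mean micro-F score, can be sketched as follows; the synthetic binary labels and roughly 80%-accurate predictions are placeholders for the DDI classes and model outputs:

```python
import numpy as np

# Bootstrap confidence interval around a mean micro-F score: resample the
# test set with replacement, score each replicate, and take percentile
# bounds. Binary labels stand in for the multi-class DDI types.
rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 500)
y_pred = np.where(rng.random(500) < 0.8, y_true, 1 - y_true)  # ~80% correct

def micro_f1(t, p):
    tp = np.sum((t == 1) & (p == 1))
    fp = np.sum((t == 0) & (p == 1))
    fn = np.sum((t == 1) & (p == 0))
    return 2 * tp / (2 * tp + fp + fn)

scores = []
for _ in range(200):                    # 200 bootstrap replicates
    idx = rng.integers(0, len(y_true), len(y_true))
    scores.append(micro_f1(y_true[idx], y_pred[idx]))
scores = np.array(scores)
lo, hi = np.percentile(scores, [2.5, 97.5])
print(round(scores.mean(), 3), round(lo, 3), round(hi, 3))
```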
NASA Astrophysics Data System (ADS)
Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.
2012-12-01
This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. 
H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of temporal correlation structure, Hydrol. Earth Syst. Sci. Discuss., 9, 3087-3127, doi:10.5194/hessd-9-3087-2012, 2012b.
Van Delden, Jay S
2003-07-15
A novel, interferometric, polarization-interrogating filter assembly and method for the simultaneous measurement of all four Stokes parameters across a partially polarized irradiance image in a no-moving-parts, instantaneous, highly sensitive manner is described. In the reported embodiment of the filter, two spatially varying linear retarders and a linear polarizer comprise an ortho-Babinet, polarization-interrogating (OBPI) filter. The OBPI filter uniquely encodes the incident ensemble of electromagnetic wave fronts comprising a partially polarized irradiance image in a controlled, deterministic, spatially varying manner to map the complete state of polarization across the image to local variations in a superposed interference pattern. Experimental interferograms are reported along with a numerical simulation of the method.
The Role of Scale and Model Bias in ADAPT's Photospheric Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas
2015-05-20
The Air Force Data Assimilative Photospheric flux Transport (ADAPT) model is a magnetic flux propagation model based on the Worden-Harvey (WH) model. ADAPT is used to provide a global map of the Sun's photospheric magnetic flux. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation to Kalman filtering, is used in calculating the ADAPT models.
Reduced Kalman Filters for Clock Ensembles
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
2011-01-01
This paper summarizes the author's work on timescales based on Kalman filters that act upon the clock comparisons. The natural Kalman timescale algorithm tends to optimize long-term timescale stability at the expense of short-term stability. By subjecting each post-measurement error covariance matrix to a non-transparent reduction operation, one obtains corrected clocks with improved short-term stability and little sacrifice of long-term stability.
2007-04-01
Rixen, Michel; Book, Jeffery W.; Martin, Paul J.; Pinardi, Nadia; Oddo, Paolo; Chiggiato, Jacopo; Russo, Nello
… PREDICTION: AN OPERATIONAL EXAMPLE USING A KALMAN FILTER IN THE ADRIATIC SEA
Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling
NASA Astrophysics Data System (ADS)
Li, L.; Cao, Z.; Zhou, H.
2017-12-01
Groundwater modeling calls for an effective and robust integration method to fill the gap between the model and data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed, which updates the models using all the data together and therefore requires much less computation. To further improve the performance of the ES, an iterative ES was proposed, in which the models are updated repeatedly by assimilating all measurements together. In this work, we compare the performance of the EnKF, the ES, and the iterative ES using a synthetic example in groundwater modeling. The hydraulic head data modeled on the basis of the reference conductivity field are used to inversely estimate conductivities at unsampled locations. Results are evaluated in terms of the characterization of conductivity and of groundwater flow and solute transport predictions. It is concluded that (1) the iterative ES achieves results comparable to those of the EnKF at a lower computational cost, and (2) the iterative ES outperforms the ES thanks to its repeated updating. These findings suggest that the iterative ES deserves more attention for data assimilation in groundwater modeling.
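The structural difference between the two updates can be sketched on a linear toy problem: the ES stacks all observations into one vector and updates the ensemble once, where the EnKF would apply such an update sequentially in time. The observation operator, noise levels, and "log-conductivity" state below are hypothetical stand-ins for the groundwater setting:

```python
import numpy as np

# One-shot Ensemble Smoother update on a linear toy problem. The EnKF
# would apply this same update sequentially, observation time by
# observation time; the ES does it once with all data stacked together.
rng = np.random.default_rng(4)
Ne, n, m = 100, 5, 3
X = rng.standard_normal((Ne, n))       # prior ensemble
H = rng.standard_normal((m, n))        # linearized observation operator
R = 0.1 * np.eye(m)
y = H @ np.ones(n) + 0.1 * rng.standard_normal(m)   # obs from truth = ones

def es_update(X, H, R, y):
    """One-shot ensemble smoother update using all data together."""
    A = X - X.mean(axis=0)
    Yp = A @ H.T                                     # obs-space anomalies
    C = Yp.T @ Yp / (Ne - 1) + R
    K = (A.T @ Yp / (Ne - 1)) @ np.linalg.inv(C)     # gain, (n, m)
    Ypert = y + rng.multivariate_normal(np.zeros(m), R, Ne)
    return X + (Ypert - X @ H.T) @ K.T               # stochastic update

Xa = es_update(X, H, R, y)
prior_misfit = np.linalg.norm(X.mean(axis=0) @ H.T - y)
post_misfit = np.linalg.norm(Xa.mean(axis=0) @ H.T - y)
print(round(prior_misfit, 3), round(post_misfit, 3))
```

An iterative ES would wrap `es_update` in a loop with a relinearized (here, re-simulated) observation operator at each pass.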
Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2014-10-01
A commonly encountered problem in numerous areas of application is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes with single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging, or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW + NB-BI-GRAM + SVM had very high performance (0.93 overall sensitivity/positive predictive value), with high sensitivity and positive predictive values across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly.
For large administrative datasets we propose incorporation of methods based on human-machine pairings such as we have done here, utilizing readily available off-the-shelf machine learning techniques and resulting in only a fraction of narratives that require manual review. Human-machine ensemble methods are likely to improve performance over total manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
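The prediction-strength filtering rule can be sketched in a few lines; the synthetic "strength" scores and the accuracy model (accuracy rising with strength) are illustrative assumptions, not the study's classifiers:

```python
import numpy as np

# Filtering on prediction strength: machine-assign codes only above a
# cutoff chosen so that the weakest 30% of predictions are routed to
# manual review. Strengths and the accuracy model are synthetic.
rng = np.random.default_rng(5)
n = 1000
strength = rng.random(n)                          # stand-in for P(top code)
correct = rng.random(n) < (0.5 + 0.5 * strength)  # better when confident

cutoff = np.quantile(strength, 0.30)              # bottom 30% -> humans
auto = strength >= cutoff
acc_all = correct.mean()
acc_auto = correct[auto].mean()
print(round(acc_all, 3), round(acc_auto, 3), int((~auto).sum()))
```

Because accuracy correlates with prediction strength, the auto-coded subset is more accurate than the full set, at the cost of sending a fixed fraction of narratives to manual review.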
Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods
NASA Astrophysics Data System (ADS)
Thurin, J.; Brossier, R.; Métivier, L.
2017-12-01
Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method, but often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power, and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets and crustal exploration, as well as at regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information about the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with sampling of the local solution space, made possible by the method's embarrassingly parallel nature. References: Bishop, C. H., Etherton, B. J., and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R., and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R., and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2017.
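The low-rank covariance estimate at the core of such a scheme can be sketched with the symmetric-square-root ETKF transform; the sizes and the linear observation operator below are toy assumptions, not an FWI configuration:

```python
import numpy as np

# ETKF-style low-rank posterior covariance sketch: the analysis
# perturbations Xa = Xp @ T give Pa = Xa Xa^T / (Ne-1) with rank <= Ne-1,
# so the full n-by-n matrix never needs to be formed when n is large.
rng = np.random.default_rng(6)
Ne, n, m = 20, 200, 30
Xp = rng.standard_normal((n, Ne))
Xp -= Xp.mean(axis=1, keepdims=True)           # prior perturbations
H = rng.standard_normal((m, n)) / np.sqrt(n)   # toy linear obs operator
Rinv = np.eye(m) / 0.1

Y = H @ Xp                                     # obs-space perturbations
S = np.eye(Ne) + Y.T @ Rinv @ Y / (Ne - 1)
vals, vecs = np.linalg.eigh(S)
T = vecs @ np.diag(vals ** -0.5) @ vecs.T      # symmetric square root
Xa = Xp @ T                                    # analysis perturbations

var_prior = np.sum(Xp ** 2, axis=1) / (Ne - 1) # diag of prior covariance
var_post = np.sum(Xa ** 2, axis=1) / (Ne - 1)  # diag of posterior estimate
print(round(var_prior.mean(), 3), round(var_post.mean(), 3))
```

All linear algebra happens in the Ne-by-Ne ensemble space, which is what makes the approach affordable for large model spaces.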
NASA Astrophysics Data System (ADS)
Seko, Hiromu; Kunii, Masaru; Yokota, Sho; Tsuyuki, Tadashi; Miyoshi, Takemasa
2015-12-01
Experiments simulating intense vortices associated with tornadoes that occurred on 6 May 2012 on the Kanto Plain, Japan, were performed with a nested local ensemble transform Kalman filter (LETKF) system. Intense vortices were reproduced by downscale experiments with a 12-member ensemble in which the initial conditions were obtained from the nested LETKF system analyses. The downscale experiments successfully generated intense vortices in three regions similar to the observed vortices, whereas only one tornado was reproduced by a deterministic forecast. The intense vorticity of the strongest tornado, which was observed in the southernmost region, was successfully reproduced by 10 of the 12 ensemble members. An examination of the results of the ensemble downscale experiments showed that the duration of intense vorticities tended to be longer when the vertical shear of the horizontal wind was larger and the lower airflow was more humid. Overall, the study results show that ensemble forecasts have the following merits: (1) probabilistic forecasts of the outbreak of intense vortices associated with tornadoes are possible; (2) the miss rate of outbreaks should decrease; and (3) environmental factors favoring outbreaks can be obtained by comparing the multiple possible scenarios of the ensemble forecasts.
Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation
NASA Astrophysics Data System (ADS)
Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas
2018-02-01
Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solving high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D spherical annulus model and compared it with the method developed previously. The EnKF performs on average better and is more stable than the former method. Fewer than 300 ensemble members are sufficient to reconstruct an evolution. We use adaptive covariance inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.
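The two sampling-error corrections mentioned, covariance inflation and localization, can be sketched in a few lines; the exponential taper with compact support and the toy covariance are assumptions standing in for the study's actual localization function:

```python
import numpy as np

# Multiplicative inflation plus Schur-product localization for a small
# ensemble. The compactly supported exponential taper is an illustrative
# stand-in for whatever localization function the study used.
rng = np.random.default_rng(7)
Ne, n = 10, 50
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))   # grid distances
P_true = np.exp(-d / 5.0)                       # true correlation structure
L = np.linalg.cholesky(P_true + 1e-10 * np.eye(n))
X = L @ rng.standard_normal((n, Ne))            # small ensemble from P_true

A = X - X.mean(axis=1, keepdims=True)
P_raw = A @ A.T / (Ne - 1)                      # noisy sample covariance
infl = 1.1                                      # multiplicative inflation
taper = np.exp(-d / 10.0) * (d < 20)            # compactly supported taper
P_loc = infl * P_raw * taper                    # Schur (elementwise) product

err_raw = np.linalg.norm(P_raw - P_true)
err_loc = np.linalg.norm(P_loc - P_true)
print(round(err_raw, 2), round(err_loc, 2))
```

Tapering kills the spurious long-range correlations a 10-member ensemble inevitably produces, while inflation compensates for the spread lost to repeated analysis updates.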
NASA Astrophysics Data System (ADS)
Medina, H.; Romano, N.; Chirico, G. B.
2014-07-01
This study presents a dual Kalman filter (DSUKF - dual standard-unscented Kalman filter) for retrieving states and parameters controlling the soil water dynamics in a homogeneous soil column, by assimilating near-surface state observations. The DSUKF couples a standard Kalman filter for retrieving the states of a linear solver of the Richards equation, and an unscented Kalman filter for retrieving the parameters of the soil hydraulic functions, which are defined according to the van Genuchten-Mualem closed-form model. The accuracy and the computational expense of the DSUKF are compared with those of the dual ensemble Kalman filter (DEnKF) implemented with a nonlinear solver of the Richards equation. Both the DSUKF and the DEnKF are applied with two alternative state-space formulations of the Richards equation, differentiated by the type of variable employed for representing the states: either the soil water content (θ) or the soil water matric pressure head (h). The comparison analyses are conducted with reference to synthetic time series of the true states, noise-corrupted observations, and synthetic time series of the meteorological forcing. The performance of the retrieval algorithms is examined accounting for the effects exerted on the output by the input parameters, the observation depth and assimilation frequency, as well as by the relationship between retrieved states and assimilated variables. The uncertainty of the states retrieved with the DSUKF is considerably reduced, for any initial wrong parameterization, with similar accuracy but less computational effort than the DEnKF implemented with ensembles of 25 members. For ensemble sizes of the same order as those involved in the DSUKF, the DEnKF fails to provide reliable posterior estimates of states and parameters.
The retrieval performance of the soil hydraulic parameters is strongly affected by several factors, such as the initial guess of the unknown parameters, the wet or dry range of the retrieved states, the boundary conditions, as well as the form (h-based or θ-based) of the state-space formulation. Several analyses are reported to show that the identifiability of the saturated hydraulic conductivity is hindered by the strong correlation with other parameters of the soil hydraulic functions defined according to the van Genuchten-Mualem closed-form model.
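The unscented half of the DSUKF rests on deterministic sigma points rather than a random ensemble; a generic scaled sigma-point construction (with demonstration-friendly tuning constants, since the study's settings are not given in the abstract) looks like this:

```python
import numpy as np

# Generic scaled sigma-point construction used by unscented Kalman
# filters: 2n+1 deterministic points whose weighted mean and covariance
# reproduce the input exactly. Tuning constants (alpha, beta, kappa) are
# chosen for a simple demonstration, not taken from the study.
def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    return pts, wm, wc

mean = np.array([1.0, -2.0])
cov = np.array([[0.5, 0.1], [0.1, 0.3]])
pts, wm, wc = sigma_points(mean, cov)

# the weighted sigma points reproduce the input mean and covariance
m_rec = wm @ pts
dev = pts - m_rec
c_rec = (wc[:, None] * dev).T @ dev
print(pts.shape)
```

Propagating these 2n+1 points through the nonlinear soil hydraulic functions, instead of a 25-member random ensemble, is what gives the DSUKF its computational edge in this setting.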
2013-09-30
… accuracy of the analysis. Root mean square difference (RMSD) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental Analysis Update, globally, for temperature as well as salinity. Regionally the same results were found, with only one exception, in which the salinity RMSD … short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called "analysis" …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Li, Weixuan; Zeng, Lingzao
2016-06-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF could be even more computationally expensive than EnKF. Motivated by most recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework
NASA Astrophysics Data System (ADS)
Achieng, K. O.; Zhu, J.
2017-12-01
A number of North American Regional Climate Change Assessment Program (NARCCAP) climate models have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data; however, different selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - using the two-parameter recursive digital filter also known as the Eckhardt filter - is applied to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of the runoff simulated by the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff when averaged over all the models using BMA, given a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
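The Eckhardt filter mentioned above is compact enough to sketch directly; the recursion and the constraint b[t] ≤ Q[t] are standard, while the parameter values and synthetic hydrograph below are common textbook defaults, not values from the study:

```python
import numpy as np

# Two-parameter recursive digital (Eckhardt) baseflow filter:
#   b[t] = ((1 - BFImax)*a*b[t-1] + (1 - a)*BFImax*Q[t]) / (1 - a*BFImax)
# with the constraint b[t] <= Q[t]. Parameter values are common defaults
# for perennial streams, not values from the study.
def eckhardt(Q, a=0.98, bfi_max=0.80):
    b = np.empty_like(Q, dtype=float)
    b[0] = bfi_max * Q[0]                       # simple initialization
    for t in range(1, len(Q)):
        bt = ((1 - bfi_max) * a * b[t - 1]
              + (1 - a) * bfi_max * Q[t]) / (1 - a * bfi_max)
        b[t] = min(bt, Q[t])                    # baseflow cannot exceed flow
    return b

# synthetic hydrograph: slow recession plus two storm peaks
t = np.arange(200, dtype=float)
Q = (5 * np.exp(-t / 80) + 10 * np.exp(-((t - 40) / 5) ** 2)
     + 6 * np.exp(-((t - 120) / 5) ** 2) + 1.0)
b = eckhardt(Q)
surface_runoff = Q - b          # the "a priori" runoff component above
print(round(b.sum() / Q.sum(), 2))
```

The residual `surface_runoff` series is the quantity that would then serve as the prior in the BMA of the RCM-simulated runoff.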
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: (a) acceptable model performance at two different timescales, and (b) maintained diversity in the model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on the ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% of it can be filtered out without affecting diversity in global-scale climate change responses. This leads to the identification of plausible parts of parameter space from which model variants can be selected for projection studies.
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes in mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To avoid all ensemble perturbations converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the subspace spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
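A toy version of this breeding cycle, an assumption-laden sketch in which Lorenz-96 stands in for the mesoscale model and plain QR stands in for the ensemble-transform orthogonalization, might look like:

```python
import numpy as np

# Toy self-breeding cycle: perturbations are integrated forward for a
# short period, rescaled to a fixed norm, re-added to the control, and
# orthogonalized. Lorenz-96 with RK4 stands in for the mesoscale model;
# QR stands in for the ensemble-transform step.
def l96_rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(8)
n, n_pert, norm0, n_steps = 40, 3, 0.1, 10
control = rng.standard_normal(n)
for _ in range(500):                          # spin up onto the attractor
    control = rk4_step(control)

P = norm0 * rng.standard_normal((n, n_pert))  # initial perturbations
for _ in range(20):                           # rapid breeding cycles
    c_next = control
    for _ in range(n_steps):
        c_next = rk4_step(c_next)
    evolved = np.empty_like(P)
    for k in range(n_pert):
        xp = control + P[:, k]
        for _ in range(n_steps):
            xp = rk4_step(xp)
        evolved[:, k] = xp - c_next           # bred vector, before rescaling
    Q, _ = np.linalg.qr(evolved)              # orthogonalize perturbations
    P = norm0 * Q                             # rescale to the initial norm
    control = c_next

print(np.round(np.linalg.norm(P, axis=0), 3))
```

The QR step keeps the bred vectors from all collapsing onto the leading local Lyapunov direction, mirroring the role of the ensemble transform in the method described above.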
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Candille, G.; Brankart, J. M.; Brasseur, P.
2015-07-01
Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement from the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analyzed jointly. The consistency and complementarity between both validations are highlighted.
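The CRPS used for the probabilistic validation has a convenient kernel form for a finite ensemble, CRPS = E|X − y| − ½ E|X − X′|; a small sketch with synthetic scalar ensembles (illustrative only, not the ocean model output):

```python
import numpy as np

# Empirical CRPS for an ensemble forecast of a scalar quantity, via the
# kernel identity CRPS = E|X - y| - 0.5 E|X - X'|. Synthetic ensembles
# illustrate that the score penalizes both bias and overdispersion.
def crps_ensemble(members, obs):
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return term1 - 0.5 * term2

rng = np.random.default_rng(9)
obs = 0.0
sharp = rng.normal(0.0, 0.5, 60)    # centred and sharp
broad = rng.normal(0.0, 2.0, 60)    # centred but overdispersed
biased = rng.normal(3.0, 0.5, 60)   # sharp but biased
print(round(crps_ensemble(sharp, obs), 3),
      round(crps_ensemble(broad, obs), 3),
      round(crps_ensemble(biased, obs), 3))
```

Averaging this score over observation locations and times gives the kind of reliability/resolution diagnostics the abstract describes.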
Xia, Jie; Hsieh, Jui-Hua; Hu, Huabin; Wu, Song; Wang, Xiang Simon
2017-06-26
Structure-based virtual screening (SBVS) has become an indispensable technique for hit identification at the early stage of drug discovery. However, the accuracy of current scoring functions is not high enough to confer success on every target, and thus they remain to be improved. Previously, we developed binary pose filters (PFs) using knowledge derived from the protein-ligand interface of a single X-ray structure of a specific target, a novel approach validated as an effective way to improve ligand enrichment. Building on that work, here we incorporate knowledge collected from the diverse protein-ligand interfaces of multiple crystal structures of the same target to build PF ensembles (PFEs). Toward this end, we first constructed a comprehensive data set to meet the requirements of ensemble modeling and validation. The set contains 10 diverse targets, 118 well-prepared X-ray structures of protein-ligand complexes, and large benchmarking sets of actives and decoys. Notably, we designed a unique workflow of two-layer classifiers based on the concept of ensemble learning and applied it to the construction of PFEs for all of the targets. Through extensive benchmarking studies, we demonstrated that (1) coupling PFE with Chemgauss4 significantly improves the early enrichment of Chemgauss4 itself, and (2) PFEs show greater consistency in boosting early enrichment, and larger overall enrichment, than our prior PFs. In addition, we analyzed the pairwise topological similarities among the cognate ligands used to construct the PFEs and found that it is the higher chemical diversity of the cognate ligands that leads to the improved performance of PFEs. Taken together, these results show that incorporating knowledge from diverse protein-ligand interfaces through ensemble modeling can enhance the screening competence of SBVS scoring functions.
NASA Astrophysics Data System (ADS)
Zhang, Shuwen; Li, Haorui; Zhang, Weidong; Qiu, Chongjian; Li, Xin
2005-11-01
The paper investigates the ability to retrieve the true soil moisture profile by assimilating near-surface soil moisture into a soil moisture model with an ensemble Kalman filter (EnKF) assimilation scheme, including the effects of ensemble size, update interval, and nonlinearities on the profile retrieval, the time required for full retrieval of the soil moisture profile, and the possible influence of the depth of the soil moisture observation. These questions are addressed in a desktop study using synthetic data. The “true” soil moisture profiles are generated from the soil moisture model under a boundary condition of 0.5 cm d⁻¹ evaporation. To test the assimilation schemes, the model is initialized with a poor initial guess of the soil moisture profile; tests with different ensemble sizes show that an ensemble of 40 members is enough to represent the covariance of the model forecasts. The results are also compared with those from the direct insertion assimilation scheme, showing that the EnKF is superior: for hourly observations, the soil moisture profile is retrieved within 16 h, compared with 12 days or more for direct insertion. For daily observations, the true soil moisture profile is reached in about 15 days with the EnKF, whereas direct insertion fails to approximate the true moisture within 18 days. Observation depth is found to have no significant effect on profile retrieval time for the EnKF. Nonlinearities have some negative influence on the optimal estimates of the soil moisture profile, but the effect is not severe.
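The core of the scheme above is the EnKF analysis step, in which a single near-surface observation updates the whole profile through the ensemble covariances. A hedged sketch with 40 members, as in the study; the profile size, prior values, and error levels are illustrative only:

```python
# One perturbed-observation EnKF analysis step: a surface-layer soil
# moisture observation corrects the full profile via sample covariances.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_ens = 5, 40                       # 40 members, as in the study
H = np.zeros((1, n_layers)); H[0, 0] = 1.0    # observe the top layer only
R = np.array([[0.02 ** 2]])                   # observation error variance

# Forecast ensemble: a wet-biased prior profile plus random spread.
prior = 0.30 + 0.02 * rng.standard_normal((n_layers, n_ens))
y_obs = 0.20                                  # observed surface moisture

Xm = prior.mean(axis=1, keepdims=True)
A = prior - Xm                                # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                    # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain

# Perturbed-observation update of each member.
y_pert = y_obs + np.sqrt(R[0, 0]) * rng.standard_normal(n_ens)
analysis = prior + K @ (y_pert[None, :] - H @ prior)
```

Direct insertion, by contrast, would only overwrite the observed top layer; the gain `K` is what propagates the surface information downward.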
Enhancing coronary Wave Intensity Analysis robustness by high-order central finite differences.
Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack
2014-09-01
Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified cWIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. Clinical application of cWIA has, however, been limited by technical challenges, including a lack of standardization across studies and the sensitivity of the derived indices to the processing parameters. Specifically, a critical step in cWIA is noise removal when evaluating derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter to reduce the high-frequency acquisition noise. We first analyse the impact of the filter parameter selection on cWIA output and on the derived clinical metrics (integral areas and peaks of the major waves). The sensitivity analysis is performed both with the filter used as a differentiator to calculate the signals' time derivatives and with the filter applied to smooth the ensemble-averaged waveforms. Furthermore, the power spectrum of the ensemble-averaged waveforms contains little high-frequency content, which motivated us to propose an alternative approach: computing the time derivatives of the acquired waveforms with a central finite difference scheme. The cWIA output, and consequently the derived clinical metrics, are significantly affected by the filter parameters, irrespective of whether the filter is used for smoothing or differentiation. The proposed approach is parameter-free and, when applied to 10 in vivo human datasets and 50 in vivo animal datasets, enhances the robustness of cWIA by significantly reducing the outcome variability (by 60%).
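A parameter-free central-difference differentiator of the kind proposed above can be sketched in a few lines. The fourth-order stencil and the test waveform are illustrative assumptions; the paper does not specify the stencil order used.

```python
# High-order central finite differences as a parameter-free alternative
# to a Savitzky-Golay differentiator for smooth ensemble-averaged signals.
import numpy as np

def central_derivative(y, dt):
    """4th-order central differences in the interior, crude one-sided
    differences at the two endpoints on each side."""
    y = np.asarray(y, dtype=float)
    d = np.empty_like(y)
    d[2:-2] = (y[:-4] - 8 * y[1:-3] + 8 * y[3:-1] - y[4:]) / (12 * dt)
    d[:2] = (y[1:3] - y[0:2]) / dt      # forward difference at the start
    d[-2:] = (y[-2:] - y[-3:-1]) / dt   # backward difference at the end
    return d

dt = 1e-3
t = np.arange(0, 1, dt)
p = np.sin(2 * np.pi * t)               # stand-in ensemble-averaged waveform
dp = central_derivative(p, dt)
exact = 2 * np.pi * np.cos(2 * np.pi * t)
```

Because the ensemble-averaged waveform has little high-frequency content, the stencil recovers the derivative accurately without any smoothing parameter to tune.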
Nonlinear data assimilation using synchronization in a particle filter
NASA Astrophysics Data System (ADS)
Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan
2017-04-01
Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is the particle filter, which represents the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. Its performance can be improved by exploiting the freedom in choosing the proposal density. A potential choice of proposal density comes from synchronisation theory, in which one tries to synchronise the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps the growth of instabilities on the synchronisation manifold. When only part of the system is observed, synchronisation can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, an ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar, is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed with the Lorenz-96 model for 20-, 100- and 1000-dimensional systems. The results show state-averaged synchronisation errors smaller than the observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states towards the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles and thus avoids filter degeneracy by construction. Promising results on low- and high-dimensional Lorenz-96 models will be shown, and the pros and cons of these new ideas discussed.
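The one-way coupling idea is simple to demonstrate: an extra damping term proportional to the model-observation mismatch is added to the model equations. The sketch below assumes a fully observed Lorenz-96 system, forward-Euler time stepping, and a made-up coupling strength; it is not the authors' partially observed, time-embedded configuration.

```python
# One-way synchronisation: a nudging term -k*(model - obs) damps error
# growth on the synchronisation manifold. Fully observed case for clarity.
import numpy as np

def l96_step(x, dt, F=8.0):
    # Lorenz-96 tendency dx_i = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    # advanced with forward Euler for brevity.
    dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * dx

rng = np.random.default_rng(1)
n, dt, k = 20, 0.005, 5.0                      # k: coupling strength
truth = rng.standard_normal(n)
model = truth + 0.5 * rng.standard_normal(n)   # de-synchronised start

err0 = np.linalg.norm(model - truth)
for _ in range(2000):
    obs = truth                                # "observations" of the truth
    truth = l96_step(truth, dt)
    # Model equations plus the coupling (nudging) term:
    model = l96_step(model, dt) - dt * k * (model - obs)
err1 = np.linalg.norm(model - truth)           # shrinks as states sync
```

With the coupling strength exceeding the leading Lyapunov growth rate, the model trajectory is steered onto the truth; with partial observations, the time embedding discussed in the abstract replaces the direct coupling on unobserved components.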
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2014-05-01
In this study, we systematically assess the impact of different initialisation procedures on the predictability of sea ice in the Southern Ocean. The initialisation strategies are based on three data assimilation methods: nudging, the particle filter with sequential resampling, and the nudging proposal particle filter. An Earth-system model of intermediate complexity is used to perform hindcast simulations in a perfect-model approach. The predictability of Southern Ocean sea ice is estimated through two aspects: the spread of the hindcast ensemble, indicating the uncertainty of the ensemble, and the correlation between the ensemble mean and the pseudo-observations, used to assess the accuracy of the prediction. Our results show that, at decadal timescales, more sophisticated data assimilation methods, as well as denser pseudo-observations used to initialise the hindcasts, decrease the spread of the ensemble but only slightly improve the accuracy of the prediction of sea ice in the Southern Ocean. Overall, predictability at interannual timescales is limited to, at most, three years ahead. At multi-decadal timescales, there is a clear improvement in the correlation of the trend in sea ice extent between the hindcasts and the pseudo-observations when the initialisation takes the pseudo-observations into account. The correlation reaches values larger than 0.5 and is due to the inertia of the ocean, showing the importance of the quality of the initialisation below the sea ice.
NASA Astrophysics Data System (ADS)
Liu, Danian; Zhu, Jiang; Shu, Yeqiang; Wang, Dongxiao; Wang, Weiqiang; Cai, Shuqun
2018-06-01
The Northwestern Tropical Pacific Ocean (NWTPO) mooring observing system, comprising 15 moorings, was established in 2013 to provide velocity profile data. Observing system simulation experiments (OSSEs) were carried out in a pilot study to assess the ability of the observing system to monitor intraseasonal variability, with idealized "mooring-observed" velocities assimilated using Ensemble Optimal Interpolation (EnOI) based on the Regional Oceanic Modeling System (ROMS). Because the errors between the control and "nature" runs have a mesoscale structure, a random ensemble derived from 20-90-day bandpass-filtered nine-year model outputs proved more appropriate for the NWTPO mooring array assimilation than one derived from a 30-day running mean. The simulation of the intraseasonal currents in the North Equatorial Current (NEC), North Equatorial Countercurrent (NECC), and Equatorial Undercurrent (EUC) areas can be improved by assimilating velocity profiles using the 20-90-day bandpass-filtered ensemble. The root mean square errors (RMSEs) of the intraseasonal zonal (U) and meridional (V) velocities above 500 m depth within the study area (0°N-18°N, 122°E-147°E) were reduced by 15.4% and 16.9%, respectively. Improvements were greatest in the downstream area of the NEC mooring transect, where the RMSEs of the intraseasonal velocities above 500 m were reduced by more than 30%. Assimilating velocity profiles can also have a positive impact on the simulation and forecast of thermohaline structure and sea level anomalies in the ocean.
Efficient Data Assimilation Algorithms for Bathymetry Applications
NASA Astrophysics Data System (ADS)
Ghorbanidehno, H.; Kokkinaki, A.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.
2016-12-01
Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of ship-based surveys for bathymetry estimation have encouraged the use of remote sensing monitoring. Data assimilation methods combine monitoring data and models of nearshore dynamics to estimate the unknown bathymetry and the corresponding uncertainties. Existing applications have been limited to the basic Kalman Filter (KF) and the Ensemble Kalman Filter (EnKF): the former can only be applied to low-dimensional problems because of its computational cost, while the latter often suffers from ensemble collapse and uncertainty underestimation. This work explores the use of different variants of the Kalman Filter for bathymetry applications. In particular, we compare the performance of the EnKF to the Unscented Kalman Filter and the Hierarchical Kalman Filter, both of which are KF variants for nonlinear problems. The objective is to identify which method can better handle the nonlinearities of nearshore physics while also having a reasonable computational cost. We present two applications. First, the bathymetry of a synthetic one-dimensional cross section normal to the shore is estimated from wave speed measurements. Second, real remote measurements with unknown error statistics are used and compared to in situ bathymetric survey data collected at the USACE Field Research Facility in Duck, NC. We evaluate the information content of different data sets and explore the impact of measurement error and nonlinearities.
Localization of a variational particle smoother
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Hodyss, D.; Poterjoy, J.
2017-12-01
Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight localization. We test varPS on the Lorenz'96 model of dimensions n = 40, n = 400, and n = 2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms the EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.
[Simulation of cropland soil moisture based on an ensemble Kalman filter].
Liu, Zhao; Zhou, Yan-Lian; Ju, Wei-Min; Gao, Ping
2011-11-01
By using an ensemble Kalman filter (EnKF) to assimilate observed soil moisture data, the modified boreal ecosystem productivity simulator (BEPS) model was adopted to simulate the dynamics of soil moisture in winter wheat root zones at Xuzhou Agro-meteorological Station, Jiangsu Province, China, during the growing seasons of 2000-2004. After assimilation of the observed data, the determination coefficient, root mean square error, and average absolute error of the simulated soil moisture were in the ranges of 0.626-0.943, 0.018-0.042, and 0.021-0.041, respectively, a significant improvement in simulation precision compared with the results before assimilation, indicating the applicability of data assimilation to improving the simulation of soil moisture. The single-point experimental results showed that the errors in the forcing data and observations, and the frequency and soil depth of the assimilated observations, all had obvious effects on the simulated soil moisture.
Dynamic State Estimation and Parameter Calibration of DFIG based on Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Rui; Huang, Zhenyu; Wang, Shaobu
2015-07-30
With the growing interest in the application of wind energy, the doubly fed induction generator (DFIG) plays an essential role in the industry today. To deal with the increasing stochastic variations introduced by intermittent wind resources and responsive loads, dynamic state estimation (DSE) is introduced in power systems with DFIGs. However, this dynamic analysis sometimes fails because the parameters of the DFIGs are not accurate enough. To solve this problem, an ensemble Kalman filter (EnKF) method is proposed for the state estimation and parameter calibration tasks. In this paper, a DFIG is modeled and implemented with the EnKF method. Sensitivity analysis is demonstrated with respect to measurement noise, initial state errors, and parameter errors. The results indicate that the EnKF method has robust performance on the state estimation and parameter calibration of DFIGs.
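Joint state estimation and parameter calibration with an EnKF is commonly done by augmenting the state vector with the uncertain parameters. The sketch below uses a scalar toy system x' = −a·x in place of the DFIG dynamics; the system, noise levels, and biased parameter guess are all illustrative assumptions.

```python
# Augmented-state EnKF: the unknown parameter a is appended to the state
# and calibrated from noisy observations of the state x.
import numpy as np

rng = np.random.default_rng(2)
dt, a_true = 0.1, 0.5
n_ens, n_steps = 50, 40
R = 0.02 ** 2                                   # observation error variance

x_true = 1.0
# Ensemble rows: [state x, parameter a]; the parameter guess is biased high.
ens = np.vstack([1.0 + 0.05 * rng.standard_normal(n_ens),
                 1.5 + 0.3 * rng.standard_normal(n_ens)])

for _ in range(n_steps):
    x_true = x_true - dt * a_true * x_true      # truth run
    y = x_true + np.sqrt(R) * rng.standard_normal()
    # Forecast: each member evolves with its own parameter; a is static.
    ens[0] = ens[0] - dt * ens[1] * ens[0]
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)
    K = P[:, 0] / (P[0, 0] + R)                 # gain for H = [1, 0]
    y_pert = y + np.sqrt(R) * rng.standard_normal(n_ens)
    ens += np.outer(K, y_pert - ens[0])         # updates x AND a

a_estimate = ens[1].mean()
```

The cross-covariance between state and parameter in `P` is what lets state observations correct the parameter, which is the mechanism behind the calibration described in the paper.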
The spectrotemporal filter mechanism of auditory selective attention
Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.
2013-01-01
While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126
NASA Astrophysics Data System (ADS)
Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong
2017-07-01
This study examines the impacts of assimilating radar radial velocity (Vr) data on the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques: the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. In both variants the ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance. The sensitivity of the analyses and forecasts to the two ensemble generation techniques is then investigated. We find that the EnVar system is rather stable across ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar ensemble perturbations include slightly less organized spatial structures than those of ETKF-EnVar, the latter being constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location through the hurricane-specific error covariance. On average, the analyses and forecasts from ETKF-EnVar have slightly smaller errors than those from EDA-EnVar in terms of track, intensity, and precipitation; moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.
Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.
2014-07-01
The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include ever more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for parameter calibration based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, parameter estimation with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
A balanced Kalman filter ocean data assimilation system with application to the South Australian Sea
NASA Astrophysics Data System (ADS)
Li, Yi; Toumi, Ralf
2017-08-01
In this paper, an Ensemble Kalman Filter (EnKF) based regional ocean data assimilation system is developed and applied to the South Australian Sea. The system combines the data assimilation algorithms provided by the NCAR Data Assimilation Research Testbed (DART) with the Regional Ocean Modelling System (ROMS). We describe the first implementation in DART of a physical balance operator (temperature-salinity, hydrostatic, and geostrophic balance), which reduces the spurious waves that may be introduced during the data assimilation process. The effect of the balance operator is validated in both an idealised shallow-water model and a real-case ROMS study. In the shallow-water model, the geostrophic balance operator eliminates spurious ageostrophic waves and produces better sea surface height (SSH) and velocity analyses and forecasts; its impact increases as the sea surface height and wind stress increase. In the real case, satellite-observed sea surface temperature (SST) and SSH are assimilated in the South Australian Sea with 50 ensemble members using the Ensemble Adjustment Kalman Filter (EAKF). Assimilating SSH and SST each enhances the estimation of the corresponding field over the entire domain. Assimilation with the balance operator produces a more realistic simulation of surface currents and of the subsurface temperature profile. The best improvement is obtained when only SSH is assimilated with the balance operator. A case study with a storm suggests that the benefit of the balance operator is particularly important under high wind stress conditions. Implementing the balance operator could be of general benefit to ocean data assimilation systems.
Calibration of a Land Subsidence Model Using InSAR Data via the Ensemble Kalman Filter.
Li, Liangping; Zhang, Meijing; Katzenstein, Kurt
2017-11-01
The application of interferometric synthetic aperture radar (InSAR) has increasingly been used to improve capabilities to model land subsidence in hydrogeologic studies. A number of investigations over the last decade show how spatially detailed time-lapse images of ground displacement can be utilized to advance our understanding and improve predictions. In this work, we use simulated land subsidence as observed measurements, mimicking InSAR data, to inversely infer inelastic specific storage in a stochastic framework. The inelastic specific storage is treated as a random variable and modeled using a geostatistical method, so that detailed variations in space can be represented and the uncertainties of both the characterization of specific storage and the prediction of land subsidence can be assessed. The ensemble Kalman filter (EnKF), a real-time data assimilation algorithm, is used to inversely calibrate a land subsidence model by matching simulated subsidence with InSAR data. The performance of the EnKF is demonstrated in a synthetic example in which surface deformations simulated from a reference field are taken as InSAR data for inverse modeling. The results indicate that (1) the EnKF can successfully calibrate a land subsidence model with InSAR data: the estimation of inelastic specific storage is improved, and the uncertainty of prediction is reduced, when all the data are accounted for; and (2) if the same ensemble is used to estimate the Kalman gain, the analysis errors can cause filter divergence; it is thus essential to include localization in the EnKF for InSAR data assimilation. © 2017, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Wang, Yuanbing; Min, Jinzhong; Chen, Yaodeng; Huang, Xiang-Yu; Zeng, Mingjian; Li, Xin
2017-01-01
This study evaluates the performance of a three-dimensional variational (3DVar) system and a hybrid data assimilation system using time-lagged ensembles (TLEn-Var) in a heavy rainfall event. The time-lagged ensembles are constructed by sampling from a moving 3-h time window along a model trajectory, which is economical and easy to implement. The proposed hybrid system introduces flow-dependent error covariance derived from the time-lagged ensemble into the variational cost function without significantly increasing the computational cost. Single-observation tests are performed to document the characteristics of the hybrid system, and the sensitivity of precipitation forecasts to the ensemble covariance weight and localization scale is investigated. Additionally, TLEn-Var is evaluated against ETKF (ensemble transform Kalman filter)-based hybrid assimilation within a continuously cycling framework in which new hybrid analyses are produced every 3 h over 10 days. The 24-h accumulated precipitation, moisture, and wind are compared between 3DVar and the time-lagged-ensemble hybrid. Results show that model states and precipitation forecast skill are improved by the hybrid assimilation relative to 3DVar; the simulation of precipitable water and the structure of the wind also improve, with cyclonic wind increments generated near the rainfall center leading to a better precipitation forecast. This study indicates that hybrid data assimilation using time-lagged ensembles is a viable alternative or supplement for weather service agencies that lack the computing resources to run large ensembles with complex models.
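Constructing the time-lagged ensemble and blending its covariance into a hybrid background term can be sketched as follows. The synthetic "trajectory", hourly sampling, and 50/50 blending weight are illustrative assumptions, not the study's configuration.

```python
# Time-lagged ensemble: states sampled from a moving window along one
# model trajectory provide perturbations for a flow-dependent covariance,
# blended with a static B in a hybrid background term.
import numpy as np

rng = np.random.default_rng(3)
n_grid = 10
# Hourly model states over 7 h; the analysis time sits at the window centre.
traj = np.cumsum(rng.standard_normal((7, n_grid)), axis=0)

t_analysis = 3
window = traj[t_analysis - 3 : t_analysis + 4]   # +/- 3 h of hourly states
perts = window - window.mean(axis=0)             # time-lagged perturbations
B_ens = perts.T @ perts / (len(window) - 1)      # flow-dependent covariance

# In a hybrid cost function, B_ens is blended with a static climatological B:
beta = 0.5
B_static = np.eye(n_grid)
B_hybrid = beta * B_static + (1 - beta) * B_ens
```

No extra ensemble forecasts are needed, which is the economy the study points to: the "members" are free by-products of a single model run.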
Benefits of an ultra-large and multiresolution ensemble for estimating available wind power
NASA Astrophysics Data System (ADS)
Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik
2016-04-01
In this study we investigate the benefits of an ultra-large ensemble of up to 1000 members with multiple nesting down to a target horizontal resolution of 1 km. The ensemble serves as a basis for detecting extreme-error events in wind power forecasting. The forecast quantity is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 h). Current wind power forecast systems already rest on NWP ensemble models, but so far only calibrated ensembles from meteorological institutions serve as input, with limited spatial resolution (~10-80 km) and member number (~50), and perturbations tailored to the specific merits of wind power production are missing. As a result, infrequent single extreme-error events go undetected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting (WRF) model. Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter, improving model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations connected to extreme-error events are located, and corresponding perturbation techniques are applied. The demanding computational effort is met by utilising the supercomputer JUQUEEN at Forschungszentrum Juelich.
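One update cycle of a sequential importance resampling filter, as used above to keep the ensemble close to observations, can be sketched in a few lines. The Gaussian observation model and the wind-speed numbers are illustrative assumptions.

```python
# Sequential importance resampling (SIR): weight members by the likelihood
# of an observation, then resample in proportion to the weights.
import numpy as np

rng = np.random.default_rng(4)
obs, obs_err = 10.0, 1.0                        # e.g. hub-height wind speed
members = rng.normal(8.0, 3.0, size=200)        # forecast ensemble

# Gaussian likelihood weights, normalized to sum to one.
w = np.exp(-0.5 * ((members - obs) / obs_err) ** 2)
w /= w.sum()

# Systematic resampling: duplicate likely members, drop unlikely ones,
# so the ensemble tracks the observations while spread is maintained
# by the next forecast step's stochastic perturbations.
positions = (np.arange(200) + rng.random()) / 200
idx = np.searchsorted(np.cumsum(w), positions)
resampled = members[np.clip(idx, 0, 199)]
```

After resampling, the stochastic model perturbations (e.g. the backscatter scheme) re-inject spread before the next observation time.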
NASA Astrophysics Data System (ADS)
Abaza, Mabrouk; Anctil, François; Fortin, Vincent; Perreault, Luc
2017-12-01
Meteorological and hydrological ensemble prediction systems are imperfect, and their outputs can often be improved through a statistical processor, raising the question of whether to use both processors (meteorological and hydrological), only one of them, or neither. This experiment compares the predictive distributions from four hydrological ensemble prediction systems (H-EPS) utilising the Ensemble Kalman filter (EnKF) probabilistic sequential data assimilation scheme. They differ in whether they include the Distribution Based Scaling (DBS) method for post-processing meteorological forecasts and the ensemble Bayesian Model Averaging (ensemble BMA) method for hydrological forecast post-processing. The experiment is implemented on three large watersheds and relies on the combination of two meteorological reforecast products: the 4-member Canadian reforecasts from the Canadian Centre for Meteorological and Environmental Prediction (CCMEP) and the 10-member American reforecasts from the National Oceanic and Atmospheric Administration (NOAA), giving 14 members at each time step. Results show that all four tested H-EPS lead to quite similar resolution and sharpness values, with an advantage to DBS + EnKF. The ensemble BMA is unable to compensate for any bias left in the precipitation ensemble forecasts; on the other hand, it succeeds in calibrating ensemble members that are otherwise under-dispersed. If reliability is preferred over resolution and sharpness, DBS + EnKF + ensemble BMA performs best, making use of both processors in the H-EPS. Conversely, for enhanced resolution and sharpness, DBS is the preferred method.
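Distribution-based post-processing of the kind DBS performs can be illustrated with empirical quantile mapping: each forecast value is replaced by the observed-climatology value at the same quantile. This is a simplification of DBS, which fits parametric (e.g. gamma) distributions to precipitation; the climatologies below are synthetic.

```python
# Empirical quantile mapping in the spirit of DBS: map biased forecast
# values onto the observed climatological distribution.
import numpy as np

rng = np.random.default_rng(5)
obs_clim = rng.gamma(2.0, 3.0, size=5000)        # "observed" climatology
fcst_clim = 1.4 * rng.gamma(2.0, 3.0, size=5000) # biased model climatology

def quantile_map(x, model_clim, observed_clim):
    """Replace each forecast value by the observed-climatology value at
    the same empirical quantile."""
    q = np.searchsorted(np.sort(model_clim), x) / len(model_clim)
    return np.quantile(observed_clim, np.clip(q, 0.0, 1.0))

raw = 1.4 * rng.gamma(2.0, 3.0, size=500)        # new biased forecasts
corrected = quantile_map(raw, fcst_clim, obs_clim)
```

Mapping removes the systematic bias that, per the abstract, ensemble BMA alone cannot compensate for, which is why DBS + EnKF + ensemble BMA performs best on reliability.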
Scalable and balanced dynamic hybrid data assimilation
NASA Astrophysics Data System (ADS)
Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa
2017-04-01
Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. 
However, this can be avoided by isolating the forecast model completely from the minimization process: the latter is implemented as a wrapper code whose only link to the model is launching many totally independent model runs in parallel, each of which is itself a parallel model run. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs, which requires a very efficient and low-latency communication network. However, the volume of data communicated is small, and the intervening minimization steps are only 3D-Var, so their computational load is negligible compared with the fully parallel model runs. We present example results of the fully scalable VEnKF with the 4D lake and shallow-sea model COHERENS, simultaneously assimilating continuous in situ measurements at a single point and infrequent satellite images that cover a whole lake.
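The wrapper pattern described above can be sketched in a few lines. This is a hypothetical stand-in, with a toy model function in place of COHERENS and Python's `multiprocessing` in place of an MPI launcher, not the authors' implementation:

```python
# Sketch of the wrapper pattern: the minimizer only scatters initial
# states, launches fully independent model runs in parallel, and
# gathers the final states. The "model" is a toy damped-shift map.
import numpy as np
from multiprocessing import Pool

def model_run(state):
    """One independent forecast; in practice itself a parallel job."""
    return 0.99 * (0.5 * state + 0.5 * np.roll(state, 1))

def propagate_ensemble(members, workers=None):
    if workers:
        with Pool(workers) as pool:              # scatter ...
            forecasts = pool.map(model_run, members)  # independent runs
    else:
        forecasts = [model_run(s) for s in members]   # serial fallback
    return np.array(forecasts)                   # ... gather

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ens = [rng.standard_normal(8) for _ in range(16)]
    fc = propagate_ensemble(ens, workers=4)
    print(fc.shape)  # (16, 8)
```

Because each member run touches no shared state, the wall-clock cost of a propagation step is essentially one model run plus the gather/scatter of state snapshots, which is the scalability argument made above.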
NASA Astrophysics Data System (ADS)
Weerts, A.; Wood, A. W.; Clark, M. P.; Carney, S.; Day, G. N.; Lemans, M.; Sumihar, J.; Newman, A. J.
2014-12-01
In the US, the forecasting approach used by the NWS River Forecast Centers and other regional organizations such as the Bonneville Power Administration (BPA) or the Tennessee Valley Authority (TVA) has traditionally involved manual model input and state modifications made by forecasters in real time. This process is time consuming and requires expert knowledge and experience. The benefits of automated data assimilation (DA) as a strategy for avoiding manual modification approaches have been demonstrated in research studies (e.g., Seo et al., 2009). This study explores the use of various ensemble DA algorithms within the operational platform used by TVA. The final goal is to identify a DA algorithm that will guide the manual modification process used by TVA forecasters and realize considerable time gains within the forecast process, without loss of quality or even with enhanced quality. We evaluate the usability of several popular DA algorithms that have so far been applied on a limited basis in operational hydrology. To this end, Delft-FEWS was wrapped (via piwebservice) in OpenDA to enable execution of FEWS workflows (and the chained models within these workflows, including SACSMA, UNITHG and LAGK) in a DA framework. Within OpenDA, several filter methods are available; we considered four algorithms: the particle filter (RRF), the Ensemble Kalman Filter, and the asynchronous variants of both (the Asynchronous Ensemble Kalman Filter and Particle Filter). Retrospective simulation results for one location and algorithm (AEnKF) are illustrated in Figure 1. The initial results are promising. We will present verification results for these methods (and possibly more) for a variety of sub-basins in the Tennessee River basin. Finally, we will offer recommendations for guided DA based on our results. References: Seo, D.-J., L. Cajina, R. Corby and T. Howieson, 2009: Automatic State Updating for Operational Streamflow Forecasting via Variational Data Assimilation, Journal of Hydrology, 367, 255-275. Figure 1.
Retrospectively simulated streamflow for the headwater basin above Powell River at Jonesville (red is observed flow, blue is simulated flow without DA, black is simulated flow with DA)
Ensembl Plants: Integrating Tools for Visualizing, Mining, and Analyzing Plant Genomic Data.
Bolser, Dan M; Staines, Daniel M; Perry, Emily; Kersey, Paul J
2017-01-01
Ensembl Plants ( http://plants.ensembl.org ) is an integrative resource presenting genome-scale information for 39 sequenced plant species. Available data include genome sequence, gene models, functional annotation, and polymorphic loci; for the latter, additional information including population structure, individual genotypes, linkage, and phenotype data is available for some species. Comparative data are also available, including genomic alignments and "gene trees," which show the inferred evolutionary history of each gene family represented in the resource. Access to the data is provided through a genome browser, which incorporates many specialist interfaces for different data types, through a variety of programmatic interfaces, and via a specialist data mining tool supporting rapid filtering and retrieval of bulk data. Genomic data from many non-plant species, including those of plant pathogens, pests, and pollinators, are also available via the same interfaces through other divisions of Ensembl. Ensembl Plants is updated 4-6 times a year and is developed in collaboration with our international partners in the Gramene ( http://www.gramene.org ) and transPLANT ( http://www.transplantdb.eu ) projects.
NASA Astrophysics Data System (ADS)
Flampouris, Stylianos; Penny, Steve; Alves, Henrique
2017-04-01
The National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA) provides the operational wave forecast for the US National Weather Service (NWS). As part of continuous efforts to improve its forecasts, NCEP is developing an ensemble-based data assimilation system built on the local ensemble transform Kalman filter (LETKF), the existing operational global wave ensemble system (GWES), and satellite and in-situ observations. While the LETKF was designed for atmospheric applications (Hunt et al. 2007) and has been adapted for several ocean models (e.g., Penny 2016), this is the first time it has been applied to ocean wave assimilation. This new wave assimilation system provides a global estimate of the surface sea state and its approximate uncertainty. It achieves this by analyzing the 21-member ensemble of significant wave height provided by GWES every 6 h. Observations from four altimeters and all available in-situ measurements are used in this analysis. The analysis of significant wave height is used to initialize the next forecasting cycle; the data assimilation system is currently being tested for operational use.
NASA Astrophysics Data System (ADS)
Yao, Lei; Wang, Zhenpo; Ma, Jun
2015-10-01
This paper proposes an entropy-based method for detecting connection faults in lithium-ion battery packs for electric vehicles. During electric vehicle operation, factors such as road conditions, driving habits, and vehicle performance subject the batteries to vibration, which can easily cause loose or virtual connections between cells. Voltage fluctuation data were obtained from battery charge-discharge experiments under simulated vibration. A discrete cosine filtering method is adopted to characterize the system noise in the voltage data recorded at different vibration frequencies. The filtered experimental data are then analyzed using local Shannon entropy, ensemble Shannon entropy, and sample entropy, and the three measures are compared to identify the most suitable entropy-based indicator of connection faults. The experiments show that ensemble Shannon entropy can identify the time and location of a battery connection failure in real time. Beyond the electric-vehicle industry, the method can also be applied in other domains with complex vibration environments.
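As an illustration of the entropy measures discussed, here is a minimal sketch of local Shannon entropy over a sliding voltage window. The window length, bin count, and fault model are arbitrary assumptions for the sketch, not the paper's settings:

```python
# Local Shannon entropy over a sliding window of voltage samples.
# A connection fault injects extra fluctuation, which spreads the
# samples across more histogram bins and raises the local entropy.
import numpy as np

def local_shannon_entropy(signal, window=50, bins=10):
    # fixed bin edges over the whole record, so entropy reflects scale
    edges = np.linspace(signal.min(), signal.max(), bins + 1)
    ents = []
    for i in range(len(signal) - window + 1):
        hist, _ = np.histogram(signal[i:i + window], bins=edges)
        p = hist[hist > 0] / window          # window probabilities
        ents.append(-np.sum(p * np.log2(p)))
    return np.array(ents)

rng = np.random.default_rng(1)
v = 3.7 + 0.002 * rng.standard_normal(400)    # healthy cell voltage
v[250:] += 0.05 * rng.standard_normal(150)    # fault: larger fluctuation
e = local_shannon_entropy(v)
print(e[:150].mean(), e[-100:].mean())        # entropy jumps after fault
```

The same sliding computation applied per cell across the pack is the sense in which such a measure can localize the faulty connection.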
The Ensemble Kalman Filter for Groundwater Plume Characterization: A Case Study.
Ross, James L; Andersen, Peter F
2018-04-17
The Kalman filter is an efficient data assimilation tool to refine an estimate of a state variable using measured data and the variable's correlations in space and/or time. The ensemble Kalman filter (EnKF) (Evensen 2004, 2009) is a Kalman filter variant that employs Monte Carlo analysis to define the correlations that help to refine the updated state. While use of EnKF in hydrology is somewhat limited, it has been successfully applied in other fields of engineering (e.g., oil reservoir modeling, weather forecasting). Here, EnKF is used to refine a simulated groundwater tetrachloroethylene (TCE) plume that underlies the Tooele Army Depot-North (TEAD-N) in Utah, based on observations of TCE in the aquifer. The resulting EnKF-based assimilated plume is simulated forward in time to predict future plume migration. The correlations that underpin EnKF updating implicitly contain information about how the plume developed over time under the influence of complex site hydrology and variable source history, as they are predicated on multiple realizations of a well-calibrated numerical groundwater flow and transport model. The EnKF methodology is compared to an ordinary kriging-based assimilation method with respect to the accurate representation of plume concentrations in order to determine the relative efficacy of EnKF for water quality data assimilation. © 2018, National Ground Water Association.
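For reference, the generic stochastic EnKF analysis step that underlies applications like this plume update can be sketched as follows, with synthetic dimensions and a toy observation operator, not the authors' code:

```python
# Minimal stochastic EnKF analysis step (perturbed observations).
# X: (n, m) ensemble of states, y: (p,) observations,
# H: (p, n) observation operator, R: (p, p) observation error cov.
import numpy as np

def enkf_update(X, y, H, R, rng):
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (m - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturb observations so the analysis spread stays consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(2)
n, m, p = 10, 40, 3
X = rng.standard_normal((n, m))                    # prior ensemble
H = np.zeros((p, n)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0
R = 0.1 * np.eye(p)
Xa = enkf_update(X, np.array([1.0, -0.5, 0.2]), H, R, rng)
print(Xa.shape)  # (10, 40)
```

The Monte Carlo covariance `P` is what carries the spatial correlations that, in the study above, encode how the plume developed under the site's hydrology.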
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loève (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion. Copyright © 2016 Elsevier Inc. All rights reserved.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variable range is simply distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. For the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
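The effect of moving the truncated mass into a delta at the bound can be checked numerically. This sketch assumes a lower bound c (e.g. a non-negative concentration) and uses standard truncated-normal formulas; it illustrates the construction, not the abstract's full filter:

```python
# Gaussian N(mu, sigma^2) truncated at a lower bound c, with the
# removed mass placed as a point mass (delta) at c, so that
# P(x = c) > 0 by construction, unlike equal redistribution.
import numpy as np
from math import erf, sqrt

def truncated_gaussian_with_delta(mu, sigma, c):
    """Return (delta mass at c, mean of the delta+tail mixture)."""
    z = (c - mu) / sigma
    cdf = 0.5 * (1 + erf(z / sqrt(2)))        # Gaussian mass below c
    pdf = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))
    # mean of the Gaussian restricted to x > c (standard formula)
    mean_tail = mu + sigma**2 * pdf / (1 - cdf)
    mixture_mean = cdf * c + (1 - cdf) * mean_tail
    return cdf, mixture_mean

dm, mm = truncated_gaussian_with_delta(mu=0.2, sigma=0.5, c=0.0)
print(dm)   # P(x = 0): about 0.34 for these numbers
print(mm)   # mixture mean, pulled above the raw mu of 0.2
```

The nonzero `dm` is exactly the property the abstract highlights: the constrained state (e.g. zero sea-ice concentration) retains finite probability.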
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling of groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations were corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) were tested against each other. Results show a high dependency of the Ensemble Kalman filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system with constant or slowly changing parameter values, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences
Rivolo, Simone; Asrress, Kaleab N.; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø.; Grøndal, Anne K.; Hønge, Jesper L.; Kim, Won Y.; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P.; Lee, Jack
2014-01-01
Background: Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter, to reduce the high frequency acquisition noise. Methods: The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power spectrum of the ensemble-averaged waveforms contains little high-frequency content, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. Results and Conclusion: The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%). PMID:25187852
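The parameter-free alternative proposed above amounts to a central finite-difference differentiator. A fourth-order version on a uniform grid might look like this (toy waveform, not the clinical data):

```python
# Fourth-order central finite difference differentiator:
# f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h),
# with lower-order np.gradient stencils only at the two edge points.
import numpy as np

def central_diff_4th(f, dt):
    d = np.empty_like(f)
    d[2:-2] = (-f[4:] + 8*f[3:-1] - 8*f[1:-3] + f[:-4]) / (12 * dt)
    d[:2] = np.gradient(f[:4], dt)[:2]       # edge fallback
    d[-2:] = np.gradient(f[-4:], dt)[-2:]
    return d

t = np.linspace(0, 1, 1001)
dt = t[1] - t[0]
p = np.sin(2 * np.pi * t)                    # surrogate smooth waveform
dp = central_diff_4th(p, dt)
err = np.max(np.abs(dp[2:-2] - 2*np.pi*np.cos(2*np.pi*t[2:-2])))
print(err)                                   # tiny for this smooth signal
```

Unlike a Savitzky-Golay differentiator, there is no window length or polynomial order to tune, which is precisely the robustness argument made in the abstract.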
NASA Astrophysics Data System (ADS)
Demirkaya, Omer
2001-07-01
This study investigates the efficacy of filtering two-dimensional (2D) projection images of Computed Tomography (CT) by nonlinear diffusion filtration to remove statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections, as well as the original noisy projections, were reconstructed using filtered backprojection (FBP) with the Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
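A minimal Perona-Malik style nonlinear anisotropic diffusion step, the family of filters evaluated above, can be sketched as follows (illustrative parameters and a toy phantom, not the paper's implementation):

```python
# One explicit Perona-Malik style diffusion scheme: diffusion is
# strong in flat (noisy) regions and suppressed across edges, via an
# edge-stopping conductance g. Periodic edges via np.roll for brevity.
import numpy as np

def diffuse(img, n_iter=20, kappa=0.3, dt=0.2):
    u = img.astype(float).copy()
    def g(d):                     # conductance: small where |d| is large
        return np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, 0) - u     # differences to the 4 neighbours
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        u += dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

rng = np.random.default_rng(3)
phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
noisy = phantom + 0.1 * rng.standard_normal(phantom.shape)
smooth = diffuse(noisy)
# noise variance in the flat interior drops; the edge survives
print(noisy[24:40, 24:40].var(), smooth[24:40, 24:40].var())
```

The edge-stopping conductance is what lets the filter suppress noise without the resolution loss noted in the abstract's profile comparison.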
NASA Astrophysics Data System (ADS)
Kunii, M.; Ito, K.; Wada, A.
2015-12-01
An ensemble Kalman filter (EnKF) using a regional mesoscale atmosphere-ocean coupled model was developed to represent the uncertainties of sea surface temperature (SST) in ensemble data assimilation strategies. The system was evaluated through data assimilation cycle experiments over a one-month period from July to August 2014, during which a tropical cyclone as well as severe rainfall events occurred. The results showed that the data assimilation cycle with the coupled model could reproduce SST distributions realistically even without updating SST and salinity during the data assimilation cycle. Therefore, atmospheric variables and radiation applied as a forcing to ocean models can control oceanic variables to some extent in the current data assimilation configuration. However, investigations of the forecast error covariance estimated in EnKF revealed that the correlation between atmospheric and oceanic variables could possibly lead to less flow-dependent error covariance for atmospheric variables owing to the difference in the time scales between atmospheric and oceanic variables. A verification of the analyses showed positive impacts of applying the ocean model to EnKF on precipitation forecasts. The use of EnKF with the coupled model system captured intensity changes of a tropical cyclone better than it did with an uncoupled atmosphere model, even though the impact on the track forecast was negligibly small.
Ensemble-Based Assimilation of Aerosol Observations in GEOS-5
NASA Technical Reports Server (NTRS)
Buchard, V.; Da Silva, A.
2016-01-01
MERRA-2 is the latest aerosol reanalysis produced at NASA's Global Modeling and Assimilation Office (GMAO) from 1979 to present. This reanalysis is based on a version of the GEOS-5 model radiatively coupled to GOCART aerosols and includes assimilation of bias-corrected Aerosol Optical Depth (AOD) from AVHRR over ocean, the MODIS sensors on both the Terra and Aqua satellites, MISR over bright surfaces, and AERONET data. In order to assimilate lidar profiles of aerosols, we are updating the aerosol component of our assimilation system to an Ensemble Kalman Filter (EnKF) type of scheme using ensembles generated routinely by the meteorological assimilation. Following the work performed with NASA's first aerosol reanalysis (MERRAero), we first validate the vertical structure of MERRA-2 assimilated aerosol fields using CALIOP data over regions of particular interest during 2008.
Bayesian Tracking of Emerging Epidemics Using Ensemble Optimal Statistical Interpolation
Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D.
2014-01-01
We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to handle epidemic data more accurately. The EnOSI and EnKF tracking methods worked well on the main body of the simulated spatial epidemic, but the EnOSI was able to detect and track a distant secondary focus of infection that the EnKF missed entirely. PMID:25113590
Bei, Naifang; Li, Guohui; Meng, Zhiyong; Weng, Yonghui; Zavala, Miguel; Molina, L T
2014-11-15
The purpose of this study is to investigate the impact of using an ensemble Kalman filter (EnKF) on air quality simulations in the California-Mexico border region on two days (May 30 and June 04, 2010) during Cal-Mex 2010. The uncertainties in ozone (O3) and aerosol simulations in the border area due to meteorological initial condition uncertainties were examined through ensemble simulations. The ensemble spread of surface O3 averaged over the coastal region was less than 10 ppb. The spreads in the nitrate and ammonium aerosols were substantial on both days, mostly caused by the large uncertainties in the surface temperature and humidity simulations. In general, the forecast initialized with the EnKF analysis (EnKF) improved the simulation of meteorological fields to some degree in the border region compared to the reference forecast initialized with NCEP analysis data (FCST) and the simulation with observation nudging (FDDA), which in turn led to more reasonable air quality simulations. The simulated surface O3 distributions by EnKF were consistently better than FCST and FDDA on both days. EnKF usually produced more reasonable simulations of nitrate and ammonium aerosols compared to the observations, but still had difficulties improving the simulations of organic and sulfate aerosols. Discrepancies between the EnKF simulations and the measurements remained considerable, particularly for sulfate and organic aerosols, indicating that there is still ample room for improvement in the present data assimilation and/or modeling systems. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-04-01
Assimilation of Surface Soil Moisture (SSM) observations obtained from remote sensing has been shown to improve streamflow prediction at different time scales of hydrological modelling. Different sensors and methods have been tested for SSM estimation, especially in the microwave region of the electromagnetic spectrum. Available observation devices include passive microwave sensors, such as the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) onboard the Aqua satellite and the Soil Moisture and Ocean Salinity (SMOS) mission. Active microwave systems include the Scatterometers (SCAT) onboard the European Remote Sensing satellites (ERS-1/2) and the Advanced Scatterometer (ASCAT) onboard the MetOp-A satellite. Data assimilation (DA) encompasses a range of techniques that have been applied in hydrology and other fields for decades, including, among others, Kalman Filtering (KF), Variational Assimilation, and Particle Filtering. From the initial KF method, different techniques were developed to suit application to different systems. The Ensemble Kalman Filter (EnKF), extensively applied in hydrological modelling, has as its main advantage the capability to deal with nonlinear model dynamics without linearizing the model equations. The objective of this study was to investigate whether data assimilation of ASCAT SSM observations through the EnKF method could improve streamflow simulation of Mediterranean catchments with the complex hydrological model TOPLATS. The DA technique was programmed in FORTRAN and applied to hourly simulations of the TOPLATS catchment model. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) was applied in its lumped version to two Mediterranean catchments of similar size, located in northern Spain (Arga, 741 km2) and central Italy (Nestore, 720 km2). The model performs separate computations of the energy and water balances.
In those balances, the soil is divided into two layers: the upper Surface Zone (SZ) and the deeper Transmission Zone (TZ). In this study, the SZ depth was fixed at 5 cm for adequate assimilation of the observed data. The available data were distributed as follows: first, the model was calibrated for the 2001-2007 period; then the 2007-2010 period was used for satellite data rescaling purposes; finally, data assimilation was applied during the validation period (2010-2013). Application of the EnKF required the following steps: 1) rescaling of satellite data, 2) transformation of the rescaled data into the Soil Water Index (SWI) through a moving average filter, where a calibrated value of T = 9 was applied, 3) generation of a 50-member ensemble through perturbation of inputs (rainfall and temperature) and three selected parameters, 4) validation of the ensemble through compliance with two criteria based on the ensemble's spread, mean square error and skill, and 5) Kalman gain calculation. In this work, three satellite data rescaling techniques were also compared: 1) Cumulative Distribution Function (CDF) matching, 2) variance matching and 3) linear least squares regression. Results showed slight improvements of hourly Nash-Sutcliffe Efficiency (NSE) in both catchments with the different rescaling methods evaluated. Larger improvements were found in terms of reduction of the seasonal simulated volume error.
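Steps 1 and 2 above can be sketched as follows. The CDF matching uses empirical quantiles; the SWI uses the common recursive exponential filter with characteristic time T = 9, which is an assumption about the exact form of the paper's moving average filter (synthetic climatologies throughout):

```python
# Step 1: CDF matching maps satellite SSM onto the model's
# climatological distribution. Step 2: recursive exponential-filter
# Soil Water Index with characteristic time T (days).
import numpy as np

def cdf_match(sat, model_ref, sat_ref):
    """Map sat values to the model climatology via empirical quantiles."""
    q = np.searchsorted(np.sort(sat_ref), sat) / len(sat_ref)
    return np.quantile(model_ref, np.clip(q, 0.0, 1.0))

def swi(ssm, t_days, T=9.0):
    """Recursive exponential filter (Wagner-style SWI)."""
    out = np.empty_like(ssm)
    out[0], K = ssm[0], 1.0
    for i in range(1, len(ssm)):
        K = K / (K + np.exp(-(t_days[i] - t_days[i - 1]) / T))
        out[i] = out[i - 1] + K * (ssm[i] - out[i - 1])
    return out

rng = np.random.default_rng(4)
sat_ref = rng.beta(2, 5, 1000)                 # satellite climatology
model_ref = 0.1 + 0.4 * rng.beta(2, 2, 1000)   # model climatology
sat_obs = rng.beta(2, 5, 30)                   # new satellite retrievals
matched = cdf_match(sat_obs, model_ref, sat_ref)
profile = swi(matched, np.arange(30.0))
print(matched.min(), matched.max(), profile.shape)
```

After rescaling, the observations live in the model's SSM range, which is what makes the innovation in the Kalman gain step (step 5) meaningful.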
Arshad, Sannia; Rho, Seungmin
2014-01-01
We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method compared to its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2015-04-01
Sea surface height, sea surface temperature and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics, with observations used in the assimilation experiments and with independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. Regarding the deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. Regarding the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement due to the assimilation is demonstrated using these validation metrics. Finally, the deterministic and probabilistic validations are analysed jointly, and their consistency and complementarity are highlighted. Highly reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.
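The CRPS used in the probabilistic validation has a standard ensemble form, CRPS = E|X - y| - (1/2)E|X - X'|, which can be computed directly (a generic sketch, not the authors' implementation):

```python
# Ensemble CRPS against a scalar observation y:
# mean absolute member-observation distance, minus half the mean
# absolute member-member distance (the ensemble's internal spread).
import numpy as np

def crps_ensemble(members, obs):
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return term1 - term2

rng = np.random.default_rng(5)
sharp = rng.normal(0.0, 0.2, 60)   # 60 members, well centred on obs
broad = rng.normal(1.0, 1.5, 60)   # biased and over-dispersed
print(crps_ensemble(sharp, 0.0), crps_ensemble(broad, 0.0))
```

CRPS rewards ensembles that are both accurate and appropriately sharp, which is why it can be decomposed into the reliability and resolution terms discussed above.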
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
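The BMA predictive pdf described above, in its default Gaussian form, is just a weighted mixture centered on the member forecasts. Here the weights and variance are fixed for illustration rather than fitted by EM over a training period as in practice:

```python
# BMA predictive density: a weighted mixture of Gaussians centred on
# (bias-corrected) member forecasts, weights = posterior model probs.
import numpy as np

def bma_pdf(x, forecasts, weights, sigma):
    x = np.atleast_1d(x)[:, None]               # (N, 1) vs (k,) members
    comps = np.exp(-0.5 * ((x - forecasts) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))
    return comps @ weights                      # weighted mixture

forecasts = np.array([2.0, 2.5, 3.5])           # three member forecasts
weights = np.array([0.5, 0.3, 0.2])             # sum to 1
xs = np.linspace(-5.0, 10.0, 3001)
pdf = bma_pdf(xs, forecasts, weights, sigma=0.8)
mass = pdf.sum() * (xs[1] - xs[0])              # numerical integral
print(mass)  # ≈ 1.0: a proper predictive density
```

The paper's contribution is to replace the fixed Gaussian components here with evolving conditional pdfs derived by particle filtering and Gaussian mixture modeling.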
Li, Xue Jian; Mao, Fang Jie; Du, Hua Qiang; Zhou, Guo Mo; Xu, Xiao Jun; Li, Ping Heng; Liu, Yu Li; Cui, Lu
2016-12-01
LAI is one of the most important observational variables in research on the carbon cycle of forest ecosystems, and it is also an important parameter for driving process-based ecosystem models. The Moso bamboo forest (MBF) and Lei bamboo forest (LBF) were selected as the study targets. Firstly, the MODIS LAI time series data during 2014-2015 were assimilated with the dual ensemble Kalman filter method. Secondly, the high-quality assimilated MBF LAI and LBF LAI were used as input datasets to drive the BEPS model for simulating the gross primary productivity (GPP), net ecosystem exchange (NEE), and total ecosystem respiration (TER) of the two types of bamboo forest ecosystem, respectively. The modeled carbon fluxes were evaluated against the observed carbon flux data, and the effects of different-quality LAI inputs on carbon cycle simulation were also studied. The assimilated LAI of MBF and LBF were significantly correlated with the observed LAI, with high R2 values of 0.81 and 0.91, respectively, and lower RMSE and absolute bias, representing a substantial improvement in the accuracy of the MODIS LAI products. Driven by the assimilated LAI, the modeled GPP, NEE, and TER were also highly correlated with the flux observation data, with R2 values of 0.66, 0.47, and 0.64 for MBF, respectively, and 0.66, 0.45, and 0.73 for LBF, respectively. The accuracy of carbon fluxes modeled with assimilated LAI was higher than that obtained with the locally adjusted cubic-spline capping method; the accuracy of modeled NEE for MBF and LBF increased by up to 11.2% and 11.8%, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu
2014-01-15
A central issue in contemporary science is the development of nonlinear data-driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
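The joint state-and-coefficient estimation idea above — augmenting the state vector with unknown model coefficients so that an ensemble Kalman filter updates both from noisy observations of the state — can be illustrated on a toy scalar AR(1) model. Everything here is a hypothetical sketch, not the paper's algorithm: the numbers are invented, and a small random-walk jitter on the parameter (a common regularization against ensemble collapse) stands in for the paper's careful treatment of noise covariance estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, q, r = 0.9, 0.2, 0.1  # true coefficient, model noise std, obs noise std
Ne, steps = 200, 300

# Joint ensemble: column 0 = state x, column 1 = unknown coefficient a
ens = np.column_stack([rng.normal(0.0, 1.0, Ne), rng.normal(0.5, 0.3, Ne)])

x_true = 1.0
for _ in range(steps):
    # Truth evolves as x <- a*x + noise; we observe x with noise
    x_true = a_true * x_true + rng.normal(0.0, q)
    y = x_true + rng.normal(0.0, r)
    # Forecast: each member propagates with its own coefficient
    ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0.0, q, Ne)
    ens[:, 1] += rng.normal(0.0, 0.005, Ne)  # parameter jitter (regularization)
    # Analysis: gain from joint sample covariance, observation operator H = [1, 0]
    P = np.cov(ens.T)
    K = P[:, 0] / (P[0, 0] + r**2)           # gain for both x and a
    innov = y + rng.normal(0.0, r, Ne) - ens[:, 0]  # perturbed observations
    ens += np.outer(innov, K)

a_est = float(ens[:, 1].mean())  # should approach a_true = 0.9
```

The cross-covariance between the state and the parameter, estimated from the ensemble, is what transfers information from the state observations into the parameter update.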
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
NASA Astrophysics Data System (ADS)
Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine
2015-12-01
A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of the Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data such as satellite-derived Sea Surface Temperature, along-track Sea Level Anomalies, ice concentrations and chlorophyll-a concentrations with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations, suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the northern part of the domain, where the ecosystem model is the most reliable, most of the clusters can be associated with Longhurst provinces, and new provinces emerge in the Arctic Ocean. However, the clusters no longer coincide with the Longhurst provinces in the Tropics due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to a significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free-run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
Gantner, Melisa E; Peroni, Roxana N; Morales, Juan F; Villalba, María L; Ruiz, María E; Talevi, Alan
2017-08-28
Breast Cancer Resistance Protein (BCRP) is an ATP-dependent efflux transporter linked to the multidrug resistance phenomenon in many diseases such as epilepsy and cancer, and a potential source of drug interactions. For these reasons, the early identification of substrates and nonsubstrates of this transporter during the drug discovery stage is of great interest. We have developed a computational nonlinear model ensemble based on conformation-independent molecular descriptors using a combined strategy of genetic algorithms, J48 decision tree classifiers, and data fusion. The best model ensemble consists of averaging the rankings of the 12 decision trees that showed the best performance on the training set, which also demonstrated good performance on the test set. It was experimentally validated using the ex vivo everted rat intestinal sac model. Five anticonvulsant drugs classified as nonsubstrates for BCRP by the model ensemble were experimentally evaluated, and none of them proved to be a BCRP substrate under the experimental conditions used, thus confirming the predictive ability of the model ensemble. The model ensemble reported here is a potentially valuable tool to be used as an in silico ADME filter in computer-aided drug discovery campaigns intended to overcome BCRP-mediated multidrug resistance issues and to prevent drug-drug interactions.
Exploring Model Error through Post-processing and an Ensemble Kalman Filter on Fire Weather Days
NASA Astrophysics Data System (ADS)
Erickson, Michael J.
The proliferation of coupling atmospheric ensemble data to models in other related fields requires a priori knowledge of atmospheric ensemble biases specific to the desired application. In that spirit, this dissertation focuses on elucidating atmospheric ensemble model bias and error through a variety of different methods specific to fire weather days (FWDs) over the Northeast United States (NEUS). Other than a handful of studies that use models to predict fire indices for single fire seasons (Mölders 2008, Simpson et al. 2014), an extensive exploration of model performance specific to FWDs has not been attempted. Two unique definitions for FWDs are proposed; one that uses pre-existing fire indices (FWD1) and another from a new statistical fire weather index (FWD2) relating fire occurrence and near-surface meteorological observations. Ensemble model verification reveals FWDs to have warmer (> 1 K), moister (~0.4 g kg⁻¹) and less windy (~1 m s⁻¹) biases than the climatological average for both FWD1 and FWD2. These biases are not restricted to the near surface but exist through the entirety of the planetary boundary layer (PBL). Furthermore, post-processing methods are more effective when previous FWDs are incorporated into the statistical training, suggesting that model bias could be related to the synoptic flow pattern. An Ensemble Kalman Filter (EnKF) is used to explore the effectiveness of data assimilation during a period of extensive FWDs in April 2012. Model biases develop rapidly on FWDs, consistent with the FWD1 and FWD2 verification. However, the EnKF is effective at removing most biases for temperature, wind speed and specific humidity. Potential sources of error in the parameterized physics of the PBL are explored by rerunning the EnKF with simultaneous state and parameter estimation (SSPE) for two relevant parameters within the ACM2 PBL scheme.
SSPE helps to reduce the cool temperature bias near the surface on FWDs, with the variability in parameter estimates exhibiting some relationship to model bias for temperature. This suggests the potential for structural model error within the ACM2 PBL scheme and could lead toward the future development of improved PBL parameterizations.
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in estimating dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to the different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
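The probabilistic validation metric used above, the CRPS, has a convenient sample estimator based on the identity CRPS(F, y) = E|X − y| − ½·E|X − X′|, where X and X′ are independent draws from the forecast distribution. A minimal sketch (the ensembles and the observation below are made up for illustration; this is the generic estimator, not the paper's validation code):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS of an ensemble forecast against a scalar observation,
    via CRPS = E|X - y| - 0.5 * E|X - X'| over the empirical distribution."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return term1 - term2

# A sharp, well-centred ensemble scores better (lower CRPS) than a biased one
rng = np.random.default_rng(1)
obs = 2.0
good = rng.normal(2.0, 0.5, 100)  # hypothetical well-calibrated ensemble
bad = rng.normal(4.0, 0.5, 100)   # hypothetical biased ensemble
```

CRPS is negatively oriented (lower is better) and reduces to the absolute error for a deterministic forecast, which is what makes it suitable for comparing ensemble systems such as the three IAU schemes.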
NASA Astrophysics Data System (ADS)
Sawada, Yohei; Nakaegawa, Tosiyuki; Miyoshi, Takemasa
2018-01-01
We examine the potential of assimilating river discharge observations into the atmosphere by strongly coupled river-atmosphere ensemble data assimilation. The Japan Meteorological Agency's Non-Hydrostatic atmospheric Model (JMA-NHM) is first coupled with a simple rainfall-runoff model. Next, the local ensemble transform Kalman filter is used for this coupled model to assimilate the observations of the rainfall-runoff model variables into the JMA-NHM model variables. This system makes it possible to do hydrometeorology backward, i.e., to inversely estimate atmospheric conditions from the information of river flows or a flood on land surfaces. We perform a proof-of-concept Observing System Simulation Experiment, which reveals that the assimilation of river discharge observations into the atmospheric model variables can improve the skill of the short-term severe rainfall forecast.
NASA Astrophysics Data System (ADS)
Rakovec, O.; Weerts, A. H.; Hazenberg, P.; Torfs, P. J. J. F.; Uijlenhoet, R.
2012-09-01
This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model. The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km²), a relatively quickly responding catchment in the Belgian Ardennes. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty.
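A generic stochastic EnKF analysis step with perturbed observations, of the kind used to update distributed model states from gauge observations, can be sketched as follows. The dimensions, observation operator, and error statistics here are hypothetical placeholders, not the HBV-96 setup:

```python
import numpy as np

def enkf_update(ens, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ens: (Ne, n) ensemble of state vectors; y: (m,) observation vector;
    H: (m, n) linear observation operator; R: (m, m) obs error covariance."""
    Ne, n = ens.shape
    Xp = ens - ens.mean(axis=0)                     # ensemble perturbations
    Pf = Xp.T @ Xp / (Ne - 1)                       # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    yp = rng.multivariate_normal(y, R, Ne)          # perturbed observations
    return ens + (yp - ens @ H.T) @ K.T

rng = np.random.default_rng(2)
Ne, n = 100, 3
ens = rng.normal(0.0, 1.0, (Ne, n))  # prior: mean 0, unit spread
H = np.array([[1.0, 0.0, 0.0]])      # observe only the first state component
R = np.array([[0.25]])
y = np.array([1.0])

post = enkf_update(ens, y, H, R, rng)
```

The update pulls the observed component toward the observation and shrinks its spread, while unobserved components are adjusted through the sampled cross-covariances — the mechanism by which interior gauges inform the distributed states.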
Insights about data assimilation frameworks for integrating GRACE with hydrological models
NASA Astrophysics Data System (ADS)
Schumacher, Maike; Kusche, Jürgen; Van Dijk, Albert I. J. M.; Döll, Petra; Schuh, Wolf-Dieter
2016-04-01
Improving the understanding of changes in the water cycle represents a challenging objective that requires merging information from various disciplines. Debates exist on selecting an appropriate assimilation technique to integrate GRACE-derived terrestrial water storage changes (TWSC) into hydrological models in order to downscale and disaggregate GRACE TWSC, overcome model limitations, and improve monitoring and forecast skills. Yet, the effect of the specific data assimilation technique in conjunction with ill-conditioning, colored noise, resolution mismatch between GRACE and model, and other complications is still unclear. Due to their simplicity, ensemble Kalman filters or smoothers (EnKF/S) are often applied. In this study, we show that modification of the filter approach might open new avenues to improve the integration process. Particularly, we discuss an improved calibration and data assimilation (C/DA) framework (Schumacher et al., 2016), which is based on the EnKF and was extended by the square root analysis scheme (SQRA) and the singular evolutive interpolated Kalman (SEIK) filter. In addition, we discuss an off-line data blending approach (Van Dijk et al., 2014) that offers the chance to merge multi-model ensembles with GRACE observations. The investigations include: (i) a theoretical comparison, focusing on similarities and differences of the conceptual formulation of the filter algorithms, (ii) a practical comparison, for which the approaches were applied to an ensemble of runs of the WaterGAP Global Hydrology Model (WGHM), as well as (iii) an impact assessment of the GRACE error structure on C/DA results. First, a synthetic experiment over the Mississippi River Basin (USA) was used to gain insights about the C/DA set-up before applying it to real data. The results indicated promising performance when considering alternative methods, e.g.
applying the SEIK algorithm improved the correlation coefficient and root mean square error (RMSE) of TWSC by 0.1 and 6 mm, respectively, with respect to the EnKF. We successfully transferred our framework to the Murray-Darling Basin (Australia), one of the largest and driest river basins in the world. Finally, we provide recommendations on an optimal C/DA strategy for real GRACE data integration.
Schumacher M, Kusche J, Döll P (2016): A Systematic Impact Assessment of GRACE Error Correlation on Data Assimilation in Hydrological Models. J Geod.
Van Dijk AIJM, Renzullo LJ, Wada Y, Tregoning P (2014): A global water cycle reanalysis (2003-2012) merging satellite gravimetry and altimetry observations with a hydrological multi-model ensemble. Hydrol Earth Syst Sci.
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. 
When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency, possibly related to vapor pressure effects on canopy conductance.
Effect of Data Assimilation Parameters on The Optimized Surface CO2 Flux in Asia
NASA Astrophysics Data System (ADS)
Kim, Hyunjung; Kim, Hyun Mee; Kim, Jinwoong; Cho, Chun-Ho
2018-02-01
In this study, CarbonTracker, an inverse modeling system based on the ensemble Kalman filter, was used to evaluate the effects of data assimilation parameters (assimilation window length and ensemble size) on the estimation of surface CO2 fluxes in Asia. Several experiments with different parameters were conducted, and the results were verified using CO2 concentration observations. The assimilation window lengths tested were 3, 5, 7, and 10 weeks, and the ensemble sizes were 100, 150, and 300. Therefore, a total of 12 experiments using combinations of these parameters were conducted. The experimental period was from January 2006 to December 2009. Differences between the optimized surface CO2 fluxes of the experiments were largest in the Eurasian Boreal (EB) area, followed by Eurasian Temperate (ET) and Tropical Asia (TA), and were larger in boreal summer than in boreal winter. The effect of ensemble size on the optimized biosphere flux is larger than the effect of the assimilation window length in Asia as a whole, but their relative importance varies among regions of Asia. The optimized biosphere flux was more sensitive to the assimilation window length in EB, whereas it was sensitive to the ensemble size as well as the assimilation window length in ET. The larger the ensemble size and the shorter the assimilation window length, the larger the uncertainty (i.e., ensemble spread) of the optimized surface CO2 fluxes. A 10-week assimilation window and an ensemble size of 300 were the optimal configuration for CarbonTracker in the Asian region, based on several verifications using CO2 concentration measurements.
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as particle filters (PF) using two different resampling approaches, multinomial resampling and systematic resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing the observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), reducing the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with systematic resampling decreases the model estimation error by 23%.
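The two resampling variants compared above differ only in how they draw particle indices from the weights: multinomial resampling draws N independent samples, while systematic resampling uses a single uniform offset on a regular grid through the cumulative weights and therefore has lower sampling variance. A minimal sketch with made-up weights:

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Draw N independent indices according to the particle weights."""
    N = len(weights)
    return rng.choice(N, size=N, p=weights)

def systematic_resample(weights, rng):
    """One uniform offset, then a regular grid through the cumulative
    weights: each particle is kept floor(N*w) or ceil(N*w) times."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

rng = np.random.default_rng(3)
w = np.array([0.5, 0.3, 0.1, 0.05, 0.05])  # hypothetical normalized weights
idx_m = multinomial_resample(w, rng)
idx_s = systematic_resample(w, rng)
```

With systematic resampling, a particle of weight 0.5 in a 5-particle ensemble is guaranteed to be replicated 2 or 3 times, whereas multinomial resampling only achieves that count in expectation.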
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2015-04-01
In this study, we assess systematically the impact of different initialisation procedures on the predictability of the sea ice in the Southern Ocean. These initialisation strategies are based on three data assimilation methods: nudging, the particle filter with sequential importance resampling, and the nudging proposal particle filter. An Earth system model of intermediate complexity is used to perform hindcast simulations in a perfect model approach. The predictability of the Antarctic sea ice at interannual to multi-decadal timescales is estimated through two aspects: the spread of the hindcast ensemble, indicating the uncertainty of the ensemble, and the correlation between the ensemble mean and the pseudo-observations, used to assess the accuracy of the prediction. Our results show that at decadal timescales more sophisticated data assimilation methods, as well as denser pseudo-observations used to initialise the hindcasts, decrease the spread of the ensemble. However, our experiments did not clearly demonstrate that any one of the initialisation methods systematically provides a more accurate prediction of the sea ice in the Southern Ocean than the others. Overall, the predictability at interannual timescales is limited to 3 years ahead at most. At multi-decadal timescales, the trends in sea ice extent computed over the time period just after the initialisation are clearly better correlated between the hindcasts and the pseudo-observations if the initialisation takes into account the pseudo-observations. The correlation reaches values larger than 0.5 in winter. This high correlation likely has its origin in the slow evolution of the ocean ensured by its strong thermal inertia, showing the importance of the quality of the initialisation below the sea ice.
Ensemble Kalman Filter Data Assimilation in a Solar Dynamo Model
NASA Astrophysics Data System (ADS)
Dikpati, M.
2017-12-01
Despite great advancement in solar dynamo models since the first model by Parker in 1955, there remain many challenges in the quest to build a dynamo-based prediction scheme that can accurately predict solar cycle features. One of these challenges is to implement modern data assimilation techniques, which have been used in oceanic and atmospheric prediction models. Development of data assimilation in solar models is in its early stages. Recently, observing system simulation experiments (OSSEs) have been performed using Ensemble Kalman Filter data assimilation, in the framework of the Data Assimilation Research Testbed of NCAR (NCAR-DART), for estimating parameters in a solar dynamo model. I will demonstrate how the selection of ensemble size, the number of observations, the amount of error in observations, and the choice of assimilation interval play an important role in parameter estimation. I will also show how the results of parameter reconstruction improve when accuracy in low-latitude observations is increased, despite large error in polar region data. I will then describe how implementation of data assimilation in a solar dynamo model can bring more accuracy to the prediction of polar fields in the Northern and Southern Hemispheres during the declining phase of cycle 24. Recent evidence indicates that the strength of the Sun's polar field during the cycle minima might be a reliable predictor of the next sunspot cycle's amplitude; therefore it is crucial to accurately predict the polar field strength and pattern.
IASI Radiance Data Assimilation in Local Ensemble Transform Kalman Filter
NASA Astrophysics Data System (ADS)
Cho, K.; Hyoung-Wook, C.; Jo, Y.
2016-12-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) is developing an NWP model with data assimilation systems. A Local Ensemble Transform Kalman Filter (LETKF) system, one of these data assimilation systems, has been developed for the KIAPS Integrated Model (KIM), which is based on a cubed-sphere grid, and has successfully assimilated real data. The LETKF data assimilation system has been extended to a 4D-LETKF, which considers time-evolving error covariance within the assimilation window, and to IASI radiance data assimilation using KPOP (the KIAPS package for observation processing) with RTTOV (Radiative Transfer for TOVS). The LETKF system has been running semi-operational predictions including conventional (sonde, aircraft) observations and AMSU-A (Advanced Microwave Sounding Unit-A) radiance data since April. Recently, in July, the semi-operational prediction system added further radiance observations, including GPS-RO, AMV, and IASI (Infrared Atmospheric Sounding Interferometer) data. A set of KIM simulations at ne30np4 resolution with 50 vertical levels (model top at 0.3 hPa) was carried out for short-range (10-day) forecasts within the semi-operational LETKF prediction system with a 50-member ensemble forecast. To isolate the IASI impact, our experiments assimilated only conventional and IASI radiance data in the same semi-operational prediction setup. We carried out sensitivity tests for the IASI thinning method (3D and 4D). The number of IASI observations was increased by temporal (4D) thinning, and we expect the additional IASI radiance data to improve the forecast skill of the model.
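The per-gridpoint analysis at the heart of an (L)ETKF is computed in the low-dimensional ensemble space: mean-update weights plus a symmetric square-root transform of the ensemble perturbations. The sketch below follows the standard ETKF formulation; the grid size, ensemble size, and observation network are toy placeholders, not the KIM/KPOP configuration.

```python
import numpy as np

def letkf_analysis(Xf, Yf, y, R_diag, rho=1.0):
    """Deterministic ETKF analysis (the per-gridpoint core of the LETKF).
    Xf: (n, k) forecast ensemble; Yf: (m, k) ensemble in observation space;
    y: (m,) observations; R_diag: (m,) observation error variances;
    rho: multiplicative covariance inflation factor."""
    k = Xf.shape[1]
    xbar, ybar = Xf.mean(axis=1), Yf.mean(axis=1)
    Xp = Xf - xbar[:, None]
    Yp = Yf - ybar[:, None]
    C = Yp.T / R_diag                        # Yp^T R^{-1} (R diagonal)
    A = (k - 1) / rho * np.eye(k) + C @ Yp   # inverse analysis covariance
    evals, evecs = np.linalg.eigh(A)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T          # ensemble-space cov
    wbar = Pa @ C @ (y - ybar)                           # mean-update weights
    Wa = evecs @ np.diag(np.sqrt((k - 1) / evals)) @ evecs.T  # sqrt((k-1)Pa)
    W = wbar[:, None] + Wa
    return xbar[:, None] + Xp @ W            # analysis ensemble (n, k)

rng = np.random.default_rng(4)
n, k = 4, 20
Xf = 1.0 + rng.normal(0.0, 1.0, (n, k))      # toy 4-variable state, 20 members
H = np.zeros((2, n)); H[0, 0] = 1; H[1, 2] = 1  # observe components 0 and 2
Yf = H @ Xf
y = np.array([2.0, 2.0])
R_diag = np.array([0.1, 0.1])

Xa = letkf_analysis(Xf, Yf, y, R_diag)
```

Because the analysis is solved in the k-dimensional ensemble space, localization in the full LETKF amounts to repeating this computation per grid point with only nearby observations (and tapered R), which is what makes the scheme cheap and parallel.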
NASA Astrophysics Data System (ADS)
Kasper, David; Cole, Jackson L.; Gardner, Cristilyn N.; Garver, Bethany; Jarka, Kyla L.; Kar, Aman; McGough, Aylin M.; PeQueen, David J.; Rivera, Daniel Ivan; Jang-Condell, Hannah; Kobulnicky, Henry A.; Dale, Daniel A.
2018-06-01
We present new multi-broadband transit photometry of HD 189733b observed with the Wyoming Infrared Observatory. With an ensemble of Sloan filter observations across multiple transits we have created an ultra-low-resolution transmission spectrum to discern the nature of the exoplanet atmosphere. This data set exemplifies the capabilities of the 2.3 m observatory. The analysis was performed with a Markov chain Monte Carlo method assisted by a Gaussian process regression model. These observations were taken as part of the University of Wyoming's 2017 Research Experience for Undergraduates (REU) and represent one of multiple hot Jupiter exoplanet targets for which we have transit event observations in multiple broadband filters.
NASA Astrophysics Data System (ADS)
Merker, Claire; Ament, Felix; Clemens, Marco
2017-04-01
The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
NASA Astrophysics Data System (ADS)
Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong
2017-07-01
Millisecond pulsars can generate a time scale that is entirely independent of the atomic time scale, because the physical mechanisms underlying the pulsar time scale and the atomic time scale are quite different. Pulsar timing observations are usually not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All of this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, cubic spline interpolation is used to densify the data set and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set and remove high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. The newly released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set comprises nine years of observations of 37 millisecond pulsars made with the 100-meter Green Bank Telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of noise in the pulsar timing residuals and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10⁻¹⁵.
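The three-step algorithm (densify, smooth, weighted average) can be sketched as follows. To stay self-contained, linear interpolation and a moving average stand in for the cubic spline and Vondrak filter of the paper, inverse-variance weights are a plausible but assumed weighting choice, and the synthetic "pulsars" are invented:

```python
import numpy as np

def ensemble_pulsar_timescale(obs, grid, smooth_win=5):
    """Sketch: (1) interpolate each pulsar's unevenly sampled timing
    residuals onto a common uniform grid, (2) low-pass filter them,
    (3) combine them in a weighted average."""
    smoothed, weights = [], []
    kernel = np.ones(smooth_win) / smooth_win
    for t, r in obs:                       # one (times, residuals) pair per pulsar
        dense = np.interp(grid, t, r)      # step 1: densify onto a uniform grid
        low = np.convolve(dense, kernel, mode="same")   # step 2: smooth
        smoothed.append(low)
        # step 3 weights: inverse variance of the high-frequency residue
        weights.append(1.0 / max(np.var(r - np.interp(t, grid, low)), 1e-12))
    smoothed, w = np.array(smoothed), np.array(weights)
    return (w[:, None] * smoothed).sum(axis=0) / w.sum()

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 9.0, 200)          # nine "years" on a uniform grid
signal = 1e-6 * np.sin(2 * np.pi * grid / 9.0)   # common long-term term
obs = []
for _ in range(4):                         # four pulsars, uneven sampling
    t = np.sort(rng.uniform(0.0, 9.0, 60))
    obs.append((t, 1e-6 * np.sin(2 * np.pi * t / 9.0)
                   + 2e-7 * rng.normal(size=60)))
ept = ensemble_pulsar_timescale(obs, grid)
print(ept.shape)
```

Averaging over pulsars and smoothing suppresses the per-pulsar timing noise, which is the mechanism behind the improved long-term stability the abstract reports.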
The NRL relocatable ocean/acoustic ensemble forecast system
NASA Astrophysics Data System (ADS)
Rowley, C.; Martin, P.; Cummings, J.; Jacobs, G.; Coelho, E.; Bishop, C.; Hong, X.; Peggion, G.; Fabre, J.
2009-04-01
A globally relocatable regional ocean nowcast/forecast system has been developed to support rapid implementation of new regional forecast domains. The system is in operational use at the Naval Oceanographic Office for a growing number of regional and coastal implementations. The new system is the basis for an ocean acoustic ensemble forecast and adaptive sampling capability. We present an overview of the forecast system and the ocean ensemble and adaptive sampling methods. The forecast system consists of core ocean data analysis and forecast modules, software for domain configuration, surface and boundary condition forcing processing, and job control, and global databases for ocean climatology, bathymetry, tides, and river locations and transports. The analysis component is the Navy Coupled Ocean Data Assimilation (NCODA) system, a 3D multivariate optimum interpolation system that produces simultaneous analyses of temperature, salinity, geopotential, and vector velocity using remotely-sensed SST, SSH, and sea ice concentration, plus in situ observations of temperature, salinity, and currents from ships, buoys, XBTs, CTDs, profiling floats, and autonomous gliders. The forecast component is the Navy Coastal Ocean Model (NCOM). The system supports one-way nesting and multiple assimilation methods. The ensemble system uses the ensemble transform technique with error variance estimates from the NCODA analysis to represent initial condition error. Perturbed surface forcing or an atmospheric ensemble is used to represent errors in surface forcing. The ensemble transform Kalman filter is used to assess the impact of adaptive observations on future analysis and forecast uncertainty for both ocean and acoustic properties.
NASA Astrophysics Data System (ADS)
Dai, Cheng; Xue, Liang; Zhang, Dongxiao; Guadagnini, Alberto
2018-02-01
The authors regret that in the Acknowledgments Section an incorrect Grant Agreement number was reported for the Project "Furthering the knowledge Base for Reducing the Environmental Footprint of Shale Gas Development" FRACRISK. The correct Grant Agreement number is 636811.
Improving hydrologic predictions of a catchment model via assimilation of surface soil moisture
USDA-ARS?s Scientific Manuscript database
This paper examines the potential for improving Soil and Water Assessment Tool (SWAT) hydrologic predictions within the 341 km² Cobb Creek Watershed in southwestern Oklahoma through the assimilation of surface soil moisture observations using an Ensemble Kalman filter (EnKF). In a series of synthet...
USDA-ARS?s Scientific Manuscript database
This paper aims to investigate how surface soil moisture data assimilation affects each hydrologic process and how spatially varying inputs affect the potential capability of surface soil moisture assimilation at the watershed scale. The Ensemble Kalman Filter (EnKF) is coupled with a watershed scal...
USDA-ARS?s Scientific Manuscript database
In Ensemble Kalman Filter (EnKF)-based data assimilation, the background prediction of a model is updated using observations and relative weights based on the model prediction and observation uncertainties. In practice, both model and observation uncertainties are difficult to quantify and they have...
Magnetometry with Ensembles of Nitrogen Vacancy Centers in Bulk Diamond
2015-10-23
the ESR curve. Any frequency components of the photodetector signal which are not close to the reference frequency are filtered out. This mitigates … indicating that we have not yet run up against thermal or flicker noise for these time scales. 5.3 Details of frequency modulation circuit In order
Minimalist ensemble algorithms for genome-wide protein localization prediction.
Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun
2012-07-03
Computational prediction of protein subcellular localization can greatly help to elucidate protein function. Despite the existence of dozens of protein localization prediction algorithms, prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, and they usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by running complexity and redundancy among the individual prediction algorithms. This paper proposes a novel method for the rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter with a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets, of Yeast and Human, and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors used by current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that high-performance ensemble algorithms are usually composed of predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted-voting-based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from the inclusion of too many individual predictors.
We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi.
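A minimal illustration of the idea, a greedy filter combined with a logistic regression classifier over individual predictor scores, is sketched below. The "contribution score" here is simply the training-accuracy gain, an assumption standing in for the paper's definition, and the three synthetic predictors are invented:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=500):
    """Plain logistic regression by gradient descent; each column of X
    holds the scores of one individual localization predictor."""
    Xb = np.hstack([X, np.ones((len(X), 1))])           # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def greedy_minimalist(X, y, max_preds=2):
    """Greedily add the predictor whose inclusion most improves training
    accuracy; stop when no candidate improves on the current ensemble."""
    chosen, best_acc = [], 0.0
    while len(chosen) < max_preds:
        gains = {}
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            w = train_logistic(X[:, cols], y)
            Xb = np.hstack([X[:, cols], np.ones((len(X), 1))])
            gains[j] = np.mean((Xb @ w > 0) == y)       # training accuracy
        j_best = max(gains, key=gains.get)
        if gains[j_best] <= best_acc:
            break                                        # no further gain
        best_acc = gains[j_best]
        chosen.append(j_best)
    return chosen, best_acc

# Synthetic benchmark: predictor 0 is informative, 1 redundant, 2 pure noise
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 400)
X = np.column_stack([
    y + 0.5 * rng.normal(size=400),      # informative predictor
    y + 0.6 * rng.normal(size=400),      # redundant with predictor 0
    rng.normal(size=400),                # noise predictor
]).astype(float)
chosen, acc = greedy_minimalist(X, y)
print(chosen, round(acc, 2))
```

The stopping rule is what keeps the ensemble "minimalist": a redundant or noisy predictor that adds no accuracy is never admitted.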
Minimalist ensemble algorithms for genome-wide protein localization prediction
2012-01-01
Background Computational prediction of protein subcellular localization can greatly help to elucidate protein function. Despite the existence of dozens of protein localization prediction algorithms, prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, and they usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by running complexity and redundancy among individual prediction algorithms. Results This paper proposes a novel method for the rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter with a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets, of Yeast and Human, and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors used by current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that high-performance ensemble algorithms are usually composed of predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted-voting-based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from the inclusion of too many individual predictors.
Conclusions We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling the EnKF, information theory, and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS), and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
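For a linear-Gaussian update, the information metrics named above have closed forms that are easy to sketch: DFS is the trace of KH, and the Shannon entropy difference is half the log-ratio of the prior to posterior covariance determinants. The two-parameter prior and candidate designs below are invented for illustration:

```python
import numpy as np

def info_metrics(P, H, R):
    """Information content of a candidate observation design under a
    linear-Gaussian update.
    P : prior (ensemble) covariance; H : observation operator; R : obs error cov."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    Pa = (np.eye(len(P)) - K @ H) @ P       # posterior covariance
    dfs = np.trace(K @ H)                   # degrees of freedom for signal
    sd = 0.5 * (np.linalg.slogdet(P)[1]     # Shannon entropy difference
                - np.linalg.slogdet(Pa)[1])
    return dfs, sd

# Prior with one strongly and one weakly uncertain parameter
P = np.diag([4.0, 0.25])
R = np.array([[0.5]])
# Design A observes the uncertain parameter, design B the well-known one
dfs_a, sd_a = info_metrics(P, np.array([[1.0, 0.0]]), R)
dfs_b, sd_b = info_metrics(P, np.array([[0.0, 1.0]]), R)
print(round(dfs_a, 2), round(dfs_b, 2))   # the informative design scores higher
```

In a sequential design loop, the candidate observation maximizing such a metric would be selected, assimilated with the EnKF, and the search repeated with the updated ensemble.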
Single-photon superradiant beating from a Doppler-broadened ladder-type atomic ensemble
NASA Astrophysics Data System (ADS)
Lee, Yoon-Seok; Lee, Sang Min; Kim, Heonoh; Moon, Han Seb
2017-12-01
We report on heralded-single-photon superradiant beating in the spontaneous four-wave mixing process of Doppler-broadened ladder-type 87Rb atoms. When Doppler-broadened atoms contribute to two-photon coherence, the detection probability amplitudes of the heralded single photons are coherently superposed despite the inhomogeneously broadened atomic media. Single-photon superradiant beating is observed, which constitutes evidence for the coherent superposition of two-photon amplitudes from different velocity classes in the Doppler-broadened atomic ensemble. We present a theoretical model in which the single-photon superradiant beating originates from the interference between wavelength-separated two-photon amplitudes via the reabsorption filtering effect.
Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction
NASA Astrophysics Data System (ADS)
Mons, Vincent; Wang, Qi; Zaki, Tamer
2017-11-01
Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging in several respects. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting robustness from the former and ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
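A simple ensemble-based sensor placement criterion can be sketched as follows: score each candidate observation location by the reduction in total posterior ensemble variance it would produce. This is a generic proxy, not the authors' criterion for conditioning the source-inversion problem, and the toy ensemble is invented:

```python
import numpy as np

def best_sensor(X, H_cands, r_var):
    """Rank candidate scalar observations by the reduction in total
    posterior ensemble variance a single assimilated measurement yields.
    X : (n, k) state ensemble; H_cands : list of (n,) observation rows."""
    k = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (k - 1)                     # ensemble covariance estimate
    scores = []
    for h in H_cands:
        s = float(h @ P @ h) + r_var          # innovation variance
        K = (P @ h) / s                       # Kalman gain column
        Pa = P - np.outer(K, h @ P)           # posterior covariance
        scores.append(float(np.trace(P) - np.trace(Pa)))
    return int(np.argmax(scores)), scores

# Toy ensemble: component 0 is far more uncertain than the others
rng = np.random.default_rng(10)
X = rng.normal(size=(3, 50)) * np.array([[2.0], [1.0], [0.5]])
cands = [np.eye(3)[i] for i in range(3)]      # candidate: observe one component
idx, scores = best_sensor(X, cands, r_var=0.1)
print(idx)                                    # the most informative site wins
```

Placing sensors where they most shrink the posterior spread is one way to make the ill-posed source inversion better conditioned before any data are collected.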
NASA Astrophysics Data System (ADS)
Curtis, Joseph E.; Raghunandan, Sindhu; Nanda, Hirsh; Krueger, Susan
2012-02-01
A program to construct ensembles of biomolecular structures that are consistent with experimental scattering data is described. Specifically, we generate an ensemble of biomolecular structures by varying sets of backbone dihedral angles that are then filtered using experimentally determined restraints, to rapidly determine structures whose scattering profiles are consistent with scattering data. We discuss an application of these tools to predict a set of structures for the HIV-1 Gag protein, an intrinsically disordered protein, that are consistent with small-angle neutron scattering experimental data. We have assembled these algorithms into a program called SASSIE for structure generation, visualization, and analysis of intrinsically disordered proteins and other macromolecular ensembles using neutron and X-ray scattering restraints. Program summary: Program title: SASSIE. Catalogue identifier: AEKL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License v3. No. of lines in distributed program, including test data, etc.: 3 991 624. No. of bytes in distributed program, including test data, etc.: 826. Distribution format: tar.gz. Programming language: Python, C/C++, Fortran. Computer: PC/Mac. Operating system: 32- and 64-bit Linux (Ubuntu 10.04, Centos 5.6) and Mac OS X (10.6.6). RAM: 1 GB. Classification: 3. External routines: Python 2.6.5, numpy 1.4.0, swig 1.3.40, scipy 0.8.0, Gnuplot-py-1.8, Tcl 8.5, Tk 8.5; Mac installation requires aquaterm 1.0 (or the X window system) and Xcode 3 development tools. Nature of problem: A lack of open-source software to generate structures of disordered biological molecules, and subsequently to compare computational and experimental results, limits the use of scattering resources.
Solution method: Starting with an all-atom model of a protein, users can specify regions in which dihedral angles are varied, and ensembles of structures can be generated. Additionally, simple two-body rigid-body rotations are supported with and without disordered regions. Generated structures can then be used to calculate small-angle scattering profiles, which can be filtered against experimentally determined data. Filtered structures can be visualized individually or as an ensemble using density plots. In the modular and expandable program framework, the user can easily access our subroutines, and structural coordinates can easily be obtained for study using other computational physics methods. Additional comments: The distribution file for this program is over 159 Mbytes and therefore is not delivered directly when download or e-mail is requested. Instead, an html file giving details of how the program can be obtained is sent. Running time: Varies depending on application; typically 10 minutes to 24 hours depending on the number of generated structures.
A study of regional-scale aerosol assimilation using a Stretch-NICAM
NASA Astrophysics Data System (ADS)
Misawa, S.; Dai, T.; Schutgens, N.; Nakajima, T.
2013-12-01
Although aerosols are considered harmful to human health and have become a social issue, aerosol models and emission inventories still include large uncertainties. In recent studies, data assimilation has been applied to aerosol simulation to obtain more accurate aerosol fields and emission inventories. Most of these studies, however, were carried out only on the global scale, and few have addressed regional-scale aerosol assimilation. In this study, we have created and verified a regional-scale aerosol assimilation system, with the aim of reducing the error associated with the aerosol emission inventory. Our aerosol assimilation system has been developed using an atmospheric climate model, NICAM (Nonhydrostatic ICosahedral Atmospheric Model; Satoh et al., 2008), with a stretch grid system, coupled with an aerosol transport model, SPRINTARS (Takemura et al., 2000). The assimilation scheme is based on the local ensemble transform Kalman filter (LETKF). To validate the system, we used simulated observational data created by adding artificial errors to the surface aerosol fields constructed by Stretch-NICAM-SPRINTARS. We also included a small perturbation in the original emission inventory. The assimilation with these modified observational data and emission inventory was performed for the Kanto Plain region around Tokyo, Japan, and the results indicate that the system reduces the relative error of aerosol concentration by 20%. Furthermore, we examined the sensitivity of the aerosol assimilation system by varying the total number of ensemble members (5, 10, and 15) and the local patch (domain) size (radii of 50 km, 100 km, and 200 km), both of which are tuning parameters of the LETKF. The results for ensemble sizes of 5, 10, and 15 show that the larger the ensemble, the smaller the relative error becomes. This is consistent with ensemble Kalman filter theory and implies that the assimilation system works properly.
We also found that the assimilation system does not work well with a 200 km patch radius, while a 50 km radius is less efficient than a 100 km radius. We therefore expect that the optimal patch size lies somewhere between 50 km and 200 km. We will also show an analysis of real data from the suspended particulate matter (SPM) network in the Kanto Plain region.
USDA-ARS?s Scientific Manuscript database
Estimation of soil moisture has received considerable attention in the areas of hydrology, agriculture, meteorology and environmental studies because of its role in the partitioning water and energy at the land surface. In this study, the Ensemble Kalman Filter (EnKF), a popular data assimilation te...
SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual
USDA-ARS?s Scientific Manuscript database
Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
USDA-ARS?s Scientific Manuscript database
An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in AP observations were incorpo...
Essential Dynamics of Secondary Eyewall Formation
2013-10-01
pronounced subsidence and greatly diminished radar reflectivity (Houze et al. 2007) that is clearly discernible between the two eyewalls. The subsiding... Part I: Assimilation of T-PARC data based on the ensemble Kalman filter (EnKF). Mon. Wea. Rev., 140, 506-527. Zhang, D.-L., Y. Liu, and M. K. Yau, 2001
Advanced Atmospheric Ensemble Modeling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R.; Chiswell, S.; Kurzeja, R.
Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement for the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.
2012-01-01
Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for the ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.
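One update of the sequential importance resampling filter tested in this study can be sketched generically (this is not the DSSAT configuration); the prior ensemble, observation, and error variance are toy numbers:

```python
import numpy as np

def sir_update(particles, weights, obs, obs_var, rng):
    """One SIR step: reweight particles by the observation likelihood,
    then systematically resample if the effective sample size collapses."""
    w = weights * np.exp(-0.5 * (obs - particles) ** 2 / obs_var)
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)                 # effective sample size
    if n_eff < 0.5 * len(particles):             # systematic resampling
        u = (np.arange(len(w)) + rng.random()) / len(w)
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
        particles = particles[idx]
        w = np.full(len(w), 1.0 / len(w))
    return particles, w

rng = np.random.default_rng(5)
particles = rng.normal(2.0, 1.0, 500)   # prior ensemble of a state (toy "LAI")
weights = np.full(500, 1.0 / 500)
particles, weights = sir_update(particles, weights, obs=3.0, obs_var=0.2, rng=rng)
post_mean = float(np.sum(weights * particles))
print(round(post_mean, 2))              # pulled from the prior mean toward 3.0
```

Unlike the EnKF's linear-Gaussian update, the SIR filter makes no Gaussian assumption, which is why the two were compared as alternatives for attenuating the prescribed model uncertainties.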
Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Wollschläger, U.
2014-01-01
When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
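One common way to realize a bias-aware EnKF is state augmentation: carry a colored-noise bias alongside the state and update both from the same innovation. The sketch below follows that generic recipe (the paper's exact formulation may differ); the AR(1) coefficient, noise levels, and one-point "lysimeter" are assumptions:

```python
import numpy as np

def bias_aware_enkf(X, B, y, r_var, alpha=0.9, rng=None):
    """Perturbed-observation EnKF with an augmented colored-noise bias.
    X : (n, k) state ensemble; B : (n, k) bias ensemble; y : (n,) obs.
    Observations are compared with the bias-corrected state X - B, and
    alpha is the AR(1) coefficient giving the bias its 'color'."""
    rng = rng or np.random.default_rng(6)
    k = X.shape[1]
    B = alpha * B + np.sqrt(1 - alpha**2) * 0.1 * rng.normal(size=B.shape)
    Z = np.vstack([X, B])                     # augmented ensemble
    Hz = X - B                                # predicted (bias-corrected) obs
    A = Z - Z.mean(axis=1, keepdims=True)
    Ha = Hz - Hz.mean(axis=1, keepdims=True)
    K = (A @ Ha.T / (k - 1)) @ np.linalg.inv(
        Ha @ Ha.T / (k - 1) + r_var * np.eye(len(y)))
    Yp = y[:, None] + np.sqrt(r_var) * rng.normal(size=Hz.shape)  # perturbed obs
    Za = Z + K @ (Yp - Hz)                    # joint state-and-bias update
    n = X.shape[0]
    return Za[:n], Za[n:]

# One water-content value; the model forecast carries a +0.5 systematic bias
rng = np.random.default_rng(7)
truth = 1.0
X = 1.5 + 0.2 * rng.normal(size=(1, 40))
B = 0.2 * rng.normal(size=(1, 40))
Xa, Ba = bias_aware_enkf(X, B, np.array([truth]), r_var=0.05)
print(round(float((Xa - Ba).mean()), 2))      # corrected estimate nears truth
```

Because the bias is part of the updated state, repeated cycles let a homogeneous model "learn" the systematic error introduced by unresolved layering, which is the effect the study exploits.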
NASA Technical Reports Server (NTRS)
Di Tomaso, Enza; Schutgens, Nick A. J.; Jorba, Oriol; Perez Garcia-Pando, Carlos
2017-01-01
A data assimilation capability has been built for the NMMB-MONARCH chemical weather prediction system, with a focus on mineral dust, a prominent type of aerosol. An ensemble-based Kalman filter technique (namely the local ensemble transform Kalman filter - LETKF) has been utilized to optimally combine model background and satellite retrievals. Our implementation of the ensemble is based on known uncertainties in the physical parametrizations of the dust emission scheme. Experiments showed that MODIS AOD retrievals using the Dark Target algorithm can help NMMB-MONARCH to better characterize atmospheric dust. This is particularly true for the analysis of the dust outflow in the Sahel region and over the African Atlantic coast. The assimilation of MODIS AOD retrievals based on the Deep Blue algorithm has a further positive impact in the analysis downwind from the strongest dust sources of the Sahara and in the Arabian Peninsula. An analysis-initialized forecast performs better (lower forecast error and higher correlation with observations) than a standard forecast, with the exception of underestimating dust in the long-range Atlantic transport and degradation of the temporal evolution of dust in some regions after day 1. Particularly relevant is the improved forecast over the Sahara throughout the forecast range thanks to the assimilation of Deep Blue retrievals over areas not easily covered by other observational datasets. The present study on mineral dust is a first step towards data assimilation with a complete aerosol prediction system that includes multiple aerosol species.
NASA Astrophysics Data System (ADS)
Di Tomaso, Enza; Schutgens, Nick A. J.; Jorba, Oriol; Pérez García-Pando, Carlos
2017-03-01
A data assimilation capability has been built for the NMMB-MONARCH chemical weather prediction system, with a focus on mineral dust, a prominent type of aerosol. An ensemble-based Kalman filter technique (namely the local ensemble transform Kalman filter - LETKF) has been utilized to optimally combine model background and satellite retrievals. Our implementation of the ensemble is based on known uncertainties in the physical parametrizations of the dust emission scheme. Experiments showed that MODIS AOD retrievals using the Dark Target algorithm can help NMMB-MONARCH to better characterize atmospheric dust. This is particularly true for the analysis of the dust outflow in the Sahel region and over the African Atlantic coast. The assimilation of MODIS AOD retrievals based on the Deep Blue algorithm has a further positive impact in the analysis downwind from the strongest dust sources of the Sahara and in the Arabian Peninsula. An analysis-initialized forecast performs better (lower forecast error and higher correlation with observations) than a standard forecast, with the exception of underestimating dust in the long-range Atlantic transport and degradation of the temporal evolution of dust in some regions after day 1. Particularly relevant is the improved forecast over the Sahara throughout the forecast range thanks to the assimilation of Deep Blue retrievals over areas not easily covered by other observational datasets. The present study on mineral dust is a first step towards data assimilation with a complete aerosol prediction system that includes multiple aerosol species.
NASA Astrophysics Data System (ADS)
Brune, Sebastian; Düsterhus, Andre; Pohlmann, Holger; Müller, Wolfgang; Baehr, Johanna
2017-04-01
We analyze the time dependency of decadal hindcast skill in the North Atlantic subpolar gyre within the time period 1961-2013. We compare anomaly correlation coefficients and interquartile ranges of total upper ocean heat content and sea surface temperature for three differently initialized sets of hindcast simulations with the global coupled model MPI-ESM. All initializations use weakly coupled assimilation with the same full-field nudging in the atmospheric component and different assimilation techniques for oceanic temperature and salinity: (1) an ensemble Kalman filter assimilating EN4 and HadISST observations, (2) nudging of anomalies to the ORAS4 reanalysis, (3) nudging of full values to the ORAS4 reanalysis. We find that hindcast skill depends strongly on the evaluation time period, with higher hindcast skill during strong multiyear trends and lower hindcast skill in the absence of such trends. While there may only be small differences between the prediction systems in an analysis of the entire hindcast period, the differences between the hindcast systems are much more pronounced when investigating any 20-year subperiod within the entire hindcast period. For the ensemble Kalman filter, high skill in the assimilation experiment is generally linked to high skill in the initialized hindcasts. Such a direct link does not seem to exist in the hindcasts initialized by either nudged system. In the ensemble Kalman filter initialized hindcasts, we find significant hindcast skill for up to 5 to 8 lead years, except for the 1970s. In the nudged-system initialized hindcasts, hindcast skill is consistently diminished in lead years 2 and 3, with lowest skill in the 1970s as well. Overall, we find that a model-consistent assimilation technique can improve hindcast skill. Further, the evaluation of 20-year subperiods within the full hindcast period provides essential insights for judging the success of both the assimilation and the subsequent hindcast skill.
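The anomaly correlation coefficient, and its evaluation over sliding 20-year subperiods, can be sketched directly; the toy "subpolar gyre" series below is invented:

```python
import numpy as np

def anomaly_correlation(hindcast, obs):
    """Anomaly correlation coefficient about the within-period means."""
    a = hindcast - hindcast.mean()
    b = obs - obs.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

def windowed_acc(hindcast, obs, win=20):
    """ACC over every sliding subperiod of length `win`, illustrating how
    skill estimates depend on the chosen evaluation period."""
    return np.array([anomaly_correlation(hindcast[i:i + win], obs[i:i + win])
                     for i in range(len(obs) - win + 1)])

# Toy heat-content anomaly: multiyear trend + decadal cycle + noise
rng = np.random.default_rng(8)
years = np.arange(1961, 2014)
obs = (0.02 * (years - 1961)
       + 0.3 * np.sin(2 * np.pi * (years - 1961) / 11)
       + 0.1 * rng.normal(size=years.size))
hindcast = obs + 0.2 * rng.normal(size=years.size)   # imperfect prediction
acc = windowed_acc(hindcast, obs)
print(acc.size, round(float(acc.max()), 2))
```

Windows dominated by a strong multiyear trend yield systematically higher ACC than trend-free windows, which is the time-dependency effect the study highlights.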
Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation
NASA Astrophysics Data System (ADS)
Leisenring, Marc; Moradkhani, Hamid
2012-10-01
A first step in understanding the impacts of sediment and controlling its sources is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle-filter-based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that predicts suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSCs were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed, along with 95% prediction intervals that account for uncertainty in both the SSC regression model and the flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe.
The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.
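The interplay between the particle-filter update and the adaptive variance-multiplier scaling described above can be sketched in a toy scalar setting. The scaling rule below (widen the spread when the ensemble-mean error exceeds the spread, tighten it otherwise) and all numerical values are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_update(particles, obs, obs_var):
    """One SIR particle-filter analysis step: weight by the Gaussian
    likelihood of the observation, then resample."""
    w = np.exp(-0.5 * (particles - obs) ** 2 / obs_var)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

def scale_multiplier(mult, particles, obs):
    """Adapt the variance multiplier: widen the spread when the
    ensemble-mean error exceeds the ensemble spread, tighten it
    otherwise (a hypothetical rule for illustration only)."""
    err = abs(particles.mean() - obs)
    spread = max(particles.std(), 1e-9)
    return mult * np.clip(err / spread, 0.5, 2.0)

# toy filtering run: noisy observations of a constant true state
true_state, mult = 1.0, 1.0
particles = rng.normal(0.0, 2.0, size=500)
for _ in range(20):
    obs = true_state + rng.normal(0.0, 0.1)
    particles = pf_update(particles, obs, obs_var=0.1 ** 2)
    mult = scale_multiplier(mult, particles, obs)
    # process-noise jitter whose size is controlled by the multiplier
    particles = particles + rng.normal(0.0, 0.05 * mult, size=particles.size)
```

The multiplier keeps the jitter self-regulating: a collapsed ensemble (tiny spread, large relative error) inflates it, while an over-dispersed ensemble shrinks it.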
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
Adaptive interference cancel filter for evoked potential using high-order cumulants.
Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei
2004-01-01
This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble averaging method, experiments must be repeated many times to record the required data. Recently, the AIC structure with second-order statistics has proved more efficient for EP processing than traditional averaging, but it is sensitive both to the reference signal statistics and to the choice of step size. We therefore propose a higher-order-statistics-based AIC method to overcome these disadvantages. The study was carried out on somatosensory EPs corrupted by EEG, using a gradient-type algorithm in the AIC method. Comparisons of AIC filters based on second-, third-, and fourth-order statistics are also presented. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of step size or reference input.
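The gradient-type adaptive interference canceller underlying this approach can be sketched with the classic second-order (LMS) error criterion; the paper's contribution replaces that criterion with third-order cumulants, which is not reproduced here. The signal shapes and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Second-order adaptive interference canceller: FIR weights on the
    reference channel are adapted by a stochastic gradient step so the
    filtered reference matches the interference in the primary channel;
    the residual e is the recovered signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # current + past samples
        e = primary[n] - w @ x
        w += mu * e * x                             # gradient (LMS) update
        out[n] = e
    return out

# toy evoked potential buried in interference correlated with the reference
t = np.arange(2000)
ep = np.exp(-0.5 * ((t - 1000) / 40.0) ** 2)        # EP bump
noise = rng.normal(0, 1, len(t))
interference = np.convolve(noise, [0.6, 0.3, 0.1])[:len(t)]
recovered = lms_cancel(ep + interference, noise)
```

After the weights converge, the residual tracks the evoked potential without requiring repeated trials, which is the advantage over plain ensemble averaging that the abstract describes.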
Towards the Prediction of Decadal to Centennial Climate Processes in the Coupled Earth System Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhengyu; Kutzbach, J.; Jacob, R.
2011-12-05
In this project, we made major advances in the understanding of decadal and longer-term climate variability. (a) We performed a systematic study of multidecadal climate variability in FOAM-LPJ and CCSM-T31, and began exploring decadal variability in the IPCC AR4 models. (b) We developed several novel methods for assessing climate feedbacks from observations. (c) We developed a new initialization scheme, DAI (Dynamical Analogue Initialization), for ensemble decadal prediction. (d) We also studied climate-vegetation feedback in observations and models. (e) Finally, we started a pilot program using an Ensemble Kalman Filter in a CGCM for decadal climate prediction.
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018
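The idea of reformulating the Kalman update as an optimization with a non-negativity constraint can be sketched as follows. Here the constraint is enforced by projecting the unconstrained least-squares solution onto the non-negative orthant, which is a simplification of solving the constrained program directly; all numbers are illustrative:

```python
import numpy as np

def kf_update_nonneg(x_pred, P, y, H, R):
    """Kalman analysis step written as a least-squares update, followed
    by a projection onto the non-negative orthant (a simplified stand-in
    for the paper's constrained optimization)."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x_pred + K @ (y - H @ x_pred)    # unconstrained update
    return np.maximum(x, 0.0)            # EMG envelopes cannot be negative

x_pred = np.array([0.2, -0.1])           # predicted EMG envelopes (toy values)
P = np.eye(2) * 0.5
H = np.eye(2)
R = np.eye(2) * 0.1
y = np.array([0.5, -0.3])                # noisy observation
x = kf_update_nonneg(x_pred, P, y, H, R)
```

The second component, driven negative by the unconstrained update, is clipped to zero, which is the behaviorally sensible value for a muscle-activation envelope.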
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Because hydrogeological observation data are limited and highly uncertain, parameter estimation has been an important issue for groundwater models. Among the many parameter estimation methods, the Kalman filter provides real-time calibration of parameters from measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data in parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. We use Python and QGIS with the MODFLOW groundwater model and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation, providing a conventional filtering method that also considers the uncertainty of the data. The approach was explored through a numerical experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, in which virtual observation wells periodically observe the simulated groundwater system. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
USDA-ARS?s Scientific Manuscript database
We develop a robust understanding of the effects of assimilating remote sensing observations of leaf area index and soil moisture (in the top 5 cm) on DSSAT-CSM CropSim-Ceres wheat yield estimates. Synthetic observing system simulation experiments compare the abilities of the Ensemble Kalman Filter...
Displacement data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, W. Steven; Venkataramani, Shankar; Mariano, Arthur J.
We show that modifying a Bayesian data assimilation scheme by incorporating kinematically-consistent displacement corrections produces a scheme that is demonstrably better at estimating partially observed state vectors in a setting where feature information is important. While the displacement transformation is generic, here we implement it within an ensemble Kalman Filter framework and demonstrate its effectiveness in tracking stochastically perturbed vortices.
NASA Technical Reports Server (NTRS)
Blankenship, Clay B.; Crosson, William L.; Case, Jonathan L.; Hale, Robert
2010-01-01
The goals of this work are to improve simulations of soil moisture and temperature, and consequently of boundary layer states and processes, by assimilating AMSR-E soil moisture estimates into a coupled land surface-mesoscale model, and to provide a new land surface model as an option in the Land Information System (LIS).
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of the stochastic model-error perturbations on ENSO deterministic predictions are examined in two sets of 21-yr hindcast experiments, initialized from the same initial conditions and differing only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill throughout the 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecast biases and thereby corrects the forecast through this nonlinear heating mechanism.
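The perturbation strategy above can be sketched with a toy model: every ensemble member receives an independent zero-mean red-noise (AR(1)) model-error term at every forecast step. The model step, noise parameters, and AR(1) form are illustrative assumptions, not the paper's calibrated model-error model:

```python
import numpy as np

rng = np.random.default_rng(2)

def forecast_step(x):
    """Toy nonlinear model step (stands in for the coupled model)."""
    return x + 0.1 * np.sin(x)

def perturbed_ensemble_forecast(x0_ens, n_steps, sigma=0.05, alpha=0.9):
    """Advance an ensemble while adding a zero-mean AR(1) model-error
    perturbation to every member at every step. The AR(1) recursion is
    scaled so its stationary standard deviation is `sigma`."""
    x = x0_ens.copy()
    eps = np.zeros_like(x)
    for _ in range(n_steps):
        eps = alpha * eps + np.sqrt(1 - alpha ** 2) * rng.normal(0, sigma, x.shape)
        x = forecast_step(x) + eps
    return x

ens = perturbed_ensemble_forecast(np.full(50, 1.0), n_steps=12)
```

Even though each perturbation has zero mean, the nonlinearity in `forecast_step` lets the perturbations shift the ensemble mean away from the unperturbed trajectory, which is the mechanism the abstract credits for the skill improvement.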
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian-consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKFOSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25 % more accurate state and parameter estimations than the joint and dual approaches.
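The joint EnKF baseline that the paper compares against can be sketched in a toy setting: the state (head) and parameter (conductivity) are stacked in one augmented vector, and the ensemble cross-covariance carries the head observation into the parameter. The linear forward model and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(Z, y, H, r):
    """Stochastic EnKF analysis on an augmented state-parameter ensemble
    Z (rows: components, columns: members) with perturbed observations."""
    m = Z.shape[1]
    A = Z - Z.mean(axis=1, keepdims=True)
    Pzz = A @ A.T / (m - 1)                      # ensemble covariance
    S = H @ Pzz @ H.T + r * np.eye(H.shape[0])   # innovation covariance
    K = Pzz @ H.T @ np.linalg.inv(S)
    Y = y[:, None] + np.sqrt(r) * rng.normal(size=(H.shape[0], m))
    return Z + K @ (Y - H @ Z)

# toy aquifer: head depends linearly on the parameter; only head is observed
m = 200
k_true = 0.3
k_ens = rng.normal(0.0, 0.5, m)                  # prior parameter ensemble
h_ens = 2.0 * k_ens + rng.normal(0, 0.05, m)     # toy forward model h = 2k
Z = np.vstack([h_ens, k_ens])
H = np.array([[1.0, 0.0]])
Z = enkf_update(Z, np.array([2.0 * k_true]), H, r=0.01)
```

Even though only the head is observed, the parameter row of `Z` is pulled toward the truth through the head-parameter cross-covariance; the dual and dual-OSA variants reorganize exactly this flow of information into two coupled filters.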
Harnessing Orbital Debris to Sense the Space Environment
NASA Astrophysics Data System (ADS)
Mutschler, S.; Axelrad, P.; Matsuo, T.
A key requirement for accurate space situational awareness (SSA) is knowledge of the non-conservative forces that act on space objects. These effects vary temporally and spatially, driven by the dynamical behavior of space weather. Existing SSA algorithms adjust space weather models based on observations of calibration satellites. However, lack of sufficient data and mismodeling of non-conservative forces cause inaccuracies in space object motion prediction. The uncontrolled nature of debris makes it particularly sensitive to the variations in space weather. Our research takes advantage of this behavior by inverting observations of debris objects to infer the space environment parameters causing their motion. In addition, this research will produce more accurate predictions of the motion of debris objects. The hypothesis of this research is that it is possible to utilize a "cluster" of debris objects, objects within relatively close proximity of each other, to sense their local environment. We focus on deriving parameters of an atmospheric density model to more precisely predict the drag force on LEO objects. An Ensemble Kalman Filter (EnKF) is used for assimilation; the prior ensemble to the posterior ensemble is transformed during the measurement update in a manner that does not require inversion of large matrices. A prior ensemble is utilized to empirically determine the nonlinear relationship between measurements and density parameters. The filter estimates an extended state that includes position and velocity of the debris object, and atmospheric density parameters. The density is parameterized as a grid of values, distributed by latitude and local sidereal time over a spherical shell encompassing Earth. This research focuses on LEO object motion, but it can also be extended to additional orbital regimes for observation and refinement of magnetic field and solar radiation models. 
An observability analysis of the proposed approach is presented in terms of the measurement cadence necessary to estimate the local space environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juxiu Tong; Bill X. Hu; Hai Huang
2014-03-01
With the growing importance of water resources worldwide, remediation of anthropogenic contamination due to reactive solute transport becomes even more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. In modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of chemical reactions and the limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and to improve predictions of chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) that calibrates reactive rate parameters for urea hydrolysis transport in a synthetic one-dimensional laboratory-scale column and updates the model prediction. We applied a constrained EnKF that imposes physically meaningful constraints on the updated reactive rate parameters and the predicted solute concentrations after the data assimilation calibration. We conclude that the data assimilation method via the EnKF can efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration prediction; the more data were assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
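A constrained EnKF parameter update of this kind can be sketched for a single rate constant observed through a concentration. Here the constraint is a simple clipping of the posterior rates to their physical bounds, and the first-order decay forward model and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_param_update(k_ens, c_ens, c_obs, r, k_bounds):
    """EnKF update of a rate-constant ensemble from one concentration
    observation, with the posterior clipped to physical bounds
    (a simple stand-in for the paper's constrained EnKF)."""
    ck = np.cov(c_ens, k_ens)              # 2x2: var(c), cov(c, k)
    gain = ck[0, 1] / (ck[0, 0] + r)       # Kalman gain for the parameter
    perturbed = c_obs + np.sqrt(r) * rng.normal(size=k_ens.size)
    k_post = k_ens + gain * (perturbed - c_ens)
    return np.clip(k_post, *k_bounds)      # enforce 0 <= k <= k_max

k_true, t = 0.8, 1.0
k_ens = rng.normal(0.5, 0.3, 300)                       # prior rate ensemble
c_ens = np.exp(-np.clip(k_ens, 0, None) * t)            # forward model c = exp(-k t)
c_obs = np.exp(-k_true * t)                             # observed concentration
k_ens = enkf_param_update(k_ens, c_ens, c_obs, r=1e-4, k_bounds=(0.0, 5.0))
```

The clipping step is what keeps the calibrated rate constants physically meaningful after each assimilation cycle, mirroring the constraint idea in the abstract.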
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-03-01
A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and the fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), an adaptive non-stationary and nonlinear signal processing method, can decompose a multicomponent modulated signal into a series of demodulated mono-components, called product functions (PFs); however, mode mixing is a serious drawback. To alleviate this, ELMD, a noise-assisted version of LMD, was developed. Still, environmental noise present in the raw signal remains in the PF containing the component of interest. The FK performs well at detecting impulses under strong environmental noise, but it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK for fault detection in rotating machinery. First, ELMD decomposes the raw signal into a set of PFs. Then, the PF that best characterizes the fault information is selected according to a kurtosis index. Finally, the selected PF is filtered by an optimal band-pass filter based on the FK to extract the impulse signal. Faults are identified by the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in simulation analyses, and the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated in gearbox and rolling bearing case studies.
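The back half of this pipeline (kurtosis-based component selection, band-pass filtering, squared envelope spectrum) can be sketched on a simulated bearing-fault signal. Fixed band-pass outputs stand in for the ELMD product functions and the FK band selection, and the 10 Hz fault frequency and 300 Hz resonance are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

rng = np.random.default_rng(5)
fs = 2000
t = np.arange(0, 2.0, 1 / fs)

# simulated fault: a 10 Hz train of impulses exciting a 300 Hz resonance
fault_hz = 10
impulses = (np.arange(t.size) % (fs // fault_hz) == 0).astype(float)
ringing = np.exp(-np.arange(100) / 10.0) * np.sin(2 * np.pi * 300 * np.arange(100) / fs)
signal = np.convolve(impulses, ringing)[:t.size] + 0.3 * rng.normal(size=t.size)

# stand-ins for ELMD product functions: two fixed band-pass outputs
bands = [(5, 100), (200, 400)]
pfs = [filtfilt(*butter(3, (lo / (fs / 2), hi / (fs / 2)), "band"), signal)
       for lo, hi in bands]

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

best = max(pfs, key=kurtosis)                   # most impulsive component
env = np.abs(hilbert(best)) ** 2                # squared envelope
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
peak_hz = freqs[np.argmax(spec[freqs < 50])]    # dominant low-frequency line
```

The fault shows up as the dominant line of the squared envelope spectrum at the impact repetition rate, even though the raw spectrum is dominated by the resonance frequency.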
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage from water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to account for the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To account for the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
Development of the NHM-LETKF regional reanalysis system assimilating conventional observations only
NASA Astrophysics Data System (ADS)
Fukui, S.; Iwasaki, T.; Saito, K. K.; Seko, H.; Kunii, M.
2016-12-01
The information about long-term high-resolution atmospheric fields is very useful for studying meso-scale responses to climate change or analyzing extreme events. We are developing an NHM-LETKF (the local ensemble transform Kalman filter with the nonhydrostatic model of the Japan Meteorological Agency (JMA)) regional reanalysis system assimilating only conventional observations that are available over about 60 years, such as surface observations at observatories and upper-air observations with radiosondes. The domain covers Japan and its surroundings. Before the long-term reanalysis is performed, an experiment using the system was conducted for August 2014 to identify the effectiveness and problems of the regional reanalysis system. In this study, we investigated the six-hour accumulated precipitation obtained by integration from the analysis fields. The reproduced precipitation was compared with JMA's Radar/Rain-gauge Analyzed Precipitation data over the Japanese islands and with the precipitation of JRA-55, which is used for the lateral boundary conditions. The comparisons reveal an underestimation of precipitation in the regional reanalysis, which is reduced by extending the forecast time. In the regional reanalysis system, the analysis fields are derived from the ensemble mean, in which components that conflict among ensemble members are filtered out. Therefore, it is important to tune the inflation factor and lateral boundary perturbations so that the analysis fields are not smoothed excessively, and to allow more time for the fields to spin up. In the extended run, the underestimation still remains, implying that it is attributable to the forecast model itself as well as to the analysis scheme.
Feature selection for the classification of traced neurons.
López-Cabrera, José D; Lorenzo-Ginori, Juan V
2018-06-01
The great availability of computational tools for calculating the properties of traced neurons has produced many descriptors that allow automated classification of neurons from these reconstructions. This makes it necessary to eliminate irrelevant features and select the most appropriate ones in order to improve the quality of the classification. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, one of the computational tools most widely used in neuroinformatics to quantify traced neurons. We review current feature selection techniques, including filter, wrapper, embedded and ensemble methods, and measure the stability of the feature selection methods. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by the feature selection methods were evaluated with supervised classifiers, using Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table and the Logistic classifier as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared, and the subsets returned were tested in classification tasks with different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons.
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side effects of the methods used for EEG phase calculation, especially during low-analytical-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used to extract EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest-neighbors and random-forest classifiers on a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise); the significance of these improvements was confirmed by paired-sample t-tests, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
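The Monte Carlo phase-estimation idea can be sketched as follows. Instead of perturbing the zero-pole loci directly, this sketch jitters the band edges of a Butterworth filter (an assumed simplification) and averages the resulting analytic-signal phases on the unit circle; the 10 Hz test signal and all parameters are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)
fs = 250
t = np.arange(0, 4.0, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # alpha + noise

def phase_ensemble(x, f_lo=8.0, f_hi=12.0, n_runs=30, jitter=0.2):
    """Monte Carlo phase estimate: perturb the band edges slightly on
    each run, take the analytic-signal phase via the Hilbert transform,
    then average the runs on the unit circle (circular mean)."""
    phases = []
    for _ in range(n_runs):
        lo = f_lo + rng.normal(0, jitter)
        hi = f_hi + rng.normal(0, jitter)
        b, a = butter(3, (lo / (fs / 2), hi / (fs / 2)), "band")
        phases.append(np.angle(hilbert(filtfilt(b, a, x))))
    return np.angle(np.mean(np.exp(1j * np.array(phases)), axis=0))

phi = phase_ensemble(eeg)
```

Averaging on the unit circle rather than averaging raw angles avoids wrap-around artifacts, and phase jumps that appear in only a few perturbed runs are suppressed by the ensemble mean.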
USDA-ARS?s Scientific Manuscript database
The Ensemble Kalman Filter (EnKF), a popular data assimilation technique for non-linear systems was applied to the Root Zone Water Quality Model. Measured soil moisture data at four different depths (5cm, 20cm, 40cm and 60cm) from two agricultural fields (AS1 and AS2) in northeastern Indiana were us...
Simultaneous assimilation of AIRS and GOSAT CO2 observations with Ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Liu, J.; Kalnay, E.; Fung, I.; Kang, J.
2012-12-01
Lack of CO2 vertical information could lead to bias in the surface CO2 flux estimation (Stephens et al., 2007). Liu et al. (2012) showed that assimilating AIRS CO2 observations, which are sensitive to middle to upper troposphere CO2, improves CO2 concentration, especially in the middle to upper troposphere. GOSAT is sensitive to CO2 over the whole column, but the spatial coverage is sparser than AIRS. In this study, we assimilate AIRS and GOSAT CO2 observations simultaneously along with surface flask CO2 observations and meteorology observations with Ensemble Kalman filter (EnKF) to constrain CO2 vertical profiles simulated by NCAR carbon-climate model. We will show the impact of assimilating AIRS and GOSAT CO2 on the CO2 vertical gradient, seasonal cycle and spatial gradient by assimilating only GOSAT or AIRS and comparing to the control experiment. The quality of CO2 analysis will be examined by validating against independent CO2 aircraft observations, and analyzing the relationship between CO2 analysis fields and major circulation, such as Madden Julian Oscillation. We will also discuss the deficiencies of the observation network in understanding the carbon cycle.
Calibrating Parameters of Power System Stability Models using Advanced Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Diao, Ruisheng; Li, Yuanyuan
With the ever-increasing penetration of renewable energy, smart loads, energy storage, and new market behavior, today's power grid is becoming more dynamic and stochastic, which may invalidate traditional study assumptions and pose great operational challenges. Thus, it is of critical importance to maintain good-quality models for secure and economic planning and real-time operation. Following the 1996 Western Systems Coordinating Council (WSCC) system blackout, the North American Electric Reliability Corporation (NERC) and the Western Electricity Coordinating Council (WECC) enforced a number of policies and standards to guide the power industry to periodically validate power grid models and calibrate poor parameters, with the goal of building sufficient confidence in model quality. The PMU-based approach, which uses online measurements without interfering with the operation of generators, provides a low-cost alternative for meeting NERC standards. This paper presents an innovative procedure and tool suite to validate and calibrate models based on a trajectory sensitivity analysis method and an advanced ensemble Kalman filter algorithm. The developed prototype demonstrates excellent performance in identifying and calibrating bad parameters of a realistic hydro power plant against multiple system events.
Adaptive error covariances estimation methods for ensemble Kalman filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, Yicun, E-mail: zhen@math.psu.edu; Harlim, John, E-mail: jharlim@psu.edu
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computation of inverting error covariance matrices of products of innovation processes at different lags when the number of observations becomes large. When we use only products of innovation processes up to lag one, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible, since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry–Sauer method on the L-96 example.
Adaptive spectral filtering of PIV cross correlations
NASA Astrophysics Data System (ADS)
Giarra, Matthew; Vlachos, Pavlos; Aether Lab Team
2016-11-01
Using cross correlations (CCs) in particle image velocimetry (PIV) assumes that tracer particles in interrogation regions (IRs) move with the same velocity. But this assumption is nearly always violated, because real flows exhibit velocity gradients, which degrade the signal-to-noise ratio (SNR) of the CC and are a major driver of error in PIV. Iterative methods help reduce these errors, but even they can fail when gradients are large within individual IRs. We present an algorithm to mitigate the effects of velocity gradients on PIV measurements. Our algorithm is based on a model of the CC, which predicts a relationship between the PDF of particle displacements and the variation of the correlation's SNR across the Fourier spectrum. We show how to measure this SNR from the CC, and use this insight to create a filter that suppresses the low-SNR portions of the spectrum. Our algorithm extends to the ensemble correlation, where it accelerates the convergence of the measurement and also reveals the PDF of displacements of the ensemble (and therefore statistical metrics such as the diffusion coefficient). Finally, our model provides theoretical foundations for a number of "rules of thumb" in PIV, like the quarter-window rule.
Wang, Xuan; Tandeo, Pierre; Fablet, Ronan; Husson, Romain; Guan, Lei; Chen, Ge
2016-01-01
The swell propagation model built on geometric optics is known to work well when simulating swells radiated from a distant storm. Based on this simple approximation, satellites have acquired a large number of samples of basin-traversing swells induced by fierce storms situated in mid-latitudes. How to routinely reconstruct swell fields from such irregularly sampled observations from space via known swell-propagation principles requires further examination. In this study, we apply 3-h-interval pseudo SAR observations in the ensemble Kalman filter (EnKF) to reconstruct a swell field in an ocean basin, and compare it with buoy swell partitions and polynomial regression results. As validated against in situ measurements, the EnKF works well in terms of spatial–temporal consistency in far-field swell propagation scenarios. Using this framework, we further address the influence of EnKF parameters and perform a sensitivity analysis to evaluate estimations made under different sets of parameters. Such analysis is of key interest with respect to future multiple-source routinely recorded swell field data. Satellite-derived swell data can serve as a valuable complement to in situ or wave re-analysis datasets. PMID:27898005
Unwinding the hairball graph: Pruning algorithms for weighted complex networks
NASA Astrophysics Data System (ADS)
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
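A minimal sketch of a marginal-likelihood-style edge filter for integer-weighted graphs. The exact null model of the paper may differ; here we assume, purely as an illustration, a configuration-model-like null in which each of the T unit weight events connects nodes i and j with probability proportional to the product of their strengths, so each edge weight is binomial.

```python
from scipy.stats import binom

def mlf_pvalues(edges):
    """Significance p-values for undirected, integer-weighted edges.

    edges: {(i, j): weight}.  Each observed weight is compared with a
    binomial null (an assumed form): W_ij ~ Binomial(T, s_i*s_j / (2*T^2)),
    where s_i is node strength and T is the total graph weight.
    """
    strength = {}
    for (i, j), w in edges.items():
        strength[i] = strength.get(i, 0) + w
        strength[j] = strength.get(j, 0) + w
    T = sum(edges.values())                      # total weight = unit events
    pvals = {}
    for (i, j), w in edges.items():
        p = strength[i] * strength[j] / (2.0 * T * T)
        pvals[(i, j)] = binom.sf(w - 1, T, p)    # P(W >= w) under the null
    return pvals
```

Pruning then keeps only edges whose p-value falls below a chosen significance threshold, which tends to preserve surprisingly heavy edges between low-strength nodes.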
NASA Technical Reports Server (NTRS)
Shykoff, Barbara E.; Swanson, Harvey T.
1987-01-01
A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
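The filter-generation step can be illustrated with a least-squares regression, as sketched below. The details (tap count, edge padding) are assumptions rather than the paper's exact procedure: a two-sided (noncausal) FIR filter is fitted so that the measured, delay-removed step response, once filtered, approximates an ideal step.

```python
import numpy as np

def design_deconvolution_filter(response, target, ntaps=21):
    """Fit a two-sided FIR deconvolution filter by least squares so that
    the filtered step response approximates the target step."""
    half = ntaps // 2
    N = len(response)
    A = np.zeros((N, ntaps))
    for k in range(N):
        for t in range(ntaps):
            idx = min(max(k + t - half, 0), N - 1)   # edge-pad outside record
            A[k, t] = response[idx]
    g, *_ = np.linalg.lstsq(A, target, rcond=None)
    return g

def apply_filter(signal, g):
    """Apply the two-sided filter with edge padding (matches the design)."""
    half = len(g) // 2
    padded = np.pad(signal, (half, half), mode="edge")
    return np.convolve(padded, g[::-1], mode="valid")
```

Once designed from a clean (or ensemble-averaged) step response, the same taps can be applied to arbitrary gas-concentration signals after removing the transport delay.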
Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems
Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.
2011-01-01
The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
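The offline computation of the steady-state gain can be sketched by iterating the discrete Riccati recursion until the gain converges. This is the generic textbook construction, not the authors' exact code; `A`, `H`, `Q`, `R` are the usual state-transition, observation, process-noise, and observation-noise matrices.

```python
import numpy as np

def steady_state_gain(A, H, Q, R, tol=1e-10, max_iter=100000):
    """Iterate the discrete Riccati recursion until the Kalman gain
    stops changing; return the steady-state gain."""
    n = A.shape[0]
    P = np.eye(n)                                  # posterior covariance
    K = None
    for _ in range(max_iter):
        M = A @ P @ A.T + Q                        # predicted covariance
        K_new = M @ H.T @ np.linalg.inv(H @ M @ H.T + R)
        P = (np.eye(n) - K_new @ H) @ M            # posterior update
        if K is not None and np.max(np.abs(K_new - K)) < tol:
            return K_new
        K = K_new
    return K
```

In real-time decoding, the fixed gain replaces the per-step gain computation, which is what yields the reported runtime savings.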
Ensemble Simulation of the Atmospheric Radionuclides Discharged by the Fukushima Nuclear Accident
NASA Astrophysics Data System (ADS)
Sekiyama, Thomas; Kajino, Mizuo; Kunii, Masaru
2013-04-01
Enormous amounts of radionuclides were discharged into the atmosphere by a nuclear accident at the Fukushima Daiichi nuclear power plant (FDNPP) after the earthquake and tsunami on 11 March 2011. The radionuclides were dispersed from the power plant and deposited mainly over eastern Japan and the North Pacific Ocean. Many numerical simulations of the radionuclide dispersion and deposition have been attempted since the nuclear accident. However, none of them were able to perfectly simulate the distribution of dose rates observed after the accident over eastern Japan. This was partly due to errors in the wind vectors and precipitation fields used in the numerical simulations; unfortunately, these deterministic simulations could not provide the probability distribution of the simulation results and errors. Therefore, an ensemble simulation of the atmospheric radionuclides was performed using the ensemble Kalman filter (EnKF) data assimilation system coupled with the Japan Meteorological Agency (JMA) non-hydrostatic mesoscale model (NHM); this mesoscale model has been used operationally for daily weather forecasts by JMA. Meteorological observations were provided to the EnKF data assimilation system from the JMA operational-weather-forecast dataset. Through this ensemble data assimilation, twenty members of the meteorological analysis over eastern Japan from 11 to 31 March 2011 were successfully obtained. Using these meteorological ensemble analysis members, radionuclide behavior in the atmosphere, such as advection, convection, diffusion, dry deposition, and wet deposition, was simulated. This ensemble simulation provided multiple realizations of the radionuclide dispersion and distribution. Because a large ensemble deviation indicates low accuracy of the numerical simulation, probabilistic information is obtainable from the ensemble simulation results.
For example, the uncertainty of precipitation triggered the uncertainty of wet deposition, which in turn triggered the uncertainty of atmospheric radionuclide amounts. The remaining radionuclides were then transported downwind; consequently, the uncertainty signal of the radionuclide amounts propagated downwind. This signal propagation was seen in the ensemble simulation by tracking the areas of large deviation in radionuclide concentration and deposition. These statistics can provide information useful for the probabilistic prediction of radionuclides.
NASA Astrophysics Data System (ADS)
Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun
2018-05-01
A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations used for flow-dependent background error covariance to produce a HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved most. For precipitation forecasts, the DR 3DEnVar always outperforms HR GSI 3DVar. It also outperforms the LR 3DEnVar, except for the initial forecast period and lower thresholds.
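The ensemble-covariance weighting in a hybrid system can be illustrated with a simple convex blend of a static and an ensemble-derived covariance. In an operational 3DEnVar the ensemble contribution enters through extended control variables with localization rather than an explicit matrix, so this is only a conceptual sketch with illustrative names.

```python
import numpy as np

def hybrid_covariance(B_static, ensemble, weight):
    """Convex blend of a static background-error covariance with the
    flow-dependent sample covariance of an ensemble (columns = members)."""
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
    B_ens = A @ A.T / (ensemble.shape[1] - 1)             # sample covariance
    return (1.0 - weight) * B_static + weight * B_ens
```

A weight of 0.9, as in the best-performing configuration reported above, lets the flow-dependent part dominate while the static part guards against ensemble sampling error.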
Velazquez, Hector A; Riccardi, Demian; Xiao, Zhousheng; Quarles, Leigh Darryl; Yates, Charless Ryan; Baudry, Jerome; Smith, Jeremy C
2018-02-01
Ensemble docking is now commonly used in early-stage in silico drug discovery and can be used to attack difficult problems such as finding lead compounds which can disrupt protein-protein interactions. We give an example of this methodology here, as applied to fibroblast growth factor 23 (FGF23), a protein hormone that is responsible for regulating phosphate homeostasis. The first small-molecule antagonists of FGF23 were recently discovered by combining ensemble docking with extensive experimental target validation data (Science Signaling, 9, 2016, ra113). Here, we provide a detailed account of how ensemble-based high-throughput virtual screening was used to identify the antagonist compounds discovered in reference (Science Signaling, 9, 2016, ra113). Moreover, we perform further calculations, redocking those antagonist compounds identified in reference (Science Signaling, 9, 2016, ra113) that performed well on drug-likeness filters, to predict possible binding regions. These predicted binding modes are rescored with the molecular mechanics Poisson-Boltzmann surface area (MM/PBSA) approach to calculate the most likely binding site. Our findings suggest that the antagonist compounds antagonize FGF23 through the disruption of protein-protein interactions between FGF23 and fibroblast growth factor receptor (FGFR). © 2017 John Wiley & Sons A/S.
A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.
2014-12-01
The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods that rely on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, and our choice to minimize differences in instrument counts rather than phase space densities, and will demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.
NASA Astrophysics Data System (ADS)
Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.
2018-03-01
Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.
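Gaussian anamorphosis, used above as a bridge between data assimilation and uncertainty quantification, maps a non-Gaussian sample onto standard-normal scores. Below is a rank-based sketch, one common variant and not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm, rankdata

def anamorphosis(x):
    """Rank-based Gaussian anamorphosis: map a 1-D sample to
    standard-normal scores via empirical quantiles."""
    u = rankdata(x) / (len(x) + 1)   # plotting positions in (0, 1)
    return norm.ppf(u)
```

The EnKF update is then performed on the transformed, near-Gaussian variables, and the analysis is mapped back through the inverse of the fitted transform.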
NASA Astrophysics Data System (ADS)
Lisniak, D.; Meissner, D.; Klein, B.; Pinzinger, R.
2013-12-01
The German Federal Institute of Hydrology (BfG) offers navigational water-level forecasting services on the Federal Waterways, such as the rivers Rhine and Danube. In cooperation with the Federal States, this mandate also includes the forecasting of flood events. For the River Rhine, the most frequented inland waterway in Central Europe, the BfG employs a hydrological model (HBV) coupled to a hydraulic model (SOBEK) by the FEWS framework to perform daily operational water-level forecasts. Sensitivity studies have shown that the state of soil water storage in the hydrological model is a major factor of uncertainty when performing short- to medium-range forecasts some days ahead. Taking into account the various additional sources of uncertainty associated with hydrological modeling, including measurement uncertainties, it is essential to estimate an optimal initial state of the soil water storage before propagating it in time, forced by meteorological forecasts, and transforming it into discharge. We show that, using the Ensemble Kalman Filter, these initial states can be updated straightforwardly under certain hydrologic conditions. However, this approach is not sufficient if the runoff is mainly generated by snow melt. Since the snow cover evolution is modeled rather poorly by the HBV model in our operational setting, flood events caused by snow melt are consistently underestimated, which has long-term effects in basins characterized by a nival runoff regime. Thus, it appears beneficial to update the snow storage of the HBV model with information derived from regionalized snow cover observations. We present a method to incorporate spatially distributed snow cover observations into the lumped HBV model. We show the plausibility of this approach and assess the benefits of coupled snow cover and soil water storage updating, which combines direct insertion with an Ensemble Kalman Filter.
The Ensemble Kalman Filter used here takes into account the internal routing mechanism of the HBV-model, which causes a delayed response of the simulated discharge at the catchment outlet to changes in internal states.
Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts
NASA Astrophysics Data System (ADS)
Wang, M.; Kamarianakis, Y.; Georgescu, M.
2017-12-01
A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
NASA Astrophysics Data System (ADS)
Dikpati, Mausumi; Anderson, Jeffrey L.; Mitra, Dhrubaditya
2016-09-01
We implement an Ensemble Kalman Filter procedure using the Data Assimilation Research Testbed for assimilating “synthetic” meridional flow-speed data in a Babcock-Leighton-type flux-transport solar dynamo model. By performing several “observing system simulation experiments,” we reconstruct time variation in meridional flow speed and analyze the sensitivity and robustness of the reconstruction. Using 192 ensemble members and 10 observations, each with 4% error, we find that flow speed is reconstructed best if observations of near-surface poloidal fields from low latitudes and tachocline toroidal fields from midlatitudes are assimilated. If observations include a mixture of poloidal and toroidal fields from different latitude locations, reconstruction is reasonably good for ≤ 40% error in low-latitude data, even if observational error in polar region data becomes 200%, but deteriorates when observational error increases in low- and midlatitude data. Solar polar region observations are known to contain larger errors than those in low latitudes; our forward operator (a flux-transport dynamo model here) can sustain larger errors in polar region data, but is more sensitive to errors in low-latitude data. An optimal reconstruction is obtained if an assimilation interval of 15 days is used; 10- and 20-day assimilation intervals also give reasonably good results. Assimilation intervals < 5 days do not produce faithful reconstructions of flow speed, because the system requires a minimum time to develop dynamics to respond to flow variations. Reconstruction also deteriorates if an assimilation interval > 45 days is used, because the system’s inherent memory interferes with its short-term dynamics during a substantially long run without updating.
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
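The ESMDA update described above can be sketched for a generic forward model as follows; the ensemble size, inflation coefficients, and linear-algebra details are illustrative rather than the study's exact configuration. The key idea is to repeat a perturbed-observation smoother update several times with inflated observation error, where the inverse inflation coefficients sum to one.

```python
import numpy as np

def esmda(X, y, forward, R, alphas=(4.0, 4.0, 4.0, 4.0), rng=None):
    """Ensemble Smoother with Multiple Data Assimilation (sketch).

    X       : (n, m) prior parameter ensemble (m members)
    forward : maps an (n, m) ensemble to (p, m) predicted observations
    alphas  : inflation coefficients; 1/alpha must sum to one
    """
    rng = np.random.default_rng(0) if rng is None else rng
    assert abs(sum(1.0 / a for a in alphas) - 1.0) < 1e-9
    p, m = len(y), X.shape[1]
    for a in alphas:
        D = forward(X)                                # predicted observations
        Da = D - D.mean(axis=1, keepdims=True)
        Xa = X - X.mean(axis=1, keepdims=True)
        Cdd = Da @ Da.T / (m - 1) + a * R             # inflated innovation cov
        Cxd = Xa @ Da.T / (m - 1)
        K = np.linalg.solve(Cdd, Cxd.T).T
        # perturb observations with the inflated error a * R
        Y = y[:, None] + rng.multivariate_normal(np.zeros(p), a * R, size=m).T
        X = X + K @ (Y - D)
    return X
```

For linear-Gaussian problems the repeated inflated updates recover the single exact update; for nonlinear forward models the iterations mitigate the linearization error of a single large step.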
Forecasting Influenza Epidemics in Hong Kong.
Yang, Wan; Cowling, Benjamin J; Lau, Eric H Y; Shaman, Jeffrey
2015-07-01
Recent advances in mathematical modeling and inference methodologies have enabled development of systems capable of forecasting seasonal influenza epidemics in temperate regions in real-time. However, in subtropical and tropical regions, influenza epidemics can occur throughout the year, making routine forecast of influenza more challenging. Here we develop and report forecast systems that are able to predict irregular non-seasonal influenza epidemics, using either the ensemble adjustment Kalman filter or a modified particle filter in conjunction with a susceptible-infected-recovered (SIR) model. We applied these model-filter systems to retrospectively forecast influenza epidemics in Hong Kong from January 1998 to December 2013, including the 2009 pandemic. The forecast systems were able to forecast both the peak timing and peak magnitude for 44 epidemics in 16 years caused by individual influenza strains (i.e., seasonal influenza A(H1N1), pandemic A(H1N1), A(H3N2), and B), as well as 19 aggregate epidemics caused by one or more of these influenza strains. Average forecast accuracies were 37% (for both peak timing and magnitude) at 1-3 week leads, and 51% (peak timing) and 50% (peak magnitude) at 0 lead. Forecast accuracy increased as the spread of a given forecast ensemble decreased; the forecast accuracy for peak timing (peak magnitude) increased up to 43% (45%) for H1N1, 93% (89%) for H3N2, and 53% (68%) for influenza B at 1-3 week leads. These findings suggest that accurate forecasts can be made at least 3 weeks in advance for subtropical and tropical regions.
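The SIR model driving these forecast systems can be sketched as a simple discrete-daily integration; the parameter values below are illustrative, and the actual systems couple such a model to an ensemble adjustment Kalman filter or particle filter that updates states and parameters against surveillance data.

```python
import numpy as np

def sir_run(S0, I0, beta, gamma, N, days):
    """Discrete-daily SIR integration; returns the daily incidence
    (new infections per day)."""
    S, I = float(S0), float(I0)
    incidence = []
    for _ in range(days):
        new_inf = beta * S * I / N      # new infections this day
        new_rec = gamma * I             # new recoveries this day
        S -= new_inf
        I += new_inf - new_rec
        incidence.append(new_inf)
    return np.array(incidence)
```

An ensemble forecast would propagate many draws of (S, I, beta, gamma) through `sir_run` and treat the spread of simulated peak timing and magnitude as the forecast uncertainty.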
Estimation of the vortex length scale and intensity from two-dimensional samples
NASA Technical Reports Server (NTRS)
Reuss, D. L.; Cheng, W. P.
1992-01-01
A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.
2018-01-01
Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis system approach. Typically, a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations as compared to its hybrid counterpart should serve as enough evidence of the implausibility of such an expectation. This presentation assembles numerous pieces of anecdotal evidence to illustrate that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. More simply, this work suggests that hybrid systems can reliably be constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective.
More generally, single-analysis systems avoid contradictions such as having to rely on one sub-component to generate performance diagnostics for another, possibly not fully consistent, component.
NASA Astrophysics Data System (ADS)
El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.
2016-02-01
Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and their misspecification can lead to large model errors. The standard joint state-parameter augmentation technique using the stochastic ensemble Kalman filter (EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the spring bloom. A better handling of the estimation problem is often achieved by separating the updates of the state and the parameters using the so-called dual EnKF. The dual filter is computationally more expensive than the joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, after which a model propagation step is performed. We test the performance of the new smoothing-based scheme against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic.
We use nutrient profile data (down to 2000 m depth) and surface partial pressure of CO2 measurements from Ocean Weather Station Mike (66°N, 2°E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of complexity and accuracy of the state and parameter estimates.
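The joint (augmented-state) scheme that the dual and smoothing-based filters refine can be illustrated with a minimal stochastic EnKF sketch in Python. This is a toy, not the authors' algorithm: the state, parameter, observation operator, and all numbers are hypothetical, and the parameter is updated purely through its sample correlation with the observed state.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ens, obs, obs_err, H):
    """Stochastic (perturbed-observations) EnKF analysis on ens (n_vars, n_ens)."""
    n_ens = ens.shape[1]
    pert_obs = obs[:, None] + rng.normal(0.0, obs_err, size=(len(obs), n_ens))
    A = ens - ens.mean(axis=1, keepdims=True)          # ensemble anomalies
    HA = H @ A                                          # anomalies in obs space
    P_hh = HA @ HA.T / (n_ens - 1) + obs_err**2 * np.eye(len(obs))
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_hh)  # Kalman gain
    return ens + K @ (pert_obs - H @ ens)

# Joint update: augment the state x with an uncertain parameter theta
# and update both at once from an observation of the state alone.
n_ens = 50
x = rng.normal(1.0, 0.5, size=(1, n_ens))       # model state ensemble
theta = rng.normal(0.3, 0.1, size=(1, n_ens))   # uncertain parameter ensemble
joint = np.vstack([x, theta])
H = np.array([[1.0, 0.0]])                      # only the state is observed
analysis = enkf_update(joint, np.array([1.2]), 0.1, H)
```

The dual EnKF and the proposed smoothing-based scheme replace this single augmented update with separate update steps for the state rows and the parameter rows.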
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here, a novel approach models input power noise with time-correlated stochastic fluctuations and integrates them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for inferring the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
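A time-correlated parameter noise of this kind is commonly modeled as a discretized Ornstein-Uhlenbeck (AR(1)) process propagated alongside each ensemble member during the forecast. The following is a minimal sketch under assumed, purely illustrative values for the correlation length and noise amplitude; it is not the paper's actual turbine model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_forecast(p, dt, tau, sigma, rng):
    """Advance a time-correlated (AR(1) / Ornstein-Uhlenbeck) noise process.

    tau is the correlation length to be inferred: as tau -> 0 the process
    approaches white noise, as tau -> infinity a constant (biased) parameter.
    """
    phi = np.exp(-dt / tau)
    return phi * p + sigma * np.sqrt(1.0 - phi**2) * rng.normal(size=p.shape)

# Ensemble of mechanical input power perturbations around a nominal value.
n_ens, p_nominal = 100, 1.0
p = np.zeros(n_ens)
for _ in range(1000):                     # forecast steps between updates
    p = ar1_forecast(p, dt=0.01, tau=0.5, sigma=0.05, rng=rng)
power = p_nominal + p                     # fluctuating input power per member
```

The stationary spread of the process is sigma regardless of tau, so the correlation length must be inferred from the time-lag structure of the PMU data rather than from the instantaneous ensemble spread.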
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolator/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble-average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained by simulation. A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model that is now finding application in data compression.
Assimilation of Tropical Cyclone Track and Wind Radius Data with an Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
Kunii, M.
2014-12-01
Improving tropical cyclone (TC) forecasts is one of the most important issues in meteorology, and TC intensity forecasts remain a challenging task. Because the lack of observations near TCs usually degrades the accuracy of initial fields, efforts to utilize TC advisory data in data assimilation have typically started with an ensemble Kalman filter (EnKF). In this study, TC intensity and position information was directly assimilated using the EnKF, and the impact of these observations was investigated by comparing different assimilation strategies. Another experiment with TC wind radius data was carried out to examine the influence of TC shape parameters. Sensitivity experiments indicated that the assimilation of TC intensity and position data yielded results superior to those based on conventional assimilation of TC minimum sea level pressure as a standard surface pressure observation. Assimilation of TC wind radius data brought TC outer circulations closer to observations. The impacts of these TC parameters were also evaluated using the case of Typhoon Talas in 2011. The TC intensity, position, and wind radius data led to improved TC track forecasts and thence to improved precipitation forecasts. These results imply that initialization with these TC-related observations benefits TC forecasts, offering promise for the prevention and mitigation of natural disasters caused by TCs.
Multi-scale assimilation of remotely sensed snow observations for hydrologic estimation
NASA Astrophysics Data System (ADS)
Andreadis, K.; Lettenmaier, D.
2008-12-01
Data assimilation provides a framework for optimally merging model predictions and remote sensing observations of snow properties (snow cover extent, water equivalent, grain size, melt state), ideally overcoming limitations of both. A synthetic twin experiment is used to evaluate a data assimilation system that would ingest remotely sensed observations from passive microwave and visible wavelength sensors (brightness temperature and snow cover extent derived products, respectively) with the objective of estimating snow water equivalent. Two data assimilation techniques are used, the Ensemble Kalman filter and the Ensemble Multiscale Kalman filter (EnMKF). One of the challenges inherent in such a data assimilation system is the discrepancy in spatial scales between the different types of snow-related observations. The EnMKF represents the sample model error covariance with a tree that relates the system state variables at different locations and scales through a set of parent-child relationships. This provides an attractive framework to efficiently assimilate observations at different spatial scales. This study provides a first assessment of the feasibility of a system that would assimilate observations from multiple sensors (MODIS snow cover and AMSR-E brightness temperatures) and at different spatial scales for snow water equivalent estimation. The relative value of the different types of observations is examined. Additionally, the error characteristics of both model and observations are discussed.
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integratemore » them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.« less
NASA Astrophysics Data System (ADS)
Dietze, M.; Raiho, A.; Fer, I.; Dawson, A.; Heilman, K.; Hooten, M.; McLachlan, J. S.; Moore, D. J.; Paciorek, C. J.; Pederson, N.; Rollinson, C.; Tipton, J.
2017-12-01
The pre-industrial period serves as an essential baseline against which we judge anthropogenic impacts on the earth's systems. However, direct measurements of key biogeochemical processes, such as carbon, water, and nutrient cycling, are absent for this period and there is no direct way to link paleoecological proxies, such as pollen and tree rings, to these processes. Process-based terrestrial ecosystem models provide a way to make inferences about the past, but have large uncertainties and by themselves often fail to capture much of the observed variability. Here we investigate the ability to improve inferences about pre-industrial biogeochemical cycles through the formal assimilation of proxy data into multiple process-based models. A Tobit ensemble filter with explicit estimation of process error was run at five sites across the eastern US for three models (LINKAGES, ED2, LPJ-GUESS). In addition to process error, the ensemble accounted for parameter uncertainty, estimated through the assimilation of the TRY and BETY trait databases, and driver uncertainty, accommodated by probabilistically downscaling and debiasing CMIP5 GCM output then filtering based on paleoclimate reconstructions. The assimilation was informed by four PalEON data products, each of which includes an explicit Bayesian error estimate: (1) STEPPS forest composition estimated from fossil pollen; (2) REFAB aboveground biomass (AGB) estimated from fossil pollen; (3) tree ring AGB and woody net primary productivity (wNPP); and (4) public land survey composition, stem density, and AGB. By comparing ensemble runs with and without data assimilation we are able to assess the information contribution of the proxy data to constraining biogeochemical fluxes, which is driven by the combination of model uncertainty, data uncertainty, and the strength of correlation between observed and unobserved quantities in the model ensemble. 
To our knowledge this is the first attempt at multi-model data assimilation with terrestrial ecosystem models. Results from the data-model assimilation allow us to assess the consistency across models in post-assimilation inferences about indirectly inferred quantities, such as GPP, soil carbon, and the water budget.
NOAA HRD's HEDAS Data Assimilation System's performance for the 2010 Atlantic Hurricane Season
NASA Astrophysics Data System (ADS)
Sellwood, K.; Aksoy, A.; Vukicevic, T.; Lorsolo, S.
2010-12-01
The Hurricane Ensemble Data Assimilation System (HEDAS) was developed at the Hurricane Research Division (HRD) of NOAA, in conjunction with an experimental version of the Hurricane Weather Research and Forecast model (HWRFx), in an effort to improve the initial representation of the hurricane vortex by utilizing high-resolution in-situ data collected during NOAA's Hurricane Field Program. HEDAS implements the "ensemble square root" filter of Whitaker and Hamill (2002) using a 30-member ensemble obtained from NOAA/ESRL's ensemble Kalman filter (EnKF) system, and the assimilation is performed on a 3-km nest centered on the hurricane vortex. As part of NOAA's Hurricane Forecast Improvement Program (HFIP), HEDAS will be run in a semi-operational mode for the first time during the 2010 Atlantic hurricane season and will assimilate airborne Doppler radar winds; dropwindsonde and flight-level wind, temperature, pressure, and relative humidity; and Stepped Frequency Microwave Radiometer surface wind observations as they become available. HEDAS has been implemented in an experimental mode for the cases of Hurricanes Bill (2009) and Paloma (2008) to confirm functionality and determine the optimal configuration of the system. These test cases demonstrate the importance of assimilating thermodynamic data in addition to wind observations and the benefit of increasing the quantity and distribution of observations. Applying HEDAS to a larger sample of storm forecasts would provide further insight into the behavior of the model when inner-core aircraft observations are assimilated. The main focus of this talk will be to present a summary of HEDAS performance in the HWRFx model for the inaugural season. The HEDAS analyses and the resulting HWRFx forecasts will be compared with HWRFx analyses and forecasts produced concurrently using the HRD modeling group's vortex initialization, which does not employ data assimilation.
The initial vortex and subsequent forecasts will be evaluated based on thermodynamic structure, wind field, track, and intensity. Related HEDAS research to be presented by HRD's data assimilation group includes evaluations of the geostrophic wind balance and covariance structures for the Bill experiments, and Observing System Simulation Experiments (OSSEs) for the case of Hurricane Paloma using both model-generated and real observations.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems.
The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
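The core idea of representing uncertainty by a PCE rather than an ensemble can be illustrated in one dimension with a probabilists' Hermite basis. This is a toy sketch, not the PCKF itself: the germ, the response function, the truncation order, and the Monte Carlo projection are all illustrative choices.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
xi = rng.normal(size=20000)        # standard-normal germ samples
u = np.exp(0.3 * xi)               # hypothetical nonlinear model response u(xi)

# Project u onto probabilists' Hermite polynomials He_k, which are orthogonal
# under the standard normal with E[He_k^2] = k!, so that c_k = E[u He_k] / k!.
order = 4
coeffs = [
    np.mean(u * hermeval(xi, [0.0] * k + [1.0])) / math.factorial(k)
    for k in range(order + 1)
]
u_pce = hermeval(xi, coeffs)       # truncated PCE reconstruction of u
```

The uncertainty in u is now carried by order + 1 coefficients instead of 20000 realizations; the truncation question the abstract addresses is how many such basis functions to keep as the number of random dimensions grows.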
Simple and Efficient Single Photon Filter for a Rb-based Quantum Memory
NASA Astrophysics Data System (ADS)
Stack, Daniel; Li, Xiao; Quraishi, Qudsia
2015-05-01
Distribution of entangled quantum states over significant distances is important to the development of future quantum technologies such as long-distance cryptography, networks of atomic clocks, distributed quantum computing, etc. Long-lived quantum memories and single photons are building blocks for systems capable of realizing such applications. The ability to store and retrieve quantum information while filtering unwanted light signals is critical to the operation of quantum memories based on neutral-atom ensembles. We report on an efficient frequency filter which uses a glass cell filled with 85Rb vapor to attenuate noise photons by an order of magnitude with little loss to the single photons associated with the operation of our cold 87Rb quantum memory. An Ar buffer gas is used to differentiate between signal and noise photons. Our simple, passive filter requires no optical pumping or external frequency references and provides an additional 18 dB attenuation of our pump laser for every 1 dB loss of the single-photon signal. We observe improved non-classical correlations, and our data show that the addition of the frequency filter increases the non-classical correlations and readout efficiency of our quantum memory by ~35%.
NASA Astrophysics Data System (ADS)
Alvarez-Garreton, C.; Ryu, D.; Western, A. W.; Su, C.-H.; Crow, W. T.; Robertson, D. E.; Leahy, C.
2014-09-01
Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool. Within this context, we assimilate active and passive satellite soil moisture (SSM) retrievals using an ensemble Kalman filter to improve operational flood prediction within a large semi-arid catchment in Australia (>40 000 km2). We assess the importance of accounting for channel routing and the spatial distribution of forcing data by applying SM-DA to a lumped and a semi-distributed scheme of the probability distributed model (PDM). Our scheme also accounts for model error representation and seasonal biases and errors in the satellite data. Before assimilation, the semi-distributed model provided more accurate streamflow prediction (Nash-Sutcliffe efficiency, NS = 0.77) than the lumped model (NS = 0.67) at the catchment outlet. However, this did not ensure good performance at the "ungauged" inner catchments. After SM-DA, the streamflow ensemble prediction at the outlet was improved in both the lumped and the semi-distributed schemes: the root mean square error of the ensemble was reduced by 27 and 31%, respectively; the NS of the ensemble mean increased by 7 and 38%, respectively; the false alarm ratio was reduced by 15 and 25%, respectively; and the ensemble prediction spread was reduced while its reliability was maintained. Our findings imply that even when rainfall is the main driver of flooding in semi-arid catchments, adequately processed SSM can be used to reduce errors in the model soil moisture, which in turn provides better streamflow ensemble prediction. We demonstrate that SM-DA efficacy is enhanced when the spatial distribution in forcing data and routing processes are accounted for. 
At ungauged locations, SM-DA is effective at improving streamflow ensemble prediction; however, the updated prediction is still poor because SM-DA does not address systematic errors in the model.
Impacts of snow cover fraction data assimilation on modeled energy and moisture budgets
NASA Astrophysics Data System (ADS)
Arsenault, Kristi R.; Houser, Paul R.; De Lannoy, Gabriëlle J. M.; Dirmeyer, Paul A.
2013-07-01
Two data assimilation (DA) methods, a simple rule-based direct insertion (DI) approach and a one-dimensional ensemble Kalman filter (EnKF) method, are evaluated by assimilating snow cover fraction observations into the Community Land Model. The ensemble perturbation needed for the EnKF resulted in negative snowpack biases. Therefore, a correction is made to the ensemble bias using an approach that constrains the ensemble forecasts with a single unperturbed deterministic LSM run. This is shown to improve the final snow state analyses. The EnKF method produces slightly better results in higher-elevation locations, whereas results indicate that the DI method has a performance advantage in lower-elevation regions. In addition, the two DA methods are evaluated in terms of their overall impacts on the other land surface state variables (e.g., soil moisture) and fluxes (e.g., latent heat flux). The EnKF method is shown to have less impact overall than the DI method and causes less distortion of the hydrological budget. However, the land surface model adjusts more slowly to the smaller EnKF increments, which leads to smaller but slightly more persistent moisture budget errors than found with the DI updates. The DI method can remove almost instantly much of the modeled snowpack, but this also allows the model system to quickly revert to hydrological balance for non-snowpack conditions.
Mesoscale data assimilation for a local severe rainfall event with the NHM-LETKF system
NASA Astrophysics Data System (ADS)
Kunii, M.
2013-12-01
This study aims to improve forecasts of local severe weather events through data assimilation and ensemble forecasting approaches. Here, the local ensemble transform Kalman filter (LETKF) is implemented with the Japan Meteorological Agency's nonhydrostatic model (NHM). The newly developed NHM-LETKF contains an adaptive inflation scheme and a spatial covariance localization scheme based on physical distance. One-way nested analysis, in which a finer-resolution LETKF is driven by the outputs of an outer model, also becomes feasible. These new capabilities should enhance the potential of the LETKF for convective-scale events. The NHM-LETKF is applied to a local severe rainfall event in Japan in 2012. Comparison of the root mean square errors between the model first guess and the analysis reveals that the system assimilates observations appropriately. Analysis ensemble spreads show a significant increase around the time the torrential rainfall occurred, implying an increase in the uncertainty of the environmental fields. Forecasts initialized with LETKF analyses successfully capture the intense rainfall, suggesting that the system can work effectively for local severe weather. Investigation of probabilistic forecasts by ensemble forecasting indicates that these could become a reliable data source for decision making in the future. A one-way nested data assimilation scheme is also tested. The experimental results demonstrate that assimilation with a finer-resolution model provides an advantage in the quantitative precipitation forecasting of local severe weather conditions.
Mathematical foundations of hybrid data assimilation from a synchronization perspective
NASA Astrophysics Data System (ADS)
Penny, Stephen G.
2017-12-01
The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
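The hybrid combination discussed above is, at its core, a convex blend of a static (climatological) covariance with a flow-dependent sample covariance from a possibly small ensemble. A minimal sketch with hypothetical sizes and blending weight:

```python
import numpy as np

rng = np.random.default_rng(2)

n, n_ens, alpha = 10, 5, 0.5           # small ensemble => noisy, rank-deficient P_ens
B_clim = np.eye(n)                     # static (climatological) covariance (illustrative)
ens = rng.normal(size=(n, n_ens))      # hypothetical background ensemble
A = ens - ens.mean(axis=1, keepdims=True)
P_ens = A @ A.T / (n_ens - 1)          # flow-dependent sample covariance
B_hybrid = alpha * B_clim + (1 - alpha) * P_ens   # hybrid blend
```

Because P_ens has rank at most n_ens - 1, the climatological term restores full rank and positive definiteness, which is one way to read the result that hybrid methods recover synchronization when the dynamic formulation alone is inadequate due to small ensemble sizes.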
Are consistent equal-weight particle filters possible?
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2017-12-01
Particle filters are fully nonlinear data assimilation methods that could potentially change the way we do data assimilation in highly nonlinear, high-dimensional geophysical systems. However, the standard particle filter, in which the observations come in by changing the relative weights of the particles, is degenerate. This means that one particle obtains weight one and all other particles obtain very small weights, effectively reducing the ensemble of particles to that one particle. For over 10 years now, scientists have searched for solutions to this problem. One obvious solution seems to be localisation, in which each part of the state only sees a limited number of observations. However, for a realistic localisation radius based on physical arguments, the number of observations is typically too large, and the filter is still degenerate. Another route taken is trying to find proposal densities that lead to more similar particle weights. There is a simple proof, however, that there is an optimum, the so-called optimal proposal density, and even that optimum leads to a degenerate filter. On the other hand, it is easy to come up with a counterexample of a particle filter that is not degenerate in high-dimensional systems. Furthermore, several particle filters have been developed recently that claim to have equal or equivalent weights. In this presentation I will show how to construct a particle filter that is never degenerate in high-dimensional systems, and how that is still consistent with the proof that one cannot do better than the optimal proposal density. Furthermore, it will be shown how equal- and equivalent-weights particle filters fit within this framework. This insight then leads to new ways to generate particle filters that are non-degenerate, opening up the field of nonlinear filtering in high-dimensional systems.
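The degeneracy described above can be demonstrated numerically: as the number of independent observations grows, the effective sample size of the weighted ensemble collapses toward one. The following sketch uses hypothetical Gaussian misfits purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(log_w):
    """N_eff = 1 / sum(w_i^2) for self-normalized importance weights."""
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

n_part = 1000
ess = {}
for n_obs in (1, 10, 100):
    # Independent Gaussian observations multiply each particle's weight,
    # so log-weights accumulate one misfit term per observation.
    log_w = np.zeros(n_part)
    for _ in range(n_obs):
        innov = rng.normal(size=n_part)     # particle-minus-observation misfit
        log_w += -0.5 * innov ** 2          # Gaussian log-likelihood increment
    ess[n_obs] = effective_sample_size(log_w)
```

The spread of the log-weights grows like the square root of the number of observations, so the largest weight rapidly dominates, which is why a physically motivated localisation radius that still admits many observations does not cure the problem.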
A Community Terrain-Following Ocean Modeling System (ROMS)
2015-09-30
Funded NOPP project titled "Toward the Development of a Coupled COAMPS-ROMS Ensemble Kalman Filter and Adjoint", with a focus on the Indian Ocean. [Recoverable figure captions: sea surface temperature and surface salinity daily averages for 31-Jan-2014; Figure 3 shows the sea surface height averaged solution for 31-Jan-2014; sea surface temperature (upper panel; Celsius) and surface salinity (lower panel) for 31-Jan-2014, with the refined solution for the Hudson Canyon grid overlaid.]
Enhanced Assimilation of InSAR Displacement and Well Data for Groundwater Monitoring
NASA Astrophysics Data System (ADS)
Abdullin, A.; Jonsson, S.
2016-12-01
Ground deformation related to aquifer exploitation can damage buildings and infrastructure, leading to major economic losses and sometimes even loss of human lives. Understanding reservoir behavior helps in assessing possible future ground movement and the water-depletion hazard of a region under study. We have developed an InSAR-based data assimilation framework for groundwater reservoirs that efficiently incorporates InSAR data for improved reservoir management and forecasts. InSAR displacement data are integrated with the groundwater modeling software MODFLOW using ensemble-based assimilation approaches. We have examined several ensemble methods for updating model parameters, such as hydraulic conductivity, and model variables, such as pressure head, while simultaneously providing an estimate of the uncertainty. A realistic three-dimensional aquifer model was built to demonstrate the capability of the ensemble methods incorporating InSAR-derived displacement measurements. We find from these numerical tests that including both ground deformation and well water level data as observations improves the RMSE of the hydraulic conductivity estimate by up to 20% compared to using only one type of observation. The RMSE of this estimate after the final time step is similar for the ensemble Kalman filter (EnKF), the ensemble smoother (ES), and ES with multiple data assimilation (ES-MDA). The results suggest that the high spatial and temporal resolution subsidence observations from InSAR are very helpful for accurately quantifying hydraulic parameters. We have tested the framework on several different examples and have found good performance in improving aquifer property estimation, which should prove useful for groundwater management. Our ongoing work focuses on assimilating real InSAR-derived time series and hydraulic head data for calibrating and predicting aquifer properties of basin-wide groundwater systems.
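The ES-MDA idea (assimilating the same data several times with inflated observation error, with the inflation factors' inverses summing to one) can be sketched on a hypothetical one-parameter linear problem. The forward model, prior, and all numbers below are illustrative assumptions, not the aquifer setup of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def esmda(ens, obs, obs_err, forward, n_assim=4):
    """Ensemble Smoother with Multiple Data Assimilation (ES-MDA) sketch.

    The same data are assimilated n_assim times with observation-error
    variance inflated by alpha = n_assim, so that sum(1/alpha) = 1.
    """
    alpha = float(n_assim)
    for _ in range(n_assim):
        sim = forward(ens)                               # predicted data
        A = ens - ens.mean(axis=1, keepdims=True)
        D = sim - sim.mean(axis=1, keepdims=True)
        n_ens = ens.shape[1]
        C_dd = D @ D.T / (n_ens - 1) + alpha * obs_err**2 * np.eye(len(obs))
        C_md = A @ D.T / (n_ens - 1)
        pert = obs[:, None] + np.sqrt(alpha) * obs_err * rng.normal(size=sim.shape)
        ens = ens + C_md @ np.linalg.solve(C_dd, pert - sim)
    return ens

# Hypothetical example: estimate a conductivity-like parameter k from a
# linear "drawdown" observation d = 2*k, observed as d = 3.0 +/- 0.1.
forward = lambda m: 2.0 * m
prior = rng.normal(1.0, 0.5, size=(1, 200))
post = esmda(prior, np.array([3.0]), 0.1, forward)
```

For a linear forward model this multiple-assimilation scheme is consistent with a single Kalman update, which is why the study finds similar final-step RMSE for EnKF, ES, and ES-MDA; the iterations matter mainly for nonlinear forward responses.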
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, they have many drawbacks, such as a slow training rate, a propensity to become trapped in local minima, and poor global search ability. In order to improve the overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with a double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize the antecedent and consequent parameters of the constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensembles of T-S FNNs with RCDPSO_DM optimization to further improve the stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The results demonstrate that intelligent ensembles of T-S FNNs based on RCDPSO_DM achieve superior performance, in terms of stability, efficiency, precision and generalizability, over a PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. It therefore makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Romanova, Vanya; Hense, Andreas; Wahl, Sabrina; Brune, Sebastian; Baehr, Johanna
2016-04-01
The decadal variability of surface net freshwater fluxes, and its predictability, are compared in a set of retrospective predictions that all use the same model setup and differ only in the implemented ocean initialisation and ensemble generation methods. The basic aim is to deduce the differences between the initialization/ensemble generation methods in view of the uncertainty of the verifying observational data sets. The analysis gives an approximation of the uncertainties of the net freshwater fluxes, which up to now appear to be among the most uncertain products in observational data and model outputs. All ensemble generation methods are implemented in the MPI-ESM earth system model in the framework of the ongoing MiKlip project (www.fona-miklip.de). Hindcast experiments are initialised annually from 2000 to 2004, and from each start year 10 ensemble members are integrated for 5 years each. Four different ensemble generation methods are compared: (i) a method based on the Anomaly Transform (Romanova and Hense, 2015), in which the initial oceanic perturbations represent orthogonal and balanced anomaly structures, in space and time and between the variables, taken from a control run; (ii) one-day-lagged ocean states from the MPI-ESM-LR baseline system; (iii) one-day-lagged ocean and atmospheric states with preceding full-field nudging to re-analysis in both the atmospheric and oceanic components of the system (the MPI-ESM-LR baseline system); and (iv) an Ensemble Kalman Filter (EnKF) implemented in the oceanic part of MPI-ESM (Brune et al. 2015), assimilating monthly subsurface oceanic temperature and salinity (EN3) using the Parallel Data Assimilation Framework (PDAF). The hindcasts are evaluated probabilistically using freshwater flux data from four different reanalysis data sets: MERRA, NCEP-R1, the GFDL ocean reanalysis and GECCO2. The assessments show no clear differences in the evaluation scores on regional scales.
However, on the global scale the physically motivated methods (i) and (iv) provide probabilistic hindcasts with a consistently higher reliability than the lagged initialization methods (ii)/(iii) despite the large uncertainties in the verifying observations and in the simulations.
A Pulsar Time Scale Based on Parkes Observations in 1995-2010
NASA Astrophysics Data System (ADS)
Rodin, A. E.; Fedorova, V. A.
2018-06-01
Timing of highly stable millisecond pulsars provides the possibility of independently verifying terrestrial time scales on intervals longer than a year. An ensemble pulsar time scale is constructed based on pulsar timing data obtained on the 64-m Parkes telescope (Australia) in 1995-2010. Optimal Wiener filters were applied to enhance the accuracy of the ensemble time scale. The run of the time-scale difference PTens − TT(BIPM2011) does not exceed 0.8 ± 0.4 μs over the entire studied time interval. The fractional instability of the difference PTens − TT(BIPM2011) over 15 years is σ_z = (0.6 ± 1.6) × 10⁻¹⁵, which corresponds to an upper limit for the energy density of the gravitational-wave background Ω_g h² ∼ 10⁻¹⁰ and variations in the gravitational potential ∼10⁻¹⁵ at the frequency 2 × 10⁻⁹ Hz.
Optical vector network analysis of ultranarrow transitions in 166Er3+ : 7LiYF4 crystal.
Kukharchyk, N; Sholokhov, D; Morozov, O; Korableva, S L; Cole, J H; Kalachev, A A; Bushev, P A
2018-02-15
We present optical vector network analysis (OVNA) of an isotopically purified ¹⁶⁶Er³⁺:⁷LiYF₄ crystal. The OVNA method is based on generation and detection of a modulated optical sideband by using a radio-frequency vector network analyzer. This technique is widely used in the field of microwave photonics for the characterization of optical responses of optical devices such as filters and high-Q resonators. However, dense solid-state atomic ensembles induce a large phase shift on one of the optical sidebands, which results in the appearance of extra features in the measured transmission response. We present a simple theoretical model that accurately describes the observed spectra and helps to reconstruct the absorption profile of a solid-state atomic ensemble, as well as the corresponding change of the refractive index in the vicinity of atomic resonances.
NASA Astrophysics Data System (ADS)
Hu, Xiao-Ming; Zhang, Fuqing; Nielsen-Gammon, John W.
2010-04-01
This study explores the treatment of model error and uncertainties through simultaneous state and parameter estimation (SSPE) with an ensemble Kalman filter (EnKF) in the simulation of a 2006 air pollution event over the greater Houston area during the Second Texas Air Quality Study (TexAQS-II). Two parameters in the atmospheric boundary layer parameterization associated with large model sensitivities are combined with standard prognostic variables in an augmented state vector to be continuously updated through assimilation of wind profiler observations. It is found that forecasts of the atmosphere with EnKF/SSPE are markedly improved over experiments with no state and/or parameter estimation. More specifically, the EnKF/SSPE is shown to help alleviate a near-surface cold bias and to alter the momentum mixing in the boundary layer to produce more realistic wind profiles.
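The simultaneous state and parameter estimation described above rests on state augmentation: the uncertain parameters are appended to the prognostic state vector, and the ordinary EnKF update then adjusts them through their sampled covariance with the observations. A minimal perturbed-observation sketch with toy dimensions (not the study's WRF/boundary-layer configuration):

```python
import numpy as np

def augmented_update(states, params, Y, d, R, rng):
    """One perturbed-observation EnKF analysis on the augmented vector
    [state; parameters]: observations adjust the parameters through their
    sampled covariance with the predicted observations Y."""
    n_x, n_ens = states.shape
    Z = np.vstack([states, params])                 # state augmentation
    Zp = Z - Z.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Czy = Zp @ Yp.T / (n_ens - 1)                   # augmented cross-covariance
    Cyy = Yp @ Yp.T / (n_ens - 1) + R               # innovation covariance
    K = Czy @ np.linalg.inv(Cyy)
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    Za = Z + K @ (D - Y)
    return Za[:n_x], Za[n_x:]                       # updated states, parameters
```

Because the parameters have no dynamics of their own, they are carried forward between analysis times and corrected only through this covariance with observed quantities (here, the wind-profiler winds).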
Fast de novo discovery of low-energy protein loop conformations.
Wong, Samuel W K; Liu, Jun S; Kou, S C
2017-08-01
In the prediction of protein structure from amino acid sequence, loops are challenging regions for computational methods. Since loops are often located on the protein surface, they can have significant roles in determining protein functions and binding properties. Loop prediction without the aid of a structural template requires extensive conformational sampling and energy minimization, which are computationally difficult. In this article we present a new de novo loop sampling method, the Parallely filtered Energy Targeted All-atom Loop Sampler (PETALS), to rapidly locate low-energy conformations. PETALS explores both backbone and side-chain positions of the loop region simultaneously according to the energy function selected by the user, and constructs a nonredundant ensemble of low-energy loop conformations using filtering criteria. The method is illustrated with the DFIRE potential and the DiSGro energy function for loops, and shown to be highly effective at discovering conformations with near-native (or better) energy. Using the same energy function as the DiSGro algorithm, PETALS samples conformations with both lower RMSDs and lower energies. PETALS is also useful for assessing the accuracy of different energy functions. PETALS runs rapidly, requiring an average time cost of 10 minutes for a length-12 loop on a single 3.2 GHz processor core, comparable to the fastest existing de novo methods for generating an ensemble of conformations. Proteins 2017; 85:1402-1412. © 2017 Wiley Periodicals, Inc.
Major, Kevin J; Poutous, Menelaos K; Ewing, Kenneth J; Dunnill, Kevin F; Sanghera, Jasbinder S; Aggarwal, Ishwar D
2015-09-01
Optical filter-based chemical sensing techniques provide a new avenue to develop low-cost infrared sensors. These methods utilize multiple infrared optical filters to selectively measure different response functions for various chemicals, dependent on each chemical's infrared absorption. Rather than identifying distinct spectral features, which can then be used to determine the identity of a target chemical, optical filter-based approaches rely on measuring differences in the ensemble response between a given filter set and specific chemicals of interest. Therefore, the results of such methods are highly dependent on the original optical filter choice, which dictates the selectivity, sensitivity, and stability of any filter-based sensing method. Recently, a method has been developed that utilizes unique detection vector operations defined by optical multifilter responses to discriminate between volatile chemical vapors. This method, comparative-discrimination spectral detection (CDSD), employs broadband optical filters to selectively discriminate between chemicals with highly overlapping infrared absorption spectra. CDSD has been shown to correctly distinguish between similar chemicals in the carbon-hydrogen stretch region of the infrared absorption spectrum, from 2800 to 3100 cm⁻¹. A key challenge to this approach is determining which optical filter sets should be utilized to achieve the greatest discrimination between target chemicals. Previous studies used empirical approaches to select the optical filter set; however, this is insufficient to determine the optimum selectivity between strongly overlapping chemical spectra. Here we present a numerical approach to systematically study the effects of filter positioning and bandwidth on a number of three-chemical systems. We describe how both the filter properties and the chemicals in each set affect the CDSD results and subsequent discrimination.
These results demonstrate the importance of choosing the proper filter set and chemicals for comparative discrimination, in order to identify the target chemical of interest in the presence of closely matched chemical interferents. These findings are an integral step in the development of experimental prototype sensors, which will utilize CDSD.
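The core CDSD idea, reducing each absorption spectrum to a small vector of integrated band-pass responses and discriminating by vector direction, can be sketched as follows. All spectra and filter shapes here are synthetic illustrations, not the paper's measured data or its actual detection-vector operations:

```python
import numpy as np

# Synthetic, strongly overlapping absorption spectra in the C-H stretch region.
wn = np.linspace(2800.0, 3100.0, 600)   # wavenumber axis, cm^-1
dw = wn[1] - wn[0]

def gaussian(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

spectra = {
    "A": gaussian(wn, 2930.0, 40.0) + 0.5 * gaussian(wn, 3050.0, 30.0),
    "B": gaussian(wn, 2950.0, 45.0) + 0.4 * gaussian(wn, 3040.0, 30.0),
    "C": gaussian(wn, 2910.0, 35.0) + 0.6 * gaussian(wn, 3060.0, 25.0),
}

# Three broadband optical filters (transmission curves).
filters = [gaussian(wn, c, 60.0) for c in (2880.0, 2960.0, 3040.0)]

def response_vector(spectrum):
    """Ensemble response: one integrated band-pass reading per filter."""
    v = np.array([np.sum(f * spectrum) * dw for f in filters])
    return v / np.linalg.norm(v)        # normalize to a detection direction

def discriminate(unknown):
    """Assign the unknown to the library chemical whose response vector
    points in the most similar direction (largest cosine similarity)."""
    u = response_vector(unknown)
    return max(spectra, key=lambda n: float(u @ response_vector(spectra[n])))
```

Sweeping the filter centers and widths in such a sketch, and watching how the angles between the chemicals' response vectors change, is exactly the kind of systematic filter-placement study the abstract describes.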
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
Adaptive noise canceling of electrocardiogram artifacts in single channel electroencephalogram.
Cho, Sung Pil; Song, Mi Hye; Park, Young Cheol; Choi, Ho Seon; Lee, Kyoung Joung
2007-01-01
A new method for estimating and eliminating electrocardiogram (ECG) artifacts from single-channel scalp electroencephalogram (EEG) is proposed. The proposed method consists of emphasis of the QRS complex from the EEG using a least squares acceleration (LSA) filter, generation of a pulse synchronized with the R-peak, and ECG artifact estimation and elimination using an adaptive filter. The performance of the proposed method was evaluated using simulated and real EEG recordings; the ECG artifacts were successfully estimated and eliminated, in comparison with conventional multi-channel techniques, namely independent component analysis (ICA) and the ensemble average (EA) method. From this we conclude that the proposed method is useful for detecting and eliminating ECG artifacts from single-channel EEG and simple to use in ambulatory/portable EEG monitoring systems.
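The adaptive-cancellation stage of such a scheme is commonly implemented with an LMS filter: an artifact-correlated reference is adaptively filtered to predict the artifact, and the prediction error is the cleaned EEG. A sketch with synthetic signals (the paper's LSA filter and R-peak pulse generation are not reproduced here):

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Adaptive noise canceller (LMS). `primary` is EEG contaminated by the
    ECG artifact; `reference` is an artifact-correlated input (e.g. a pulse
    train synchronized with the R-peaks). The filter learns to predict the
    artifact from the reference; the prediction error is the cleaned EEG."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(len(primary)):
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]  # newest sample first
        if len(x) < n_taps:
            x = np.concatenate([x, np.zeros(n_taps - len(x))])
        y = w @ x                       # current artifact estimate
        e = primary[n] - y              # cleaned EEG sample
        w += 2.0 * mu * e * x           # LMS weight update
        cleaned[n] = e
    return cleaned
```

Because the EEG itself is uncorrelated with the reference, minimizing the output power removes only the artifact component, which is the standard adaptive-noise-cancelling argument.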
A stacking ensemble learning framework for annual river ice breakup dates
NASA Astrophysics Data System (ADS)
Sun, Wei; Trevor, Bernard
2018-06-01
River ice breakup dates (BDs) are not merely a proxy indicator of climate variability and change, but a direct concern in the management of local ice-caused flooding. A framework of stacking ensemble learning for annual river ice BDs was developed, with two levels of components: member and combining models. The member models describe the relations between BDs and their affecting indicators; the combining models link the BDs predicted by each member model with the observed BDs. Specifically, Bayesian regularization back-propagation artificial neural networks (BRANN) and adaptive neuro-fuzzy inference systems (ANFIS) were employed as both member and combining models. The candidate combining models also included the simple average method (SAM). The input variables for the member models were selected by a hybrid filter and wrapper method. The performances of these models were examined using leave-one-out cross validation. The Athabasca River at Fort McMurray, the largest unregulated river in Alberta, Canada, with ice jams frequently occurring in the vicinity of Fort McMurray, was selected as the study area. The breakup dates and candidate affecting indicators in 1980-2015 were collected. The results showed that the BRANN member models generally outperformed the ANFIS member models, achieving better performance with simpler structures. The difference between the R and MI rankings of inputs in the optimal member models may imply that the linear-correlation-based filter method is feasible for generating a range of candidate inputs for further screening through other wrapper or embedded IVS methods. The SAM and BRANN combining models generally outperformed all member models. The optimal SAM combining model combined two BRANN member models and improved upon them in terms of average squared errors by 14.6% and 18.1%, respectively.
In this study, for the first time, stacking ensemble learning was applied to the forecasting of river ice breakup dates; the approach appears promising for other river ice forecasting problems as well.
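The two-level structure above can be skeletonized as follows; ordinary least squares stands in for the BRANN/ANFIS member models (which the paper trains on different indicator sets), and the SAM combiner is a simple average of member predictions:

```python
import numpy as np

def fit_member(X, y):
    """Member model: ordinary least squares, an illustrative stand-in for
    the BRANN / ANFIS member models used in the paper."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xn: np.column_stack([Xn, np.ones(len(Xn))]) @ coef

def sam_combine(members, X):
    """SAM combining model: a simple average of the member predictions."""
    return np.column_stack([m(X) for m in members]).mean(axis=1)

# Toy data: breakup "date" driven by three synthetic indicators.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + 0.3 * rng.normal(size=60)

# Level 1: train each member on a bootstrap resample of the record.
members = [fit_member(X[i], y[i])
           for i in (rng.integers(0, 60, size=60) for _ in range(5))]

# Level 2: combine member predictions with the simple average (SAM).
bd_pred = sam_combine(members, X)
```

A trained combining model (BRANN in the paper) would replace the plain average by regressing the observed BDs on the stacked member predictions; by the triangle inequality, the simple average can never be worse than the worst member.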
NASA Astrophysics Data System (ADS)
Thiboult, A.; Anctil, F.
2015-10-01
Forecast reliability and accuracy are prerequisites for successful hydrological applications. This aim may be attained by using data assimilation techniques such as the popular ensemble Kalman filter (EnKF). Despite its recognized capacity to enhance forecasting by creating a new set of initial conditions, implementation tests have mostly been carried out with a single model and few catchments, leading to case-specific conclusions. This paper performs extensive testing to assess ensemble bias and reliability with 20 conceptual lumped models and 38 catchments in the Province of Québec, with perfect meteorological forecast forcing. The study confirms that the EnKF is a powerful tool for short-range forecasting, but also that it requires a more subtle setting than is frequently recommended. The success of the updating procedure depends to a great extent on the specification of the hyper-parameters. In the implementation of the EnKF, the identification of the hyper-parameters is very unintuitive if the model error is not explicitly accounted for, and best estimates of forcing and observation error lead to overconfident forecasts. It is shown that performance is also related to the choice of updated state variables, and that not all state variables should systematically be updated. Additionally, the improvement over the open-loop scheme depends on the watershed and hydrological model structure, as some models exhibit poor compatibility with EnKF updating. Thus, it is not possible to prescribe in detail a single ideal implementation; conclusions drawn from a unique event, catchment, or model are likely to be misleading, since transferring hyper-parameters from one case to another may be hazardous. Finally, achieving reliability and low bias jointly is a daunting challenge, as the optimization of one score is done at the cost of the other.
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.
2009-12-01
This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. 
The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.
Ensemble Smoother implemented in parallel for groundwater problems applications
NASA Astrophysics Data System (ADS)
Leyva, E.; Herrera, G. S.; de la Cruz, L. M.
2013-05-01
Data assimilation is a process that links forecasting models and measurements, drawing on the benefits of both sources. The Ensemble Kalman Filter (EnKF) is a sequential data-assimilation method that was designed to address two of the main problems related to the use of the Extended Kalman Filter (EKF) with nonlinear models in large state spaces, i.e., the need for a closure approximation and the massive computational requirements associated with the storage and subsequent integration of the error covariance matrix. The EnKF has gained popularity because of its simple conceptual formulation and relative ease of implementation. It has been used successfully in various applications in meteorology and oceanography, and more recently in petroleum engineering and hydrogeology. The Ensemble Smoother (ES) is a method similar to the EnKF; it was proposed by Van Leeuwen and Evensen (1996). Herrera (1998) proposed a version of the ES, which we call the Ensemble Smoother of Herrera (ESH) to distinguish it from the former. It was introduced for space-time optimization of groundwater monitoring networks. In recent years, this method has been used for data assimilation and parameter estimation in groundwater flow and transport models. The ES method uses Monte Carlo simulation, which consists of generating repeated realizations of the random variable considered, using a flow and transport model. However, a large number of model runs is often required for the moments of the variable to converge. Therefore, depending on the complexity of the problem, a serial computer may require many hours of continuous use to apply the ES. For this reason, the process must be parallelized in order to complete in a reasonable time. In this work we present the results of a parallelization strategy to reduce the execution time for a large number of realizations. The software GWQMonitor by Herrera (1998) implements all the algorithms required for the ESH in Fortran 90.
We developed a Python script using mpi4py to execute GWQMonitor in parallel through the MPI library. Our approach is to calculate the initial inputs for each realization and run groups of these realizations on separate processors. The only modification to GWQMonitor was the final calculation of the covariance matrix. This strategy was applied to the study of a simplified aquifer in a rectangular domain with a single layer. We show the speedup and efficiency for different numbers of processors.
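The scatter/gather pattern described, computing inputs for each realization and running groups of realizations on separate processors, can be sketched without MPI using Python's standard multiprocessing module (the authors use mpi4py; `run_realization` below is a hypothetical stand-in for a GWQMonitor model run):

```python
import numpy as np
from multiprocessing import Pool

def run_realization(seed):
    """Hypothetical stand-in for one GWQMonitor flow-and-transport run:
    returns simulated heads on 50 model nodes from an independently
    seeded random field."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=1.0, scale=0.2, size=50)

def run_ensemble(n_real, n_procs=2):
    """Scatter the realizations over worker processes, gather the results,
    and form the ensemble mean and (unbiased) variance."""
    with Pool(n_procs) as pool:
        fields = pool.map(run_realization, range(n_real))
    ens = np.array(fields)              # shape (n_real, n_nodes)
    return ens.mean(axis=0), ens.var(axis=0, ddof=1)
```

With mpi4py the same pattern becomes a scatter of seed lists from rank 0, independent model runs on each rank, and a gather of the simulated fields back to rank 0 for the final covariance calculation.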
Applying Ensemble Kalman Filter to Regional Ocean Circulation Model in the East Asian Marginal Sea
NASA Astrophysics Data System (ADS)
Pak, Gyun-Do; Kim, Young Ho; Chang, Kyung-Il
2010-05-01
We successfully apply the ensemble Kalman filter (EnKF) data assimilation scheme to the East Sea Regional Ocean Model (ESROM). The ESROM solves the three-dimensional ocean primitive equations with the hydrostatic and Boussinesq approximations. The domain of the ESROM fully covers the East Sea with grid intervals of approximately 0.1°. The ESROM has one inflow port, the Korea Strait, and two outflow ports, the Tsugaru and Soya straits. High-resolution bathymetry of 1/60° (Choi et al., 2002) is adopted for the model topography. The ESROM is initialized using hydrographic data from the World Ocean Atlas (WOA), and forced by monthly mean surface and open boundary conditions derived from European Centre for Medium-Range Weather Forecasts data, the WOA and other sources. The EnKF system is composed of 16 ensemble members, and thousands of observations are assimilated at every assimilation step; its parallel version significantly reduces the required memory and computational time, by more than 3-fold compared with the serial version. To prevent the collapse of the ensemble due to rank deficiency, we employ various schemes such as localization and inflation of the background error covariance and perturbation of the observations. Sea surface temperature from the Advanced Very High Resolution Radiometer and in-situ temperature profiles from various sources, including Argo floats, have been assimilated into the EnKF system. For the cyclonic circulation in the northern East Sea and the paths of the East Korean Warm Current and the Nearshore Branch, the EnKF system reproduces the mean surface circulation more realistically than the case without data assimilation. Simulated area-averaged vertical temperature profiles also agree well with the Generalized Digital Environmental Model data, which indicates that the EnKF system corrects the warming of subsurface temperature and the erosion of the permanent thermocline that are usually observed in numerical models without data assimilation.
We also quantitatively validate the EnKF system by comparing its results with observed temperatures at 100 m for two years in the southwestern East Sea. We find that spatial and temporal correlations are higher and root-mean-square errors are lower in the EnKF system as compared with those systems without data assimilation.
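With only 16 members, sampled covariances contain spurious long-range correlations, which is why localization is needed. A common choice is the Gaspari-Cohn fifth-order taper, Schur-multiplied with the sample covariance; the abstract does not say which localization function the authors used, so the following is a generic sketch:

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order compactly supported taper.
    r is distance divided by the localization half-width; support is r <= 2."""
    r = np.abs(np.asarray(r, dtype=float))
    taper = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r <= 2.0)
    x = r[inner]
    taper[inner] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5.0/3.0)*x**2 + 1.0
    x = r[outer]
    taper[outer] = ((x**5)/12.0 - 0.5*x**4 + 0.625*x**3 + (5.0/3.0)*x**2
                    - 5.0*x + 4.0 - (2.0/3.0)/x)
    return taper

# Schur (element-wise) product with the sample covariance suppresses the
# spurious long-range correlations a 16-member ensemble cannot resolve.
n_grid, n_ens, half_width = 40, 16, 5.0
rng = np.random.default_rng(1)
P = np.cov(rng.normal(size=(n_grid, n_ens)))
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
P_loc = gaspari_cohn(dist / half_width) * P
```

Multiplicative inflation, the other safeguard mentioned, simply scales the ensemble anomalies by a factor slightly larger than one before the analysis step.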
SVD analysis of Aura TES spectral residuals
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.
2005-01-01
Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.
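The noise-filtering use of SVD mentioned above amounts to truncating the residual matrix to its leading singular components. A minimal sketch on synthetic residuals (the orbital-variation pattern here is invented for illustration, not TES data):

```python
import numpy as np

def svd_filter(residuals, n_keep):
    """Noise filtering by truncated SVD: keep only the n_keep leading
    singular components of the residual ensemble.
    residuals: (n_spectra, n_channels), one spectral residual per row."""
    U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
    return (U[:, :n_keep] * s[:n_keep]) @ Vt[:n_keep]

# Toy ensemble: a common systematic pattern with varying amplitude
# (e.g. an orbital variation of instrument response) plus random noise.
rng = np.random.default_rng(4)
pattern = np.sin(np.linspace(0.0, 6.0, 200))
signal = np.outer(rng.normal(1.0, 0.3, size=40), pattern)
resid = signal + 0.05 * rng.normal(size=(40, 200))
filtered = svd_filter(resid, 1)    # rank-1 truncation recovers the pattern
```

As a diagnostic, the leading right singular vectors (rows of Vt) show the dominant shared residual shapes, while the left singular vectors show how strongly each spectrum, and hence each point along the orbit, expresses them.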
PIV Data Validation Software Package
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
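Spurious-vector removal of the kind listed under capability (1) is commonly done with a normalized median test (Westerweel and Scarano, 2005); the abstract does not say which criterion the package uses, so the following is an illustrative sketch on a single velocity component:

```python
import numpy as np

def normalized_median_test(u, threshold=2.0, eps=0.1):
    """Flag spurious PIV vectors by comparing each vector with the median of
    its 3x3 neighbourhood, normalized by the median residual of that
    neighbourhood (Westerweel & Scarano 2005). Border vectors are skipped.
    u: 2-D array of one velocity component. Returns a boolean outlier mask."""
    ny, nx = u.shape
    outlier = np.zeros((ny, nx), dtype=bool)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = np.delete(u[j-1:j+2, i-1:i+2].ravel(), 4)  # 8 neighbours
            med = np.median(nb)
            resid = np.median(np.abs(nb - med))
            outlier[j, i] = abs(u[j, i] - med) / (resid + eps) > threshold
    return outlier
```

Flagged vectors would then be replaced by interpolation from their neighbours, corresponding to capability (2) of the package.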
2012-01-01
Laboratoire de Physique des Océans, UMR6523 (CNRS, UBO, IFREMER, IRD), Brest, France; C. N. Barron and E. Joseph Metzger, Naval Research Laboratory, Stennis Space Center. … the AF447 flight from Rio to Paris. The airplane disappeared on June 1st 2009 near 3° N and 31° W, and a large international effort was organized to … to Runge-Kutta trajectory integration. The low-pass filter was accomplished by convolving the original … velocity fields at each time step and …
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics.
Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
NASA Astrophysics Data System (ADS)
Briseño, Jessica; Herrera, Graciela S.
2010-05-01
Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To obtain estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data but, in general, it is laborious to manually achieve a good match between them. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate hydraulic conductivity, together with hydraulic head and contaminant concentration, and to apply it to a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: sequential Gaussian simulation (SGSim) and Latin hypercube sampling (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (C) realizations for each of the conductivity realizations. With these realizations, the means of ln K, h and C are obtained (for h and C, the mean is calculated in space and time), together with the space-time cross-covariance matrix of h, ln K and C. The covariance matrix is obtained by averaging products of the ln K, h and C realizations over the estimation points and times, and over the positions and times with data for the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; analogously, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter.
The realization mean of each variable is used as the prior space-time estimate for the Kalman filter, and the space-time cross-covariance matrix of h-ln K-C as the prior estimate-error covariance matrix. The synthetic example has a modeling area of 700 × 700 m; a triangular-mesh model with 702 nodes and 1306 elements is used. A pumping well located in the central part of the study area is considered. For the contaminant transport model, a contaminant source area is present in the western part of the study area. The estimation points for hydraulic conductivity, hydraulic head and contaminant concentration are located on a submesh of the model mesh (the same locations for h, ln K and C), composed of 48 nodes spread throughout the study area with an approximate separation of 90 m between nodes. The results were analyzed through the mean error, the root mean square error, the initial and final estimation maps of h, ln K and C at each time, and the initial and final variance maps of h, ln K and C. To obtain model convergence, 3000 realizations of ln K were required using SGSim, but only 1000 with LHC. The results show that for both alternatives, the Kalman filter estimates of h, ln K and C obtained using h and C data have errors whose magnitudes decrease as data are added. Herrera, G. S. (1998), Cost Effective Groundwater Quality Sampling Network Design, Ph.D. thesis, University of Vermont, Burlington, Vermont, 172 pp. Herrera, G., Guarnaccia, J., Pinder, G., and Simuta, R. (2001), "Diseño de redes de monitoreo de la calidad del agua subterránea eficientes" [Design of efficient groundwater quality monitoring networks], Proceedings of the 2001 International Symposium on Environmental Hydraulics, Arizona, U.S.A. Herrera, G. S., and Pinder, G. F. (2005), Space-time optimization of groundwater quality sampling networks, Water Resour. Res., 41(12), W12407, doi:10.1029/2004WR003626.
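The filter update in step 3 can be illustrated with a minimal stochastic (perturbed-observation) ensemble Kalman filter acting on an augmented state that stacks ln K, h and C. The dimensions, observation operator and error levels below are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, H, y, obs_std):
    """Stochastic (perturbed-observation) EnKF analysis step.
    ensemble : (n_state, n_ens) prior realizations, e.g. stacked ln K, h, C
    H        : (n_obs, n_state) linear observation operator
    y        : (n_obs,) observed h and C values
    obs_std  : observation-error standard deviation
    """
    n_state, n_ens = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_ens - 1)                      # sample covariance
    R = obs_std**2 * np.eye(len(y))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # one perturbed copy of the observations per ensemble member
    Y = y[:, None] + obs_std * rng.standard_normal((len(y), n_ens))
    return ensemble + K @ (Y - H @ ensemble)

# toy augmented state: 6 entries (2 each for ln K, h, C); observe entries 2 and 4
ens = rng.standard_normal((6, 200)) + np.arange(6)[:, None]
H = np.zeros((2, 6))
H[0, 2] = 1.0
H[1, 4] = 1.0
post = enkf_update(ens, H, np.array([2.5, 4.5]), obs_std=0.1)
print(post.mean(axis=1).round(2))
```

Because the ln K rows sit in the same covariance matrix as the observed h and C rows, accurate head and concentration data also tighten the conductivity estimates, which is the point of the augmented formulation.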
Correia, Carlos M; Teixeira, Joel
2014-12-01
Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement-noise, and aliasing propagation coefficients as a function of the system order, and compare them to classical estimates using least-squares filters. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filter, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.
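The core frequency-domain idea can be sketched with a minimal 1-D Wiener deconvolution. This omits the anti-aliasing term that is the paper's contribution; the transfer function, power spectra and noise level below are illustrative assumptions, not the AO forward model.

```python
import numpy as np

def wiener_reconstruct(meas, H, signal_psd, noise_psd):
    """Frequency-domain Wiener filter: W = conj(H)*S / (|H|^2*S + N)."""
    W = np.conj(H) * signal_psd / (np.abs(H)**2 * signal_psd + noise_psd)
    return np.fft.ifft(W * np.fft.fft(meas)).real

rng = np.random.default_rng(1)
n = 256
freq = np.fft.fftfreq(n)
# low-frequency "phase screen" synthesized from filtered white noise
phase = np.fft.ifft(np.fft.fft(rng.standard_normal(n)) / (1 + (freq / 0.02)**2)).real
H = np.exp(-(freq / 0.2)**2)               # toy measurement transfer function
signal_psd = np.abs(np.fft.fft(phase))**2  # idealized: signal PSD assumed known
sigma = 0.01
meas = np.fft.ifft(H * np.fft.fft(phase)).real + sigma * rng.standard_normal(n)
# white-noise PSD under numpy's FFT convention is n * sigma^2 per frequency bin
est = wiener_reconstruct(meas, H, signal_psd, np.full(n, n * sigma**2))
```

The filter passes frequencies where the signal dominates and suppresses those where noise dominates, so the reconstruction error is lower than that of the raw measurement; the paper's AA variant additionally models the high-order terms aliased into the measured band.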
Ensemble candidate classification for the LOTAAS pulsar survey
NASA Astrophysics Data System (ADS)
Tan, C. M.; Lyon, R. J.; Stappers, B. W.; Cooper, S.; Hessels, J. W. T.; Kondratiev, V. I.; Michilli, D.; Sanidas, S.
2018-03-01
One of the biggest challenges arising from modern large-scale pulsar surveys is the number of candidates generated. Here, we implemented several improvements to the machine learning (ML) classifier previously used by the LOFAR Tied-Array All-Sky Survey (LOTAAS) to look for new pulsars by filtering the candidates obtained during periodicity searches. To assist the ML algorithm, we have introduced new features which capture the frequency and time evolution of the signal, and improved the signal-to-noise calculation by accounting for broad profiles. We enhanced the ML classifier by including a third class characterizing RFI instances, allowing candidates arising from RFI to be isolated and reducing the false positive return rate. We also introduced a new training data set used by the ML algorithm that includes a large sample of pulsars misclassified by the previous classifier. Lastly, we developed an ensemble classifier comprising five different decision trees. Taken together these updates improve the pulsar recall rate by 2.5 per cent, while also improving the ability to identify pulsars with wide pulse profiles, often misclassified by the previous classifier. The new ensemble classifier is also able to reduce the percentage of false positive candidates identified from each LOTAAS pointing from 2.5 per cent (˜500 candidates) to 1.1 per cent (˜220 candidates).
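The three-class ensemble of decision trees can be sketched as follows. The features, class structure and the use of bootstrap resampling to diversify the five trees are stand-ins for illustration, not the LOTAAS feature set or training procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# synthetic candidates: 2 features, 3 classes (0 = noise, 1 = RFI, 2 = pulsar)
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 100)

# five trees, each trained on a bootstrap resample of the training set
trees = []
for seed in range(5):
    idx = np.random.default_rng(seed).integers(0, len(X), len(X))
    trees.append(DecisionTreeClassifier(max_depth=4, random_state=seed)
                 .fit(X[idx], y[idx]))

def ensemble_predict(X_new):
    """Majority vote across the five trees."""
    votes = np.stack([t.predict(X_new) for t in trees])   # (5, n_samples)
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

print(ensemble_predict(np.array([[0.1, 0.0], [4.0, 3.9]])))
```

A dedicated RFI class, as in the abstract, lets RFI-like candidates be routed to their own label instead of contaminating the pulsar or noise classes.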
Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation
NASA Astrophysics Data System (ADS)
Takaki, Ryoji; Akita, Takeshi; Shima, Eiji
A thermal mathematical model plays an important role in on-orbit operations as well as in spacecraft thermal design. The thermal mathematical model contains uncertain thermal characteristic parameters, such as the thermal contact resistances between components and the effective emittances of multilayer insulation (MLI) blankets, which degrade the efficiency and accuracy of the model. A particle filter, one of the sequential data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method conducts a large number of ensemble computations, which require substantial computational power. Recently, general-purpose computing on graphics processing units (GPGPU) has attracted attention in high-performance computing. Therefore, GPGPU is applied here to increase the computational speed of the thermal analysis used in the particle filter. This paper presents the speed-up achieved by using GPGPU as well as the method of applying GPGPU.
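The particle filter idea can be sketched on a toy one-node thermal model whose uncertain parameter (a conductance standing in for a thermal contact resistance) is estimated from noisy telemetry. All numbers and the cooling model are hypothetical, not from the paper; this CPU sketch is exactly the kind of embarrassingly parallel ensemble computation that GPGPU accelerates.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(T, G, dt=1.0, C=50.0, T_sink=250.0):
    """One explicit step of a single-node model cooling to a sink at T_sink."""
    return T + dt * G * (T_sink - T) / C

true_G, T = 2.0, 300.0
n_part = 2000
G_particles = rng.uniform(0.5, 4.0, n_part)       # prior ensemble of the parameter
T_particles = np.full(n_part, 300.0)
w = np.full(n_part, 1.0 / n_part)                 # particle weights

for _ in range(40):
    T = step(T, true_G)
    obs = T + rng.normal(0.0, 0.2)                # noisy temperature telemetry
    T_particles = step(T_particles, G_particles)
    w *= np.exp(-0.5 * ((obs - T_particles) / 0.2) ** 2)   # Gaussian likelihood
    w /= w.sum()
    if 1.0 / np.sum(w**2) < n_part / 2:           # resample when degenerate
        idx = rng.choice(n_part, n_part, p=w)
        G_particles, T_particles = G_particles[idx], T_particles[idx]
        G_particles += rng.normal(0.0, 0.01, n_part)   # jitter keeps diversity
        w[:] = 1.0 / n_part

G_est = np.sum(w * G_particles)
print(f"estimated conductance: {G_est:.2f} (true: {true_G})")
```

Each particle's propagation is independent of the others, so mapping the ensemble loop onto GPU threads gives near-linear speed-up, which is the motivation stated in the abstract.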
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment, with an acoustic propagation code serving as the measurement model. Data from the MREA/BP07 sea trials are tested, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over a 0.7-1.6 km range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is not only more accurate but also more efficient. Owing to frequency diversity, the processing of modulated signals produces a more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, P-wave velocity from core logging, and previous inversion results with fixed geometries.
Hydrologic trade-offs in conjunctive use management.
Bredehoeft, John
2011-01-01
An aquifer, in a stream/aquifer system, acts as a storage reservoir for groundwater. Groundwater pumping creates stream depletion that recharges the aquifer. As wells in the aquifer are moved away from the stream, the aquifer acts to filter out annual fluctuations in pumping; with distance the stream depletion tends to become equal to the total pumping averaged as an annual rate, with only a small fluctuation. This is true for both a single well and an ensemble of wells. A typical growing season in much of the western United States is 3 to 4 months. An ensemble of irrigation wells spread more or less uniformly across an aquifer several miles wide, pumping during the growing season, will deplete the stream by approximately one-third of the total amount of water pumped during the growing season. The remaining two-thirds of stream depletion occurs outside the growing season. Furthermore, it takes more than a decade of pumping for an ensemble of wells to reach a steady-state condition in which the impact on the stream is the same in succeeding years. After a decade or more of pumping, the depletion is nearly constant through the year, with only a small seasonal fluctuation: ±10%. Conversely, stream depletion following shutting down the pumping from an ensemble of wells takes more than a decade to fully recover from the prior pumping. Effectively managing a conjunctive groundwater and surface water system requires integrating the entire system into a single management institution with a long-term outlook. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
NASA Astrophysics Data System (ADS)
Robinson, B.; Herman, J. D.
2017-12-01
Long-term water supply planning is challenged by highly uncertain streamflow projections across climate models and emissions scenarios. Recent studies have devised infrastructure and policy responses that can withstand or adapt to an ensemble of scenarios, particularly those outside the envelope of historical variability. An important aspect of this process is whether the proposed thresholds for adaptation (i.e., observations that trigger a response) truly represent a trend toward future change. Here we propose an approach to connect observations of annual mean streamflow with long-term projections by filtering GCM-based streamflow ensembles. Visualizations are developed to investigate whether observed changes in mean annual streamflow can be linked to projected changes in end-of-century mean and variance relative to the full ensemble. A key focus is identifying thresholds that point to significant long-term changes in the distribution of streamflow (+/- 20% or greater) as early as possible. The analysis is performed on 87 sites in the Western United States, using streamflow ensembles through 2100 from a recent study by the U.S. Bureau of Reclamation. Results focus on three primary questions: (1) how many years of observed data are needed to identify the most extreme scenarios, and by what year can they be identified? (2) are these features different between sites? and (3) using this analysis, do observed flows to date at each site point to significant long-term changes? This study addresses the challenge of severe uncertainty in long-term streamflow projections by identifying key thresholds that can be observed to support water supply planning.
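The filtering idea, subsetting a projection ensemble by its agreement with early observations and then inspecting the end-of-century distribution of the surviving members, can be sketched on synthetic data. The trends, noise level and 15-year observation window below are invented, not taken from the Reclamation ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for a GCM-based streamflow ensemble: 100 members of
# annual mean flow for 2020-2100, each with its own linear trend plus noise
years = np.arange(2020, 2101)
trends = rng.uniform(-0.4, 0.4, size=100)                # percent per year
members = 100.0 * (1.0 + np.outer(trends / 100.0, years - 2020))
members += rng.normal(0.0, 2.0, members.shape)

# "observations": the first 15 years of the most strongly declining member
obs = members[np.argmin(trends), :15]

# filter: keep members whose 15-year mean lies within one noise std of the obs
keep = np.abs(members[:, :15].mean(axis=1) - obs.mean()) < 2.0
subset = members[keep]

print(f"kept {keep.sum()} of 100 members")
print(f"full-ensemble 2100 mean: {members[:, -1].mean():.1f}")
print(f"filtered 2100 mean:      {subset[:, -1].mean():.1f}")
```

Because early-period means already separate the trend families, the filtered subset's end-of-century distribution shifts well below the full ensemble's, which is the kind of early threshold signal the study looks for.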
Identification of hydrological model parameter variation using ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao
2016-12-01
Hydrological model parameters play an important role in the predictive ability of models. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variation, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations with no obvious change pattern. The proposed method provides an effective tool for quantifying the temporal variations of model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
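The state-augmentation idea behind tracking a drifting parameter can be sketched on a toy analogue: a linear reservoir Q = k·S whose recession parameter k trends upward, with the EnKF state augmented to [S, k] so runoff observations update the parameter too. The model, forcing and numbers are illustrative, not the TWBM itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ens, n_steps = 100, 200
S_true = 100.0
S = 100.0 + rng.normal(0, 5, n_ens)               # storage ensemble
k = 0.03 + rng.normal(0, 0.02, n_ens)             # deliberately biased prior for k
k_mean = []

for t in range(n_steps):
    P_in = 5.0 + 2.0 * np.sin(2 * np.pi * t / 50)     # seasonal forcing
    k_true = 0.05 + 0.0002 * t                        # slow trend in the truth
    S_true += P_in - k_true * S_true
    Q_obs = k_true * S_true + rng.normal(0, 0.2)      # noisy runoff observation

    # forecast: random-walk evolution of the parameter, then the water balance
    k = np.clip(k + rng.normal(0, 0.002, n_ens), 1e-3, None)
    S += P_in - k * S
    Q = k * S                                         # predicted runoff
    # joint update of [S, k] from the runoff innovation (perturbed obs)
    obs_pert = Q_obs + rng.normal(0, 0.2, n_ens)
    d = np.var(Q, ddof=1) + 0.2**2
    S = S + np.cov(S, Q)[0, 1] / d * (obs_pert - Q)
    k = k + np.cov(k, Q)[0, 1] / d * (obs_pert - Q)
    k_mean.append(k.mean())

print(f"true k at end: {k_true:.3f}, EnKF estimate: {k_mean[-1]:.3f}")
```

The random-walk perturbation of k is what keeps parameter spread alive so the filter can follow a trend, mirroring the abstract's finding that the EnKF tracks trend, abrupt change and periodicity in SC.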
3D soil water nowcasting using electromagnetic conductivity imaging and the ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Huang, Jingyi; McBratney, Alex; Minasny, Budiman; Triantafilis, John
2017-04-01
Mapping and immediate forecasting (nowcasting) of soil water content (θ) and its movement can be challenging. Although apparent electrical conductivity (ECa) measured by electromagnetic induction has been used, it is difficult to apply along a transect or across a field. Across a 3.95-ha field with varying soil texture, an ensemble Kalman filter (EnKF) was used to monitor and nowcast θ dynamics in 2-D and 3-D over 16 days. The EnKF combined a physical model fitted with θ measured by soil moisture sensors and an artificial neural network model comprising estimates of true electrical conductivity (σ) generated by inversions of DUALEM-421S ECa data. Results showed that the spatio-temporal variation in θ can be successfully modelled using the EnKF (Lin's concordance = 0.89). The soil dried quickly at the beginning of the irrigation cycle, with θ decreasing with time and soil depth, consistent with classical soil-drying theory and experiments. It was also found that the soil dried quickly in the loamy and duplex soils across the field, which was attributable to deep drainage and preferential flow. It was concluded that the EnKF approach can be used to improve irrigation practice, minimising variation in irrigation and improving irrigation efficiency by applying variable rates of irrigation across the field. In addition, soil water status can be nowcast with this method using weather forecast information, providing guidance to farmers for real-time irrigation management.
NASA Astrophysics Data System (ADS)
Maki, T.; Sekiyama, T. T.; Shibata, K.; Miyazaki, K.; Miyoshi, T.; Yamada, K.; Yokoo, Y.; Iwasaki, T.
2011-12-01
In current carbon cycle analysis, inverse modeling plays an important role. However, it requires enormous computational resources when we deal with more flux regions and more observations. The local ensemble transform Kalman filter (LETKF) is an alternative approach that reduces such problems. We constructed a carbon cycle analysis system with the LETKF and the MRI (Meteorological Research Institute) online transport model (MJ98-CDTM). In MJ98-CDTM, an off-line transport model (CDTM) is directly coupled with the MRI/JMA GCM (MJ98). We further improved the vertical transport processes in MJ98-CDTM relative to a previous study. The LETKF includes enhanced features such as a smoother to assimilate future observations, adaptive inflation, and a bias correction scheme. In this study, we use CO2 observations from surface data (continuous and flask), aircraft data (CONTRAIL) and satellite data (GOSAT), and we also plan to assimilate AIRS tropospheric CO2 data. We developed a quality control system. We estimated 3-day-mean CO2 fluxes at a resolution of T42. Here, only CO2 concentrations and fluxes are analyzed, whereas meteorological fields are nudged toward the Japanese reanalysis (JCDAS). The horizontal localization length scale and assimilation window are chosen to be 1000 km and 3 days, respectively. The results indicate that the assimilation system works properly, performing better than a free transport model run when validated against independent CO2 concentration observations and CO2 analysis data.
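The horizontal localization mentioned above can be sketched as a distance-based weighting of observations around each analysis point. A Gaussian taper with a hard cutoff is used here for brevity; operational LETKFs typically use the compactly supported Gaspari-Cohn function instead, and the 1000 km scale is taken from the abstract.

```python
import numpy as np

def local_obs_weights(grid_lat, grid_lon, obs_lat, obs_lon, L=1000.0):
    """Weight each observation for one analysis point by great-circle
    distance (km), tapering its influence to zero beyond ~2L."""
    R_earth = 6371.0
    lat1, lon1, lat2, lon2 = map(np.radians, (grid_lat, grid_lon, obs_lat, obs_lon))
    # haversine great-circle distance
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    d = 2 * R_earth * np.arcsin(np.sqrt(a))
    w = np.exp(-0.5 * (d / L) ** 2)
    w[d > 2 * L] = 0.0          # hard cutoff: distant obs are dropped entirely
    return w

# observations at 0, ~1000 km and ~3000 km from an analysis point at (0, 0)
w = local_obs_weights(0.0, 0.0, np.array([0.0, 0.0, 0.0]), np.array([0.0, 9.0, 27.0]))
print(w)
```

Dropping distant observations outright is what makes the LETKF "local": each grid point solves a small analysis problem using only nearby data, which is why the method scales to many flux regions and observations.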
NASA Astrophysics Data System (ADS)
Durazo, Juan A.; Kostelich, Eric J.; Mahalov, Alex
2017-09-01
We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve ionospheric forecasts. This strategy is applied in data assimilation observing system experiments, where synthetic electron density vertical profiles, which represent those of Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa satellite 3, are assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model using the local ensemble transform Kalman filter during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. Forecast improvement due to assimilation of augmented vertical profiles is measured with the root-mean-square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields about 60%-80% reduction in electron density RMSE, compared to a 15% average reduction when assimilating randomly placed vertical profiles. Assimilating vertical profiles whose locations target the zonal component of neutral winds (Un) yields on average a 25% RMSE reduction in Un estimates, compared to a 2% average improvement obtained with randomly placed vertical profiles. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the forecast.
Improving Weather Forecasts Through Reduced Precision Data Assimilation
NASA Astrophysics Data System (ADS)
Hatfield, Samuel; Düben, Peter; Palmer, Tim
2017-04-01
We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
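The precision experiment above can be mimicked with the Lorenz '96 model by rounding the state to float16 after every time step, a crude stand-in for a proper reduced-precision emulator (which would round after every arithmetic operation). Forward Euler with a small step is used here for brevity; the time-stepping scheme and step count are illustrative choices.

```python
import numpy as np

def lorenz96_step(x, F=8.0, dt=0.01):
    """One forward-Euler step of Lorenz '96:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * dxdt

n = 40
x64 = 8.0 * np.ones(n)
x64[0] += 0.01                        # standard 40-variable setup, small kick
x16 = x64.astype(np.float16)

for _ in range(200):
    x64 = lorenz96_step(x64)
    # crude emulation of half precision: compute the step in double, then
    # round the state to float16, injecting float16-sized error each step
    x16 = lorenz96_step(x16.astype(np.float64)).astype(np.float16)

rmse = np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))
print(f"double vs emulated-half RMSE after 200 steps: {rmse:.4f}")
```

The rounding perturbations grow under the chaotic dynamics; the argument in the abstract is that, in an ensemble assimilation setting, this extra error can be smaller than the gain from spending the saved cost on a larger ensemble.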
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
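The generative structure described above can be sketched directly: each input is a Gaussian draw multiplied by a mixer value assigned probabilistically from a small set. The mixer values and assignment probabilities below are hypothetical, and the hallmark heavy-tailed marginal is checked via kurtosis.

```python
import numpy as np

rng = np.random.default_rng(0)

# each 2-D "filter response" is x = sqrt(v) * g, with g Gaussian (local
# filter structure) and the mixer v assigned per input from a small set
n = 50_000
mixers = np.array([0.25, 1.0, 9.0])           # hypothetical mixer values
assign = rng.choice(3, size=n, p=[0.5, 0.3, 0.2])
v = mixers[assign]
g = rng.standard_normal((n, 2))
x = np.sqrt(v)[:, None] * g                   # gaussian scale mixture samples

# hallmark of a GSM: marginals are heavy-tailed relative to a Gaussian
kurt = np.mean(x[:, 0] ** 4) / np.mean(x[:, 0] ** 2) ** 2
print(f"marginal kurtosis: {kurt:.1f}  (a Gaussian would give 3)")
```

Inference in the model runs this generation backwards: given x, one computes the posterior over which mixer produced it, which is the "soft mixer assignment" of the title.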
Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.
Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D
2016-06-01
The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences-from a single sequence to an entire superfamily-and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics-such as Markov state models (MSMs)-which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. 
We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.
NASA Astrophysics Data System (ADS)
Hill, A.; Weiss, C.; Ancell, B. C.
2017-12-01
The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas that have initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process impact our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) whether targeted observations have more positive impact than non-targeted (i.e., randomly chosen) observations; (2) whether there are lead-time constraints to targeting for convection; (3) how inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) whether there exist differences between targeted observations at the surface versus aloft; and (5) how physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection between 2011 and 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. 
Target locations for surface and radiosonde observations are computed 6, 12, and 18 hours into the forecast based on a chosen scalar forecast response metric (e.g., maximum reflectivity at convection initiation). A variety of experiments are designed to achieve the aforementioned goals and will be presented, along with their results, detailing the feasibility of targeting for mesoscale convection forecasts.
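The ESA computation underlying the target selection can be sketched as a linear regression of the scalar forecast metric onto each initial-state variable, i.e. sensitivity ≈ cov(J, x_i)/var(x_i). The ensemble below is synthetic, with the metric constructed to depend on a known set of grid points so the recovered pattern can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for ESA: 200 members, 100 initial-state grid points;
# the forecast metric J (e.g. maximum reflectivity at convection initiation)
# is built to depend on grid points 10-14
n_ens, n_grid = 200, 100
x0 = rng.standard_normal((n_ens, n_grid))          # initial-state ensemble
J = 5.0 * x0[:, 10:15].sum(axis=1) + 0.5 * rng.standard_normal(n_ens)

# ensemble sensitivity: dJ/dx_i ≈ cov(J, x_i) / var(x_i)
dx = x0 - x0.mean(axis=0)
dJ = J - J.mean()
sens = (dJ @ dx) / (n_ens - 1) / dx.var(axis=0, ddof=1)

target = int(np.argmax(np.abs(sens)))   # candidate targeted-observation site
print(f"most sensitive grid point: {target}")
```

The experiments in the abstract probe exactly where this linear estimate breaks down, e.g. under nonlinearity, sampling error handled by localization, and physics errors.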
NASA Astrophysics Data System (ADS)
Wood, Andy; Clark, Elizabeth; Mendoza, Pablo; Nijssen, Bart; Newman, Andy; Clark, Martyn; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
Many, if not most, national operational streamflow prediction systems rely on a forecaster-in-the-loop approach that requires the hands-on effort of an experienced human forecaster. This approach evolved from the need to correct for long-standing deficiencies in the models and datasets used in forecasting, and the practice often leads to skillful flow predictions despite the use of relatively simple, conceptual models. Yet the 'in-the-loop' forecast process is not reproducible, which limits opportunities to assess and incorporate new techniques systematically, and the effort required to make forecasts in this way is an obstacle to expanding forecast services - e.g., through adding new forecast locations or more frequent forecast updates, running more complex models, or producing forecasts and hindcasts that can support verification. In the last decade, the hydrologic forecasting community has begun to develop more centralized, 'over-the-loop' systems. The quality of these new forecast products will depend on their ability to leverage research in areas including earth system modeling, parameter estimation, data assimilation, statistical post-processing, weather and climate prediction, verification, and uncertainty estimation through the use of ensembles. Currently, many national operational streamflow forecasting and water management communities have little experience with the strengths and weaknesses of over-the-loop approaches, even as such systems are beginning to be deployed operationally in centers such as ECMWF. There is thus a need both to evaluate these forecasting advances and to demonstrate their potential in a public arena, raising awareness in forecast user communities and development programs alike. 
To address this need, the US National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the US Army Corps of Engineers, using the NCAR 'System for Hydromet Analysis Research and Prediction Applications' (SHARP) to implement, assess and demonstrate real-time over-the-loop ensemble flow forecasts in a range of US watersheds. The system relies on fully ensemble-based techniques, including: a 100-member ensemble of meteorological model forcings and ensemble particle filter data assimilation for initializing watershed states; analog/regression-based downscaling of ensemble weather forecasts from GEFS; and statistical post-processing of ensemble forecast outputs, all of which run in real time within a workflow managed by ECMWF's ecFlow libraries over large US regional domains. We describe SHARP and present early hindcast and verification results for short- to seasonal-range streamflow forecasts in a number of US case-study watersheds.
NASA Astrophysics Data System (ADS)
Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.
2017-04-01
The impact of assimilating surface observations for improving weather prediction, both inside the boundary layer and in the flow aloft, has been shown in several publications. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with comparing point-based observations to grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator, and they extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g., for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three kinds of biases, from the simplest to the more sophisticated, are added to 2-meter temperature for synthetic METARs: (1) a spatially invariant bias, (2) a spatially varying bias proportional to the topographic height differences between the model and the observations, and (3) a bias proportional to the temperature. The target region, characterized by complex terrain, is the western U.S. on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). 
More complex bias structures in experiments (2) and (3) are more difficult to estimate, but still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero, and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but the temporal and spatial variability of the parameter estimates is similar to that found when estimating a zero bias in unbiased observations. The distributions are related to other errors in the forecasts, indicating that the parameters absorb some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
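The state-augmentation idea described above can be sketched in a few lines: append a bias parameter to each ensemble member, make it enter the forward operator additively, and let a single Kalman update adjust state and bias together. The sketch below is an illustrative toy (a scalar truth, one biased and one unbiased observation, and invented numbers), not the WRF/OSSE system of the abstract; the unbiased second observation plays the role of the radiosondes that anchor the state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy augmented-state ensemble filter: estimate a spatially invariant
# observation bias alongside the state. All numbers are illustrative.
n_state, n_ens = 3, 80
true_bias = 1.5
x0_truth = 0.7
obs_err = 0.2

# Augmented ensemble Z = [x; beta]: model state plus one bias parameter.
X = rng.normal(0.0, 1.0, size=(n_state, n_ens))
beta = rng.normal(0.0, 1.0, size=(1, n_ens))
Z = np.vstack([X, beta])

def forward(Z):
    # Row 0: biased surface obs y1 = x0 + beta (bias in the forward operator);
    # row 1: an independent unbiased obs y2 = x0 that anchors the state.
    return np.vstack([Z[0:1] + Z[-1:], Z[0:1]])

y_obs = np.array([[x0_truth + true_bias], [x0_truth]])
R = obs_err**2 * np.eye(2)

for _ in range(50):
    Hx = forward(Z)
    y_pert = y_obs + rng.normal(0.0, obs_err, size=(2, n_ens))
    Za = Z - Z.mean(axis=1, keepdims=True)
    Ha = Hx - Hx.mean(axis=1, keepdims=True)
    P_zh = Za @ Ha.T / (n_ens - 1)
    P_hh = Ha @ Ha.T / (n_ens - 1) + R
    K = P_zh @ np.linalg.inv(P_hh)          # Kalman gain for augmented state
    Z = Z + K @ (y_pert - Hx)
    Z[0] += rng.normal(0.0, 0.3, size=n_ens)  # keep state spread alive

bias_estimate = Z[-1].mean()   # converges toward true_bias
```

Because the same Kalman equations act on the augmented vector, no separate bias-correction step is needed; the cross-covariance between the bias parameter and the biased observation does the work.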
Operational aspects of asynchronous filtering for improved flood forecasting
NASA Astrophysics Data System (ADS)
Rakovec, Oldrich; Weerts, Albrecht; Sumihar, Julius; Uijlenhoet, Remko
2014-05-01
Hydrological forecasts can be made more reliable and less uncertain by recursively improving initial conditions. A common way of improving the initial conditions is to make use of data assimilation (DA), a feedback mechanism or update methodology that merges model estimates with available real-world observations. The traditional implementation of the Ensemble Kalman Filter (EnKF; e.g. Evensen, 2009) is synchronous, commonly called three-dimensional (3-D) assimilation, which means that all assimilated observations correspond to the time of the update. Asynchronous DA, also called four-dimensional (4-D) assimilation, refers to an updating methodology in which the observations being assimilated into the model originate from times different from the time of the update (Evensen, 2009; Sakov et al., 2010). This study investigates how the capabilities of the DA procedure can be improved by applying alternative Kalman-type methods, e.g., the Asynchronous Ensemble Kalman Filter (AEnKF). The AEnKF assimilates observations at a smaller computational cost than the original EnKF, which is beneficial for operational purposes. The results of discharge assimilation into a grid-based hydrological model for the Upper Ourthe catchment in the Belgian Ardennes show that including past predictions and observations in the AEnKF improves the model forecasts compared to the traditional EnKF. Additionally, we show that eliminating the strongly non-linear relation between the soil moisture storage and the assimilated discharge observations from the model update is beneficial for operational forecasting, which is evaluated using several validation measures. In the current study we employed the HBV-96 model built within OpenStreams (2013), a recently developed open source modelling environment. The advantage of using OpenStreams is that it enables direct communication with OpenDA (2013), an open source data assimilation toolbox.
OpenDA provides a number of algorithms for model calibration and assimilation and can be connected to any kind of environmental model. This setup is embedded in the Delft Flood Early Warning System (Delft-FEWS; Werner et al., 2013), which performs all simulation and forecast runs and handles all hydrological and meteorological data. References: Evensen, G. (2009), Data Assimilation: The Ensemble Kalman Filter, Springer, doi:10.1007/978-3-642-03711-5. OpenDA (2013), The OpenDA data-assimilation toolbox, www.openda.org (last access: 1 November 2013). OpenStreams (2013), OpenStreams, www.openstreams.nl (last access: 1 November 2013). Sakov, P., G. Evensen, and L. Bertino (2010), Asynchronous data assimilation with the EnKF, Tellus A, 62(1), 24-29, doi:10.1111/j.1600-0870.2009.00417.x. Werner, M., J. Schellekens, P. Gijsbers, M. van Dijk, O. van den Akker, and K. Heynert (2013), The Delft-FEWS flow forecasting system, Environmental Modelling & Software, 40, 65-77, doi:10.1016/j.envsoft.2012.07.010.
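The asynchronous update described above can be sketched with a toy linear-reservoir model: predicted observations from every step of a window are stored, and a single update at the end of the window uses their cross-covariance with the current ensemble (as in Sakov et al., 2010). The model, numbers, and observation operator below are invented stand-ins, not the HBV-96/OpenDA configuration of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens = 30
obs_err = 0.05

def step(x, noisy=True):
    # Toy linear-reservoir "hydrological" model (illustrative only).
    return 0.9 * x + (rng.normal(0.0, 0.1, size=np.shape(x)) if noisy else 0.0)

def obs_op(x):
    return 0.5 * x            # discharge as a fixed fraction of storage

x_true = 2.0
X = rng.normal(2.5, 0.5, size=n_ens)   # deliberately biased prior ensemble

# Propagate over a 3-step window, storing predicted observations from every
# step; the asynchronous part is that none are assimilated until the end.
Hx_hist, y_hist = [], []
for _ in range(3):
    x_true = step(x_true, noisy=False)
    X = step(X)
    Hx_hist.append(obs_op(X))
    y_hist.append(obs_op(x_true) + rng.normal(0.0, obs_err))

prior_mean = X.mean()

# One update at the end of the window, using cross-covariances between the
# current ensemble and the stored (past) predicted observations.
Hx = np.vstack(Hx_hist)                          # (3, n_ens)
y = np.array(y_hist)
Xa = X - X.mean()
Ha = Hx - Hx.mean(axis=1, keepdims=True)
P_xh = Ha @ Xa / (n_ens - 1)                     # (3,)
P_hh = Ha @ Ha.T / (n_ens - 1) + obs_err**2 * np.eye(3)
K = np.linalg.solve(P_hh, P_xh)                  # (3,)
y_pert = y[:, None] + rng.normal(0.0, obs_err, size=(3, n_ens))
X = X + K @ (y_pert - Hx)

post_mean = X.mean()   # pulled toward the truth by the past observations
```

The computational saving comes from replacing several sequential updates with one batch update; the only extra storage is the stack of predicted-observation ensembles.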
Hydrologic Remote Sensing and Land Surface Data Assimilation.
Moradkhani, Hamid
2008-05-06
Accurate, reliable and skillful forecasting of key environmental variables such as soil moisture and snow is of paramount importance due to their strong influence on many water resources applications, including flood control, agricultural production and effective water resources management, as well as on the behavior of the climate system. Soil moisture is a key state variable in land surface-atmosphere interactions, affecting surface energy fluxes, runoff and the radiation balance. Snow processes also have a large influence on land-atmosphere energy exchanges due to snow's high albedo, low thermal conductivity and considerable spatial and temporal variability, resulting in dramatic changes in surface and ground temperatures. Measurement of these two variables is possible through a variety of ground-based and remote sensing methods. Remote sensing, however, holds great promise for soil moisture and snow measurements, which have considerable spatial and temporal variability. Merging these measurements with hydrologic model outputs in a systematic and effective way improves land surface model predictions. Data assimilation provides a mechanism to combine these two sources of estimation. Much success has been attained in recent years in using data from passive microwave sensors and assimilating them into models. This paper provides an overview of remote sensing measurement techniques for soil moisture and snow and describes advances in data assimilation through ensemble filtering, mainly the Ensemble Kalman filter (EnKF) and the Particle filter (PF), for improving model predictions and reducing the uncertainties involved in the prediction process.
It is believed that the PF provides a complete representation of the probability distribution of the state variables of interest (according to sequential Bayes' law) and could be a strong alternative to the EnKF, which is subject to limitations including its linear updating rule and its assumption of jointly normal error distributions in the state variables and observations.
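The PF/EnKF contrast drawn above can be illustrated with a bootstrap (SIR) particle filter: particles are propagated through the full nonlinear model, reweighted by the observation likelihood (no linear update, no Gaussian assumption), and resampled when the effective sample size collapses. The model and all numbers below are a standard nonlinear toy, assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_p = 1000

# Classic nonlinear / non-Gaussian toy model; purely illustrative.
def propagate(x, t):
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t) \
        + rng.normal(0.0, 1.0, size=np.shape(x))

def obs(x):
    return x**2 / 20.0        # strongly nonlinear observation operator

x_true = 0.1
particles = rng.normal(0.0, 2.0, size=n_p)
weights = np.full(n_p, 1.0 / n_p)

for t in range(20):
    x_true = propagate(x_true, t)
    y = obs(x_true) + rng.normal(0.0, 1.0)
    particles = propagate(particles, t)
    # Bayes' rule: reweight particles by the observation likelihood.
    weights *= np.exp(-0.5 * (y - obs(particles))**2) + 1e-300
    weights /= weights.sum()
    # Resample when the effective sample size collapses (degeneracy control).
    ess = 1.0 / np.sum(weights**2)
    if ess < n_p / 2:
        idx = rng.choice(n_p, size=n_p, p=weights)
        particles, weights = particles[idx], np.full(n_p, 1.0 / n_p)

posterior_mean = np.sum(weights * particles)
```

The resampling step is exactly the mechanism that frees the PF from the EnKF's linear-Gaussian update, at the cost of needing enough particles to avoid weight degeneracy.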
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing
2017-11-01
The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; a random forest classifier is then applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is applied to the HSI to extract the edges of the first principal component after dimension reduction, and a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are read from the original classification map, and a majority voting mechanism generates the final output. Extensive experiments show that the proposed method achieves competitive performance compared with several traditional approaches.
NASA Astrophysics Data System (ADS)
Counillon, Francois; Kimmritz, Madlen; Keenlyside, Noel; Wang, Yiguo; Bethke, Ingo
2017-04-01
The Norwegian Climate Prediction Model combines the Norwegian Earth System Model with the Ensemble Kalman Filter data assimilation method. The prediction skills of different versions of the system (with 30 members) are tested in the Nordic Seas and the Arctic region. By comparing hindcasts branched from an SST-only assimilation run with a free ensemble run of 30 members, we are able to dissociate the predictability rooted in the external forcing from the predictability harvested from SST-derived initial conditions. The latter adds predictability in the North Atlantic subpolar gyre and the Nordic Seas, and overall there is very little degradation or forecast drift. Combined assimilation of SST and T-S profiles further improves the prediction skill in the Nordic Seas and into the Arctic. These lead to multi-year predictability in the high latitudes. Ongoing developments in strongly coupled assimilation (ocean and sea ice) of ice concentration in idealized twin experiments will be shown as a way to further enhance prediction skill in the Arctic.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu
2013-11-14
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
NASA Astrophysics Data System (ADS)
Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.
2013-11-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jared A.; Hacker, Joshua P.; Delle Monache, Luca
2016-12-14
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this study, we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts.
Paleo Data Assimilation of Pseudo-Tree-Ring-Width Chronologies in a Climate Model
NASA Astrophysics Data System (ADS)
Fallah Hassanabadi, B.; Acevedo, W.; Reich, S.; Cubasch, U.
2016-12-01
Using the time-averaged Ensemble Kalman Filter (EnKF) and a forward model, we assimilate pseudo tree-ring-width (TRW) chronologies into an atmospheric general circulation model. This study investigates several aspects of paleo data assimilation (PDA) within a perfect-model set-up: (i) we test the performance of several forward operators in the framework of a PDA-based climate reconstruction, (ii) we compare the skill of the PDA-based simulations against free ensemble runs, and (iii) we investigate the skill of "online" (cycling) DA versus "off-line" (no-cycling) DA. In our experiments, the online PDA approach did not outperform the off-line one, despite its considerable additional implementation complexity. On the other hand, we observed that the error reduction achieved by assimilating a particular pseudo-TRW chronology is modulated by the strength of the yearly internal variability of the model at the chronology site. This result might help the dendrochronology community optimize their sampling efforts.
NASA Technical Reports Server (NTRS)
Keppenne, Christian; Vernieres, Guillaume; Rienecker, Michele; Jacob, Jossy; Kovach, Robin
2011-01-01
Satellite altimetry measurements have provided global, evenly distributed observations of the ocean surface since 1993. However, the difficulties introduced by the presence of model biases and the requirement that data assimilation systems extrapolate the sea surface height (SSH) information to the subsurface in order to estimate the temperature, salinity and currents make it difficult to optimally exploit these measurements. This talk investigates the potential of altimetry data assimilation once the biases are accounted for with an ad hoc bias estimation scheme. Either steady-state or state-dependent multivariate background-error covariances from an ensemble of model integrations are used to address the problem of extrapolating the information to the subsurface. The GMAO ocean data assimilation system applied to an ensemble of coupled model instances using the GEOS-5 AGCM coupled to MOM4 is used in the investigation. To model the background error covariances, the system relies on a hybrid ensemble approach in which a small number of dynamically evolved model trajectories is augmented on the one hand with past instances of the state vector along each trajectory and, on the other, with a steady-state ensemble of error estimates from a time series of short-term model forecasts. A state-dependent adaptive error-covariance localization and inflation algorithm controls how the SSH information is extrapolated to the subsurface. A two-step predictor-corrector approach is used to assimilate future information. Independent (not assimilated) temperature and salinity observations from Argo floats are used to validate the assimilation. A two-step projection method in which the system first calculates an SSH increment and then projects this increment vertically onto the temperature, salinity and current fields is found to be most effective in reconstructing the subsurface information. The performance of the system in reconstructing the subsurface fields is particularly impressive for temperature, but not as satisfactory for salinity.
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Falanga Bolognesi, Salvatore; De Michele, Carlo; Medina Gonzalez, Hanoi; Villani, Paolo; D'Urso, Guido; Battista Chirico, Giovanni
2015-04-01
Irrigation agriculture is one of the biggest consumers of water in Europe, especially in southern regions, where it accounts for up to 70% of the total water consumption. The EU Common Agricultural Policy, combined with the Water Framework Directive, requires farmers and irrigation managers to substantially increase the efficiency of agricultural water use over the next decade. Ensemble numerical weather predictions can be valuable data for developing operational advisory irrigation services. We propose a stochastic ensemble-based model providing spatial and temporal estimates of crop water requirements, implemented within an advisory service offering detailed maps of irrigation water requirements and crop water consumption estimates, to be used by water irrigation managers and farmers. The stochastic model combines estimates of crop potential evapotranspiration retrieved from ensemble numerical weather forecasts (COSMO-LEPS, 16 members, 7 km resolution) and canopy parameters (LAI, albedo, fractional vegetation cover) derived from high resolution satellite images in the visible and near infrared wavelengths. The service provides users with daily estimates of crop water requirements for lead times of up to five days. The temporal evolution of the crop potential evapotranspiration is simulated with autoregressive models. An ensemble Kalman filter is employed for updating model states by assimilating both ground-based meteorological variables (where available) and numerical weather forecasts. The model has been applied in the Campania region (Southern Italy), where a satellite-assisted irrigation advisory service has been operating since 2006. This work presents the results of the system performance for one year of experimental service. The results suggest that the proposed model can be an effective support for a sustainable use and management of irrigation water under conditions of water scarcity and drought.
Since the evapotranspiration term is a key component of the catchment water balance, a noteworthy future development is that the model could also offer advanced support for water resources management decisions at the catchment scale.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of catchments, DHM is still subject to uncertainties inherent in model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model through global parameters. In this study, we present a global multi-parametric ensemble approach that incorporates the parametric uncertainty of DHM into DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated into the DA scheme. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes.
Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
NASA Astrophysics Data System (ADS)
Thilker, David
2017-08-01
We request 17 orbits to conduct a pilot study examining the effectiveness of the WFC3/UVIS F300X filter for studying fundamental problems in star formation in the low-density regime. In principle, the broader bandpass and higher throughput of F300X can halve the required observing time relative to F275W, the filter of choice for studying young stellar populations in nearby galaxies. Together with F475W and F600LP, this X filter set may be as effective as standard UVIS broadband filters for characterizing the physical properties of such populations. We will observe 5 low surface brightness targets with a range of properties to test potential issues with F300X: the red tail to 4000A and a red leak beyond, ghosts, and the wider bandpass. Masses and ages of massive stars, young star clusters, and clumps derived from X filter set photometry will be compared with corresponding measurements from standard filters. Beyond testing, our program will provide the first sample spanning a range of LSB galaxy properties for which HST UV imaging will be obtained, and a glimpse into the ensemble properties of the quanta of star formation in these unusual environments. The increased observing efficiency would make programs that require several tens to hundreds of orbits to build statistical samples of massive stars, young star clusters, and clumps more tractable. We are hopeful that our pilot observations will broadly enable high-resolution UV imaging exploration of the low-density frontier of star formation while HST is still in good health.
2014-12-27
[Fragmentary indexing excerpts] The MEKF assumes that a prior system is running with several forecasts. The results apply to quantities that are correlated with the ocean state and for which a forward model exists (e.g. oil spill plumes), and can be used for operational applications or to derive enhanced background fields for other data assimilation systems.
NASA Astrophysics Data System (ADS)
Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko
2011-05-01
Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. 
The performance of the MBD method was good for ensemble prediction of intense rain with a relatively small computing cost. The LET method showed similar characteristics to the MBD method, but the spread and growth rate were slightly smaller and the relative operating characteristic area skill score and BSS did not surpass those of MBD. These characteristic features of the five methods were confirmed by checking the evolution of the total energy norms and their growth rates. Characteristics of the initial perturbations obtained by four methods (GSV, MSV, MBD and LET) were examined for the case of a synoptic low-pressure system passing over eastern China. With GSV and MSV, the regions of large spread were near the low-pressure system, but with MSV, the distribution was more concentrated on the mesoscale disturbance. On the other hand, large-spread areas were observed southwest of the disturbance in MBD and LET. The horizontal pattern of LET perturbation was similar to that of MBD, but the amplitude of the LET perturbation reflected the observation density.
Holtzman, Tahl; Jörntell, Henrik
2011-01-01
Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex. PMID:22046297
A Noise-Filtered Under-Sampling Scheme for Imbalanced Classification.
Kang, Qi; Chen, XiaoShuang; Li, SiSi; Zhou, MengChu
2017-12-01
Under-sampling is a popular data preprocessing method for dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding the bias toward majority class examples. It always uses the full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed that incorporates a noise filter before executing resampling. To verify its efficiency, the scheme is implemented on the basis of four popular under-sampling methods, i.e., Undersampling + Adaboost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalance ratio. Experimental results indicate that the proposed scheme can significantly improve the original under-sampling-based methods in terms of three popular metrics for imbalanced classification, i.e., the area under the curve (AUC), F-measure, and G-mean.
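The two-stage idea, filter noisy minority examples first, then under-sample the majority, can be sketched with an invented 2-D dataset and a simple k-NN disagreement rule standing in for the noise filter (the paper's actual filter may differ):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented imbalanced dataset: 200 majority points near the origin, 20 true
# minority points in a distant cluster, plus 3 mislabeled "noisy" minority
# points sitting deep inside the majority cloud.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 2)),   # majority
    rng.normal(4.0, 0.5, size=(20, 2)),    # genuine minority cluster
    rng.normal(0.0, 0.3, size=(3, 2)),     # label noise
])
y = np.array([0] * 200 + [1] * 23)

def minority_neighbour_count(i, k=5):
    d = np.linalg.norm(X - X[i], axis=1)
    nearest = np.argsort(d)[1:k + 1]       # exclude the point itself
    return int((y[nearest] == 1).sum())

# Stage 1: noise filter. Drop minority points whose k nearest neighbours
# mostly belong to the majority class (an ENN-style rule).
keep = np.ones(len(y), dtype=bool)
for i in np.where(y == 1)[0]:
    if minority_neighbour_count(i) < 3:
        keep[i] = False
Xf, yf = X[keep], y[keep]

# Stage 2: random under-sampling of the majority down to the minority size.
maj = np.where(yf == 0)[0]
mino = np.where(yf == 1)[0]
sel = rng.choice(maj, size=len(mino), replace=False)
X_bal = np.vstack([Xf[sel], Xf[mino]])
y_bal = np.concatenate([yf[sel], yf[mino]])
```

Running the filter before resampling means the balanced training set no longer treats mislabeled minority points as sacrosanct, which is exactly the failure mode of plain under-sampling that the paper targets.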
Li, Chengwei; Zhan, Liwei
2015-08-01
To estimate the coefficient of friction between tire and runway surface during airplane touchdowns, we designed an experimental rig to simulate such events and to record the impact and friction forces being exerted. Because of noise in the measured signals, we developed a filtering method, based on ensemble empirical mode decomposition (EEMD) and the bandwidth of the probability density function of each intrinsic mode function, to extract the friction and impact force signals. We quantify the coefficient of friction by calculating the maximum values of the filtered force signals. Signal measurements are recorded for different drop heights and tire rotational speeds, and the corresponding coefficient of friction is calculated. The results show that the values of the coefficient of friction change only slightly; random noise and experimental artifacts are the main reasons for this variation.
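The core of EEMD is averaging the intrinsic mode functions (IMFs) obtained from many noise-perturbed copies of the signal, which cancels the added noise. The sketch below uses a deliberately minimal sifting routine (cubic-spline envelopes, fixed sift count) and omits the authors' PDF-bandwidth selection step; all signals and parameters are invented.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of upper and lower envelopes."""
    mx = argrelextrema(x, np.greater)[0]
    mn = argrelextrema(x, np.less)[0]
    if len(mx) < 4 or len(mn) < 4:
        return None                        # too smooth to sift further
    upper = CubicSpline(t[mx], x[mx])(t)
    lower = CubicSpline(t[mn], x[mn])(t)
    return x - (upper + lower) / 2.0

def emd(x, t, n_imfs=3, n_sift=10):
    imfs, resid = [], x.copy()
    for _ in range(n_imfs):
        h = resid.copy()
        for _ in range(n_sift):
            h_next = sift_once(h, t)
            if h_next is None:
                break
            h = h_next
        imfs.append(h)
        resid = resid - h
    return imfs, resid

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)

# Ensemble averaging: decompose many noise-perturbed copies and average the
# IMFs; the added white noise cancels out (the core EEMD idea).
n_trials, eps = 20, 0.2
imf_sum = np.zeros((3, t.size))
for _ in range(n_trials):
    imfs, _ = emd(signal + eps * rng.normal(size=t.size), t)
    imf_sum += np.vstack(imfs)
imf_mean = imf_sum / n_trials
```

In the paper's method, a selection criterion (there, the PDF bandwidth of each IMF) would then decide which averaged IMFs belong to the friction and impact forces and which are noise.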
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey; Moder, Jeffrey P.
2015-01-01
This paper presents the numerical simulations of confined three-dimensional coaxial water jets. The objectives are to validate the newly proposed nonlinear turbulence models of momentum and scalar transport, and to evaluate the newly introduced scalar APDF and DWFDF equation along with its Eulerian implementation in the National Combustion Code (NCC). Simulations conducted include the steady RANS, the unsteady RANS (URANS), and the time-filtered Navier-Stokes (TFNS); both without and with invoking the APDF or DWFDF equation. When the APDF (ensemble averaged probability density function) or DWFDF (density weighted filtered density function) equation is invoked, the simulations are of a hybrid nature, i.e., the transport equations of energy and species are replaced by the APDF or DWFDF equation. Results of simulations are compared with the available experimental data. Some positive impacts of the nonlinear turbulence models and the Eulerian scalar APDF and DWFDF approach are observed.
Saad, Walid; Slika, Wael; Mawla, Zara; Saad, George
2017-12-01
Recently, there has been a growing interest in identifying suitable routes for the disposal of pharmaceutical wastes. This study investigates the potential of matrix materials composed of recycled polyethylene/polypropylene reclaimed from municipal solid waste for immobilizing pharmaceutical solid wastes. Diclofenac (DF) drug product was embedded in boards of recycled plastic material, and leaching in water was assessed at various temperatures. DF concentrations were determined by high-performance liquid chromatography and revealed a maximum leachable fraction of 4% under accelerated conditions at 70°C, and less than 0.3% following 39 days of exposure at 20°C. The Ensemble Kalman Filter was employed to characterize the leaching behavior of DF. The filter verified that leaching occurs through diffusion, and was successful in predicting the leaching behavior of DF at 50°C and 70°C.
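As a sketch of how an ensemble Kalman filter can "learn" a leaching rate from concentration data: each ensemble member carries the remaining mass and a log rate constant, and assimilating the observed leached amount updates both jointly. The first-order leaching model and all numbers below are invented stand-ins for the diffusion model used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens, dt, m0 = 100, 1.0, 1.0
k_true, obs_err = 0.05, 0.01

# Each member carries [remaining mass, log rate constant]; the prior for k
# is deliberately off (mean 0.02) and uncertain.
m = np.full(n_ens, m0)
logk = rng.normal(np.log(0.02), 0.5, size=n_ens)

m_true = m0
for _ in range(30):
    m_true *= np.exp(-k_true * dt)                  # synthetic truth
    y = (m0 - m_true) + rng.normal(0.0, obs_err)    # observed leached amount
    m *= np.exp(-np.exp(logk) * dt)                 # forecast each member
    Hx = m0 - m                                     # predicted observation
    y_pert = y + rng.normal(0.0, obs_err, size=n_ens)
    A = np.vstack([m, logk])                        # augmented ensemble
    Aa = A - A.mean(axis=1, keepdims=True)
    Ha = Hx - Hx.mean()
    gain = (Aa @ Ha) / (Ha @ Ha + (n_ens - 1) * obs_err**2)
    A += np.outer(gain, y_pert - Hx)
    m, logk = A[0], A[1]

k_est = float(np.exp(logk).mean())   # drawn toward the true rate constant
```

The same joint-update mechanism lets the filter both test a hypothesized mechanism (does a diffusion-type model fit?) and extrapolate the calibrated model to other temperatures.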
NASA Astrophysics Data System (ADS)
Chen, BinQiang; Zhang, ZhouSuo; Zi, YanYang; He, ZhengJia; Sun, Chuang
2013-10-01
Detecting transient vibration signatures is of vital importance for vibration-based condition monitoring and fault detection in rotating machinery. However, raw signals collected by vibration sensors are generally mixtures of the physical vibrations of the multiple mechanical components installed in the examined machinery, and incipient fault-generated vibration signatures masked by interfering content are difficult to identify. The fast kurtogram (FK) is a concise and smart gadget for characterizing these vibration features, but the multi-rate filter-bank (MRFB) and the spectral kurtosis (SK) indicator of the FK are less powerful when strong interfering vibration content exists, especially when the FK is applied to vibration signals of short duration. Impulsive interfering content not authentically induced by mechanical faults complicates the analysis and leads to incorrect selection of the optimal analysis subband, so the original FK may miss essential fault signatures. To enhance the performance of the FK for industrial applications, an improved version, named the "fast spatial-spectral ensemble kurtosis kurtogram", is presented. In the proposed technique, discrete quasi-analytic wavelet tight frame (QAWTF) expansion methods are incorporated as the detection filters. The QAWTF, constructed on the basis of the dual-tree complex wavelet transform, possesses a better ability to extract transient vibration signatures and enhanced time-frequency localizability compared with conventional wavelet packet transforms (WPTs). Moreover, within the constructed QAWTF, a non-dyadic ensemble wavelet subband generating strategy is put forward to produce extra wavelet subbands capable of identifying fault features located in the transition bands of the WPT.
In addition, an enhanced indicator of signal impulsiveness, termed "spatial-spectral ensemble kurtosis" (SSEK), is put forward and used as the quantitative measure for selecting the optimal analysis parameters. The SSEK indicator is more robust in evaluating the impulsiveness of vibration signals owing to its better suppression of Gaussian noise, harmonics, and sporadic impulsive shocks. Numerical validations, an experimental test, and two engineering applications were used to verify the effectiveness of the proposed technique. The results demonstrate that it detects transient vibration content more robustly than the original FK and the WPT-based FK, especially when applied to vibration signals of relatively limited duration.
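The band-selection idea behind kurtosis-based kurtograms can be illustrated in a few lines. This sketch computes the excess kurtosis of FFT-masked subbands of a synthetic fault signal and picks the most impulsive band; it is a bare-bones stand-in for the FK/SSEK machinery, and the signal model, band layout, and masking filter are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000                              # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic vibration: Gaussian noise plus periodic fault impacts that ring
# a structural resonance near 3.5 kHz (all values illustrative)
x = rng.normal(0, 1.0, t.size)
imp = np.zeros(t.size)
imp[::500] = 1.0                         # one impact every 50 ms
ring = np.exp(-400 * t[:100]) * np.sin(2 * np.pi * 3500 * t[:100])
x += 20.0 * np.convolve(imp, ring)[: t.size]

def spectral_kurtosis(x, fs, bands):
    """Excess kurtosis of the band-limited signal in each subband (FFT masking)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    sk = []
    for lo, hi in bands:
        xb = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0), x.size)
        sk.append(np.mean((xb - xb.mean()) ** 4) / np.var(xb) ** 2 - 3)
    return np.array(sk)

bands = [(i * 1000, (i + 1) * 1000) for i in range(5)]   # five 1-kHz subbands
sk = spectral_kurtosis(x, fs, bands)
best = bands[int(np.argmax(sk))]
print(f"SK per subband: {np.round(sk, 1)}; most impulsive: {best} Hz")
```

A full kurtogram repeats this over a dyadic (or, as the paper argues, non-dyadic) tree of subband widths and center frequencies; the SSEK refinement further guards against sporadic, non-fault impulses inflating the kurtosis.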
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.
2015-08-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using either of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than that of the SepKF, a much larger ensemble size was required, as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient, while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than a bias-unaware filter.
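The state-augmentation idea behind bias-aware filtering can be sketched compactly: append the observation bias to the state vector, give it a random-walk (colored-noise) model, and let the ensemble update split each innovation between state and bias. The toy head dynamics, noise levels, and bias model below are illustrative assumptions, not the ColKF/SepKF formulations of the study:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_steps = 200, 60
h_true, bias_true = 10.0, 0.5          # true head and constant observation bias

def step(h, forcing):
    # Toy groundwater-head dynamics: relaxation toward a seasonal forcing
    return h + 0.1 * (forcing - h)

# Augmented ensemble: column 0 = head, column 1 = observation bias
ens = np.column_stack([rng.normal(10.0, 1.0, N), rng.normal(0.0, 1.0, N)])
R = 0.05**2

for k in range(n_steps):
    forcing = 10.0 + np.sin(k / 5.0)
    h_true = step(h_true, forcing)
    ens[:, 0] = step(ens[:, 0], forcing) + rng.normal(0, 0.02, N)
    ens[:, 1] += rng.normal(0, 0.01, N)            # colored-noise bias model
    y = h_true + bias_true + rng.normal(0, 0.05)   # biased head observation
    Hx = ens[:, 0] + ens[:, 1]                     # forward-modeled observation
    A = ens - ens.mean(axis=0)
    P_xy = A.T @ (Hx - Hx.mean()) / (N - 1)        # state-obs cross-covariance
    K = P_xy / (np.var(Hx, ddof=1) + R)            # gain for [head, bias]
    ens += np.outer(y + rng.normal(0, 0.05, N) - Hx, K)

print(f"estimated bias: {ens[:, 1].mean():.2f} (truth {bias_true})")
```

Head and bias are only jointly observed through their sum, so the bias becomes identifiable because the known dynamics keep pulling the head ensemble back toward its forced trajectory; this is also why such schemes need a healthy ensemble spread, echoing the ensemble-size sensitivity reported for the ColKF.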
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using either of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than that of the SepKF, a much larger ensemble size was required, as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient, while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than a bias-unaware filter.
Few-Photon Nonlinearity with an Atomic Ensemble in an Optical Cavity
NASA Astrophysics Data System (ADS)
Tanji, Haruka
2011-12-01
This thesis investigates the effect of the cavity vacuum field on the dispersive properties of an atomic ensemble in a strongly coupled high-finesse cavity. In particular, we demonstrate vacuum-induced transparency (VIT). The light absorption by the ensemble is suppressed by up to 40% in the presence of a cavity vacuum field. The sharp transparency peak is accompanied by the reduction in the group velocity of a light pulse, measured to be as low as 1800 m/s. This observation is a large step towards the realization of photon number-state filters, recently proposed by Nikoghosyan et al. Furthermore, we demonstrate few-photon optical nonlinearity, where the transparency is increased from 40% to 80% with ˜12 photons in the cavity mode. The result may be viewed as all-optical switching, where the transmission of photons in one mode may be controlled by 12 photons in another. These studies point to the possibility of nonlinear interaction between photons in different free-space modes, a scheme that circumvents cavity-coupling losses that plague cavity-based quantum information processing. Potential applications include advanced quantum devices such as photonic quantum gates, photon-number resolving detectors, and single-photon transistors. In the efforts leading up to these results, we investigate the collective enhancement of atomic coupling to a single mode of a low-finesse cavity. With the strong collective coupling, we obtain exquisite control of quantum states in the atom-photon coupled system. In this system, we demonstrate a heralded single-photon source with 84% conditional efficiency, a quantum bus for deterministic entanglement of two remote ensembles, and heralded polarization-state quantum memory with fidelity above 90%.
Ensemble Semi-supervised Framework for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Tissue segmentation of brain magnetic resonance images (MRI) is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used for image segmentation, following either a supervised or an unsupervised approach. Supervised segmentation methods achieve high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot exploit unlabeled data to train classifiers. Unsupervised segmentation methods, on the other hand, use no prior knowledge and yield a low level of performance. Semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, can achieve higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and which increase segmentation accuracy. We then use these improved classifiers, together with a graph-based semi-supervised classifier, as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers.
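A minimal numerical illustration of the semi-supervised idea, growing a small labeled set by pseudo-labeling the unlabeled samples a classifier is most confident about, is sketched below with a nearest-centroid classifier on synthetic one-dimensional "intensities". This is plain self-training, not the EFM, MCo_Training, or graph-based algorithms of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for two tissue-intensity classes: few labels, many unlabeled
X_lab = np.concatenate([rng.normal(0, 1, 20), rng.normal(5, 1, 20)])
y_lab = np.array([0] * 20 + [1] * 20)
X_unl = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])

# Self-training with a nearest-centroid classifier: repeatedly pseudo-label
# the most confidently classified unlabeled points and refit the centroids
for _ in range(5):
    c0, c1 = X_lab[y_lab == 0].mean(), X_lab[y_lab == 1].mean()
    margin = np.abs(np.abs(X_unl - c0) - np.abs(X_unl - c1))
    take = margin > np.quantile(margin, 0.5)      # confident half this round
    pseudo = (np.abs(X_unl[take] - c1) < np.abs(X_unl[take] - c0)).astype(int)
    X_lab = np.concatenate([X_lab, X_unl[take]])
    y_lab = np.concatenate([y_lab, pseudo])
    X_unl = X_unl[~take]

print(f"final centroids: {X_lab[y_lab==0].mean():.2f}, {X_lab[y_lab==1].mean():.2f}")
```

The ensemble framework of the paper goes further by combining the votes of several such semi-supervised learners, which guards against any single classifier propagating its own pseudo-labeling mistakes.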
NASA Astrophysics Data System (ADS)
Kotsuki, Shunji; Terasaki, Koji; Yashiro, Hisashi; Tomita, Hirofumi; Satoh, Masaki; Miyoshi, Takemasa
2017-04-01
This study aims to improve precipitation forecasts from numerical weather prediction (NWP) models through effective use of satellite-derived precipitation data. Kotsuki et al. (2016, JGR-A) successfully improved precipitation forecasts by assimilating the Japan Aerospace eXploration Agency (JAXA)'s Global Satellite Mapping of Precipitation (GSMaP) data into the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) at 112-km horizontal resolution. Kotsuki et al. mitigated the non-Gaussianity of the precipitation variables by applying a Gaussian transform to observed and forecasted precipitation based on the previous 30 days of precipitation data. This study extends that work and explores online estimation of model parameters using ensemble data assimilation. We choose two globally uniform parameters: the cloud-to-rain auto-conversion parameter of Berry's scheme for large-scale condensation, and the relative humidity threshold of the Arakawa-Schubert cumulus parameterization scheme. We perform online estimation of the two model parameters with an ensemble transform Kalman filter by assimilating the GSMaP precipitation data. The estimated parameters improve the analyzed and forecasted mixing ratio in the lower troposphere. Parameter estimation is therefore a useful technique for improving NWP models and their forecasts. This presentation will include the most recent progress up to the time of the symposium.
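Online parameter estimation by state augmentation can be demonstrated with an ensemble transform Kalman filter on a toy moisture budget, in which an "auto-conversion-like" rate is appended to the state and constrained by precipitation observations. The model, the parameter, and all values below are illustrative assumptions, not the NICAM configuration:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 40
a_true = 2.0e-3      # hypothetical "auto-conversion" rate (not the NICAM value)

def moisture_step(q, a, noise=0.0):
    # Toy moisture balance: constant supply minus parameterized rain-out
    return q + 0.5 - a * q * 50.0 + noise

q_true = 10.0
q = rng.normal(10.0, 0.5, N)
loga = rng.normal(np.log(1.0e-3), 0.7, N)   # log keeps the parameter positive
R = 0.02**2

for _ in range(30):
    q_true = moisture_step(q_true, a_true)
    q = moisture_step(q, np.exp(loga), rng.normal(0, 0.05, N))
    loga += rng.normal(0, 0.02, N)          # small random walk preserves spread
    y = a_true * q_true * 50.0 + rng.normal(0, 0.02)   # observed precipitation
    Hx = np.exp(loga) * q * 50.0

    # ETKF update in ensemble-weight space (deterministic square-root filter)
    d = Hx - Hx.mean()
    M = (N - 1) * np.eye(N) + np.outer(d, d) / R
    w, V = np.linalg.eigh(M)
    wa = (V @ np.diag(1 / w) @ V.T) @ d * (y - Hx.mean()) / R   # mean weights
    T = V @ np.diag(1 / np.sqrt(w)) @ V.T * np.sqrt(N - 1)      # transform
    A = np.vstack([q, loga])
    Am = A - A.mean(axis=1, keepdims=True)
    A = A.mean(axis=1, keepdims=True) + Am @ (wa[:, None] + T)
    q, loga = A

print(f"estimated rate: {np.exp(loga.mean()):.2e} (truth {a_true:.1e})")
```

Note that at equilibrium the parameterized rain-out simply balances the supply, so it is the transient precipitation signal that identifies the parameter; a similar reliance on informative regimes applies when estimating convection parameters from real precipitation data.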
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu
Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed to further reduce the number of false positives (FPs). In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, a type of contrast-enhancement filter with a deformable kernel shape. Subsequently, high-uptake regions detected in the PET images are merged with the regions detected in the CT images. FP candidates are eliminated using an ensemble method consisting of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step rule-based and the other based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected in a cancer-screening program. The sensitivity in detecting candidates at the initial stage was 97.2%, with 72.8 FPs/case. After applying the proposed FP-reduction method, the sensitivity was 90.1%, with 4.9 FPs/case. Conclusions: An improved FP-reduction scheme using the CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The ensemble FP-reduction method eliminated 93% of the FPs, and the CNN-based approach removed approximately half of the FPs remaining from the authors' previous study. 
These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.
NASA Astrophysics Data System (ADS)
Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.
2014-05-01
This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: a level-set-based fire propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the non-linearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially-uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based data assimilation algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically-generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. 
In particular, the re-analysis and forecast skills of data assimilation strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
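The surrogate idea at the core of PC-EnKF, replacing the expensive forward model with a cheap polynomial fit inside the filter, can be sketched as follows. A cubic polynomial stands in for a truncated polynomial chaos expansion, and the stand-in fire-spread model, the wind parameter, and all numerical values are illustrative assumptions, not FIREFLY or Rothermel quantities:

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(theta):
    """Stand-in for the expensive fire-spread model: front position after a
    fixed time as a nonlinear function of a wind parameter (illustrative)."""
    return 100.0 * np.sqrt(np.maximum(theta, 0)) + 5.0 * theta

# Build a cheap polynomial surrogate from a handful of "expensive" runs
train = np.linspace(1.0, 9.0, 7)
coef = np.polynomial.polynomial.polyfit(train, forward(train), deg=3)
surrogate = lambda th: np.polynomial.polynomial.polyval(th, coef)

# EnKF parameter estimation using the surrogate in place of the forward model
theta_true, R = 4.0, 4.0
y = forward(theta_true) + rng.normal(0, 2.0)       # observed front position
ens = rng.uniform(1.0, 9.0, 50)                    # prior parameter ensemble
Hx = surrogate(ens)                                # cheap predicted observations
K = np.cov(ens, Hx)[0, 1] / (np.var(Hx, ddof=1) + R)
ens = ens + K * (y + rng.normal(0, 2.0, 50) - Hx)  # stochastic EnKF update

print(f"posterior wind parameter: {ens.mean():.2f} +/- {ens.std():.2f}")
```

The computational saving is exactly the point made in the abstract: once the surrogate is trained from a small design of forward runs, every ensemble member is propagated through a polynomial evaluation rather than a PDE solve.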
NASA Astrophysics Data System (ADS)
Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.
2014-11-01
This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: an Eulerian front propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation (DA) algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the nonlinearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based DA algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach, as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. 
In particular, the re-analysis and forecast skills of DA strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
Final report on "Carbon Data Assimilation with a Coupled Ensemble Kalman Filter"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalnay, Eugenia; Kang, Ji-Sun; Fung, Inez
2014-07-23
We proposed (and accomplished) the development of an Ensemble Kalman Filter (EnKF) approach for the estimation of surface carbon fluxes as if they were parameters, augmenting the model with them. Our system is quite different from previous approaches, such as carbon flux inversions, 4D-Var, and EnKF with approximate background error covariance (Peters et al., 2008). We showed (using observing system simulation experiments, OSSEs) that these differences lead to a more accurate estimation of the evolving surface carbon fluxes at model grid-scale resolution. The main properties of the LETKF-C are: a) the carbon cycle LETKF is coupled with the simultaneous assimilation of the standard atmospheric variables, so that the ensemble wind transport of the CO2 provides an estimation of the carbon transport uncertainty; b) the use of an assimilation window (6 hr) much shorter than the months-long windows used in other methods. This avoids the inevitable "blurring" of the signal that takes place in long windows due to turbulent mixing, since the CO2 does not have time to mix before the next window. In this development we introduced new, advanced techniques that have since been adopted by the EnKF community (Kang 2009; Kang et al. 2011, 2012). These advances include "variable localization", which reduces sampling errors in the estimation of the forecast error covariance, more advanced adaptive multiplicative and additive inflations, and vertical localization based on the time scale of the processes. The main result has been obtained using the LETKF-C with all these advances, and assimilating simulated atmospheric CO2 observations from different observing systems (surface flask observations of CO2 but no surface carbon flux observations, total column CO2 from GOSAT/OCO-2, and upper-troposphere AIRS retrievals). After a spin-up of about one month, the LETKF-C succeeded in reconstructing the true evolving surface fluxes of carbon at model grid resolution. 
When applied to the CAM3.5 model, the LETKF gave very promising results as well, although only one month is available.
Final Technical Report [Carbon Data Assimilation with a Coupled Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalnay, Eugenia
2013-08-30
We proposed (and accomplished) the development of an Ensemble Kalman Filter (EnKF) approach for the estimation of surface carbon fluxes as if they were parameters, augmenting the model with them. Our system is quite different from previous approaches, such as carbon flux inversions, 4D-Var, and EnKF with approximate background error covariance (Peters et al., 2008). We showed (using observing system simulation experiments, OSSEs) that these differences lead to a more accurate estimation of the evolving surface carbon fluxes at model grid-scale resolution. The main properties of the LETKF-C are: a) the carbon cycle LETKF is coupled with the simultaneous assimilation of the standard atmospheric variables, so that the ensemble wind transport of the CO2 provides an estimation of the carbon transport uncertainty; b) the use of an assimilation window (6 hr) much shorter than the months-long windows used in other methods. This avoids the inevitable "blurring" of the signal that takes place in long windows due to turbulent mixing, since the CO2 does not have time to mix before the next window. In this development we introduced new, advanced techniques that have since been adopted by the EnKF community (Kang 2009; Kang et al. 2011, 2012). These advances include "variable localization", which reduces sampling errors in the estimation of the forecast error covariance, more advanced adaptive multiplicative and additive inflations, and vertical localization based on the time scale of the processes. The main result has been obtained using the LETKF-C with all these advances, and assimilating simulated atmospheric CO2 observations from different observing systems (surface flask observations of CO2 but no surface carbon flux observations, total column CO2 from GOSAT/OCO-2, and upper-troposphere AIRS retrievals). After a spin-up of about one month, the LETKF-C succeeded in reconstructing the true evolving surface fluxes of carbon at model grid resolution. 
When applied to the CAM3.5 model, the LETKF gave very promising results as well, although only one month is available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tribbia, Joseph
NCAR brought the latest version of the Community Earth System Model (version 1, CESM1) into the mix of models in the NMME effort. This new version uses our newest atmospheric model, CAM5, and produces a coupled climate and ENSO that are generally as good as or better than those of the Community Climate System Model version 4 (CCSM4). Compared to CCSM4, the new coupled model has a superior climate response with respect to low clouds in both the subtropical stratus regimes and the Arctic. However, CESM1 has been run to date using a prognostic aerosol model that more than doubles its computational cost. We are currently evaluating a version of the new model using prescribed aerosols and expect it will be ready for integrations in summer 2012. Because of this, NCAR has not been able to complete the hindcast integrations using the NCAR loosely coupled ensemble Kalman filter assimilation method, nor has it contributed to the current (Stage I) NMME operational utilization. The expectation is that this model will be included in the NMME in late 2012 or early 2013. The initialization method will utilize the ensemble Kalman filter assimilation methods developed at NCAR using the Data Assimilation Research Testbed (DART) in conjunction with Jeff Anderson’s team in CISL. This methodology has been used in our decadal prediction contributions to CMIP5. During the course of this project, NCAR has set up and performed all the needed hindcast and forecast simulations and provided the requested fields to our collaborators. In addition, NCAR researchers have participated fully in research themes (i) and (ii). Specifically, i) we have begun to evaluate and optimize our system in hindcast mode, focusing on the optimal number of ensemble members, methodologies to recalibrate individual dynamical models, and assessing our forecasts across multiple time scales, i.e., beyond two weeks; and ii) we have begun investigation of the role of different ocean initial conditions in seasonal forecasts. 
The completion of the calibration hindcasts for Seasonal to Interannual (SI) predictions and the maintenance of the data archive associated with the NCAR portion of this effort have been the responsibility of the Project Scientist I (Alicia Karspeck), who was partially supported by this project.
Balanced Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Hastermann, Gottfried; Reinhardt, Maria; Klein, Rupert; Reich, Sebastian
2017-04-01
The atmosphere's multi-scale structure poses several major challenges in numerical weather prediction. One of these arises in the context of data assimilation. The large-scale dynamics of the atmosphere are balanced in the sense that acoustic or rapid internal wave oscillations generally come with negligibly small amplitudes. If triggered artificially, however, through inappropriate initialization or by data assimilation, such oscillations can have a detrimental effect on forecast quality as they interact with the moist aerothermodynamics of the atmosphere. In the setting of sequential Bayesian data assimilation, we therefore investigate two different strategies to reduce these artificial oscillations induced by the analysis step. On the one hand, we develop a new modification of the local ensemble transform Kalman filter, which penalizes imbalances via a minimization problem. On the other hand, we modify the first steps of the subsequent forecast to push the ensemble members back toward the slow evolution. To this end, we propose the use of certain asymptotically consistent integrators that can blend seamlessly between the balanced and the unbalanced evolution models. We also present numerical results and the performance of the proposed methods for two nonlinear ordinary differential equation models in which the different scales can be identified clearly. The first is a Lorenz 96 model coupled with a wave equation; here the balance relation is linear and the imbalances are caused only by the localization of the filter. The second is the elastic double pendulum, where the balance relation itself is already highly nonlinear. In both cases the methods perform very well, significantly reducing the imbalances and thereby increasing the forecast quality of the slow variables.
3D soil water nowcasting using electromagnetic conductivity imaging and the ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Huang, Jingyi; McBratney, Alex B.; Minasny, Budiman; Triantafilis, John
2017-06-01
Mapping and immediate forecasting of soil water content (θ) and its movement can be challenging. Although inversion of apparent electrical conductivity (ECa) measured by electromagnetic induction has been used to calculate depth-specific electrical conductivity (σ), it is difficult to apply across a whole field. In this paper we use a calibration established along a transect across a 3.94-ha field with varying soil texture, together with an ensemble Kalman filter (EnKF), to monitor and nowcast the 3-dimensional θ dynamics on 16 separate days over a period of 38 days. The EnKF combined a physical model fitted with θ measured by soil moisture sensors and an artificial neural network model based on σ generated by quasi-3D inversions of DUALEM-421S ECa data. Results showed that the distribution of θ was controlled by soil texture, topography, and vegetation. Soil water dried fastest immediately after the initial irrigation event, and the drying rate decreased with time and soil depth, consistent with classical soil drying theory and experiments. The soil also dried fastest in the loamy and duplex soils present in the field, which was attributable to deep drainage and preferential flow. It was concluded that the EnKF approach can be used to improve irrigation efficiency by applying variable irrigation rates across the field. In addition, soil water status can be nowcasted across large spatial extents using this method together with weather forecast information, providing guidance to farmers for real-time irrigation management.
NASA Technical Reports Server (NTRS)
Jones, Thomas A.; Stensrud, David; Wicker, Louis; Minnis, Patrick; Palikonda, Rabindra
2015-01-01
Assimilating high-resolution radar reflectivity and radial velocity into convection-permitting numerical weather prediction models has proven to be an important tool for improving forecast skill of convection. The use of satellite data for this application is much less well understood, having only recently received significant attention. Since radar and satellite data provide independent information, combining these two sources of data in a robust manner potentially represents the future of high-resolution data assimilation. This research combines Geostationary Operational Environmental Satellite 13 (GOES-13) cloud water path (CWP) retrievals with Weather Surveillance Radar-1988 Doppler (WSR-88D) reflectivity and radial velocity to examine the impacts of assimilating each for a severe weather event occurring in Oklahoma on 24 May 2011. Data are assimilated into a 3-km model using an ensemble adjustment Kalman filter approach with 36 members over a 2-h assimilation window between 1800 and 2000 UTC. Forecasts are then generated for 90 min at 5-min intervals starting at 1930 and 2000 UTC. Results show that both satellite and radar data are able to initiate convection, but that assimilating both spins up a storm much faster. Assimilating CWP also performs well at suppressing spurious precipitation and cloud cover in the model as well as capturing the anvil characteristics of developed storms. Radar data are most effective at resolving the 3D characteristics of the core convection. Assimilating both satellite and radar data generally resulted in the best model analysis and most skillful forecast for this event.
NASA Technical Reports Server (NTRS)
Li, Bailing; Toll, David; Zhan, Xiwu; Cosgrove, Brian
2011-01-01
Model simulated soil moisture fields are often biased due to errors in input parameters and deficiencies in model physics. Satellite derived soil moisture estimates, if retrieved appropriately, represent the spatial mean of soil moisture in a footprint area, and can be used to reduce model bias (at locations near the surface) through data assimilation techniques. While assimilating the retrievals can reduce model bias, it can also destroy the mass balance enforced by the model governing equation because water is removed from or added to the soil by the assimilation algorithm. In addition, studies have shown that assimilation of surface observations can adversely impact soil moisture estimates in the lower soil layers due to imperfect model physics, even though the bias near the surface is decreased. In this study, an ensemble Kalman filter (EnKF) with a mass conservation updating scheme was developed to assimilate the actual value of Advanced Microwave Scanning Radiometer (AMSR-E) soil moisture retrievals to improve the mean of simulated soil moisture fields by the Noah land surface model. Assimilation results using the conventional and the mass conservation updating scheme in the Little Washita watershed of Oklahoma showed that, while both updating schemes reduced the bias in the shallow root zone, the mass conservation scheme provided better estimates in the deeper profile. The mass conservation scheme also yielded physically consistent estimates of fluxes and maintained the water budget. Impacts of model physics on the assimilation results are discussed.
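The tension between an EnKF increment and the column water budget can be shown in a few lines: a surface retrieval updates a three-layer moisture profile, and a conservation step then removes the net water added by assimilation. The uniform-removal correction below is a simple illustrative variant, not necessarily the exact updating scheme of the study, and all values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
N, n_lay = 100, 3
thick = np.array([0.1, 0.3, 0.6])        # layer thicknesses (m), summing to 1 m

# Prior ensemble of volumetric soil moisture: a common wetness anomaly makes
# the layers correlated, plus small independent layer noise
mean = np.array([0.30, 0.25, 0.22])
ens = mean + rng.normal(0, 0.02, (N, 1)) + rng.normal(0, 0.02, (N, n_lay))
y, R = 0.20, 0.01**2                     # surface retrieval and its error var

# Standard EnKF update with H = "observe the top layer"
Hx = ens[:, 0]
A = ens - ens.mean(axis=0)
K = A.T @ (Hx - Hx.mean()) / (N - 1) / (np.var(Hx, ddof=1) + R)
incr = np.outer(y + rng.normal(0, 0.01, N) - Hx, K)

# Conservation step: subtract the thickness-weighted net water change so that
# column storage is unchanged by the analysis
col_change = incr @ thick                # net water depth added per member (m)
incr -= np.outer(col_change / thick.sum(), np.ones(n_lay))
ens += incr

print("max |column storage change| (m):", float(np.abs(incr @ thick).max()))
```

The raw increment `incr @ thick` is the water the conventional update would silently create or destroy; forcing it to zero trades some of the surface correction for a closed water budget, which is the trade-off the study's mass-conserving scheme is designed to manage.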
NASA Astrophysics Data System (ADS)
Williams, John L.; Maxwell, Reed M.; Monache, Luca Delle
2013-12-01
Wind power is rapidly gaining prominence as a major source of renewable energy. Harnessing this promising energy source is challenging because of the chaotic and inherently intermittent nature of wind. Accurate forecasting tools are critical to support the integration of wind energy into power grids and to maximize its impact on renewable energy portfolios. We have adapted the Data Assimilation Research Testbed (DART), a community software facility which includes the ensemble Kalman filter (EnKF) algorithm, to expand our capability to use observational data to improve forecasts produced with a fully coupled hydrologic and atmospheric modeling system: the ParFlow (PF) hydrologic model and the Weather Research and Forecasting (WRF) mesoscale atmospheric model, coupled via mass and energy fluxes across the land surface and resulting in the PF.WRF model. Numerous studies have shown that soil moisture distribution and land surface vegetative processes profoundly influence atmospheric boundary layer development and weather processes on local and regional scales. We have used the PF.WRF model to explore the connections between the land surface and the atmosphere in terms of land surface energy flux partitioning and coupled variable fields including hydraulic conductivity, soil moisture, and wind speed, and demonstrated that reductions in uncertainty in these coupled fields realized through assimilation of soil moisture observations propagate through the hydrologic and atmospheric system. The sensitivities found in this study will enable further studies to optimize observation strategies to maximize the utility of the PF.WRF-DART forecasting system.
NASA Astrophysics Data System (ADS)
Dreano, Denis; Tsiaras, Kostas; Triantafyllou, George; Hoteit, Ibrahim
2017-07-01
Forecasting the state of large marine ecosystems is important for many economic and public health applications. However, advanced three-dimensional (3D) ecosystem models, such as the European Regional Seas Ecosystem Model (ERSEM), are computationally expensive, especially when implemented within an ensemble data assimilation system requiring several parallel integrations. As an alternative to 3D ecological forecasting systems, we propose to implement a set of regional one-dimensional (1D) water-column ecological models that run at a fraction of the computational cost. The 1D model domains are determined using a Gaussian mixture model (GMM)-based clustering method and satellite chlorophyll-a (Chl-a) data. Regionally averaged Chl-a data is assimilated into the 1D models using the singular evolutive interpolated Kalman (SEIK) filter. To laterally exchange information between subregions and improve the forecasting skills, we introduce a new correction step to the assimilation scheme, in which we assimilate a statistical forecast of future Chl-a observations based on information from neighbouring regions. We apply this approach to the Red Sea and show that the assimilative 1D ecological models can forecast surface Chl-a concentration with high accuracy. The statistical assimilation step further improves the forecasting skill by as much as 50%. This general approach of clustering large marine areas and running several interacting 1D ecological models is very flexible. It allows many combinations of clustering, filtering and regression techniques to be used and can be applied to build efficient forecasting systems in other large marine ecosystems.
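The GMM-based clustering step used to delimit the 1D subdomains can be illustrated with a small, self-contained EM fit. This is a generic two-component 1-D Gaussian mixture over mean Chl-a values, not the authors' implementation (which operates on satellite fields and presumably uses more components); the initialization and iteration count are illustrative choices.

```python
import numpy as np

def gmm_1d(x, n_iter=200):
    """Two-component 1-D Gaussian mixture fitted by EM; returns a hard
    cluster label per sample and the component means. A stand-in for the
    GMM clustering of satellite Chl-a used to delimit the 1D subregions."""
    mu = np.array([x.min(), x.max()])       # deterministic, well-separated init
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted moment updates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-9)
    return resp.argmax(axis=1), mu
```

Each sample's hard label then assigns its pixel to one subregion, within which an independent 1D water-column model would be run.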
DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.
Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R
2015-01-01
Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.
Semiparametric modeling: Correcting low-dimensional model error in parametric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013
2016-03-01
In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time-evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
The temporal representation of speech in a nonlinear model of the guinea pig cochlea
NASA Astrophysics Data System (ADS)
Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray
2004-12-01
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane, followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering makes only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.
2011-11-15
Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and thereby the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis.
Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8.9-fold speed-up of the processing (from 1336 to 150 s). Conclusions: Adaptive anisotropic filtering has the potential to substantially improve image quality and/or reduce the radiation dose required for obtaining 3D image data using cone beam CT.
Data Assimilation and Predictability Studies on Typhoon Sinlaku (2008) Using the WRF-LETKF System
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Kunii, M.
2011-12-01
Data assimilation and predictability studies on tropical cyclones, with a particular focus on intensity forecasts, are performed with the newly developed Local Ensemble Transform Kalman Filter (LETKF) system for the WRF model. Taking advantage of the intensive observations of the internationally coordinated T-PARC (THORPEX Pacific Asian Regional Campaign) project, we focus on Typhoon Sinlaku (2008), which intensified rapidly before making landfall in Taiwan. This study includes a number of data assimilation experiments, higher-resolution forecasts, and a sensitivity analysis which quantifies the impacts of observations on forecasts. This presentation includes the latest achievements up to the time of the conference.
NASA Astrophysics Data System (ADS)
Pu, Z.; Zhang, H.
2013-12-01
Near-surface atmospheric observations are the main conventional observations for weather forecasts. However, in modern numerical weather prediction, the use of surface observations, especially data over complex terrain, remains a unique challenge. There are fundamental difficulties in assimilating surface observations with three-dimensional variational data assimilation (3DVAR). In our earlier study (Pu et al. 2013) [1], a series of observing system simulation experiments was performed to compare the ensemble Kalman filter (EnKF) with 3DVAR in its ability to assimilate surface observations. Using the advanced research version of the Weather Research and Forecasting (WRF) model, the results demonstrate that the EnKF can overcome some fundamental limitations that 3DVAR has in assimilating surface observations over complex terrain. Specifically, through its flow-dependent background error term, the EnKF generally produces more realistic analysis increments over complex terrain. Over complex terrain, the EnKF clearly outperforms 3DVAR because it is more capable of handling surface data in the presence of terrain misrepresentation. With this presentation, we further examine the impact of EnKF data assimilation on the predictability of atmospheric conditions over complex terrain with the WRF model and observations obtained from the most recent field experiments of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program. The MATERHORN program provides comprehensive observations over mountainous regions, allowing the opportunity to study the predictability of atmospheric conditions over complex terrain in great detail. Specifically, during fall 2012 and spring 2013, comprehensive observations were collected of soil states, surface energy budgets, near-surface atmospheric conditions, and profiling measurements from multiple platforms (e.g., balloon, lidar, radiosondes) over Dugway Proving Ground (DPG), Utah.
With the near-surface observations and sounding data obtained during the MATERHORN fall 2012 field experiment, a month-long cycled EnKF analysis and forecast was produced with the WRF model and an advanced EnKF data assimilation system. Results are compared with the WRF near-real-time forecasts over the same month and with a set of analyses produced by 3DVAR data assimilation. The overall evaluation offers useful insights into the impacts of different data assimilation methods, surface and soil states, and terrain representation on the predictability of atmospheric conditions over mountainous terrain. Details will be presented. References: [1] Pu, Z., H. Zhang, and J. A. Anderson, 'Ensemble Kalman filter assimilation of near-surface observations over complex terrain: Comparison with 3DVAR for short-range forecasts.' Tellus A, vol. 65, 19620, 2013. http://dx.doi.org/10.3402/tellusa.v65i0.19620
New machine-learning algorithms for prediction of Parkinson's disease
NASA Astrophysics Data System (ADS)
Mandal, Indrajit; Sairam, N.
2014-03-01
This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), aimed at preventing delayed diagnosis and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods applied to PD include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as classifier and Haar wavelets as projection filter, is used for relevant feature selection and ranking. The highest accuracy obtained by linear logistic regression and sparse multinomial logistic regression is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All the experiments are conducted at 95% and 99% confidence levels and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system and experimentally shows best results with supportive statistical inference.
NASA Astrophysics Data System (ADS)
Krishnamoorthy, C.; Balaji, C.
2016-05-01
In the present study, the effect of horizontal and vertical localization scales on the assimilation of direct SAPHIR radiances is studied. An Artificial Neural Network (ANN) has been used as a surrogate for the forward radiative calculations. The training input dataset for the ANN consists of vertical layers of atmospheric pressure, temperature, relative humidity and other hydrometeor profiles, with the 6-channel Brightness Temperatures (BTs) as output. The best neural network architecture was arrived at through a neuron independence study. Since vertical localization of radiance data requires weighting functions, an ANN has also been trained for this purpose. The radiances were ingested into the NWP model using the Ensemble Kalman Filter (EnKF) technique. Horizontal localization is handled by using a Gaussian localization function centered on the observed coordinates. Similarly, vertical localization is accomplished by assuming a function which depends on the weighting function of the channel to be assimilated. The effect of both horizontal and vertical localization has been studied in terms of the ensemble spread in precipitation. Additionally, improvements in the 24-h forecast from assimilation are also reported.
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
The Value of GRACE Data in Improving, Assessing and Evaluating Land Surface and Climate Models
NASA Astrophysics Data System (ADS)
Yang, Z.
2011-12-01
I will review how the Gravity Recovery and Climate Experiment (GRACE) satellite measurements have improved land surface models that are developed for weather, climate, and hydrological studies. GRACE-derived terrestrial water storage (TWS) changes have been successfully used to assess and evaluate the improved representations of land-surface hydrological processes such as groundwater-soil moisture interaction, frozen soil and infiltration, and the topographic control on runoff production, as evident in the simulations from the latest Noah-MP, the Community Land Model, and the Community Climate System Model. GRACE data sets have made it possible to estimate key terrestrial water storage components (snow mass, surface water, groundwater or water table depth), biomass, and surface water fluxes (evapotranspiration, solid precipitation, melt of snow/ice). Many of the examples will draw from my Land, Environment and Atmosphere Dynamics group's work on land surface model developments, snow mass retrieval, and multi-sensor snow data assimilation using the ensemble Kalman filter and the ensemble Kalman smoother. Finally, I will briefly outline some future directions in using GRACE in land surface modeling.
Fibrous filter efficiency and pressure drop in the viscous-inertial transition flow regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Andres L.; Brockmann, John E.; Dellinger, Jennifer Gwynne
2011-10-01
Fibrous filter pressure drop and aerosol collection efficiency were measured at low air pressures (0.2 to 0.8 atm) and high face velocities (5 to 20 meters per second) to give fiber Reynolds numbers in the viscous-inertial transition flow regime (1 to 16). In this regime, contemporary filtration theory based on Kuwabara's viscous flow through an ensemble of fibers under-predicts single fiber impaction by several orders of magnitude. Streamline curvature increases substantially as inertial forces become dominant. Dimensionless pressure drop measurements followed the viscous-inertial theory of Robinson and Franklin (1972) rather than Darcy's linear pressure-velocity relationship. Sodium chloride and iron nano-agglomerate test aerosols were used to evaluate the effects of particle density and shape factor. Total filter efficiency collapsed when plotted against the particle Stokes and fiber Reynolds numbers. Efficiencies were then fitted with an impactor-type equation in which the cutpoint Stokes number and a steepness parameter described the data well in the sharply increasing portion of the curve (20% to 80% efficiency). The cutpoint Stokes number was a linearly decreasing function of fiber Reynolds number. Single fiber efficiencies were calculated from total filter efficiencies and compared to contemporary viscous flow impaction theory (Stechkina et al. 1969) and numerical simulations from the literature. Existing theories under-predicted measured single fiber efficiencies, although the assumption of uniform flow conditions for each successive layer of fibers is questionable; the common exponential relationship between single fiber efficiency and total filter efficiency may not be appropriate in this regime.
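The impactor-type efficiency fit can be illustrated with a generic sigmoidal form in the Stokes number. The cutpoint `stk50` and `steepness` parameters mirror the fit described above, but the authors' exact functional form is not given in the abstract, so this particular equation is an assumption.

```python
import numpy as np

def impactor_efficiency(stk, stk50, steepness):
    """Generic impactor-type collection efficiency curve: 50% efficiency
    at the cutpoint Stokes number stk50, with sharpness set by
    `steepness`. A sigmoidal stand-in; the authors' fitted form may differ."""
    return 1.0 / (1.0 + (stk50 / np.asarray(stk)) ** steepness)

stk = np.logspace(-1, 1, 5)                         # Stokes numbers 0.1 .. 10
eff = impactor_efficiency(stk, stk50=1.0, steepness=4.0)
```

The abstract's finding that the cutpoint Stokes number decreases linearly with fiber Reynolds number would enter here as `stk50(Re_f)`.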
NASA Astrophysics Data System (ADS)
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increased handling capacity, anticipation of avoidance manoeuvres, and increased awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts on a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new radar image a set of ensemble precipitation fields is collected by using different parameter sets, such as pattern match size, different time steps, filter methods, and an implementation of the history of tracking vectors with plausibility checks. This method accounts for the uncertainty in rain field displacement and for different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented in an online verification scheme which calculates the optimized forecasts for different time steps and different areas using different extrapolation ensemble members. To obtain information about the quality and reliability of the extrapolation process, additional data quality information (e.g. shielding in Alpine areas) is extrapolated and combined into an extrapolation-quality index. Subsequently, the probability and quality information of the forecast ensemble is available, and flexible blending into a numerical prediction model for each subarea is possible. Simultaneously with the automatic processing, the ensemble nowcasting product is visualized in a new, innovative way which combines the intensity, probability and quality information for different subareas in one forecast image.
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation applications. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging of the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
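The SIR particle filter used as a baseline above can be sketched compactly. This is a generic single filter cycle with a hypothetical Gaussian observation likelihood and systematic resampling, not the stochastic cloud model or the nudging variant discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, obs, obs_err, propagate):
    """One cycle of a Sequential Importance Resampling (SIR) particle
    filter: propagate each particle, weight it by a (here: hypothetical
    Gaussian) observation likelihood, then systematically resample to
    counter weight degeneracy."""
    particles = propagate(particles)
    w = np.exp(-0.5 * ((particles - obs) / obs_err) ** 2)  # likelihood weights
    w /= w.sum()
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n          # systematic resampling
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx]
```

The efficient particle filter of van Leeuwen modifies the proposal (e.g. by nudging particles toward the observations) so that fewer particles are wasted before this resampling step.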
Recognition of emotions using multimodal physiological signals and an ensemble deep learning model.
Yin, Zhong; Zhao, Mengyuan; Wang, Yongxiong; Yang, Jingdong; Zhang, Jianhua
2017-03-01
Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers may suffer from a lack of expertise in determining model structure and from oversimplification in combining multimodal feature abstractions. In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified based on a physiological-data-driven approach. Each SAE consists of three hidden layers that filter unwanted noise from the physiological features and derive stable feature representations. An additional deep model is used to form the SAE ensembles. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Ensemble Semi-supervised Framework for Brain Magnetic Resonance Imaging Tissue Segmentation
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-01-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation, with both supervised and unsupervised approaches. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain. Moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. However, semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and which increase segmentation accuracy. Afterward, we use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers. PMID:24098863
Decoding Trajectories from Posterior Parietal Cortex Ensembles
Mulliken, Grant H.; Musallam, Sam; Andersen, Richard A.
2009-01-01
High-level cognitive signals in the posterior parietal cortex (PPC) have previously been used to decode the intended endpoint of a reach, providing the first evidence that PPC can be used for direct control of a neural prosthesis (Musallam et al., 2004). Here we expand on this work by showing that PPC neural activity can be harnessed not only to estimate the endpoint but also to continuously control the trajectory of an end effector. Specifically, we trained two monkeys to use a joystick to guide a cursor on a computer screen to peripheral target locations while maintaining central ocular fixation. We found that we could accurately reconstruct the trajectory of the cursor using a relatively small ensemble of simultaneously recorded PPC neurons. Using a goal-based Kalman filter that incorporates target information into the state-space, we showed that the decoded estimate of cursor position could be significantly improved. Finally, we tested whether we could decode trajectories during closed-loop brain control sessions, in which the real-time position of the cursor was determined solely by a monkey's neural activity in PPC. The monkey learned to perform brain control trajectories at an 80% success rate (for 8 targets) after just 4-5 sessions. This improvement in behavioral performance was accompanied by a corresponding enhancement in neural tuning properties (i.e., increased tuning depth and coverage of encoding parameter space) as well as an increase in off-line decoding performance of the PPC ensemble. PMID:19036985
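The trajectory-decoding Kalman filter can be sketched for a single cursor coordinate. This is a generic constant-velocity filter over noisy position estimates standing in for the neurally derived observations; the paper's goal-based variant additionally augments the state with the reach target, which is not reproduced here.

```python
import numpy as np

def kalman_decode(zs, dt=0.1, q=0.01, r=0.25):
    """Constant-velocity Kalman filter decoding a 1-D cursor trajectory
    from noisy position observations (a stand-in for neurally decoded
    estimates; the goal-based variant would augment the state with the
    target position). All parameter values here are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])           # position-velocity dynamics
    H = np.array([[1.0, 0.0]])                      # observe position only
    Q = q * np.eye(2)                               # process noise covariance
    R = np.array([[r]])                             # observation noise variance
    x, P = np.zeros(2), np.eye(2)
    out = []
    for z in zs:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)         # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

Incorporating the target into the state vector, as the paper does, lets the filter pull the position estimate toward the inferred goal throughout the reach.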
NASA Astrophysics Data System (ADS)
Lange, Heiner; Craig, George
2014-05-01
This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale data assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect-model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in the position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for analysis weights, a larger localization radius and a larger observation error, which together allow a broadening of the Gaussian error statistics. Three-hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be advantageous over those of the coarse scheme during the first 1-2 hours with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and by verification methods for rain fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed down to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhances the predictability of storm occurrence, analyses from a coarser scheme might suffice.
Handling the unknown soil hydraulic parameters in data assimilation for unsaturated flow problems
NASA Astrophysics Data System (ADS)
Lange, Natascha; Erdal, Daniel; Neuweiler, Insa
2017-04-01
Model predictions of flow in the unsaturated zone require the soil hydraulic parameters. However, these parameters cannot be determined easily in applications, in particular if observations are indirect and cover only a small range of possible states. Correlation of parameters, or their correlation within the range of states that are observed, is a problem, as different parameter combinations may reproduce approximately the same measured water content. In field campaigns this problem can be mitigated by adding more measurement devices. Often, observation networks are designed to feed models for long-term prediction purposes (e.g. for weather forecasting). Data assimilation methods, like the ensemble Kalman filter (Evensen, 1994), are a popular way of making predictions with such observations. These methods can be used for parameter estimation if the unknown parameters are included in the state vector and updated along with the model states. Given the difficulties related to the estimation of soil hydraulic parameters in general, it is questionable, though, whether these methods can really be used for parameter estimation under natural conditions. Therefore, we investigate the ability of the ensemble Kalman filter to estimate the soil hydraulic parameters. We use synthetic identical-twin experiments to guarantee full knowledge of the model and the true parameters. We use the van Genuchten model to describe the soil water retention and relative permeability functions. This model is unfortunately prone to the above-mentioned pseudo-correlations of parameters. Therefore, we also test the simpler Russo-Gardner model, which is less affected by this problem, in our experiments. The total number of unknown parameters is varied by considering different layers of soil. In addition, we study the influence of the parameter updates on the water content predictions. We test different iterative filter approaches and compare different observation strategies for parameter identification.
Considering heterogeneous soils, we discuss the representativeness of different observation types to be used for the assimilation. G. Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5):10143-10162, 1994
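The state-augmentation idea described above (appending the unknown parameters to the state vector so the filter updates them alongside the model states) can be sketched as follows. This is a generic illustration under stated assumptions, not the authors' code: the linear observation operator, the toy forecast model in the usage note, and all names are invented for the example.

```python
import numpy as np

def enkf_augmented_update(ensemble, obs, obs_var, H, rng=None):
    """One stochastic EnKF analysis step on an augmented vector
    [parameters; model states] (perturbed-observation form).

    ensemble : (n_ens, n_aug) array, one augmented state per row
    obs      : (n_obs,) observation vector
    obs_var  : scalar observation-error variance
    H        : (n_obs, n_aug) linear observation operator
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n_ens, n_obs = ensemble.shape[0], H.shape[0]
    X = ensemble - ensemble.mean(axis=0)           # ensemble anomalies
    HX = ensemble @ H.T                            # predicted observations
    HA = HX - HX.mean(axis=0)
    PHt = X.T @ HA / (n_ens - 1)                   # cross-covariance P H^T
    S = HA.T @ HA / (n_ens - 1) + obs_var * np.eye(n_obs)
    K = PHt @ np.linalg.inv(S)                     # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (n_ens, n_obs))
    return ensemble + (perturbed - HX) @ K.T
```

Because only the state part is observed (H has zeros in the parameter columns), the parameters are corrected purely through their ensemble cross-covariance with the observed states, which is exactly why parameter correlations of the kind discussed above can stall the estimation.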
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance (which can lead to filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and can thus detect possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter errors. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality.
Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
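The kernel smoothing step that controls parameter variance, as characteristic (2) above describes, is commonly implemented as a Liu-West shrinkage: each parameter sample is pulled toward the ensemble mean and re-perturbed with a matched kernel so the ensemble mean and variance are preserved while sudden jumps and variance collapse are both damped. A minimal sketch, assuming this standard form (the abstract does not give the exact formula):

```python
import numpy as np

def kernel_smooth_parameters(params, h=0.1, rng=None):
    """Kernel smoothing of a parameter ensemble (Liu-West style).

    params : (n_ens, n_par) array of parameter samples
    h      : kernel bandwidth; the shrinkage factor a = sqrt(1 - h^2)
             is chosen so the ensemble variance is preserved.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    a = np.sqrt(1.0 - h ** 2)                       # shrinkage factor
    mean = params.mean(axis=0)
    var = params.var(axis=0)
    shrunk = a * params + (1.0 - a) * mean          # pull toward the mean
    # kernel perturbation restores exactly the variance removed by shrinkage
    noise = rng.normal(0.0, 1.0, params.shape) * h * np.sqrt(var)
    return shrunk + noise
```

A smaller h keeps parameter trajectories smoother across assimilation cycles; a larger h lets the filter track faster time variation of the parameters.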
The Improvement of Spatial-Temporal PM2.5 Resolution in Taiwan by Using Data Assimilation Method
NASA Astrophysics Data System (ADS)
Lin, Yong-Qing; Lin, Yuan-Chien
2017-04-01
Forecasting air pollutant concentrations, e.g., the concentration of PM2.5, is of great significance for protecting human health and the environment. Accurate prediction of PM2.5 concentrations is limited by the number and data quality of air quality monitoring stations. The spatial and temporal variations of PM2.5 concentrations are measured by 76 National Air Quality Monitoring Stations (built by the TW-EPA) in Taiwan. The National Air Quality Monitoring Stations are costly and scarce because of their highly precise instruments and their size; therefore, many places are still out of the range of these stations. Recently, an enormous number of portable air quality sensors called "AirBox", developed jointly by the Taiwan government and a private company, have been deployed. By virtue of their low price and portability, AirBoxes can provide higher-resolution space-time PM2.5 measurements. However, the spatiotemporal distribution and data quality differ between the AirBox sensors and the National Air Quality Monitoring Stations. To integrate the heterogeneous PM2.5 data, a data assimilation method should be applied before further analysis. In this study, we propose a data assimilation method based on the Ensemble Kalman Filter (EnKF), a variant of the classic Kalman Filter, which can combine additional heterogeneous data from different sources during modeling to improve the estimation of spatial-temporal PM2.5 concentrations. The assimilation procedure exploits the advantages of the two kinds of heterogeneous data and merges them to produce the final estimate. The results show that combining AirBox PM2.5 data as additional information in our EnKF-based model yields better estimates of spatial-temporal PM2.5 concentration and improves its space-time resolution.
Under the approach proposed in this study, higher spatial-temporal resolution could provide very useful information for better spatial-temporal data analysis and further environmental management, such as air pollution source localization and micro-scale air pollution analysis. Keywords: PM2.5, Data Assimilation, Ensemble Kalman Filter, Air Quality
NASA Astrophysics Data System (ADS)
Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze
2014-11-01
In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on the individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
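The Bayesian model averaging step, weighting each ecosystem model by how close its forecast CO2 concentration is to observations, can be sketched as below. The Gaussian-likelihood weighting is an assumption for illustration (the abstract only says weights follow forecast closeness), and all names are invented:

```python
import numpy as np

def bma_weights(forecasts, obs, obs_var):
    """Weight each model by the Gaussian likelihood of its forecast CO2
    concentrations given the observations (closer forecast -> larger weight).

    forecasts : (n_models, n_obs) forecast concentrations per model
    obs       : (n_obs,) observed concentrations
    obs_var   : scalar observation-error variance
    """
    ll = -0.5 * np.sum((forecasts - obs) ** 2, axis=1) / obs_var
    w = np.exp(ll - ll.max())          # subtract max for numerical stability
    return w / w.sum()

def bma_average(fluxes, weights):
    """Weighted average of the per-model optimized fluxes."""
    return np.tensordot(weights, fluxes, axes=1)
```

Because the weights are recomputed per region and assimilation window, they vary in time and space, which is the behaviour the study reports.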
NASA Astrophysics Data System (ADS)
Neal, J. C.; Wood, M.; Bermúdez, M.; Hostache, R.; Freer, J. E.; Bates, P. D.; Coxon, G.
2017-12-01
Remote sensing of flood inundation extent has long been a potential source of data for constraining and correcting simulations of floodplain inundation. Hydrodynamic models and the computing resources to run them have developed to the extent that simulation of flood inundation in two-dimensional space is now feasible over large river basins in near real-time. However, despite substantial evidence that there is useful information content within inundation extent data, even from low-resolution SAR such as that gathered by Envisat ASAR in wide swath mode, making use of the information in a data assimilation system has proved difficult. Here we review recent applications of the Ensemble Kalman Filter (EnKF) and Particle Filter for assimilating SAR data, with a focus on the River Severn, UK, and compare these with complementary research that has examined internal error sources and boundary condition errors using detailed terrestrial data that are not available in most locations. Previous applications of the EnKF to this reach have focused on upstream boundary conditions as the source of flow error; however, this description of errors is too simplistic for the simulation of summer flood events, where localised intense rainfall can be substantial. Therefore, we evaluate the introduction of uncertain lateral inflows to the ensemble. A further limitation of the existing EnKF-based methods is the need to convert flood extent to water surface elevations by intersecting the shoreline location with a high-quality digital elevation model (e.g., LiDAR). To simplify this data processing step, we evaluate a method to directly assimilate inundation extent as an EnKF model state rather than assimilating water heights, potentially allowing the scheme to be used where high-quality terrain data are sparse.
Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.
Mansour, Omar; Poepping, Tamie L; Lacefield, James C
2016-07-21
Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
NASA Astrophysics Data System (ADS)
Flores, Alejandro N.; Bras, Rafael L.; Entekhabi, Dara
2012-08-01
Soil moisture information is critical for applications like landslide susceptibility analysis and military trafficability assessment. Existing technologies cannot observe soil moisture at spatial scales of hillslopes (e.g., 10^0 to 10^2 m) and over large areas (e.g., 10^2 to 10^5 km^2) with sufficiently high temporal coverage (e.g., days). Physics-based hydrologic models can simulate soil moisture at the necessary spatial and temporal scales, albeit with error. We develop and test a data assimilation framework based on the ensemble Kalman filter for constraining uncertain simulated high-resolution soil moisture fields with anticipated remote sensing products, specifically NASA's Soil Moisture Active-Passive (SMAP) mission, which will provide global L-band microwave observations approximately every 2-3 days. The framework directly assimilates SMAP synthetic 3 km radar backscatter observations to update hillslope-scale bare soil moisture estimates from a physics-based model. Downscaling from 3 km observations to hillslope scales is achieved through the data assimilation algorithm. Assimilation reduces bias in near-surface soil moisture (e.g., top 10 cm) by approximately 0.05 m^3/m^3 and expected root-mean-square errors by at least 60% in much of the watershed, relative to an open loop simulation. However, near-surface moisture estimates in channel and valley bottoms do not improve, and estimates of profile-integrated moisture throughout the watershed do not substantially improve. We discuss the implications of this work, focusing on ongoing efforts to improve soil moisture estimation in the entire soil profile through joint assimilation of other satellite (e.g., vegetation) and in situ soil moisture measurements.
NASA Astrophysics Data System (ADS)
Blyverket, J.; Hamer, P.; Bertino, L.; Lahoz, W. A.
2017-12-01
The European Space Agency Climate Change Initiative for soil moisture (ESA CCI SM) was initiated in 2012 for a period of six years; the objective for this period was to produce the most complete and consistent global soil moisture data record based on both active and passive sensors. The ESA CCI SM products consist of three surface soil moisture datasets: the ACTIVE product and the PASSIVE product were created by fusing scatterometer and radiometer soil moisture data, respectively, while the COMBINED product is a blend of the former two datasets. In this study we assimilate both the ACTIVE and PASSIVE products globally at a 25 km spatial resolution. The different satellite platforms have different overpass times; an observation is mapped to 00:00, 06:00, 12:00, or 18:00 if it falls within a 3-hour window centred at these times. We use the SURFEX land surface model with the ISBA diffusion scheme for the soil hydrology. For the assimilation routine we apply the Ensemble Transform Kalman Filter (ETKF). The land surface model is driven by perturbed MERRA-2 atmospheric forcing data, which have a temporal resolution of one hour and are mapped to the SURFEX model grid. Bias between the land surface model and the ESA CCI product is removed by cumulative distribution function (CDF) matching. This work is a step towards creating a global root zone soil moisture product from the most comprehensive satellite surface soil moisture product available. As a first step we consider the period from 2010 to 2016. This allows for comparison against other global root zone soil moisture products (e.g., SMAP Level 4, which is independent of the ESA CCI SM product).
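The CDF-matching bias correction mentioned above is essentially quantile mapping: each satellite value is assigned its quantile within the satellite climatology, and that quantile is read off the model climatology. A minimal empirical sketch (an illustration of the standard technique, not the SURFEX implementation; names are invented):

```python
import numpy as np

def cdf_match(satellite, sat_clim, model_clim):
    """Rescale satellite soil moisture to the model climatology by
    matching empirical cumulative distribution functions.

    satellite  : values to rescale
    sat_clim   : historical satellite values (defines the source CDF)
    model_clim : historical model values (defines the target CDF)
    """
    sat_sorted = np.sort(sat_clim)
    # empirical quantile of each satellite value in its own climatology
    ranks = np.searchsorted(sat_sorted, satellite, side="right") / len(sat_sorted)
    # read that quantile off the model climatology
    return np.quantile(model_clim, np.clip(ranks, 0.0, 1.0))
```

After this rescaling the innovation (observation minus model forecast) is approximately unbiased, which is the property the ETKF update relies on.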
NASA Astrophysics Data System (ADS)
Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.
2018-06-01
Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. 
We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
NASA Astrophysics Data System (ADS)
Xu, Jianhui; Shu, Hong
2014-09-01
This study assesses the analysis performance of assimilating the Moderate Resolution Imaging Spectroradiometer (MODIS)-based albedo and snow cover fraction (SCF) separately or jointly into the physically based Common Land Model (CoLM). A direct insertion method (DI) is proposed to assimilate the black-sky and white-sky albedos into the CoLM. The MODIS-based albedo is calculated with the MODIS bidirectional reflectance distribution function (BRDF) model parameters product (MCD43B1) and the solar zenith angle as estimated in the CoLM for each time step. Meanwhile, the MODIS SCF (MOD10A1) is assimilated into the CoLM using the deterministic ensemble Kalman filter (DEnKF) method. A new DEnKF-albedo assimilation scheme integrating the DI and DEnKF schemes is then proposed. Our assimilation results are validated against in situ snow depth observations from November 2008 to March 2009 at five sites in the Altay region of China. The experimental results show that all three data assimilation schemes can improve snow depth simulations. Overall, the DEnKF-albedo assimilation shows the best analysis performance, as it significantly reduces the bias and root-mean-square error (RMSE) during the snow accumulation and ablation periods at all sites except the Fuyun site. The SCF assimilation via DEnKF produces better results than the albedo assimilation via DI, implying that the albedo assimilation, which updates the snow depth state variable only indirectly, is less efficient than the direct SCF assimilation. For the Fuyun site, the DEnKF-albedo scheme tends to overestimate the snow depth accumulation with the maximum bias and RMSE values because of the large positive innovation (observation minus forecast).
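The deterministic EnKF (DEnKF) used here differs from the stochastic EnKF in that it avoids perturbed observations: the ensemble mean is updated with the full Kalman gain, while the anomalies are updated with half the gain, which approximates the correct analysis spread. A sketch of this standard update (Sakov and Oke's formulation, not the authors' CoLM code; names are invented):

```python
import numpy as np

def denkf_update(ensemble, obs, obs_var, H):
    """Deterministic EnKF analysis step: full gain on the mean,
    half gain on the anomalies, no observation perturbations.

    ensemble : (n_ens, n_state) array, one state per row
    obs      : (n_obs,) observation vector
    obs_var  : scalar observation-error variance
    H        : (n_obs, n_state) linear observation operator
    """
    n = ensemble.shape[0]
    mean = ensemble.mean(axis=0)
    A = ensemble - mean                            # anomalies
    HA = A @ H.T
    PHt = A.T @ HA / (n - 1)
    S = HA.T @ HA / (n - 1) + obs_var * np.eye(H.shape[0])
    K = PHt @ np.linalg.inv(S)
    mean_a = mean + K @ (obs - H @ mean)           # full gain on the mean
    A_a = A - 0.5 * (HA @ K.T)                     # half gain on anomalies
    return mean_a + A_a
```

The deterministic anomaly update removes the sampling noise that perturbed observations would add, which matters for small ensembles.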
NASA Astrophysics Data System (ADS)
Williams, J. L.; Maxwell, R. M.; Delle Monache, L.
2012-12-01
Wind power is rapidly gaining prominence as a major source of renewable energy. Harnessing this promising energy source is challenging because of the chaotic nature of wind and its propensity to change speed and direction over short time scales. Accurate forecasting tools are critical to support the integration of wind energy into power grids and to maximize its impact on renewable energy portfolios. Numerous studies have shown that soil moisture distribution and land surface vegetative processes profoundly influence atmospheric boundary layer development and weather processes on local and regional scales. Using the PF.WRF model, a fully-coupled hydrologic and atmospheric model employing the ParFlow hydrologic model with the Weather Research and Forecasting model coupled via mass and energy fluxes across the land surface, we have explored the connections between the land surface and the atmosphere in terms of land surface energy flux partitioning and coupled variable fields including hydraulic conductivity, soil moisture and wind speed, and demonstrated that reductions in uncertainty in these coupled fields propagate through the hydrologic and atmospheric system. We have adapted the Data Assimilation Research Testbed (DART), an implementation of the robust Ensemble Kalman Filter data assimilation algorithm, to expand our capability to nudge forecasts produced with the PF.WRF model using observational data. Using a semi-idealized simulation domain, we examine the effects of assimilating observations of variables such as wind speed and temperature collected in the atmosphere, and land surface and subsurface observations such as soil moisture on the quality of forecast outputs. The sensitivities we find in this study will enable further studies to optimize observation collection to maximize the utility of the PF.WRF-DART forecasting system.
Investigation of flow and transport processes at the MADE site using ensemble Kalman filter
Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao
2008-01-01
In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data over time. The classic advection-dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), the longitudinal dispersivity in the AD model, and the mass transfer rate coefficient and mobile porosity ratio in the DDMT model are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of the media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the concentrations simulated by the DDMT model show a better agreement with the observed values. The root mean square (RMS) difference between the observed and simulated tritium plumes at 328 days is 0.77 for the AD model and 0.45 for the DDMT model. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert K values that consistently reproduce the observed tritium concentrations through all times. © 2008 Elsevier Ltd. All rights reserved.
Time Dependent Tomography of the Solar Corona in Three Spatial Dimensions
NASA Astrophysics Data System (ADS)
Butala, M. D.; Frazin, R. A.; Kamalabadi, F.
2006-12-01
The combination of the soon to be launched STEREO mission with SOHO will provide scientists with three simultaneous space-borne views of the Sun. The increase in available measurements will reduce the data acquisition time necessary to obtain 3D coronal electron density (N_e) estimates from coronagraph images using a technique called solar rotational tomography (SRT). However, the data acquisition period will still be long enough for the corona to dynamically evolve, requiring time dependent solar tomography. The Kalman filter (KF) would seem to be an ideal computational method for time dependent SRT. Unfortunately, the KF scales poorly with problem size and is, as a result, inapplicable. A Monte Carlo approximation to the KF called the localized ensemble Kalman filter was developed for massive applications and has the promise of making the time dependent estimation of the 3D coronal N_e possible. We present simulations showing that this method will make time dependent tomography in three spatial dimensions computationally feasible.
NASA Astrophysics Data System (ADS)
Pan, M.; Wood, E. F.
2004-05-01
This study explores a method to estimate various components of the water cycle (ET, runoff, land storage, etc.) based on a number of different information sources, including both observations and observation-enhanced model simulations. Different from existing data assimilation methods, this constrained Kalman filtering approach keeps the water budget perfectly closed while optimally updating the states of the underlying model (the VIC model) using observations. Assimilating different data sources in this way has several advantages: (1) a physical model is included, making the estimated time series smooth, gap-free, and more physically consistent; (2) uncertainties in the model and observations are properly addressed; (3) the model is constrained by observations, thus reducing model biases; (4) the water balance is preserved throughout the assimilation. Experiments are carried out in the Southern Great Plains region, where the necessary observations have been collected. This method may also be implemented in other applications with physical constraints (e.g., energy cycles) and at different scales.
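One common way to keep a linear budget closed after a Kalman update is to project the updated state back onto the constraint surface, i.e., apply the smallest adjustment that restores A x = b. The sketch below illustrates that projection; it is a generic technique consistent with the description above, not necessarily the authors' exact formulation, and the variable layout is invented:

```python
import numpy as np

def constrain_budget(state, A, b):
    """Minimal-norm projection of a state onto the constraint A x = b
    (e.g., precipitation = ET + runoff + storage change)."""
    A = np.atleast_2d(A)
    residual = A @ state - b                      # budget imbalance
    # least-squares correction: x' = x - A^T (A A^T)^{-1} (A x - b)
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return state - correction
```

Applying this after every analysis step is what keeps the water budget exactly closed along the assimilation, rather than only approximately closed as in an unconstrained filter.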
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
Automatic classification of endoscopic images for premalignant conditions of the esophagus
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Gambaretto, Gloria; Grisan, Enrico
2016-03-01
Barrett's esophagus (BE) is a precancerous complication of gastroesophageal reflux disease in which the normal stratified squamous epithelium lining the esophagus is replaced by intestinal metaplastic columnar epithelium. Repeated endoscopies and multiple biopsies are often necessary to establish the presence of intestinal metaplasia. Narrow Band Imaging (NBI) is an imaging technique commonly used with endoscopies that enhances the contrast of the vascular pattern on the mucosa. We present a computer-based method for the automatic normal/metaplastic classification of endoscopic NBI images. Superpixel segmentation is used to identify and cluster pixels belonging to uniform regions. From each uniform clustered region of pixels, eight features maximizing the differences between normal and metaplastic epithelium are extracted for the classification step. For each superpixel, the three mean intensities of each color channel are first selected as features. Three additional features are the mean intensities for each superpixel after separately applying to the red-channel image three different morphological filters (top-hat filtering, entropy filtering, and range filtering). The last two features require the computation of the Grey-Level Co-Occurrence Matrix (GLCM) and reflect the contrast and homogeneity of each superpixel. The classification step is performed using an ensemble of 50 classification trees, with a 10-fold cross-validation scheme, training the classifier at each step on a random 70% of the images and testing on the remaining 30% of the dataset. Sensitivity and specificity are 79.2% and 87.3%, respectively, with an overall accuracy of 83.9%.
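The two GLCM-derived features mentioned above can be computed per region as follows. This is a minimal sketch of the standard contrast and homogeneity statistics over a horizontal-neighbour co-occurrence matrix; the quantization level and offset are assumptions, as the abstract does not specify them:

```python
import numpy as np

def glcm_features(patch, levels=8):
    """Contrast and homogeneity from a grey-level co-occurrence matrix
    built over horizontal pixel pairs of a patch with values in [0, 1]."""
    q = np.clip((patch * levels).astype(int), 0, levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                     # count co-occurring grey pairs
    glcm /= glcm.sum()                      # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)          # local intensity variation
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))  # closeness to diagonal
    return contrast, homogeneity
```

A uniform region yields zero contrast and maximal homogeneity, so these two statistics separate smooth mucosa from textured metaplastic patterns.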
Evidence against global attention filters selective for absolute bar-orientation in human vision.
Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George
2016-01-01
The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.
NASA Astrophysics Data System (ADS)
Holt, C. R.; Szunyogh, I.; Gyarmati, G.; Hoffman, R. N.; Leidner, M.
2011-12-01
Tropical cyclone (TC) track and intensity forecasts have improved in recent years due to increased model resolution, improved data assimilation, and the rapid increase in the number of routinely assimilated observations over oceans. The data assimilation approach that has received the most attention in recent years is Ensemble Kalman Filtering (EnKF). The most attractive feature of the EnKF is that it uses a fully flow-dependent estimate of the error statistics, which can have important benefits for the analysis of rapidly developing TCs. We implement the Local Ensemble Transform Kalman Filter algorithm, a variation of the EnKF, on a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model and the NCEP Regional Spectral Model (RSM) to build a coupled global-limited-area analysis/forecast system. This is the first time, to our knowledge, that such a system has been used for the analysis and forecast of tropical cyclones. We use data from summer 2004 to study eight tropical cyclones in the Northwest Pacific. The benchmark data sets that we use to assess the performance of our system are the NCEP Reanalysis and the NCEP Operational GFS analyses from 2004. These benchmark analyses were both obtained with the Spectral Statistical Interpolation, which was the operational data assimilation system of NCEP in 2004. The GFS Operational analysis assimilated a large number of satellite radiance observations in addition to the observations assimilated in our system. All analyses are verified against the Joint Typhoon Warning Center Best Track data set. The errors are calculated for the position and intensity of the TCs. The global component of the ensemble-based system shows improvement in position analysis over the NCEP Reanalysis, but shows no significant difference from the NCEP operational analysis for most of the storm tracks.
The regional component of our system improves position analysis over all the global analyses. The intensity analyses, measured by the minimum sea level pressure, are of similar quality in all of the analyses. Regional deterministic forecasts started from our analyses are generally not significantly different from those started from the GFS operational analysis. On average, the regional experiments performed better for sea level pressure forecasts beyond 48 h, while the global forecasts performed better in predicting the position beyond 48 h.
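The transform step at the core of the (Local) Ensemble Transform Kalman Filter computes the analysis in the low-dimensional ensemble space and applies a symmetric square-root transform to the perturbations. A minimal single-region sketch (a textbook ETKF step under stated assumptions, not the authors' GFS/RSM implementation; localization is omitted and names are invented):

```python
import numpy as np

def etkf_update(X, y, R_inv, H):
    """One ETKF analysis step.

    X     : (n_state, k) forecast ensemble, one member per column
    y     : (n_obs,) observations
    R_inv : (n_obs, n_obs) inverse observation-error covariance
    H     : (n_obs, n_state) linear observation operator
    """
    k = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean                                  # state perturbations
    Y = H @ X
    y_mean = Y.mean(axis=1, keepdims=True)
    Yp = Y - y_mean                                  # obs-space perturbations
    # analysis covariance in the k-dimensional ensemble space
    Pa = np.linalg.inv((k - 1) * np.eye(k) + Yp.T @ R_inv @ Yp)
    w_mean = Pa @ Yp.T @ R_inv @ (y.reshape(-1, 1) - y_mean)
    # symmetric square root gives the analysis perturbation weights
    vals, vecs = np.linalg.eigh((k - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return x_mean + Xp @ w_mean + Xp @ W
```

In the "local" variant, this update is repeated independently for each grid point using only nearby observations, which is what makes the scheme parallel and scalable.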
A novel method to improve MODIS AOD retrievals in cloudy pixels using an analog ensemble approach
NASA Astrophysics Data System (ADS)
Kumar, R.; Raman, A.; Delle Monache, L.; Alessandrini, S.; Cheng, W. Y. Y.; Gaubert, B.; Arellano, A. F.
2016-12-01
Particulate matter (PM) concentrations are one of the fundamental indicators of air quality. Earth-orbiting satellite platforms acquire column aerosol abundance that can in turn provide information about PM concentrations. One serious limitation of column aerosol retrievals from low-earth-orbiting satellites is that the algorithms are based on clear-sky assumptions: they do not retrieve AOD in cloudy pixels. After filtering cloudy pixels, these algorithms also arbitrarily remove the brightest and darkest 25% of the remaining pixels over ocean, and the brightest and darkest 50% over land, to filter any residual contamination from clouds. This becomes a critical issue especially in regions that experience a monsoon, such as Asia and North America. In the case of North America, the monsoon season brings a wide variety of extreme air quality events, such as fires in California and dust storms in Arizona. Assessment of these episodic events warrants frequent monitoring of aerosol observations from remote sensing retrievals. In this study, we demonstrate a method to fill in cloudy pixels in Moderate Resolution Imaging Spectroradiometer (MODIS) AOD retrievals based on ensembles generated using an analog ensemble (AnEn) approach. It provides a probabilistic distribution of AOD in cloudy pixels using historical records of model simulations of predictors such as AOD, relative humidity, and wind speed, and past observational records of MODIS AOD at a given target site. We use simulations from the coupled Weather Research and Forecasting model with Chemistry (WRF-Chem) run at a resolution comparable to MODIS AOD. Analogs selected from the summer months (June, July) of 2011-2013 from the model and corresponding observations are used as a training dataset. Then, missing AOD retrievals in cloudy pixels in the last 31 days of the selected period are estimated. Here, we use AERONET stations as target sites to facilitate comparison against in-situ measurements.
We use two approaches to evaluate the estimated AOD: 1) by comparing against reanalysis AOD, 2) by inverting AOD to PM10 concentrations and then comparing those with measured PM10. AnEn is an efficient approach to generate an ensemble as it involves only one model run and provides an estimate of uncertainty that complies with the physical and chemical state of the atmosphere.
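The analog-ensemble idea above can be sketched in a few lines: for a target time, find the historical times whose model predictors best match the target's predictors, and take the corresponding past observations as ensemble members. The predictor weighting and the number of analogs are illustrative choices, not values from the study.

```python
import numpy as np

def analog_ensemble(target_pred, hist_pred, hist_obs, n_analogs=10, weights=None):
    """Analog-ensemble estimate at a target site.
    target_pred: (m,) model predictors at the target time (e.g. AOD, RH, wind);
    hist_pred: (T, m) historical model predictors;
    hist_obs: (T,) matching historical observations (e.g. MODIS AOD)."""
    w = np.ones(hist_pred.shape[1]) if weights is None else np.asarray(weights)
    # weighted Euclidean distance between the target and each historical record
    d = np.sqrt(((hist_pred - target_pred) ** 2 * w).sum(axis=1))
    idx = np.argsort(d)[:n_analogs]        # indices of the best analogs
    members = hist_obs[idx]                # past obs at the best-matching times
    return members, members.mean(), members.std()
```

The ensemble mean gives the point estimate and the spread gives the uncertainty, at the cost of a single deterministic model run.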
Noise Equalization for Ultrafast Plane Wave Microvessel Imaging.
Song, Pengfei; Manduca, Armando; Trzasko, Joshua D; Chen, Shigao
2017-11-01
Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enable more robust clutter filtering based on singular value decomposition (SVD). However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This study was designed to: 1) examine the relationship between ultrasound system noise (primarily induced by time gain compensation) and microvessel blood flow signal and 2) propose an adaptive and computationally cost-effective noise equalization method, independent of hardware or software imaging settings, to improve microvessel image quality.
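The SVD-based clutter filtering mentioned above can be illustrated with a minimal sketch: the slow-time data stack is reshaped into a space-time (Casorati) matrix, and the largest singular components, dominated by tissue clutter, are removed. The fixed cutoff `n_cut` is an illustrative simplification; the paper's adaptive noise equalization is not reproduced here.

```python
import numpy as np

def svd_clutter_filter(data, n_cut):
    """SVD clutter filtering of a plane-wave Doppler ensemble (sketch).
    data: (nz, nx, nt) slow-time stack; the n_cut largest singular
    components (tissue clutter) are removed, keeping the blood-flow subspace."""
    nz, nx, nt = data.shape
    casorati = data.reshape(nz * nx, nt)     # space x slow-time matrix
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s_filt = s.copy()
    s_filt[:n_cut] = 0.0                     # zero out the clutter components
    blood = (U * s_filt) @ Vt                # reconstruct without clutter
    return blood.reshape(nz, nx, nt)
```

For a strong low-rank clutter component, removing even one singular component drastically reduces total energy while preserving the uncorrelated flow signal.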
Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkyu, E-mail: sangkyu.lee@mail.mcgill.ca; Ybarra, Norma; Jeyaseelan, Krishinima
2015-05-15
Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging due to the involvement of a variety of factors, including dose–volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. A Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems biology approach to detect possible interactions among RP risk factors and exploit these relationships to enhance both the understanding and prediction of RP. Methods: The authors studied 54 non-small-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentrations of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, and interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller–Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high-posterior graphs (ensemble) and was averaged for the final RP estimate using Bayes' rule. A resampling method based on bootstrapping was applied to model training and validation in order to control under- and overfitting pitfalls. Results: RP prediction power of the BN ensemble approach reached its optimum at a size of 200.
The optimized performance of the BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, which was significantly higher than multivariate logistic regression (0.77), mean heart dose (0.69), and a pre-to-midtreatment change in ACE (0.66). When RP prediction was made only with pretreatment information, the AUC ranged from 0.76 to 0.81 depending on the ensemble size. Bootstrap validation of graph features in the ensemble quantified the confidence of association between variables in the graphs, where ten interactions were statistically significant. Conclusions: The presented BN methodology provides the flexibility to model hierarchical interactions between RP covariates, which is applied to probabilistic inference on RP. The authors' preliminary results demonstrate that such a framework combined with an ensemble method can possibly improve prediction of RP under real-life clinical circumstances such as missing data or treatment plan adaptation.
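The final ensemble step above is a Bayes-rule (posterior-weighted) average of per-graph predictions. A minimal sketch of that averaging, with hypothetical per-graph probabilities and posteriors rather than values from the study:

```python
import numpy as np

def ensemble_predict(graph_probs, graph_posteriors):
    """Posterior-weighted average of per-graph RP probabilities.
    graph_probs: per-graph predicted probability for a patient;
    graph_posteriors: unnormalized posterior weight of each graph."""
    w = np.asarray(graph_posteriors, dtype=float)
    w = w / w.sum()                          # normalize the graph posteriors
    return float(np.asarray(graph_probs) @ w)
```

A graph with three times the posterior weight contributes three times as much to the final risk estimate.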
ENSO Bred Vectors in Coupled Ocean-Atmosphere General Circulation Models
NASA Technical Reports Server (NTRS)
Yang, S. C.; Cai, Ming; Kalnay, E.; Rienecker, M.; Yuan, G.; Toth, ZA.
2004-01-01
The breeding method has been implemented in the NASA Seasonal-to-Interannual Prediction Project (NSIPP) Coupled General Circulation Model (CGCM) with the goal of improving operational seasonal-to-interannual climate predictions through ensemble forecasting and data assimilation. The coupled instability as captured by the breeding method is the first attempt to isolate the evolving ENSO instability and its corresponding global atmospheric response in a fully coupled ocean-atmosphere GCM. Our results show that the growth rate of the coupled bred vectors (BVs) peaks at about 3 months before a background ENSO event. The dominant growing BV modes are reminiscent of the background ENSO anomalies and show a strong tropical response, with wind/SST/thermocline interrelated in a manner similar to the background ENSO mode. They exhibit larger amplitudes in the eastern tropical Pacific, reflecting the natural dynamical sensitivity associated with the presence of the shallow thermocline. Moreover, the extratropical perturbations associated with these coupled BV modes reveal variations related to the atmospheric teleconnection patterns associated with background ENSO variability, e.g. over the North Pacific and North America. A similar experiment was carried out with the NCEP/CFS03 CGCM. Comparisons between bred vectors from the NSIPP CGCM and the NCEP/CFS03 CGCM demonstrate the robustness of the results. Our results strongly suggest that the breeding method can serve as a natural filter to identify the slowly varying, coupled instabilities in a coupled GCM, which can be used to construct ensemble perturbations for ensemble forecasts and to estimate the coupled background error covariance for coupled data assimilation.
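The breeding cycle itself is simple to sketch: run a control and a perturbed forecast side by side, and periodically rescale their difference back to a fixed amplitude. The sketch below uses the Lorenz-63 system with a forward-Euler step as a stand-in for the CGCM; the model, step size, and rescaling amplitude are all illustrative choices, not those of the NSIPP system.

```python
import numpy as np

def lorenz63_step(x, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of Lorenz-63, standing in for the CGCM."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def breed(x0, delta0, n_cycles=20, steps_per_cycle=100, size=0.1):
    """Breeding cycle: evolve control and perturbed states, rescale the
    difference to a fixed amplitude each cycle; returns the final bred
    vector and the per-cycle growth factors."""
    xc = x0.copy()
    bv = delta0 / np.linalg.norm(delta0) * size
    growth = []
    for _ in range(n_cycles):
        xp = xc + bv
        for _ in range(steps_per_cycle):
            xc = lorenz63_step(xc)
            xp = lorenz63_step(xp)
        diff = xp - xc
        growth.append(np.linalg.norm(diff) / size)      # growth this cycle
        bv = diff / np.linalg.norm(diff) * size         # rescale amplitude
    return bv, growth
```

The repeated rescaling filters out fast decaying directions, so the bred vector converges toward the dominant growing instability of the flow.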
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-06-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
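The downscaling demonstration above (smooth, then re-inject small-scale variability) can be sketched as follows. Resampling observed residuals stands in for the paper's PDF-based noise parameterization, and the window length and member count are illustrative.

```python
import numpy as np

def smooth(x, window):
    """Running-mean filter, standing in for the 8 h smoothing of observations."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def downscale_ensemble(model_series, residual_archive, n_members=20, seed=0):
    """Add small-scale 'noise' back onto a smooth model-like series by
    resampling from an archive of observed residuals (observed minus
    smoothed values); returns an (n_members, nt) ensemble."""
    rng = np.random.default_rng(seed)
    noise = rng.choice(residual_archive, size=(n_members, len(model_series)))
    return model_series[None, :] + noise
```

Each member is a plausible high-frequency realization consistent with the large-scale model state, so the ensemble spread quantifies the sub-grid uncertainty.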
NASA Astrophysics Data System (ADS)
Barre, J.; Edwards, D. P.; Worden, H. M.
2016-12-01
Wildfires tend to be more intense, and hence more costly, and are predicted to increase in frequency under a warming climate. For example, the August 2015 Washington State fires were the largest in the state's history, and in September and October 2015 very intense fires over Indonesia produced some of the highest concentrations of carbon monoxide (CO) ever seen from space. Such large fires impact not only the local environment but also air quality far downwind through the long-range transport of pollutants. Global-to-continental-scale coverage showing the evolution of CO resulting from fire emissions is available from satellite observations. Carbon monoxide is the only atmospheric trace gas for which satellite multispectral retrievals have demonstrated reliable independent profile information close to the surface and also higher in the free troposphere. The unique CO profile product from Terra/MOPITT clearly distinguishes near-surface CO from free-tropospheric CO. Previous studies have also suggested strong correlations between primary emissions of fire organic and black carbon aerosols and CO. We will present results from the Ensemble Adjustment Kalman Filter (DART) system that has been developed to assimilate MOPITT CO in the global-scale chemistry-climate model CAM-Chem. The ensemble technique allows inference on various fire-related model state variables, such as CO emissions, and also on aerosol species resulting from fires, such as organic and black carbon. The benefit of MOPITT CO assimilation in the Washington and Indonesian fire case studies will be diagnosed in terms of CO fire emissions and black and organic carbon inference using the ensemble information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and using parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h forecast errors of wind speed over the Charnock-Fairall z0 ensembles by 4%-22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2014-07-01
This paper examines the potential of different algorithms, based on the Kalman filtering approach, for assimilating near-surface observations into a one-dimensional Richards equation governing soil water flow. Our specific objectives are: (i) to compare the efficiency of different Kalman filter algorithms in retrieving matric pressure head profiles when they are implemented with different numerical schemes of the Richards equation; (ii) to evaluate the performance of these algorithms when nonlinearities arise in the observation equation, i.e. when surface soil water content observations are assimilated to retrieve matric pressure head values. The study is based on a synthetic simulation of an evaporation process from a homogeneous soil column. The first objective is achieved by implementing a Standard Kalman Filter (SKF) algorithm with both an explicit finite difference scheme (EX) and a Crank-Nicolson (CN) linear finite difference scheme of the Richards equation. The Unscented Kalman Filter (UKF) and Ensemble Kalman Filter (EnKF) are applied to handle the nonlinearity of a backward Euler finite difference scheme. To accomplish the second objective, an analogous framework is applied, with the exception of replacing the SKF with the Extended Kalman Filter (EKF) in combination with a CN numerical scheme, so as to handle the nonlinearity of the observation equation. While the EX scheme is computationally too inefficient to be implemented in an operational assimilation scheme, the retrieval algorithm implemented with a CN scheme is found to be computationally more feasible and accurate than those implemented with the backward Euler scheme, at least for the examined one-dimensional problem. The UKF appears to be as feasible as the EnKF when one has to handle nonlinear numerical schemes or additional nonlinearities arising from the observation equation, at least for systems of small dimensionality such as the one examined in this study.
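The way the EnKF handles a nonlinear observation equation can be sketched generically: each member is mapped through the (possibly nonlinear) observation operator, and sample cross-covariances between the state and the predicted observations form the gain. This is a standard perturbed-observation EnKF sketch, not the authors' Richards-equation implementation.

```python
import numpy as np

def enkf_update(X, yo, h, R_var, seed=0):
    """Stochastic (perturbed-observation) EnKF update with a possibly
    nonlinear observation operator h (e.g. water content from pressure head).
    X: (n, k) state ensemble; yo: (p,) observations; R_var: obs error variance."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    Y = np.column_stack([h(X[:, j]) for j in range(k)])   # predicted obs, (p, k)
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (k - 1)                             # state-obs covariance
    Pyy = Yp @ Yp.T / (k - 1) + R_var * np.eye(len(yo))
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    yo_pert = yo[:, None] + rng.normal(0, np.sqrt(R_var), (len(yo), k))
    return X + K @ (yo_pert - Y)                          # analysis ensemble
```

Because only ensemble evaluations of h are needed, no tangent-linear model is required, which is exactly why the EnKF (and UKF) can wrap nonlinear numerical schemes that the SKF cannot.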
Ensembles of adaptive spatial filters increase BCI performance: an online evaluation
NASA Astrophysics Data System (ADS)
Sannelli, Claudia; Vidaurre, Carmen; Müller, Klaus-Robert; Blankertz, Benjamin
2016-08-01
Objective: In electroencephalographic (EEG) data, signals from distinct sources within the brain are widely spread by volume conduction and superimposed such that sensors receive mixtures of a multitude of signals. This reduction of spatial information strongly hampers single-trial analysis of EEG data as, for example, required for brain-computer interfacing (BCI) when using features from spontaneous brain rhythms. Spatial filtering techniques are therefore greatly needed to extract meaningful information from EEG. Our goal is to show, in online operation, that common spatial pattern patches (CSPP) are valuable to counteract this problem. Approach: Even though the effect of spatial mixing can be countered by spatial filters, there is a trade-off between performance and the requirement of calibration data. Laplacian derivations do not require calibration data at all, but their performance for single-trial classification is limited. Conversely, data-driven spatial filters, such as common spatial patterns (CSP), can lead to highly distinctive features; however, they require a considerable amount of training data. Recently, we showed in an offline analysis that CSPP can establish a valuable compromise. In this paper, we confirm these results in an online BCI study. In order to demonstrate the paramount feature that CSPP requires little training data, we used them in an adaptive setting with 20 participants and focused on users who did not have success with previous BCI approaches. Main results: The results of the study show that CSPP adapts faster and thereby allows users to achieve better feedback within a shorter time than previous approaches performed with Laplacian derivations and CSP filters. The success of the experiment highlights that CSPP has the potential to further reduce BCI inefficiency. Significance: CSPP are a valuable compromise between CSP and Laplacian filters.
They allow users to attain better feedback within a shorter time and thus reduce BCI inefficiency to one-fourth in comparison to previous non-adaptive paradigms.
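CSP, the data-driven baseline discussed above, computes spatial filters that maximize the variance ratio between two classes of band-pass-filtered EEG, given each class's channel covariance. A NumPy-only sketch of plain CSP via whitening (not the CSPP patch variant, and not the authors' adaptive online pipeline):

```python
import numpy as np

def csp_filters(C1, C2, n_pairs=1):
    """Common spatial patterns from two class covariance matrices.
    Whiten the composite covariance, eigendecompose the whitened class-1
    covariance; extreme eigenvectors give filters whose outputs have a
    maximal variance ratio between the two classes."""
    d, E = np.linalg.eigh(C1 + C2)
    P = E @ np.diag(1.0 / np.sqrt(d)) @ E.T       # whitening transform
    lam, B = np.linalg.eigh(P @ C1 @ P.T)         # eigenvalues sorted ascending
    W = B.T @ P                                   # rows are spatial filters
    pick = np.r_[np.arange(n_pairs), np.arange(len(lam) - n_pairs, len(lam))]
    return W[pick]                                # filters from both extremes
```

Log-variances of the filtered signals are then the classification features; CSPP regularizes exactly this estimation problem so that far less calibration data suffices.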
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
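The correlation step underlying the method above can be illustrated with standard single-pair phase correlation: the cross-power spectrum is normalized to unit magnitude, leaving only phase (i.e., displacement) information. This is the textbook building block, not the authors' ensemble formulation with a dynamic optimal filter.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer-pixel shift of frame a relative to frame b by
    normalizing the cross-power spectrum to unit magnitude (pure phase)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                 # keep only the phase
    corr = np.fft.ifft2(F).real            # sharp peak at the displacement
    return np.unravel_index(np.argmax(corr), corr.shape)
```

Averaging such correlation planes over many frame pairs (the "ensemble" step) is what suppresses the random error contributed by Brownian motion of the tracers.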
Hydrologic and geochemical data assimilation at the Hanford 300 Area
NASA Astrophysics Data System (ADS)
Chen, X.; Hammond, G. E.; Murray, C. J.; Zachara, J. M.
2012-12-01
In modeling the uranium migration within the Integrated Field Research Challenge (IFRC) site at the Hanford 300 Area, uncertainties arise from both hydrologic and geochemical sources. The hydrologic uncertainty includes the transient flow boundary conditions induced by dynamic variations in Columbia River stage and the underlying heterogeneous hydraulic conductivity field, while the geochemical uncertainty is a result of limited knowledge of the geochemical reaction processes and parameters, as well as heterogeneity in uranium source terms. In this work, multiple types of data, including the results from constant-injection tests, borehole flowmeter profiling, and conservative tracer tests, are sequentially assimilated across scales within a Bayesian framework to reduce the hydrologic uncertainty. The hydrologic data assimilation is then followed by geochemical data assimilation, where the goal is to infer the heterogeneous distribution of uranium sources using uranium breakthrough curves from a desorption test that took place at high spring water table. We demonstrate in our study that ensemble-based data assimilation techniques (the ensemble Kalman filter and smoother) are efficient in integrating multiple types of data sequentially for uncertainty reduction. The computational demand is managed by using the multi-realization capability within the parallel PFLOTRAN simulator.
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Sizonenko, A. B.; Zhdanov, A. A.
2018-05-01
Discrete time series, or mappings, can describe the dynamics of a nonlinear system. The article considers the problem of forecasting the dynamics of a system from the time series it generates. In particular, the commercial rate of drilling oil and gas wells can be treated as a series in which each value depends on the previous one; the main parameter here is the technical drilling speed. To eliminate measurement error and represent the commercial speed of the object with good accuracy at the current time, a future time, or any past time point, the use of the Kalman filter is suggested. For the transition from a deterministic model to a probabilistic one, ensemble modeling is suggested. Ensemble systems can provide a wide range of visual output, which helps the user evaluate the degree of confidence in the model. In particular, the availability of information on the estimated calendar duration of the construction of oil and gas wells will allow drilling companies to optimize production planning by rationalizing the loading of drilling rigs, which ultimately leads to maximization of profit and an increase in their competitiveness.
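A minimal scalar Kalman filter with a random-walk state model illustrates the filtering step suggested above; the process and measurement noise variances are illustrative, not values from the article.

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.25, x0=None, p0=1.0):
    """Scalar Kalman filter with a random-walk model, a stand-in for
    filtering measurement error out of a drilling-speed series.
    z: (T,) noisy measurements; q: process noise variance;
    r: measurement noise variance."""
    x = z[0] if x0 is None else x0
    p = p0
    out = np.empty(len(z), dtype=float)
    for t, zt in enumerate(z):
        p = p + q                    # predict (random walk: state unchanged)
        k = p / (p + r)              # Kalman gain
        x = x + k * (zt - x)         # update with the new measurement
        p = (1 - k) * p              # posterior variance
        out[t] = x
    return out
```

After a short warm-up the filtered series tracks the underlying speed with substantially smaller error than the raw measurements.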
Blending of Radial HF Radar Surface Current and Model Using ETKF Scheme For The Sunda Strait
NASA Astrophysics Data System (ADS)
Mujiasih, Subekti; Riyadi, Mochammad; Wandono, Dr; Wayan Suardana, I.; Nyoman Gede Wiryajaya, I.; Nyoman Suarsa, I.; Hartanto, Dwi; Barth, Alexander; Beckers, Jean-Marie
2017-04-01
A preliminary study of surface-current data blending for the Sunda Strait, Indonesia, has been carried out using the analysis scheme of the Ensemble Transform Kalman Filter (ETKF). The method combines radial velocities from HF radar with the u and v velocity components from the global Copernicus Marine Environment Monitoring Service (CMEMS) model. The initial ensemble is based on the time variability of the CMEMS model result. The tested data come from two CODAR SeaSonde radar sites in the Sunda Strait on two dates, 9 September 2013 and 8 February 2016, at 12:00 UTC. The radial HF radar data have hourly temporal resolution, a 20-60 km spatial range, 3 km range resolution, 5 degrees of angular resolution, and an 11.5-14 MHz frequency range. The u and v components of the model velocity represent a daily mean with 1/12 degree spatial resolution. The radial data from one HF radar site are analyzed, and the result is compared to the equivalent radial velocity from CMEMS at the second HF radar site. Errors are quantified by the root mean squared error (RMSE). The ensemble analysis and ensemble mean are calculated using the Sangoma software package. The observation error covariance matrix R is tested as a diagonal matrix with diagonal elements equal to 0.05, 0.5, or 1.0 m2/s2. The initial ensemble members come from a model simulation spanning one month (September 2013 or February 2016), one year (2013), or four years (2013-2016). The spatial distribution of the radial current is analyzed, and the RMSE values obtained from the independent HF radar station are optimized. It was verified that the analysis reproduces well the structure contained in the analyzed HF radar data. More importantly, the analysis was also improved relative to the second, independent HF radar site: its RMSE is lower than that of the first HF radar site analysis. The best result of the blending exercise was obtained for an observation error variance of 0.05 m2/s2.
This study is still a preliminary step, but it gives promising results for larger data volumes, combination with other models, and further development. Keywords: HF Radar, Sunda Strait, ETKF, CMEMS
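An HF radar measures only the component of the surface current along each beam, so blending radial data with a (u, v) model field requires an observation operator of the form v_r = u cos(θ) + v sin(θ). A sketch of that operator for a state vector holding all u values followed by all v values at the observed grid points (this state layout is an assumption for illustration, not the study's implementation):

```python
import numpy as np

def radial_obs_operator(bearings_deg):
    """Linear operator H mapping a state [u1..un, v1..vn] to the radial
    velocities seen by an HF radar along the given beam bearings."""
    th = np.deg2rad(np.asarray(bearings_deg))
    n = len(th)
    H = np.zeros((n, 2 * n))
    H[np.arange(n), np.arange(n)] = np.cos(th)          # u contribution
    H[np.arange(n), n + np.arange(n)] = np.sin(th)      # v contribution
    return H
```

With H in hand, the ETKF analysis proceeds exactly as for any linear observation operator, and the second radar site can be held out for independent RMSE verification.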
NASA Astrophysics Data System (ADS)
Tong, S.; Alessio, A. M.; Kinahan, P. E.
2010-03-01
The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSF) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with exact scanner line of response modeled (OSEM+LOR), (2) OSEM with line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with 68Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. 
In terms of signal to noise performance, PSF-based reconstruction has a 7% improvement in contrast recovery at matched ensemble noise levels and 20% improvement of quantitation SNR in unfiltered data. In addition, the relations between different metrics are studied. A linear correlation is observed between background variability and ensemble noise for all different combinations of reconstruction methods and parameters, suggesting that background variability is a reasonable surrogate for ensemble noise when multiple realizations of scans are not available.
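The background-noise metrics above all reduce to simple statistics over repeated scans. A sketch of two of them under one plausible formulation (the normalizations are illustrative; the paper's exact definitions may differ):

```python
import numpy as np

def noise_metrics(images, roi):
    """Background-noise metrics from repeated scans.
    images: (n_real, ny, nx) independent realizations;
    roi: boolean mask selecting a uniform background region."""
    vals = images[:, roi]                      # (n_real, n_roi_pixels)
    # image roughness: pixel-to-pixel std within each image, relative to mean
    roughness = (vals.std(axis=1) / vals.mean(axis=1)).mean()
    # ensemble noise: per-pixel std across realizations, averaged, relative
    ensemble_noise = vals.std(axis=0).mean() / vals.mean()
    return roughness, ensemble_noise
```

For spatially uncorrelated noise the two metrics agree; spatial smoothing (e.g. from the PSF model) lowers pixel-to-pixel roughness without changing the across-realization ensemble noise, which is exactly the distinction the study draws.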
Bonnet, V; Dumas, R; Cappozzo, A; Joukov, V; Daune, G; Kulić, D; Fraisse, P; Andary, S; Venture, G
2017-09-06
This paper presents a method for real-time estimation of the kinematics and kinetics of a human body performing a sagittal-symmetric motor task, designed to minimize the impact of stereophotogrammetric soft tissue artefacts (STA). The method is based on a two-dimensional mechanical model of the locomotor apparatus, whose state variables (joint angles, velocities and accelerations, and the segment lengths and inertial parameters) are estimated by a constrained extended Kalman filter (CEKF) that fuses input information from both stereophotogrammetric and dynamometric measurement data. Filter gains are made to saturate in order to obtain plausible state variables, and the measurement covariance matrix of the filter accounts for the expected maximal STA amplitudes. We hypothesised that the ensemble of constraints and redundant input information would allow the method to attenuate STA propagation to the end results. The method was evaluated in ten human subjects performing a squat exercise. The CEKF-estimated and measured skin marker trajectories exhibited an RMS difference lower than 4 mm, thus in the range of STAs. The RMS differences between the measured ground reaction force and moment and those estimated using the proposed method (9 N and 10 N·m) were much lower than those obtained using a classical inverse dynamics approach (22 N and 30 N·m). From the latter results it may be inferred that the presented method allows for a significant improvement in the accuracy with which kinematic variables and relevant time derivatives, model parameters and, therefore, intersegmental moments are estimated.
Data assimilation method based on the constraints of confidence region
NASA Astrophysics Data System (ADS)
Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng
2018-03-01
The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, based on the constraints of a confidence region constructed from the observations, to estimate the inflation parameter of the forecast error variance in the EnKF. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
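The variance inflation that EnCR builds on can be sketched generically; the following is a minimal multiplicative inflation step (illustrative Python, not the authors' EnCR estimator, which chooses the factor from a confidence-region constraint):

```python
import numpy as np

def inflate_ensemble(ensemble, rho):
    """Multiplicative inflation: scale anomalies about the ensemble mean by rho,
    widening the forecast spread without moving the ensemble mean."""
    mean = ensemble.mean(axis=0)
    return mean + rho * (ensemble - mean)

rng = np.random.default_rng(0)
ens = rng.normal(size=(20, 3))        # 20 members, 3 state variables
inflated = inflate_ensemble(ens, 1.5) # spread grows by 1.5x, mean unchanged
```

In practice the inflation factor rho is what adaptive schemes such as EnCR estimate at each assimilation cycle rather than fixing by hand.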
Tuning the ion selectivity of tetrameric cation channels by changing the number of ion binding sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derebe, Mehabaw G.; Sauer, David B.; Zeng, Weizhong
2015-11-30
Selective ion conduction across ion channel pores is central to cellular physiology. To understand the underlying principles of ion selectivity in tetrameric cation channels, we engineered a set of cation channel pores based on the nonselective NaK channel and determined their structures to high resolution. These structures showcase an ensemble of selectivity filters with varying numbers of contiguous ion binding sites, ranging from 2 to 4, with each individual site maintaining a geometry and ligand environment virtually identical to that of equivalent sites in K+ channel selectivity filters. Combined with single-channel electrophysiology, we show that only the channel with four ion binding sites is K+ selective, whereas those with two or three are nonselective and permeate Na+ and K+ equally well. These observations strongly suggest that the number of contiguous ion binding sites in a single file is the key determinant of the channel's selectivity properties, and that the presence of four sites in K+ channels is essential for highly selective and efficient permeation of K+ ions.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with sample degeneracy of the particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on real-world experiments and statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods.
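The multivariate Student's t noise used here is commonly drawn via its Gaussian/chi-square mixture representation; a minimal sketch (illustrative parameters, not the nGDPS code) is:

```python
import numpy as np

def sample_multivariate_t(mean, scale, dof, n, rng):
    """Draw n samples from a multivariate Student's t distribution using the
    Gaussian / chi-square mixture: x = mean + z / sqrt(g), g ~ chi2(dof)/dof."""
    d = len(mean)
    z = rng.multivariate_normal(np.zeros(d), scale, size=n)   # Gaussian part
    g = rng.chisquare(dof, size=n) / dof                      # mixing variable
    return np.asarray(mean) + z / np.sqrt(g)[:, None]

rng = np.random.default_rng(1)
# heavy-tailed 2-D measurement noise with 5 degrees of freedom
noise = sample_multivariate_t(mean=[0.0, 0.0], scale=np.eye(2), dof=5, n=50000, rng=rng)
```

For dof = 5 the covariance of the samples is dof/(dof-2) = 5/3 times the scale matrix, which is the heavy-tail inflation the smoother exploits.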
Relativistic Nonlocality and the EPR Paradox
NASA Astrophysics Data System (ADS)
Chamberlain, Thomas
2014-03-01
The exact violation of Bell's inequalities is obtained with a local realistic model for spin. The model treats one particle that comprises a quantum ensemble and simulates the EPR data one coincidence at a time as a product state. Such a spin is represented by operators σx, iσy, σz in its body frame rather than the usual set of σX, σY, σZ in the laboratory frame. This model, assumed valid in the absence of a measuring probe, contains both quantum polarizations and coherences. Each carries half the EPR correlation, but only half can be measured using coincidence techniques. The model further predicts the filter angles that maximize the spin correlation in EPR experiments.
Letters: Noise Equalization for Ultrafast Plane Wave Microvessel Imaging
Song, Pengfei; Manduca, Armando; Trzasko, Joshua D.
2017-01-01
Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enables more robust clutter filtering based on singular value decomposition (SVD). However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This study was designed to: 1) study the relationship between ultrasound system noise (primarily time gain compensation-induced) and microvessel blood flow signal; 2) propose an adaptive and computationally cost-effective noise equalization method that is independent of hardware or software imaging settings to improve microvessel image quality. PMID:28880169
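The SVD clutter filtering mentioned above removes the largest singular components (slowly varying tissue) from the spatiotemporal data matrix; a minimal sketch with synthetic data (illustrative, not the paper's noise-equalization method) is:

```python
import numpy as np

def svd_clutter_filter(data, n_cut):
    """Zero the n_cut largest singular components of a Casorati matrix
    (pixels x ensemble frames); tissue clutter dominates the first components,
    so the residual retains the weaker blood-flow signal."""
    u, s, vh = np.linalg.svd(data, full_matrices=False)
    s[:n_cut] = 0.0
    return u @ np.diag(s) @ vh

rng = np.random.default_rng(5)
frames = rng.normal(size=(64, 32))                 # synthetic pixels x Doppler ensemble
filtered = svd_clutter_filter(frames, n_cut=4)     # discard 4 clutter components
```

The singular-value cutoff n_cut is the key tuning parameter; adaptive cutoff selection is a research topic in its own right.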
NASA Astrophysics Data System (ADS)
Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.
2017-12-01
Snowpack models nowadays show a good capability in simulating the evolution of snow in mountain areas. However, singular deviations of the meteorological forcing and shortcomings in the modelling of snow physical processes, when accumulated over a snow season, can produce large deviations from the real snowpack state. These deviations are usually assessed with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models against automatic weather stations usually reveal good results, there is a lack of large-scale evaluations of simulation results over heterogeneous alpine terrain subject to local topographic effects. This work first presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the upper Arve catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of the snow-covered area from the MODIS satellite sensor, the seasonal glacier surface mass balance measured at more than 65 locations, and the glaciers' annual equilibrium-line altitude from Landsat/SPOT/ASTER satellites have been used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas will be presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcings from mesoscale meteorological models and the application of a particle filter scheme for assimilating satellite observations. Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.
Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle; ...
2016-08-03
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single-column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h forecast errors of wind speed over the Charnock-Fairall z0 ensembles by 4%-22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.
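For reference, a Charnock-type roughness length with a COARE-style smooth-flow viscous term can be sketched as follows (illustrative constants; the Fairall-adjusted formulation used in WRF differs in detail):

```python
def sea_surface_roughness(u_star, alpha=0.011, nu=1.5e-5, g=9.81):
    """Aerodynamic roughness length z0 (m) over the sea from friction velocity
    u_star (m/s): Charnock term alpha*u*^2/g plus a smooth-flow viscous term."""
    return alpha * u_star**2 / g + 0.11 * nu / u_star

# z0 grows roughly quadratically with friction velocity in rough (wavy) conditions
z0_weak = sea_surface_roughness(0.3)   # light wind
z0_strong = sea_surface_roughness(0.6) # stronger wind
```

This is the quantity the DART parameter-estimation experiments replace with a value inferred directly from the observations.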
NASA Astrophysics Data System (ADS)
Edwards, David; Barre, Jerome; Worden, Helen; Gaubert, Benjamin
2017-04-01
Intense and costly wildfires are predicted to increase in frequency under a warming climate. For example, the August 2015 Washington State fires were the largest in the state's history, and in September and October 2015 very intense fires over Indonesia produced some of the highest concentrations of carbon monoxide (CO) ever seen from satellite. Such large fires impact not only the local environment but also affect air quality far downwind through the long-range transport of pollutants. Global- to continental-scale coverage showing the evolution of CO resulting from fire emissions is available from satellite observations. Carbon monoxide is the only atmospheric trace gas for which satellite multispectral retrievals have demonstrated reliable independent profile information close to the surface as well as higher in the free troposphere. The unique CO profile product from Terra/MOPITT clearly distinguishes near-surface CO from free-tropospheric CO. Previous studies have also suggested strong correlations between primary emissions of fire organic and black carbon aerosols and CO. We will present results from the Ensemble Adjustment Kalman Filter (DART) system that has been developed to assimilate MOPITT CO in the global-scale chemistry-climate model CAM-Chem. The ensemble technique allows inference on various fire model state variables, such as CO emissions, and also on aerosol species resulting from fires, such as organic and black carbon. The benefit of MOPITT CO profile assimilation for estimating CO emissions from the Washington and Indonesian fire cases will be discussed, along with the ability of the ensemble approach to infer information on the black and organic carbon aerosol distribution. This study builds on the capability to quantitatively integrate satellite observations and models developed in recent years through projects funded by the NASA ACMAP Program.
Avoiding drift related to linear analysis update with Lagrangian coordinate models
NASA Astrophysics Data System (ADS)
Wang, Yiguo; Counillon, Francois; Bertino, Laurent
2015-04-01
When applying data assimilation to Lagrangian-coordinate models, it is profitable to correct the grid itself (position, volume). In an isopycnal-coordinate ocean model, this information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Existing methods have been proposed to handle this issue, e.g. post-processing, anamorphosis or resampling, but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighbouring layers, resulting in a probability density function with a larger mean and smaller standard deviation, which prevents the appearance of negative values. Second, the analysis increments of the grouped layer are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is fully conservative with, e.g., OI or 3D-Var, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF) during the update of the ensemble anomalies. However, the resulting drift is small (an order of magnitude smaller than with post-processing) and the increase in computational cost is moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate predictions by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth system model (NorESM) with an isopycnal ocean model (MICOM). Over a 25-year analysis period, the new method does not impair the predictive skill of the system, corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
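The grouping idea can be illustrated with a much-simplified one-dimensional sketch (illustrative Python, not the published algorithm): layers whose update would go negative are pooled with a neighbour, so the column total is conserved and no thickness becomes negative.

```python
import numpy as np

def conservative_increment(h_bg, dh):
    """Apply analysis increments to layer thicknesses. A layer whose update
    would go negative is pooled with the neighbour below it, so the column
    total is conserved and no layer thickness goes negative."""
    h = np.asarray(h_bg, dtype=float).copy()
    inc = np.asarray(dh, dtype=float).copy()
    for i in range(len(h) - 1):
        if h[i] + inc[i] < 0.0:
            h[i + 1] += h[i]        # pool thickness into the neighbour
            inc[i + 1] += inc[i]    # pool the increment as well
            h[i] = inc[i] = 0.0
    return h + inc

h_bg = np.array([0.1, 0.0, 2.0])    # layer thicknesses (m), some near-massless
dh = np.array([-0.5, 0.0, 0.4])     # raw analysis increments, first would go negative
h_new = conservative_increment(h_bg, dh)
```

Unlike simple post-processing (clipping negatives to zero), the pooled update leaves the total mass, and hence the mean across an ensemble, unchanged.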
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-01-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key Points Solar wind models must be downscaled in order to drive magnetospheric models Ensemble downscaling is more effective than deterministic downscaling The magnetosphere responds nonlinearly to small-scale solar wind fluctuations PMID:26213518
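The smoothing-plus-noise downscaling described here can be sketched minimally (illustrative Python with synthetic data; the paper parameterizes the noise from observed probability distribution functions rather than the simple residual bootstrap used below):

```python
import numpy as np

def downscale_member(model_like, residual_pool, rng):
    """One downscaled ensemble member: the smooth 'model' series plus noise
    bootstrapped from the pool of observed small-scale residuals."""
    noise = rng.choice(residual_pool, size=len(model_like), replace=True)
    return model_like + noise

rng = np.random.default_rng(2)
obs = rng.normal(400.0, 80.0, size=512)             # synthetic solar wind speed (km/s)
window = 8                                           # stands in for the 8 h filter
smooth = np.convolve(obs, np.ones(window) / window, mode="same")
residuals = obs - smooth                             # observed small-scale 'noise'
ensemble = np.stack([downscale_member(smooth, residuals, rng) for _ in range(16)])
```

Each member restores realistic small-scale variability around the smooth large-scale signal, which is what allows the downstream magnetospheric model to respond nonlinearly and the ensemble to quantify the resulting uncertainty.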
NASA Astrophysics Data System (ADS)
Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio
2014-06-01
Data assimilation techniques not only enhance model simulations and forecasts, they also provide the opportunity to obtain a diagnostic of both the model and the observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed physically based hydrological model CATHY (CATchment HYdrology) and the study site is the Des Anglais watershed, a 690 km2 river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices. These diagnostics were also used in a calibration exercise to determine the standard deviations of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observation during rainfall events. Assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the model forecast (one day) and the observation at both outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
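The three post-assimilation diagnostics named above are related by a simple identity (innovation = residual + increment in observation space); a minimal sketch:

```python
import numpy as np

def diagnostics(y, hx_b, hx_a):
    """Observation-space diagnostics for one assimilation cycle:
    innovation  = y - H(x_b)   (observation minus background)
    residual    = y - H(x_a)   (observation minus analysis)
    increment   = H(x_a) - H(x_b)  (analysis minus background)."""
    return {"innovation": y - hx_b,
            "residual": y - hx_a,
            "increment": hx_a - hx_b}

y = np.array([10.0, 12.5, 9.0])       # observations
hx_b = np.array([8.0, 13.0, 9.5])     # background mapped to observation space
hx_a = np.array([9.2, 12.7, 9.2])     # analysis mapped to observation space
d = diagnostics(y, hx_b, hx_a)
```

Statistics of these three quantities (means, variances, lag correlations) are what the calibration exercise uses to tune the assumed error standard deviations.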
NASA Technical Reports Server (NTRS)
Li, Jing; Li, Xichen; Carlson, Barbara E.; Kahn, Ralph A.; Lacis, Andrew A.; Dubovik, Oleg; Nakajima, Teruyuki
2016-01-01
Various space-based sensors have been designed and corresponding algorithms developed to retrieve aerosol optical depth (AOD), the very basic aerosol optical property, yet considerable disagreement still exists across these different satellite data sets. Surface-based observations aim to provide ground truth for validating satellite data; hence, their deployment locations should preferably contain as much spatial information as possible, i.e., high spatial representativeness. Using a novel Ensemble Kalman Filter (EnKF)- based approach, we objectively evaluate the spatial representativeness of current Aerosol Robotic Network (AERONET) sites. Multisensor monthly mean AOD data sets from Moderate Resolution Imaging Spectroradiometer, Multiangle Imaging Spectroradiometer, Sea-viewing Wide Field-of-view Sensor, Ozone Monitoring Instrument, and Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar are combined into a 605-member ensemble, and AERONET data are considered as the observations to be assimilated into this ensemble using the EnKF. The assessment is made by comparing the analysis error variance (that has been constrained by ground-based measurements), with the background error variance (based on satellite data alone). Results show that the total uncertainty is reduced by approximately 27% on average and could reach above 50% over certain places. The uncertainty reduction pattern also has distinct seasonal patterns, corresponding to the spatial distribution of seasonally varying aerosol types, such as dust in the spring for Northern Hemisphere and biomass burning in the fall for Southern Hemisphere. Dust and biomass burning sites have the highest spatial representativeness, rural and oceanic sites can also represent moderate spatial information, whereas the representativeness of urban sites is relatively localized. 
A spatial score ranging from 1 to 3 is assigned to each AERONET site based on the uncertainty reduction, indicating its representativeness level.
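The uncertainty reduction reported above follows the usual scalar Kalman relation between background and analysis error variances; a minimal sketch for a single observation (illustrative, not the multisensor EnKF machinery itself) is:

```python
def variance_reduction(p_b, r):
    """Fractional reduction in error variance from assimilating a single
    observation (error variance r) into a background (error variance p_b)."""
    gain = p_b / (p_b + r)        # scalar Kalman gain
    p_a = (1.0 - gain) * p_b      # analysis error variance
    return 1.0 - p_a / p_b        # fraction of background variance removed

# equally uncertain background and observation -> 50% of the variance removed
```

A site is highly "spatially representative" in this framing when its observation removes a large fraction of the satellite-ensemble background variance over a wide surrounding region.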
NASA Astrophysics Data System (ADS)
Kurtz, W.; Hendricks Franssen, H.-J.; Brunner, P.; Vereecken, H.
2013-10-01
River-aquifer exchange fluxes influence local and regional water balances and affect groundwater and river water quality and quantity. Unfortunately, river-aquifer exchange fluxes tend to be strongly spatially variable, and it is an open research question to what degree river bed heterogeneity has to be represented in a model in order to achieve reliable estimates of river-aquifer exchange fluxes. This research question is addressed in this paper with the help of synthetic simulation experiments, which mimic the Limmat aquifer in Zurich (Switzerland), where river-aquifer exchange fluxes and groundwater management activities play an important role. The solution of the unsaturated-saturated subsurface hydrological flow problem including river-aquifer interaction is calculated for ten different synthetic realities in which the strongly heterogeneous river bed hydraulic conductivities (L) are perfectly known. Hydraulic head data (100 in the default scenario) are sampled from the synthetic realities. In subsequent data assimilation experiments, where L is now unknown, the hydraulic head data are used as conditioning information with the help of the ensemble Kalman filter (EnKF). For each of the ten synthetic realities, four different ensembles of L are tested in the experiments with EnKF; one ensemble estimates high-resolution L fields with different L values for each element, and the other three ensembles estimate effective L values for 5, 3 or 2 zones. The calibration of higher-resolution L fields (i.e. fully heterogeneous or 5 zones) gives better results than the calibration of L for only 3 or 2 zones in terms of reproduction of states, stream-aquifer exchange fluxes and parameters. Effective L for a limited number of zones cannot always reproduce the true states and fluxes well and results in biased estimates of net exchange fluxes between aquifer and stream.
Even when only 10 head data are used for conditioning, the high-resolution characterization of L fields with EnKF is still feasible. For less heterogeneous river bed hydraulic conductivities, a high-resolution characterization of L is less important. When uncertainties in the hydraulic parameters of the aquifer are also considered in the assimilation, the errors in state and flux predictions increase, but the ensemble with a high spatial resolution for L still outperforms the ensembles with effective L values. We conclude that for strongly heterogeneous river beds the commonly applied simplified representation of the streambed, with spatially homogeneous parameters or constant parameters for a few zones, might yield significant biases in the characterization of the water balance. For strongly heterogeneous river beds, we suggest adopting a stochastic field approach to model the spatially heterogeneous river beds geostatistically. The paper illustrates that EnKF is able to calibrate such heterogeneous streambeds on the basis of hydraulic head measurements, outperforming zonation approaches.
NASA Astrophysics Data System (ADS)
Li, Xuejian; Mao, Fangjie; Du, Huaqiang; Zhou, Guomo; Xu, Xiaojun; Han, Ning; Sun, Shaobo; Gao, Guolong; Chen, Liang
2017-04-01
Subtropical forest ecosystems play essential roles in the global carbon cycle and in carbon sequestration functions, which challenge the traditional understanding of the main functional areas of carbon sequestration in the temperate forests of Europe and America. The leaf area index (LAI) is an important biological parameter in the spatiotemporal simulation of the carbon cycle, and it has considerable significance in carbon cycle research. Dynamic retrieval based on remote sensing data is an important method with which to obtain large-scale, high-accuracy assessments of LAI. This study developed an algorithm for assimilating LAI dynamics based on an integrated ensemble Kalman filter using MODIS LAI data, MODIS reflectance data, and canopy reflectance data modeled by PROSAIL, for three typical types of subtropical forest (Moso bamboo forest, Lei bamboo forest, and evergreen and deciduous broadleaf forest) in China during 2014-2015. Some assimilation errors occurred in winter because of the poor quality of the MODIS product. Overall, the assimilated LAI matched the observed LAI well, with R2 of 0.82, 0.93, and 0.87, RMSE of 0.73, 0.49, and 0.42, and aBIAS of 0.50, 0.23, and 0.03 for Moso bamboo forest, Lei bamboo forest, and evergreen and deciduous broadleaf forest, respectively. The algorithm greatly decreased the uncertainty of the MODIS LAI in the growing season and improved the accuracy of the MODIS LAI. The advantage of the algorithm is its use of biophysical parameters (e.g., measured LAI) in the LAI assimilation, which makes it possible to assimilate long-term MODIS LAI time series data and to provide high-accuracy LAI data for the study of carbon cycle characteristics in subtropical forest ecosystems.
NASA Astrophysics Data System (ADS)
Bato, Mary Grace; Pinel, Virginie; Yan, Yajing
2016-04-01
The recent advances in Interferometric Synthetic Aperture Radar (InSAR) imaging and the increasing number of continuous Global Positioning System (GPS) networks recorded on volcanoes provide continuous and spatially extensive records of surface displacements during inter-eruptive periods. For basaltic volcanoes, these measurements combined with simple dynamical models (Lengliné et al. 2008 [1]; Pinel et al. 2010 [2]; Reverso et al. 2014 [3]) can be exploited to characterise and constrain the parameters of one or several magmatic reservoirs using inversion methods. On the other hand, data assimilation, a time-stepping process that optimally combines models and observations, sometimes with a priori information based on error statistics, to predict the state of a dynamical system, has gained popularity in various fields of geoscience (e.g. ocean and weather forecasting, geomagnetism and natural resources exploration). In this work, we aim first to test the applicability and benefit of data assimilation, in particular the Ensemble Kalman Filter [4], in the field of volcanology. We predict the temporal behaviour of the overpressures and deformations by applying the two-magma-chamber model of Reverso et al. (2014) [3] and by using synthetic deformation data in order to establish our forecasting strategy. GPS time-series data of the recent eruptions at Grimsvötn volcano are used for the real-case application of the method. [1] Lengliné, O., D. Marsan, J. Got, V. Pinel, V. Ferrazzini, P. Okubo, Seismicity and deformation induced by magma accumulation at three basaltic volcanoes, J. Geophys. Res., 113, B12305, 2008. [2] Pinel, V., C. Jaupart and F. Albino, On the relationship between cycles of eruptive activity and volcanic edifice growth, J. Volc. Geotherm. Res., 194, 150-164, 2010. [3] Reverso, T., J. Vandemeulebrouck, F. Jouanne, V. Pinel, T. Villemin, E. Sturkell, A two-magma chamber as a source of deformation at Grimsvötn volcano, Iceland, J. Geophys. Res., 2014. [4] Evensen, G., The Ensemble Kalman Filter: theoretical formulation and practical implementation, Ocean Dyn., 53, 343-367, 2003.
NASA Astrophysics Data System (ADS)
Brune, Sebastian; Düsterhus, André; Pohlmann, Holger; Müller, Wolfgang A.; Baehr, Johanna
2017-11-01
We analyze the time dependency of decadal hindcast skill in the North Atlantic subpolar gyre within the time period 1961-2013. We compare anomaly correlation coefficients and temporal interquartile ranges of total upper ocean heat content and sea surface temperature for three differently initialized sets of hindcast simulations with the global coupled model MPI-ESM. All initializations use weakly coupled assimilation with the same full-value nudging in the atmospheric component and different assimilation techniques for oceanic temperature and salinity: (1) an ensemble Kalman filter assimilating EN4 observations and HadISST data, (2) nudging of anomalies to the ORAS4 reanalysis, (3) nudging of full values to the ORAS4 reanalysis. We find that hindcast skill depends strongly on the evaluation time period, with higher hindcast skill during strong multiyear trends, especially during the warming in the 1990s, and lower hindcast skill in the absence of such trends. Differences between the prediction systems are more pronounced when investigating any 20-year subperiod within the entire hindcast period. In the ensemble Kalman filter initialized hindcasts, we find significant correlation skill for up to 5-8 lead years, albeit along with an overestimation of the temporal interquartile range. In the hindcasts initialized by anomaly nudging, significant correlation skill for lead years greater than two is only found in the 1980s and 1990s. In the hindcasts initialized by full-value nudging, correlation skill is consistently lower than in the hindcasts initialized by anomaly nudging in the first lead years, with re-emerging skill thereafter. The Atlantic meridional overturning circulation reacts to the density changes introduced by the oceanic nudging, which limits the predictability in the subpolar gyre in the first lead years. Overall, we find that a model-consistent assimilation technique can improve hindcast skill.
Further, the evaluation of 20 year subperiods within the full hindcast period provides essential insights to judge the success of both the assimilation and the subsequent hindcast quality.
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Clark, M. P.
2017-12-01
Short- to medium-range (1- to 7-day) streamflow forecasts are important for flood control operations and for issuing potentially life-saving flood warnings. In the U.S., the National Weather Service River Forecast Centers (RFCs) issue such forecasts in real time, depending heavily on a manual data assimilation (DA) approach. Forecasters adjust model inputs, states, parameters and outputs based on experience and consideration of a range of supporting real-time information. Achieving high-quality forecasts from new automated, centralized forecast systems will depend critically on the adequacy of automated DA approaches to make analogous corrections to the forecasting system. Such approaches would further enable systematic evaluation of real-time flood forecasting methods and strategies. Toward this goal, we have implemented a real-time sequential importance resampling particle filter (SIR-PF) approach to assimilate observed streamflow into simulated initial hydrologic conditions (states) for initializing ensemble flood forecasts. Assimilating streamflow alone in SIR-PF improves simulated streamflow and soil moisture during the model spin-up period prior to a forecast, with consequent benefits for forecasts. Nevertheless, it only consistently limits error in simulated snow water equivalent during the snowmelt season and in basins where precipitation falls primarily as snow. We examine how the simulated initial conditions with and without SIR-PF propagate into 1- to 7-day ensemble streamflow forecasts. Forecasts are evaluated in terms of reliability and skill over a 10-year period from 2005-2015. The focus of this analysis is on how interactions between hydroclimate and SIR-PF performance impact forecast skill. To this end, we examine forecasts for 5 hydroclimatically diverse basins in the western U.S. Some of these basins receive most of their precipitation as snow, others as rain.
Some freeze throughout the mid-winter while others experience significant mid-winter melt events. We describe the methodology and present seasonal and inter-basin variations in DA-enhanced forecast skill.
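A minimal sequential importance resampling step of the kind SIR-PF performs can be sketched as follows (illustrative Python with synthetic numbers, not the operational system):

```python
import numpy as np

def sir_step(particles, weights, obs, obs_err, rng):
    """One SIR step: reweight particles by the Gaussian likelihood of a
    streamflow observation, then resample when the effective sample size drops."""
    w = weights * np.exp(-0.5 * ((particles - obs) / obs_err) ** 2)
    w /= w.sum()
    if 1.0 / np.sum(w**2) < 0.5 * len(particles):   # effective-sample-size test
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]
        w = np.full(len(particles), 1.0 / len(particles))
    return particles, w

rng = np.random.default_rng(3)
flow = rng.normal(100.0, 20.0, size=1000)           # prior streamflow ensemble (m3/s)
post, w = sir_step(flow, np.full(1000, 1e-3), obs=130.0, obs_err=10.0, rng=rng)
```

In the forecasting context the resampled particles (full model states, not just streamflow) become the initial conditions for the next ensemble forecast.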
Global operational hydrological forecasts through eWaterCycle
NASA Astrophysics Data System (ADS)
van de Giesen, Nick; Bierkens, Marc; Donchyts, Gennadii; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin
2015-04-01
The central goal of the eWaterCycle project (www.ewatercycle.org) is the development of an operational hyper-resolution hydrological global model. This model is able to produce 14-day ensemble forecasts based on a hydrological model and operational weather data (presently NOAA's Global Ensemble Forecast System). Special attention is paid to the prediction of situations in which water-related issues are relevant, such as floods, droughts, navigation, hydropower generation, and irrigation stress. Near-real-time satellite data will be assimilated in the hydrological simulations, a feature that will be presented for the first time at EGU 2015. First, we address challenges that are mainly computer-science oriented but have direct practical hydrological implications. An important feature in this is the use of existing standards and open-source software to the maximum extent possible. For example, we use the Community Surface Dynamics Modeling System (CSDMS) approach to coupling models (Basic Model Interface (BMI)). The hydrological model underlying the project is PCR-GLOBWB, built by Utrecht University. This is the motor behind the predictions and state estimations. Parts of PCR-GLOBWB have been re-engineered to facilitate running it in a High Performance Computing (HPC) environment, running in parallel on multiple nodes, as well as to use BMI. Hydrological models are not very CPU intensive compared to, say, atmospheric models. They are, however, memory hungry due to the localized processes and associated effective parameters. To accommodate this memory need, especially in an ensemble setting, a variation on the traditional Ensemble Kalman Filter was developed that needs much less on-chip memory. Due to the operational nature, the coupling of the hydrological model with hydraulic models is very important.
The idea is not to run detailed hydraulic routing schemes over the complete globe but to have on-demand simulations, prepared off-line with respect to topography and parameterizations. This allows for very detailed simulations, at hectare to meter scales, where and when needed. At EGU 2015, the operational global eWaterCycle model will be presented for the first time, including forecasts at high resolution, the innovative data assimilation approach, and on-demand coupling with hydraulic models.
NASA Technical Reports Server (NTRS)
Zhang, Yong-Fei; Hoar, Tim J.; Yang, Zong-Liang; Anderson, Jeffrey L.; Toure, Ally M.; Rodell, Matthew
2014-01-01
To improve snowpack estimates in the Community Land Model version 4 (CLM4), the Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) was assimilated into the model via the Data Assimilation Research Testbed (DART). The interface between CLM4 and DART is a flexible, extensible approach to land surface data assimilation. This data assimilation system has a large ensemble (80-member) atmospheric forcing that facilitates ensemble-based land data assimilation. We use 40 randomly chosen forcing members to drive 40 CLM members as a compromise between computational cost and data assimilation performance. The localization distance, a parameter in DART, was tuned to optimize the data assimilation performance at the global scale. Snow water equivalent (SWE) and snow depth are adjusted via the ensemble adjustment Kalman filter, particularly in regions with large SCF variability. The root-mean-square error of the forecast SCF against MODIS SCF is largely reduced. In DJF (December-January-February), the discrepancy between MODIS and CLM4 is broadly ameliorated in the lower-middle latitudes (23°-45°N). Only minimal modifications are made in the higher-middle (45°-66°N) and high latitudes, partly because of the agreement between model and observation when snow cover is nearly 100%. In some regions it also reveals that CLM4-modeled snow cover lacks heterogeneous features compared to MODIS. In MAM (March-April-May), adjustments to snow move poleward, mainly due to the northward movement of the snowline (i.e., where the largest SCF uncertainty is and SCF assimilation has the greatest impact). The effectiveness of data assimilation also varies with vegetation type, with mixed performance over forest regions and consistently good performance over grass, which can partly be explained by the linearity of the relationship between SCF and SWE in the model ensembles.
The updated snow depth was compared to the Canadian Meteorological Center (CMC) data. Differences between CMC and CLM4 are generally reduced in densely monitored regions.
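The ensemble adjustment Kalman filter used by DART applies a deterministic update observation by observation. A minimal sketch for a scalar observed variable (the generic textbook form, not CLM4/DART code) is:

```python
import numpy as np

def eakf_update(ens, obs, obs_err_var):
    """Deterministic (ensemble adjustment) Kalman filter update for a
    scalar observed variable: shift the ensemble mean to the Bayesian
    posterior mean and contract the spread to the posterior variance,
    without perturbing the observation."""
    prior_mean = ens.mean()
    prior_var = ens.var(ddof=1)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_err_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_err_var)
    # Linearly contract each member about the prior mean, then recenter.
    shrink = np.sqrt(post_var / prior_var)
    return post_mean + shrink * (ens - prior_mean)
```

Unlike perturbed-observation filters, this update introduces no sampling noise: the analysis ensemble's sample mean and variance match the Gaussian posterior exactly.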
Short-Range prediction of a Mediterranean Severe weather event using EnKF: Configuration tests
NASA Astrophysics Data System (ADS)
Carrio Carrio, Diego Saul; Homar Santaner, Víctor
2014-05-01
On the afternoon of 4th October 2007, severe damaging winds and torrential rainfall affected the Island of Mallorca. This storm produced F2-F3 tornadoes in the vicinity of Palma, with one person killed and estimated damages to property exceeding 10 M€. Several studies have analysed the meteorological context in which this episode unfolded, describing the formation of a train of multiple thunderstorms along a warm front and the evolution of a squall line organized from convective activity initiated offshore Murcia during that morning. Couhet et al. (2011) attributed the correct simulation of the convective system, and particularly its organization as a squall line, to the correct representation of a low-level convergence line over the Alboran Sea during the first hours of the day. The numerical prediction of mesoscale phenomena that initiate, organize and evolve over the sea is an extremely demanding challenge of great importance for coastal regions. In this study, we investigate the skill of a mesoscale ensemble data assimilation system to predict the severe phenomena that occurred on 4th October 2007. We use an Ensemble Kalman Filter which assimilates conventional (surface, radiosonde and AMDAR) data using the DART implementation from NCAR. On the one hand, we analyse the potential of the assimilation cycle to advect critical observational data towards decisive data-void areas over the sea. On the other hand, we assess the sensitivity of the ensemble products to the ensemble size, grid resolution, assimilation period and physics diversity in the mesoscale model. In particular, we focus on the effect of these numerical configurations on the representation of the convective activity and the precipitation field, as valuable predictands of high impact weather.
Results show that the 6-h EnKF assimilation period produces initial fields that successfully represent the environment in which initiation occurred, and thus the derived numerical predictions render improved evolutions of the squall line. Synthetic maps of severe convective risk reveal the improved predictability of the event using the EnKF as opposed to deterministic or downscaled configurations. A discussion of further improvements to the forecasting system is provided.
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
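For the linear-Gaussian case discussed above, the perturbed minimization has a closed-form solution, so the mapping from prior draws to posterior samples can be sketched directly. This is a generic randomized-maximum-likelihood sampler with names and shapes of our choosing, not the paper's code:

```python
import numpy as np

def rml_sample(G, d_obs, m_pr, C_M, C_D, n_samples, rng=None):
    """Randomized maximum likelihood with a linear observation operator G.
    Each sample minimizes a cost with perturbed prior draw m_j and
    perturbed data d_j; for linear G and Gaussian prior, the minimizers
    are exact posterior samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    K = C_M @ G.T @ np.linalg.inv(G @ C_M @ G.T + C_D)   # Kalman-type gain
    m_j = rng.multivariate_normal(m_pr, C_M, size=n_samples)   # prior draws
    d_j = rng.multivariate_normal(d_obs, C_D, size=n_samples)  # perturbed data
    # Each row is the minimizer of its perturbed least-squares cost.
    return m_j + (d_j - m_j @ G.T) @ K.T
```

For nonlinear G the same recipe still defines a mapping from prior to (approximate) posterior, but, as the abstract notes, the neglected Jacobian determinant makes the samples only approximately distributed according to the posterior.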
Mining SNPs from EST sequences using filters and ensemble classifiers.
Wang, J; Zou, Q; Guo, M Z
2010-05-04
Abundant single nucleotide polymorphisms (SNPs) provide the most complete information for genome-wide association studies. However, due to the bottleneck of manual discovery of putative SNPs and the inaccessibility of the original sequencing reads, it is essential to develop a more efficient and accurate computational method for automated SNP detection. We propose a novel computational method to rapidly find true SNPs in publicly available EST (expressed sequence tag) databases; this method is implemented as SNPDigger. EST sequences are clustered and aligned. SNP candidates are then obtained according to a measure of redundant frequency. Several new informative biological features, such as the structural neighbor profiles and the physical position of the SNP, were extracted from EST sequences, and the effectiveness of these features was demonstrated. An ensemble classifier, which employs a carefully selected feature set, was included for the imbalanced training data. The sensitivity and specificity of our method both exceeded 80% for human genetic data in cross-validation. Our method enables detection of SNPs from the user's own EST dataset and can be used on species for which there is no genome data. Our tests showed that this method can effectively guide SNP discovery in ESTs and will be useful to avoid and save the cost of biological analyses.
Reagan, Andrew J; Dubief, Yves; Dodds, Peter Sheridan; Danforth, Christopher M
2016-01-01
A thermal convection loop is an annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large forcing of heat, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate the uncertainty of the model to optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then we perform a twin model experiment with the computational fluid dynamics simulation of the loop using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows for the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low-dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.
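A bare-bones ETKF analysis step of the kind used in such twin experiments can be sketched as follows. This is the generic square-root form (cf. Bishop et al. 2001; Hunt et al. 2007), without the localization refinements the study examines:

```python
import numpy as np

def etkf_analysis(Xf, y, H, R):
    """Ensemble transform Kalman filter analysis step. Columns of Xf are
    ensemble members; y is the observation vector; H is a linear
    observation operator; R the observation-error covariance."""
    n, N = Xf.shape
    xbar = Xf.mean(axis=1, keepdims=True)
    Xp = Xf - xbar                               # state perturbations
    Yf = H @ Xf
    ybar = Yf.mean(axis=1, keepdims=True)
    Yp = Yf - ybar                               # obs-space perturbations
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((N - 1) * np.eye(N) + Yp.T @ Rinv @ Yp)
    wbar = Pa @ Yp.T @ Rinv @ (y.reshape(-1, 1) - ybar)
    # Symmetric square root gives a deterministic, mean-preserving transform.
    vals, vecs = np.linalg.eigh((N - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return xbar + Xp @ (wbar + W)
```

All linear algebra is done in the N-dimensional ensemble space, which is what makes the transform filter attractive for high-dimensional fields like the loop's temperature and velocity.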
Kim, Seongjung; Kim, Jongman; Ahn, Soonjae; Kim, Youngho
2018-04-18
Deaf people use sign or finger languages for communication, but these methods of communication are very specialized. For this reason, the deaf can suffer from social inequalities and financial losses due to their communication restrictions. In this study, we developed a finger language recognition algorithm based on an ensemble artificial neural network (E-ANN) using an armband system with 8-channel electromyography (EMG) sensors. The developed algorithm was composed of signal acquisition, filtering, segmentation, feature extraction and an E-ANN based classifier that was evaluated with the Korean finger language (14 consonants, 17 vowels and 7 numbers) in 17 subjects. E-ANN was categorized according to the number of classifiers (1 to 10) and the size of the training data (50 to 1500). The accuracy of the E-ANN-based classifier was obtained by 5-fold cross-validation and compared with an artificial neural network (ANN)-based classifier. As the number of classifiers (1 to 8) and the size of the training data (50 to 300) increased, the average accuracy of the E-ANN-based classifier increased and the standard deviation decreased. The optimal E-ANN was composed of eight classifiers and a training data size of 300, and the accuracy of the E-ANN was significantly higher than that of the general ANN.
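At prediction time, an E-ANN of the kind described reduces to a majority vote over the individual classifiers' label outputs. A minimal sketch of that combiner (the tie-breaking rule is our assumption, not the paper's):

```python
import numpy as np
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions (rows = classifiers,
    columns = samples) by majority vote, as in an ensemble-ANN combiner.
    Ties are broken deterministically by taking the lowest label."""
    predictions = np.asarray(predictions)
    voted = []
    for col in predictions.T:
        counts = Counter(col)
        top = max(counts.values())
        # min() over the tied labels makes the result deterministic.
        voted.append(min(lbl for lbl, c in counts.items() if c == top))
    return np.array(voted)
```

Each member network would be trained on its own resampled subset of the training data, which is why accuracy rises and variance falls as classifiers are added.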
Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K
2018-06-01
Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion, and electrode drifts, whose effective elimination remains an open problem. A methodology combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) is proposed to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, immediately postprandial, and postprandial 2 h after food, for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation of the proposed ICA-EEMD method reveals its ability to provide higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.
A Symbiotic Framework for coupling Machine Learning and Geosciences in Prediction and Predictability
NASA Astrophysics Data System (ADS)
Ravela, S.
2017-12-01
In this presentation we review the two directions of a symbiotic relationship between machine learning and the geosciences in relation to prediction and predictability. In the first direction, we develop ensemble, information-theoretic and manifold-learning frameworks to adaptively improve state and parameter estimates in nonlinear, high-dimensional, non-Gaussian problems, showing in particular that tractable variational approaches can be produced. We demonstrate these applications in the context of autonomous mapping of environmental coherent structures and other idealized problems. In the reverse direction, we show that data assimilation, particularly probabilistic approaches to filtering and smoothing, offers a novel and useful way to train neural networks, and serves as a better basis than gradient-based approaches when we must quantify uncertainty associated with nonlinear, chaotic processes. In many inference problems in the geosciences we seek to build reduced models to characterize local sensitivities, adjoints or other mechanisms that propagate innovations and errors. Here, the use of neural approaches for such propagation, trained using ensemble data assimilation, provides a novel framework. Through these two examples of inference problems in the earth sciences, we show that not only is learning useful for broadening existing methodology, but, in reverse, geophysical methodology can be used to influence paradigms in learning.
Satellite Data Assimilation within KIAPS-LETKF system
NASA Astrophysics Data System (ADS)
Jo, Y.; Lee, S., Sr.; Cho, K.
2016-12-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing an ensemble data assimilation system using a four-dimensional local ensemble transform Kalman filter (LETKF; Hunt et al., 2007) within the KIAPS Integrated Model (KIM), referred to as "KIAPS-LETKF". The KIAPS-LETKF system was successfully evaluated with various Observing System Simulation Experiments (OSSEs) with the NCAR Community Atmospheric Model - Spectral Element (Kang et al., 2013), which has fully unstructured quadrilateral meshes based on the cubed-sphere grid, the same grid system as KIM. Recently, assimilation of real observations has been conducted within the KIAPS-LETKF system with four-dimensional covariance functions over the 6-hr assimilation window. Conventional (e.g., sonde, aircraft, and surface) and satellite (e.g., AMSU-A, IASI, GPS-RO, and AMV) observations have been provided by the KIAPS Package for Observation Processing (KPOP). Ingestion of AMV was found most beneficial for wind speed prediction, while for temperature prediction the improvement in assimilation is mostly due to ingestion of AMSU-A and IASI. However, some degradation is present in the upper stratosphere for GPS-RO, even though GPS-RO leads to positive impacts on the analysis and forecasts. We plan to test the bias correction method and several vertical localization strategies for radiance observations to improve analysis and forecast impacts.
Automatic adjustment of astrochronologic correlations
NASA Astrophysics Data System (ADS)
Zeeden, Christian; Kaboth, Stefanie; Hilgen, Frederik; Laskar, Jacques
2017-04-01
Here we present an algorithm for the automated adjustment and optimisation of correlations between proxy data and an orbital tuning target (or similar datasets, e.g. ice models) for the R environment (R Development Core Team 2008), building on the 'astrochron' package (Meyers et al. 2014). The basis of this approach is an initial tuning on orbital (precession, obliquity, eccentricity) scale. We apply filters of orbital frequency ranges (related to e.g. precession, obliquity or eccentricity) to the data and compare these filtered components to an ensemble of target data, which may consist of e.g. different combinations of obliquity and precession, different phases of precession and obliquity, a mix of orbital and other data (e.g. ice models), or different orbital solutions. This approach allows for the identification of an ideal mix of precession and obliquity to be used as a tuning target. In addition, the uncertainty related to different tuning tie points (and also the precession and obliquity contributions of the tuning target) can easily be assessed. Our message is to suggest an initial tuning and then obtain a reproducible tuned time scale, avoiding arbitrarily chosen tie points and replacing these by automatically chosen ones representing filter maxima (or minima). We present and discuss the above outlined approach and apply it to artificial and geological data. Artificial data are assessed to find optimal filter settings; real datasets are used to demonstrate the possibilities of such an approach. References: Meyers, S.R. (2014). Astrochron: An R Package for Astrochronology. http://cran.r-project.org/package=astrochron R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.
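The core of the approach, band-pass filtering a proxy record and scoring it against an ensemble of candidate precession/obliquity mixes, can be sketched as follows. This is a Python illustration rather than the authors' R/astrochron implementation, and the frequency bands and mixing weights are illustrative:

```python
import numpy as np

def bandpass(x, dt, f_lo, f_hi):
    """Zero-phase FFT band-pass filter: keep only Fourier components
    with frequency inside [f_lo, f_hi] (e.g. the precession band)."""
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(len(x), dt)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def best_target_mix(proxy_filt, precession, obliquity, weights):
    """Correlate the filtered proxy against candidate targets
    w*precession + (1-w)*obliquity and return the best-scoring weight."""
    scores = []
    for w in weights:
        target = w * precession + (1.0 - w) * obliquity
        scores.append(np.corrcoef(proxy_filt, target)[0, 1])
    return weights[int(np.argmax(scores))], max(scores)
```

In the full algorithm the same scoring would also run over phase-shifted targets and alternative orbital solutions, and the tie points would then be placed at filter maxima (or minima) of the winning target.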
A Local Realistic Reconciliation of the EPR Paradox
NASA Astrophysics Data System (ADS)
Sanctuary, Bryan
2014-03-01
The exact violation of Bell's Inequalities is obtained with a local realistic model for spin. The model treats one particle that comprises a quantum ensemble and simulates the EPR data one coincidence at a time as a product state. Such a spin is represented by the operators σx, iσy, σz in its body frame rather than the usual set σX, σY, σZ in the laboratory frame. This model, assumed valid in the absence of a measuring probe, contains both quantum polarizations and coherences. Each carries half the EPR correlation, but only half can be measured using coincidence techniques. The model further predicts the filter angles that maximize the spin correlation in EPR experiments.
Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F
2009-03-01
Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.
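The extension to parameter estimation mentioned above is commonly done by state augmentation: stack the parameters under the state and let the ensemble cross-covariances update both. The sketch below uses a generic stochastic EnKF rather than the authors' local ensemble transform filter, and all names and shapes are our own:

```python
import numpy as np

rng = np.random.default_rng(3)

def augmented_enkf_update(Xf, theta_f, y, H_state, R):
    """Joint state/parameter update by state augmentation. Columns of
    Xf (states) and theta_f (parameters) are ensemble members; only the
    state is observed, so H is padded with zeros over the parameters."""
    Z = np.vstack([Xf, theta_f])                       # augmented ensemble
    n_obs = H_state.shape[0]
    H = np.hstack([H_state, np.zeros((n_obs, theta_f.shape[0]))])
    N = Z.shape[1]
    Zp = Z - Z.mean(axis=1, keepdims=True)
    Yp = H @ Zp
    # Gain from sample covariances: K = Pzy (Pyy + R)^-1.
    K = (Zp @ Yp.T) @ np.linalg.inv(Yp @ Yp.T + (N - 1) * R)
    pert = rng.multivariate_normal(np.zeros(n_obs), R, size=N).T
    Za = Z + K @ (y.reshape(-1, 1) + pert - H @ Z)
    return Za[:Xf.shape[0]], Za[Xf.shape[0]:]
```

Parameters correlated with the observed fields (here, with the temperature and velocity recovered from shadowgraphs) are pulled toward values consistent with the data, even though they are never observed directly.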
NASA Astrophysics Data System (ADS)
Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu
2017-02-01
In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To conserve hardware resources, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve the generalization ability, a multi-scale Gabor filter is used to preprocess the input image. The final deviation result is obtained by applying a majority vote strategy to all the classifiers' results. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while it is climbing the stairs.
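A multi-scale Gabor front end of the kind described can be sketched as a small filter bank whose mean response energies serve as features for the downstream classifiers. Kernel size and parameters below are illustrative, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel: a cosine carrier under a Gaussian envelope,
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / wavelength))

def gabor_energy(img, wavelengths, thetas, sigma=2.0, size=9):
    """Multi-scale, multi-orientation Gabor features: mean response
    energy per (scale, orientation), via FFT convolution."""
    feats = []
    for lam in wavelengths:
        for th in thetas:
            k = gabor_kernel(size, lam, th, sigma)
            K = np.fft.rfft2(k, img.shape)
            resp = np.fft.irfft2(np.fft.rfft2(img) * K, img.shape)
            feats.append(np.mean(resp**2))
    return np.array(feats)
```

Because each (scale, orientation) pair responds selectively to edges of matching frequency and direction, the feature vector captures the orientation of stair edges regardless of lighting, which is what gives the detector its generalization ability.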
The Impact of AMSR-E Soil Moisture Assimilation on Evapotranspiration Estimation
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Kumar, Sujay; Mocko, David; Tian, Yudong
2012-01-01
An assessment of ET estimates for current LDAS systems is provided along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the Ensemble Kalman Filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons over the NLDAS-2 domain with two global reference ET products, one based on interpolated flux tower data and one from a new satellite ET algorithm, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.
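The stochastic (perturbed-observation) EnKF update used by such land data assimilation systems can be sketched generically; this is a textbook form with our own variable names, not Land Information System code:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(Xf, y, H, R):
    """Perturbed-observation EnKF analysis. Columns of Xf are ensemble
    members (e.g. soil moisture states); each member assimilates a
    randomly perturbed copy of the observation vector y."""
    n, N = Xf.shape
    Xp = Xf - Xf.mean(axis=1, keepdims=True)
    Yp = H @ Xp
    Pyy = Yp @ Yp.T / (N - 1) + R          # innovation covariance
    Pxy = Xp @ Yp.T / (N - 1)              # state-obs cross covariance
    K = Pxy @ np.linalg.inv(Pyy)
    Y_obs = y.reshape(-1, 1) + rng.multivariate_normal(
        np.zeros(len(y)), R, size=N).T     # perturbed observations
    return Xf + K @ (Y_obs - H @ Xf)
```

The observation perturbations keep the analysis ensemble spread statistically consistent with the posterior, at the cost of some sampling noise relative to deterministic square-root filters.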
NASA Astrophysics Data System (ADS)
Wu, Guocan; Zheng, Xiaogu; Dan, Bo
2016-04-01
Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2 log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.
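Multiplicative forecast-error inflation of this kind can be sketched as follows. For brevity, a simple moment match to the innovation variance stands in here for the paper's -2 log-likelihood criterion:

```python
import numpy as np

def inflation_factor(innovations, HPHt, R):
    """Moment-based estimate of the multiplicative inflation lambda,
    chosen so that the innovation variance matches lambda*HPH^T + R
    (a simpler stand-in for a likelihood-based criterion)."""
    lam = (np.mean(innovations**2) - R) / HPHt
    return max(lam, 1.0)   # never deflate below the raw ensemble spread

def inflate(ens, lam):
    """Inflate ensemble perturbations about the mean by sqrt(lambda),
    which multiplies the sample forecast-error variance by lambda."""
    m = ens.mean()
    return m + np.sqrt(lam) * (ens - m)
```

When the ensemble is overconfident (innovations systematically larger than the spread explains), lambda exceeds 1 and the spread is widened before the analysis step, which is what reduces the analysis error in the experiments.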
NASA Astrophysics Data System (ADS)
De Vleeschouwer, Niels; Verhoest, Niko E. C.; Gobeyn, Sacha; De Baets, Bernard; Verwaeren, Jan; Pauwels, Valentijn R. N.
2015-04-01
The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological modeling. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the finer temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typically small integration volume of in situ measurements, and the often large spacing between monitoring locations. This causes only a small part of the modeling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically fixed in time. Generally, e.g. when applying data assimilation, maximizing the observed information under given circumstances will lead to better qualitative and quantitative insight into the hydrological system. It is therefore advisable to perform a prior analysis in order to select those monitoring locations which are most predictive for the unobserved modeling domain. This research focuses on optimizing the configuration of a soil moisture monitoring network in the catchment of the Bellebeek, situated in Belgium. A recursive algorithm, strongly linked to the equations of the Ensemble Kalman Filter, has been developed to select the most predictive locations in the catchment. The basic idea behind the algorithm is twofold. On the one hand, a minimization of the modeled soil moisture ensemble error covariance between the different monitoring locations is intended. This causes the monitoring locations to be as independent as possible regarding the modeled soil moisture dynamics. On the other hand, the modeled soil moisture ensemble error covariance between the monitoring locations and the unobserved modeling domain is maximized. The latter causes a selection of monitoring locations which are more predictive towards unobserved locations.
The main factors that will influence the outcome of the algorithm are the following: the choice of the hydrological model, the uncertainty model applied for ensemble generation, the general wetness of the catchment during the period over which the error covariance is computed, etc. In this research the influence of the latter two is examined in more depth. Furthermore, the optimal network configuration resulting from the newly developed algorithm is compared to network configurations obtained by two other algorithms. The first algorithm is based on a temporal stability analysis of the modeled soil moisture in order to identify catchment-representative monitoring locations with regard to average conditions. The second algorithm involves the clustering of available spatially distributed data (e.g. land cover and soil maps) that is not obtained by hydrological modeling.
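The twofold selection idea, maximize ensemble error covariance with the unobserved domain while minimizing covariance among the chosen stations, can be sketched as a greedy loop over candidate cells. The exact scoring rule below is our assumption, not the paper's recursion:

```python
import numpy as np

def select_stations(ens, candidates, n_select, penalty=1.0):
    """Greedy covariance-based network design. `ens` is
    (n_members, n_cells); `candidates` are cell indices. Each step picks
    the candidate whose ensemble error covariance with the rest of the
    domain is largest, penalised by its covariance with stations
    already chosen (so the network stays non-redundant)."""
    C = np.cov(ens, rowvar=False)          # cell-by-cell error covariance
    chosen = []
    for _ in range(n_select):
        best, best_score = None, -np.inf
        for c in candidates:
            if c in chosen:
                continue
            gain = np.sum(np.abs(C[c])) - np.abs(C[c, c])
            redundancy = sum(np.abs(C[c, s]) for s in chosen)
            score = gain - penalty * redundancy
            if score > best_score:
                best, best_score = c, score
        chosen.append(best)
    return chosen
```

With two internally correlated but mutually independent regions, the greedy rule picks one station from each region rather than two redundant neighbours, which is the behaviour the network-design criterion is after.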
NASA Astrophysics Data System (ADS)
Yoshimura, K.
2012-04-01
In the present study, an isotope-incorporated GCM simulation for AD 1871 to AD 2008, nudged toward the so-called "20th Century Reanalysis (20CR)" atmospheric fields, is conducted. Before the long-term integration, a method to downscale ensemble mean fields is proposed, since 20CR is a product of 56-member ensemble Kalman filter data assimilation. The method applies a correction to one of the ensemble members in such a way that its seasonal mean equals that of the ensemble mean, and then the corrected member is input into the isotope-incorporated GCM (i.e., IsoGSM) with the global spectral nudging technique. Use of the method clearly improves the skill relative to the cases of using only a single member or of using the ensemble means; the skill becomes equivalent to when 3-6 members are directly used. By comparison with the GNIP precipitation isotope database, it is confirmed that the 20C Isotope Reanalysis's performance for the latter half of the 20th century is comparable to that of other recent studies. For further comparisons over older periods, proxy records including corals, tree rings, and tropical ice cores are used. First, for corals: the 20C Isotope Reanalysis successfully reproduced the δ18O in surface sea water recorded in corals at many sites covering large parts of the global tropical oceans. The comparison suggests that coral records represent past hydrologic balance information where interannual variability in precipitation is large. Secondly, for tree rings: δ18O of cellulose extracted from the annual rings of the long-lived Bristlecone Pine from White Mountain in Southern California is well reproduced by the 20C Isotope Reanalysis. Similar good performance is obtained for Cambodia, too.
However, the mechanisms driving the isotopic variations differ between California and Cambodia; for California, the Hadley cell's expansion, the consequent meridional shift of the subsiding dry zone and changes in the water vapor source are the dominant controls, whereas in Cambodia a more direct influence of ENSO, associated with the Walker circulation, is the primary control on isotopes. Thirdly, for tropical ice cores: model reproduction of tropical ice core records is much more difficult than for the other two proxies. In addition to horizontally high-resolution simulation of isotopes, consideration of the isotopic fractionations associated with snow sublimation, snow melt/refreeze, and horizontal transport of snow by blizzards is necessary.
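The downscaling correction described in this record, adjusting one ensemble member so that its seasonal means match those of the ensemble mean while keeping the member's higher-frequency variability, is simple to sketch (block length and names are our own):

```python
import numpy as np

def correct_member(member, ens_mean, season_len):
    """Shift each seasonal block of one ensemble member by a constant so
    the block mean equals the seasonal mean of the ensemble mean. The
    member's sub-seasonal variability is left intact."""
    out = member.astype(float).copy()
    for s in range(0, len(member), season_len):
        block = slice(s, s + season_len)
        out[block] += ens_mean[block].mean() - member[block].mean()
    return out
```

The corrected member keeps realistic synoptic variability (which an ensemble mean smooths away) while inheriting the ensemble mean's more reliable seasonal signal, which is why it outperforms both a raw single member and the plain ensemble mean as nudging input.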
Weak constrained localized ensemble transform Kalman filter for radar data assimilation
NASA Astrophysics Data System (ADS)
Janjic, Tijana; Lange, Heiner
2015-04-01
Applications on convective scales require data assimilation with a numerical model at single-digit horizontal resolution (in km) and time-evolving error covariances. The ensemble Kalman filter (EnKF) algorithm incorporates these two requirements. However, some challenges for convective-scale applications remain unresolved when using the EnKF approach. These include the need on convective scales to estimate fields that are nonnegative (such as rain, graupel and snow) and the use of data sets, such as radar reflectivity or cloud products, that have the same property. What underlies these examples are errors that are non-Gaussian in nature, causing a problem for the EnKF, which uses Gaussian error assumptions to produce estimates from the previous forecast and the incoming data. Since proper estimates of hydrometeors are crucial for prediction on convective scales, the question arises whether the EnKF method can be modified to improve these estimates and whether there is a way of optimizing the use of radar observations to initialize NWP models, given the importance of this data set for the prediction of convective storms. In order to deal with non-Gaussian errors, different approaches can be taken in the EnKF framework. For example, variables can be transformed by assuming the relevant state variables follow an appropriate pre-specified non-Gaussian distribution, such as the lognormal or truncated Gaussian distribution, or, more generally, by carrying out a parameterized change of state variables known as Gaussian anamorphosis. In recent work by Janjic et al. (2014), it was shown on a simple example how conservation of mass could be beneficial for assimilation of positive variables. The method developed in that paper outperformed the EnKF as well as the EnKF with the lognormal change of variables. As argued in the paper, the reason is that each of these methods preserves mass (EnKF) or positivity (lognormal EnKF), but not both.
Only once both positivity and mass were preserved in a new algorithm were good estimates of the fields obtained. The alternative to the strong-constraint formulation in Janjic et al. (2014) is to modify the LETKF algorithm to take physical properties into account only approximately. In this work we include weak constraints in the LETKF algorithm for the estimation of hydrometeors. The benefit for prediction is illustrated in an idealized setup (Lange and Craig, 2013). This setup uses the non-hydrostatic COSMO model with a 2 km horizontal resolution, and the LETKF as implemented in the KENDA (Km-scale Ensemble Data Assimilation) system of the German Weather Service (Reich et al. 2011). Due to the Gaussian assumptions that underlie the LETKF algorithm, the analyses of water species become negative at some grid points of the COSMO model. These values are currently set to zero in KENDA after the LETKF analysis step. Tests within this setup show that such a procedure introduces a bias in the analysis ensemble with respect to the truth, which increases in time due to the cycled data assimilation. The benefits of including the constraints in the LETKF are illustrated by the bias values during assimilation and prediction.
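To see why plain clipping biases the analysis, note that zeroing negative values adds mass. A cheap post-hoc contrast (our illustration, not the KENDA scheme or the weak-constraint LETKF itself) clips and then rescales the positive part so the domain-total mass is conserved:

```python
import numpy as np

def clip_conserve_mass(field):
    """Set negative hydrometeor values to zero, then rescale the positive
    part so the domain-total mass is unchanged. Plain clipping alone
    would systematically add mass and thus bias the cycled analysis."""
    total = field.sum()
    clipped = np.clip(field, 0.0, None)
    pos = clipped.sum()
    if pos <= 0.0 or total <= 0.0:
        return np.zeros_like(field)    # no positive mass to redistribute
    return clipped * (total / pos)
```

Enforcing the constraints inside the analysis, as the weak-constraint LETKF does, is preferable because the correction is then weighted against the observations rather than applied blindly afterwards; the sketch only makes the mass-versus-positivity trade-off concrete.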
2015-01-01
Complex RNA structures are constructed from helical segments connected by flexible loops that move spontaneously and in response to binding of small molecule ligands and proteins. Understanding the conformational variability of RNA requires the characterization of the coupled time evolution of interconnected flexible domains. To elucidate the collective molecular motions and explore the conformational landscape of the HIV-1 TAR RNA, we describe a new methodology that utilizes energy-minimized structures generated by the program “Fragment Assembly of RNA with Full-Atom Refinement (FARFAR)”. We apply structural filters in the form of experimental residual dipolar couplings (RDCs) to select a subset of discrete energy-minimized conformers and carry out principal component analyses (PCA) to corroborate the choice of the filtered subset. We use this subset of structures to calculate solution T1 and T1ρ relaxation times for 13C spins in multiple residues in different domains of the molecule using two simulation protocols that we previously published. We match the experimental T1 times to within 2% and the T1ρ times to within less than 10% for helical residues. These results introduce a protocol to construct viable dynamic trajectories for RNA molecules that accord well with experimental NMR data and support the notion that the motions of the helical portions of this small RNA can be described by a relatively small number of discrete conformations exchanging over time scales longer than 1 μs. PMID:24479561
Statistical coding and decoding of heartbeat intervals.
Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru
2011-01-01
The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate, are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
Enhancing Data Assimilation by Evolutionary Particle Filter and Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Moradkhani, H.; Abbaszadeh, P.; Yan, H.
2016-12-01
Particle Filters (PFs) have received increasing attention from researchers in different disciplines of the hydro-geosciences as an effective method to improve model predictions in nonlinear and non-Gaussian dynamical systems. Dual state and parameter estimation by means of data assimilation in hydrology and geoscience has evolved since 2005 from the SIR-PF to the PF-MCMC, and now to the most effective and robust framework: an evolutionary PF approach based on a Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC), the so-called EPF-MCMC. In this framework, the posterior distribution undergoes an evolutionary process to update an ensemble of prior states so that it more closely resembles the realistic posterior probability distribution. The premise of this approach is that the particles move to optimal positions through GA optimization coupled with MCMC, increasing the number of effective particles; hence particle degeneracy is avoided while particle diversity is improved. The proposed algorithm is applied to a conceptual and highly nonlinear hydrologic model, and the effectiveness, robustness and reliability of the method in jointly estimating the states and parameters, and also in reducing the uncertainty, is demonstrated for a few river basins across the United States.
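As context for the progression from the SIR-PF described above, a minimal bootstrap (SIR) particle filter on a toy autoregressive model might look as follows. The model, noise levels, and multinomial resampling scheme are illustrative only, not the authors' hydrologic setup; resampling is the standard remedy for the weight degeneracy that EPF-MCMC further mitigates.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_filter(obs, n_particles=1000, q=0.5, r=0.5):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns posterior-mean estimates."""
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in obs:
        # propagate each particle through the model dynamics
        particles = 0.9 * particles + q * rng.standard_normal(n_particles)
        # weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(np.dot(w, particles))
        # multinomial resampling counters weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# synthetic truth from the same AR(1) model, plus noisy observations
x, truth = 0.0, []
for _ in range(100):
    x = 0.9 * x + 0.5 * rng.standard_normal()
    truth.append(x)
truth = np.array(truth)
obs = truth + 0.5 * rng.standard_normal(100)
est = sir_filter(obs)
```

With the model correctly specified, the filtered estimate is typically closer to the truth than the raw observations.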
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to handle the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. A comparison study based on the real-world experiments and statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods. PMID:27187405
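The heavy-tailed noise assumption above rests on the multivariate Student's t-distribution; one standard way to draw such samples is as a Gaussian scale mixture (a Gaussian divided by the square root of a scaled chi-square variable). This generic sketch is not the authors' implementation; the dimension and degrees of freedom are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def multivariate_t(mean, cov, df, size):
    """Draw `size` samples from a multivariate Student's t-distribution
    via the Gaussian scale-mixture representation."""
    mean = np.asarray(mean, dtype=float)
    g = rng.multivariate_normal(np.zeros(len(mean)), cov, size)
    u = rng.chisquare(df, size) / df
    return mean + g / np.sqrt(u)[:, None]

samples = multivariate_t([0.0, 0.0], np.eye(2), df=4, size=20_000)
```

The resulting samples have markedly heavier tails than a Gaussian with the same covariance, which is the property exploited to model outlier-prone sensor noise.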
BHR equations re-derived with immiscible particle effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarzkopf, John Dennis; Horwitz, Jeremy A.
2015-05-01
Compressible and variable-density turbulent flows with dispersed-phase effects are found in many applications ranging from combustion to cloud formation. These types of flows are among the most challenging to simulate. While the exact equations governing a system of particles and fluid are known, computational resources limit the scale and detail that can be simulated in this type of problem. Therefore, a common method is to simulate averaged versions of the flow equations, which still capture the salient physics and are relatively less computationally expensive. Besnard developed such a model for variable-density miscible turbulence, where ensemble-averaging was applied to the flow equations to yield a set of filtered equations. Besnard further derived transport equations for the Reynolds stresses, the turbulent mass flux, and the density-specific-volume covariance, to help close the filtered momentum and continuity equations. We re-derive the exact BHR closure equations, which include integral terms owing to immiscible effects. Physical interpretations of the additional terms are proposed along with simple models. The goal of this work is to extend the BHR model to allow for the simulation of turbulent flows where an immiscible dispersed phase is non-trivially coupled with the carrier phase.
Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh
2017-01-01
The motion generated while capturing the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) for removing artifacts along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further analyse and improve the filtering performance and reduce the filter computation time under highly noisy environments. This new CCA approach is based on the Gaussian elimination method, which calculates the correlation coefficients using the backslash operation, and is designed for EEG signal motion artifact removal. Gaussian elimination is used to solve the linear equations for the eigenvalues, which reduces the computation cost of the CCA method. This novel proposed method is tested against currently available artifact removal techniques using EEMD-CCA and the wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as the del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
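The underlying idea, computing canonical correlations by solving linear systems (the "backslash" operation) rather than forming explicit matrix inverses, can be sketched generically as follows. This is a textbook CCA formulation, not the authors' EEG pipeline; `numpy.linalg.solve` plays the role of MATLAB's backslash.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X and Y (n x p arrays),
    built with linear solves instead of explicit inverses."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X
    Cyy = Y.T @ Y
    Cxy = X.T @ Y
    # M = Cxx^{-1} Cxy Cyy^{-1} Cyx, assembled via solves ("backslash")
    A = np.linalg.solve(Cxx, Cxy)    # Cxx \ Cxy
    B = np.linalg.solve(Cyy, Cxy.T)  # Cyy \ Cyx
    eigvals = np.linalg.eigvals(A @ B).real
    return np.sqrt(np.clip(np.sort(eigvals)[::-1], 0.0, 1.0))
```

Solving the systems directly (Gaussian elimination under the hood) avoids the cost and numerical instability of explicit matrix inversion, which is the efficiency argument made above.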
A comparison of breeding and ensemble transform vectors for global ensemble generation
NASA Astrophysics Data System (ADS)
Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan
2012-02-01
To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the anomaly correlation coefficient (ACC), a deterministic measure of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuring the best ensemble prediction system for operational use.
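The CRPS used in verifications like the one above can be computed for a raw ensemble with the standard kernel (energy) form, CRPS = E|X - y| - 0.5 E|X - X'|; this generic sketch is not tied to the T213/L31 systems.

```python
import numpy as np

def crps_ensemble(members, y):
    """Kernel-form CRPS of an ensemble forecast against a scalar verification y.
    Lower is better; equals |m - y| for a collapsed (deterministic) ensemble."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - y).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

# a 2-member ensemble straddling the observation
print(crps_ensemble([0.0, 2.0], 1.0))  # 0.5
```

The second term rewards ensemble spread, so an ensemble that brackets the verification scores better than a deterministic forecast with the same mean error.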
The Ensembl REST API: Ensembl Data for Any Language.
Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul
2015-01-01
We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
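A minimal client for the server described above might look as follows, using only the Python standard library and the documented `/lookup/id` endpoint with its `content-type` parameter. The gene stable ID in the test is an arbitrary example; error handling and rate limiting are omitted.

```python
import json
import urllib.request

SERVER = "https://rest.ensembl.org"

def lookup_url(stable_id, expand=False):
    """Build the request URL for the /lookup/id endpoint."""
    url = f"{SERVER}/lookup/id/{stable_id}?content-type=application/json"
    if expand:
        url += ";expand=1"
    return url

def lookup(stable_id):
    """Fetch metadata for an Ensembl stable ID as a dict (requires network)."""
    req = urllib.request.Request(
        lookup_url(stable_id), headers={"Accept": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())
```

Because responses are plain JSON, the same pattern works from any language with an HTTP client, which is the point of the REST interface.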
Ensembl BioMarts: a hub for data retrieval across taxonomic space.
Kinsella, Rhoda J; Kähäri, Andreas; Haider, Syed; Zamora, Jorge; Proctor, Glenn; Spudich, Giulietta; Almeida-King, Jeff; Staines, Daniel; Derwent, Paul; Kerhornou, Arnaud; Kersey, Paul; Flicek, Paul
2011-01-01
For a number of years the BioMart data warehousing system has proven to be a valuable resource for scientists seeking a fast and versatile means of accessing the growing volume of genomic data provided by the Ensembl project. The launch of the Ensembl Genomes project in 2009 complemented the Ensembl project by utilizing the same visualization, interactive and programming tools to provide users with a means for accessing genome data from a further five domains: protists, bacteria, metazoa, plants and fungi. The Ensembl and Ensembl Genomes BioMarts provide a point of access to the high-quality gene annotation, variation data, functional and regulatory annotation and evolutionary relationships from genomes spanning the taxonomic space. This article aims to give a comprehensive overview of the Ensembl and Ensembl Genomes BioMarts as well as some useful examples and a description of current data content and future objectives. Database URLs: http://www.ensembl.org/biomart/martview/; http://metazoa.ensembl.org/biomart/martview/; http://plants.ensembl.org/biomart/martview/; http://protists.ensembl.org/biomart/martview/; http://fungi.ensembl.org/biomart/martview/; http://bacteria.ensembl.org/biomart/martview/.
Parametric reduced models for the nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Harlim, John; Li, Xiantao
2015-05-01
Reduced models for the (defocusing) nonlinear Schrödinger equation are developed. In particular, we develop reduced models that involve only the low-frequency modes, given noisy observations of these modes. The ansatz of the reduced parametric models is obtained by employing a rational approximation and a colored-noise approximation, respectively, on the memory terms and the random noise of a generalized Langevin equation that is derived from the standard Mori-Zwanzig formalism. The parameters in the resulting reduced models are inferred from noisy observations with a recently developed ensemble Kalman filter-based parametrization method. The forecasting skill across different temperature regimes is verified by comparing the moments up to order four, a two-time correlation function statistic, and marginal densities of the coarse-grained variables.
Simultaneous narrowband ultrasonic strain-flow imaging
NASA Astrophysics Data System (ADS)
Tsou, Jean K.; Mai, Jerome J.; Lupotti, Fermin A.; Insana, Michael F.
2004-04-01
We summarize new research aimed at forming spatially and temporally registered combinations of strain and color-flow images using echo data recorded from a commercial ultrasound system. Applications include diagnosis of vascular diseases and tumor malignancies. The challenge is to meet the diverse needs of each measurement. The approach is to first apply eigenfilters that separate echo components from moving tissues and blood flow, and then estimate blood velocity and tissue displacement from the filtered IQ-signal phase modulations. At the cost of a lower acquisition frame rate, we find that the autocorrelation strain estimator yields higher-resolution strain estimates than the cross-correlator, since estimates are made from ensembles at a single point in space. The technique is applied to in vivo carotid imaging to demonstrate the sensitivity of strain-flow vascular imaging.
NASA Astrophysics Data System (ADS)
Botto, Anna; Camporese, Matteo
2017-04-01
Hydrological models allow scientists to predict the response of water systems under varying forcing conditions. In particular, many physically based integrated models have recently been developed to understand the fundamental hydrological processes occurring at the catchment scale. However, the use of this class of hydrological models is still relatively limited, as their prediction skill heavily depends on reliable parameter estimation, an operation that is never trivial, being normally affected by large uncertainty and requiring huge computational effort. The objective of this work is to test the potential of data assimilation to be used as an inverse modeling procedure for the broad class of integrated hydrological models. To pursue this goal, a Bayesian data assimilation (DA) algorithm based on a Monte Carlo approach, namely the ensemble Kalman filter (EnKF), is combined with the CATchment HYdrology (CATHY) model. In this approach, input variables (atmospheric forcing, soil parameters, initial conditions) are statistically perturbed, providing an ensemble of realizations aimed at taking into account the uncertainty involved in the process. Each realization is propagated forward by the CATHY hydrological model within a parallel R framework developed to reduce the computational effort. When measurements are available, the EnKF is used to update both the system state and soil parameters. In particular, four different assimilation scenarios are applied to test the capability of the modeling framework: first, only pressure head or water content is assimilated; then the combination of both; and finally both pressure head and water content together with the subsurface outflow. To demonstrate the effectiveness of the approach in a real-world scenario, an artificial hillslope was designed and built to provide real measurements for the DA analyses. 
The experimental facility, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with maximum height of 3.5 m, length of 6 m and width of 2 m. The hillslope is equipped with six pairs of tensiometers and water content reflectometers, to monitor the pressure head and soil moisture content, respectively. Moreover, two tipping bucket flow gages were used to measure the surface and subsurface discharges at the outlet. A 12-day long experiment was carried out, during which a series of four rainfall events with constant rainfall rate were generated, interspersed with phases of drainage. During the experiment, measurements were collected at a relatively high resolution of 0.5 Hz. We report here on the capability of the data assimilation framework to estimate sets of plausible parameters that are consistent with the experimental setup.
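The EnKF update at the heart of frameworks like the one above can be sketched in its stochastic (perturbed-observation) form. The toy state, observation operator, and error statistics below are illustrative stand-ins, not CATHY quantities; in the real system the state vector also carries the soil parameters being estimated.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ens, H, y, R):
    """Stochastic EnKF analysis step.
    ens: (n_state, n_members) forecast ensemble; H: observation operator;
    y: observation vector; R: observation-error covariance."""
    n = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)   # ensemble anomalies
    HA = H @ A
    Pxy = A @ HA.T / (n - 1)                    # state-obs cross covariance
    Pyy = HA @ HA.T / (n - 1) + R               # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    # perturb the observations so the analysis spread is statistically correct
    Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), n))
    return ens + K @ (Y - H @ ens)

# toy example: one directly observed state variable
prior = rng.standard_normal((1, 2000))   # prior mean ~0, variance ~1
H = np.array([[1.0]])
y = np.array([2.0])
R = np.array([[1.0]])
analysis = enkf_update(prior, H, y, R)
```

With equal prior and observation variances the analysis mean lands about halfway between the prior mean and the observation, and the ensemble spread contracts, which is exactly the state-and-parameter update behavior exploited above.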
NASA Astrophysics Data System (ADS)
Devers, Alexandre; Vidal, Jean-Philippe; Lauvernet, Claire; Graff, Benjamin
2017-04-01
The knowledge of historical French weather has recently been improved through the development of the SCOPE (Spatially COherent Probabilistic Extended) Climate reconstruction, a probabilistic high-resolution daily reconstruction of precipitation and temperature covering the period 1871-2012 and based on the statistical downscaling of the Twentieth Century Reanalysis (Caillouet et al., 2016). However, historical surface observations - even though rather scarce and sparse - do exist from at least the beginning of the period considered, and this information does not currently feed SCOPE Climate reconstructions. The goal of this study is therefore to assimilate these historical observations into SCOPE Climate reconstructions in order to build a 150-year meteorological reanalysis over France. This study considers "offline" data assimilation methods - Kalman filtering methods like the Ensemble Square Root Filter - that have successfully been used in recent paleoclimate studies, i.e. at much larger temporal and spatial scales (see e.g. Bhend et al., 2012). These methods are here applied for reconstructing the 8-24 August 1893 heat wave in France, using all available daily temperature observations from that period. Temperatures reached that summer were indeed compared at the time to those of Senegal (Garnier, 2012). Results show a spatially coherent view of the heat wave at the national scale as well as a reduced uncertainty compared to initial meteorological reconstructions, thus demonstrating the added value of data assimilation. In order to assess the performance of assimilation methods in a more recent context, these methods are also used to reconstruct the well-known 3-14 August 2003 heat wave by using (1) all available stations, and (2) the same station density as in August 1893, the rest of the observations being saved for validation. 
This analysis allows comparing two heat waves having occurred 100 years apart in France, with different associated uncertainties, in terms of dynamics and intensity. References: Bhend, J., Franke, J., Folini, D., Wild, M., and Brönnimann, S.: An ensemble-based approach to climate reconstructions, Clim. Past, 8, 963-976, doi:10.5194/cp-8-963-2012, 2012. Caillouet, L., Vidal, J.-P., Sauquet, E., and Graff, B.: Probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France, Clim. Past, 12, 635-662, doi:10.5194/cp-12-635-2016, 2016. Garnier, E.: Sécheresses et canicules avant le Global Warming - 1500-1950. In: Canicules et froids extrêmes. L'Événement climatique et ses représentations (II) Histoire, littérature, peinture (Berchtold, J., Le Roy Ladurie, E., Sermain, J.-P., and Vasak, A., Eds.), 297-325, Hermann, 2012.
NASA Astrophysics Data System (ADS)
Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.
2017-12-01
This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process proceeds through a series of steps: 1) generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) using machine learning to train "cheap" and fast statistical emulators of the climate model to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of the annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until results are considered satisfactory (3 times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of a climate model, in combination with PPEs, for systematic model improvement.
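Steps 2) and 3) can be illustrated with a deliberately simple stand-in: fit a cheap emulator to a small "PPE" of an expensive model, then pre-filter a dense candidate parameter set against an observational constraint. The quadratic emulator, the toy model, and the constraint value are all invented for illustration; the study's emulators are trained on real HadAM3P output.

```python
import numpy as np

rng = np.random.default_rng(7)

def expensive_model(theta):
    """Stand-in for an expensive model run returning one output metric."""
    return np.sin(theta) + 0.05 * rng.standard_normal(np.shape(theta))

# 1) small "PPE": sample the parameter and run the expensive model
theta_train = rng.uniform(0.0, np.pi, 40)
y_train = expensive_model(theta_train)

# 2) train a cheap emulator (here a quadratic least-squares fit)
emulate = np.poly1d(np.polyfit(theta_train, y_train, 2))

# 3) pre-filter a dense candidate set against an "observed" metric of 0.9
theta_cand = np.linspace(0.0, np.pi, 1000)
keep = np.abs(emulate(theta_cand) - 0.9) < 0.1
theta_kept = theta_cand[keep]
```

Only the kept parameter values would be carried into the next (expensive) PPE, which is what makes the emulator loop cheap relative to running the full model everywhere.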
Comprehensive benchmarking and ensemble approaches for metagenomic classifiers.
McIntyre, Alexa B R; Ounit, Rachid; Afshinnekoo, Ebrahim; Prill, Robert J; Hénaff, Elizabeth; Alexander, Noah; Minot, Samuel S; Danko, David; Foox, Jonathan; Ahsanuddin, Sofia; Tighe, Scott; Hasan, Nur A; Subramanian, Poorani; Moffat, Kelly; Levy, Shawn; Lonardi, Stefano; Greenfield, Nick; Colwell, Rita R; Rosen, Gail L; Mason, Christopher E
2017-09-21
One of the main challenges in metagenomics is the identification of microorganisms in clinical and environmental samples. While an extensive and heterogeneous set of computational tools is available to classify microorganisms using whole-genome shotgun sequencing data, comprehensive comparisons of these methods are limited. In this study, we use the largest-to-date set of laboratory-generated and simulated controls across 846 species to evaluate the performance of 11 metagenomic classifiers. Tools were characterized on the basis of their ability to identify taxa at the genus, species, and strain levels, quantify relative abundances of taxa, and classify individual reads to the species level. Strikingly, the number of species identified by the 11 tools can differ by over three orders of magnitude on the same datasets. Various strategies can ameliorate taxonomic misclassification, including abundance filtering, ensemble approaches, and tool intersection. Nevertheless, these strategies were often insufficient to completely eliminate false positives from environmental samples, which are especially important where they concern medically relevant species. Overall, pairing tools with different classification strategies (k-mer, alignment, marker) can combine their respective advantages. This study provides positive and negative controls, titrated standards, and a guide for selecting tools for metagenomic analyses by comparing ranges of precision, accuracy, and recall. We show that proper experimental design and analysis parameters can reduce false positives, provide greater resolution of species in complex metagenomic samples, and improve the interpretation of results.
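The "tool intersection" strategy mentioned above can be illustrated schematically: keep only species called by at least k classifiers, trading recall for precision. The tool outputs below are invented for illustration.

```python
from collections import Counter

def intersect_calls(tool_outputs, min_tools=2):
    """Keep species reported by at least `min_tools` of the classifiers."""
    counts = Counter(sp for calls in tool_outputs for sp in set(calls))
    return {sp for sp, c in counts.items() if c >= min_tools}

calls = [
    {"E. coli", "B. subtilis", "P. fluorescens"},  # tool A
    {"E. coli", "B. subtilis"},                    # tool B
    {"E. coli", "S. aureus"},                      # tool C
]
consensus = intersect_calls(calls, min_tools=2)
# {"E. coli", "B. subtilis"}
```

Singleton calls (here "P. fluorescens" and "S. aureus") are the likeliest false positives and are dropped, at the risk of discarding a genuine low-abundance species seen by only one tool.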
Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C
2015-01-01
In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint-type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole-protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), and techniques to create model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models, a complete model building workflow in one package.
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map, but with the use of an a priori calculated average image derived from an ensemble of normal cases.
Diagnostics of sources of tropospheric ozone using data assimilation during the KORUS-AQ campaign
NASA Astrophysics Data System (ADS)
Gaubert, B.; Emmons, L. K.; Miyazaki, K.; Buchholz, R. R.; Tang, W.; Arellano, A. F., Jr.; Tilmes, S.; Barré, J.; Worden, H. M.; Raeder, K.; Anderson, J. L.; Edwards, D. P.
2017-12-01
Atmospheric oxidative capacity plays a crucial role in the fate of greenhouse gases and air pollutants as well as in the formation of secondary pollutants such as tropospheric ozone. The attribution of sources of tropospheric ozone is a difficult task because of biases in input parameters and forcings such as emissions and meteorology in addition to errors in chemical schemes. We assimilate satellite remote sensing observations of ozone precursors such as carbon monoxide (CO) and nitrogen dioxide (NO2) in the global coupled chemistry-transport model: Community Atmosphere Model with Chemistry (CAM-Chem). The assimilation is completed using an Ensemble Adjustment Kalman Filter (EAKF) in the Data Assimilation Research Testbed (DART) framework which allows estimates of unobserved parameters and potential constraints on secondary pollutants and emissions. The ensemble will be constructed using perturbations in chemical kinetics, different emission fields and by assimilating meteorological observations to fully assess uncertainties in the chemical fields of targeted species. We present a set of tools such as emission tags (CO and propane), combined with diagnostic analysis of chemical regimes and perturbation of emissions ratios to estimate a regional budget of primary and secondary pollutants in East Asia and their sensitivity to data assimilation. This study benefits from the large set of aircraft and ozonesonde in-situ observations from the Korea-United States Air Quality (KORUS-AQ) campaign that occurred in South Korea in May-June 2016.
A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification
NASA Astrophysics Data System (ADS)
Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.
2018-06-01
The contextual-based convolutional neural network (CNN) with deep architecture and the pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. Consequently, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters, such as the uncertainty in object boundary partition and the loss of useful fine spatial resolution detail, were compensated for. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, the spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
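A confidence-gated fusion of this kind can be sketched in a few lines. The threshold and the per-class scores below are illustrative stand-ins, not values from the paper:

```python
# Sketch of a rule-based decision fusion between a CNN and an MLP classifier:
# trust the CNN where its confidence is high, otherwise fall back to the MLP.

def fuse(cnn_probs, mlp_probs, confidence_threshold=0.8):
    """Per-pixel fusion: each *_probs is a dict class -> probability."""
    cnn_label, cnn_conf = max(cnn_probs.items(), key=lambda kv: kv[1])
    if cnn_conf >= confidence_threshold:
        return cnn_label          # deep spatial features are decisive
    # Low CNN confidence (e.g. near object boundaries): use the MLP's
    # spectral discrimination instead.
    return max(mlp_probs.items(), key=lambda kv: kv[1])[0]

# Interior pixel: CNN is confident, so its label wins.
print(fuse({"building": 0.95, "road": 0.05}, {"building": 0.4, "road": 0.6}))
# Boundary pixel: CNN is uncertain, so the MLP decides.
print(fuse({"building": 0.55, "road": 0.45}, {"building": 0.2, "road": 0.8}))
```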
Doppler color imaging. Principles and instrumentation.
Kremkau, F W
1992-01-01
DCI acquires Doppler-shifted echoes from a cross-section of tissue scanned by an ultrasound beam. These echoes are then presented in color and superimposed on the gray-scale anatomic image of non-Doppler-shifted echoes received during the scan. The flow echoes are assigned colors according to the color map chosen. Usually red, yellow, or white indicates positive Doppler shifts (approaching flow) and blue, cyan, or white indicates negative shifts (receding flow). Green is added to indicate variance (disturbed or turbulent flow). Several pulses (the number is called the ensemble length) are needed to generate a color scan line. Linear, convex, phased, and annular arrays are used to acquire the gray-scale and color-flow information. Doppler color-flow instruments are pulsed-Doppler instruments and are subject to the same limitations, such as Doppler angle dependence and aliasing, as other Doppler instruments. Color controls include gain, TGC, map selection, variance on/off, persistence, ensemble length, color/gray priority, Nyquist limit (PRF), baseline shift, wall filter, and color window angle, location, and size. Doppler color-flow instruments generally have output intensities intermediate between those of gray-scale imaging and pulsed-Doppler duplex instruments. Although there is no known risk with the use of color-flow instruments, prudent practice dictates that they be used for medical indications and with the minimum exposure time and instrument output required to obtain the needed diagnostic information.
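The color-assignment logic described above can be sketched as a simple mapping: the sign of the Doppler shift selects the red/blue half of the map, and green is mixed in when the estimated variance exceeds a threshold. The threshold value here is hypothetical:

```python
# Illustrative sketch of color-flow pixel assignment; not an instrument spec.

def flow_color(doppler_shift_hz, variance, variance_threshold=0.3):
    if doppler_shift_hz == 0:
        return "gray"                 # no shift: keep the anatomic gray pixel
    base = "red" if doppler_shift_hz > 0 else "blue"   # approach vs recede
    if variance > variance_threshold:
        return base + "+green"        # tag disturbed/turbulent flow
    return base

print(flow_color(500.0, 0.1))    # approaching laminar flow
print(flow_color(-800.0, 0.6))   # receding turbulent flow
```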
Scale-Similar Models for Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Sarghini, F.
1999-01-01
Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
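The mixed-model decomposition discussed above can be written compactly. As a sketch (standard Smagorinsky and Bardina-type scale-similar forms, with model coefficients $C_s$ and $C_B$ left unspecified):

```latex
With $\overline{\,\cdot\,}$ denoting the grid filter, the subgrid-scale stress is
\[
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j .
\]
The eddy-viscosity (Smagorinsky) part models its deviatoric portion as
\[
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk}
  \approx -2 (C_s \Delta)^2 \lvert\bar{S}\rvert\, \bar{S}_{ij},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\bigl(\partial_j \bar{u}_i + \partial_i \bar{u}_j\bigr),
\]
while the scale-similar (Bardina-type) part re-filters the resolved field,
\[
\tau_{ij}^{\,ss} \approx C_B \bigl( \overline{\bar{u}_i \bar{u}_j}
  - \bar{\bar{u}}_i \bar{\bar{u}}_j \bigr).
\]
A mixed model sums the two contributions.
```

The eddy-viscosity term supplies numerically stable dissipation, while the scale-similar term captures the stress alignment and backscatter that a purely dissipative model misses.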
Constrained Local UniversE Simulations: a Local Group factory
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias
2016-05-01
Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and at the third the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.
Supporting Operational Data Assimilation Capabilities to the Research Community
NASA Astrophysics Data System (ADS)
Shao, H.; Hu, M.; Stark, D. R.; Zhou, C.; Beck, J.; Ge, G.
2017-12-01
The Developmental Testbed Center (DTC), in partnership with the National Centers for Environmental Prediction (NCEP) and other operational and research institutions, provides operational data assimilation capabilities to the research community and helps transition research advances to operations. The primary data assimilation systems currently supported by the DTC are the Gridpoint Statistical Interpolation (GSI) system and the National Oceanic and Atmospheric Administration (NOAA) Ensemble Kalman Filter (EnKF) system. GSI is a variational system being used for daily operations at NOAA, NCEP, the National Aeronautics and Space Administration, and other operational agencies. Recently, GSI has evolved into a four-dimensional EnVar system. Since 2009, the DTC has been releasing the GSI code to the research community annually and providing user support. In addition to GSI, the DTC, in 2015, began supporting the ensemble-based EnKF data assimilation system. EnKF shares the observation operator with GSI and therefore, just as GSI, can assimilate both conventional and non-conventional data (e.g., satellite radiance). Currently, EnKF is being implemented as part of the GSI-based hybrid EnVar system for NCEP Global Forecast System operations. This paper will summarize the current code management and support framework for these two systems. Following that is a description of available community services and facilities. Also presented is the pathway for researchers to contribute their development to the daily operations of these data assimilation systems.
Yang, Shan; Al-Hashimi, Hashim M.
2016-01-01
A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination remains poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs) with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble averaged RDCs were then computed for each target ensemble and a ‘sample and select’ scheme used to identify degenerate ensembles that satisfy RDCs to within experimental uncertainty. We find that ensembles with different ensemble sizes and that can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4 where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively) can satisfy the ensemble averaged RDCs. These deviations increase with the number of unique conformers and breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble, and capture other essential features of the distribution, including the shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the necessary information needed to determine conformational entropy at a useful level of precision. 
The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693
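The 'sample and select' idea can be illustrated with a toy sketch in which ensemble-averaged RDCs are simple means of per-conformer values and a greedy search assembles a subset reproducing the target averages. Everything here (the synthetic RDC pool, the greedy selector, the RMSD fit measure) is an invented stand-in for the paper's alignment-tensor RDC calculations and ΣΩ similarity metric:

```python
import random

# Toy 'sample and select' sketch: ensemble-averaged RDCs are means over
# conformers; we greedily pick conformers from a pool so the subset average
# reproduces target RDCs. Numbers are synthetic.

def avg_rdcs(conformers):
    n = len(conformers[0])
    return [sum(c[i] for c in conformers) / len(conformers) for i in range(n)]

def rmsd(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def sample_and_select(pool, target, max_size=5):
    chosen = []
    while len(chosen) < max_size:
        # Add whichever pool conformer best improves the fit of the average.
        best = min(pool, key=lambda c: rmsd(avg_rdcs(chosen + [c]), target))
        chosen.append(best)
    return chosen

random.seed(0)
pool = [[random.gauss(0, 10) for _ in range(8)] for _ in range(50)]
target = avg_rdcs(random.sample(pool, 10))  # RDCs of a hidden target ensemble
selected = sample_and_select(pool, target)
fit = rmsd(avg_rdcs(selected), target)
print(len(selected), round(fit, 2))
```

The degeneracy discussed in the abstract shows up here directly: many different 5-member subsets can fit the averaged data comparably well, even though none matches the hidden 10-member target ensemble.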
Residue-level global and local ensemble-ensemble comparisons of protein domains.
Clark, Sarah A; Tronrud, Dale E; Karplus, P Andrew
2015-09-01
Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a "consistency check" of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. © 2015 The Protein Society.
The Ensembl REST API: Ensembl Data for Any Language
Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R. S.; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul
2015-01-01
Motivation: We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. Availability and implementation: The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. Contact: ayates@ebi.ac.uk or flicek@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25236461
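A minimal client needs only the standard library. The sketch below builds a request against the documented /lookup/id endpoint of rest.ensembl.org; the gene identifier is just an example, and the live call is left commented out since it requires network access:

```python
import json
import urllib.request

SERVER = "https://rest.ensembl.org"

def build_request(endpoint):
    """Ask for JSON explicitly, since the service supports several formats."""
    return urllib.request.Request(SERVER + endpoint,
                                  headers={"Content-Type": "application/json"})

def lookup(stable_id):
    req = build_request(f"/lookup/id/{stable_id}")
    with urllib.request.urlopen(req) as resp:   # network call
        return json.load(resp)

req = build_request("/lookup/id/ENSG00000157764")   # human BRAF, as an example
print(req.full_url)

# To query the live service (requires network access):
# print(lookup("ENSG00000157764")["display_name"])
```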
HLPI-Ensemble: Prediction of human lncRNA-protein interactions based on ensemble strategy.
Hu, Huan; Zhang, Li; Ai, Haixin; Zhang, Hui; Fan, Yetian; Zhao, Qi; Liu, Hongsheng
2018-03-27
LncRNAs play an important role in many biological processes and in disease progression by binding to related proteins. However, the experimental methods for studying lncRNA-protein interactions are time-consuming and expensive. Although there are a few models designed to predict ncRNA-protein interactions, they all have some common drawbacks that limit their predictive performance. In this study, we present a model called HLPI-Ensemble designed specifically for human lncRNA-protein interactions. HLPI-Ensemble adopts the ensemble strategy based on three mainstream machine learning algorithms of Support Vector Machines (SVM), Random Forests (RF) and Extreme Gradient Boosting (XGB) to generate HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble, respectively. The results of 10-fold cross-validation show that HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble achieved AUCs of 0.95, 0.96 and 0.96, respectively, in the test dataset. Furthermore, we compared the performance of the HLPI-Ensemble models with the previous models through an external validation dataset. The results show that the false positives (FPs) of the HLPI-Ensemble models are much lower than those of the previous models, and other evaluation indicators of the HLPI-Ensemble models are also higher than those of the previous models. These results further show that the HLPI-Ensemble models are superior in predicting human lncRNA-protein interactions compared with previous models. The HLPI-Ensemble is publicly available at: http://ccsipb.lnu.edu.cn/hlpiensemble/ .
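The core ensemble idea (combine several base learners' probability estimates rather than trusting one) can be sketched with soft voting. The three base "models" below are stand-in functions, not the trained SVM/RF/XGB of the paper:

```python
# Minimal soft-voting sketch: average the probabilities predicted by several
# base models and threshold the mean. All scores below are illustrative.

def svm_like(x):  return 0.9 if x[0] > 0.5 else 0.2
def rf_like(x):   return 0.8 if x[1] > 0.5 else 0.3
def xgb_like(x):  return 0.7 if x[0] + x[1] > 1.0 else 0.1

def ensemble_predict(x, models=(svm_like, rf_like, xgb_like), cutoff=0.5):
    p = sum(m(x) for m in models) / len(models)   # soft vote: mean probability
    return ("interaction" if p >= cutoff else "no interaction"), round(p, 3)

print(ensemble_predict([0.9, 0.8]))   # all base models agree: interaction
print(ensemble_predict([0.1, 0.2]))   # all base models agree: no interaction
```

Averaging tends to suppress the idiosyncratic false positives of any single learner, which is consistent with the lower FP counts reported above.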
Assimilation of sea ice concentration data in the Arctic via DART/CICE5 in the CESM1
NASA Astrophysics Data System (ADS)
Zhang, Y.; Bitz, C. M.; Anderson, J. L.; Collins, N.; Hendricks, J.; Hoar, T. J.; Raeder, K.
2016-12-01
Arctic sea ice cover has been experiencing significant reduction in the past few decades. Climate models predict that the Arctic Ocean may be ice-free in late summer within a few decades. Better sea ice prediction is crucial for regional and global climate predictions that are vital to human activities such as maritime shipping and subsistence hunting, as well as wildlife protection as animals face habitat loss. The physical processes involved with the persistence and re-emergence of sea ice cover are found to extend the predictability of sea ice concentration (SIC) and thickness at the regional scale up to several years. This motivates us to investigate sea ice predictability stemming from initial values of the sea ice cover. Data assimilation is a useful technique to combine observations and model forecasts to reconstruct the states of sea ice in the past and provide more accurate initial conditions for sea ice prediction. This work links the most recent version of the Los Alamos sea ice model (CICE5) within the Community Earth System Model version 1.5 (CESM1.5) and the Data Assimilation Research Testbed (DART). The linked DART/CICE5 is well suited to assimilating multi-scale and multivariate sea ice observations using an ensemble Kalman filter (EnKF). The study is focused on the assimilation of SIC data that impact SIC, sea ice thickness, and snow thickness. The ensemble sea ice model states are constructed by introducing uncertainties in atmospheric forcing and key model parameters. The ensemble atmospheric forcing is a reanalysis product generated with DART and the Community Atmosphere Model (CAM). We also perturb two model parameters that are found to contribute significantly to the model uncertainty in previous studies. This study applies perfect model observing system simulation experiments (OSSEs) to investigate data assimilation algorithms and post-processing methods. One of the ensemble members of a CICE5 free run is chosen as the truth.
Daily synthetic observations are obtained by adding 15% random noise to the truth. Experiments assimilating the synthetic observations are then conducted to test the effectiveness of different data assimilation algorithms (e.g., localization and inflation) and post-processing methods (e.g., how to distribute the total increment of SIC into each ice thickness category).
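The assimilation step at the heart of such an OSSE can be sketched as a toy stochastic EnKF update for a single observed variable. All numbers are synthetic; only the 15% observation noise mirrors the experiment described above, and the real DART/CICE5 system operates on full multivariate model states:

```python
import random
import statistics

random.seed(1)
truth = 0.80                                          # "true" sea ice concentration
prior = [random.gauss(0.6, 0.1) for _ in range(30)]   # forecast ensemble
obs = truth                                           # synthetic observation
obs_err = 0.15 * obs                                  # 15% observation error

P = statistics.variance(prior)                        # forecast error variance
R = obs_err ** 2                                      # observation error variance
K = P / (P + R)                                       # Kalman gain (observation operator = identity)

# Each member assimilates an independently perturbed copy of the observation,
# which keeps the analysis ensemble spread statistically consistent.
analysis = [x + K * (obs + random.gauss(0, obs_err) - x) for x in prior]

print(round(statistics.mean(prior), 3), round(statistics.mean(analysis), 3))
print(round(statistics.stdev(prior), 3), round(statistics.stdev(analysis), 3))
```

The analysis mean moves toward the observation and the ensemble spread contracts, which is exactly the behavior that localization and inflation are then tuned to keep well calibrated.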
Variability of cyclones over the North Atlantic and Europe since 1871
NASA Astrophysics Data System (ADS)
Welker, C.; Martius, O.
2012-04-01
The scarce availability of long-term atmospheric data series has so far limited the analysis of low-frequency activity and intensity changes of cyclones over the North Atlantic and Europe. A novel reanalysis product, the Twentieth Century Reanalysis (20CR; Compo et al., 2011), spanning 1871 to present, potentially offers a very valuable resource for the analysis of the decadal-scale variability of cyclones over the North Atlantic sector and Europe. In the 20CR, only observations of synoptic surface pressure were assimilated. Monthly sea surface temperature and sea ice distributions served as boundary conditions. An Ensemble Kalman Filter assimilation technique was applied. "First guess" fields were obtained from an ensemble (with 56 members) of short-range numerical weather prediction forecasts. We apply the cyclone identification algorithm of Wernli and Schwierz (2006) to this data set, i.e. to each individual ensemble member. This enables us to give an uncertainty estimation of our findings. We find that 20CR shows a temporally relatively homogeneous representation of cyclone activity over Europe and great parts of the North Atlantic. Pronounced decadal-scale variability is found both in the frequency and intensity of cyclones over the North Atlantic and Europe. The low-frequency variability is consistently represented in all ensemble members. Our analyses indicate that in the past approximately 140 years the variability of cyclone activity and intensity over the North Atlantic and Europe can primarily be associated with the North Atlantic Oscillation and secondarily with a pattern similar to the East Atlantic pattern. Regionally however, the correlation between cyclone activity and these dominant modes of variability changes over time.
Compo, G. P., J. S. Whitaker, P. D. Sardeshmukh, N. Matsui, R. J. Allan, X. Yin, B. E. Gleason, R. S. Vose, G. Rutledge, P. Bessemoulin, S. Brönnimann, M. Brunet, R. I. Crouthamel, A. N. Grant, P. Y. Groisman, P. D. Jones, M. C. Kruk, A. C. Kruger, G. J. Marshall, M. Maugeri, H. Y. Mok, Ø. Nordli, T. F. Ross, R. M. Trigo, X. L. Wang, S. D. Woodruff, and S. J. Worley, 2011: The Twentieth Century Reanalysis project. Quarterly J. Roy. Meteorol. Soc., 137, 1-28.
Wernli, H. and C. Schwierz, 2006: Surface cyclones in the ERA-40 dataset (1958-2001). Part I: Novel identification method and global climatology. J. Atmos. Sci., 63, 2486-2507.
NASA Astrophysics Data System (ADS)
Vandenbulcke, Luc; Barth, Alexander
2017-04-01
In the present European operational oceanography context, global and basin-scale models are run daily at different Monitoring and Forecasting Centers from the Copernicus Marine component (CMEMS). Regional forecasting centers, which run outside of CMEMS, then use these forecasts as initial conditions and/or boundary conditions for high-resolution or coastal forecasts. However, these improved simulations are lost to the basin-scale models (i.e. there is no feedback). Therefore, some potential improvements inside (and even outside) the areas covered by regional models are lost, and the risk of discrepancy between basin-scale and regional models remains high. The objective of this study is to simulate two-way nesting by extracting pseudo-observations from the regional models and assimilating them in the basin-scale models. The proposed method is called "upscaling". An ensemble of 100 one-way nested NEMO models of the Mediterranean Sea (Med) (1/16°) and the North-Western Med (1/80°) is implemented to simulate the period 2014-2015. Each member has perturbed initial conditions, atmospheric forcing fields and river discharge data. The Med model uses climatological Rhone river data, while the nested model uses measured daily discharges. The error of the pseudo-observations can be estimated by analyzing the ensemble of nested models. The pseudo-observations are then assimilated in the parent model by means of an Ensemble Kalman Filter. The experiments show that the proposed method improves different processes in the Med model, such as the position of the Northern Current and its incursion (or not) into the Gulf of Lions, the cold water mass on the shelf, and the position of the Rhone river plume. Regarding areas where no operational regional models exist, (some variables of) the parent model can still be improved by relating some resolved parameters to statistical properties of a higher-resolution simulation. 
This is the topic of a complementary study also presented at the EGU 2017 (Barth et al).
NASA Astrophysics Data System (ADS)
Micheletty, P. D.; Perrot, D.; Day, G. N.; Lhotak, J.; Quebbeman, J.; Park, G. H.; Carney, S.
2017-12-01
Water supply forecasting in the western United States is inextricably linked to snowmelt processes, as approximately 70-85% of total annual runoff comes from water stored in seasonal mountain snowpacks. Snowmelt-generated streamflow is vital to a variety of downstream uses; the Upper Colorado River Basin (UCRB) alone provides water supply for 25 million people, irrigation water for 3.5 million acres, and drives hydropower generation at Lake Powell. April-July water supply forecasts produced by the National Weather Service (NWS) Colorado Basin River Forecast Center (CBRFC) are critical to basin water management. The primary objective of this project, as part of the NASA Water Resources Applied Science Program, is to improve water supply forecasting for the UCRB by assimilating satellite and ground snowpack observations into a distributed hydrologic model at various times during the snow accumulation and melt seasons. To do this, we have built a framework that uses an Ensemble Kalman Filter (EnKF) to update modeled snow water equivalent (SWE) states in the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) with spatially interpolated SNOTEL SWE observations and products from the MODIS Snow Covered-Area and Grain size retrieval algorithm (when available). We have generated April-July water supply reforecasts for a 20-year period (1991-2010) for several headwater catchments in the UCRB using HL-RDHM and snow data assimilation in the Ensemble Streamflow Prediction (ESP) framework. The existing CBRFC ESP reforecasts will provide a baseline for comparison to determine whether the data assimilation process adds skill to the water supply forecasts. Preliminary results from one headwater basin show improved skill in water supply forecasting when HL-RDHM is run with the data assimilation step compared to HL-RDHM run without the data assimilation step, particularly in years when MODSCAG data were available (2000-2010). 
The final forecasting framework developed during this project will be delivered to CBRFC and run operationally for a set of pilot basins.
Knierim, Ellen; Schwarz, Jana Marie; Schuelke, Markus; Seelow, Dominik
2013-08-01
Many genetic disorders are caused by copy number variations (CNVs) in the human genome. However, the large number of benign CNV polymorphisms makes it difficult to delineate causative variants for a certain disease phenotype. Hence, we set out to create software that accumulates and visualises locus-specific knowledge and enables clinicians to study their own CNVs in the context of known polymorphisms and disease variants. CNV data from healthy cohorts (Database of Genomic Variants) and from disease-related databases (DECIPHER) were integrated into a joint resource. Data are presented in an interactive web-based application that allows inspection, evaluation and filtering of CNVs in single individuals or in entire cohorts. CNVinspector provides simple interfaces to upload CNV data, compare them with own or published control data and visualise the results in graphical interfaces. Beyond choosing control data from different public studies, platforms and methods, dedicated filter options allow the detection of CNVs that are either enriched in patients or depleted in controls. Alternatively, a search can be restricted to those CNVs that appear in individuals of similar clinical phenotype. For each gene of interest within a CNV, we provide a link to NCBI, ENSEMBL and the GeneDistiller search engine to browse for potential disease-associated genes. With its user-friendly handling, the integration of control data and the filtering options, CNVinspector will facilitate the daily work of clinical geneticists and accelerate the delineation of new syndromes and gene functions. CNVinspector is freely accessible under http://www.cnvinspector.org.
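The enrichment filter described above can be sketched as a carrier-frequency comparison between cohorts. The overlap test, the enrichment factor, and the example CNV calls below are all simplified, hypothetical stand-ins for CNVinspector's actual filtering options:

```python
# Hypothetical sketch: keep CNV regions whose carrier frequency in patients
# exceeds that in controls by a chosen factor.

def overlaps(a, b):
    """Regions are (chrom, start, end) tuples; half-open interval overlap."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def carrier_freq(region, cohort):
    carriers = sum(1 for person in cohort
                   if any(overlaps(region, c) for c in person))
    return carriers / len(cohort)

def enriched_in_patients(region, patients, controls, factor=2.0):
    fp = carrier_freq(region, patients)
    fc = carrier_freq(region, controls)
    # Guard against zero control frequency with a pseudo-frequency floor.
    return fp > 0 and fp >= factor * max(fc, 1.0 / len(controls))

patients = [[("chr7", 100, 500)], [("chr7", 150, 400)], [("chr2", 0, 50)]]
controls = [[("chr7", 900, 950)], [], [], []]
print(enriched_in_patients(("chr7", 120, 300), patients, controls))
print(enriched_in_patients(("chr2", 0, 50), patients, controls))
```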
A collaborative filtering approach for protein-protein docking scoring functions.
Bourquard, Thomas; Bernauer, Julie; Azé, Jérôme; Poupon, Anne
2011-04-22
A protein-protein docking procedure traditionally consists of two successive tasks: a search algorithm generates a large number of candidate conformations mimicking the complex existing in vivo between two proteins, and a scoring function is used to rank them in order to extract a native-like one. We have already shown that using Voronoi constructions and a well-chosen set of parameters, an accurate scoring function could be designed and optimized. However, to be able to perform large-scale in silico exploration of the interactome, a near-native solution has to be found in the ten best-ranked solutions. This cannot yet be guaranteed by any of the existing scoring functions. In this work, we introduce a new procedure for conformation ranking. We previously developed a set of scoring functions where learning was performed using a genetic algorithm. These functions were used to assign a rank to each possible conformation. We now have a refined rank using different classifiers (decision trees, rules and support vector machines) in a collaborative filtering scheme. The scoring function newly obtained is evaluated using 10-fold cross-validation, and compared to the functions obtained using either genetic algorithms or collaborative filtering taken separately. This new approach was successfully applied to the CAPRI scoring ensembles. We show that for 10 targets out of 12, we are able to find a near-native conformation in the 10 best-ranked solutions. Moreover, for 6 of them, the near-native conformation selected is of high accuracy. Finally, we show that this function dramatically enriches the 100 best-ranking conformations in near-native structures.
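The basic idea of combining ranks from several scorers can be sketched with a simple Borda count. The paper's collaborative filtering scheme is more elaborate (decision trees, rules and SVMs), so treat this as an illustrative stand-in with invented conformation ids:

```python
# Sketch: fuse the rankings produced by several scoring functions so that
# conformations placed consistently near the top rise in the combined order.

def borda_fuse(rankings):
    """rankings: list of lists, each an ordering of conformation ids from
    best to worst. Returns a fused ordering (highest Borda score first)."""
    n = len(rankings[0])
    score = {}
    for ranking in rankings:
        for position, conf_id in enumerate(ranking):
            score[conf_id] = score.get(conf_id, 0) + (n - position)
    return sorted(score, key=score.get, reverse=True)

# Three scorers disagree on the single best conformation; fusion promotes
# the one that is consistently near the top of every list.
r1 = ["c3", "c1", "c2", "c4"]
r2 = ["c1", "c3", "c4", "c2"]
r3 = ["c2", "c3", "c1", "c4"]
print(borda_fuse([r1, r2, r3]))
```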
Adaptive fault feature extraction from wayside acoustic signals from train bearings
NASA Astrophysics Data System (ADS)
Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie
2018-07-01
Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noises and harmonic interferences generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of classical singular value decomposition (SVD). Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimise the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains bearing fault information. However, the high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.
A novel coupling of noise reduction algorithms for particle flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.
2016-09-15
Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
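The WAVinPOD idea, wavelet thresholding applied to the POD temporal coefficients, can be sketched as follows; a one-level Haar transform with soft thresholding stands in for the paper's full wavelet machinery, and the rank and threshold values are illustrative:

```python
import numpy as np

def haar_soft_threshold(x, thr):
    """One-level Haar wavelet soft-thresholding of a 1-D signal (even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                 # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                 # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)    # soft threshold the details
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                       # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def wavinpod(snapshots, rank, thr):
    """WAVinPOD-like filter sketch: project noisy snapshots onto a truncated
    POD basis, wavelet-threshold each temporal coefficient series, reconstruct.
    snapshots: (n_points, n_times) matrix of noisy field samples."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)   # POD via SVD
    coeffs = np.diag(s[:rank]) @ Vt[:rank]                     # temporal coefficients
    filtered = np.array([haar_soft_threshold(c, thr) for c in coeffs])
    return U[:, :rank] @ filtered
```

The dimensionality reduction comes from the POD truncation; the wavelet step sparsifies what noise remains in the retained modes.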
Starshade Observation Scheduling for WFIRST
NASA Astrophysics Data System (ADS)
Soto, Gabriel; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
An exoplanet direct imaging mission can employ an external starshade for starlight suppression to achieve higher contrasts and potentially higher throughput than with an internal coronagraph. This separately-launched starshade spacecraft is assumed to maintain a single, constant separation distance from the space telescope—for this study, the Wide Field Infrared Survey Telescope (WFIRST)—based on a designated inner working angle during integration times. The science yield of such a mission can be quantified using the Exoplanet Open-Source Imaging Simulator (EXOSIMS): this simulator determines the distributions of mission outcomes, such as the types and number of exoplanet detections, based on ensembles of end-to-end simulations of the mission. This study adds a starshade class to the survey simulation module of EXOSIMS and outlines a method for efficiently determining observation schedules. The new starshade class solves boundary value problems using circular restricted three-body dynamics to find fast, high-accuracy estimates of the starshade motion while repositioning between WFIRST observations. Fuel usage dictates the mission lifetime of the starshade given its limited fuel supply and is dominated by the Δv used to reposition the starshade between the lines of sight (LOS) of different targets; the repositioning time-of-flight is kept constant in this study. A starshade burns less fuel to reach certain target stars based on their relative projected positions on a skymap; other targets with costly transfers can be filtered out to increase the starshade mission duration. Because the initial target list can consist of nearly 2000 stars, calculating the Δv required to move the starshade to every other star on the target list would be too computationally expensive and would render running ensembles of survey simulations infeasible.
Assuming the starshade begins its transfer at the LOS of a certain star, a Δv curve is approximated for the remaining target stars based on their right ascension or declination angle, depending on the starting and ending position of WFIRST on its halo orbit. The required Δv for a given star can be quickly interpolated and used to filter out stars in the target list.
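The interpolation-based screening described above might be sketched as follows; the angles, Δv samples, and fuel budget are invented for illustration, not mission values:

```python
import numpy as np

def build_dv_filter(known_angles, known_dv, dv_budget):
    """Fit a Δv-vs-angle curve from a few explicitly computed transfers, then
    interpolate it to screen the full target list cheaply (a stand-in for
    solving the boundary value problem for every one of ~2000 targets)."""
    order = np.argsort(known_angles)
    ang = np.asarray(known_angles, dtype=float)[order]
    dv = np.asarray(known_dv, dtype=float)[order]

    def keep(target_angles):
        est = np.interp(target_angles, ang, dv)   # fast Δv estimate per target
        return est <= dv_budget                   # True -> transfer is affordable
    return keep

# Five solved transfers (angle in degrees, Δv in m/s; invented numbers).
keep = build_dv_filter([0, 90, 180, 270, 360], [5, 60, 120, 60, 5], dv_budget=80.0)
mask = keep([10, 170, 200])   # only the first target survives the filter
```

This keeps the expensive three-body solves to a handful of anchor points per observation epoch.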
Volcano Deformation and Eruption Forecasting using Data Assimilation: Building the Strategy
NASA Astrophysics Data System (ADS)
Bato, M. G.; Pinel, V.; Yan, Y.
2016-12-01
In monitoring active volcanoes, the magma overpressure is one of the key parameters used in forecasting volcanic eruptions. This can be inferred from the ground displacements measured on the Earth's surface by applying inversion techniques. However, during the inversion, we lose the temporal character of the signal along with a large amount of information about the behaviour of the volcano. Our work focuses on developing a strategy to better forecast the magma overpressure using data assimilation. Data assimilation is a sequential time-forward process that best combines models and observations, together with a priori information based on error statistics, to predict the state of a dynamical system. It has gained popularity in various fields of geoscience (e.g. ocean-weather forecasting, geomagnetism and natural resources exploration), but remains a new and emerging technique in the field of volcanology. With the increasing amount of geodetic data (i.e. InSAR and GPS) recorded on volcanoes nowadays, and the wide availability of dynamical models that can provide better understanding of the volcano plumbing system, developing a forecasting framework that can efficiently combine them is crucial. Here, we build our strategy on the basis of the Ensemble Kalman Filter (EnKF) [1]. We predict the temporal behaviour of the magma overpressures and surface deformations by adopting the two-magma-chamber model proposed by Reverso et al. (2014) [2] and by using synthetic GPS and/or InSAR data. Several tests are performed in order to: 1) assess the efficiency of the EnKF in forecasting volcanic unrest, 2) constrain unknown parameters of the model, 3) determine how to properly use GPS and/or InSAR data during assimilation, and 4) compare the EnKF with classic inversion while using the same dynamical model. Results show that the EnKF works well with the synthetic cases and that there is great potential in utilising the method for real-time monitoring of volcanic unrest.
[1] Evensen, G., The Ensemble Kalman Filter: theoretical formulation and practical implementation. Ocean Dyn.,53, 343-367, 2003 [2] T. Reverso, J. Vandemeulebrouck, F. Jouanne, V. Pinel, T. Villemin, E. Sturkell, A two-magma chamber as a source of deformation at Grimsvötn volcano, Iceland, JGR, 2014
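A minimal stochastic EnKF analysis step in the spirit of Evensen (2003), for a single scalar observation such as one GPS displacement component; the observation operator and all numbers are illustrative, not the Reverso et al. model:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, H, seed=42):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_members) state ensemble (e.g. overpressures);
    obs: scalar observation; obs_err: observation error std;
    H: (n_state,) linear operator mapping state to the observed displacement."""
    rng = np.random.default_rng(seed)
    n_state, n_mem = ensemble.shape
    Hx = H @ ensemble                                   # (n_mem,) predicted observations
    x_mean = ensemble.mean(axis=1, keepdims=True)
    P_xh = (ensemble - x_mean) @ (Hx - Hx.mean()) / (n_mem - 1)  # cross covariance
    P_hh = np.var(Hx, ddof=1) + obs_err**2                       # innovation variance
    K = P_xh / P_hh                                              # Kalman gain
    obs_pert = obs + obs_err * rng.standard_normal(n_mem)        # perturbed observations
    return ensemble + np.outer(K, obs_pert - Hx)
```

In the forecasting strategy above, this analysis step would alternate with forward runs of the dynamical magma-chamber model.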
Assimilation of IASI partial tropospheric columns with an Ensemble Kalman Filter over Europe
NASA Astrophysics Data System (ADS)
Coman, A.; Foret, G.; Beekmann, M.; Eremenko, M.; Dufour, G.; Gaubert, B.; Ung, A.; Schmechtig, C.; Flaud, J.-M.; Bergametti, G.
2011-09-01
Partial lower tropospheric ozone columns provided by the IASI (Infrared Atmospheric Sounding Interferometer) instrument have been assimilated into a chemistry-transport model at continental scale (CHIMERE) using an Ensemble Kalman Filter (EnKF). Analyses are made for the month of July 2007 over the European domain. Launched in 2006, aboard the MetOp-A satellite, IASI shows high sensitivity for ozone in the free troposphere and low sensitivity at the ground; therefore it is important to evaluate if assimilation of these observations can improve free tropospheric ozone, and possibly surface ozone. The analyses are validated against independent ozone observations from sondes, MOZAIC1 aircraft and ground based stations (AIRBASE - the European Air quality dataBase) and compared with respect to the free run of CHIMERE. These comparisons show a decrease in error of 6 parts-per-billion (ppb) in the free troposphere over the Frankfurt area, and also a reduction of the root mean square error (bias, respectively) at the surface of 19% (33%) for more than 90% of existing ground stations. This provides evidence of the potential of data assimilation of tropospheric IASI columns to better describe the tropospheric ozone distribution, including surface ozone, despite the lower sensitivity. The changes in concentration resulting from the observational constraints were quantified and several geophysical explanations for the findings of this study were drawn. The corrections were most pronounced over Italy and the Mediterranean region: we noted an average reduction of 8-9 ppb in the free troposphere with respect to the free run, and still a reduction of 5.5 ppb at the ground, likely due to a longer residence time of air masses in this region, associated with the general circulation pattern (i.e. dominant westerly circulation) and with persistent anticyclonic conditions over the Mediterranean basin.
This is an important geophysical result, since the ozone burden is large over this area, with impact on the radiative balance and air quality. 1 Measurements of OZone, water vapour, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft ( http://mozaic.aero.obs-mip.fr/web/)
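The partial-column observation operator implied above can be sketched as a kernel-weighted vertical integral of the model ozone profile; the layer weights and column top below are illustrative assumptions, not the actual IASI averaging kernels:

```python
import numpy as np

def partial_column(profile, layer_thickness, kernel, top_index):
    """Map a model ozone profile to a lower-tropospheric partial column.

    profile: per-layer ozone amounts; layer_thickness: per-layer thicknesses;
    kernel: the retrieval's averaging-kernel weights (how much each layer
    contributes to the retrieved column); top_index: index of the column top."""
    return float(np.sum(profile[:top_index]
                        * kernel[:top_index]
                        * layer_thickness[:top_index]))

# Toy profile: 10 layers, kernel peaking near the surface and vanishing aloft.
profile = np.full(10, 50.0)
thick = np.ones(10)
kern = np.linspace(1.0, 0.0, 10)
col = partial_column(profile, thick, kern, top_index=6)
```

In the EnKF this operator would be applied to every ensemble member to form the predicted observations before the analysis update.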
Assimilation of IASI partial tropospheric columns with an Ensemble Kalman Filter over Europe
NASA Astrophysics Data System (ADS)
Coman, A.; Foret, G.; Beekmann, M.; Eremenko, M.; Dufour, G.; Gaubert, B.; Ung, A.; Schmechtig, C.; Flaud, J.-M.; Bergametti, G.
2012-03-01
Partial lower tropospheric ozone columns provided by the IASI (Infrared Atmospheric Sounding Interferometer) instrument have been assimilated into a chemistry-transport model at continental scale (CHIMERE) using an Ensemble Square Root Kalman Filter (EnSRF). Analyses are made for the month of July 2007 over the European domain. Launched in 2006, aboard the MetOp-A satellite, IASI shows high sensitivity for ozone in the free troposphere and low sensitivity at the ground; therefore it is important to evaluate if assimilation of these observations can improve free tropospheric ozone, and possibly surface ozone. The analyses are validated against independent ozone observations from sondes, MOZAIC1 aircraft and ground based stations (AIRBASE - the European Air quality dataBase) and compared with respect to the free run of CHIMERE. These comparisons show a decrease in error of 6 parts-per-billion (ppb) in the free troposphere over the Frankfurt area, and also a reduction of the root mean square error (bias, respectively) at the surface of 19% (33%) for more than 90% of existing ground stations. This provides evidence of the potential of data assimilation of tropospheric IASI columns to better describe the tropospheric ozone distribution, including surface ozone, despite the lower sensitivity. The changes in concentration resulting from the observational constraints were quantified and several geophysical explanations for the findings of this study were drawn. The corrections were most pronounced over Italy and the Mediterranean region: we noted an average reduction of 8-9 ppb in the free troposphere with respect to the free run, and still a reduction of 5.5 ppb at the ground, likely due to a longer residence time of air masses in this region, associated with the general circulation pattern (i.e. dominant westerly circulation) and with persistent anticyclonic conditions over the Mediterranean basin.
This is an important geophysical result, since the ozone burden is large over this area, with impact on the radiative balance and air quality. 1 Measurements of OZone, water vapour, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft (http://mozaic.aero.obs-mip.fr/web/).
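The EnSRF used in this version differs from a stochastic EnKF in that it updates the ensemble perturbations deterministically, so no observation noise needs to be sampled. A sketch of the scalar-observation form (in the Whitaker and Hamill 2002 style; all numbers illustrative):

```python
import numpy as np

def ensrf_update(ensemble, obs, obs_err, H):
    """Deterministic ensemble square-root filter analysis for one scalar
    observation: the mean is updated with the usual Kalman gain, the
    perturbations with a reduced gain alpha*K, avoiding sampled obs noise."""
    n_mem = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = ensemble - x_mean                      # state perturbations
    Hx = H @ ensemble
    HX = Hx - Hx.mean()                        # obs-space perturbations
    P_hh = HX @ HX / (n_mem - 1)               # obs-space prior variance
    P_xh = X @ HX / (n_mem - 1)                # cross covariance
    K = P_xh / (P_hh + obs_err**2)             # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(obs_err**2 / (P_hh + obs_err**2)))
    x_mean_a = x_mean + K[:, None] * (obs - Hx.mean())
    X_a = X - alpha * np.outer(K, HX)          # reduced-gain perturbation update
    return x_mean_a + X_a
```

For a single scalar observation this reproduces the exact Kalman posterior variance without the sampling noise of perturbed observations.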
NASA Technical Reports Server (NTRS)
Hoffman, Matthew J.; Eluszkiewicz, Janusz; Weisenstein, Deborah; Uymin, Gennady; Moncet, Jean-Luc
2012-01-01
Motivated by the needs of Mars data assimilation, particularly quantification of measurement errors and generation of averaging kernels, we have evaluated atmospheric temperature retrievals from Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES) radiances. Multiple sets of retrievals have been considered in this study: (1) retrievals available from the Planetary Data System (PDS), (2) retrievals based on variants of the retrieval algorithm used to generate the PDS retrievals, and (3) retrievals produced using the Mars 1-Dimensional Retrieval (M1R) algorithm based on the Optimal Spectral Sampling (OSS) forward model. The retrieved temperature profiles are compared to the MGS Radio Science (RS) temperature profiles. For the samples tested, the M1R temperature profiles can be made to agree within 2 K with the RS temperature profiles, but only after tuning the prior and error statistics. Use of a global prior that does not take into account the seasonal dependence leads to errors of up to 6 K. In polar samples, errors relative to the RS temperature profiles are even larger. In these samples, the PDS temperature profiles also exhibit a poor fit with RS temperatures. This fit is worse than reported in previous studies, indicating that the lack of fit is due to a bias correction to TES radiances implemented after 2004. To explain the differences between the PDS and M1R temperatures, the algorithms are compared directly, with the OSS forward model inserted into the PDS algorithm. Factors such as the filtering parameter, the use of linear versus nonlinear constrained inversion, and the choice of the forward model are found to contribute heavily to the differences in the temperature profiles retrieved in the polar regions, resulting in uncertainties of up to 6 K. Even outside the poles, changes in the a priori statistics result in different profile shapes which all fit the radiances within the specified error.
The importance of the a priori statistics prevents reliable global retrievals based on a single a priori and strongly implies that a robust science analysis must instead rely on retrievals employing localized a priori information, for example from an ensemble-based data assimilation system such as the Local Ensemble Transform Kalman Filter (LETKF).
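The sensitivity to the prior can be illustrated with a minimal linear optimal-estimation retrieval, the Bayesian scheme underlying retrievals of this kind; the matrices below are toy values, not TES quantities:

```python
import numpy as np

def linear_oe_retrieval(y, K, Sa, Se, xa):
    """Linear optimal-estimation retrieval sketch.

    y: measurement vector (radiances); K: Jacobian (forward-model linearization);
    Sa: prior covariance; Se: measurement-error covariance; xa: prior state.
    Returns the retrieved state and the averaging-kernel matrix A = GK, which
    quantifies how strongly the prior shapes the result (A -> I means the
    measurement dominates; A -> 0 means the retrieval just returns the prior)."""
    G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
    x_hat = xa + G @ (y - K @ xa)
    A = G @ K                                          # averaging kernel
    return x_hat, A
```

With a loose prior the retrieval recovers the true state; with a tight prior it barely moves away from xa, which is exactly the behavior the abstract reports for mismatched global priors.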
A remote sensing data assimilation system for cold land processes hydrologic estimation
NASA Astrophysics Data System (ADS)
Andreadis, Konstantinos M.
2009-12-01
Accurate forecasting of snow properties is important for effective water resources management, especially in mountainous areas. Model-based approaches are limited by biases and uncertainties. Remote sensing offers an opportunity for observation of snow properties over larger areas. Traditional approaches to direct estimation of snow properties from passive microwave remote sensing have been plagued by limitations such as the tendency of estimates to saturate for moderately deep snowpacks and the effects of mixed land cover. To address these complications, a data assimilation system is developed and evaluated in a three-part study. The data assimilation system requires the embedding of a microwave emissions model which uses modeled snowpack properties. In the first part of this study, such a model is evaluated using multi-scale brightness temperature (TB) measurements from the Cold Land Processes Experiment. The model's ability to reproduce snowpack microphysical properties is evaluated through comparison with snowpit measurements, while TB predictions are evaluated through comparison with in-situ, aircraft and satellite measurements. Point comparisons showed limitations in the model, while the spatial averaging and the effects of forest cover suppressed errors in comparisons with aircraft measurements. The layered character of snowpacks increases the complexity of algorithms intended to retrieve snow properties from the snowpack microwave return signal. Implementation of a retrieval strategy requires knowledge of stratigraphy, which practically can only be produced by models. In the second part of this study, we describe a multi-layer model designed for such applications. The model coupled with a radiative transfer scheme improved the estimation of TB, while potential impacts when assimilating radiances are explored.
A system that merges snow water equivalent (SWE) model predictions with observations of snow cover extent (SCE) and TB is evaluated in the third part of this study over one winter season in the Upper Snake River basin. Two data assimilation techniques, the Ensemble Kalman filter (EnKF) and the Ensemble Multiscale Kalman filter (EnMKF), are tested with the multilayer snow model forced by downscaled re-analysis meteorological observations. Both the EnKF and EnMKF showed modest improvements over the open-loop simulation relative to a baseline simulation that used in-situ meteorological data, while comparisons with in-situ SWE measurements showed an overall improvement.
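The saturation problem mentioned in the first paragraph can be illustrated with a toy (deliberately not validated) relation between SWE and brightness temperature; the coefficients are invented purely to show the shape of the effect:

```python
import numpy as np

def tb_from_swe(swe_mm, tb_bare=260.0, depression=60.0, e_fold_mm=150.0):
    """Toy emission relation: brightness temperature TB drops as SWE increases,
    but levels off exponentially, so deep snowpacks become indistinguishable.
    All parameter values are illustrative, not a calibrated emission model."""
    return tb_bare - depression * (1.0 - np.exp(-np.asarray(swe_mm) / e_fold_mm))

shallow_contrast = tb_from_swe(50.0) - tb_from_swe(100.0)   # clear TB signal
deep_contrast = tb_from_swe(500.0) - tb_from_swe(550.0)     # almost none
```

The vanishing contrast at depth is why direct TB inversion fails for deep snow and why assimilating TB into a physically based snow model, as done here, is attractive.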
NASA Astrophysics Data System (ADS)
Lin, H.; Baldwin, D. C.; Smithwick, E. A. H.
2015-12-01
Predicting root zone (0-100 cm) soil moisture (RZSM) content at the catchment scale is essential for drought and flood predictions, irrigation planning, weather forecasting, and many other applications. Satellites, such as the NASA Soil Moisture Active Passive (SMAP), can estimate near-surface (0-5 cm) soil moisture content globally at coarse spatial resolutions. We develop a hierarchical Ensemble Kalman Filter (EnKF) data assimilation modeling system to downscale satellite-based near-surface soil moisture and to estimate RZSM content across the Shale Hills Critical Zone Observatory at a 1-m resolution in combination with ground-based soil moisture sensor data. In this example, a simple infiltration model within the EnKF system has been parameterized for 6 soil-terrain units to forecast daily RZSM content in the catchment from 2009-2012 based on AMSR-E data. LiDAR-derived terrain variables define intra-unit RZSM variability using a novel covariance localization technique. This method also allows the mapping of uncertainty with our RZSM estimates for each time-step. A catchment-wide satellite-to-surface downscaling parameter, which nudges the satellite measurement closer to in situ near-surface data, is also calculated for each time-step. We find significant differences in predicted root zone moisture storage for different terrain units across the experimental time-period. Root mean square error from a cross-validation analysis of RZSM predictions, using an independent dataset of catchment-wide in situ Time-Domain Reflectometry (TDR) measurements, ranges from 0.060 to 0.096 cm3 cm-3, and the RZSM predictions are significantly (p < 0.05) correlated with TDR measurements (r = 0.47-0.68). The predictive skill of this data assimilation system is similar to that of the Penn State Integrated Hydrologic Model (PIHM). Uncertainty estimates are significantly (p < 0.05) correlated to cross-validation error during wet and dry conditions, but more so in dry summer seasons.
Developing an EnKF-model system that downscales satellite data and predicts catchment-scale RZSM content is especially timely, given the anticipated release of SMAP surface moisture data in 2015.
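The localization idea can be sketched generically with a distance-based taper on the Kalman gain; this is a stand-in for the paper's terrain-based technique, with a simple linear taper and invented numbers:

```python
import numpy as np

def localized_gain(X, HX, dists, loc_radius, obs_err):
    """Distance-based covariance localization for a scalar observation.

    X: (n_state, n_members) ensemble anomalies; HX: (n_members,) obs-space
    anomalies; dists: (n_state,) distance of each grid cell from the observation.
    Tapering the sample cross covariance suppresses the spurious long-range
    correlations that small ensembles produce."""
    n_mem = X.shape[1]
    P_xh = X @ HX / (n_mem - 1)                          # sample cross covariance
    P_hh = HX @ HX / (n_mem - 1)                         # obs-space variance
    rho = np.clip(1.0 - dists / loc_radius, 0.0, 1.0)    # linear taper to zero
    return rho * P_xh / (P_hh + obs_err**2)              # localized Kalman gain
```

Operational schemes usually replace the linear taper with a smooth compactly supported function (e.g. Gaspari-Cohn), but the cutoff behavior is the same.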
NASA Astrophysics Data System (ADS)
Zovi, Francesco; Camporese, Matteo; Hendricks Franssen, Harrie-Jan; Huisman, Johan Alexander; Salandin, Paolo
2017-05-01
Alluvial aquifers are often characterized by the presence of braided high-permeability paleo-riverbeds, which constitute an interconnected preferential flow network whose localization is of fundamental importance to predict flow and transport dynamics. Classic geostatistical approaches based on two-point correlation (i.e., the variogram) cannot describe such particular shapes. In contrast, multiple point geostatistics can describe almost any kind of shape using the empirical probability distribution derived from a training image. However, even with a correct training image the exact positions of the channels are uncertain. State information such as groundwater levels can constrain the channel positions using inverse modeling or data assimilation, but the method must be able to handle non-Gaussianity of the parameter distribution. Here the normal score ensemble Kalman filter (NS-EnKF) was chosen as the inverse conditioning algorithm to tackle this issue. Multiple point geostatistics and the NS-EnKF have already been tested in synthetic examples, but in this study they are used for the first time in a real-world case study. The test site is an alluvial unconfined aquifer in northeastern Italy with an area of approximately 3 km2. A satellite training image showing the braid shapes of the nearby river and electrical resistivity tomography (ERT) images were used as conditioning data to provide information on channel shape, size, and position. Measured groundwater levels were assimilated with the NS-EnKF to update the spatially distributed groundwater parameters (hydraulic conductivity and storage coefficients). Results from the study show that the inversion based on multiple point geostatistics does not outperform the one with a multi-Gaussian model and that the information from the ERT images did not improve site characterization. These results were further evaluated with a synthetic study that mimics the experimental site.
The synthetic results showed that multiple point geostatistics and ERT could improve aquifer characterization only with a much larger number of conditioning piezometric heads. This shows that state-of-the-art stochastic methods need to be supported by abundant, high-quality subsurface data.
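The normal-score transform at the heart of the NS-EnKF can be sketched as an empirical quantile mapping to a standard Gaussian (the back-transform, interpolation choices, and tie handling of the full method are omitted):

```python
import numpy as np
from scipy.stats import norm

def normal_score_transform(values):
    """Map each sample to the standard-normal quantile of its empirical CDF
    rank, so a non-Gaussian (e.g. channelized-conductivity) ensemble can be
    updated with Gaussian Kalman machinery and then back-transformed."""
    n = len(values)
    ranks = np.argsort(np.argsort(values))   # 0..n-1, rank of each sample
    p = (ranks + 0.5) / n                    # plotting positions in (0, 1)
    return norm.ppf(p)                       # standard-normal scores
```

The transform is monotone, so it preserves the ordering of the original values while forcing Gaussian marginals.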
NASA Astrophysics Data System (ADS)
Oaida, C. M.; Andreadis, K.; Reager, J. T., II; Famiglietti, J. S.; Levoe, S.
2017-12-01
Accurately estimating how much snow water equivalent (SWE) is stored in mountainous regions characterized by complex terrain and snowmelt-driven hydrologic cycles is both highly desirable and a major challenge. Mountain snowpack exhibits high spatial variability across a broad range of spatial and temporal scales due to a multitude of physical and climatic factors, making it difficult to observe or estimate in its entirety. Combining remotely sensed data and high-resolution hydrologic modeling through data assimilation (DA) has the potential to provide a spatially and temporally continuous SWE dataset at horizontal scales that capture sub-grid snow spatial variability and are also relevant to stakeholders such as water resource managers. Here, we present the evaluation of a new snow DA approach that uses a Local Ensemble Transform Kalman Filter (LETKF) in tandem with the Variable Infiltration Capacity (VIC) macro-scale hydrologic model across the Western United States, at a daily temporal resolution and a horizontal resolution of 1.75 km x 1.75 km. The LETKF is chosen for its relative simplicity, ease of implementation, and computational efficiency and scalability. The modeling/DA system assimilates daily MODIS Snow Covered Area and Grain Size (MODSCAG) fractional snow cover and has been developed to efficiently calculate SWE estimates over extended periods of time and covering large regional-scale areas at relatively high spatial resolution, ultimately producing a snow reanalysis-type dataset. Here we focus on the assessment of SWE produced by the DA scheme over several basins in California's Sierra Nevada Mountain range where Airborne Snow Observatory data is available, during the last five water years (2013-2017), which include both one of the driest and one of the wettest years.
Comparison against such a spatially distributed SWE observational product provides a greater understanding of the model's ability to estimate SWE and SWE spatial variability, and highlights under which conditions snow cover DA can add value in estimating SWE.
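The LETKF analysis step, which the abstract credits with simplicity and scalability, can be sketched for one local region following the standard Hunt et al. (2007) formulation; the update is computed in the small k-dimensional ensemble space, which is what makes it cheap and embarrassingly parallel across grid points (dimensions below are illustrative):

```python
import numpy as np

def sqrtm_sym(A):
    """Symmetric matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T

def letkf_analysis(X, Y, y_obs, R_diag, inflation=1.0):
    """LETKF analysis for one local region.

    X: (n_state, k) state ensemble; Y: (n_obs, k) ensemble in observation space;
    y_obs: (n_obs,) local observations; R_diag: (n_obs,) obs-error variances."""
    k = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean
    y_mean = Y.mean(axis=1, keepdims=True)
    Yp = Y - y_mean
    Rinv_Yp = Yp / R_diag[:, None]
    # Analysis covariance in ensemble space (k x k, independent of n_state).
    Pa_tilde = np.linalg.inv((k - 1) / inflation * np.eye(k) + Yp.T @ Rinv_Yp)
    w_mean = Pa_tilde @ Rinv_Yp.T @ (y_obs - y_mean[:, 0])   # mean-update weights
    W = sqrtm_sym((k - 1) * Pa_tilde)                        # perturbation weights
    return x_mean + Xp @ (w_mean[:, None] + W)
```

In the full system this routine runs once per grid point with only nearby observations, then the local analyses are stitched back together.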
Exploring the calibration of a wind forecast ensemble for energy applications
NASA Astrophysics Data System (ADS)
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSO) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational in 2012 at DWD. The ensemble consists of 20 members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and 50 vertical model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. With these methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications.
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
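The ensemble copula coupling step can be sketched with the quantile-reordering (ECC-Q) variant: the calibrated marginal quantiles inherit the rank order, and hence the temporal dependence structure, of the raw ensemble. The numbers are illustrative:

```python
import numpy as np

def ecc_reorder(raw_ensemble, calibrated_quantiles):
    """Ensemble copula coupling (ECC-Q): at each lead time, assign the sorted
    calibrated quantiles to members in the rank order of the raw ensemble.

    raw_ensemble, calibrated_quantiles: (n_times, n_members)."""
    out = np.empty_like(calibrated_quantiles)
    for t in range(raw_ensemble.shape[0]):
        ranks = np.argsort(np.argsort(raw_ensemble[t]))   # rank of each raw member
        out[t] = np.sort(calibrated_quantiles[t])[ranks]
    return out

raw = np.array([[3.0, 1.0, 2.0]])   # raw member values at one lead time
cal = np.array([[10.0, 30.0, 20.0]])  # calibrated quantiles (unordered)
ecc = ecc_reorder(raw, cal)          # members keep the raw ranking: 30, 10, 20
```

Applied across all lead times, this keeps each calibrated member a temporally coherent scenario, which is exactly what ramp diagnostics then evaluate.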
NASA Astrophysics Data System (ADS)
Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik
2016-04-01
Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove, in particular, high-frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions from the updated ESA Earth System Model. Two non-isotropic filter methods, from Kusche (2007) and Swenson and Wahr (2006), are tested. Rescaling factors estimated from five different hydrological models and their ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies from the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8):L08402.
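The rescaling-factor estimation can be sketched as a least-squares gain computed from a hydrological model series passed through the same filter as the GRACE fields; the moving-average filter below is a crude stand-in for the actual destriping/smoothing chain, and all numbers are illustrative:

```python
import numpy as np

def moving_average(x, w):
    """Crude stand-in for the decorrelation/Gaussian filtering of GRACE fields."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def rescaling_factor(model_tws, filtered_model_tws):
    """Least-squares scalar k minimizing ||model - k * filtered(model)||^2;
    k is later applied to filtered GRACE TWS to correct bias and leakage."""
    f = np.asarray(filtered_model_tws)
    m = np.asarray(model_tws)
    return float(f @ m / (f @ f))

t = np.arange(120)                          # 120 monthly time steps
model = 50.0 * np.sin(2 * np.pi * t / 12)   # seasonal TWS signal (mm), invented
k = rescaling_factor(model, moving_average(model, 5))   # k > 1: filter damped the signal
```

Because filtering attenuates the seasonal amplitude, k comes out above 1; in the study this gain is estimated per hydrological model (and from the ensemble median), which is where the spread among rescaling choices enters.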