Should learners reason one step at a time? A randomised trial of two diagnostic scheme designs.
Blissett, Sarah; Morrison, Deric; McCarty, David; Sibbald, Matthew
2017-04-01
Making a diagnosis can be difficult for learners because they must integrate multiple clinical variables. Diagnostic schemes can help learners with this complex task. A diagnostic scheme is an algorithm that organises possible diagnoses by assigning signs or symptoms (e.g. systolic murmur) to groups of similar diagnoses (e.g. aortic stenosis and aortic sclerosis) and provides distinguishing features to help discriminate between similar diagnoses (e.g. carotid pulse). The current literature does not identify whether scheme layouts should guide learners to reason one step at a time in a terminally branching scheme or to weigh multiple variables simultaneously in a hybrid scheme. We compared diagnostic accuracy, perceptual errors and cognitive load using two scheme layouts for cardiac auscultation. Focusing on the task of identifying murmurs on Harvey, a cardiopulmonary simulator, 86 internal medicine residents used two scheme layouts. The terminally branching scheme organised the information into single-variable decisions. The hybrid scheme combined single-variable decisions with a chart integrating multiple distinguishing features. Using a crossover design, participants completed one set of murmurs (diastolic or systolic) with either the terminally branching or the hybrid scheme; the second set of murmurs was completed with the other scheme. A repeated-measures MANOVA was performed to compare diagnostic accuracy, perceptual errors and cognitive load between the scheme layouts. There was a main effect of scheme layout (Wilks' λ = 0.841, F(3,80) = 5.1, p = 0.003). Use of a terminally branching scheme was associated with increased diagnostic accuracy (65% versus 53%, p = 0.02), fewer perceptual errors (0.61 versus 0.98 errors, p = 0.001) and lower cognitive load (3.1 versus 3.5 on a 7-point scale, p = 0.023). The terminally branching scheme was thus associated with improved diagnostic accuracy, fewer perceptual errors and lower cognitive load, suggesting that terminally branching schemes are effective for improving diagnostic accuracy. These findings can inform the design of schemes and other clinical decision aids. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Conceptual Questions and Lack of Formal Reasoning: Are They Mutually Exclusive?
ERIC Educational Resources Information Center
Igaz, Csaba; Proksa, Miroslav
2012-01-01
Using specially designed conceptual question pairs, 9th grade students were tested on tasks (presented as experimental situations in pictorial form) that involved the controlling-of-variables scheme of formal reasoning. The question topics focused on these three chemical contexts: chemistry in everyday life, chemistry without formal concepts, and…
Privacy Preserving Association Rule Mining Revisited: Privacy Enhancement and Resources Efficiency
NASA Astrophysics Data System (ADS)
Mohaisen, Abedelaziz; Jho, Nam-Su; Hong, Dowon; Nyang, Daehun
Privacy preserving association rule mining algorithms have been designed to discover relations between variables in data while maintaining the data privacy. In this article we revisit one of the recently introduced schemes for association rule mining using fake transactions (FS). In particular, our analysis shows that the FS scheme has excessive storage and high computation requirements for guaranteeing a reasonable level of privacy. We introduce a realistic definition of privacy that benefits from average-case privacy and motivates the study of a weakness in the structure of FS via fake-transaction filtering. To overcome this problem, we improve the FS scheme by presenting a hybrid scheme that considers both privacy and resources as two concurrent guidelines. Analytical and empirical results show the efficiency and applicability of our proposed scheme.
Improving the Representation of Snow Crystal Properties within a Single-Moment Microphysics Scheme
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.
2010-01-01
The assumptions of a single-moment microphysics scheme (NASA Goddard) were evaluated using a variety of surface, aircraft and radar data sets. Fixed distribution intercepts and snow bulk densities fail to represent the vertical variability and diversity of crystal populations for this event. Temperature-based equations have merit, but they can be adversely affected by complex temperature profiles that are inverted or isothermal. Column-based approaches can mitigate complex profiles of temperature but are restricted by the ability of the model to represent cloud depth. Spheres are insufficient for use in CloudSat reflectivity comparisons due to Mie resonance, but reasonable for Rayleigh scattering applications. Microphysics schemes will benefit from a greater range of snow crystal characteristics to accommodate naturally occurring diversity.
Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration
NASA Astrophysics Data System (ADS)
Bai, P.
2017-12-01
Hydrological model parameters are commonly calibrated by observed streamflow data. This calibration strategy is questioned when the modeled hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models. The multi-objective calibration scheme was compared with the traditional single-objective calibration scheme, which is based only on streamflow observations. Two monthly hydrological models were employed on 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that, compared with the single-objective calibration, the multi-objective calibration provided more reliable TWSC and ET simulations without significant deterioration in the accuracy of streamflow simulations. In addition, the improvements in TWSC and ET simulations were more significant in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in the parameter estimation to improve the performance of hydrological models.
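As an aside for readers implementing such a scheme: the abstract does not give the exact objective function, but a common way to combine the two constraints is a weighted sum of streamflow and TWSC misfits. A minimal Python sketch, in which the weight, the NSE-based metrics, and the `model` interface are all illustrative assumptions rather than the study's actual choices:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency; 1 indicates a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def multi_objective_cost(params, model, q_obs, twsc_obs, w=0.5):
    """Weighted cost combining streamflow and GRACE TWSC misfits.

    `model(params)` is assumed to return monthly (q_sim, twsc_sim) series;
    the weight w and the NSE-based terms are illustrative choices.
    """
    q_sim, twsc_sim = model(params)
    return w * (1.0 - nse(q_sim, q_obs)) + (1.0 - w) * (1.0 - nse(twsc_sim, twsc_obs))
```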
Tests of oceanic stochastic parameterisation in a seasonal forecast system.
NASA Astrophysics Data System (ADS)
Cooper, Fenwick; Andrejczuk, Miroslaw; Juricke, Stephan; Zanna, Laure; Palmer, Tim
2015-04-01
Over seasonal time scales, our aim is to compare the relative impact of ocean initial-condition and model uncertainty upon ocean forecast skill and reliability. To this end, we compare four oceanic stochastic parameterisation schemes applied in a 1x1 degree ocean model (NEMO) with a fully coupled T159 atmosphere (ECMWF IFS). The relative impacts upon the ocean of the resulting eddy-induced activity, wind forcing and typical initial-condition perturbations are quantified. Following the historical success of stochastic parameterisation in the atmosphere, two of the parameterisations tested were multiplicative in nature: a stochastic variation of the Gent-McWilliams scheme and a stochastic diffusion scheme. We also consider a surface flux parameterisation (similar to that introduced by Williams, 2012), and stochastic perturbation of the equation of state (similar to that introduced by Brankart, 2013). The amplitude of the stochastic term in the Williams (2012) scheme was set to the physically reasonable amplitude considered in that paper. The amplitude of the stochastic term in each of the other schemes was increased to the limits of model stability. As expected, variability was increased. Up to 1 month after initialisation, the ensemble spread induced by stochastic parameterisation is greater than that induced by the atmosphere, whilst being smaller than the initial-condition perturbations currently used at ECMWF. After 1 month, the wind forcing becomes the dominant source of model ocean variability, even at depth.
Extracting a mix parameter from 2D radiography of variable density flow
NASA Astrophysics Data System (ADS)
Kurien, Susan; Doss, Forrest; Livescu, Daniel
2017-11-01
A methodology is presented for extracting quantities related to the statistical description of the mixing state from the 2D radiographic image of a flow. X-ray attenuation through a target flow is given by the Beer-Lambert law which exponentially damps the incident beam intensity by a factor proportional to the density, opacity and thickness of the target. By making reasonable assumptions for the mean density, opacity and effective thickness of the target flow, we estimate the contribution of density fluctuations to the attenuation. The fluctuations thus inferred may be used to form the correlation of density and specific-volume, averaged across the thickness of the flow in the direction of the beam. This correlation function, denoted by b in RANS modeling, quantifies turbulent mixing in variable density flows. The scheme is tested using DNS data computed for variable-density buoyancy-driven mixing. We quantify the deficits in the extracted value of b due to target thickness, Atwood number, and modeled noise in the incident beam. This analysis corroborates the proposed scheme to infer the mix parameter from thin targets at moderate to low Atwood numbers. The scheme is then applied to an image of counter-shear flow obtained from experiments at the National Ignition Facility. US Department of Energy.
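The Beer-Lambert inversion and the correlation b described above can be written compactly. A minimal Python sketch (not the authors' code; the known mean opacity and effective thickness are the assumptions the abstract itself makes), using the identity b = -⟨ρ'v'⟩ = ⟨ρ⟩⟨1/ρ⟩ - 1:

```python
import numpy as np

def mix_parameter_b(intensity, i0, kappa, thickness):
    """Estimate the density/specific-volume correlation b from a radiograph.

    Beer-Lambert: I = I0 * exp(-kappa * rho * L), so each pixel yields a
    path-averaged density rho = -ln(I / I0) / (kappa * L). Argument names
    are illustrative; kappa (opacity) and thickness (L) are the assumed
    mean values discussed in the abstract.
    """
    rho = -np.log(intensity / i0) / (kappa * thickness)
    v = 1.0 / rho                      # specific volume per pixel
    # b = -<rho' v'> = <rho><1/rho> - 1: non-negative, zero for no mixing
    return np.mean(rho) * np.mean(v) - 1.0
```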
NASA Astrophysics Data System (ADS)
Onken, Reiner
2017-04-01
The Regional Ocean Modeling System (ROMS) has been employed to explore the sensitivity of the forecast skill of mixed-layer properties to initial conditions, boundary conditions, and vertical mixing parameterisations. The initial and lateral boundary conditions were provided by the Mediterranean Forecasting System (MFS) or by the MERCATOR global ocean circulation model via one-way nesting; the initial conditions were additionally updated through the assimilation of observations. Nowcasts and forecasts from the weather forecast models COSMO-ME and COSMO-IT, partly melded with observations, served as surface boundary conditions. The vertical mixing was parameterised by the GLS (generic length scale) scheme of Umlauf and Burchard (2003) in four different set-ups. All ROMS forecasts were validated against the observations taken during the REP14-MED survey to the west of Sardinia. Nesting ROMS in MERCATOR and updating the initial conditions through data assimilation provided the best agreement of the predicted mixed-layer properties with the time series from a moored thermistor chain. Further improvement was obtained by using COSMO-ME atmospheric forcing melded with real observations and by applying the k-ω vertical mixing scheme with increased vertical eddy diffusivity. The predicted temporal variability of the mixed-layer temperature was reasonably well correlated with the observed variability, while the modelled variability of the mixed-layer depth agreed with the observations only near the diurnal frequency peak. For the forecasted horizontal variability, reasonable agreement was found with observations from a ScanFish section, but only for the mesoscale wave-number band; the observed sub-mesoscale variability was not reproduced by ROMS.
Distinguishing Schemes and Tasks in Children's Development of Multiplicative Reasoning
ERIC Educational Resources Information Center
Tzur, Ron; Johnson, Heather L.; McClintock, Evan; Kenney, Rachael H.; Xin, Yan P.; Si, Luo; Woodward, Jerry; Hord, Casey; Jin, Xianyan
2013-01-01
We present a synthesis of findings from constructivist teaching experiments regarding six schemes children construct for reasoning multiplicatively and tasks to promote them. We provide a task-generating platform game, depictions of each scheme, and supporting tasks. Tasks must be distinguished from children's thinking, and learning situations…
A country-wide probability sample of public attitudes toward stuttering in Portugal.
Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T
2017-06-01
Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist, and clear reasons for these differences remain obscure. No published research is available on public attitudes toward stuttering in Portugal, nor on a representative sample that explores stuttering attitudes in an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Z.; Lin, P.
In this paper, we investigate numerically a diffuse interface model for the Navier–Stokes equation with a fluid–fluid interface when the fluids have different densities [48]. Under a minor reformulation of the system, we show that there is a continuous energy law underlying the system, assuming that all variables have reasonable regularities. It is shown in the literature that an energy law preserving method will perform better for multiphase problems. Thus for the reformulated system, we design a C^0 finite element method and a special temporal scheme where the energy law is preserved at the discrete level. Such a discrete energy law (almost the same as the continuous energy law) for this variable density two-phase flow model has never been established before with C^0 finite elements. A Newton method is introduced to linearise the highly non-linear system of our discretization scheme. Some numerical experiments are carried out using the adaptive mesh to investigate the scenario of coalescing and rising drops with differing density ratio. The snapshots for the evolution of the interface together with the adaptive mesh at different times are presented to show that the evolution, including the break-up/pinch-off of the drop, can be handled smoothly by our numerical scheme. The discrete energy functional for the system is examined to show that the energy law at the discrete level is preserved by our scheme.
Passive tracking scheme for a single stationary observer
NASA Astrophysics Data System (ADS)
Chan, Y. T.; Rea, Terry
2001-08-01
While there are many techniques for Bearings-Only Tracking (BOT) in the ocean environment, they do not apply directly to the land situation. Generally, for tactical reasons, the land observer platform is stationary; however, it has two sensors, visual and infrared, for measuring bearings, and a laser range finder (LRF) for measuring range. There is a requirement to develop a new BOT data fusion scheme that fuses the two sets of bearing readings and, together with a single LRF measurement, produces a unique track. This paper first develops a parameterized solution for the target speeds prior to the occurrence of the LRF measurement, when the problem is unobservable. At, and after, the LRF measurement, a BOT formulated as a least squares (LS) estimator then produces a unique LS estimate of the target states. Bearing readings from the other sensor serve as instrumental variables in a data fusion setting to eliminate the bias in the BOT estimator. The result is a recursive, unbiased and decentralized data fusion scheme. Results from two simulation experiments have corroborated the theoretical development and show that the scheme is optimal.
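A rough illustration of the parameterized-solution idea (not the paper's full recursive, instrumental-variable estimator): each bearing from a stationary observer gives a homogeneous pseudo-linear constraint, so the constant-velocity state is determined only up to scale until the single LRF range fixes it. A Python sketch under those assumptions, with the observer at the origin and bearings measured as atan2(y, x):

```python
import numpy as np

def bot_track(bearings, times, t_lrf, r_lrf, theta_lrf):
    """Batch bearings-only LS track fixed in scale by one LRF range.

    Each bearing theta_i gives sin(theta_i)*x(t_i) - cos(theta_i)*y(t_i) = 0
    for a constant-velocity target, so the state [x0, y0, vx, vy] lies in
    the null space of A (known only up to scale) until the range r_lrf at
    time t_lrf resolves the magnitude and sign.
    """
    A = np.column_stack([np.sin(bearings), -np.cos(bearings),
                         times * np.sin(bearings), -times * np.cos(bearings)])
    s = np.linalg.svd(A)[2][-1]        # null-space direction (up to scale)
    p = s[:2] + s[2:] * t_lrf          # unscaled position at the LRF time
    alpha = r_lrf / np.linalg.norm(p)  # magnitude from the range measurement
    if np.dot(p, [np.cos(theta_lrf), np.sin(theta_lrf)]) < 0:
        alpha = -alpha                 # pick the branch along the LRF bearing
    return alpha * s                   # [x0, y0, vx, vy]
```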
Continuous-variable quantum network coding for coherent states
NASA Astrophysics Data System (ADS)
Shang, Tao; Li, Ke; Liu, Jian-wei
2017-04-01
As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be regarded as discrete-variable quantum network coding schemes. Considering the practical advantage of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log₂ N and 2 log₂ N bits of information in a single network use, respectively.
A variable vertical resolution weather model with an explicitly resolved planetary boundary layer
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1981-01-01
A version of the fourth order weather model incorporating surface wind stress data from SEASAT A scatterometer observations is presented. The Monin-Obukhov similarity theory is used to relate winds at the top of the surface layer to surface wind stress. A reasonable approximation of the surface fluxes of heat, moisture, and momentum is obtainable using this method. A Richardson number adjustment scheme based on the ideas of Chang is used to allow for turbulence effects.
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
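A minimal sketch of the two-predictor regression scheme described above, fitted by ordinary least squares; the array names and units are illustrative, not from the study's data:

```python
import numpy as np

def fit_min_anc(pre_anc, delta_q, min_anc):
    """Fit min-ANC = b0 + b1*(pre-episode ANC) + b2*(discharge change).

    Takes one value per episode; returns the fitted coefficients
    [b0, b1, b2] of the multiple linear regression model.
    """
    X = np.column_stack([np.ones_like(pre_anc), pre_anc, delta_q])
    coef, *_ = np.linalg.lstsq(X, min_anc, rcond=None)
    return coef
```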
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at any node of an interpolation grid, allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
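To make the optimization concrete, here is a minimal Python sketch of spatial simulated annealing for the first (MMSD) criterion only; the field-boundary constraints, the MSANOS software, and the other two criteria are omitted, and all tuning parameters are illustrative:

```python
import numpy as np

def mmsd(points, grid):
    """Mean, over all grid nodes, of the distance to the nearest sample."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    return d.min(axis=1).mean()

def anneal_scheme(points, grid, steps=5000, t0=1.0, cooling=0.999, jump=5.0):
    """Spatial simulated annealing for the MMSD criterion (a sketch).

    Perturbs one sampling location at a time and accepts worse schemes
    with the Metropolis rule; weighting by the ECa gradient or by kriging
    variance (the other two criteria) would only change the cost function.
    """
    rng = np.random.default_rng(0)
    cost, t = mmsd(points, grid), t0
    for _ in range(steps):
        cand = points.copy()
        i = rng.integers(len(points))
        cand[i] += rng.normal(scale=jump, size=2)      # move one point
        c = mmsd(cand, grid)
        if c < cost or rng.random() < np.exp((cost - c) / t):
            points, cost = cand, c                     # Metropolis accept
        t *= cooling                                   # cooling law
    return points, cost
```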
An efficient numerical method for solving the Boltzmann equation in multidimensions
NASA Astrophysics Data System (ADS)
Dimarco, Giacomo; Loubère, Raphaël; Narski, Jacek; Rey, Thomas
2018-01-01
In this paper we deal with the extension of the Fast Kinetic Scheme (FKS) (Dimarco and Loubère, 2013 [26]), originally constructed for solving the BGK equation, to the more challenging case of the Boltzmann equation. The scheme combines a robust and fast method for treating the transport part, based on an innovative Lagrangian technique, with conservative fast spectral schemes that treat the collisional operator by means of an operator splitting approach. This approach, along with several implementation features related to the parallelization of the algorithm, permits the construction of an efficient simulation tool which is numerically tested against exact and reference solutions on classical problems arising in rarefied gas dynamics. We present results up to the 3D×3D case for unsteady flows for the Variable Hard Sphere model, which may serve as a benchmark for future comparisons between different numerical methods for solving the multidimensional Boltzmann equation. For this reason, we also provide for each problem studied details on the computational cost and memory consumption, as well as comparisons with the BGK model or the limit model of compressible Euler equations.
NASA Astrophysics Data System (ADS)
Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert
2005-09-01
Classical aerosol schemes use either a sectional (bin) or a lognormal approach. Both approaches have particular capabilities and interests: the sectional approach is able to describe every kind of distribution, whereas the lognormal one assumes the form of the distribution and uses a smaller number of explicit variables. For this last reason we developed a three-moment lognormal aerosol scheme named ORILAM, to be coupled into three-dimensional mesoscale or CTM models. This paper presents the concept and hypotheses of a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular strength of ORILAM is that it keeps the aerosol composition and distribution explicit (the mass of each constituent, the mean radius, and the standard deviation of the distribution are explicit) using the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of the lognormal mode is lower than that of two bins of the sectional one.
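As a worked illustration of why three moments suffice: for a lognormal mode, M_k = N·r_g^k·exp(k²·ln²σ_g/2), so (m0, m3, m6) can be inverted for the number concentration, median radius, and geometric standard deviation. A short Python sketch of these standard moment relations (not code from ORILAM itself):

```python
import numpy as np

def lognormal_params(m0, m3, m6):
    """Recover (N, rg, sigma_g) from three moments of a lognormal mode.

    Uses M_k = N * rg**k * exp(k**2 * L / 2) with L = ln(sigma_g)**2, so
    ln m6 - 2 ln m3 + ln m0 = 9 L and ln m3 - ln m0 = 3 ln rg + 4.5 L.
    """
    L = np.log(m0 * m6 / m3**2) / 9.0
    rg = np.exp((np.log(m3 / m0) - 4.5 * L) / 3.0)
    return m0, rg, np.exp(np.sqrt(L))   # N is the 0th moment itself
```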
NASA Astrophysics Data System (ADS)
Cuchiara, Gustavo C.; Li, Xiangshang; Carvalho, Jonas; Rappenglück, Bernhard
2015-04-01
With over 6 million inhabitants, the Houston metropolitan area is the fourth-largest in the United States. Ozone concentration in this southeast Texas region frequently exceeds the National Ambient Air Quality Standard (NAAQS). For this reason our study employed the Weather Research and Forecasting model with Chemistry (WRF/Chem) to quantify the meteorological prediction differences produced by four widely used PBL schemes and analyzed their impact on ozone predictions. The model results were compared to observational data in order to identify one superior PBL scheme better suited for the area. The four PBL schemes include two first-order closure schemes, the Yonsei University (YSU) and the Asymmetric Convective Model version 2 (ACM2), as well as two turbulent kinetic energy closure schemes, the Mellor-Yamada-Janjic (MYJ) and Quasi-Normal Scale Elimination (QNSE). Four 24 h forecasts were performed, one for each PBL scheme. Simulated vertical profiles of temperature, potential temperature, relative humidity, water vapor mixing ratio, and the u-v components of the wind were compared to measurements collected during the Second Texas Air Quality Study (TexAQS-II) Radical and Aerosol Measurements Project (TRAMP) experiment in summer 2006. Simulated ozone was compared against TRAMP data and against measurements from Continuous Ambient Monitoring Stations (CAMS). Also, the evolution of the PBL height and the vertical mixing properties within the PBL for the four simulations were explored. Although the results yielded high correlation coefficients and small biases in almost all meteorological variables, the overall results did not indicate any preferred PBL scheme for the Houston case. However, for ozone prediction the YSU scheme showed the greatest agreement with observed values.
The sixth generation robot in space
NASA Technical Reports Server (NTRS)
Butcher, A.; Das, A.; Reddy, Y. V.; Singh, H.
1990-01-01
The knowledge-based simulator developed in the artificial intelligence laboratory has become a working test bed for experimenting with intelligent reasoning architectures. With this simulator, small experiments have recently been conducted with the aim of simulating robot behavior that avoids colliding paths. An automatic extension of such experiments to intelligently planning robots in space demands advanced reasoning architectures. One such architecture for general purpose problem solving is explored. The robot, seen as a knowledge base machine, proceeds via a predesigned abstraction mechanism for problem understanding and response generation. The three phases in one such abstraction scheme are: abstraction for representation, abstraction for evaluation, and abstraction for resolution. Such abstractions require multimodality, which in turn requires the use of intensional variables to deal with beliefs in the system. Abstraction mechanisms help in synthesizing possible propagating lattices for such beliefs. The machine controller enters into a sixth generation paradigm.
Enhancing the Remote Variable Operations in NPSS/CCDK
NASA Technical Reports Server (NTRS)
Sang, Janche; Follen, Gregory; Kim, Chan; Lopez, Isaac; Townsend, Scott
2001-01-01
Many scientific applications in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase code reusability. The remote variable scheme provided in NPSS/CCDK helps programmers easily migrate Fortran codes towards a client-server platform. This scheme gives the client the capability of accessing variables at the server site. In this paper, we review and enhance the remote variable scheme by using the operator overloading features of C++. The enhancement enables NPSS programmers to use remote variables in much the same way as traditional variables. The remote variable scheme adopts a lazy update approach and a prefetch method. The design strategies and implementation techniques are described in detail. Preliminary performance evaluation shows that communication overhead can be greatly reduced.
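The scheme itself is C++-based, but the lazy-update idea can be sketched in a few lines of Python: a client-side proxy caches the server's value, fetches it only when it is actually read, and writes changes through. The `server.fetch`/`server.store` interface below is a hypothetical stand-in for the actual transport, and the Python operator overloading mirrors what the paper does with C++ operators:

```python
class RemoteVariable:
    """Client-side proxy for a server-resident variable (a sketch of the
    lazy-update idea; not the NPSS/CCDK implementation)."""

    def __init__(self, server, name):
        self._server, self._name = server, name
        self._cache, self._dirty = None, True    # lazy: fetch only on demand

    def get(self):
        if self._dirty:                          # one round trip, then reuse
            self._cache = self._server.fetch(self._name)
            self._dirty = False
        return self._cache

    def set(self, value):
        self._server.store(self._name, value)    # write through to the server
        self._cache, self._dirty = value, False

    def __add__(self, other):                     # arithmetic reads like a local variable
        return self.get() + (other.get() if isinstance(other, RemoteVariable) else other)
```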
ERIC Educational Resources Information Center
Hodkowski, Nicola M.; Gardner, Amber; Jorgensen, Cody; Hornbein, Peter; Johnson, Heather L.; Tzur, Ron
2016-01-01
In this paper we examine the application of Tzur's (2007) fine-grained assessment to the design of an assessment measure of a particular multiplicative scheme, so that non-interview, good-enough data can be obtained (on a large scale) to infer elementary students' reasoning. We outline three design principles that surfaced through our recent…
Heemskerk, Laura; Norman, Geoff; Chou, Sophia; Mintz, Marcy; Mandin, Henry; McLaughlin, Kevin
2008-11-01
Previous studies have suggested an association between reasoning strategies and diagnostic success, but the influence on this relationship of variables such as question format and task difficulty has not been studied. Our objective was to study the association between question format, task difficulty, reasoning strategies and diagnostic success. Study participants were 13 Internal Medicine residents at the University of Calgary. Each was given eight problem-solving questions in four clinical presentations and was randomized to groups that differed only in the question format, such that a question presented as short answer (SA) to the first group was presented as extended matching (EM) to the second group. There were equal numbers of SA/EM questions and straightforward/difficult tasks. Participants performed think-aloud during diagnostic reasoning. Data were analyzed using multiple logistic regression. Question format was associated with reasoning strategies, hypothetico-deductive reasoning being used more frequently on EM questions and scheme-inductive reasoning on SA questions. For SA questions, non-analytic reasoning alone was used more frequently to answer straightforward cases than difficult cases, whereas for EM questions no such association was observed. EM format and straightforward tasks increased the odds of diagnostic success, whereas hypothetico-deductive reasoning was associated with reduced odds of success. Question format and task difficulty both influence diagnostic reasoning strategies, and studies that examine the effect of reasoning strategies on diagnostic success should control for these effects. Further studies are needed to investigate the effect of reasoning strategies on the performance of different groups of learners.
Diagnostic reasoning strategies and diagnostic success.
Coderre, S; Mandin, H; Harasym, P H; Fick, G H
2003-08-01
Cognitive psychology research supports the notion that experts use mental frameworks or "schemes", both to organize knowledge in memory and to solve clinical problems. The central purpose of this study was to determine the relationship between problem-solving strategies and the likelihood of diagnostic success. Think-aloud protocols were collected to determine the diagnostic reasoning used by experts and non-experts when attempting to diagnose clinical presentations in gastroenterology. Using logistic regression analysis, the study found that there is a relationship between diagnostic reasoning strategy and the likelihood of diagnostic success. Compared to hypothetico-deductive reasoning, the odds of diagnostic success were significantly greater when subjects used the diagnostic strategies of pattern recognition and scheme-inductive reasoning. Two other factors emerged as independent determinants of diagnostic success: expertise and clinical presentation. Not surprisingly, experts outperformed novices, while the content areas of the clinical cases in each of the four clinical presentations demonstrated varying degrees of difficulty and thus diagnostic success. These findings have significant implications for medical educators. They support the introduction of "schemes" as a means of enhancing memory organization and improving diagnostic success.
Expertise and reasoning with possibility: An explanation of modal logic and expert systems
NASA Technical Reports Server (NTRS)
Rochowiak, Daniel
1988-01-01
Recently, systems of modal reasoning have been brought to the foreground of artificial intelligence studies. The intuitive idea of research efforts in this area is that, in addition to the actual world in which sentences have certain truth values, there are other worlds in which those sentences have different truth values. Such alternative worlds can be considered as possible worlds, and an agent may or may not have access to some or all of them. This approach to reasoning can be valuable in extending the expert system paradigm. Using the scheme of reasoning proposed by Toulmin, Rieke and Janik and the modal system T, a scheme is proposed for expert reasoning that mitigates some of the criticisms raised by Schank and Nickerson.
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware, which makes them unsuitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms lack resistance to quantum computing. This paper investigates the public-key algorithm NTRU, which is resistant to quantum computation, by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve the probability of generating a reasonable signature value: first, increasing the value of the parameter q; second, adding an authentication condition during the signature phase that verifies the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of the private key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.
NASA Astrophysics Data System (ADS)
Martínez-Castro, Daniel; Vichot-Llano, Alejandro; Bezanilla-Morlot, Arnoldo; Centella-Artola, Abel; Campbell, Jayaka; Giorgi, Filippo; Viloria-Holguin, Cecilia C.
2018-06-01
A sensitivity study of the performance of the RegCM4 regional climate model driven by the ERA-Interim reanalysis is conducted for the Central America and Caribbean region. A set of numerical experiments is completed using four configurations of the model with a horizontal grid spacing of 25 km for a period of 6 years (1998-2003), using three of the convective parameterization schemes implemented in the model: the Emanuel scheme, the Grell over land-Emanuel over ocean scheme, and two configurations of the Tiedtke scheme. The objective of the study is to investigate the ability of each configuration to reproduce different characteristics of the temperature, circulation and precipitation fields for the dry and rainy seasons. All schemes simulate the general temperature and precipitation patterns over land reasonably well, with relatively high correlations compared to observation datasets, though in specific regions there are positive or negative biases, greater in the rainy season. We also focus on some circulation features relevant for the region, such as the Caribbean low-level jet and sea-breeze circulations over islands, which are simulated by the model with varied performance across the different configurations. We find that no model configuration assessed is best performing for all the analysis criteria selected, but the Tiedtke configurations, which include the capability of tuning in particular the exchanges between cloud and environment air, provide the most balanced range of biases across variables, with no outstanding systematic bias emerging.
Qian, Yun; Yan, Huiping; Berg, Larry K.; ...
2016-10-28
Accuracy of turbulence parameterization in representing Planetary Boundary Layer (PBL) processes in climate models is critical for predicting the initiation and development of clouds, air quality issues, and underlying surface-atmosphere-cloud interactions. In this study, we 1) evaluate WRF-simulated spatial patterns of precipitation and surface fluxes, as well as vertical profiles of potential temperature, humidity, moist static energy and moisture tendency terms, at various spatial resolutions and with different PBL, surface layer and shallow convection schemes, against measurements; 2) identify model biases by examining the moisture tendency terms contributed by PBL and convection processes through nudging experiments; and 3) evaluate the dependence of modeled surface latent heat (LH) fluxes on PBL and surface layer schemes over the tropical ocean. The results show that PBL and surface parameterizations have surprisingly large impacts on precipitation, convection initiation and surface moisture fluxes over tropical oceans. All of the parameterizations tested tend to overpredict moisture in the PBL and free atmosphere, and consequently produce larger moist static energy and precipitation. Moisture nudging tends to suppress the initiation of convection and reduces the excess precipitation. The reduction in precipitation bias in turn reduces the surface wind and LH flux biases, which suggests that the model drifts at least partly because of a positive feedback between precipitation and surface fluxes. The updated shallow convection scheme KF-CuP tends to suppress the initiation and development of deep convection, consequently decreasing precipitation. The Eta surface layer scheme predicts more reasonable LH fluxes and LH-wind speed relationships than the MM5 scheme, especially when coupled with the MYJ scheme. By examining various parameterization schemes in WRF, we identify sources of biases and weaknesses of current PBL, surface layer and shallow convection schemes in reproducing PBL processes, the initiation of convection and the intra-seasonal variability of precipitation.
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of an event-related potential (ERP) signal that represents a brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing an averaged signal as a reference, e.g., up to 4 dB gain at an expected delay error of 10°. PMID:27322267
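The joint-ML schemes themselves are not spelled out in the abstract; as a simplified stand-in, the sketch below jointly estimates per-trial integer delays by iteratively re-aligning trials against the running average via cross-correlation, which illustrates why delay compensation changes the averaged ERP. A Python sketch with all parameters illustrative:

```python
import numpy as np

def align_trials(trials, max_iter=10):
    """Jointly estimate per-trial delays by iterative realignment.

    trials: (n_trials, length) array of ERP epochs. Each pass re-estimates
    every trial's integer delay against the current average (the peak of
    the full cross-correlation), shifts, and re-averages until no residual
    shifts remain. Circular shifts are an edge-effect approximation.
    """
    n, length = trials.shape
    delays = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        template = trials.mean(axis=0)
        new = np.array([int(np.argmax(np.correlate(tr, template, 'full')))
                        - (length - 1) for tr in trials])
        if np.array_equal(new, np.zeros(n)):   # converged: no residual shifts
            break
        trials = np.array([np.roll(tr, -d) for tr, d in zip(trials, new)])
        delays += new
    return trials.mean(axis=0), delays
```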
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
NASA Astrophysics Data System (ADS)
Britt, Darrell Steven, Jr.
Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
2009-01-01
Background Large discrepancies in signature composition and outcome concordance have been observed between different microarray breast cancer expression profiling studies. This is often ascribed to differences in array platform as well as biological variability. We conjecture that other reasons for the observed discrepancies are the measurement error associated with each feature and the choice of preprocessing method. Microarray data are known to be subject to technical variation and the confidence intervals around individual point estimates of expression levels can be wide. Furthermore, the estimated expression values also vary depending on the selected preprocessing scheme. In microarray breast cancer classification studies, however, these two forms of feature variability are almost always ignored and hence their exact role is unclear. Results We have performed a comprehensive sensitivity analysis of microarray breast cancer classification under the two types of feature variability mentioned above. We used data from six state-of-the-art preprocessing methods, using a compendium consisting of eight different datasets, involving 1131 hybridizations, containing data from both one- and two-color array technology. For a wide range of classifiers, we performed a joint study on performance, concordance and stability. In the stability analysis we explicitly tested classifiers for their noise tolerance by using perturbed expression profiles that are based on uncertainty information directly related to the preprocessing methods. Our results indicate that signature composition is strongly influenced by feature variability, even if the array platform and the stratification of patient samples are identical. In addition, we show that there is often a high level of discordance between individual class assignments for signatures constructed on data coming from different preprocessing schemes, even if the actual signature composition is identical. Conclusion Feature variability can have a strong impact on breast cancer signature composition, as well as the classification of individual patient samples. We therefore strongly recommend that feature variability is considered in analyzing data from microarray breast cancer expression profiling experiments. PMID:19941644
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.
2018-04-01
High consumption of memory and computational effort is the major barrier preventing the widespread use of the discrete velocity method (DVM) in the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored, and they are calculated from the macroscopic flow variables. As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. In the meantime, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for solving flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE. In the prediction step, the distribution function at the cell interface is calculated by the local solution of the DVBE. When the cell size is less than the mean free path, the prediction step has almost no effect on the solution. However, when the cell size is much larger than the mean free path, the prediction step dominates the solution so as to provide reasonable results in such a flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results showed that the proposed implicit scheme can provide reasonable results in all flow regimes and significantly increase the computational efficiency in the continuum flow regime as compared with existing DVM solvers.
A k-Omega Turbulence Model for Quasi-Three-Dimensional Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
1995-01-01
A two-equation k-omega turbulence model has been developed and applied to a quasi-three-dimensional viscous analysis code for blade-to-blade flows in turbomachinery. The code includes the effects of rotation, radius change, and variable stream sheet thickness. The flow equations are given and the explicit Runge-Kutta solution scheme is described. The k-omega model equations are also given and the upwind implicit approximate-factorization solution scheme is described. Three cases were calculated: transitional flow over a flat plate, a transonic compressor rotor, and a transonic turbine vane with heat transfer. Results were compared to theory, experimental data, and to results using the Baldwin-Lomax turbulence model. The two models compared reasonably well with the data and surprisingly well with each other. Although the k-omega model behaves well numerically and simulates the effects of transition, freestream turbulence, and wall roughness, it was not decisively better than the Baldwin-Lomax model for the cases considered here.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining the boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is capable of simulating the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme which accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme which considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme which adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture the storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
Extricating Justification Scheme Theory in Middle School Mathematical Problem Solving
ERIC Educational Resources Information Center
Matteson, Shirley; Capraro, Mary Margaret; Capraro, Robert M.; Lincoln, Yvonna S.
2012-01-01
Twenty middle grades students were interviewed to gain insights into their reasoning about problem-solving strategies using a Problem Solving Justification Scheme as our theoretical lens and the basis for our analysis. The scheme was modified from the work of Harel and Sowder (1998) making it more broadly applicable and accounting for research…
Chao, Eunice; Krewski, Daniel
2008-12-01
This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.
Improving multivariate Horner schemes with Monte Carlo tree search
NASA Astrophysics Data System (ADS)
Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.
2013-11-01
Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
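As a concrete illustration of the baseline the search is trying to improve on, here is a minimal Python sketch (written for this summary, not the authors' code) of univariate Horner evaluation. The multivariate generalization applies the same factoring recursively, one variable at a time, and the choice of which variable to factor out first is exactly what the greedy rule or the Monte Carlo tree search decides.

    # Minimal sketch of Horner's method for a univariate polynomial.
    # coeffs holds [a_n, ..., a_1, a_0] for a_n*x**n + ... + a_0.
    def horner(coeffs, x):
        result = 0.0
        for a in coeffs:
            result = result * x + a  # one multiplication and one addition per coefficient
        return result

    # 3x^3 - 2x^2 + 5 evaluated at x = 2: ((3*2 - 2)*2 + 0)*2 + 5 = 21
    print(horner([3.0, -2.0, 0.0, 5.0], 2.0))  # -> 21.0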
Atinga, Roger A; Abiiro, Gilbert Abotisem; Kuganab-Lem, Robert Bella
2015-03-01
To identify the factors influencing dropout from Ghana's health insurance scheme among populations living in slum communities. Cross-sectional data were collected from residents of 22 slums in the Accra Metropolitan Assembly. Cluster and systematic random sampling techniques were used to select and interview 600 individuals who had dropped out of the scheme 6 months prior to the study. Descriptive statistics and multivariate logistic regression models were computed to account for sample characteristics and the reasons associated with the decision to drop out. The proportion of dropouts in the sample increased from 6.8% in 2008 to 34.8% in 2012. Non-affordability of the premium was the predominant reason, followed by rare illness episodes, limited benefits of the scheme and poor service quality. Low-income earners and those with low education were significantly more likely to report premium non-affordability. Rare illness was a common reason among younger respondents, informal sector workers and respondents with higher education. All subgroups of age, education, occupation and income reported nominal benefits of the scheme as a reason for dropout. Interventions targeted at removing bottlenecks to health insurance enrolment are salient to maximising the size of the insurance pool. Strengthening service quality and extending the premium exemption to cover low-income families in slum communities is a valuable strategy to achieve universal health coverage. © 2014 John Wiley & Sons Ltd.
Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen
2009-01-01
Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Positioning System (GPS) and the Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and a KF has been proposed in previous studies to overcome the limitations of the KF and to improve the performance of the INS/GPS integrated system. However, the accuracy requirements of general mobile mapping applications cannot easily be met, even with the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that embeds an ANN within a conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a MEMS INS/GPS integrated system in post-mission mode. By combining the Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system with the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated. PMID:22574034
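For readers unfamiliar with the smoother the ANN is embedded with, the following is a minimal, self-contained sketch of a forward Kalman filter followed by an RTS backward pass on a toy 1D constant-velocity model. All matrices and noise levels are illustrative assumptions, and the ANN correction stage of the proposed scheme is not reproduced.

    import numpy as np

    # Toy forward Kalman filter + Rauch-Tung-Striebel (RTS) backward smoother.
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])             # we observe position only (GPS-like)
    Q = 0.01 * np.eye(2)                   # process noise (assumed)
    R = np.array([[0.5]])                  # measurement noise (assumed)

    def kf_rts(zs):
        n = len(zs)
        xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
        x, P = np.zeros(2), np.eye(2)
        for z in zs:                                # forward filter pass
            xp, Pp = F @ x, F @ P @ F.T + Q         # predict
            K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
            x = xp + K @ (np.atleast_1d(z) - H @ xp)  # measurement update
            P = (np.eye(2) - K @ H) @ Pp
            xs_f.append(x); Ps_f.append(P); xs_p.append(xp); Ps_p.append(Pp)
        xs_s, Ps_s = [None] * n, [None] * n         # backward RTS pass
        xs_s[-1], Ps_s[-1] = xs_f[-1], Ps_f[-1]
        for k in range(n - 2, -1, -1):
            C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
            xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
            Ps_s[k] = Ps_f[k] + C @ (Ps_s[k + 1] - Ps_p[k + 1]) @ C.T
        return np.array(xs_s)

    print(kf_rts([0.1, 1.2, 1.9, 3.1, 4.0])[:, 0])  # smoothed positions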
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that: (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
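A minimal sketch of the incremental analysis update idea, with toy scalar dynamics (the step function and all numbers below are invented): instead of adding the analysis increment in one shot, it is applied as a constant tendency over an update window, and shifting the position of that window relative to the analysis time is what distinguishes the IAU 0/50/100 variants compared in the paper.

    # Minimal IAU sketch with invented toy dynamics.
    def step(x):                      # stand-in for one model time step
        return 0.95 * x + 0.1         # (hypothetical relaxation dynamics)

    def iau_run(x, dx, n_window, n_total):
        for k in range(n_total):
            x = step(x)
            if k < n_window:          # spread the increment over the window
                x = x + dx / n_window
        return x

    x0, dx = 1.0, 0.4
    print(iau_run(x0, dx, n_window=1, n_total=10))   # near-instant update
    print(iau_run(x0, dx, n_window=10, n_total=10))  # increment spread over window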
Numerical scoring for the Classic BILAG index.
Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Isenberg, David A; Gordon, Caroline
2009-12-01
To develop an additive numerical scoring scheme for the Classic BILAG index. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0.
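A minimal sketch of the recommended additive scoring, assuming one letter grade per organ system (the helper name bilag_score is ours; the weights are the paper's final recommendation):

    # Additive numerical scoring for the Classic BILAG index:
    # A = 12, B = 5, C = 1, D = 0, E = 0, summed over the eight systems.
    WEIGHTS = {"A": 12, "B": 5, "C": 1, "D": 0, "E": 0}

    def bilag_score(grades):
        """grades: iterable of letter grades, one per organ system."""
        return sum(WEIGHTS[g] for g in grades)

    # A patient graded A in one system, B in two, C in one, inactive elsewhere:
    print(bilag_score(["A", "B", "B", "C", "D", "E", "E", "E"]))  # -> 23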
A Two-Timescale Discretization Scheme for Collocation
NASA Technical Reports Server (NTRS)
Desai, Prasun; Conway, Bruce A.
2004-01-01
The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a coarser discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher-frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.
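A back-of-the-envelope sketch of why the two-timescale grid shrinks the NLP; the node counts and state dimensions below are invented for illustration and do not come from the paper.

    import numpy as np

    # Slow states live on a coarse node set, fast states on a finer one.
    t0, tf = 0.0, 100.0
    coarse = np.linspace(t0, tf, 11)   # 11 nodes for smooth states
    fine = np.linspace(t0, tf, 101)    # 101 nodes for fast states

    n_slow, n_fast = 3, 4              # hypothetical state dimensions
    single_grid = (n_slow + n_fast) * fine.size            # every state on the fine grid
    two_scale = n_slow * coarse.size + n_fast * fine.size  # two-timescale layout
    print(single_grid, two_scale)      # 707 vs 437 NLP state parameters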
Dynamic Downscaling of Seasonal Simulations over South America.
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu; Dirmeyer, Paul A.; Kirtman, Ben P.
2003-01-01
In this paper multiple atmospheric global circulation model (AGCM) integrations at T42 spectral truncation and prescribed sea surface temperature were used to drive regional spectral model (RSM) simulations at 80-km resolution for the austral summer season (January-February-March). Relative to the AGCM, the RSM improves the ensemble mean simulation of precipitation and the lower- and upper-level tropospheric circulation over both tropical and subtropical South America and the neighboring ocean basins. It is also seen that the RSM exacerbates the dry bias over the northern tip of South America and the Nordeste region, and perpetuates the erroneous split intertropical convergence zone (ITCZ) over both the Pacific and Atlantic Ocean basins from the AGCM. The RSM at 80-km horizontal resolution is able to reasonably resolve the Altiplano plateau. This led to an improvement in the mean precipitation over the plateau. The improved resolution orography in the RSM did not substantially change the predictability of the precipitation, surface fluxes, or upper- and lower-level winds in the vicinity of the Andes Mountains from the AGCM. In spite of identical convective and land surface parameterization schemes, the diagnostic quantities, such as precipitation and surface fluxes, show significant differences in the intramodel variability over oceans and certain parts of the Amazon River basin (ARB). However, the prognostic variables of the models exhibit relatively similar model noise structures and magnitude. This suggests that the model physics are in large part responsible for the divergence of the solutions in the two models. However, the surface temperature and fluxes from the land surface scheme of the model [Simplified Simple Biosphere scheme (SSiB)] display comparable intramodel variability, except over certain parts of ARB in the two models. This suggests a certain resilience of predictability in SSiB (over the chosen domain of study) to variations in horizontal resolution. It is seen in this study that the summer precipitation over tropical and subtropical South America is highly unpredictable in both models.
NASA Astrophysics Data System (ADS)
Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.
2018-04-01
Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
NASA Astrophysics Data System (ADS)
Li, Xin; Zeng, Mingjian; Wang, Yuan; Wang, Wenlan; Wu, Haiying; Mei, Haixia
2016-10-01
Different choices of control variables in variational assimilation can have different influences on the analyzed atmospheric state. Based on the WRF model's three-dimensional variational assimilation system, this study compares the behavior of two momentum control variable options, streamfunction and velocity potential (ψ-χ) versus horizontal wind components (U-V), in radar wind data assimilation for a squall line case that occurred in Jiangsu Province on 24 August 2014. The wind increment from the single-observation test shows that the ψ-χ control variable scheme produces negative increments in the neighborhood of the observation point because streamfunction and velocity potential preserve integrals of velocity. In contrast, the U-V control variable scheme objectively reflects the information of the observation itself. Furthermore, radial velocity data from 17 Doppler radars in eastern China are assimilated. Compared to the impact of conventional observations, the assimilation of radar radial velocity based on the U-V control variable scheme significantly improves the mesoscale dynamic field in the initial condition. The enhanced low-level jet stream, water vapor convergence and low-level wind shear result in better squall line forecasting. However, the ψ-χ control variable scheme generates a discontinuous wind field and unrealistic convergence/divergence in the analyzed field, which lead to a degraded precipitation forecast.
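A minimal sketch of why the two control variable choices behave differently (finite differences via numpy.gradient; the grid and the increment are invented): winds are recovered from ψ and χ by differentiation, u = ∂χ/∂x − ∂ψ/∂y and v = ∂χ/∂y + ∂ψ/∂x, so a point increment in ψ implies a spread-out, sign-changing wind increment, whereas U-V increments stay local.

    import numpy as np

    nx, ny, d = 64, 64, 1.0
    psi = np.zeros((ny, nx)); chi = np.zeros((ny, nx))
    psi[32, 32] = 1.0                      # a single-point psi increment

    dpsi_dy, dpsi_dx = np.gradient(psi, d)  # axis 0 is y, axis 1 is x
    dchi_dy, dchi_dx = np.gradient(chi, d)
    u = dchi_dx - dpsi_dy
    v = dchi_dy + dpsi_dx
    print(u[31:34, 31:34])                  # dipole structure around the point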
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
NASA Astrophysics Data System (ADS)
Peters, Andre; Nehls, Thomas; Wessolek, Gerd
2016-06-01
Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
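A minimal sketch of the three interpolation options between significant mass changes, using SciPy and invented lysimeter readings (times in minutes, mass in kg); this is an illustration of the idea, not the AWAT code.

    import numpy as np
    from scipy.interpolate import interp1d, CubicSpline

    # "Significant" mass changes retained by the adaptive threshold (made up).
    t_sig = np.array([0.0, 60.0, 120.0, 180.0])
    m_sig = np.array([100.00, 100.05, 99.90, 99.88])

    t_out = np.arange(0.0, 180.1, 10.0)                      # requested output resolution
    step = interp1d(t_sig, m_sig, kind="previous")(t_out)    # original step behaviour
    linear = np.interp(t_out, t_sig, m_sig)
    spline = CubicSpline(t_sig, m_sig)(t_out)                # continuously differentiable

    # Fluxes (P or ET) follow from time differences of the interpolated mass,
    # which is why the step scheme degrades at sub-daily output resolution.
    print(np.diff(spline) / 10.0)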
NASA Astrophysics Data System (ADS)
Zanotti, Olindo; Dumbser, Michael
2016-01-01
We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER schemes provide less oscillatory solutions when compared to ADER finite volume schemes based on the reconstruction in conserved variables, especially for the RMHD and the Baer-Nunziato equations. For the RHD and RMHD equations, the overall accuracy is improved and the CPU time is reduced by about 25 %. Because of its increased accuracy and due to the reduced computational cost, we recommend to use this version of ADER as the standard one in the relativistic framework. At the end of the paper, the new approach has also been extended to ADER-DG schemes on space-time adaptive grids (AMR).
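A minimal sketch of the conserved-to-primitive conversion that the new scheme needs only once per cell centre, written for the classical Euler equations; for the RHD and RMHD systems this step becomes an iterative root-finding problem, which is one reason reducing the number of conversions pays off.

    import numpy as np

    GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed)

    def cons2prim(U):
        """U = (rho, rho*u, rho*v, rho*w, E) -> (rho, u, v, w, p)."""
        rho = U[0]
        u, v, w = U[1] / rho, U[2] / rho, U[3] / rho
        kinetic = 0.5 * rho * (u * u + v * v + w * w)
        p = (GAMMA - 1.0) * (U[4] - kinetic)  # ideal-gas equation of state
        return np.array([rho, u, v, w, p])

    print(cons2prim(np.array([1.0, 0.5, 0.0, 0.0, 2.5])))
    # -> rho = 1, u = 0.5, p = 0.4 * (2.5 - 0.125) = 0.95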
Effects of color scheme and message lines of variable message signs on driver performance.
Lai, Chien-Jung
2010-07-01
The advancement in variable message signs (VMS) technology has made it possible to display message with various formats. This study presented an ergonomic study on the message design of Chinese variable message signs on urban roads in Taiwan. Effects of color scheme (one, two and three) and number of message lines (single, double and triple) of VMS on participants' response performance were investigated through a laboratory experiment. Results of analysis showed that color scheme and number of message lines are significant factors for participants' response time to VMS. Participants responded faster for two-color than for one- and three-color scheme. Participants also took less response time for double line message than for single and triple line message. Both color scheme and number of message lines had no significant effect on participants' response accuracy. The preference survey after the experiment showed that most participants preferred two-color scheme and double line message to the other combinations. The results can assist in adopting appropriate color scheme and number of message lines of Chinese VMS. Copyright 2009 Elsevier Ltd. All rights reserved.
A hydrological emulator for global applications - HE v1.0.0
NASA Astrophysics Data System (ADS)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong
2018-03-01
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. A case study of uncertainty analysis for the world's 16 basins with top annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
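A minimal sketch of one monthly step of the abcd water balance the emulator is built on, in the formulation commonly attributed to Thomas (1981); the parameter values and forcings below are invented, and this is not the emulator's calibrated code.

    import numpy as np

    def abcd_step(P, PET, S, G, a=0.98, b=400.0, c=0.5, d=0.1):
        """One monthly step: P, PET in mm; S (soil) and G (groundwater) stores in mm."""
        W = P + S                                  # available water
        h = (W + b) / (2.0 * a)
        Y = h - np.sqrt(h * h - W * b / a)         # "evapotranspiration opportunity"
        S_new = Y * np.exp(-PET / b)               # soil moisture carryover
        ET = Y - S_new
        G_new = (G + c * (W - Y)) / (1.0 + d)      # groundwater store
        Q = (1.0 - c) * (W - Y) + d * G_new        # direct runoff + baseflow
        return Q, ET, S_new, G_new

    Q, ET, S, G = abcd_step(P=80.0, PET=60.0, S=150.0, G=50.0)
    print(round(Q, 2), round(ET, 2))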
Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.
2015-05-15
Peak-to-Average Power Ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in a loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes, such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed a slightly better performance of the pseudorandom partitioning technique compared to fixed and variable adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes were still seen as favorable candidates for PAPR reduction.
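A minimal sketch of PTS with adjacent (contiguous) partitioning, using invented sizes, QPSK symbols and no oversampling; the exhaustive binary phase search below is the cost that partitioning choices trade against PAPR performance.

    import numpy as np
    from itertools import product

    N, V = 64, 4                                  # subcarriers, number of partitions
    rng = np.random.default_rng(0)
    X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # QPSK frame

    blocks = np.zeros((V, N), dtype=complex)
    for v in range(V):                            # adjacent (contiguous) blocks
        blocks[v, v * (N // V):(v + 1) * (N // V)] = X[v * (N // V):(v + 1) * (N // V)]
    sub = np.fft.ifft(blocks, axis=1)             # partial transmit sequences

    def papr(x):
        return np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)

    # Exhaustive search over binary phase factors {+1, -1} per block.
    best = min((papr(np.dot(b, sub)), b) for b in product([1, -1], repeat=V))
    print(10 * np.log10(papr(np.fft.ifft(X))), 10 * np.log10(best[0]))  # PAPR in dB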
Vehicle Integrated Prognostic Reasoner (VIPR) Metric Report
NASA Technical Reports Server (NTRS)
Cornhill, Dennis; Bharadwaj, Raj; Mylaraswamy, Dinkar
2013-01-01
This document outlines a set of metrics for evaluating the diagnostic and prognostic schemes developed for the Vehicle Integrated Prognostic Reasoner (VIPR), a system-level reasoner that encompasses the multiple levels of large, complex systems such as those for aircraft and spacecraft. VIPR health managers are organized hierarchically and operate together to derive diagnostic and prognostic inferences from symptoms and conditions reported by a set of diagnostic and prognostic monitors. For layered reasoners such as VIPR, the overall performance cannot be evaluated by metrics solely directed toward timely detection and accuracy of estimation of the faults in individual components. Among other factors, overall vehicle reasoner performance is governed by the effectiveness of the communication schemes between monitors and reasoners in the architecture, and the ability to propagate and fuse relevant information to make accurate, consistent, and timely predictions at different levels of the reasoner hierarchy. We outline an extended set of diagnostic and prognostics metrics that can be broadly categorized as evaluation measures for diagnostic coverage, prognostic coverage, accuracy of inferences, latency in making inferences, computational cost, and sensitivity to different fault and degradation conditions. We report metrics from Monte Carlo experiments using two variations of an aircraft reference model that supported both flat and hierarchical reasoning.
Performance of a TKE diffusion scheme in ECMWF IFS Single Column Model
NASA Astrophysics Data System (ADS)
Svensson, Jacob; Bazile, Eric; Sandu, Irina; Svensson, Gunilla
2015-04-01
Numerical Weather Prediction (NWP) models as well as climate models are used for decision making at all levels of society, and their performance and accuracy are of great importance for both economic and safety reasons. Today's extensive use of weather apps and websites that directly use model output further highlights the importance of realistic output parameters. The turbulent atmospheric boundary layer (ABL) involves many physical processes that occur on subgrid scales and need to be parameterized. As the overwhelming majority of the biosphere is located in the ABL, it is of great importance that these subgrid processes are parameterized so that they give realistic values of, e.g., temperature and wind on the levels close to the surface. The GEWEX (Global Energy and Water Exchange Project) Atmospheric Boundary Layer Study (GABLS) has the overall objective of improving the understanding and the representation of atmospheric boundary layers in climate models. The study has pointed out that there is a need for a better understanding and representation of stable atmospheric boundary layers (SBL). Therefore, four test cases have been designed to highlight the performance of, and differences between, a number of climate models and NWP models in the SBL. In the experiments, most global NWP and climate models have been shown to be too diffusive in stable conditions and thus give too-weak temperature gradients, too-strong momentum mixing and too-weak ageostrophic Ekman flow. The reason for this is that the models need enhanced diffusion to create enough friction for the large-scale weather systems, which otherwise would be too fast and too active. In the GABLS test cases, turbulence schemes that use Turbulent Kinetic Energy (TKE) have been shown to be more skilful than schemes that only use stability and gradients. TKE as a prognostic variable allows for advection both vertically and horizontally and gives a "memory" from previous time steps. Therefore, e.g., the ECMWF-GABLS workshop in 2011 recommended a move for global NWP models towards a TKE scheme. Here a TKE diffusion scheme (based on the implementation in the ARPEGE model by Meteo France) is compared to ECMWF's operational first-order IFS scheme and to a less diffusive version of it, using a single-column version of ECMWF's IFS model. Results from the test cases GABLS 1, 3 and 4, together with the Diurnal land/atmosphere coupling experiment (DICE), are presented.
A Three-Dimensional Variational Assimilation Scheme for Satellite AOD
NASA Astrophysics Data System (ADS)
Liang, Y.; Zang, Z.; You, W.
2018-04-01
A three-dimensional variational data assimilation scheme is designed for satellite AOD based on the IMPROVE (Interagency Monitoring of Protected Visual Environments) equation. The observation operator that simulates AOD from the control variables is established by the IMPROVE equation. All 16 control variables in the assimilation scheme are the mass concentrations of aerosol species from the Model for Simulating Aerosol Interactions and Chemistry scheme, so as to take advantage of that scheme's comprehensive analyses of species concentrations and size distributions while remaining computationally efficient. The assimilation scheme also saves computational resources because the IMPROVE equation is quadratic. A single-point observation experiment shows that the information from the single-point AOD is effectively spread horizontally and vertically.
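For readers unfamiliar with the machinery, here is a generic 3D-Var sketch: the cost function balances departures from a background species vector against a single AOD observation, with an invented quadratic operator H standing in for the IMPROVE equation and toy covariances; none of this is the authors' code.

    import numpy as np
    from scipy.optimize import minimize

    n = 4                                  # a 4-species toy control vector
    xb = np.array([1.0, 0.5, 0.2, 0.1])    # background species masses (invented)
    B_inv = np.eye(n)                      # inverse background error covariance
    R_inv = np.array([[4.0]])              # inverse observation error covariance
    y = np.array([1.3])                    # one observed AOD (invented)

    def H(x):                              # quadratic stand-in for IMPROVE
        return np.array([np.sum(0.5 * x + 0.1 * x ** 2)])

    def J(x):                              # 3D-Var cost function
        db = x - xb
        dy = H(x) - y
        return 0.5 * db @ B_inv @ db + 0.5 * dy @ R_inv @ dy

    xa = minimize(J, xb).x                 # the analysis
    print(xa, H(xa))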
Designing a better weather display
NASA Astrophysics Data System (ADS)
Ware, Colin; Plumlee, Matthew
2012-01-01
The variables most commonly displayed on weather maps are atmospheric pressure, wind speed and direction, and surface temperature. But they are usually shown separately, not together on a single map. As a design exercise, we set the goal of finding out if it is possible to show all three variables (two 2D scalar fields and a 2D vector field) simultaneously such that values can be accurately read using keys for all variables, a reasonable level of detail is shown, and important meteorological features stand out clearly. Our solution involves employing three perceptual "channels", a color channel, a texture channel, and a motion channel in order to perceptually separate the variables and make them independently readable. We conducted an experiment to evaluate our new design both against a conventional solution, and against a glyph-based solution. The evaluation tested the abilities of novice subjects both to read values using a key, and to see meteorological patterns in the data. Our new scheme was superior especially in the representation of wind patterns using the motion channel, and it also performed well enough in the representation of pressure using the texture channel to suggest it as a viable design alternative.
Comparisons of Young Children's Private Speech Profiles: Analogical Versus Nonanalogical Reasoners.
ERIC Educational Resources Information Center
Manning, Brenda H.; White, C. Stephen
The primary intention of this study was to compare private speech profiles of young children classified as analogical reasoners (AR) with young children classified as nonanalogical reasoners (NAR). The secondary purpose was to investigate Berk's (1986) research methodology and categorical scheme for the collection and coding of private speech…
Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.
2008-10-01
A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
Automatic control of NASA Langley's 0.3-meter cryogenic test facility
NASA Technical Reports Server (NTRS)
Thibodeaux, J. J.; Balakrishna, S.
1980-01-01
Experience during the past 6 years of operation of the 0.3-meter transonic cryogenic tunnel at the NASA Langley Research Center has shown that there are problems associated with efficient operation and control of cryogenic tunnels using manual control schemes. This is due to the high degree of process crosscoupling between the independent control variables (temperature, pressure, and fan drive speed) and the desired test condition (Mach number and Reynolds number). One problem has been the inability to maintain long-term accurate control of the test parameters. Additionally, the time required to change from one test condition to another has proven to be excessively long and much less efficient than desirable in terms of liquid nitrogen and electrical power usage. For these reasons, studies have been undertaken to: (1) develop and validate a mathematical model of the 0.3-meter cryogenic tunnel process, (2) utilize this model in a hybrid computer simulation to design temperature and pressure feedback control laws, and (3) evaluate the adequacy of these control schemes by analysis of closed-loop experimental data. This paper will present the results of these studies.
NASA Astrophysics Data System (ADS)
Gao, Tao; Wulan, Wulan; Yu, Xiao; Yang, Zelong; Gao, Jing; Hua, Weiqi; Yang, Peng; Si, Yaobing
2018-05-01
Spring precipitation is the predominant factor controlling meteorological drought in Inner Mongolia (IM), China. This study used the anomaly percentage of spring precipitation (PAP) as a drought index to measure spring drought. A scheme for forecasting seasonal drought was designed based on evidence of spring drought occurrence and on speculative reasoning methods from artificial intelligence. Forecast signals with sufficient lead time for predictions of spring drought were extracted from eight crucial areas of the oceans and the 500-hPa geopotential height field. Using standardized values, these signals were synthesized into three pieces of spring drought evidence (SDE) depending on their primary effects on three major atmospheric circulation components of spring precipitation in IM: the western Pacific subtropical high, the North Polar vortex, and the East Asian trough. Thresholds for the SDE were determined following numerical analyses of the influential factors. Furthermore, five logical reasoning rules for distinguishing the occurrence of SDE were designed after examining all possible combined cases. The degree of confidence in the rules was determined based on estimations of their prior probabilities. Then, an optimized logical reasoning scheme was identified for judging the possibility of spring drought. The scheme was successful in hindcast predictions of 11 of the 16 spring droughts (accuracy: 68.8%) that occurred during 1960-2009. Moreover, the accuracy ratio for the same period was 82.0% for predicting drought (PAP ≤ -20%) or no drought (PAP > -20%). Predictions for the recent 6-year period (2010-2015) demonstrated successful outcomes.
Gas Evolution Dynamics in Godunov-Type Schemes and Analysis of Numerical Shock Instability
NASA Technical Reports Server (NTRS)
Xu, Kun
1999-01-01
In this paper we are going to study the gas evolution dynamics of the exact and approximate Riemann solvers, e.g., the Flux Vector Splitting (FVS) and the Flux Difference Splitting (FDS) schemes. Since the FVS scheme and the Kinetic Flux Vector Splitting (KFVS) scheme have the same physical mechanism and similar flux function, based on the analysis of the discretized KFVS scheme the weakness and advantage of the FVS scheme are closely observed. The subtle dissipative mechanism of the Godunov method in the 2D case is also analyzed, and the physical reason for shock instability, i.e., carbuncle phenomena and odd-even decoupling, is presented.
NASA Astrophysics Data System (ADS)
Konstantinidou, Aikaterini; Macagno, Fabrizio
2013-05-01
The purpose of this paper is to investigate the argumentative structure of students' arguments, using argumentation schemes as an instrument for reconstructing the missing premises underlying their reasoning. Building on the recent literature in science education, in order for an explanation to be persuasive and achieve a conceptual change, it needs to proceed from the interlocutor's background knowledge to the analysis of the unknown or wrongly interpreted phenomena. Argumentation schemes represent the abstract forms of the most used and common forms of human reasoning, combining logical principles with semantic concepts. By identifying the argument structure it is possible to retrieve the missing premises and the crucial concepts and definitions on which the conclusion is based. This method of analysis will be shown to provide the teacher with an instrument to improve his or her explanations by taking into consideration the students' intuitions and deep background knowledge on a specific issue. In this fashion the teacher can advance counterarguments or propose new perspectives on the subject matter in order to persuade the students to accept new scientific concepts.
Middle School Children's Mathematical Reasoning and Proving Schemes
ERIC Educational Resources Information Center
Liu, Yating; Manouchehri, Azita
2013-01-01
In this work we explored proof schemes used by 41 middle school students when confronted with four mathematical propositions that demanded verification of accuracy of statements. The students' perception of mathematically complete vs. convincing arguments in different mathematics branches was also elicited. Lastly, we considered whether the…
NASA Astrophysics Data System (ADS)
Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul
An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.
Eynon, Michael John; O'Donnell, Christopher; Williams, Lynn
2016-07-01
Nine adults who had completed an exercise referral scheme participated in a semi-structured interview to uncover the key psychological factors associated with adherence to the scheme. Through thematic analysis, an exercise identity emerged as a major factor associated with adherence to the scheme; it was formed of a number of underpinning constructs, including changes in self-esteem, changes in self-efficacy and changes in self-regulatory strategies. An additional theme, transitions in motivation to exercise, was also identified, showing that participants' motivation shifted from extrinsic to intrinsic reasons to exercise during the scheme.
On Space-Time Inversion Invariance and its Relation to Non-Dissipatedness of a CESE Core Scheme
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2006-01-01
The core motivating ideas of the space-time CESE method are clearly presented and critically analyzed. It is explained why these ideas result in all the simplifying and enabling features of the CESE method. A thorough discussion of the a scheme, a two-level non-dissipative CESE solver of a simple advection equation with two independent mesh variables and two equations per mesh point, is also presented. It is shown that the scheme possesses some rather intriguing properties, such as: (i) its two independent mesh variables separately satisfy two decoupled three-level leapfrog schemes and (ii) it shares with the leapfrog scheme the same amplification factors, even though the a scheme and the leapfrog scheme have completely different origins and structures. It is also explained why the leapfrog scheme is not as robust as the a scheme. The amplification factors/matrices of several non-dissipative schemes are carefully studied and the key properties that contribute to their non-dissipatedness are clearly spelled out. Finally, we define and establish space-time inversion (STI) invariance for several non-dissipative schemes and show that their non-dissipatedness is a result of their STI invariance.
Secondary School Students' Reasoning about Evolution
ERIC Educational Resources Information Center
To, Cheryl; Tenenbaum, Harriet R.; Hogh, Henriette
2017-01-01
This study examined age differences in young people's understanding of evolution theory in secondary school. A second aim of this study was to propose a new coding scheme that more accurately described students' conceptual understanding about evolutionary theory. We argue that coding schemes adopted in previous research may have overestimated…
Investigating the population structure and genetic differentiation of livestock guard dog breeds.
Bigi, D; Marelli, S P; Liotta, L; Frattini, S; Talenti, A; Pagnacco, G; Polli, M; Crepaldi, P
2018-01-14
Livestock guarding dogs are a valuable adjunct to the pastoral community. Having been traditionally selected for their working ability, they fulfil their function with minimal interaction or command from their human owners. In this study, the population structure and the genetic differentiation of three Italian livestock guardian breeds (Sila's Dog, Maremma and Abruzzese Sheepdog and Mannara's Dog) and three functionally and physically similar breeds (Cane Corso, Central Asian Shepherd Dog and Caucasian Shepherd Dog), totalling 179 dogs unrelated at the second generation, were investigated with 18 autosomal microsatellite markers. Values for the number of alleles per locus, observed and expected heterozygosity, Hardy-Weinberg equilibrium, F-statistics, Nei's and Reynolds' genetic distances, clustering and sub-population formation abilities and individual genetic structures were calculated. Our results show clear breed differentiation, whereby all the considered breeds show reasonable genetic variability despite small population sizes and variable selection schemes. These results provide meaningful data to stakeholders in specific breed and environmental conservation programmes.
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1979-01-01
A three-dimensional finite difference scheme for the solution of the shallow water momentum equations which accounts for the conservation of potential enstrophy in the flow of a homogeneous incompressible shallow atmosphere over steep topography as well as for total energy conservation is presented. The scheme is derived to be consistent with a reasonable scheme for potential vorticity advection in a long-term integration for a general flow with divergent mass flux. Numerical comparisons of the characteristics of the present potential enstrophy-conserving scheme with those of a scheme that conserves potential enstrophy only for purely horizontal nondivergent flow are presented which demonstrate the reduction of computational noise in the wind field with the enstrophy-conserving scheme and its convergence even in relatively coarse grids.
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
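A minimal numerical sketch of the contrast between the two ideas, with invented commands and limits: element-wise clipping changes the direction of the controller output vector, while the MAW-style scalar preserves it (assuming nonzero commands). The CAW tracking loop itself is not reproduced here.

    import numpy as np

    u_cmd = np.array([3.0, 1.0, -0.5, 2.0])  # controller outputs (hypothetical)
    u_max = np.array([2.0, 2.0, 2.0, 2.0])   # symmetric actuator limits (hypothetical)

    clipped = np.clip(u_cmd, -u_max, u_max)           # element-wise limiting
    scale = min(1.0, np.min(u_max / np.abs(u_cmd)))   # MAW-style scalar modifier
    scaled = scale * u_cmd                            # magnitude reduced, direction kept

    print(clipped)  # [ 2.     1.    -0.5    2.   ]  direction altered
    print(scaled)   # [ 2.     0.667 -0.333  1.333]  direction preserved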
Allowing for Horizontally Heterogeneous Clouds and Generalized Overlap in an Atmospheric GCM
NASA Technical Reports Server (NTRS)
Lee, D.; Oreopoulos, L.; Suarez, M.
2011-01-01
While fully accounting for 3D effects in Global Climate Models (GCMs) does not appear realistic at the present time for a variety of reasons, such as computational cost and the unavailability of 3D cloud structure in the models, the incorporation into radiation schemes of subgrid cloud variability described by one-point statistics is now considered feasible and is being actively pursued. This development gained momentum once it was demonstrated that the CPU-intensive, spectrally explicit Independent Column Approximation (ICA) can be substituted by stochastic Monte Carlo ICA (McICA) calculations, in which spectral integration is accomplished in a manner that produces relatively benign random noise. The McICA approach has been implemented in Goddard's GEOS-5 atmospheric GCM as part of the implementation of the RRTMG radiation package. GEOS-5 with McICA and RRTMG can handle horizontally variable clouds, which can be set via a cloud generator to overlap anywhere within the full spectrum between maximum and random, both in terms of cloud fraction and layer condensate distributions. In our presentation we will show radiative and other impacts of the combined horizontal and vertical cloud variability on multi-year simulations of an otherwise untuned GEOS-5 with fixed SSTs. Introducing cloud horizontal heterogeneity without changing the mean amounts of condensate reduces reflected solar and increases thermal radiation to space, but disproportionate changes may increase the radiative imbalance at TOA. The net radiation at TOA can be modulated by allowing the parameters of the generalized overlap and heterogeneity scheme to vary, a dependence whose behavior we will discuss. The sensitivity of the cloud radiative forcing to the parameters of cloud horizontal heterogeneity and comparisons with CERES-derived forcing will be shown.
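A highly simplified sketch of the subcolumn sampling at the heart of McICA, with random overlap only (the generalized maximum-random overlap of the actual generator is omitted) and invented cloud fractions: each spectral interval (g-point) sees one randomly generated cloudy subcolumn instead of a full ICA over all subcolumns, trading deterministic cost for benign random noise.

    import numpy as np

    rng = np.random.default_rng(1)
    n_lay, n_gpt = 10, 16
    cloud_frac = np.linspace(0.0, 0.6, n_lay)  # layer cloud fractions (made up)

    # One subcolumn per g-point: a layer is cloudy where a uniform draw
    # falls below that layer's cloud fraction (random overlap).
    subcols = rng.random((n_gpt, n_lay)) < cloud_frac
    print(subcols.mean(axis=0))  # sampled fractions approach cloud_frac as n_gpt grows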
NASA Technical Reports Server (NTRS)
Rizk, Magdi H.
1988-01-01
A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
An approach to combining heuristic and qualitative reasoning in an expert system
NASA Technical Reports Server (NTRS)
Jiang, Wei-Si; Han, Chia Yung; Tsai, Lian Cheng; Wee, William G.
1988-01-01
An approach to combining the heuristic reasoning from shallow knowledge and the qualitative reasoning from deep knowledge is described. The shallow knowledge is represented in production rules and under the direct control of the inference engine. The deep knowledge is represented in frames, which may be put in a relational DataBase Management System. This approach takes advantage of both reasoning schemes and results in improved efficiency as well as expanded problem solving ability.
Harvesting model uncertainty for the simulation of interannual variability
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2009-08-01
An innovative modeling strategy is introduced to account for uncertainty in the convective parameterization (CP) scheme of a coupled ocean-atmosphere model. The methodology involves calling the CP scheme several times at every given time step of the model integration to pick the most probable convective state. Each call of the CP scheme is unique in that one of its critical parameter values (which is unobserved but required by the scheme) is chosen randomly over a given range. This methodology is tested with the relaxed Arakawa-Schubert CP scheme in the Center for Ocean-Land-Atmosphere Studies (COLA) coupled general circulation model (CGCM). Relative to the control COLA CGCM, this methodology shows improvement in the El Niño-Southern Oscillation simulation and the Indian summer monsoon precipitation variability.
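A toy sketch of the strategy described above: the parameterization is called several times per time step, each call drawing the unobserved critical parameter at random, and one convective state is kept. The stand-in scheme, the parameter range, and the "closest to the median" selection rule are all assumptions for illustration; the abstract does not specify the actual selection criterion.

```python
import numpy as np

rng = np.random.default_rng(42)

def convection_scheme(state, critical_param):
    """Hypothetical stand-in for a CP scheme: returns a convective tendency."""
    return -critical_param * max(state - 1.0, 0.0)

def most_probable_call(state, n_calls=10, param_range=(0.1, 0.9)):
    # draw the unobserved critical parameter randomly for each call ...
    params = rng.uniform(*param_range, size=n_calls)
    tendencies = np.array([convection_scheme(state, p) for p in params])
    # ... and keep the call closest to the sample median as the
    # "most probable convective state" (one plausible selection rule)
    return tendencies[np.argmin(np.abs(tendencies - np.median(tendencies)))]

state = 2.5
for step in range(5):
    state += 0.1 * most_probable_call(state)   # crude time stepping
print(round(state, 4))
```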
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that handles both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, the dependent variable was not missing at random (NMAR) in 2.52% of cases and missing at random (MAR) in 7.14%. First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR in the multiply imputed data, and a multi-factor analysis model was established. Based on 1 000 resampling replications, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. It was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively handle non-response bias and selection bias in the dependent variable of survey data.
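To make the two-stage idea concrete, here is a minimal, heavily simplified sketch: a basic predictive-mean-matching imputation for the MAR values, followed by a Heckman-style two-step correction (probit selection equation plus inverse Mills ratio) standing in for the sample selection model. The data, the selection rule, and the model forms are simulated assumptions, not the study's actual specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 2))                          # influencing factors
y = 2 + x @ np.array([1.5, -0.8]) + rng.normal(size=n)   # "expenditure"

# Stage 1: predictive mean matching (PMM) for values missing at random.
mar = rng.random(n) < 0.07                           # ~7% MAR, as in the study
X = sm.add_constant(x)
fit = sm.OLS(y[~mar], X[~mar]).fit()
pred = X @ fit.params
y_imp = y.copy()
for i in np.where(mar)[0]:                           # copy y from nearest-mean donor
    donor = np.argmin(np.abs(pred[~mar] - pred[i]))
    y_imp[i] = y[~mar][donor]

# Stage 2: Heckman-style two-step for a not-missing-at-random mechanism:
# response depends on y itself (toy rule), so OLS on respondents is biased.
z = (y_imp - y_imp.mean()) / y_imp.std()
observed = rng.random(n) < norm.cdf(1.0 + 0.8 * z)
probit = sm.Probit(observed.astype(int), X).fit(disp=0)
imr = norm.pdf(probit.fittedvalues) / norm.cdf(probit.fittedvalues)
X2 = np.column_stack([X[observed], imr[observed]])   # add inverse Mills ratio
print(sm.OLS(y_imp[observed], X2).fit().params.round(3))
```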
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation of two finite differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to these criteria than the QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
1997-01-01
An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + aU_x = αU_xx. Accuracy is also verified on the nonlinear problem, U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
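As a small, self-contained companion to the interior-operator analysis mentioned above, the sketch below marches the same convection-diffusion model equation U_t + aU_x = αU_xx with second-order central differences in space and the classical fourth-order Runge-Kutta method in time on a periodic domain; the coefficients, grid, and initial profile are arbitrary choices for illustration.

```python
import numpy as np

# U_t + a U_x = alpha U_xx on a periodic domain: second-order central
# differences in space, classical fourth-order Runge-Kutta in time.
a, alpha = 1.0, 0.02
N, L = 256, 2 * np.pi
dx = L / N
x = np.arange(N) * dx
U = np.exp(-10 * (x - np.pi) ** 2)              # smooth initial profile

def rhs(U):
    Ux = (np.roll(U, -1) - np.roll(U, 1)) / (2 * dx)
    Uxx = (np.roll(U, -1) - 2 * U + np.roll(U, 1)) / dx**2
    return -a * Ux + alpha * Uxx

dt = 0.4 * min(dx / a, dx**2 / (2 * alpha))     # conservative stability bound
for _ in range(500):                            # classical RK4 stage loop
    k1 = rhs(U)
    k2 = rhs(U + 0.5 * dt * k1)
    k3 = rhs(U + 0.5 * dt * k2)
    k4 = rhs(U + dt * k3)
    U = U + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(f"mass = {U.sum() * dx:.6f}")             # periodic problem conserves mass
```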
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
Integration of object-oriented knowledge representation with the CLIPS rule based system
NASA Technical Reports Server (NTRS)
Logie, David S.; Kamil, Hasan
1990-01-01
The paper describes a portion of the work aimed at developing an integrated, knowledge based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way so as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule based system C Language Production Systems (CLIPS), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule based knowledge is used to simulate the reasoning process. Alone, the object based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.
Case studies in configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.
1989-01-01
A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Haixia; Zhang, Jing
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.
Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho
2015-01-01
In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.
Factors Affecting Self-Referral to Counselling Services in the Workplace: A Qualitative Study
ERIC Educational Resources Information Center
Athanasiades, Chrysostomos; Winthrop, Allan; Gough, Brendan
2008-01-01
The benefits of psychological support in the workplace (also known as workplace counselling) are well documented. Most large organisations in the UK have staff counselling schemes. However, it is unclear what, if any, factors affect employee decisions to use such schemes. This study has used a qualitative methodology to explore the reasons that…
The Why, What, and Impact of GPA at Oxford Brookes University
ERIC Educational Resources Information Center
Andrews, Matthew
2016-01-01
This paper examines the introduction at Oxford Brookes University of a Grade Point Average (GPA) scheme alongside the traditional honours degree classification. It considers the reasons for the introduction of GPA, the way in which the scheme was implemented, and offers an insight into the impact of GPA at Brookes. Finally, the paper considers…
Rethinking the "Social" in Educational Research: On What Underlies Scheme-Content Dualism
ERIC Educational Resources Information Center
Misawa, Koichiro
2016-01-01
Approaches to studying the "social" are prominent in educational research. Yet, because of their insufficient acknowledgement of the social nature of human beings and the reality we experience, such attempts often commit themselves to the dualism of scheme and content, which in turn is a by-product of the underlying dualism of reason and…
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
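Below is a minimal sketch of one plausible online recruitment rule of the kind discussed above: the number of pressurized muscles follows the commanded force, with a hysteresis band to suppress switching chatter. The per-muscle force limit, the band width, and the demand sequence are invented; the paper's actual algorithms are more elaborate.

```python
# Minimal sketch (hypothetical thresholds): choose how many identical FAMs
# to pressurize from the commanded force, with a hysteresis band so small
# oscillations in the demand do not cause rapid recruitment "chatter".
F_MAX_PER_FAM = 100.0      # assumed force capacity of one muscle [N]
HYSTERESIS = 10.0          # demand must leave the band before switching [N]

def recruit(force_demand, current_level, n_fams=4):
    up = current_level * F_MAX_PER_FAM + HYSTERESIS
    down = (current_level - 1) * F_MAX_PER_FAM - HYSTERESIS
    if force_demand > up and current_level < n_fams:
        return current_level + 1
    if force_demand < down and current_level > 1:
        return current_level - 1
    return current_level

level = 1
for demand in [50, 95, 108, 104, 112, 230, 190, 95, 80]:
    level = recruit(demand, level)
    print(f"demand {demand:5.1f} N -> {level} FAM(s) recruited")
```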
NASA Astrophysics Data System (ADS)
Glazyrina, O. V.; Pavlova, M. F.
2016-11-01
We consider a parabolic inequality whose space operator is monotone with respect to the gradient and depends on an integral characteristic of the solution with respect to the space variables. We construct a two-layer difference scheme for this problem using the penalty method, semidiscretization in the time variable, and the finite element method (FEM) in the space variables. We prove convergence of the constructed method.
NASA Astrophysics Data System (ADS)
Li, Changgang; Sun, Yanli; Yu, Yawei
2017-05-01
Under-frequency load shedding (UFLS) is an important measure to counter the frequency drop caused by load-generation imbalance. In existing schemes, loads are shed by relays in a discontinuous way, which is the major reason for under-shedding and over-shedding problems. With the application of power electronics technology, some loads can be controlled continuously, making it possible to improve UFLS with such loads. This paper proposes a UFLS scheme that sheds load continuously. The load shedding amount is proportional to the frequency deviation before the frequency reaches its minimum during the transient process. The feasibility of the proposed scheme is analysed with an analytical system frequency response model. The impacts of governor droop, system inertia, and frequency threshold on the performance of the proposed UFLS scheme are discussed. Cases are demonstrated to validate the proposed scheme by comparing it with conventional UFLS schemes.
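The sketch below illustrates the core idea on a toy single-bus frequency model: once the frequency crosses a threshold, load is shed continuously in proportion to the frequency deviation. The swing-equation parameters, threshold, and gain are illustrative only and are not taken from the paper.

```python
# Toy single-bus frequency response (per unit): 2H * df/dt = P_gen - P_load,
# with a proportional continuous UFLS: once f drops below the threshold,
# shed load in proportion to the frequency deviation (values illustrative).
H, D = 4.0, 1.0                 # inertia constant [s], load damping
f_thresh, k_shed = 49.5, 0.08   # shedding threshold [Hz], shed gain [pu/Hz]
f, f0 = 50.0, 50.0
p_gen, p_load = 0.8, 1.0        # 0.2 pu generation deficit at t = 0
shed = 0.0
dt = 0.01
for step in range(3000):        # 30 s of simulated transient
    if f < f_thresh:
        shed = max(shed, k_shed * (f_thresh - f))   # continuous, proportional
    imbalance = p_gen - (p_load - shed) - D * (f - f0) / f0
    f += dt * f0 * imbalance / (2 * H)
print(f"settled frequency = {f:.3f} Hz, load shed = {shed:.4f} pu")
```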
Companies' opinions and acceptance of global food safety initiative benchmarks after implementation.
Crandall, Phil; Van Loo, Ellen J; O'Bryan, Corliss A; Mauromoustakos, Andy; Yiannas, Frank; Dyenson, Natalie; Berdnik, Irina
2012-09-01
International attention has been focused on minimizing costs that may unnecessarily raise food prices. One important aspect to consider is the redundant and overlapping costs of food safety audits. The Global Food Safety Initiative (GFSI) has devised benchmarked schemes based on existing international food safety standards for use as a unifying standard accepted by many retailers. The present study was conducted to evaluate the impact of the decision made by Walmart Stores (Bentonville, AR) to require their suppliers to become GFSI compliant. An online survey of 174 retail suppliers was conducted to assess food suppliers' opinions of this requirement and the benefits suppliers realized when they transitioned from their previous food safety systems. The most common reason for becoming GFSI compliant was to meet customers' requirements; thus, supplier implementation of the GFSI standards was not entirely voluntary. Other reasons given for compliance were enhancing food safety and remaining competitive. About 54 % of food processing plants using GFSI benchmarked schemes followed the guidelines of Safe Quality Food 2000 and 37 % followed those of the British Retail Consortium. At the supplier level, 58 % followed Safe Quality Food 2000 and 31 % followed the British Retail Consortium. Respondents reported that the certification process took about 10 months. The most common reason for selecting a certain GFSI benchmarked scheme was because it was widely accepted by customers (retailers). Four other common reasons were (i) the standard has a good reputation in the industry, (ii) the standard was recommended by others, (iii) the standard is most often used in the industry, and (iv) the standard was required by one of their customers. Most suppliers agreed that increased safety of their products was required to comply with GFSI benchmarked schemes. They also agreed that the GFSI required a more carefully documented food safety management system, which often required improved company food safety practices and increased employee training. Adoption of a GFSI benchmarked scheme resulted in fewer audits, i.e., one less per year. An educational opportunity exists to acquaint retailers and suppliers worldwide with the benefits of having an internationally recognized certification program such as that recognized by the GFSI.
ERIC Educational Resources Information Center
Metz, Dale Evan; And Others
1992-01-01
A preliminary scheme for estimating the speech intelligibility of hearing-impaired speakers from acoustic parameters, using a computerized artificial neural network to process mathematically the acoustic input variables, is outlined. Tests with 60 hearing-impaired speakers found the scheme to be highly accurate in identifying speakers separated by…
Continuous-variable quantum homomorphic signature
NASA Astrophysics Data System (ADS)
Li, Ke; Shang, Tao; Liu, Jian-wei
2017-10-01
Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. According to spectrum characteristic, quantum information can be classified into two categories, namely discrete variables and continuous variables. Continuous-variable quantum protocols have gained much attention for their ability to transmit more information with lower cost. To verify the identities of different data sources in a quantum network, we propose a continuous-variable quantum homomorphic signature scheme. It is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. Security analysis shows the proposed scheme is secure against replay, forgery and repudiation. Even under nonideal conditions, it supports effective verification within a certain verification threshold.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shue, Craig A; Gupta, Prof. Minaxi
Users are being tracked on the Internet more than ever before as Web sites and search engines gather pieces of information sufficient to identify and study their behavior. While many existing schemes provide strong anonymity, they are inappropriate when high bandwidth and low latency are required. In this work, we explore an anonymity scheme for end hosts whose performance makes it possible to have it always on. The scheme leverages the natural grouping of hosts in the same subnet and the universally available broadcast primitive to provide anonymity at line speeds. Our scheme is strongly resistant against all active or passive adversaries as long as they are outside the subnet. Even within the subnet, our scheme provides reasonable resistance against adversaries, providing anonymity that is suitable for common Internet applications.
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, our hierarchical modulation scheme could be used to upgrade digital communication systems and extend their functionality in the future.
NASA Astrophysics Data System (ADS)
Madala, Srikanth; Satyanarayana, A. N. V.; Srinivas, C. V.; Tyagi, Bhishma
2016-05-01
In the present study, the advanced research WRF (ARW) model is employed to simulate convective thunderstorm episodes over the Kharagpur (22°30'N, 87°20'E) region of Gangetic West Bengal, India. High-resolution simulations are conducted using 1 × 1 degree NCEP final analysis meteorological fields for the initial and boundary conditions of the events. The performance of two non-local closures [Yonsei University (YSU), Asymmetric Convective Model version 2 (ACM2)] and two local turbulence kinetic energy closures [Mellor-Yamada-Janjic (MYJ), Bougeault-Lacarrere (BouLac)] is evaluated in simulating planetary boundary layer (PBL) parameters and the thermodynamic structure of the atmosphere. The model-simulated parameters are validated with available in situ meteorological observations obtained from a micro-meteorological tower as well as high-resolution DigiCORA radiosonde ascents during the STORM-2007 field experiment at the study location, and with Doppler Weather Radar (DWR) imageries. It has been found that the PBL structure simulated with the TKE closures MYJ and BouLac is in better agreement with observations than that from the non-local closures. The model simulations with these schemes also captured the reflectivity, surface pressure patterns such as wake-low, meso-high and pre-squall low, and the convective updrafts and downdrafts reasonably well. Qualitative and quantitative comparisons reveal that the MYJ, followed by the BouLac, scheme better simulated various features of the thunderstorm events over the Kharagpur region. The better performance of MYJ followed by BouLac is evident in the smaller mean bias, mean absolute error and root mean square error and the good correlation coefficient for various surface meteorological variables as well as the thermodynamic structure of the atmosphere relative to the other PBL schemes. The better performance of the TKE closures may be attributed to their higher mixing efficiency, larger convective energy and better simulation of humidity promoting moist convection relative to the non-local schemes.
Zonn, I S
1995-01-01
During the second half of the 20th century Kalmykia has undergone severe desertification. Under Soviet rule, rangelands were increasingly devoted to animal production, and pastures were converted to cropland in a campaign to increase crops. Pastures were grazed two to three times their sustainable production, saiga populations and habitat greatly decreased, more than 17 million ha were subjected to wind erosion, 380,000 ha were transformed into moving sands, and 106,000 ha were ruined by secondary salinization and waterlogging. By the 1990s almost 80% of the Republic had undergone desertification, and 13% had been transformed into a true desert. In 1986 the General Scheme of Desertification Control was formulated. The scheme called for rotating pastures, reclaiming blown sand using silviculture, tilling overgrazed pastures and sowing fodder plants, and developing water supplies for pastures. In its early years the scheme has been successful. But the management of restored pastures usually reverts to the same farms responsible for the poor conditions, and there is great apprehension that degradation could reoccur. This case study concludes that the general cattle and agriculture development in Kalmykia is unviable for ecological and economic reasons, that Kalmykia should implement an adaptive policy oriented toward conservation and accommodating the interrelation and variability of land resources, that the desertification problem can be solved only by changing agrarian policy as a whole, and that a desertification control program must become an integral part of economic and social development of the Republic.
Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha
2018-05-01
General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
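To show the shape of such a recovery procedure, here is a minimal one-dimensional Newton-Raphson recovery for special-relativistic hydrodynamics with an ideal-gas EOS and no magnetic field; full GRMHD recovery with tabulated EOSs follows the same pattern with more unknowns. The pressure residual and the secant-style derivative are standard textbook choices, not the specific schemes compared in the paper.

```python
import numpy as np

# Conserved variables: D = rho*W, S = rho*h*W^2*v, tau = rho*h*W^2 - p - D.
# Recovery iterates on the pressure until the EOS residual vanishes.
GAMMA = 4.0 / 3.0

def to_conserved(rho, v, p):
    W = 1.0 / np.sqrt(1.0 - v * v)
    h = 1.0 + GAMMA / (GAMMA - 1.0) * p / rho      # ideal-gas specific enthalpy
    D = rho * W
    S = rho * h * W * W * v
    tau = rho * h * W * W - p - D
    return D, S, tau

def residual(p, D, S, tau):
    Q = tau + D + p                                 # equals rho*h*W^2
    v = S / Q                                       # |v| < 1 is guaranteed
    W = 1.0 / np.sqrt(1.0 - v * v)
    rho = D / W
    h = Q / (rho * W * W)
    return (GAMMA - 1.0) * rho * (h - 1.0) / GAMMA - p, rho, v

def recover(D, S, tau, p=1.0, tol=1e-12):
    for _ in range(50):                             # Newton-Raphson on f(p) = 0
        f, rho, v = residual(p, D, S, tau)
        dp = 1e-8 * max(abs(p), 1.0)                # secant-style derivative
        f2, _, _ = residual(p + dp, D, S, tau)
        p_new = p - f * dp / (f2 - f)
        if abs(p_new - p) < tol * max(p, 1e-30):
            return rho, v, p_new
        p = p_new
    raise RuntimeError("recovery failed")           # fall back to a robust scheme

print(np.round(recover(*to_conserved(1.0, 0.5, 0.1)), 10))
```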
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Chuanghong
2018-02-01
As a sustainable form of ecological construction, green building is now widely advocated and increasingly adopted. Evaluating and selecting green building design schemes in the survey and design phase of a project, against a scientific and reasonable evaluation index system, can effectively improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed, taking into account the evaluation contents related to such schemes. Experts experienced in construction scheme optimization were organized to score the indices and determine the weight of each evaluation index through the AHP method. The relational degree between each candidate scheme and the ideal scheme was calculated using a multilevel grey relational analysis model, and the optimal scheme was thereby determined. The feasibility and practicability of the evaluation method are verified through worked examples.
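A minimal sketch of the weighted grey relational ranking described above: AHP-style weights are applied to the grey relational coefficients between each candidate scheme and the ideal scheme. All index scores and weights below are made-up numbers for illustration.

```python
import numpy as np

scores = np.array([[0.80, 0.60, 0.90, 0.70],    # scheme A, 4 evaluation indices
                   [0.70, 0.85, 0.60, 0.80],    # scheme B
                   [0.90, 0.70, 0.75, 0.65]])   # scheme C
weights = np.array([0.4, 0.3, 0.2, 0.1])        # AHP-derived index weights
ideal = scores.max(axis=0)                      # ideal reference scheme

delta = np.abs(scores - ideal)                  # deviation sequences
rho = 0.5                                       # distinguishing coefficient
# grey relational coefficient: (d_min + rho*d_max) / (d_ij + rho*d_max)
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grey_degree = xi @ weights                      # weighted relational degree
best = int(np.argmax(grey_degree))
print(grey_degree.round(3), "-> best scheme:", "ABC"[best])
```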
An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows
NASA Astrophysics Data System (ADS)
Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard
2018-06-01
In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative testcases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.
NASA Astrophysics Data System (ADS)
Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui
2013-04-01
A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme adaptively moves a smaller number of meshes in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertices' positions, a geometrical conservative interpolation that remaps the flow variables by summing the total mass over the old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for a new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme preserves still water equilibrium and positivity of water depth within both the mesh movement and PDE discretization steps; (ii) it improves the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it solves the shallow water equations with relatively higher accuracy and spatial resolution at a lower computational cost.
Developing Systems of Notation as a Trace of Reasoning
ERIC Educational Resources Information Center
Tillema, Erik; Hackenberg, Amy
2011-01-01
In this paper, we engage in a thought experiment about how students might notate their reasoning for composing fractions multiplicatively (taking a fraction of a fraction and determining its size in relation to the whole). In the thought experiment we differentiate between two levels of a fraction composition scheme, which have been identified in…
Fred Applegate's Money-Making Scheme
ERIC Educational Resources Information Center
Leitze, Annette Ricks; Soots, Kristen L.
2015-01-01
Teachers across all grade levels agree that problem solving and reasoning are areas of weakness in students. Assessments among U.S. students indicate that these weaknesses persist (NCTM 2014) in spite of repeated calls that date back more than thirty years for increased problem solving, reasoning, and sense making in our schools. The NCTM is…
Proof in Algebra: Reasoning beyond Examples
ERIC Educational Resources Information Center
Otten, Samuel; Herbel-Eisenmann, Beth A.; Males, Lorraine M.
2010-01-01
The purpose of this article is to provide an image of what proof could look like in beginning algebra, a course that nearly every secondary school student encounters. The authors present an actual classroom vignette in which a rich opportunity for student reasoning arose. After analyzing the proof schemes at play, the authors provide a…
Multiswitching compound antisynchronization of four chaotic systems
NASA Astrophysics Data System (ADS)
Khan, Ayub; Khattar, Dinesh; Prajapati, Nitish
2017-12-01
Based on a three-drive, one-response system model, the authors investigate a novel synchronization scheme for a class of chaotic systems. The new scheme, multiswitching compound antisynchronization (MSCoAS), is a notable extension of earlier multiswitching schemes concerning only a one-drive, one-response system model. The concept of multiswitching synchronization is extended to the compound synchronization scheme such that the state variables of the three drive systems antisynchronize with different state variables of the response system simultaneously. The study involving multiswitching of three drive systems and one response system is the first of its kind. Various switched modified function projective antisynchronization schemes are obtained as special cases of MSCoAS for a suitable choice of scaling factors. Using suitable controllers and Lyapunov stability theory, a sufficient condition is obtained to achieve MSCoAS between four chaotic systems, and the corresponding theoretical proof is given. Numerical simulations are performed using the Lorenz system in MATLAB to demonstrate the validity of the presented method.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information-theoretic security with practical complexity is of great interest for the postprocessing procedure of continuous-variable quantum key distribution. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
Self-referenced continuous-variable measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Wang, Yijun; Wang, Xudong; Li, Jiawei; Huang, Duan; Zhang, Ling; Guo, Ying
2018-05-01
We propose a scheme to remove the demand of transmitting a high-brightness local oscillator (LO) in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, which we call self-referenced (SR) CV-MDI QKD. We show that our scheme is immune to side-channel attacks, such as calibration attacks, wavelength attacks and LO fluctuation attacks, which all exploit the security loopholes introduced by transmitting the LO. Besides, the proposed scheme removes the need for a complex multiplexer and demultiplexer, which can greatly simplify the QKD processes and improve the transmission efficiency. The numerical simulations under collective attacks show that all the improvements brought about by our scheme come only at the expense of a slight shortening of the transmission distance. This scheme shows an available method to mend the security loopholes incurred by transmitting the LO in CV-MDI QKD.
Diffusion of Zonal Variables Using Node-Centered Diffusion Solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, T B
2007-08-06
Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient can be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.
NASA Astrophysics Data System (ADS)
Chertock, Alina; Cui, Shumo; Kurganov, Alexander; Özcan, Şeyma Nur; Tadmor, Eitan
2018-04-01
We develop a second-order well-balanced central-upwind scheme for the compressible Euler equations with a gravitational source term. Here, we advocate a new paradigm based on a purely conservative reformulation of the equations using global fluxes. The proposed scheme is capable of exactly preserving steady-state solutions expressed in terms of a nonlocal equilibrium variable. A crucial step in the construction of the second-order scheme is a well-balanced piecewise linear reconstruction of equilibrium variables combined with a well-balanced central-upwind evolution in time, which is adapted to reduce the amount of numerical viscosity when the flow is at (or near) a steady-state regime. We show the performance of our newly developed central-upwind scheme and demonstrate the importance of a perfect balance between the fluxes and gravitational forces in a series of one- and two-dimensional examples.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
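The contrast between the two designs can be sketched in a few lines: the fixed variant draws one parameter value per ensemble member for the whole integration, while the stochastically varying variant lets the parameter evolve as an AR(1) process in time. The scalar "model", parameter statistics, and decorrelation time below are invented stand-ins for the ECMWF convection scheme and the EPPES estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
n_members, n_steps, dt = 20, 500, 0.01
mu, sigma, tau = 2.0, 0.3, 0.5           # parameter mean/spread/decorrelation

def step(x, c):                          # stand-in "model" with parameter c
    return x + dt * (c * x - x**3)

for mode in ("fixed", "varying"):
    x = np.full(n_members, 0.1)
    c = rng.normal(mu, sigma, n_members)          # one draw per member
    phi = np.exp(-dt / tau)                       # AR(1) persistence per step
    for _ in range(n_steps):
        if mode == "varying":                     # stochastically varying case
            noise = rng.normal(size=n_members)
            c = mu + phi * (c - mu) + sigma * np.sqrt(1 - phi**2) * noise
        x = step(x, c)
    print(f"{mode:8s}: ensemble mean {x.mean():.3f}, spread {x.std():.3f}")
```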
NASA Astrophysics Data System (ADS)
Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin
2018-03-01
An advanced online-coupled meteorology and chemistry model, WRF-CAM5, has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36-, 12-, and 4-km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance, investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations, and examine aerosol-cloud interactions and their impacts on air quality. A comprehensive model evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model can predict major meteorological variables well, such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation, across different resolutions, with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions, such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP), due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolution, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations with an alternative aerosol activation scheme (the Fountoukis and Nenes (FN) series scheme) against the default AG scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. Sensitivity simulations using the FN series scheme also have large impacts on radiation, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on the meteorology and air quality, but to a lesser extent, compared to the differences between the FN series and AG schemes. Relative to the overall differences between the FN series and AG schemes, the impacts of adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events, indicating their importance in modulating regional climate over East Asia.
Konstantakopoulou, E; Harper, R A; Edgar, D F; Lawrenson, J G
2014-05-29
To explore the views of optometrists, general practitioners (GPs) and ophthalmologists regarding the development and organisation of community-based enhanced optometric services. Qualitative study using free-text questionnaires and telephone interviews. A minor eye conditions scheme (MECS) and a glaucoma referral refinement scheme (GRRS) are based on accredited community optometry practices. 41 optometrists, 6 ophthalmologists and 25 GPs. The most common reason given by optometrists for participation in enhanced schemes was to further their professional development; however, as providers of 'for-profit' healthcare, it was clear that participants had also considered the impact of the schemes on their business. Lack of fit with the 'retail' business model of optometry was a frequently given reason for non-participation. The methods used for training and accreditation were generally thought to be appropriate, and participating optometrists welcomed the opportunities for ongoing training. The ophthalmologists involved in the MECS and GRRS expressed very positive views regarding the schemes and widely acknowledged that the new care pathways would reduce unnecessary referrals and shorten patient waiting times. GPs involved in the MECS were also very supportive. They felt that the scheme provided an 'expert' local opinion that could potentially reduce the number of secondary care referrals. The results of this study demonstrated strong stakeholder support for the development of community-based enhanced optometric services. Although optometrists welcomed the opportunity to develop their professional skills and knowledge, enhanced schemes must also provide a sufficient financial incentive so as not to compromise the profitability of their business. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
One-step generation of continuous-variable quadripartite cluster states in a circuit QED system
NASA Astrophysics Data System (ADS)
Yang, Zhi-peng; Li, Zhen; Ma, Sheng-li; Li, Fu-li
2017-07-01
We propose a dissipative scheme for one-step generation of continuous-variable quadripartite cluster states in a circuit QED setup consisting of four superconducting coplanar waveguide resonators and a gap-tunable superconducting flux qubit. With external driving fields to adjust the desired qubit-resonator and resonator-resonator interactions, we show that continuous-variable quadripartite cluster states of the four resonators can be generated with the assistance of energy relaxation of the qubit. By comparison with the previous proposals, the distinct advantage of our scheme is that only one step of quantum operation is needed to realize the quantum state engineering. This makes our scheme simpler and more feasible in experiment. Our result may have useful application for implementing quantum computation in solid-state circuit QED systems.
"Interactive Classification Technology"
NASA Technical Reports Server (NTRS)
deBessonet, Cary
1999-01-01
The investigators are upgrading a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the technologies to be used in automated reasoning and interactive classification systems. The overall goals of the project are: a) the enhancement of the representation language SL to accommodate multiple perspectives and a wider range of meaning; b) the development of a sufficient set of operators to enable the interpreter of SL to handle representations of basic cognitive acts; and c) the development of a default inference scheme to operate over SL notation as it is encoded. As to particular goals, the first-year work plan focused on inferencing and representation issues, including: 1) the development of higher-level cognitive/classification functions and conceptual models for use in inferencing and decision making; 2) the specification of a more detailed scheme of defaults and the enrichment of SL notation to accommodate the scheme; and 3) the adoption of additional perspectives for inferencing.
Driving a car with custom-designed fuzzy inferencing VLSI chips and boards
NASA Technical Reports Server (NTRS)
Pin, Francois G.; Watanabe, Yutaka
1993-01-01
Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions envisionable. The use of these boards and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments are first discussed. The paper then describes how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and human-like reasoning schemes that may include as few as six elemental behaviors embodied in fourteen qualitative rules.
Teacher argumentation in the secondary science classroom: Images of two modes of scientific inquiry
NASA Astrophysics Data System (ADS)
Gray, Ron E.
The purpose of this exploratory study was to examine scientific arguments constructed by secondary science teachers during instruction. The analysis focused on how arguments constructed by teachers differed based on the mode of inquiry underlying the topic. Specifically, how did the structure and content of arguments differ between experimentally and historically based topics? In addition, what factors mediate these differences? Four highly experienced high school science teachers were observed daily during instructional units for both experimental and historical science topics. Data sources include classroom observations, field notes, reflective memos, classroom artifacts, a nature of science survey, and teacher interviews. The arguments were analyzed for structure and content using Toulmin's argumentation pattern and Walton's schemes for presumptive reasoning revealing specific patterns of use between the two modes of inquiry. Interview data was analyzed to determine possible factors mediating these patterns. The results of this study reveal that highly experienced teachers present arguments to their students that, while simple in structure, reveal authentic images of science based on experimental and historical modes of inquiry. Structural analysis of the data revealed a common trend toward a greater amount of scientific data used to evidence knowledge claims in the historical science units. The presumptive reasoning analysis revealed that, while some presumptive reasoning schemes remained stable across the two units (e.g. 'causal inferences' and 'sign' schemes), others revealed different patterns of use including the 'analogy', 'evidence to hypothesis', 'example', and 'expert opinion' schemes. Finally, examination of the interview and survey data revealed five specific factors mediating the arguments constructed by the teachers: view of the nature of science, nature of the topic, teacher personal factors, view of students, and pedagogical decisions. These factors influenced both the structure and use of presumptive reasoning in the arguments. The results have implications for classroom practice, teacher education, and further research.
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking a larger number of time steps than the original one. Following the modified strategy, LTS TVD schemes for Yee's upwind TVD scheme and Yee-Roe-Davis's symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time splitting procedure, and the associated boundary condition treatment suitable for the LTS scheme is also imposed. Numerical experiments on Sod's shock tube problem and inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies for the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
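For orientation, the sketch below shows a conventional single-time-step TVD scheme (first-order upwind flux plus a minmod-limited correction, CFL ≤ 1) for scalar advection; Harten's LTS construction, and the modification developed in the paper, generalize this machinery to CFL numbers well above one. Grid, CFL value, and initial data here are arbitrary.

```python
import numpy as np

# Conventional (CFL <= 1) TVD scheme for u_t + a u_x = 0, a > 0, periodic:
# upwind flux plus a minmod-limited anti-diffusive correction.
def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

a, N = 1.0, 200
dx = 1.0 / N
dt = 0.8 * dx / a                        # CFL = 0.8
xg = np.arange(N) * dx
u = np.where((xg > 0.3) & (xg < 0.6), 1.0, 0.0)   # square-wave initial data

for _ in range(100):
    du = np.roll(u, -1) - u              # forward differences u_{i+1} - u_i
    safe = np.where(np.abs(du) > 1e-14, du, 1e-14)
    phi = minmod(np.roll(du, 1) / safe)  # limiter keeps the scheme TVD
    nu = a * dt / dx
    flux = a * u + 0.5 * a * (1 - nu) * phi * du   # limited flux at i+1/2
    u -= dt / dx * (flux - np.roll(flux, 1))
print(f"total variation = {np.abs(np.diff(u)).sum():.4f}")  # stays <= 2
```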
NASA Astrophysics Data System (ADS)
Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong
2018-02-01
We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive-order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation is greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third order, for both the solution and its gradient variables. For hybrid schemes, recycling the gradient variable information for the solution variable reconstruction makes one additional order of accuracy, i.e., second, third, and fourth order, possible for the solution variable, with less computational work than needed for the uniform schemes. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for cases with discontinuous diffusion coefficients, which calls for further investigation of monotonicity-preserving hyperbolic diffusion methods.
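The heart of the hyperbolic method is the reformulation of diffusion as a stiff first-order system whose pseudo-time steady state recovers the original elliptic equation. The sketch below solves ν u_xx + s = 0 that way with a simple Lax-Friedrichs discretization; the relaxation length L_r = 1/(2π) is the choice commonly quoted by Nishikawa, while the grid, source, and marching parameters are illustrative, and the paper's cell-centered k-exact reconstructions are not reproduced here.

```python
import numpy as np

# Hyperbolic reformulation of nu*u_xx + s = 0 (after Nishikawa):
#   u_tau = nu*p_x + s,   p_tau = (u_x - p)/Tr,
# whose pseudo-time steady state satisfies p = u_x and the original equation.
nu, N = 1.0, 100
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
Lr = 1.0 / (2 * np.pi)
Tr = Lr**2 / nu
c = np.sqrt(nu / Tr)                      # wave speed of the hyperbolic system
s = np.pi**2 * np.sin(np.pi * x)          # source; exact solution sin(pi*x)
u, p = np.zeros(N), np.zeros(N)
dt = 0.9 * dx / c

def with_bc(v, left, right):              # ghost cells
    return np.concatenate(([left], v, [right]))

for _ in range(4000):
    ue = with_bc(u, -u[0], -u[-1])        # reflect so the wall value is u = 0
    pe = with_bc(p, p[0], p[-1])          # zero-gradient for the gradient var
    # Lax-Friedrichs fluxes for q = (u, p), F(q) = (-nu*p, -u/Tr)
    Fu, Fp = -nu * pe, -ue / Tr
    Fu_h = 0.5 * (Fu[:-1] + Fu[1:]) - 0.5 * c * (ue[1:] - ue[:-1])
    Fp_h = 0.5 * (Fp[:-1] + Fp[1:]) - 0.5 * c * (pe[1:] - pe[:-1])
    u += dt * (-(Fu_h[1:] - Fu_h[:-1]) / dx + s)
    p += dt * (-(Fp_h[1:] - Fp_h[:-1]) / dx - p / Tr)
err = np.sqrt(np.mean((u - np.sin(np.pi * x))**2))
print(f"L2 error vs sin(pi x): {err:.2e}")
```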
Differentiating between precursor and control variables when analyzing reasoned action theories.
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin; Brown, Larry; DiClemente, Ralph; Romer, Daniel; Valois, Robert; Vanable, Peter A.; Carey, Michael P.; Salazar, Laura
2010-02-01
This paper highlights the distinction between precursor and control variables in the context of reasoned action theory. Here the theory is combined with structural equation modeling to demonstrate how age and past sexual behavior should be situated in a reasoned action analysis. A two wave longitudinal survey sample of African-American adolescents is analyzed where the target behavior is having vaginal sex. Results differ when age and past behavior are used as control variables and when they are correctly used as precursors. Because control variables do not appear in any form of reasoned action theory, this approach to including background variables is not correct when analyzing data sets based on the theoretical axioms of the Theory of Reasoned Action, the Theory of Planned Behavior, or the Integrative Model. PMID:19370408
Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao; Bao, Lei
2016-03-01
Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students' abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction. PMID:26949425
Rugged Metropolis sampling with simultaneous updating of two dynamical variables
NASA Astrophysics Data System (ADS)
Berg, Bernd A.; Zhou, Huan-Xiang
2005-07-01
The rugged Metropolis (RM) algorithm is a biased updating scheme which aims at directly hitting the most likely configurations in a rugged free-energy landscape. Details of the one-variable (RM1) implementation of this algorithm are presented. This is followed by an extension to simultaneous updating of two dynamical variables (RM2). In a test with the brain peptide Met-Enkephalin in vacuum, RM2 improves conventional Metropolis simulations by a factor of about 4. Correlations between three or more dihedral angles appear to prevent larger improvements at low temperatures. We also investigate a multihit Metropolis scheme, which spends more CPU time on variables with large autocorrelation times.
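A minimal sketch of simultaneous two-variable Metropolis updating, using a toy energy surface rather than the Met-Enkephalin model; note that the actual RM algorithm also biases proposals toward likely configurations (typically using estimates from a higher-temperature run), which the plain symmetric proposals below do not capture:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy rugged energy surface standing in for a peptide's free-energy
    # landscape over dihedral angles (not the Met-Enkephalin model).
    return np.sum(x**2 + 1.5 * np.cos(3.0 * x))

def metropolis(n_steps, n_vars=4, beta=1.0, block=2, step=0.5):
    # block=1 ~ one-variable (RM1-style) moves; block=2 ~ simultaneous
    # two-variable (RM2-style) moves, here with plain symmetric proposals.
    x = rng.uniform(-2.0, 2.0, n_vars)
    e = energy(x)
    accepted = 0
    for _ in range(n_steps):
        idx = rng.choice(n_vars, size=block, replace=False)
        prop = x.copy()
        prop[idx] += rng.normal(0.0, step, size=block)
        e_new = energy(prop)
        if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
            x, e = prop, e_new          # Metropolis accept
            accepted += 1
    return x, accepted / n_steps

x, acc = metropolis(20_000)
print(f"final state {np.round(x, 3)}, acceptance rate {acc:.2f}")
```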
The a(3) Scheme--A Fourth-Order Space-Time Flux-Conserving and Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2008-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To initiate a systematic CESE development of high order schemes, in this paper we provide a thorough discussion on the structure, consistency, stability, phase error, and accuracy of a new 4th-order space-time flux-conserving and neutrally stable CESE solver of a 1D scalar advection equation. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables (the numerical analogues of the dependent variable and its 1st-order and 2nd-order spatial derivatives, respectively) and three equations per mesh point, the new scheme is referred to as the a(3) scheme. Through the von Neumann analysis, it is shown that the a(3) scheme is stable if and only if the Courant number is less than 0.5. Moreover, it is established numerically that the a(3) scheme is 4th-order accurate.
A Linearized Prognostic Cloud Scheme in NASAs Goddard Earth Observing System Data Assimilation Tools
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul
2015-01-01
A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled: the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models the sources, sublimation, evaporation, and autoconversion. Large-scale, anvil, and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are widespread in this part of the scheme, and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian: the replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well, and perturbations of clouds are well captured for the lead times of interest.
Numerical methods for incompressible viscous flows with engineering applications
NASA Technical Reports Server (NTRS)
Rose, M. E.; Ash, R. L.
1988-01-01
A numerical scheme has been developed to solve the incompressible, 3-D Navier-Stokes equations using velocity-vorticity variables. This report summarizes the development of the numerical approximation schemes for the divergence and curl of the velocity vector fields and the development of compact schemes for handling boundary and initial boundary value problems.
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
Abstract - SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, which cannot estimate variance. Sampling probabilities are based on a sediment rating function which promotes greater sampling intensity during periods of high...
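The key mechanism is sampling with probability proportional to a rating-curve prediction and then correcting by the inverse selection probability, which is what makes the yield estimate unbiased. A minimal sketch of that idea (Hansen-Hurwitz/PPS estimation with replacement, with made-up numbers; SALT's actual list-time selection mechanics are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-period predictions from a sediment rating function
# (e.g., flux predicted from discharge); numbers are illustrative only.
aux = rng.lognormal(mean=2.0, sigma=1.0, size=500)        # rating predictions
true_flux = aux * rng.lognormal(0.0, 0.3, size=aux.size)  # actual fluxes

n = 50
p = aux / aux.sum()                       # selection probability ∝ prediction
chosen = rng.choice(aux.size, size=n, replace=True, p=p)

# PPS-with-replacement (Hansen-Hurwitz) estimator of total yield: each draw
# contributes y_i / p_i, so high-flow periods are sampled more often but
# weighted less, leaving the estimate of the total unbiased.
estimate = np.mean(true_flux[chosen] / p[chosen])
print(f"estimated total {estimate:,.0f} vs true total {true_flux.sum():,.0f}")
```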
Pilot-multiplexed continuous-variable quantum key distribution with a real local oscillator
NASA Astrophysics Data System (ADS)
Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua
2018-01-01
We propose a pilot-multiplexed continuous-variable quantum key distribution (CVQKD) scheme based on a local local oscillator (LLO). Our scheme utilizes time-multiplexing and polarization-multiplexing techniques to dramatically isolate the quantum signal from the pilot, employs two heterodyne detectors to separately detect the signal and the pilot, and adopts a phase compensation method to almost eliminate the multifrequency phase jitter. In order to analyze the performance of our scheme, a general LLO noise model is constructed. Besides the phase noise and the modulation noise, the photon-leakage noise from the reference path and the quantization noise due to the analog-to-digital converter (ADC) are also considered, which are analyzed here for the first time in the LLO regime. Under such a general noise model, our scheme has a higher key rate and longer secure distance compared with the preexisting LLO schemes. Moreover, we also conduct an experiment to verify our pilot-multiplexed scheme. Results show that it maintains a low level of phase noise and is expected to obtain a 554-Kbps secure key rate within a 15-km distance under the finite-size effect.
Moon, Jongho; Lee, Donghoon; Lee, Youngsook; Won, Dongho
2017-04-25
User authentication in wireless sensor networks is more difficult than in traditional networks owing to sensor network characteristics such as unreliable communication, limited resources, and unattended operation. For these reasons, various authentication schemes have been proposed to provide secure and efficient communication. In 2016, Park et al. proposed a secure biometric-based authentication scheme with smart card revocation/reissue for wireless sensor networks. However, we found that their scheme was still insecure against impersonation attack and had a problem in the smart card revocation/reissue phase. In this paper, we show how an adversary can impersonate a legitimate user or sensor node and perform illegal smart card revocation/reissue, and we prove that Park et al.'s scheme fails to provide proper revocation/reissue. In addition, we propose an enhanced scheme that provides efficiency as well as anonymity and security. Finally, we provide security and performance analyses of the previous schemes and the proposed scheme, together with a formal analysis based on the random oracle model. The results prove that the proposed scheme resolves the impersonation attack and the other security flaws identified in the security analysis section. Furthermore, the performance analysis shows that its computational cost is lower than that of the previous scheme. PMID:28441331
NASA Astrophysics Data System (ADS)
Hasan, Md Alfi; Islam, A. K. M. Saiful
2018-05-01
Accurate forecasting of heavy rainfall is crucial for improving flood warnings to prevent loss of life and property damage due to flash-flood-related landslides in the hilly region of Bangladesh. Forecasting heavy rainfall events is challenging, and the microphysics and cumulus parameterization schemes of the Weather Research and Forecasting (WRF) model play an important role. In this study, a comparison was made between observed and simulated rainfall using 19 different combinations of microphysics and cumulus schemes available in WRF over Bangladesh. Two severe rainfall events, on 11 June 2007 and 24-27 June 2012 over the eastern hilly region of Bangladesh, were selected for performance evaluation using a number of indicators. A combination of the Stony Brook University microphysics scheme with the Tiedtke cumulus scheme is found to be the most suitable for reproducing those events. Another combination, the single-moment 6-class microphysics scheme with the New Grell 3D cumulus scheme, also showed reasonable performance in forecasting heavy rainfall over this region. The sensitivity analysis confirms that cumulus schemes play a greater role than microphysics schemes in reproducing the heavy rainfall events using WRF.
A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism
NASA Astrophysics Data System (ADS)
Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo
2015-03-01
In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on the permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in the permutation and diffusion stages respectively; chaotic state variables produced at high computational cost are thus not sufficiently used. (2) The key stream depends solely on the secret key, and hence the cryptosystem is vulnerable to known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and improve the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out, and the results prove the superior security and high efficiency of the scheme.
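For orientation, a toy fixed-key permutation-diffusion cipher of the kind the paper critiques is sketched below (logistic-map keystream; the proposed scheme would instead select state variables dynamically so the keystream depends on the plaintext):

```python
import numpy as np

def logistic_stream(x0, mu, n, burn=500):
    # Chaotic keystream from the logistic map x <- mu*x*(1-x).
    x, out = x0, np.empty(n)
    for i in range(-burn, n):       # discard the transient first
        x = mu * x * (1.0 - x)
        if i >= 0:
            out[i] = x
    return out

def encrypt(img, key=(0.3456, 3.99)):
    # Stage 1: permutation of pixel positions; stage 2: XOR diffusion.
    # The keystream here depends only on the secret key -- exactly the
    # weakness (flaw 2) described in the abstract above.
    flat = img.ravel().astype(np.uint8)
    ks = logistic_stream(key[0], key[1], 2 * flat.size)
    perm = np.argsort(ks[:flat.size])          # permutation stage
    mask = (ks[flat.size:] * 256).astype(np.uint8)
    return (flat[perm] ^ mask).reshape(img.shape), perm

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm = encrypt(img)   # decryption inverts the XOR, then the permutation
```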
Tropical Intraseasonal Variability in Version 3 of the GFDL Atmosphere Model
NASA Astrophysics Data System (ADS)
Benedict, J. J.; Maloney, E. D.; Sobel, A. H.; Frierson, D. M.; Donner, L.
2012-12-01
Tropical intraseasonal variability is examined in version 3 of the Geophysical Fluid Dynamics Laboratory Atmosphere Model (AM3). Compared to its predecessor AM2, AM3 uses a new treatment of deep and shallow cumulus convection and mesoscale clouds. The AM3 cumulus parameterization is a mass flux-based scheme but also, unlike that in AM2, incorporates subgrid-scale vertical velocities; these play a key role in cumulus microphysical processes. The AM3 convection scheme allows multi-phase water substance produced in deep cumuli to be transported directly into mesoscale clouds, which strongly influence large-scale moisture and radiation fields. We examine four AM3 simulations, using a control model and three versions with different modifications to the deep convection scheme. In the control AM3, using a convective closure based on CAPE relaxation, both the MJO and Kelvin waves are weak compared to those in observations. By modifying the convective closure and trigger assumptions to inhibit deep cumuli, AM3 produces reasonable intraseasonal variability but a degraded mean state. MJO-like disturbances in the modified AM3 propagate eastward at roughly the observed speed in the Indian Ocean but up to twice the observed speed in the West Pacific. Distinct differences in intraseasonal convective organization and propagation exist among the modified AM3 versions. Differences in vertical diabatic heating profiles associated with the MJO are also found. The two AM3 versions with the strongest intraseasonal signals have a more prominent "bottom-heavy" heating profile leading the disturbance center and "top-heavy" heating profile following the disturbance. The more realistic heating structures are associated with an improved depiction of moisture convergence and intraseasonal convective organization in AM3.
[Figure: lag correlations of 850 hPa zonal wind with precipitation at (left column) 90°E and (right column) 150°E; both fields bandpass filtered (20-100 days) and averaged over 15°S-15°N; observed winds from ERAI and rainfall from TRMM; reference phase-speed lines at 5 m/s and 10 m/s.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.
2013-09-01
We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.
NASA Astrophysics Data System (ADS)
Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai
2018-02-01
In order to solve the problem of low voltage ride-through (LVRT) for the variable-frequency drives (VFDs) of major auxiliary equipment in thermal power plants, a scheme with a supercapacitor connected in parallel with the DC link of the VFD is put forward; furthermore, two solutions, direct parallel support and voltage-boost parallel support of the supercapacitor, are proposed. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation and verified by Matlab simulation. Finally, a test prototype is set up, and the test results prove the feasibility of the proposed schemes.
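The energy-conservation sizing mentioned above reduces to the capacitor energy balance ½C(V₁² − V₂²) = P·t. A minimal back-of-envelope sketch, with all numbers illustrative rather than the paper's test-rig values:

```python
# Back-of-envelope supercapacitor sizing from the energy balance
# 0.5*C*(V_nom**2 - V_min**2) = P*t; all values are illustrative,
# not the paper's test-rig numbers.
P = 55e3        # motor load carried through the voltage sag, W
t = 0.5         # required ride-through duration, s
V_nom = 540.0   # nominal DC-link voltage, V
V_min = 450.0   # minimum DC-link voltage the VFD tolerates, V

E = P * t                              # energy the capacitor must supply, J
C = 2.0 * E / (V_nom**2 - V_min**2)    # required capacitance, F
print(f"required capacitance ≈ {C:.2f} F")
```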
Pulley, Simon; Foster, Ian; Collins, Adrian L
2017-06-01
The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme was simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (Scheme 2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (Scheme 1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (Scheme 3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster analysis based schemes (2 and 3) were primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise (Scheme 1). Modified cluster analysis based classification methods have the potential to reduce composite uncertainty significantly in future source tracing studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vallam, P.; Qin, X. S.
2017-10-01
Anthropogenically driven climate change would affect the global ecosystem and is becoming a worldwide concern. Numerous studies have been undertaken to determine the future trends of meteorological variables at different scales. Despite these studies, there remains significant uncertainty in the prediction of future climates. To examine the uncertainty arising from using different schemes to downscale the meteorological variables for future horizons, projections from different statistical downscaling schemes were examined. These schemes included the statistical downscaling method (SDSM), change factors incorporated with LARS-WG, and the bias-corrected disaggregation (BCD) method. Global circulation models (GCMs) based on CMIP3 (HadCM3) and CMIP5 (CanESM2) were utilized to perturb the changes in the future climate. Five study sites (i.e., Alice Springs, Edmonton, Frankfurt, Miami, and Singapore) with diverse climatic conditions were chosen for examining the spatial variability of applying various statistical downscaling schemes. The study results indicated that the regions experiencing heavy precipitation intensities were most likely to demonstrate divergence between the predictions from various statistical downscaling methods. Also, the variance computed in projecting the weather extremes indicated the uncertainty derived from the selection of downscaling tools and climate models. This study could help improve understanding of the features of different downscaling approaches and the overall downscaling uncertainty.
Optimal variable flip angle schemes for dynamic acquisition of exchanging hyperpolarized substrates
NASA Astrophysics Data System (ADS)
Xing, Yan; Reed, Galen D.; Pauly, John M.; Kerr, Adam B.; Larson, Peder E. Z.
2013-09-01
In metabolic MRI with hyperpolarized contrast agents, the signal levels vary over time due to T1 decay, T2 decay following RF excitations, and metabolic conversion. Efficient usage of the nonrenewable hyperpolarized magnetization requires specialized RF pulse schemes. In this work, we introduce two novel variable flip angle schemes for dynamic hyperpolarized MRI in which the flip angle is varied between excitations and between metabolites. These were optimized to distribute the magnetization relatively evenly throughout the acquisition by accounting for T1 decay, prior RF excitations, and metabolic conversion. Simulation results are presented to confirm the flip angle designs and evaluate the variability of signal dynamics across typical ranges of T1 and metabolic conversion. They were implemented using multiband spectral-spatial RF pulses to independently modulate the flip angle at various chemical shift frequencies. With these schemes we observed increased SNR of [1-13C]lactate generated from [1-13C]pyruvate, particularly at later time points. This will allow for improved characterization of tissue perfusion and metabolic profiles in dynamic hyperpolarized MRI.
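As context for the schedule optimization described above, the simplest member of this family is the classic constant-signal variable flip angle schedule, which accounts only for RF depletion (no T1 decay or metabolic conversion, both of which the paper's designs do include). A minimal sketch:

```python
import numpy as np

def constant_signal_flips(n_pulses):
    # Classic RF-compensated schedule theta_k = arctan(1/sqrt(N-k)), which
    # equalizes the transverse signal across N excitations when only RF
    # depletion matters (no T1, no conversion); it ends at 90 degrees.
    k = np.arange(1, n_pulses + 1)
    return np.degrees(np.arctan2(1.0, np.sqrt(n_pulses - k)))

theta = constant_signal_flips(10)

# Check: longitudinal magnetization times sin(theta) is constant over pulses.
mz, signals = 1.0, []
for t in np.radians(theta):
    signals.append(mz * np.sin(t))
    mz *= np.cos(t)

print(np.round(theta, 1))                 # ..., 45.0, 90.0
print(np.allclose(signals, signals[0]))   # True
```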
A special protection scheme utilizing trajectory sensitivity analysis in power transmission
NASA Astrophysics Data System (ADS)
Suriyamongkol, Dan
In recent years, new measurement techniques have provided opportunities to improve the North American Power System observability, control and protection. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and the mathematical formulation of TSA lends itself well to some of the time domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach here will use the TSA in a novel way by using the sensitivities of system state variables with respect to state parameter variations to determine the state parameter controls required to achieve the desired state variable movements. The initial application will operate based on the assumption that the modeled power system has full system observability, and practical considerations will be discussed.
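As a sketch of the underlying machinery (not the dissertation's power-system model), trajectory sensitivities S = ∂x/∂p obey the variational equation Ṡ = (∂f/∂x)S + ∂f/∂p and can be integrated alongside the state; a toy scalar example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy one-state system xdot = f(x, p) = -p*x + 1 with trajectory sensitivity
# S = dx/dp obeying the variational equation Sdot = (df/dx)*S + df/dp.
# A real special protection scheme would use the multi-machine power-system
# model here; this scalar example only shows the mechanics.
def rhs(t, y, p):
    x, S = y
    return [-p * x + 1.0, -p * S - x]

p0 = 2.0
sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0], args=(p0,))
x_end, S_end = sol.y[:, -1]

# First-order prediction of how a control action dp moves the trajectory,
# which is how sensitivities rank candidate parameter adjustments.
dp = 0.1
print(f"x(3) = {x_end:.4f}; predicted after dp: {x_end + S_end * dp:.4f}")
```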
1980-03-01
throttle torque capability. Various schemes are under development to reduce this disadvantage. These schemes include reducing compressor and turbine rotor...inertia, using a Pelton wheel or burners, electronic feedback systems, and variable area turbocharging. Other turbocharging disadvantages include...around the turbine) and using exhaust augmenters or combustors (wasteful of fuel, costly, and complex), and the variable area turbocharger (VAT). An
Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.
Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing
2009-06-01
Methods of image encryption based on fractional Fourier transform have an incipient flaw in security. We show that the schemes have the deficiency that, for several reasons, one group of encryption keys corresponds to many groups of keys that can correctly decrypt the encrypted image. In some schemes, many factors result in the deficiencies, such as the encryption scheme based on multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all the deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.
Studies in integrated line-and packet-switched computer communication systems
NASA Astrophysics Data System (ADS)
Maglaris, B. S.
1980-06-01
The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.
Multidimensional modulation for next-generation transmission systems
NASA Astrophysics Data System (ADS)
Millar, David S.; Koike-Akino, Toshiaki; Kojima, Keisuke; Parsons, Kieran
2017-01-01
Recent research in multidimensional modulation has shown great promise in long reach applications. In this work, we will investigate the origins of this gain, the different approaches to multidimensional constellation design, and different performance metrics for coded modulation. We will also discuss the reason that such coded modulation schemes seem to have limited application at shorter distances, and the potential for other coded modulation schemes in future transmission systems.
Student Teachers’ Proof Schemes on Proof Tasks Involving Inequality: Deductive or Inductive?
NASA Astrophysics Data System (ADS)
Rosyidi, A. H.; Kohar, A. W.
2018-01-01
Exploring student teachers' proof ability is crucial, as it is important for improving the quality of their learning process and helps their future students learn how to construct a proof. Hence, this study aims at exploring the proof schemes of student teachers at the beginning of their studies. Data were collected from 130 proofs produced by 65 Indonesian student teachers on two proof tasks involving algebraic inequality. For analysis, the proofs were classified into the refined proof scheme levels proposed by Lee (2016), ranging from inductive, which provides only irrelevant inferences, to deductive proofs, which address formal representation. The findings present several examples of each of Lee's levels in the student teachers' proofs, spanning irrelevant inferences, novice use of examples or logical reasoning, strategic use of examples for reasoning, deductive inferences with major and minor logical coherence, and deductive proof with informal and formal representation. In addition, more than half of the proofs were coded as inductive schemes, which do not meet the requirements of the proof tasks examined in this study. This study suggests that teacher educators in teacher colleges reform the curriculum regarding proof learning so that it can accommodate the improvement of student teachers' proving ability from inductive to deductive proof as well as from informal to formal proof.
Novel approach for dam break flow modeling using computational intelligence
NASA Astrophysics Data System (ADS)
Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar
2018-04-01
A new methodology based on computational intelligence (CI) is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models, including the difficulty of using the exact solutions and the unwanted fluctuations that arise in the numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e., the length of the channel, the depths of the up- and downstream sections, time, and distance, as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solutions and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms both RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
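To make the CI setup concrete, a surrogate of this kind can be sketched with scikit-learn; the feature ranges and targets below are placeholders, not the study's twenty-nine scenarios:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Inputs: channel length, upstream depth, downstream depth, time, distance.
# Targets: (depth, velocity) at each query point. Random placeholders here;
# the study trained on analytical/numerical dam-break solutions.
X = rng.uniform([500, 2.0, 0.1, 0.0, 0.0],
                [2000, 10.0, 1.0, 60.0, 2000.0], size=(1000, 5))
y = rng.normal(size=(1000, 2))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))   # predicted (depth, velocity) at three points
```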
NASA Astrophysics Data System (ADS)
Borge, Rafael; Alexandrov, Vassil; José del Vas, Juan; Lumbreras, Julio; Rodríguez, Encarnacion
Meteorological inputs play a vital role in regional air quality modelling. An extensive sensitivity analysis of the Weather Research and Forecasting (WRF) model was performed in the framework of the Integrated Assessment Modelling System for the Iberian Peninsula (SIMCA) project. Up to 23 alternative model configurations, including Planetary Boundary Layer schemes, microphysics, land-surface models, radiation schemes, Sea Surface Temperature and Four-Dimensional Data Assimilation, were tested in a 3 km spatial resolution domain. Model results for the most significant meteorological variables were assessed through a series of common statistics. The physics options identified as producing better results (Yonsei University Planetary Boundary Layer, WRF Single-Moment 6-class microphysics, Noah Land-surface model, Eta Geophysical Fluid Dynamics Laboratory longwave radiation and MM5 shortwave radiation schemes), along with other relevant user settings (time-varying Sea Surface Temperature and combined grid-observational nudging), were included in a "best case" configuration. This setup was tested and found to produce more accurate estimates of temperature, wind and humidity fields at surface level than any other configuration for the two episodes simulated. Planetary Boundary Layer height predictions showed reasonable agreement with estimates derived from routine atmospheric soundings. Although some seasonal and geographical differences were observed, the model showed acceptable behaviour overall. Besides defining the most appropriate setup of the WRF model for air quality modelling over the Iberian Peninsula, this study provides a general overview of WRF sensitivity and can constitute a reference for future mesoscale meteorological modelling exercises.
A Two-Variable Grading Scheme.
ERIC Educational Resources Information Center
Applebaum, David C.
1979-01-01
Explains a flexible two-part grading scheme which attempts to mix the best of both the descriptive and analytical treatments of an introductory astronomy course, to allow for differences in the academic backgrounds of the students. (GA)
Teleportation-based continuous variable quantum cryptography
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2017-03-01
We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation) in order to generate a secure key and beyond the 50% loss case at the incoherent attack level.
The controlled growth method - A tool for structural optimization
NASA Technical Reports Server (NTRS)
Hajela, P.; Sobieszczanski-Sobieski, J.
1981-01-01
An adaptive design variable linking scheme in a NLP based optimization algorithm is proposed and evaluated for feasibility of application. The present scheme, based on an intuitive effectiveness measure for each variable, differs from existing methodology in that a single dominant variable controls the growth of all others in a prescribed optimization cycle. The proposed method is implemented for truss assemblies and a wing box structure for stress, displacement and frequency constraints. Substantial reduction in computational time, even more so for structures under multiple load conditions, coupled with a minimal accompanying loss in accuracy, vindicates the algorithm.
Children's reasons for living, self-esteem, and violence.
Merwin, Rhonda M; Ellis, Jon B
2004-01-01
Attitudes toward violence and reasons for living in young adolescents with high, moderate, and low self-esteem were examined. The authors devised an Attitudes Toward Violence questionnaire; Rosenberg's Self-Esteem Scale (RSE) and the Brief Reasons for Living in Adolescents (BRFL-A) were used to assess adaptive characteristics. The independent variables were gender and self-esteem. The dependent variables were total Reasons for Living score and Attitudes Toward Violence score. Participants included 138 boys and 95 girls, ages 11 to 15 years (M = 13.3), from a city middle school. The results showed that for the dependent variable attitudes toward violence, main effects were found for both gender and self-esteem. For the dependent variable reasons for living, a main effect was found for self-esteem but not for gender. An inverse relationship was found between violence and reasons for living. Being male and having low self-esteem emerged as predictors of more accepting attitudes toward violence. Low self-esteem was significantly related to fewer reasons for living.
What variables can influence clinical reasoning?
Ashoorion, Vahid; Liaghatdar, Mohammad Javad; Adibi, Peyman
2012-12-01
Clinical reasoning is one of the most important competencies that a physician should achieve. Many medical schools and licensing bodies try to predict it based on general measures such as critical thinking, personality, and emotional intelligence. This study aimed at providing a model of the relationships between these constructs. Sixty-nine medical students participated in this study. A test battery was devised consisting of four parts: clinical reasoning measures, the NEO personality inventory, the Bar-On EQ inventory, and the California critical thinking questionnaire. All participants completed the tests. Correlation and multiple regression analyses were used for data analysis. There are low to moderate correlations between clinical reasoning and the other variables. Emotional intelligence is the only variable that contributes to the clinical reasoning construct (r = 0.17-0.34; R² change = 0.46, p value = 0.000). Although clinical reasoning can be considered a kind of thinking, no significant correlation was detected between it and the other constructs. Emotional intelligence (and its subscales) is the only variable that can be used for clinical reasoning prediction.
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
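A rough stand-in for the fusion step can be sketched with recursive graph bisection; here NetworkX's Kernighan-Lin bisection substitutes for COLA's minimum-ratio-cut subroutine, and the operator names, traffic weights, and PE size cap are illustrative (load balancing and fusion constraints are omitted):

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def fuse(graph, max_ops=3):
    # Recursively bisect the operator graph until each part fits in one PE.
    if graph.number_of_nodes() <= max_ops:
        return [set(graph.nodes)]
    a, b = kernighan_lin_bisection(graph, weight="traffic")
    return (fuse(graph.subgraph(a).copy(), max_ops)
            + fuse(graph.subgraph(b).copy(), max_ops))

g = nx.Graph()
g.add_weighted_edges_from(
    [("src", "parse", 9), ("parse", "filter", 7), ("filter", "join", 5),
     ("join", "agg", 8), ("agg", "sink", 4), ("parse", "audit", 1)],
    weight="traffic",   # inter-operator stream rate we prefer to keep in-PE
)
print(fuse(g))          # each set of operators becomes one processing element
```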
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Yeonhee; Kang, Moses; Muljadi, Eduard
This paper proposes a power-smoothing scheme for a variable-speed wind turbine generator (WTG) that can smooth out the WTG's fluctuating power caused by varying wind speeds, and thereby keep the system frequency within a narrow range. The proposed scheme employs an additional loop based on the system frequency deviation that operates in conjunction with the maximum power point tracking (MPPT) control loop. Unlike the conventional, fixed-gain scheme, its control gain is modified with the rotor speed. In the proposed scheme, the control gain is determined by considering the ratio of the output of the additional loop to that of the MPPT loop. To improve the contribution of the scheme toward maintaining the frequency while ensuring the stable operation of WTGs, in the low rotor speed region the ratio is set to be proportional to the rotor speed; in the high rotor speed region, the ratio remains constant. The performance of the proposed scheme is investigated under varying wind conditions for the IEEE 14-bus system. The simulation results demonstrate that the scheme successfully operates regardless of the output power fluctuation of a WTG by adjusting the gain with the rotor speed, and thereby improves the frequency-regulating capability of a WTG.
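A minimal control-law sketch of the speed-dependent gain idea (thresholds and constants below are illustrative, not the paper's tuned values):

```python
def smoothing_gain(w, w_low=0.7, w_high=1.1, k_max=25.0):
    # Gain rises with rotor speed in the low-speed region (protecting the
    # rotor from over-deceleration) and stays constant in the high-speed
    # region, mirroring the ratio rule described in the abstract.
    if w <= w_low:
        return 0.0
    if w < w_high:
        return k_max * (w - w_low) / (w_high - w_low)
    return k_max

def power_reference(p_mppt, w, df):
    # Additional loop output added to the MPPT reference; df is the system
    # frequency deviation in per unit (negative on underfrequency).
    return p_mppt - smoothing_gain(w) * df

print(power_reference(p_mppt=0.80, w=1.0, df=-0.01))  # boost on underfrequency
```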
Structure-based CoMFA as a predictive model - CYP2C9 inhibitors as a test case.
Yasuo, Kazuya; Yamaotsu, Noriyuki; Gouda, Hiroaki; Tsujishita, Hideki; Hirono, Shuichi
2009-04-01
In this study, we tried to establish a general scheme for creating a model that could predict the affinity of small compounds to their target proteins. This scheme consists of a search for ligand-binding sites on a protein, generation of bound conformations (poses) of ligands in each of the sites by docking, identification of the correct poses of each ligand by consensus scoring and MM-PBSA analysis, and construction of a CoMFA model with the obtained poses to predict the affinity of the ligands. Using a crystal structure of CYP2C9 and twenty known CYP inhibitors as a test case, we obtained a CoMFA model with good statistics, which suggested that the classification of the binding sites as well as the predicted bound poses of the ligands were reasonable. The scheme described here provides a method to predict the affinity of small compounds with reasonable accuracy, which is expected to heighten the value of computational chemistry in the drug design process.
Additive schemes for certain operator-differential equations
NASA Astrophysics Data System (ADS)
Vabishchevich, P. N.
2010-12-01
Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
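For concreteness, a standard two-level weighted scheme of the kind analyzed in this literature, written for du/dt + Au = f with A = A* ≥ 0 (a textbook form, not necessarily the paper's exact notation):

```latex
\frac{y^{n+1}-y^{n}}{\tau} + A\bigl(\sigma y^{n+1} + (1-\sigma)\,y^{n}\bigr) = f^{n},
\qquad \sigma \ge \tfrac{1}{2} \;\Longrightarrow\; \text{unconditional stability.}
% For A = A_1 + A_2 (e.g., a splitting with respect to spatial variables),
% the additive analogue advances each A_\alpha in a separate, simpler substep
% of the same weighted form.
```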
Well-balanced Schemes for Gravitationally Stratified Media
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2015-10-01
We present a well-balanced scheme for the Euler equations with gravitation. The scheme is capable of maintaining exactly (up to machine precision) a discrete hydrostatic equilibrium without any assumption on a thermodynamic variable such as specific entropy or temperature. The well-balanced scheme is based on a local hydrostatic pressure reconstruction. Moreover, it is computationally efficient and can be incorporated into any existing algorithm in a straightforward manner. The presented scheme improves over standard ones especially when flows close to a hydrostatic equilibrium have to be simulated. The performance of the well-balanced scheme is demonstrated on an astrophysically relevant application: a toy model for core-collapse supernovae.
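A 1D second-order sketch of the local hydrostatic pressure reconstruction idea (taking gravity of magnitude g along −x, so hydrostatic balance reads dp/dx = −ρg; the paper's construction is more general):

```latex
\hat{p}_i(x) = p_i - \rho_i\, g\,(x - x_i), \qquad
p^{-}_{i+1/2} = p_i - \rho_i g\,\tfrac{\Delta x}{2}, \quad
p^{+}_{i-1/2} = p_i + \rho_i g\,\tfrac{\Delta x}{2}.
% At equilibrium the interface pressure difference equals the integrated
% source, p^-_{i+1/2} - p^+_{i-1/2} = -\rho_i g \Delta x, so the discrete
% hydrostatic state is preserved to machine precision.
```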
NASA Astrophysics Data System (ADS)
Britt, S.; Tsynkov, S.; Turkel, E.
2018-02-01
We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
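To see where the modified Helmholtz solve comes from, consider a weighted (β-scheme) implicit discretization of u_tt = c²Δu; the paper's fourth-order scheme is a more elaborate compact version of the same idea:

```latex
\frac{u^{n+1}-2u^{n}+u^{n-1}}{\Delta t^{2}}
= c^{2}\,\Delta\bigl(\beta u^{n+1}+(1-2\beta)u^{n}+\beta u^{n-1}\bigr)
\;\Longrightarrow\;
\Delta u^{n+1}-\frac{1}{\beta c^{2}\Delta t^{2}}\,u^{n+1}
= \text{(terms known from } u^{n},\,u^{n-1}\text{)},
% i.e., a modified Helmholtz problem for u^{n+1} at every time step, which
% MDP then solves to fourth-order accuracy on the nonconforming domain.
```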
Semi-quantum Secure Direct Communication Scheme Based on Bell States
NASA Astrophysics Data System (ADS)
Xie, Chen; Li, Lvzhou; Situ, Haozhen; He, Jianhao
2018-06-01
Recently, the idea of semi-quantumness has been often used in designing quantum cryptographic schemes, which allows some of the participants of a quantum cryptographic scheme to remain classical. One of the reasons why this idea is popular is that it allows a quantum information processing task to be accomplished by using as few quantum resources as possible. In this paper, we extend the idea to quantum secure direct communication (QSDC) by proposing a semi-quantum secure direct communication scheme. In the scheme, the message sender, Alice, encodes each bit into a Bell state |Φ⁺⟩ = (1/√2)(|00⟩ + |11⟩) or |Ψ⁺⟩ = (1/√2)(|01⟩ + |10⟩), and the message receiver, Bob, is classical in the sense that he can either let the qubit he receives reflect undisturbed, or measure the qubit in the computational basis {|0⟩, |1⟩} and then resend it in the state he found. Moreover, the security analysis of our scheme is also given.
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided estimates of the sensitivity of orographic snowfall to changes in atmospheric CCN concentrations. What is still lacking, however, is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense; in fact, the expense was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of it organic material), the representation of this process is highly complex and highly expensive within a numerical model; furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, neglecting SA formation within the model was judged the best course of action. The parameterization uses a prescribed source map to add aerosol to the model at the two vertical levels that surround an arbitrary height decided by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model of Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output, along with the hygroscopicity parameter (κ), was saved into a data file that could be interpolated to an arbitrary grid used in RAMS. Comparison of this parameterization with observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than did the WRF Chemistry model, at a fraction of the computational expense.
This emissions scheme was able to show reasonable solutions regarding the aerosol concentrations and can therefore be used to provide an estimate of the seasonal impact of increased CCN on water resources in Western Colorado with relatively low computational expense.
Ocean Variability Effects on Underwater Acoustic Communications
2011-09-01
schemes for accessing wide frequency bands. Compared with OFDM schemes, the multiband MIMO transmission combined with time reversal processing...systems, or multiple-input/multiple-output (MIMO) systems, decision feedback equalization and interference cancellation schemes have been integrated... MIMO receiver also iterates channel estimation and symbol demodulation with
Ashtiani Haghighi, Donya; Mobayen, Saleh
2018-04-01
This paper proposes an adaptive super-twisting decoupled terminal sliding mode control technique for a class of fourth-order systems. The adaptive tuning law eliminates the requirement of knowledge about the upper bounds of external perturbations. Using the proposed control procedure, the state variables of the cart-pole system converge to the decoupled terminal sliding surfaces and their equilibrium points in finite time. Moreover, via the super-twisting algorithm, the chattering phenomenon is avoided without affecting the control performance. The numerical results demonstrate the high stabilization accuracy and lower performance-index values of the suggested method compared with the other ones. The simulation results on the cart-pole system, as well as experimental validations, demonstrate that the proposed control technique exhibits reasonable performance in comparison with the other methods. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Well-balanced high-order solver for blood flow in networks of vessels with variable properties.
Müller, Lucas O; Toro, Eleuterio F
2013-12-01
We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid. Copyright © 2013 John Wiley & Sons, Ltd.
Gutierrez, Hialy; Shewade, Ashwini; Dai, Minghan; Mendoza-Arana, Pedro; Gómez-Dantés, Octavio; Jain, Nishant; Khonelidze, Irma; Nabyonga-Orem, Juliet; Saleh, Karima; Teerawattananon, Yot; Nishtar, Sania; Hornberger, John
2015-08-01
Lessons learned by countries that have successfully implemented coverage schemes for health services may be valuable for other countries, especially low- and middle-income countries (LMICs), which likewise are seeking to provide/expand coverage. The research team surveyed experts in population health management from LMICs for information on characteristics of health care coverage schemes and factors that influenced decision-making processes. The level of coverage provided by the different schemes varied. Nearly all the health care coverage schemes involved various representatives and stakeholders in their decision-making processes. Maternal and child health, cardiovascular diseases, cancer, and HIV were among the highest priorities guiding coverage development decisions. Evidence used to inform coverage decisions included medical literature, regional and global epidemiology, and coverage policies of other coverage schemes. Funding was the most commonly reported reason for restricting coverage. This exploratory study provides an overview of health care coverage schemes from participating LMICs and contributes to the scarce evidence base on coverage decision making. Sharing knowledge and experiences among LMICs can support efforts to establish systems for accessible, affordable, and equitable health care.
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers; therefore, relevant studies on scheme optimization for natural-ecology river planning are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were the objects of optimization. Fourteen factors that influence the natural-ecology planning of urban rivers were selected from five aspects to establish the ANP model. Data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural-ecology planning of urban rivers can be made by the ANP method, which can provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green space planning and design.
Attack and improvements of fair quantum blind signature schemes
NASA Astrophysics Data System (ADS)
Zou, Xiangfu; Qiu, Daowen
2013-06-01
Blind signature schemes allow a user to obtain a signature on a message while the signer learns neither the message nor the resulting signature. Blind signatures have therefore been used in cryptographic protocols that provide anonymity for some participants, such as secure electronic payment systems and electronic voting systems. A fair blind signature is a form of blind signature in which the anonymity can be removed with the help of a trusted entity when this is required for legal reasons. Recently, a fair quantum blind signature scheme was proposed and thought to be safe. In this paper, we first point out a new attack on fair quantum blind signature schemes: if any sender has intercepted any valid signature, he or she can counterfeit a valid signature for any message and cannot be traced through the counterfeited blind signature. We then construct a fair quantum blind signature scheme by improving the existing one; the proposed scheme resists the preceding attack. Furthermore, we demonstrate the security of the proposed fair quantum blind signature scheme and compare it with the original.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn
In combination with the theories of open system and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements taking into account the current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ying-Jie, E-mail: qfyingjie@iphy.ac.cn; Institute of Physics, Chinese Academy of Sciences, Beijing, 100190; Han, Wei
In this paper, we propose a scheme to enhance trapping of entanglement of two qubits in the environment of a photonic band gap material. Our entanglement trapping promotion scheme makes use of combined weak measurements and quantum measurement reversals. The optimal promotion of entanglement trapping can be acquired with a reasonable finite success probability by adjusting measurement strengths. - Highlights: • Propose a scheme to enhance entanglement trapping in photonic band gap material. • Weak measurement and its reversal are performed locally on individual qubits. • Obtain an optimal condition for maximizing the concurrence of entanglement trapping. • Entanglement sudden death can be prevented by weak measurement in photonic band gap.
Computational design of the basic dynamical processes of the UCLA general circulation model
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1977-01-01
The 12-layer UCLA general circulation model encompassing troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at PBL top, ground temperature and water storage, and mass of snow on ground. The selection of space finite-difference schemes for homogeneous incompressible flow (with/without a free surface) and nonlinear two-dimensional nondivergent flow, enstrophy-conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time-differencing schemes is discussed.
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: first, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation; second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Quantum anonymous voting with unweighted continuous-variable graph states
NASA Astrophysics Data System (ADS)
Guo, Ying; Feng, Yanyan; Zeng, Guihua
2016-08-01
Motivated by the revealing topological structures of the continuous-variable graph state (CVGS), we investigate the design of a quantum voting scheme, which has serious advantages over conventional ones in terms of efficiency and graphicness. Three phases are included, i.e., the preparing phase, the voting phase and the counting phase, together with three parties, i.e., the voters, the tallyman and the ballot agency. Two major voting operations are performed on the yielded CVGS in the voting process, namely the local rotation transformation and the displacement operation. The voting information is carried by the CVGS established beforehand, whose persistent entanglement is deployed to keep the privacy of votes and the anonymity of legal voters. For practical applications, two CVGS-based quantum ballots, i.e., comparative ballot and anonymous survey, are specially designed, followed by extended ballot schemes for binary-valued and multi-valued ballots under some constraints on the voting design. Security is ensured by the entanglement of the CVGS, the voting operations and the laws of quantum mechanics. The proposed schemes can be implemented using standard off-the-shelf components, as compared to discrete-variable quantum voting schemes, owing to the characteristics of CV-based quantum cryptography.
Classification of close binary systems by Svechnikov
NASA Astrophysics Data System (ADS)
Dryomova, G. N.
The paper presents a historical overview of classification schemes for eclipsing variable stars, foregrounding the advantages of Svechnikov's classification scheme for close binary systems, which is widely appreciated for the simplicity and brevity of its classification criteria.
Meyer, Markus A.; Chand, Tanzila; Priess, Joerg A.
2015-01-01
Biomass for bioenergy is debated for its potential synergies or tradeoffs with other provisioning and regulating ecosystem services (ESS). This biomass may originate from different production systems and may be purposefully grown or obtained from residues. Increased global concern about the sustainable production of biomass for bioenergy has resulted in numerous certification schemes focusing on best management practices, mostly operating at the plot/field scale. In this study, we compare the ESS of two watersheds in the southeastern US. We show the ESS tradeoffs and synergies of plantation forestry, i.e., pine poles, and agricultural production, i.e., wheat straw and corn stover, with the counterfactual natural or semi-natural forest in both watersheds. The plantation forestry showed less distinct tradeoffs than did corn and wheat production, i.e., for carbon storage, P and sediment retention, groundwater recharge, and biodiversity. Using indicators of landscape composition and configuration, we showed that landscape planning can affect the overall ESS supply and can partly determine if locally set environmental thresholds are being met. Indicators on landscape composition, configuration and naturalness explained more than 30% of the variation in ESS supply. Landscape elements such as largely connected forest patches or more complex agricultural patches, e.g., mosaics with shrub and grassland patches, may enhance ESS supply in both of the bioenergy production systems. If tradeoffs between biomass production and other ESS are not addressed by landscape planning, it may be reasonable to include rules in certification schemes that require, e.g., the connectivity of natural or semi-natural forest patches in plantation forestry or semi-natural landscape elements in agricultural production systems. Integrating indicators on landscape configuration and composition into certification schemes is particularly relevant considering that certification schemes are governance tools used to ensure comparable sustainability standards for biomass produced in countries with variable or absent legal frameworks for landscape planning. PMID:25768660
Simple scheme to implement decoy-state reference-frame-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chunmei; Zhu, Jianrong; Wang, Qin
2018-06-01
We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in the Z, X, and Y bases, decoy states are prepared in the X and Y bases, and vacuum states are set to no bases. Different from the original decoy-state RFI-QKD scheme, whose decoy states are prepared in the Z, X and Y bases, in our scheme decoy states are only prepared in the X and Y bases, which avoids the redundancy of decoy states in the Z basis, saves random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses available in a short time. Numerical simulations show that, considering the finite size effect with a reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits at least comparable or even better performance than that of the original decoy-state RFI-QKD scheme. Especially, in terms of the resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, which gives it great potential to be adopted in current QKD systems.
A review on therapeutic drug monitoring of immunosuppressant drugs.
Mohammadpour, Niloufar; Elyasi, Sepideh; Vahdati, Naser; Mohammadpour, Amir Hooshang; Shamsara, Jamal
2011-11-01
Immunosuppressants require therapeutic drug monitoring because of their narrow therapeutic index and significant inter-individual variability in blood concentrations. This variability can result from factors such as drug-nutrient interactions, drug-disease interactions, renal insufficiency, inflammation and infection, gender, age, polymorphism and liver mass. Drug monitoring is widely practiced, especially for cyclosporine, tacrolimus, sirolimus and mycophenolic acid. CYCLOSPORINE: Therapeutic monitoring of immunosuppressive therapy with cyclosporine is a critical requirement because of intra- and inter-patient variability of drug absorption, a narrow therapeutic window and drug-induced nephrotoxicity. MYCOPHENOLIC ACID (MPA): Reasons for therapeutic drug monitoring of MPA during the post-transplant period include the relationship between MPA pharmacokinetic parameters and clinical outcomes, inter-patient pharmacokinetic variability for MPA despite fixed MMF doses, alterations of MPA pharmacokinetics during the first months after transplantation, drug-drug interactions, and the influence of kidney function on MPA pharmacokinetics. SIROLIMUS: A recent review of the pharmacokinetics of sirolimus suggested a therapeutic range of 5 to 10 μg l(-1) in whole blood. However, the only consensus guidelines published on the therapeutic monitoring of sirolimus concluded that there was not enough information available about the clinical use of the drug to make recommendations. TACROLIMUS: Studies have shown, in kidney and liver transplant patients, significant associations of low tacrolimus concentrations with rejection and of high concentrations with nephrotoxicity. Although the feasibility of a limited sampling scheme to predict AUC has been demonstrated, trough (pre-dose) whole-blood concentration monitoring is still the method of choice.
NASA Astrophysics Data System (ADS)
Ly, M.; Roca, R.; Hourdin, F.
2009-04-01
The Laboratoire de Météorologie Dynamique general circulation model (LMDz) is run in nudged mode using various sets of atmospheric analyses during the wet season of 2006. The zoom capability of the model is used, reaching a mesh size of around 80 km over the whole West African region. Sensitivity experiments were performed to highlight the behaviour of the nudged model under a wide range of conditions: spatial and vertical resolution, zoom intensity, and surface scheme formulation, as well as the forcing and driving parameters: relaxation time, type of analysis (ECMWF, NCEP/GFS), sea surface temperature (climatology vs. 2006) and the nudging variables (wind, temperature, and combinations thereof). A combination of satellite data (e.g., GPCP rain estimates, METEOSAT free-tropospheric humidity) and in-situ observations acquired during the AMMA campaign (temperature and humidity profiles from radiosondes, GPS precipitable water) is used to evaluate the simulations. The analysis focuses on the representation of synoptic variability by the model in terms of rainfall and water vapour. It is shown that the model captures the free-troposphere water vapour variability reasonably well, with highly significant correlations between the radiosondes and the simulated fields. In the lowest levels of the atmosphere and in the upper troposphere, the agreement is less good. When the fields are filtered using a band-pass filter between 3 and 10 days, the correlations generally increase. Details of the sensitivity of these results to the simulation configurations mentioned above will be discussed at the conference.
Dezetter, C; Bareille, N; Billon, D; Côrtes, C; Lechartier, C; Seegers, H
2017-10-01
An individual-based mechanistic, stochastic, and dynamic simulation model was developed to assess economic effects resulting from changes in performance for milk yield and solid contents, reproduction, health, and replacement, induced by the introduction of crossbreeding in Holstein dairy operations. Three crossbreeding schemes, Holstein × Montbéliarde, Holstein × Montbéliarde × Normande, and Holstein × Montbéliarde × Scandinavian Red, were implemented in Holstein dairy operations and compared with Holstein pure breeding. Sires were selected based on their estimated breeding value for milk. Two initial operations were simulated according to the prevalence (average or high) of reproductive and health disorders in the lactating herd. Evolution of operations was simulated during 15 yr under 2 alternative managerial goals (constant number of cows or constant volume of milk sold). After 15 yr, breed percentages reached equilibrium for the 2-breed but not for the 3-breed schemes. After 5 yr of simulation, all 3 crossbreeding schemes reduced average milk yield per cow-year compared with the pure Holstein scheme. Changes in other animal performance (milk solid contents, reproduction, udder health, and longevity) were always in favor of the crossbreeding schemes. Under an objective of constant number of cows, the margin over variable costs in average discounted value over the 15 yr of simulation was slightly increased by crossbreeding schemes, by up to €32/cow-year with an average prevalence of disorders. In operations with a high prevalence of disorders, crossbreeding schemes increased the margin over variable costs by up to €91/cow-year. Under an objective of constant volume of milk sold, crossbreeding schemes improved the margin over variable costs by up to €10/1,000 L (corresponding to around €96/cow-year) for average prevalence of disorders, and up to €13/1,000 L (corresponding to around €117/cow-year) for high prevalence of disorders. Under an objective of constant number of cows, an unfavorable pricing context (milk price vs. concentrates price) slightly increased the positive effects of crossbreeding on the margin over variable costs. Under an objective of constant volume of milk, only very limited changes in the differences of margins were found between the breeding schemes. Our results, obtained conditionally on the parameterization values used here, suggest that dairy crossbreeding should be considered a relevant option for Holstein dairy operations with production levels of up to 9,000 kg/cow-year in France, and possibly in other countries. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Practical limitation for continuous-variable quantum cryptography using coherent states.
Namiki, Ryo; Hirano, Takuya
2004-03-19
In this Letter, first, we investigate the security of a continuous-variable quantum cryptographic scheme with a postselection process against individual beam splitting attack. It is shown that the scheme can be secure in the presence of the transmission loss owing to the postselection. Second, we provide a loss limit for continuous-variable quantum cryptography using coherent states taking into account excess Gaussian noise on quadrature distribution. Since the excess noise is reduced by the loss mechanism, a realistic intercept-resend attack which makes a Gaussian mixture of coherent states gives a loss limit in the presence of any excess Gaussian noise.
Takeda, Shuntaro; Furusawa, Akira
2017-09-22
We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.
Fan, Jiwen; Han, Bin; Varble, Adam; ...
2017-09-06
An intercomparison study of a midlatitude mesoscale squall line is performed using the Weather Research and Forecasting (WRF) model at 1 km horizontal grid spacing with eight different cloud microphysics schemes to investigate processes that contribute to the large variability in simulated cloud and precipitation properties. All simulations tend to produce a wider area of high radar reflectivity (Ze > 45 dBZ) than observed but a much narrower stratiform area. Furthermore, the magnitude of the virtual potential temperature drop associated with the gust front passage is similar in simulations and observations, while the pressure rise and peak wind speed are smaller than observed, possibly suggesting that simulated cold pools are shallower than observed. Most of the microphysics schemes overestimate vertical velocity and Ze in convective updrafts as compared with observational retrievals. Simulated precipitation rates and updraft velocities have significant variability across the eight schemes, even in this strongly dynamically driven system. Differences in simulated updraft velocity correlate well with differences in simulated buoyancy and low-level vertical perturbation pressure gradient, which appears related to cold pool intensity that is controlled by the evaporation rate. Simulations with stronger updrafts have a more optimal convective state, with stronger cold pools, ambient low-level vertical wind shear, and rear-inflow jets. We found that updraft velocity variability between schemes is mainly controlled by differences in simulated ice-related processes, which impact the overall latent heating rate, whereas surface rainfall variability increases in no-ice simulations mainly because of scheme differences in collision-coalescence parameterizations.
What variables can influence clinical reasoning?
Ashoorion, Vahid; Liaghatdar, Mohammad Javad; Adibi, Peyman
2012-01-01
Background: Clinical reasoning is one of the most important competencies that a physician should achieve. Many medical schools and licensing bodies try to predict it from general measures such as critical thinking, personality, and emotional intelligence. This study aimed to provide a model of the relationships between these constructs. Materials and Methods: Sixty-nine medical students participated in this study. A test battery was devised consisting of four parts: clinical reasoning measures, the NEO personality inventory, the Bar-On EQ inventory, and the California critical thinking questionnaire. All participants completed the tests. Correlation and multiple regression analyses were used for data analysis. Results: There were low to moderate correlations between clinical reasoning and the other variables. Emotional intelligence was the only variable that contributed to the clinical reasoning construct (r = 0.17-0.34; R2 change = 0.46, p = 0.000). Conclusion: Although clinical reasoning can be considered a kind of thinking, no significant correlation was detected between it and the other constructs. Emotional intelligence (and its subscales) was the only variable that could be used to predict clinical reasoning. PMID:23853636
Control approach development for variable recruitment artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-04-01
This study characterizes hybrid control approaches for the variable recruitment of fluidic artificial muscles with double acting (antagonistic) actuation. Fluidic artificial muscle actuators have been explored by researchers due to their natural compliance, high force-to-weight ratio, and low cost of fabrication. Previous studies have attempted to improve system efficiency of the actuators through variable recruitment, i.e. using discrete changes in the number of active actuators. While current variable recruitment research utilizes manual valve switching, this paper details the current development of an online variable recruitment control scheme. By continuously controlling applied pressure and discretely controlling the number of active actuators, operation in the lowest possible recruitment state is ensured and working fluid consumption is minimized. Results provide insight into switching control scheme effects on working fluids, fabrication material choices, actuator modeling, and controller development decisions.
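As a concrete illustration of the hybrid logic described above, the following minimal Python sketch picks the smallest recruitment state that covers the force demand and then modulates pressure continuously within that state. All constants and the linear force-pressure map are illustrative assumptions, not values from the study.

import math

F_MAX_PER_MUSCLE = 100.0   # N per actuator at full pressure (assumed)
P_MAX = 600.0              # kPa supply pressure (assumed)
N_MUSCLES = 4              # actuators available per side (assumed)

def recruit(force_demand):
    """Return (active actuator count, pressure command) for a demand in N."""
    n = min(N_MUSCLES, max(1, math.ceil(abs(force_demand) / F_MAX_PER_MUSCLE)))
    # Continuous pressure command, assuming force scales linearly with
    # pressure and with the number of active actuators.
    p = P_MAX * abs(force_demand) / (n * F_MAX_PER_MUSCLE)
    return n, min(p, P_MAX)

print(recruit(150.0))   # -> (2, 450.0): lowest state that covers the demand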
2014-01-07
this can have a disastrous effect on convergence rate. Even if steady state is obtained for low Mach number flows (after many iterations), the results...rally lead to a diagonally dominant left-hand-side matrix, which causes stability problems for implicit Gauss-Seidel schemes. For this reason, matrix... convergence at the stagnation point. The iterations for each airfoil are also reported in Fig. 2. Without preconditioning, dramatic efficiency problems are seen
NASA Astrophysics Data System (ADS)
Deng, Shuang; Xiang, Wenting; Tian, Yangge
2009-10-01
Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually need to be colored according to the customer's requirements, which makes the work more complex. With the development of GIS, more and more programmers join project teams; lacking training in cartography, they find it harder to produce colored maps that meet customers' requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color-scheme decision-making system that can select color schemes of similar customers from a case base for customers to select and adjust. The system is a mixed B/S and C/S system; the client side uses JSP, making it possible for system developers to remotely call the color-scheme cases in the database server and communicate with customers. Unlike general case-based reasoning, even very similar customers may differ in their selections, so it is hard to provide a single "best" option. We therefore use the Simulated Annealing Algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust certain feature colors based on an existing case. The results show that the system can facilitate communication between designers and customers and improve the quality and efficiency of map coloring.
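As an illustration of how simulated annealing can order the presentation of color schemes, the Python sketch below searches over permutations of candidate schemes for one that minimizes a user-supplied cost; the cost function (e.g., dissimilarity to a customer profile) is a hypothetical stand-in, since the paper does not specify its exact objective.

import math, random

def anneal_order(schemes, cost, t0=1.0, cooling=0.95, steps=500):
    """Search for an ordering of `schemes` minimizing `cost(order)`."""
    order = list(schemes)
    best, best_cost = order[:], cost(order)
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)   # propose a swap
        candidate = order[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(order)
        if delta < 0 or random.random() < math.exp(-delta / t):
            order = candidate                        # accept (possibly uphill)
            if cost(order) < best_cost:
                best, best_cost = order[:], cost(order)
        t *= cooling                                 # cool the temperature
    return best

# Hypothetical usage: rank schemes judged closer to the customer's profile
# earlier, e.g. anneal_order(schemes,
#                lambda o: sum(k * dissim(s) for k, s in enumerate(o)))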
Parallel-Vector Algorithm For Rapid Structural Anlysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
On-Line Modal State Monitoring of Slowly Time-Varying Structures
NASA Technical Reports Server (NTRS)
Johnson, Erik A.; Bergman, Lawrence A.; Voulgaris, Petros G.
1997-01-01
Monitoring the dynamic response of structures is often performed for a variety of reasons. These reasons include condition-based maintenance, health monitoring, performance improvements, and control. In many cases the data analysis that is performed is part of a repetitive decision-making process, and in these cases the development of effective on-line monitoring schemes helps to speed the decision-making process and reduce the risk of erroneous decisions. This report investigates the use of spatial modal filters for tracking the dynamics of slowly time-varying linear structures. The report includes an overview of modal filter theory followed by an overview of several structural system identification methods. Included in this discussion and comparison are H-infinity, eigensystem realization, and several time-domain least squares approaches. Finally, a two-stage adaptive on-line monitoring scheme is developed and evaluated.
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low-turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first-time derivative at each time step. Fourth-order accurate space-differencing is used.
Jia, Di; Li, Yanlin; Wang, Guoliang; Gao, Huanyu; Yu, Yang
2016-01-01
To summarize the reasons for revision of unicompartmental knee arthroplasty (UKA) identified using computer-assisted technology, so as to provide a reference for reducing the incidence of revision and improving surgical technique and rehabilitation. The relevant literature of recent years on analyzing UKA revision reasons using computer-assisted technology was extensively reviewed. The revision reasons identified by computer-assisted technology are fracture of the medial tibial plateau, progressive osteoarthritis of the preserved compartment, dislocation of the mobile bearing, prosthesis loosening, polyethylene wear, and unexplained persistent pain. Computer-assisted technology can be used to analyze the reasons for UKA revision and to guide the choice of the best operating method and rehabilitation scheme by simulating the operative process and knee-joint activities.
Gonioscopy in the dog: inter-examiner variability and the search for a grading scheme.
Oliver, J A C; Cottrell, B C; Newton, J R; Mellersh, C S
2017-11-01
To investigate inter-examiner variability in gonioscopic evaluation of pectinate ligament abnormality in dogs and to assess the level of inter-examiner agreement for four different gonioscopy grading schemes. Two examiners performed gonioscopy in 98 eyes of 49 Welsh springer spaniel dogs and estimated the percentage circumference of the iridocorneal angle affected by pectinate ligament abnormality to the nearest 5%. Percentage scores assigned to each eye by the two examiners were compared. Inter-examiner agreement was assessed, following assignment of the percentage scores to each of four grading schemes, by Cohen's kappa statistic. There was a strong positive correlation between the results of the two examiners (R=0·91). In general, Examiner 1 scored individual eyes higher than Examiner 2, especially for eyes in which both examiners diagnosed pectinate ligament abnormality. A "good" level of agreement could only be achieved with a gonioscopy grading scheme of no more than three categories and with a relatively large intermediate bandwidth (κ=0·68). A three-tiered grading scheme might represent an improvement on hereditary eye disease schemes which simply classify dogs as either "affected" or "unaffected" for pectinate ligament abnormality. However, the large intermediate bandwidth of this scheme would only allow for the additional detection of those dogs with marked progression of pectinate ligament abnormality which would be considered most at risk of primary closed-angle glaucoma. © 2017 British Small Animal Veterinary Association.
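Cohen's kappa, the agreement statistic used in the study, corrects the observed agreement between examiners for the agreement expected by chance. A minimal Python computation is sketched below for a hypothetical three-tier grading; the grades are made up for illustration.

from collections import Counter

def cohens_kappa(grades1, grades2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(grades1)
    p_obs = sum(a == b for a, b in zip(grades1, grades2)) / n
    c1, c2 = Counter(grades1), Counter(grades2)
    p_exp = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical per-eye grades: 0 = unaffected, 1 = intermediate, 2 = affected
examiner1 = [0, 0, 1, 2, 1, 0, 2, 1]
examiner2 = [0, 1, 1, 2, 0, 0, 2, 1]
print(round(cohens_kappa(examiner1, examiner2), 2))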
Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of the m-dimensional space, and it is required to restore the value of the function at points outside the domain D. The paper proposes a fundamentally new method for the extrapolation of functions of several variables, which uses the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, using metric analysis, the function is interpolated at points of the domain D belonging to the segment of the straight line connecting the center of the domain D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along the above straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.
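The two-stage structure can be sketched as follows in Python. A generic radial-basis-function interpolant (SciPy's RBFInterpolator) is used as a stand-in for the metric-analysis interpolation, whose details are not reproduced here, and the geometry (boundary of D crossed after n_in steps along the ray toward M) is an assumption of the sketch.

import numpy as np
from scipy.interpolate import RBFInterpolator

def extrapolate_to(points, values, center, M, n_in=20, n_out=5, p=3):
    """Estimate f(M) from samples (points, values) in a domain D centered
    at `center`; the boundary of D is assumed crossed after n_in steps."""
    center, M = np.asarray(center, float), np.asarray(M, float)
    ts = np.linspace(0.0, 1.0, n_in + n_out)         # t = 1 corresponds to M
    ray = center + ts[:, None] * (M - center)        # points along the ray
    f = RBFInterpolator(points, values)(ray[:n_in])  # stage 1: inside D
    rows = np.array([f[i - p:i] for i in range(p, n_in)])
    coef, *_ = np.linalg.lstsq(rows, f[p:], rcond=None)  # least-squares AR(p)
    for _ in range(n_out):                           # stage 2: step out to M
        f = np.append(f, coef @ f[-p:])
    return f[-1]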
Exploring students' patterns of reasoning
NASA Astrophysics Data System (ADS)
Matloob Haghanikar, Mojgan
As part of a collaborative study of the science preparation of elementary school teachers, we investigated the quality of students' reasoning and explored the relationship between sophistication of reasoning and the degree to which the courses were considered inquiry oriented. To probe students' reasoning, we developed open-ended written content questions with the distinguishing feature of applying recently learned concepts in a new context. We devised a protocol for developing written content questions that provided a common structure for probing and classifying students' sophistication level of reasoning. In designing our protocol, we considered several distinct criteria and classified students' responses based on their performance for each criterion. First, we classified concepts into three types (descriptive, hypothetical, and theoretical) and categorized the abstraction levels of the responses in terms of the types of concepts and the inter-relationships between the concepts. Second, we devised a rubric based on Bloom's revised taxonomy with seven traits (both knowledge types and cognitive processes) and a defined set of criteria to evaluate each trait. Along with analyzing students' reasoning, we visited universities and observed the courses in which the students were enrolled. We used the Reformed Teaching Observation Protocol (RTOP) to rank the courses with respect to characteristics that are valued in inquiry courses. We conducted logistic regression for a sample of 18 courses with about 900 students and report the results of estimating the relationship between traits of reasoning and RTOP score. In addition, we analyzed the conceptual structure of students' responses based on conceptual classification schemes and clustered students' responses into six categories. We derived a regression model to estimate the relationship between the sophistication of the categories of conceptual structure and RTOP scores. However, the outcome variable with six categories required a more complicated regression model, known as multinomial logistic regression, generalized from binary logistic regression. With the large amount of collected data, we found that higher cognitive processes were more likely in classes with higher measures of inquiry. However, the use of more abstract concepts with higher-order conceptual structures was less prevalent in higher-RTOP courses.
Gellynck, X; Jacobsen, R; Verhelst, P
2011-10-01
The competent waste authority in the Flemish region of Belgium created the 'Implementation plan household waste 2003-2007' and the 'Implementation plan sustainable management 2010-2015' to comply with EU regulation. It incorporates European and regional requirements and describes strategies, goals, actions and instruments for the collection and treatment of household waste. The central mandatory goal is to reduce and maintain the amount of residual household waste to 150 kg per capita per year between 2010-2015. In the literature, a reasonable body of information has been published on the effectiveness and efficiency of a variety of policy instruments, but the information is complex, often contradictory and difficult to interpret. The objective of this paper is to identify, through the development of a binary logistic regression model, those variables of the waste collection scheme that help municipalities to reach the mandatory 150 kg goal. The model covers a number of variables for household characteristics, provision of recycling services, frequency of waste collection and charging for waste services. This paper, however, is not about waste prevention and reuse. The dataset originates from 2003. Four out of 12 variables in the model contributed significantly: income per capita, cost of residual waste collection, collection frequency and separate curbside collection of organic waste. Copyright © 2011 Elsevier Ltd. All rights reserved.
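The kind of model described can be sketched as follows in Python; the predictor columns mirror the four significant variables reported, but all data, scales, and coefficients below are fabricated purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(20.0, 4.0, n),    # income per capita (assumed scale, k-euro)
    rng.normal(2.0, 0.5, n),     # cost of residual waste collection
    rng.integers(26, 105, n),    # residual-waste collections per year
    rng.integers(0, 2, n),       # separate curbside organic collection (0/1)
])
# Fabricated outcome: 1 if the municipality meets the 150 kg target
logit = -2.0 + 0.05 * X[:, 0] + 1.2 * X[:, 3] - 0.02 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_)   # fitted log-odds contribution per unit of each variable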
Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2014-12-01
A double-moment cloud microphysical scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011) has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement program's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
A well-balanced scheme for Ten-Moment Gaussian closure equations with source term
NASA Astrophysics Data System (ADS)
Meena, Asha Kumari; Kumar, Harish
2018-02-01
In this article, we consider the Ten-Moment equations with source term, which occur in many applications related to plasma flows. We present a well-balanced second-order finite volume scheme. The scheme is well-balanced for a general equation of state, provided the hydrostatic solution can be written as a function of the space variables. This is achieved by combining hydrostatic reconstruction with a contact-preserving, consistent numerical flux and an appropriate source discretization. Several numerical experiments are presented to demonstrate the well-balanced property and the resulting accuracy of the proposed scheme.
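Schematically, and not in the authors' exact formulation, the well-balanced property can be stated for a general balance law as follows. For $\partial_t U + \partial_x F(U) = S(U,x)$ discretized as
\[
U_i^{n+1} = U_i^n - \frac{\Delta t}{\Delta x}\left( \hat{F}_{i+1/2} - \hat{F}_{i-1/2} \right) + \Delta t\, \hat{S}_i,
\]
the scheme is well-balanced if, when initialized with the hydrostatic steady state $U^{\mathrm{eq}}(x)$, the discrete flux differences cancel the source discretization exactly,
\[
\hat{F}^{\mathrm{eq}}_{i+1/2} - \hat{F}^{\mathrm{eq}}_{i-1/2} = \Delta x\, \hat{S}^{\mathrm{eq}}_i,
\]
so that $U_i^{n+1} = U_i^n$ holds to machine precision rather than only to truncation-error accuracy.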
Reasoning about Shape as a Pattern in Variability
ERIC Educational Resources Information Center
Bakker, Arthur
2004-01-01
This paper examines ways in which coherent reasoning about key concepts such as variability, sampling, data, and distribution can be developed as part of statistics education. Instructional activities that could support such reasoning were developed through design research conducted with students in grades 7 and 8. Results are reported from a…
Listener Habits and Choices — and Their Implications for Public Performance Venues
NASA Astrophysics Data System (ADS)
DODD, G.
2001-01-01
An 11-year longitudinal survey of patterns and preferences in music listening has revealed that a large majority of people would prefer to listen to music performed live but that only a small percentage of their exposure to music actually occurs at live performances. An initial analysis of the first few years of the survey suggests that choices concerning music can be influenced by cultural background, and that predominant music sources change as new technology becomes available. Reasons given by listeners for preferring to listen to a traditional, mechanical instrument rather than an electro-acoustic version of it indicate they are sensitive to an “originality” criterion. As a consequence, concert halls should be designed to operate as passive acoustics spaces. Further, listeners' reasons for electing to attend a live performance rather than listen to a recording or a live broadcast suggest that hall designers should try to maximize the sense of two-way communication between performers and listeners. An implication of this is that where active acoustics systems are to be incorporated in variable acoustics auditoria, those active systems which use a non-in-line approach are to be preferred over in-line schemes. However, listener evolution and new expectations may require a fundamental change in our approach to the acoustics of live performance venues.
Dynamic principle for ensemble control tools.
Samoletov, A; Vasiev, B
2017-11-28
Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications including design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
Data rate enhancement of optical camera communications by compensating inter-frame gaps
NASA Astrophysics Data System (ADS)
Nguyen, Duy Thong; Park, Youngil
2017-07-01
Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of OCC systems, it is still much lower than that of photodiode-based LiFi systems. One major reason for this low data rate is the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for the IFG efficiently using an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance measured.
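The following Python sketch shows the underlying idea of interleaved Hamming coding under an assumed layout in which each codeword is spread across transmission slots: if one slot falls into the IFG, every codeword loses only a single bit, which Hamming(7,4) can correct. The framing is hypothetical; the paper's exact scheme may differ.

def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def interleave(codewords):
    """Transmit one bit of every codeword per slot (7 slots in total), so a
    slot lost to the inter-frame gap costs each codeword only one bit."""
    return [[cw[j] for cw in codewords] for j in range(7)]

slots = interleave([hamming74_encode([1, 0, 1, 1]),
                    hamming74_encode([0, 1, 1, 0])])
# Dropping any single slot leaves a 1-bit erasure per codeword, which a
# standard Hamming(7,4) decoder corrects.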
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods for various replication schemes and developing effective quality-control checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma using high-performance liquid chromatography.
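An illustrative form of such a computation, with assumed component names, is
\[
\sigma^2_{\text{total}} = \sigma^2_{\text{day}} + \sigma^2_{\text{run}} + \sigma^2_{\text{within}},
\]
with a maximum percent error at confidence level $1-\alpha$ reported as
\[
\text{MPE} = z_{1-\alpha/2}\, \frac{\sigma_{\text{total}}}{\mu} \times 100\%,
\]
where $\mu$ is the mean analyte level. Averaging $n$ replicates within a run divides $\sigma^2_{\text{within}}$ by $n$ while leaving the between-run and between-day components unchanged, which is how alternative replication schemes can be compared.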
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for the compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from the error propagation that is typical of coding schemes using variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the originals when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bajaj, Ruchika; Bedi, Punam; Pal, S. K.
Steganography is the art of hiding information in a way that prevents the detection of hidden messages. Besides the security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique finds the best pixel positions in the cover image for hiding the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and the results compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity while maintaining imperceptibility and minimizing the distortion between the cover image and the obtained stego image.
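A minimal Python sketch of the k-bit LSB substitution step is given below; the list of pixel positions stands in for the PSO output, and the message framing is simplified relative to the paper.

def embed(pixels, positions, bits, k):
    """Hide `bits` (0/1 list), k per pixel, at the chosen positions."""
    out = list(pixels)
    for pos, start in zip(positions, range(0, len(bits), k)):
        chunk = "".join(map(str, bits[start:start + k])).ljust(k, "0")
        # Clear the k least significant bits, then write the message bits.
        out[pos] = (out[pos] & ~((1 << k) - 1)) | int(chunk, 2)
    return out

def extract(pixels, positions, n_bits, k):
    """Read back k bits per pixel from the same positions."""
    bits = []
    for pos in positions:
        bits += [(pixels[pos] >> (k - 1 - i)) & 1 for i in range(k)]
    return bits[:n_bits]

stego = embed([120, 64, 200], positions=[2, 0], bits=[1, 0, 1, 1], k=2)
assert extract(stego, [2, 0], 4, 2) == [1, 0, 1, 1]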
Motor-sensory confluence in tactile perception.
Saig, Avraham; Gordon, Goren; Assa, Eldad; Arieli, Amos; Ahissar, Ehud
2012-10-03
Perception involves motor control of sensory organs. However, the dynamics underlying emergence of perception from motor-sensory interactions are not yet known. Two extreme possibilities are as follows: (1) motor and sensory signals interact within an open-loop scheme in which motor signals determine sensory sampling but are not affected by sensory processing and (2) motor and sensory signals are affected by each other within a closed-loop scheme. We studied the scheme of motor-sensory interactions in humans using a novel object localization task that enabled monitoring the relevant overt motor and sensory variables. We found that motor variables were dynamically controlled within each perceptual trial, such that they gradually converged to steady values. Training on this task resulted in improvement in perceptual acuity, which was achieved solely by changes in motor variables, without any change in the acuity of sensory readout. The within-trial dynamics is captured by a hierarchical closed-loop model in which lower loops actively maintain constant sensory coding, and higher loops maintain constant sensory update flow. These findings demonstrate interchangeability of motor and sensory variables in perception, motor convergence during perception, and a consistent hierarchical closed-loop perceptual model.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides important ecological-only variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu
2017-01-01
In cognitive radio (CR) systems, reasonable power allocation can maximize the transmission rate of CR users, or secondary users (SUs), while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm that combines the advantages of the conventional artificial fish swarm (AFS) algorithm and particle swarm optimisation (PSO). In simulated comparisons of the IAFS algorithm with other intelligent algorithms, the superiority of the IAFS algorithm is illustrated; as a result, our proposed scheme performs better than the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, our proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other mentioned works.
Aguayo-Ortiz, A; Mendoza, S; Olvera, D
2018-01-01
In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled differential conservative equations. The method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. With this, a traditional finite volume method for the flux is applied in order to avoid violation of both the entropy and Rankine-Hugoniot jump conditions. The time evolution is then computed using a forward finite difference scheme. This numerical technique avoids recovering the primitive vector by solving an algebraic system of equations, as is often done, and so generalises standard techniques for solving this kind of coupled system. The article is presented with special relativistic hydrodynamic numerical schemes in mind, with a pedagogical appendix to ease comprehension of the PVRS. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector solution through an algebraic polynomial of the charges.
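The core idea (advancing primitive variables directly via the chain rule on the time term while keeping a standard finite-volume flux) can be sketched for a simple 1D system. The Python example below uses shallow-water-like variables with a Rusanov flux as a stand-in; it is a schematic illustration, not the authors' relativistic-hydrodynamics implementation.

import numpy as np

g = 9.81  # gravitational acceleration for the stand-in shallow-water system

def pvrs_step(h, v, dx, dt):
    """One forward-Euler PVRS step for primitives p = (h, v) with
    conserved u = (h, h*v) and flux f = (h*v, h*v**2 + g*h**2/2)."""
    u = np.array([h, h * v])
    f = np.array([h * v, h * v**2 + 0.5 * g * h**2])
    c = np.abs(v) + np.sqrt(g * h)                    # local wave speed
    a = np.maximum(c[:-1], c[1:])
    # Rusanov (local Lax-Friedrichs) numerical flux at the N-1 interfaces
    fhat = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * a * (u[:, 1:] - u[:, :-1])
    dudt = np.zeros_like(u)
    dudt[:, 1:-1] = -(fhat[:, 1:] - fhat[:, :-1]) / dx  # boundaries held fixed
    # Chain rule on the time term: du/dt = (du/dp) dp/dt, with
    # du/dp = [[1, 0], [v, h]]; invert it row by row for dp/dt.
    dhdt = dudt[0]
    dvdt = (dudt[1] - v * dhdt) / h
    return h + dt * dhdt, v + dt * dvdt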
NASA Astrophysics Data System (ADS)
Johnson, Marcus; Jung, Youngsun; Dawson, Daniel; Supinie, Timothy; Xue, Ming; Park, Jongsook; Lee, Yong-Hee
2018-07-01
The UK Met Office Unified Model (UM) is employed by many weather forecasting agencies around the globe. The model is designed to run across spatial and time scales and is known to produce skillful predictions for large-scale weather systems. However, the model has only recently begun running operationally at horizontal grid spacings of ~1.5 km [e.g., at the UK Met Office and the Korea Meteorological Administration (KMA)]. As its microphysics scheme was originally designed and tuned for large-scale precipitation systems, we investigate the performance of UM microphysics to determine potential inherent biases or weaknesses. Two rainfall cases from the KMA forecasting system are considered in this study: a Changma (quasi-stationary) front, and Typhoon Sanba (2012). The UM output is compared to polarimetric radar observations in terms of simulated polarimetric radar variables. Results show that the UM generally underpredicts median reflectivity in stratiform rain, producing high-reflectivity cores and precipitation gaps between them. This is partially due to the diagnostic rain intercept parameter formulation used in the one-moment microphysics scheme. Simulated drop sizes are both under- and overpredicted compared to observations. UM frozen hydrometeors favor generic ice (crystals and snow) rather than graupel, which is reasonable for the Changma and typhoon cases. The model performed best for the typhoon case in terms of simulated precipitation coverage.
Mittal, Shruti; Adamusiak, Anna; Horsfield, Catherine; Loukopoulos, Ioannis; Karydis, Nikolaos; Kessaris, Nicos; Drage, Martin; Olsburgh, Jonathon; Watson, Christopher Je; Callaghan, Chris J
2017-07-01
A significant proportion of procured deceased donor kidneys are subsequently discarded. The UK Kidney Fast-Track Scheme (KFTS) was introduced in 2012, enabling kidneys at risk of discard to be simultaneously offered to participating centers. We undertook an analysis of discarded kidneys to determine if unnecessary organ discard was still occurring since the KFTS was introduced. Between April and June 2015, senior surgeons independently inspected 31 consecutive discarded kidneys from throughout the United Kingdom. All kidneys were biopsied. Organs were categorized as usable, possibly usable pending histology, or not usable for implantation. After histology reports were available, final assessments of usability were made. There were 19 donors (6 donations after brain death, 13 donations after circulatory death), with a median (range) donor age of 67 (29-83) years and Kidney Donor Profile Index of 93 (19-100). Reasons for discard were variable. Only 3 discarded kidneys had not entered the KFTS. After initial assessment postdiscard, 11 kidneys were assessed as usable, with 9 kidneys thought to be possibly usable. Consideration of histological data reduced the number of kidneys thought usable to 10 (10/31; 32%). The KFTS is successfully identifying organs at high risk of discard, though potentially transplantable organs are still being discarded. Analyses of discarded organs are essential to identify barriers to organ utilization and to develop strategies to reduce unnecessary discard.
The a(4) Scheme-A High Order Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2009-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a nondissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new high order (4-5th order) and neutrally stable CESE solver of a 1D advection equation with a constant advection speed a. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and two points at the lower time level. Because it is associated with four independent mesh variables (the numerical analogues of the dependent variable and its first, second, and third-order spatial derivatives) and four equations per mesh point, the new scheme is referred to as the a(4) scheme. As in the case of other similar CESE neutrally stable solvers, the a(4) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. Except for a singular case, these forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the a(4) scheme must be neutrally stable when it is stable. Numerically, it has been established that the scheme is stable if the value of the Courant number is less than 1/3.
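The abstract does not give the a(4) update formulas, so as a stand-in the sketch below runs a von Neumann check on the classical leapfrog advection scheme, another two-level, non-dissipative solver: inside the stability limit every eigenvalue of the amplification matrix has magnitude 1 (neutral stability), and beyond it the magnitude exceeds 1.

```python
import numpy as np

# Leapfrog scheme for u_t + a u_x = 0:
#   u^{n+1}_j = u^{n-1}_j - nu*(u^n_{j+1} - u^n_{j-1}),  nu = a*dt/dx
def max_amplification(nu):
    worst = 0.0
    for theta in np.linspace(0.0, np.pi, 721):
        s = -2j * nu * np.sin(theta)
        G = np.array([[s, 1.0], [1.0, 0.0]])   # two-level companion form
        worst = max(worst, np.abs(np.linalg.eigvals(G)).max())
    return worst

for nu in (0.3, 0.9, 1.1):
    print(f"nu = {nu}: max |g| = {max_amplification(nu):.6f}")  # ~1 if nu <= 1
```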
Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.
2014-01-01
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
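The following is a rough illustration (not the authors' exact sampling patterns) of how a variable-density phase-encode mask for subsampled acquisitions can be generated; the probability law, parameter names, and defaults are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def variable_density_mask(ny, nz, reduction=4, fully_sampled=0.1, power=2.0):
    """Random 2D variable-density undersampling mask for phase encodes.

    Sampling probability falls off with k-space radius; a small central
    region is always acquired."""
    ky = np.linspace(-1, 1, ny)[:, None]
    kz = np.linspace(-1, 1, nz)[None, :]
    r = np.sqrt(ky**2 + kz**2)
    prob = (1 - np.clip(r, 0, 1)) ** power          # denser near the center
    prob *= (ny * nz / reduction) / prob.sum()      # target acceleration
    mask = rng.random((ny, nz)) < np.clip(prob, 0, 1)
    mask[r < fully_sampled] = True                  # fully sampled core
    return mask

m = variable_density_mask(144, 135)
print(f"acceleration ~ {m.size / m.sum():.1f}x")
```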
The Uses of Reason in Times of Technical Mediation.
Dorrestijn, Steven
2017-01-01
The art-of-living idiom is well suited to a practice-oriented approach in the ethics of technology. But what remains, or becomes, of the functioning and use of reason in such an ethics? In reaction to the comments by Huijer, this reply elaborates in more detail how Foucault's art of living can be adapted for a critical contemporary ethics of technology. The aesthetic-political rationality in Foucault's ethics is then compared with Wellner's suggestion of holding on to the notion of code, but with a new meaning. Foucault's fourfold scheme of subjectivation and a distinction between "below and above reason" structure the argument.
Relationship among Demographic Variables and Pupils' Reasoning Ability
ERIC Educational Resources Information Center
Tella, Adeyinka; Tella, Adedeji; Adika, L. O.; Toyobo, Majekodunmi Oluwole
2008-01-01
Introduction: Pupils' reasoning ability is a sine qua non for the evaluation of their performance in learning and a potential predictor of their future performance. Method: The study examined the relationship among demographic variables and the reasoning ability of primary school pupils. It drew four hundred pupils from ten (10)…
ERIC Educational Resources Information Center
Ding, Lin
2014-01-01
This study seeks to test the causal influences of reasoning skills and epistemologies on student conceptual learning in physics. A causal model, integrating multiple variables that were investigated separately in the prior literature, is proposed and tested through path analysis. These variables include student preinstructional reasoning skills…
Bidirectional Elastic Image Registration Using B-Spline Affine Transformation
Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao
2014-01-01
A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of both the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210
An Efficient Variable Length Coding Scheme for an IID Source
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
The variable flavor number scheme at next-to-leading order
NASA Astrophysics Data System (ADS)
Blümlein, J.; De Freitas, A.; Schneider, C.; Schönwald, K.
2018-07-01
We present the matching relations of the variable flavor number scheme at next-to-leading order, which are of importance for defining heavy quark partonic distributions for use at high-energy colliders such as the Tevatron and the LHC. The consideration of the two-mass effects due to both charm and bottom quarks, which have rather similar masses, is important. These effects have not been considered in previous investigations. Numerical results are presented for a wide range of scales. We also present the corresponding contributions to the structure function F2(x, Q2).
Boonstra, Anne M.; Stewart, Roy E.; Köke, Albère J. A.; Oosterwijk, René F. A.; Swaan, Jeannette L.; Schreurs, Karlein M. G.; Schiphorst Preuper, Henrica R.
2016-01-01
Objectives: The 0–10 Numeric Rating Scale (NRS) is often used in pain management. The aims of our study were to determine the cut-off points for mild, moderate, and severe pain in terms of pain-related interference with functioning in patients with chronic musculoskeletal pain, to measure the variability of the optimal cut-off points, and to determine the influence of patients' catastrophizing and their sex on these cut-off points. Methods: 2854 patients were included. Pain was assessed by the NRS, functioning by the Pain Disability Index (PDI) and catastrophizing by the Pain Catastrophizing Scale (PCS). Cut-off point (CP) schemes were tested using ANOVAs, with and without the PCS scores or sex as covariates, and with the interaction between CP scheme and PCS score and sex, respectively. The variability of the optimal cut-off point schemes was quantified using a bootstrapping procedure. Results and conclusion: The study showed that NRS scores ≤5 correspond to mild, scores of 6–7 to moderate and scores ≥8 to severe pain in terms of pain-related interference with functioning. Bootstrapping analysis identified this optimal NRS cut-off point scheme in 90% of the bootstrapping samples. The interpretation of the NRS is independent of sex, but seems to depend on catastrophizing. In patients with a high catastrophizing tendency, the optimal cut-off point scheme equals that for the total study sample, but in patients with a low catastrophizing tendency, NRS scores ≤3 correspond to mild, scores of 4–6 to moderate and scores ≥7 to severe pain in terms of interference with functioning. In these optimal cut-off schemes, NRS scores of 4 and 5 correspond to moderate interference with functioning for patients with a low catastrophizing tendency and to mild interference for patients with a high catastrophizing tendency. Theoretically one would therefore expect that among the patients with NRS scores 4 and 5 there would be a higher average PDI score for those with low catastrophizing than for those with high catastrophizing. However, we found the opposite. The fact that we did not find the same optimal CP scheme in the subgroups with lower and higher catastrophizing tendency may be due to chance variability. PMID:27746750
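A small helper distilling the reported cut-off schemes into code; the function name and interface are invented for illustration.

```python
def classify_nrs(nrs, catastrophizing_high=True):
    """Map a 0-10 NRS pain score to mild/moderate/severe interference,
    using the cut-off schemes reported above."""
    if catastrophizing_high:          # also the scheme for the total sample
        mild_max, moderate_max = 5, 7
    else:                             # low catastrophizing tendency
        mild_max, moderate_max = 3, 6
    if nrs <= mild_max:
        return "mild"
    return "moderate" if nrs <= moderate_max else "severe"

print(classify_nrs(5, catastrophizing_high=True))    # mild
print(classify_nrs(5, catastrophizing_high=False))   # moderate
```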
A simple algorithm to improve the performance of the WENO scheme on non-uniform grids
NASA Astrophysics Data System (ADS)
Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong
2018-02-01
This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. The technique relies on reformulating the fifth-order WENO-JS scheme (the WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1996), designed on uniform grids, in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. At the same time, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
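For reference, a sketch of the uniform-grid WENO5-JS building blocks the paper reformulates: smoothness indicators, nonlinear weights built from the ideal weights (0.1, 0.6, 0.3), and the three third-order candidate stencils (eps is the usual small regularization constant).

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO-JS reconstruction of the left-biased interface
    value v_{i+1/2} on a uniform grid."""
    vm2, vm1, v0, vp1, vp2 = v[:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]
    # smoothness indicators (Jiang & Shu)
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights from the ideal weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    # third-order candidate stencils
    q0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    q1 = (-vm1 + 5*v0 + 2*vp1) / 6
    q2 = (2*v0 + 5*vp1 - vp2) / 6
    return (a0*q0 + a1*q1 + a2*q2) / (a0 + a1 + a2)
```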
MIMO transmit scheme based on morphological perceptron with competitive learning.
Valente, Raul Ambrozio; Abrão, Taufik
2016-08-01
This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as a decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time slot the receiver decodes two symbols at a time instead of one, as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its polynomial complexity in the modulation order, which becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to that of traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. The proposed scheme is also evaluated in a scenario with variable channel information along the frame. Numerical results show that the diversity gain of the space-time-coded Alamouti scheme is partially lost, which slightly reduces the bit-error rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Design and Analysis of the Hydraulic-pressure Seal of the Engine Box
NASA Astrophysics Data System (ADS)
Chen, Zhenya; Shen, Xingquan; Xin, Zhijie; Guo, Tingting; Liao, Kewei
2017-12-01
According to the sealing requirements of the engine casing, NX software is used to establish a three-dimensional solid model of the engine box. Two seal-pressing schemes are designed based on an analysis of the case structure: one uses two pins on one side for location, while the other uses a cylinder to clamp and fasten; the former scheme is chosen because it has a lower cost. The forces and deformation of the chosen scheme are then analysed with finite element analysis software and NX, and the results prove that the pressing scheme can meet the actual needs of the project. The composition and basic principles of the manual-pressure hydraulic system are illustrated, and the feasibility of the sealing scheme is verified by experiment, providing a reference for future hydrostatic pressure test programmes.
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
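The report's universal flux-limiter itself is not reproduced in the abstract; the sketch below instead shows the generic ingredient such limiters build on, a limited second-order flux for 1D advection with positive velocity, where the minmod function clips the slope so the update stays monotonic (and hence positivity-preserving in 1D).

```python
import numpy as np

def minmod(a, b):
    return np.where(a*b > 0, np.sign(a)*np.minimum(abs(a), abs(b)), 0.0)

def limited_advection_step(u, nu):
    """One step of 1D advection (Courant number nu in (0, 1]) with a
    minmod-limited second-order flux, assuming velocity > 0."""
    du = minmod(np.diff(u, prepend=u[0]), np.diff(u, append=u[-1]))
    face = u + 0.5*(1 - nu)*du          # limited value at each right face
    flux = nu * face                    # scaled numerical flux
    u_new = u.copy()
    u_new[1:] -= flux[1:] - flux[:-1]   # conservative update (interior)
    return u_new
```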
A privacy-strengthened scheme for E-Healthcare monitoring system.
Huang, Chanying; Lee, Hwaseong; Lee, Dong Hoon
2012-10-01
Recent advances in Wireless Body Area Networks (WBANs) offer unprecedented opportunities and challenges for the development of pervasive electronic healthcare (E-Healthcare) monitoring systems. In an E-Healthcare system, the processed data are patients' sensitive health data that are directly related to individuals' privacy. For this reason, privacy is of great importance for E-Healthcare systems. Existing systems for E-Healthcare services, however, have not yet provided sufficient privacy protection for patients. In order to offer adequate security and privacy, in this paper we propose a privacy-enhanced scheme for monitoring patients' physical condition, which achieves dual effects: (1) providing unlinkability of health records and individual identities, and (2) supporting anonymous authentication and authorized data access. We also conduct a simulation experiment to evaluate the performance of the proposed scheme. The experimental results demonstrate that the proposed scheme achieves better performance in terms of computational complexity, communication overhead and querying efficiency compared with previous results.
Viscous flow computations using a second-order upwind differencing scheme
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1988-01-01
A wide range of fluid flow problems is computed by means of the Navier-Stokes equations in primitive-variable form. A mixed second-order upwinding scheme approximates the convective terms of the transport equations, and the scheme's accuracy is verified for convection-dominated, high-Reynolds-number flow problems. An adaptive dissipation scheme is used as a monotonic mechanism for capturing supersonic shock flows. Many benchmark fluid flow problems, compressible and incompressible, laminar and turbulent, over a wide range of Mach and Reynolds numbers, are studied to verify the accuracy and robustness of this numerical method.
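A minimal sketch of a second-order upwind approximation to the convective term, switching the one-sided stencil with the local flow direction (the paper's mixed scheme is more elaborate).

```python
import numpy as np

def convective_term_sou(phi, u, dx):
    """Second-order upwind approximation of u * dphi/dx at interior
    points i = 2 .. n-3, choosing the stencil by the sign of u."""
    dpos = (3*phi[2:-2] - 4*phi[1:-3] + phi[:-4]) / (2*dx)   # u > 0: backward
    dneg = (-3*phi[2:-2] + 4*phi[3:-1] - phi[4:]) / (2*dx)   # u < 0: forward
    uc = u[2:-2]
    return uc * np.where(uc >= 0, dpos, dneg)
```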
NASA Astrophysics Data System (ADS)
Salimun, Ester; Tangang, Fredolin; Juneng, Liew
2010-06-01
A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth Generation Mesoscale Model (MM5). The event is a commonly occurring westward-propagating tropical depression weather system during the boreal winter, resulting from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in simulation performance can be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better at resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM simulation was of the convective type. The failure of the other schemes (AK, GR and KF) in simulating the event may be attributed to the trigger function, closure assumption, and precipitation scheme. On the other hand, the appropriateness of the BM scheme for this episode may not generalize to other episodes or convective environments.
Moral Reasoning and Personality Variables in Relation to Moral Behavior.
ERIC Educational Resources Information Center
Dras, Stephen R.; And Others
The relation of moral reasoning to moral behavior has been the subject of a substantial number of empirical studies; it may be more productive to employ a configuration of characteristics to predict moral behavior. To investigate the relation of moral reasoning and personality variables to moral behavior, 74 undergraduates, 30 males and 44…
Global evaluation of ammonia bidirectional exchange and livestock diurnal variation schemes
Bidirectional air–surface exchange of ammonia (NH3) has been neglected in many air quality models. In this study, we implement the bidirectional exchange of NH3 in the GEOS-Chem global chemical transport model. We also introduce an updated diurnal variability scheme for NH3...
Improved simulation of precipitation in the tropics using a modified BMJ scheme in the WRF model
NASA Astrophysics Data System (ADS)
Fonseca, R. M.; Zhang, T.; Yong, K.-T.
2015-09-01
The successful modelling of observed precipitation, a very important variable for a wide range of climate applications, continues to be one of the major challenges that climate scientists face today. When the Weather Research and Forecasting (WRF) model is used to dynamically downscale the Climate Forecast System Reanalysis (CFSR) over the Indo-Pacific region, with analysis (grid-point) nudging, it is found that the cumulus scheme used, Betts-Miller-Janjić (BMJ), produces excessive rainfall, suggesting that it has to be modified for this region. Experimentation has shown that the cumulus precipitation is not very sensitive to changes in the cloud efficiency but varies greatly in response to modifications of the temperature and humidity reference profiles. A new version of the scheme, denoted the "modified BMJ" scheme, in which the humidity reference profile is moister, was developed. In tropical belt simulations it was found to give a better estimate of the observed precipitation, as given by the Tropical Rainfall Measuring Mission (TRMM) 3B42 data set, than the default BMJ scheme for the whole tropics and both monsoon seasons. In fact, in some regions the model even outperforms CFSR. The advantage of modifying the BMJ scheme to produce better rainfall estimates lies in the final dynamical consistency of the rainfall with the other dynamical and thermodynamical variables of the atmosphere.
Rural health prepayment schemes in China: towards a more active role for government.
Bloom, G; Shenglan, T
1999-04-01
A large majority of China's rural population were members of health prepayment schemes in the 1970s. Most of these schemes collapsed during the transition to a market economy. Some localities subsequently reestablished schemes. In early 1997 a new government policy identified health prepayment as a major potential source of rural health finance. This paper draws on the experience of existing schemes to explore how government can support implementation of this policy. The decision to support the establishment of health prepayment schemes is part of the government's effort to establish new sources of finance for social services. It believes that individuals are more likely to accept voluntary contributions to a prepayment scheme than tax increases. The voluntary nature of the contributions limits the possibilities for risk-sharing and redistribution between rich and poor. This underlines the need for the government to fund a substantial share of health expenditure out of general revenues, particularly in poor localities. The paper notes that many successful prepayment schemes depend on close supervision by local political leaders. It argues that the national programme will have to translate these measures into a regulatory system which defines the responsibilities of scheme management bodies and local governments. A number of prepayment schemes have collapsed because members did not feel they got value for money. Local health bureaux will have to cooperate with prepayment schemes to ensure that health facilities provide good quality services at a reasonable cost. Users' representatives can also monitor performance. The paper concludes that government needs to clarify the relationship between health prepayment schemes and other actors in rural localities in order to increase the chance that schemes will become a major source of rural health finance.
ERIC Educational Resources Information Center
Beer, Francis A.
1994-01-01
Examines the word "reason" as it is used in political discourse. Argues that "reason"'s plasticity and flexibility help it to stimulate and evoke variable mental images and responses in different settings and situations. Notes that the example of reason of state shows "reason"'s rhetorical power and privilege, its…
Simulations of the failure scenarios of the crab cavities for the nominal scheme of the LHC
NASA Astrophysics Data System (ADS)
Yee, B.; Calaga, R.; Zimmermann, F.; Lopez, R.
2012-02-01
The crab cavity (CC) represents a possible solution to the problem of the reduction in luminosity due to the crossing angle of two colliding beams. The CC is a radio frequency (RF) superconducting cavity which applies a transverse kick to a bunch of particles, producing a rotation so that the bunches collide head-on and the luminosity is improved. For this reason, the Accelerators & Beams Physics group of the CERN Beams Department (BE-ABP) has studied the implementation of the CC scheme at the LHC. It is essential to study the failure scenarios and the damage that can be produced to the lattice devices. We have performed simulations of these failures for the nominal scheme.
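The luminosity penalty the crab cavities are meant to recover can be estimated from the standard geometric reduction factor; the numbers below are approximate nominal LHC values assumed for illustration only.

```python
import math

def geometric_loss_factor(theta_c, sigma_z, sigma_x):
    """Luminosity reduction from a full crossing angle theta_c:
    R = 1/sqrt(1 + phi^2), phi = theta_c*sigma_z/(2*sigma_x) (Piwinski angle)."""
    phi = theta_c * sigma_z / (2 * sigma_x)
    return 1.0 / math.sqrt(1.0 + phi**2)

# Assumed nominal-like values: 285 urad full crossing angle,
# 7.55 cm bunch length, 16.7 um transverse beam size.
print(f"R = {geometric_loss_factor(285e-6, 7.55e-2, 16.7e-6):.2f}")  # ~0.84
```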
Liu, Shuang; Wang, Jing; Zhang, Liang; Zhang, Xiang
2018-03-09
In China, increases in both the caesarean section (CS) rates and delivery costs have raised questions regarding the reform of the medical insurance payment system. Case payment is useful for regulating the behaviour of health providers and for controlling the CS rates and excessive increases in medical expenses. New Cooperative Medical Scheme (NCMS) agencies in Xi County in Henan Province piloted a case payment reform (CPR) in delivery for inpatients. We aimed to observe the changes in the CS rates, compare the changes in delivery-related variables, and identify variables related to delivery costs before and after the CPR in Xi County. Overall, 28,314 cases were selected from the Xi County NCMS agency from 2009 to 2010 and from 2014 to 2015. One-way ANOVA and chi-square tests were used to compare the distributions of CS and vaginal delivery (VD) before and after the CPR under different indicators. We applied multivariate linear regressions for the total medical cost of the VD and CS groups and total samples to identify the relationships between medical expenses and variables. The CS rates in Xi County increased from 26.1% to 32.5% after the CPR. The length of stay (LOS), total medical cost, and proportion of county hospitals increased in the CS and VD groups after the CPR, which had significant differences. The total medical cost in the CS and VD groups as well as the total samples was significantly influenced by inpatient age, LOS, and hospital type, and had a significant correlation with the CPR in the VD group and the total samples. The CPR might fail to control the growth of unreasonable medical expenses and regulate the behaviour of providers, which possibly resulted from the unreasonable compensation standard of case payments, prolonged LOS, and the increasing proportion of county hospitals. The NCMS should modify the case payment standard of delivery to inhibit providers' motivation to render CS services. The LOS should be controlled by implementing clinical guidelines, and a reference system should be established to guide patients in choosing reasonable hospitals.
Methods of separation of variables in turbulence theory
NASA Technical Reports Server (NTRS)
Tsuge, S.
1978-01-01
Two schemes for closing the turbulent moment equations are proposed, both of which separate the double-correlation equations into single-point equations. The first is based on neglecting the triple correlation, leading to an equation that differs from the small-perturbation gasdynamic equations, in which the separation constant appears as the frequency. Grid-produced turbulence is described in this light as time-independent, cylindrically isotropic turbulence. Application to wall turbulence, guided by a new asymptotic method for the Orr-Sommerfeld equation, reveals a neutrally stable mode of essentially three-dimensional nature. The second closure scheme is based on an assumption of identity of the separated variables, through which the triple and quadruple correlations are formed. The resulting equation adds, to its counterpart from the first scheme, a nonlinear convolution integral in frequency that describes the role played by the triple correlation in direct energy cascading.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Coleman, D.; Palmer, T.
2015-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models to represent the variability of unresolved sub-grid processes. They have a beneficial effect on the spread and mean state of medium- and extended-range forecasts (Buizza et al. 1999, Palmer et al. 2009). There is also increasing evidence that stochastic parameterization of unresolved processes could be beneficial for the climate of an atmospheric model through noise enhanced variability, noise-induced drift (Berner et al. 2008), and by enabling the climate simulator to explore other flow regimes (Christensen et al. 2015; Dawson and Palmer 2015). We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. The SPPT scheme accounts for uncertainty in the CAM physical parameterization schemes, including the convection scheme, by perturbing the parametrised temperature, moisture and wind tendencies with a multiplicative noise term. SPPT results in a large improvement in the variability of the CAM4 modeled climate. In particular, SPPT results in a significant improvement to the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical pacific sea surface temperatures. References: Berner, J., Doblas-Reyes, F. J., Palmer, T. N., Shutts, G. J., & Weisheimer, A., 2008. Phil. Trans. R. Soc A, 366, 2559-2577 Buizza, R., Miller, M. and Palmer, T. N., 1999. Q.J.R. Meteorol. Soc., 125, 2887-2908. Christensen, H. M., I. M. Moroz & T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2239-9 Dawson, A. and T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2238-x Palmer, T.N., R. Buizza, F. Doblas-Reyes, et al., 2009, ECMWF technical memorandum 598.
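A zero-dimensional sketch of the SPPT idea: the parametrized tendency is multiplied by (1 + r), where r is a bounded random process with memory (the operational scheme uses a spatially correlated 2D pattern and vertical tapering; the time scale and amplitude below are placeholders).

```python
import numpy as np

rng = np.random.default_rng(1)

def sppt_tendency(T_param, r_prev, dt, tau=6*3600.0, sigma=0.5):
    """Multiplicative SPPT-style perturbation of a parametrized tendency:
    T_pert = (1 + r) * T_param, with r an AR(1) process in time."""
    phi = np.exp(-dt / tau)                       # AR(1) memory over one step
    r = phi * r_prev + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
    r = np.clip(r, -1.0, 1.0)                     # keep the factor non-negative
    return (1.0 + r) * T_param, r
```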
ERIC Educational Resources Information Center
Belue, Paul T.; Cavey, Laurie Overman; Kinzel, Margaret T.
2017-01-01
In this exploratory study, we examined the effects of a quantitative reasoning instructional approach to linear equations in two variables on community college students' conceptual understanding, procedural fluency, and reasoning ability. This was done in comparison to the use of a traditional procedural approach for instruction on the same topic.…
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
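A toy illustration of procedural constraints over real-valued (interval) variables: each procedure narrows the domains of the variables it mentions, and propagation reruns procedures to a fixed point. The names and the single x + y = z constraint are invented for the example.

```python
def enforce_sum(dom, x, y, z):
    """Procedure for the constraint x + y == z over interval domains
    dom[v] = (lo, hi); returns (domains changed?, still consistent?)."""
    xl, xh = dom[x]; yl, yh = dom[y]; zl, zh = dom[z]
    new = {x: (max(xl, zl - yh), min(xh, zh - yl)),
           y: (max(yl, zl - xh), min(yh, zh - xl)),
           z: (max(zl, xl + yl), min(zh, xh + yh))}
    changed = any(new[v] != dom[v] for v in new)
    dom.update(new)
    return changed, all(lo <= hi for lo, hi in new.values())

dom = {"x": (0.0, 10.0), "y": (2.0, 5.0), "z": (0.0, 4.0)}
changed, ok = True, True
while changed and ok:                  # propagate to a fixed point
    changed, ok = enforce_sum(dom, "x", "y", "z")
print(dom, "consistent" if ok else "inconsistent")
```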
Some observations on mesh refinement schemes applied to shock wave phenomena
NASA Technical Reports Server (NTRS)
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
Measurement and analysis of chatter in a compliant model of a drillstring equipped with a PDC bit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elsayed, M.A.; Raymond, D.W.
1999-11-09
Typical laboratory testing of Polycrystalline Diamond Compact (PDC) bits is performed on relatively rigid setups. Even in hard rock, PDC bits exhibit reasonable life using such testing schemes. Unfortunately, field experience indicates otherwise. In this paper, the authors show that introducing compliance into testing setups provides a better simulation of actual field conditions. Using such a scheme, they show that chatter can be severe even in softer rock, such as sandstone, and very destructive to the cutters in hard rock, such as Sierra White granite.
Design and Implementation of Green Construction Scheme for a High-rise Residential Building Project
NASA Astrophysics Data System (ADS)
Zhou, Yong; Huang, You Zhen
2018-06-01
This paper studies the green construction scheme of a high-rise residential building project, analysed in terms of the "four savings and one environmental protection" principle: saving material, saving water, saving energy, economical use of land, and environmental protection. Scientific, advanced, reasonable and economical construction technology measures are adopted to implement green construction methods, promote energy-saving technologies in buildings, ensure the sustainable use of resources, maximise savings of resources and energy, increase energy efficiency, reduce pollution and the adverse environmental impact of construction activities, ensure construction safety, and build sustainable buildings.
A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations
NASA Technical Reports Server (NTRS)
Ghosh, Amitabha
1997-01-01
This report discusses some analytical procedures to enhance the real-time solution of the PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner, necessitating further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems, then discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.
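A generic residual-correction loop of the kind discussed, sketched with a cheap single-precision solve as the approximate inverse; the setup is illustrative, not the WICS/PMARC code.

```python
import numpy as np

def iterative_refinement(A, b, solve_approx, n_iter=5):
    """Residual correction: repeatedly solve A*dx = r with an approximate
    solver and update x, driving the residual toward zero."""
    x = solve_approx(b)
    for _ in range(n_iter):
        r = b - A @ x              # current residual
        x = x + solve_approx(r)    # correct with the approximate inverse
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well conditioned
b = rng.standard_normal(200)
A32 = A.astype(np.float32)                               # cheap low precision
x = iterative_refinement(
    A, b, lambda v: np.linalg.solve(A32, v.astype(np.float32)).astype(np.float64))
print(np.linalg.norm(b - A @ x))                         # near machine epsilon
```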
Hardware-assisted software clock synchronization for homogeneous distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.
1990-01-01
A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large, partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and mesh interconnection structures.
Shapiro, Paul R; Mao, Yi; Iliev, Ilian T; Mellema, Garrelt; Datta, Kanan K; Ahn, Kyungjin; Koda, Jun
2013-04-12
The 21 cm background from the epoch of reionization is a promising cosmological probe: line-of-sight velocity fluctuations distort redshift, so brightness fluctuations in Fourier space depend upon angle, which linear theory shows can separate cosmological from astrophysical information. Nonlinear fluctuations in ionization, density, and velocity change this, however. The validity and accuracy of the separation scheme are tested here for the first time, by detailed reionization simulations. The scheme works reasonably well early in reionization (≲40% ionized), but not late (≳80% ionized).
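The separation scheme being tested rests on the linear-theory angular decomposition of the 21 cm power spectrum, written schematically below (with mu the cosine between the wavevector and the line of sight, and assuming the usual convention that only the mu^4 coefficient is purely cosmological):

```latex
% Schematic linear-theory decomposition (Barkana & Loeb 2005), mu = k_par/k
P_{21}(k,\mu) = P_{\mu^0}(k) + \mu^2 P_{\mu^2}(k) + \mu^4 P_{\mu^4}(k),
\qquad
P_{\mu^4}(k) = \overline{\delta T_b}^{\,2}\, P_{\delta\delta}(k)
```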
Experienced physicians benefit from analyzing initial diagnostic hypotheses
Bass, Adam; Geddes, Colin; Wright, Bruce; Coderre, Sylvain; Rikers, Remy; McLaughlin, Kevin
2013-01-01
Background Most incorrect diagnoses involve at least one cognitive error, of which premature closure is the most prevalent. While metacognitive strategies can mitigate premature closure in inexperienced learners, these are rarely studied in experienced physicians. Our objective here was to evaluate the effect of analytic information processing on diagnostic performance of nephrologists and nephrology residents. Methods We asked nine nephrologists and six nephrology residents at the University of Calgary and Glasgow University to diagnose ten nephrology cases. We provided presenting features along with contextual information, after which we asked for an initial diagnosis. We then primed participants to use either hypothetico-deductive reasoning or scheme-inductive reasoning to analyze the remaining case data and generate a final diagnosis. Results After analyzing initial hypotheses, both nephrologists and residents improved the accuracy of final diagnoses (31.1% vs. 65.6%, p < 0.001, and 40.0% vs. 70.0%, p < 0.001, respectively). We found a significant interaction between experience and analytic processing strategy (p = 0.02): nephrology residents had significantly increased odds of diagnostic success when using scheme-inductive reasoning (odds ratio [95% confidence interval] 5.69 [1.59, 20.33], p = 0.07), whereas the performance of experienced nephrologists did not differ between strategies (odds ratio 0.57 [0.23, 1.39], p = 0.20). Discussion Experienced nephrologists and nephrology residents can improve their performance by analyzing initial diagnostic hypotheses. The explanation of the interaction between experience and the effect of different reasoning strategies is unclear, but may relate to preferences in reasoning strategy, or the changes in knowledge structure with experience. PMID:26451203
Optimal Interpolation scheme to generate reference crop evapotranspiration
NASA Astrophysics Data System (ADS)
Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco
2018-05-01
We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances for the Iberian Peninsula over the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. We used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions, and provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between the quantity and quality of observations.
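A compact sketch of one OI analysis step, with the model field as background and station data as observations; the matrices and values are toy placeholders.

```python
import numpy as np

def optimal_interpolation(xb, B, y, H, R):
    """One OI analysis step: blend the background xb with observations y,
    weighted by background (B) and observation (R) error covariances."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ (y - H @ xb)                     # analysis field
    Pa = (np.eye(len(xb)) - K @ H) @ B             # analysis error covariance
    return xa, Pa

# Toy example: 3 grid points, stations observing points 0 and 2.
d = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
B = 4.0 * np.exp(-d / 2.0)                         # correlated background errors
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # observation operator
xa, Pa = optimal_interpolation(np.array([20.0, 22.0, 25.0]), B,
                               np.array([21.0, 24.0]), H, np.eye(2))
print(xa, np.diag(Pa))                 # error variance shrinks at the stations
```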
A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs
Wang, Lingling; Liu, Guozhu; Sun, Lijun
2017-01-01
Fog-based VANETs (Vehicular ad hoc networks) is a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can promote the scheme performance efficiently. In this paper, we propose a secure and privacy-preserving navigation scheme by using vehicular spatial crowdsourcing based on fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks, and cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task can get a reasonable reward. The querying vehicle can retrieve the navigation results from each fog node successively when entering its coverage area, and follow the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Some cryptographic primitives, including the Elgamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme. PMID:28338620
NASA Technical Reports Server (NTRS)
Zhang, Jun; Ge, Lixin; Kouatchou, Jules
2000-01-01
A new fourth-order compact difference scheme for the three-dimensional convection-diffusion equation with variable coefficients is presented. The novelty of this new scheme is that it only requires 15 grid points and that it can be decoupled with two colors. The entire computational grid can be updated in two parallel subsweeps with a Gauss-Seidel-type iterative method. This is compared with the known 19-point fourth-order compact difference scheme, which requires four colors to decouple the computational grid. Numerical results, with multigrid methods implemented on a shared-memory parallel computer, are presented to compare the 15-point and 19-point fourth-order compact schemes.
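The two-color decoupling is the same device as classical red-black Gauss-Seidel; the sketch below shows it for the familiar 5-point Poisson stencil in 2D (not the 15-point 3D scheme itself): points of one color depend only on points of the other color, so each half-sweep parallelizes.

```python
import numpy as np

def two_color_gauss_seidel(u, f, h, sweeps=100):
    """Red-black Gauss-Seidel for the 5-point stencil of Laplace(u) = f.
    All points of one color can be updated simultaneously."""
    ix, iy = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]),
                         indexing="ij")
    for _ in range(sweeps):
        for color in (0, 1):
            m = ((ix + iy) % 2 == color)
            m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = False  # keep boundary
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))    # 4 neighbors
            u[m] = 0.25 * (nb[m] - h * h * f[m])
    return u
```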
NASA Astrophysics Data System (ADS)
De Meij, A.; Vinuesa, J.-F.; Maupas, V.
2018-05-01
The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model shows the largest bias during the austral summer, indicating that it is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the model's sensitivity to changing the microphysics, cumulus parameterization and land surface models in the calculated GHI values is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, sizes and lifetimes of the cloud droplets, whereas the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.
A user-driven treadmill control scheme for simulating overground locomotion.
Kim, Jonghyun; Stanley, Christopher J; Curatalo, Lindsey A; Park, Hyung-Soon
2012-01-01
Treadmill-based locomotor training should simulate overground walking as closely as possible for optimal skill transfer. The constant speed of a standard treadmill encourages automaticity rather than engagement and fails to simulate the variable speeds encountered during real-world walking. To address this limitation, this paper proposes a user-driven treadmill velocity control scheme that allows the user to experience natural fluctuations in walking velocity with minimal unwanted inertial force due to acceleration/deceleration of the treadmill belt. A smart estimation limiter in the scheme effectively attenuates the inertial force during velocity changes. The proposed scheme requires measurement of pelvic and swing foot motions, and is developed for a treadmill of typical belt length (1.5 m). The proposed scheme is quantitatively evaluated here with four healthy subjects by comparing it with the most advanced control scheme identified in the literature.
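A guess at the shape of such a control law (the paper's smart estimation limiter is more sophisticated): belt speed tracks the pelvis offset from the treadmill center, with the belt acceleration capped to bound the unwanted inertial force. The function name, gain, and limit are invented for illustration.

```python
def belt_speed_update(v_belt, pelvis_offset, dt, gain=0.8, a_max=0.5):
    """User-driven belt speed law (illustrative sketch).

    pelvis_offset : pelvis position relative to belt center, + = forward (m)
    a_max         : acceleration cap (m/s^2) limiting inertial force
    """
    v_target = v_belt + gain * pelvis_offset       # walk forward -> speed up
    dv = max(-a_max * dt, min(a_max * dt, v_target - v_belt))
    return v_belt + dv
```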
Wittmann, Christoffer; Andersen, Ulrik L; Takeoka, Masahiro; Sych, Denis; Leuchs, Gerd
2010-03-12
We experimentally demonstrate a new measurement scheme for the discrimination of two coherent states. The measurement scheme is based on a displacement operation followed by a photon-number-resolving detector, and we show that it outperforms the standard homodyne detector which we, in addition, prove to be optimal within all Gaussian operations including conditional dynamics. We also show that the non-Gaussian detector is superior to the homodyne detector in a continuous variable quantum key distribution scheme.
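For binary phase-modulated coherent states with mean photon number n, the textbook error probabilities behind this comparison can be tabulated as below. The displacement receiver shown is the simple Kennedy version (displace one state to vacuum, then count photons); the experiment optimizes the displacement, so these curves only bracket the reported effect.

```python
import math

for n in (0.01, 0.1, 0.5, 1.0, 2.0):                 # mean photon number
    p_hom = 0.5 * math.erfc(math.sqrt(2 * n))        # homodyne receiver
    p_ken = 0.5 * math.exp(-4 * n)                   # Kennedy: no click on |2a>
    p_hel = 0.5 * (1 - math.sqrt(1 - math.exp(-4 * n)))  # Helstrom bound
    print(f"n={n:4.2f}  homodyne={p_hom:.3e}  kennedy={p_ken:.3e}  "
          f"helstrom={p_hel:.3e}")
```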
Li, Xiong; Niu, Jianwei; Karuppiah, Marimuthu; Kumari, Saru; Wu, Fan
2016-12-01
Benefiting from the development of network and communication technologies, E-health care systems and telemedicine have developed rapidly. By using E-health care systems, a patient can enjoy remote medical services provided by the medical server. Medical data are important private information for a patient, so it is an important issue to ensure the security of medical data transmitted through public networks. Authentication schemes can thwart unauthorized users from accessing services via insecure network environments, so user authentication with privacy protection is an important mechanism for the security of E-health care systems. Recently, a three-factor (password, biometric and smart card) user authentication scheme for E-health care systems was proposed by Amin et al., who claimed that their scheme can withstand most common attacks. Unfortunately, we find that their scheme cannot achieve untraceability of the patient. Besides, their scheme lacks a password check mechanism, so it is inefficient at detecting unauthorized logins caused by entering a wrong password. For the same reason, their scheme is vulnerable to a Denial of Service (DoS) attack if the patient mistakenly updates the password using a wrong password. In order to improve the security level of authentication schemes for E-health care applications, a robust user authentication scheme with privacy protection is proposed for E-health care systems, and its security is analysed. Security and performance analyses show that our scheme is more powerful and secure for E-health care systems when compared with other related schemes.
ERIC Educational Resources Information Center
Pfannkuch, Maxine; Arnold, Pip; Wild, Chris J.
2015-01-01
Currently, instruction pays little attention to the development of students' sampling variability reasoning in relation to statistical inference. In this paper, we briefly discuss the especially designed sampling variability learning experiences students aged about 15 engaged in as part of a research project. We examine assessment and…
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
ERIC Educational Resources Information Center
Claybrook, Billy G.
A new heuristic factorization scheme uses learning to improve the efficiency of determining the symbolic factorization of multivariable polynomials with integer coefficients and an arbitrary number of variables and terms. The factorization scheme makes extensive use of artificial intelligence techniques (e.g., model-building, learning, and…
Topology optimization for design of segmented permanent magnet arrays with ferromagnetic materials
NASA Astrophysics Data System (ADS)
Lee, Jaewook; Yoon, Minho; Nomura, Tsuyoshi; Dede, Ercan M.
2018-03-01
This paper presents multi-material topology optimization for the co-design of permanent magnet segments and iron material. Specifically, a co-design methodology is proposed to find an optimal border of permanent magnet segments, a pattern of magnetization directions, and an iron shape. A material interpolation scheme is proposed for material property representation among air, permanent magnet, and iron materials. In this scheme, the permanent magnet strength and permeability are controlled by density design variables, and permanent magnet magnetization directions are controlled by angle design variables. In addition, a scheme to penalize intermediate magnetization direction is proposed to achieve segmented permanent magnet arrays with discrete magnetization directions. In this scheme, permanent magnet strength is controlled depending on magnetization direction, and consequently the final permanent magnet design converges into permanent magnet segments having target discrete directions. To validate the effectiveness of the proposed approach, three design examples are provided. The examples include the design of a dipole Halbach cylinder, magnetic system with arbitrarily-shaped cavity, and multi-objective problem resembling a magnetic refrigeration device.
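A rough sketch may help fix the idea of the interpolation described above: the density variable scales the magnet strength with a power-law penalization, and an angle-dependent factor suppresses magnetization directions that fall between the target discrete directions. The functional forms below are illustrative assumptions, not the paper's exact interpolation.

```python
import numpy as np

def remanence(rho, theta, targets=(0.0, np.pi/2, np.pi, 3*np.pi/2),
              m_max=1.4, p=3.0, q=4.0):
    """Illustrative multi-material interpolation: density rho in [0, 1] sets
    the magnet strength (SIMP-style power-law penalization), and the strength
    is scaled down when the angle variable theta lies between the target
    discrete magnetization directions."""
    align = np.max([np.cos(np.asarray(theta) - t) for t in targets], axis=0)
    align = np.clip(align, 0.0, 1.0) ** q   # penalize intermediate directions
    return m_max * rho ** p * align          # rho -> 0 recovers air

# full density aligned with a target direction keeps full strength:
print(remanence(1.0, np.pi / 2))   # 1.4
```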
Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1979-01-01
An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.
High-speed continuous-variable quantum key distribution without sending a local oscillator.
Huang, Duan; Huang, Peng; Lin, Dakai; Wang, Chao; Zeng, Guihua
2015-08-15
We report a 100-MHz continuous-variable quantum key distribution (CV-QKD) experiment over a 25-km fiber channel without sending a local oscillator (LO). We use a "locally" generated LO together with a 1-GHz shot-noise-limited homodyne detector to achieve high-speed quantum measurement, and we propose a secure phase compensation scheme to maintain a low level of excess noise. These advances make high-bit-rate CV-QKD significantly simpler for larger transmission distances compared with previous schemes in which both the LO and the quantum signals are transmitted through the insecure quantum channel.
Zhang, Peng; Liu, Ru-Xun; Wong, S C
2005-05-01
This paper develops macroscopic traffic flow models for a highway section with variable lanes and free-flow velocities, which involve spatially varying flux functions. To address this complex physical property, we develop a Riemann solver that derives the exact flux values at the interface of the Riemann problem. Based on this solver, we formulate Godunov-type numerical schemes to solve the traffic flow models. Numerical examples that simulate the traffic flow around a bottleneck arising from a drop in traffic capacity on the highway section are given to illustrate the efficiency of these schemes.
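For readers unfamiliar with Godunov-type schemes for traffic flow, here is a minimal sketch of one explicit step for the classical LWR model with a cell-wise free-flow speed, using the standard demand/supply (sending/receiving) form of the Godunov flux. The paper's exact Riemann solver for spatially varying flux functions is more elaborate; this is only the generic construction it builds on.

```python
import numpy as np

def godunov_lwr_step(rho, v_free, rho_jam, dx, dt):
    """One explicit Godunov step for the LWR model with a cell-wise free-flow
    speed v_free (a crude stand-in for variable lanes/velocities), using the
    demand/supply form of the Godunov flux. Requires max(v_free)*dt/dx <= 1."""
    f = lambda r, v: v * r * (1.0 - r / rho_jam)           # fundamental diagram
    rho_c = 0.5 * rho_jam                                  # density at capacity
    demand = np.where(rho < rho_c, f(rho, v_free), f(rho_c, v_free))  # sending
    supply = np.where(rho > rho_c, f(rho, v_free), f(rho_c, v_free))  # receiving
    flux = np.minimum(demand[:-1], supply[1:])             # interface fluxes
    rho_new = rho.copy()
    rho_new[1:-1] -= dt / dx * (flux[1:] - flux[:-1])      # conservative update
    return rho_new

# bottleneck: the free-flow speed (capacity) drops halfway down the road
v = np.where(np.arange(100) < 50, 1.0, 0.5)
rho = godunov_lwr_step(np.full(100, 0.3), v, rho_jam=1.0, dx=0.01, dt=0.004)
```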
Formulary evaluation of second-generation cephamycin derivatives using decision analysis.
Barriere, S L
1991-10-01
Use of decision analysis in the formulary evaluation of the second-generation cephamycin derivatives cefoxitin, cefotetan, and cefmetazole is described. The rating system used was adapted from one used for the third-generation cephalosporins. Data on spectrum of activity, pharmacokinetics, adverse reactions, cost, and stability were taken from the published literature and the FDA-approved product labeling. The weighting scheme used for the third-generation cephalosporins was altered somewhat to reflect the more important aspects of the cephamycin derivatives and their potential role in surgical prophylaxis. Sensitivity analysis was done to assess the variability of the final scores when the assigned weights were varied within a reasonable range. Scores for cefmetazole and cefotetan were similar and did not differ significantly after sensitivity analysis. Cefoxitin scored significantly lower than the other two drugs. In the absence of data suggesting that the N-methyl thiotetrazole side chains of cefmetazole and cefotetan cause substantial toxicity, these two drugs can be considered the most cost-efficient members of the second-generation cephamycins.
NASA Technical Reports Server (NTRS)
Whelan, Todd Michael
1996-01-01
In a real-time or batch mode simulation that is designed to model aircraft dynamics over a wide range of flight conditions, a table look- up scheme is implemented to determine the forces and moments on the vehicle based upon the values of parameters such as angle of attack, altitude, Mach number, and control surface deflections. Simulation Aerodynamic Variable Interface (SAVI) is a graphical user interface to the flight simulation input data, designed to operate on workstations that support X Windows. The purpose of the application is to provide two and three dimensional visualization of the data, to allow an intuitive sense of the data set. SAVI also allows the user to manipulate the data, either to conduct an interactive study of the influence of changes on the vehicle dynamics, or to make revisions to data set based on new information such as flight test. This paper discusses the reasons for developing the application, provides an overview of its capabilities, and outlines the software architecture and operating environment.
Patterns of Hierarchy in Formal and Principled Moral Reasoning.
ERIC Educational Resources Information Center
Zeidler, Dana Lewis
Measurements of formal reasoning and principled moral reasoning ability were obtained from a sample of 99 tenth grade students. Specific modes of formal reasoning (proportional reasoning, controlling variables, probabilistic, correlational and combinatorial reasoning) were first examined. Findings support the notion of hierarchical relationships…
SANTA BARBARA CLUSTER COMPARISON TEST WITH DISPH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saitoh, Takayuki R.; Makino, Junichiro, E-mail: saitoh@elsi.jp
2016-06-01
The Santa Barbara cluster comparison project revealed that there is a systematic difference between entropy profiles of clusters of galaxies obtained by Eulerian mesh and Lagrangian smoothed particle hydrodynamics (SPH) codes: mesh codes gave a core with a constant entropy, whereas SPH codes did not. One possible reason for this difference is that mesh codes are not Galilean invariant. Another possible reason is the problem of the SPH method, which might give too much “protection” to cold clumps because of the unphysical surface tension induced at contact discontinuities. In this paper, we apply the density-independent formulation of SPH (DISPH), which can handle contact discontinuities accurately, to simulations of a cluster of galaxies and compare the results with those with the standard SPH. We obtained the entropy core when we adopt DISPH. The size of the core is, however, significantly smaller than those obtained with mesh simulations and is comparable to those obtained with quasi-Lagrangian schemes such as “moving mesh” and “mesh free” schemes. We conclude that both the standard SPH without artificial conductivity and Eulerian mesh codes have serious problems even with such an idealized simulation, while DISPH, SPH with artificial conductivity, and quasi-Lagrangian schemes have sufficient capability to deal with it.
NASA Astrophysics Data System (ADS)
Chen, Dechao; Zhang, Yunong
2017-10-01
Dual-arm redundant robot systems are usually required to handle primary tasks repetitively and synchronously in practical applications. In this paper, a jerk-level synchronous repetitive motion scheme is proposed to remedy the joint-angle drift phenomenon and achieve synchronous control of a dual-arm redundant robot system. The proposed scheme is resolved at the jerk level, a novel feature that keeps the joint variables, i.e. joint angles, joint velocities and joint accelerations, smooth and bounded. In addition, two types of dynamics algorithms, i.e. gradient-type (G-type) and zeroing-type (Z-type) dynamics algorithms, for the design of repetitive motion variable vectors are presented in detail with the corresponding circuit schematics. Subsequently, the proposed scheme is reformulated as two dynamical quadratic programs (DQPs) and further integrated into a unified DQP (UDQP) for the synchronous control of a dual-arm robot system. The optimal solution of the UDQP is found by a piecewise-linear projection equation neural network. Moreover, simulations and comparisons based on a six-degrees-of-freedom planar dual-arm redundant robot system substantiate the operational effectiveness and tracking accuracy of the robot system with the proposed scheme for repetitive motion and synchronous control.
Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis
NASA Technical Reports Server (NTRS)
Han, LI
1995-01-01
The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R(sub 0). For comparison, 87 percent is achievable for AWGN-only case.
Adaptive ISAR Imaging of Maneuvering Targets Based on a Modified Fourier Transform.
Wang, Binbin; Xu, Shiyou; Wu, Wenzhen; Hu, Pengjiang; Chen, Zengping
2018-04-27
Focusing on the inverse synthetic aperture radar (ISAR) imaging of maneuvering targets, this paper presents a new imaging method which works well when the target's maneuvering is not too severe. After translational motion compensation, we describe the equivalent rotation of maneuvering targets by two variables: the relative chirp rate of the linear frequency modulated (LFM) signal and the Doppler focus shift. The first variable indicates the target's motion status, and the second represents the possible residual error of the translational motion compensation. With them, a modified Fourier transform matrix is constructed and then used for cross-range compression. Consequently, the imaging of maneuvering targets is converted into a two-dimensional parameter optimization problem in which a stable and clear ISAR image is guaranteed. A gradient descent optimization scheme is employed to obtain the accurate relative chirp rate and Doppler focus shift. Moreover, we designed an efficient and robust initialization process for the gradient descent method; thus, well-focused ISAR images of maneuvering targets can be achieved adaptively. Human intervention is not needed, and it is quite convenient for practical ISAR imaging systems. Compared to precedent imaging methods, the new method achieves better imaging quality under reasonable computational cost. Simulation results are provided to validate the effectiveness and advantages of the proposed method.
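As a rough illustration of the cross-range compression step, the sketch below builds a chirp-Fourier-style matrix in which each row is a Fourier basis vector multiplied by a quadratic phase set by the relative chirp rate and a linear phase set by the Doppler focus shift. This is an assumed stand-in, not the paper's exact matrix; in the method the two parameters would be tuned by gradient descent on an image quality measure.

```python
import numpy as np

def modified_dft(n, rel_chirp, focus_shift):
    """Chirp-Fourier-style matrix (illustrative): Fourier rows times a common
    quadratic phase (relative chirp rate) and linear phase (focus shift)."""
    t = np.arange(n) / n                 # normalized slow time
    k = np.arange(n)[:, None]            # Doppler (cross-range) bins
    return np.exp(-2j * np.pi * (k * t + 0.5 * rel_chirp * t**2 + focus_shift * t))

# cross-range compression of one range cell of slow-time samples s:
# image_column = modified_dft(len(s), rel_chirp, focus_shift) @ s
```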
NASA Astrophysics Data System (ADS)
Boudreaux, Andrew; Shaffer, Peter S.; Heron, Paula R. L.; McDermott, Lillian C.
2008-02-01
The ability of adult students to reason on the basis of the control of variables was the subject of an extended investigation. This paper describes the part of the study that focused on the reasoning required to decide whether or not a given variable influences the behavior of a system. The participants were undergraduates taking introductory Physics and K-8 teachers studying physics and physical science in inservice institutes and workshops. Although most of the students recognized the need to control variables, many had significant difficulty with the underlying reasoning. The results indicate serious shortcomings in the preparation of future scientists and in the education of a scientifically literate citizenry. There are also strong implications for the professional development of teachers, many of whom are expected to teach control of variables to young students.
Sensor data security level estimation scheme for wireless sensor networks.
Ramos, Alex; Filho, Raimir Holanda
2015-01-19
Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.
NASA Astrophysics Data System (ADS)
Avolio, E.; Federico, S.; Miglietta, M. M.; Lo Feudo, T.; Calidonna, C. R.; Sempreviva, A. M.
2017-08-01
The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated at an experimental site in the Calabria region (southern Italy), in an area characterized by complex orography near the sea. Results of 1 km × 1 km grid spacing simulations are compared with the data collected during a measurement campaign in summer 2009, considering hourly model outputs. Measurements from several instruments are taken into account for the performance evaluation: near-surface variables (2 m temperature and relative humidity, downward shortwave radiation, 10 m wind speed and direction) from a surface station and a meteorological mast; vertical wind profiles from Lidar and Sodar; and the aerosol backscattering from a ceilometer to estimate the PBL height. Results covering the whole measurement campaign show a cold and moist bias near the surface, mostly during daytime, for all schemes, as well as an overestimation of the downward shortwave radiation and wind speed. Wind speed and direction are also verified at vertical levels above the surface, where the model uncertainties are usually smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes, and a possible role of the simulated sensible heat fluxes in this mismatch is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first-order non-local schemes, ACM2 and YSU, are the schemes with the best performance in representing parameters near the surface and in the boundary layer during the analyzed campaign.
Christina, Campbell Princess; Latifat, Taiwo Toyin; Collins, Nnaji Feziechukwu; Olatunbosun, Abolarin Thaddeus
2014-11-01
The National Health Insurance Scheme (NHIS) is one of the health financing options adopted by Nigeria to improve healthcare access, especially for low income earners. Health care providers are among the key operators of the scheme, so their uptake of the scheme is fundamental to its survival. This study assessed the uptake of the NHIS by private healthcare providers in a Local Government Area of Lagos State. This descriptive cross-sectional study recruited 180 private healthcare providers selected by a multistage sampling technique, with a response rate of 88.9%. Awareness, knowledge and uptake of the NHIS were 156 (97.5%), 110 (66.8%) and 97 (60.6%), respectively. Half of the respondents, 82 (51.3%), were dissatisfied with the operations of the scheme. Major reasons were failure of entitlement payment by Health Maintenance Organisations, 13 (81.3%), and incurring losses from participating in the scheme, 8 (50%). There was a significant association between awareness, level of education, knowledge of the NHIS and registration into the scheme by the respondents (p < 0.05). Awareness and knowledge of the NHIS were commendable among the private health care providers. Six out of 10 had registered with the NHIS, but half of the respondents, 82 (51.3%), were dissatisfied with the scheme and 83 (57.2%) regretted participating in it. There is a need to improve payment modalities and ensure strict adherence to laid-down policies.
ERIC Educational Resources Information Center
Norton, Anderson; Wilkins, Jesse L. M.
2010-01-01
In building models of students' fractions knowledge, two prominent frameworks have arisen: Kieren's rational number subconstructs, and Steffe's fractions schemes. The purpose of this paper is to clarify and reconcile aspects of those frameworks through a quantitative analysis. In particular, we focus on the measurement subconstruct and the…
East Asian winter monsoon forecasting schemes based on the NCEP's climate forecast system
NASA Astrophysics Data System (ADS)
Tian, Baoqiang; Fan, Ke; Yang, Hongqing
2017-12-01
The East Asian winter monsoon (EAWM) is the major climate system in the Northern Hemisphere during boreal winter. In this study, we developed two schemes to improve the forecasting skill of the interannual variability of the EAWM index (EAWMI) using the interannual increment prediction method, also known as the DY method. First, we found that version 2 of the NCEP's Climate Forecast System (CFSv2) showed higher skill in predicting the EAWMI in DY form than in its original form. So, exploiting this advantage of the DY method, Scheme-I was obtained by adding the EAWMI DY predicted by CFSv2 to the observed EAWMI of the previous year. This scheme showed higher forecasting skill than CFSv2. Specifically, during 1983-2016, the temporal correlation coefficient between the Scheme-I-predicted and observed EAWMI was 0.47, exceeding the 99% significance level, with the root-mean-square error (RMSE) decreased by 12%. The autumn Arctic sea ice and North Pacific sea surface temperature (SST) are two important external forcing factors for the interannual variability of the EAWM. Therefore, a second (hybrid) prediction scheme, Scheme-II, was also developed. This scheme involved not only the EAWMI DY of CFSv2, but also the sea-ice concentration (SIC) observed the previous autumn in the Laptev and East Siberian seas and the temporal coefficients of the third mode of the North Pacific SST in DY form. We found that a negative SIC anomaly in the preceding autumn over the Laptev and East Siberian seas could lead to a significant enhancement of the Aleutian low and the East Asian westerly jet in the following winter. However, the intensity of the winter Siberian high was mainly affected by the third mode of the North Pacific autumn SST. Scheme-I and Scheme-II also showed higher predictive ability for the EAWMI in negative anomaly years compared to CFSv2. More importantly, the improvement in the prediction skill of the EAWMI by the new schemes, especially Scheme-II, could enhance the forecasting skill of the winter 2-m air temperature (T-2m) in most parts of China, as well as the intensity of the Aleutian low and Siberian high in winter. The new schemes provide a theoretical basis for improving the prediction of winter climate in China.
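The arithmetic of Scheme-I is simple enough to state in one line of code; the example values below are illustrative, not the paper's data.

```python
def scheme_one_eawmi(obs_prev, dy_pred):
    """Scheme-I: add the CFSv2-predicted year-on-year increment (DY) to the
    observed index of the previous winter: EAWMI(y) = EAWMI_obs(y-1) + DY(y)."""
    return obs_prev + dy_pred

# e.g. an observed EAWMI of -0.8 last winter and a predicted increment of +0.5
# give a Scheme-I forecast of -0.3 for the coming winter (illustrative values).
print(round(scheme_one_eawmi(-0.8, 0.5), 2))   # -0.3
```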
NASA Astrophysics Data System (ADS)
Mukhopadhyay, P.; Phani Murali Krishna, R.; Goswami, Bidyut B.; Abhik, S.; Ganai, Malay; Mahakur, M.; Khairoutdinov, Marat; Dudhia, Jimmy
2016-05-01
In spite of significant improvements in numerical model physics, resolution and numerics, general circulation models (GCMs) find it difficult to simulate realistic seasonal and intraseasonal variability over the global tropics, and particularly over the Indian summer monsoon (ISM) region. The bias is mainly attributed to the improper representation of physical processes. Among all the processes, cloud and convective processes appear to play a major role in modulating model bias. In recent times, the NCEP CFSv2 model has been adopted under the Monsoon Mission for dynamical monsoon forecasting over the Indian region. Analyses of climate free runs of CFSv2 at two resolutions, T126 and T382, show largely similar biases in simulating seasonal rainfall, in capturing intraseasonal variability at different scales over the global tropics, and in capturing tropical waves. Thus, the biases of CFSv2 indicate a deficiency in the model's parameterization of cloud and convective processes. Keeping this in the background, and given the need to improve model fidelity, two approaches have been adopted. First, in superparameterization, 32 cloud-resolving models, each with a horizontal resolution of 4 km, are embedded in each GCM (CFSv2) grid column and the conventional sub-grid-scale convective parameterization is deactivated. This is done to demonstrate the role of resolving cloud processes which otherwise remain unresolved. The superparameterized CFSv2 (SP-CFS) is developed on a coarser version, T62. The model is integrated for six and a half years in climate free-run mode, initialised from 16 May 2008. The analyses reveal that SP-CFS simulates a significantly improved mean state compared to the default CFS. The systematic biases of deficient rainfall over the Indian land mass and a colder troposphere are substantially improved. Most importantly, the convectively coupled equatorial waves and the eastward-propagating MJO are simulated with more fidelity in SP-CFS. The reason for this improvement in the model mean state is a systematic improvement in the moisture field, temperature profile and moist instability. The model also better captures the cloud-rainfall relation. This initiative demonstrates the role of cloud processes in the mean state of a coupled GCM. As the superparameterization approach is computationally expensive, in another approach the conventional Simplified Arakawa-Schubert (SAS) scheme is replaced by a revised SAS scheme (RSAS), and the older, simplified cloud scheme of Zhao-Karr (1997) is replaced by WSM6 in CFSv2 (hereafter CFS-CR). The primary objective of these modifications is to improve the distribution of convective rain in the model by using RSAS, and of the grid-scale or large-scale non-convective rain by WSM6. WSM6 computes the tendencies of six classes of hydrometeors (water vapour, cloud water, ice, snow, graupel and rain water) at each model grid point and contributes to the low, middle and high cloud fractions. By incorporating WSM6, for the first time in a global climate model, we are able to show a reasonable simulation of the vertical and spatial distributions of cloud ice and cloud liquid water compared to CloudSat observations. CFS-CR also shows improvement in simulating the annual rainfall cycle and intraseasonal variability over the ISM region. These improvements in CFS-CR are likely associated with an improved distribution of convective and stratiform rainfall in the model.
These initiatives clearly address a long-standing issue of resolving cloud processes in climate models and demonstrate that improved cloud and convective process parameterizations can eventually reduce systematic bias and improve model fidelity.
Momentum conserving Brownian dynamics propagator for complex soft matter fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padding, J. T.; Briels, W. J.
2014-12-28
We present a Galilean invariant, momentum conserving first order Brownian dynamics scheme for coarse-grained simulations of highly frictional soft matter systems. Friction forces are taken to be with respect to moving background material. The motion of the background material is described by locally averaged velocities in the neighborhood of the dissolved coarse coordinates. The velocity variables are updated by a momentum conserving scheme. The properties of the stochastic updates are derived through the Chapman-Kolmogorov and Fokker-Planck equations for the evolution of the probability distribution of coarse-grained position and velocity variables, by requiring the equilibrium distribution to be a stationary solution. We test our new scheme on concentrated star polymer solutions and find that the transverse current and velocity time auto-correlation functions behave as expected from hydrodynamics. In particular, the velocity auto-correlation functions display a long time tail in complete agreement with hydrodynamics.
Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.
Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua
2016-09-05
In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse peak sampling, which in turn introduces errors into parameter estimation. Consequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts: a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme can dramatically improve the precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change in the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
NASA Astrophysics Data System (ADS)
Seino, Junji; Kageyama, Ryo; Fujinami, Mikito; Ikabata, Yasuhiro; Nakai, Hiromi
2018-06-01
A semi-local kinetic energy density functional (KEDF) was constructed based on machine learning (ML). The present scheme adopts electron densities and their gradients up to third-order as the explanatory variables for ML and the Kohn-Sham (KS) kinetic energy density as the response variable in atoms and molecules. Numerical assessments of the present scheme were performed in atomic and molecular systems, including first- and second-period elements. The results of 37 conventional KEDFs with explicit formulae were also compared with those of the ML KEDF with an implicit formula. The inclusion of the higher order gradients reduces the deviation of the total kinetic energies from the KS calculations in a stepwise manner. Furthermore, our scheme with the third-order gradient resulted in the closest kinetic energies to the KS calculations out of the presented functionals.
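A minimal sketch of the regression setup described above, under the assumption of a generic feed-forward regressor: pointwise descriptors built from the density and its gradients up to third order are mapped to the Kohn-Sham kinetic energy density. The features, model and stand-in data are illustrative; the paper's actual functional form and training data are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Pointwise descriptors per grid point: [rho, |grad rho|, 2nd-order, 3rd-order].
# X and y are stand-in arrays; in the scheme, y would be the Kohn-Sham kinetic
# energy density tau(r) at the same grid points of the training systems.
rng = np.random.default_rng(0)
X = rng.random((2000, 4))                     # descriptors up to third order
y = X @ np.array([1.0, 0.5, 0.2, 0.1])        # placeholder response variable
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X, y)
tau_pred = model.predict(X)   # semi-local: each point uses only local descriptors
```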
NASA Astrophysics Data System (ADS)
Qin, Yi; Lin, Yanluan; Xu, Shiming; Ma, Hsi-Yen; Xie, Shaocheng
2018-02-01
Low clouds strongly impact the radiation budget of the climate system, but their simulation in most GCMs has remained a challenge, especially over the subtropical stratocumulus region. Assuming a Gaussian distribution for the subgrid-scale total water and liquid water potential temperature, a new statistical cloud scheme is proposed and tested in NCAR Community Atmospheric Model version 5 (CAM5). The subgrid-scale variance is diagnosed from the turbulent and shallow convective processes in CAM5. The approach is able to maintain the consistency between cloud fraction and cloud condensate and thus alleviates the adjustment needed in the default relative humidity-based cloud fraction scheme. Short-term forecast simulations indicate that low cloud fraction and liquid water content, including their diurnal cycle, are improved due to a proper consideration of subgrid-scale variance over the southeastern Pacific Ocean region. Compared with the default cloud scheme, the new approach produced the mean climate reasonably well with improved shortwave cloud forcing (SWCF) due to more reasonable low cloud fraction and liquid water path over regions with predominant low clouds. Meanwhile, the SWCF bias over the tropical land regions is also alleviated. Furthermore, the simulated marine boundary layer clouds with the new approach extend further offshore and agree better with observations. The new approach is able to obtain the top of atmosphere (TOA) radiation balance with a slightly alleviated double ITCZ problem in preliminary coupled simulations. This study implies that a close coupling of cloud processes with other subgrid-scale physical processes is a promising approach to improve cloud simulations.
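For a Gaussian subgrid distribution of the saturation deficit, the diagnostic cloud fraction and mean condensate follow from standard closed-form expressions, which is what keeps the two quantities mutually consistent. The sketch below shows that classic Gaussian-PDF calculation; the CAM5 implementation details, such as how the variance is diagnosed from turbulence and shallow convection, are not reproduced.

```python
import math

def gaussian_cloud(qt_mean, qs, sigma_s):
    """Cloud fraction and mean condensate for a Gaussian subgrid distribution
    of the saturation deficit s = qt - qs with standard deviation sigma_s.
    Classic Gaussian-PDF result; a sketch, not the CAM5 implementation."""
    q1 = (qt_mean - qs) / sigma_s                      # normalized deficit
    cf = 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))   # P(s > 0)
    pdf = math.exp(-0.5 * q1 * q1) / math.sqrt(2.0 * math.pi)
    ql = sigma_s * (q1 * cf + pdf)                     # E[max(s, 0)]
    return cf, ql

# slightly subsaturated mean state still yields partial cloud cover (~31%):
cf, ql = gaussian_cloud(qt_mean=9.0e-3, qs=9.5e-3, sigma_s=1.0e-3)
```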
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention and is still poorly automated. Considering that the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity-enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented by fine-grain basic operation actions (BOAs), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. To enhance the capability to introduce new design variables into the conceptual design process and to uncover more innovative physical structure schemes, an indirect function-structure matching strategy based on reconstructing combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and by making full use of the main and secondary functions of each basic structure when reconstructing the physical structures, new design variables and variants are introduced into the reconstruction process, and a large number of simpler physical structure schemes that organically accomplish the overall function are obtained. The creativity-enhanced conceptual design model presented here has a strong capability for introducing new design variables in the function domain and for finding simpler physical structures that accomplish the overall function; it can therefore be used to solve non-routine conceptual design problems.
ERIC Educational Resources Information Center
Staver, John R.; Jacks, Tom
1988-01-01
Investigates the influence of five cognitive variables on high school students' performance on balancing chemical equations by inspection. Reports that reasoning, restructuring, and disembedding variables could be a single variable, and that working memory capacity does not influence overall performance. Results of hierarchical regression analysis…
NASA Astrophysics Data System (ADS)
Kaminski, J. W.; Semeniuk, K.; McConnell, J. C.; Lupu, A.; Mamun, A.
2012-12-01
The Global Environmental Multiscale model for Air Quality and climate change (GEM-AC) is a global general circulation model based on the GEM model developed by the Meteorological Service of Canada for operational weather forecasting. It can be run with a global uniform (GU) grid or a global variable (GV) grid, where the core has uniform grid spacing and the exterior grid expands. With a GV grid, high-resolution regional runs can be accomplished without concern for boundary conditions. The work described here uses GEM version 3.3.2. The gas-phase chemistry consists of detailed reactions of Ox, NOx, HOx, CO, CH4, NMVOCs, halocarbons, ClOx and BrO. We have recently added elements of the Global Modal-aerosol eXtension (GMXe) scheme to address aerosol microphysics and gas-aerosol partitioning. The evaluation of the MESSY GMXe aerosol scheme is addressed in another poster. The Canadian aerosol module (CAM) is also available. Tracers are advected using the semi-Lagrangian scheme native to GEM. The vertical transport includes parameterized subgrid-scale turbulence and large-scale convection. Dry deposition is implemented as a flux boundary condition in the vertical diffusion equation. For climate runs, the GHGs CO2, CH4, N2O and CFCs in the radiation scheme are adjusted to the scenario considered. In GV regional mode at high resolutions, a lake model, FLAKE, is also included. Wet removal comprises both in-cloud and below-cloud scavenging. With the gas-phase chemistry, the model has been run for a series of ten-year time slices on a 3°×3° global grid with 77 hybrid levels from the surface to 0.15 hPa. The tropospheric and stratospheric gas-phase results are compared with satellite measurements including ACE, MIPAS, MOPITT, and OSIRIS. Current evaluations of the ozone field and other stratospheric fields are encouraging, and tropospheric lifetimes for CH4 and CH3CCl3 are in reasonable accord with tropospheric models. We will present results for current and future climate conditions forced by SSTs for 2050.
A Case Study of Using a Multilayered Thermodynamical Snow Model for Radiance Assimilation
NASA Technical Reports Server (NTRS)
Toure, Ally M.; Goita, Kalifa; Royer, Alain; Kim, Edward J.; Durand, Michael; Margulis, Steven A.; Lu, Huizhong
2011-01-01
A microwave radiance assimilation (RA) scheme for the retrieval of snow physical state variables requires a snowpack physical model (SM) coupled to a radiative transfer model. In order to assimilate microwave brightness temperatures (Tbs) at horizontal polarization (h-pol), an SM capable of resolving melt-refreeze crusts is required. To date, it has not been shown whether an RA scheme is tractable with the large number of state variables present in such an SM or whether melt-refreeze crust densities can be estimated. In this paper, an RA scheme is presented using the CROCUS SM, which is capable of resolving melt-refreeze crusts. We assimilated both vertical (v) and horizontal (h) Tbs at 18.7 and 36.5 GHz. We found that assimilating Tb at both h-pol and vertical polarization (v-pol) into CROCUS dramatically improved snow depth estimates, with a bias of 1.4 cm compared to -7.3 cm reported by previous studies. Assimilation of both h-pol and v-pol led to more accurate results than assimilation of v-pol alone. The snow water equivalent (SWE) bias of the RA scheme was 0.4 cm, while the bias of the SWE estimated by an empirical retrieval algorithm was -2.9 cm. Characterization of melt-refreeze crusts via an RA scheme is demonstrated here for the first time; the RA scheme correctly identified the location of melt-refreeze crusts observed in situ.
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-09-01
Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time intervals between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response is weak and generally buried in noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of signal processing techniques. The proposed scheme first represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft, using interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of the resampled vibration signal, followed by thresholding of IMFs and signal reconstruction to de-noise the signal, and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of a rolling element bearing developed using the bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be recreated experimentally are then generated through simulations and analyzed with the developed diagnosis scheme.
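The first and last stages of the scheme, angular resampling followed by envelope order tracking, can be sketched as below; the EMD-based de-noising stage in between is omitted. Function names and the simplifying assumptions (monotonic shaft angle, linear interpolation) are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def angular_resample(t, vib, t_marks, rev_per_mark=1.0, samples_per_rev=256):
    """Resample a time-domain vibration signal onto a uniform shaft-angle grid
    using interpolated instantaneous angular position measurements (e.g. from
    a once-per-revolution tachometer). Sketch under simplifying assumptions."""
    angle_at_marks = np.arange(len(t_marks)) * rev_per_mark   # in revolutions
    angle = np.interp(t, t_marks, angle_at_marks)             # angle vs time
    uniform_angle = np.arange(0.0, angle[-1], 1.0 / samples_per_rev)
    return uniform_angle, np.interp(uniform_angle, angle, vib)

def envelope_order_spectrum(resampled, samples_per_rev=256):
    """Envelope (Hilbert transform) then FFT over shaft angle: peaks appear at
    the fault orders (multiples of shaft speed) even when the speed varies."""
    env = np.abs(hilbert(resampled - resampled.mean()))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    orders = np.fft.rfftfreq(len(env), d=1.0 / samples_per_rev)
    return orders, spec
```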
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Mark S.; Son Wonmin; Heaney, Libby
Recently, it was demonstrated by Son et al., Phys. Rev. Lett. 102, 110404 (2009), that a separable bipartite continuous-variable quantum system can violate the Clauser-Horne-Shimony-Holt (CHSH) inequality via operationally local transformations. Operationally local transformations are parametrized only by local variables; however, in order to allow violation of the CHSH inequality, a maximally entangled ancilla was necessary. The use of the entangled ancilla in this scheme caused the state under test to become dependent on the measurement choice one uses to calculate the CHSH inequality, thus violating one of the assumptions used in deriving a Bell inequality, namely, the free will or statistical independence assumption. The novelty in this scheme however is that the measurement settings can be external free parameters. In this paper, we generalize these operationally local transformations for multipartite Bell inequalities (with dichotomic observables) and provide necessary and sufficient conditions for violation within this scheme. Namely, a violation of a multipartite Bell inequality in this setting is contingent on whether an ancillary system admits any realistic local hidden variable model (i.e., whether the ancilla violates the given Bell inequality). These results indicate that violation of a Bell inequality performed on a system does not necessarily imply that the system is nonlocal. In fact, the system under test may be completely classical. However, nonlocality must have resided somewhere; this may have been in the environment, the physical variables used to manipulate the system, or the detectors themselves, provided the measurement settings are external free variables.
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
An implicit finite difference method of fourth order accuracy in space and time is introduced for the numerical solution of one-dimensional systems of hyperbolic conservation laws. The basic form of the method is a two-level scheme which is unconditionally stable and nondissipative. The scheme uses only three mesh points at level t and three mesh points at level t + delta t. The dissipative version of the basic method given is conditionally stable under the CFL (Courant-Friedrichs-Lewy) condition. This version is particularly useful for the numerical solution of problems with strong but nonstiff dynamic features, where the CFL restriction is reasonable on accuracy grounds. Numerical results are provided to illustrate properties of the proposed method.
Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems
NASA Technical Reports Server (NTRS)
Morris, Chris
2013-01-01
Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.
ERIC Educational Resources Information Center
Inzunsa Cazares, Santiago
2016-01-01
This article presents the results of a qualitative study with a group of 15 university students of social sciences on informal inferential reasoning developed in a computer environment around the concepts involved in confidence intervals. The results indicate that students developed correct reasoning about sampling variability and visualized…
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely.
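The construction above builds on Shamir's polynomial-interpolation secret sharing, which is compact enough to sketch: a degree-(t-1) polynomial hides the secret in its constant term, and any t shares recover it by Lagrange interpolation over a prime field. This is the textbook base scheme, not the threshold-changeable constructions themselves.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime used as the working field

def make_shares(secret: int, t: int, n: int):
    """Shamir (t, n) sharing: random degree-(t-1) polynomial with the secret
    as constant term, evaluated at x = 1..n over GF(P)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P); needs any t shares."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P   # Fermat inverse
    return total

shares = make_shares(123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```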
Understanding the Concepts of Proportion and Ratio Constructed by Two Grade Six Students.
ERIC Educational Resources Information Center
Singh, Parmjit
2000-01-01
Reports on a study designed to construct an understanding of two grade 6 students' proportional reasoning schemes. Finds that two mental operations, unitizing and iterating, play an important role in students' use of multiplicative thinking in proportion tasks. (Author/MM)
Pushing particles in extreme fields
NASA Astrophysics Data System (ADS)
Gordon, Daniel F.; Hafizi, Bahman; Palastro, John
2017-03-01
The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
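For reference, the standard non-relativistic Boris velocity update looks as follows: the magnetic rotation preserves the speed exactly, which is the work-free property mentioned above, while the splitting between the electric kicks and the rotation is where the third-order-in-field error enters. A minimal sketch, not the corrected pusher this paper introduces.

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """Standard (non-relativistic) Boris update: half electric kick,
    norm-preserving magnetic rotation, half electric kick."""
    v_minus = v + 0.5 * q_over_m * E * dt          # first half electric kick
    t_vec = 0.5 * q_over_m * B * dt                # rotation vector
    s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
    v_prime = v_minus + np.cross(v_minus, t_vec)   # magnetic rotation in the
    v_plus = v_minus + np.cross(v_prime, s_vec)    # exactly norm-preserving form
    return v_plus + 0.5 * q_over_m * E * dt        # second half electric kick

# with E = 0 the update is a pure rotation, so |v| is conserved exactly:
v_new = boris_push(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                   np.array([0.0, 0.0, 1.0]), q_over_m=1.0, dt=0.1)
```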
ρ resonance from the I = 1 ππ potential in lattice QCD
NASA Astrophysics Data System (ADS)
Kawai, Daisuke
2018-03-01
We calculate the phase shift for the I = 1 ππ scattering in 2+1 flavor lattice QCD at mπ = 410 MeV, using all-to-all propagators with the LapH smearing. We first investigate the sink operator independence of the I = 2 ππ scattering phase shift to estimate the systematics of the LapH smearing scheme in the HAL QCD method at mπ = 870 MeV. The difference in the scattering phase shift in this channel between the conventional point sink scheme and the smeared sink scheme is reasonably small as long as the next-to-leading analysis is employed in the smeared sink scheme with larger smearing levels. We then extract the I = 1 ππ potential with the smeared sink operator, whose scattering phase shift shows a resonant behavior (ρ resonance). We also examine the pole of the S-matrix corresponding to the ρ resonance in the complex energy plane.
Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gorjala, Bhargavi
1991-01-01
Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preservation Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and of the image coding algorithms is studied.
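A plain first-order DPCM loop, the baseline both proposed schemes modify, can be sketched in a few lines. The uniform quantizer and step size are illustrative assumptions; the point is that the encoder tracks the decoder's reconstruction, and a large prediction error at an edge lands in the quantizer's overload region, which is exactly the event the two edge schemes detect.

```python
import numpy as np

def dpcm_encode(x, step=4.0):
    """Plain first-order DPCM: predict each sample by the previous
    reconstruction, quantize the prediction error uniformly, and track the
    decoder's state inside the encoder (closed-loop prediction)."""
    x = np.asarray(x, dtype=float)
    codes = np.zeros(len(x), dtype=int)
    recon = np.zeros(len(x))
    pred = 0.0
    for i, sample in enumerate(x):
        err = sample - pred
        codes[i] = int(round(err / step))   # uniform quantizer index
        recon[i] = pred + codes[i] * step   # what the decoder will rebuild
        pred = recon[i]                     # predictor = last reconstruction
    return codes, recon

codes, recon = dpcm_encode([10, 12, 11, 200, 201])  # the jump to 200 is an "edge"
```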
A Gas-Kinetic Scheme for Reactive Flows
NASA Technical Reports Server (NTRS)
Lian, Youg-Sheng; Xu, Kun
1998-01-01
In this paper, the gas-kinetic BGK scheme for the compressible flow equations is extended to chemically reactive flow. The mass fraction of the unburnt gas is implemented into the gas kinetic equation by assigning a new internal degree of freedom to the particle distribution function. The new variable can also be used to describe fluid trajectories for non-reactive flows. Due to the gas-kinetic BGK model, the current scheme essentially solves the Navier-Stokes chemically reactive flow equations. Numerical tests validate the accuracy and robustness of the current kinetic method.
Ganguly, Parthasarathi; Jehan, Kate; de Costa, Ayesha; Mavalankar, Dileep; Smith, Helen
2014-11-05
In India a lack of access to emergency obstetric care contributes to maternal deaths. In 2005 Gujarat state launched a public-private partnership (PPP) programme, Chiranjeevi Yojana (CY), under which the state pays accredited private obstetricians a fixed fee for providing free intrapartum care to poor and tribal women. A million women have delivered under CY so far. The participation of private obstetricians in the partnership is central to the programme's effectiveness. We explored with private obstetricians the reasons and experiences that influenced their decisions to participate in the CY programme. In this qualitative study we interviewed 24 purposefully selected private obstetricians in Gujarat. We explored their views on the scheme, the reasons and experiences leading up to decisions to participate, not participate or withdraw from the CY, as well as their opinions about the scheme's impact. We analysed data using the Framework approach. Participants expressed a tension between doing public good and making a profit. Bureaucratic procedures and perceptions of programme misuse seemed to influence providers to withdraw from the programme or not participate at all. Providers feared that participating in CY would lower the status of their practices and some were deterred by the likelihood of more clinically difficult cases among eligible CY beneficiaries. Some providers resented taking on what they saw as a state responsibility to provide safe maternity services to poor women. Younger obstetricians in the process of establishing private practices, and those in more remote, 'less competitive' areas, were more willing to participate in CY. Some doctors had reservations over the quality of care that doctors could provide given the financial constraints of the scheme. While some private obstetricians willingly participate in CY and are satisfied with its functioning, a larger number shared concerns about participation. Operational difficulties and a trust deficit between the public and private health sectors affect retention of private providers in the scheme. Further refinement of the scheme, in consultation with private partners, and trust building initiatives could strengthen the programme. These findings offer lessons to those developing public-private partnerships to widen access to health services for underprivileged groups.
Kettelhut, M M; Chiodini, P L; Edwards, H; Moody, A
2003-01-01
Background: The burden of parasitic disease imported into the temperate zone is increasing, and in the tropics remains very high. Thus, high quality diagnostic parasitology services are needed, but to implement clinical governance a measure of quality of service is required. Aim: To examine performance in the United Kingdom National External Quality Assessment Scheme for Parasitology for evidence of improved standards in parasite diagnosis in clinical specimens. Methods: Analysis of performance was made for the period 1986 to 2001, to look for trends in performance scores. Results: An overall rise in performance in faecal and blood parasitology schemes was found from 1986 to 2001. This was seen particularly in the identification of ova, cysts, and larvae in the faecal scheme, the detection of Plasmodium ovale and Plasmodium vivax in the blood scheme, and also in the correct identification of non-malarial blood parasites. Despite this improvement, there are still problems. In the faecal scheme, participants still experience difficulty in recognising small protozoan cysts, differentiating vegetable matter from cysts, and detecting ova and cysts when more than one species is present. In the blood scheme, participants have problems in identifying mixed malarial infections, distinguishing between P ovale and P vivax, and estimating the percentage parasitaemia. The reasons underlying these problems have been identified via the educational part of the scheme, and have been dealt with by distributing teaching sheets and undertaking practical sessions. Conclusions: UK NEQAS for Parasitology has helped to raise the standard of diagnostic parasitology in the UK. PMID:14645352
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping
2017-12-01
In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum and classical communication.
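To make the sign/value encoding concrete, here is a minimal Python sketch of the idea described above: classical bits ride on the signs of the two quadratures while Gaussian values carry the QKD modulation. All names and parameters are illustrative assumptions, not the authors' implementation, and channel noise is omitted.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Toy illustration: encode encrypted classical bits on the *signs* of the two
# quadratures of a weak coherent state, and Gaussian key material on their
# *values*, as the abstract describes.
n = 10
bits_x = rng.integers(0, 2, n)           # encrypted classical bits on X quadrature
bits_p = rng.integers(0, 2, n)           # encrypted classical bits on P quadrature
gauss_x = np.abs(rng.normal(0, 1.0, n))  # |Gaussian| values carry the QKD modulation
gauss_p = np.abs(rng.normal(0, 1.0, n))

# Transmitted quadratures: sign carries the bit, magnitude carries the Gaussian value.
x = np.where(bits_x == 1, gauss_x, -gauss_x)
p = np.where(bits_p == 1, gauss_p, -gauss_p)

# Receiver (noiseless toy): classical bits from the signs, Gaussian data from magnitudes.
bits_x_rx = (x > 0).astype(int)
gauss_x_rx = np.abs(x)
assert np.array_equal(bits_x, bits_x_rx)
```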
Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry
2012-08-01
To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme in The Observer XT 8.0 system. Two visualizations of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours, such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027), predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3-5 year old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings, and its development procedure may inform the development of similar coding schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
I = 2 ππ scattering phase shift from the HAL QCD method with the LapH smearing
NASA Astrophysics Data System (ADS)
Kawai, Daisuke; Aoki, Sinya; Doi, Takumi; Ikeda, Yoichi; Inoue, Takashi; Iritani, Takumi; Ishii, Noriyoshi; Miyamoto, Takaya; Nemura, Hidekatsu; Sasaki, Kenji
2018-04-01
Physical observables, such as the scattering phase shifts and binding energy, calculated from the non-local HAL QCD potential do not depend on the sink operators used to define the potential. In practical applications, the derivative expansion of the non-local potential is employed, so that physical observables may receive some scheme dependence at a given order of the expansion. In this paper, we compare the I=2ππ scattering phase shifts obtained in the point-sink scheme (the standard scheme in the HAL QCD method) and the smeared-sink scheme (the LapH smearing newly introduced in the HAL QCD method). Although potentials in different schemes have different forms as expected, we find that, for reasonably small smearing size, the resultant scattering phase shifts agree with each other if the next-to-leading-order (NLO) term is taken into account. We also find that the HAL QCD potential in the point-sink scheme has a negligible NLO term for a wide range of energies, which implies good convergence of the derivative expansion, while the potential in the smeared-sink scheme has a non-negligible NLO contribution. The implications of this observation for future studies of resonance channels (such as the I=0 and 1ππ scatterings) with smeared all-to-all propagators are briefly discussed.
Development of a Blood Pressure Measurement Instrument with Active Cuff Pressure Control Schemes.
Kuo, Chung-Hsien; Wu, Chun-Ju; Chou, Hung-Chyun; Chen, Guan-Ting; Kuo, Yu-Cheng
2017-01-01
This paper presents an oscillometric blood pressure (BP) measurement approach based on active control schemes for the cuff pressure. Compared with conventional electronic BP instruments, the novelty of the proposed BP measurement approach is to utilize a variable volume chamber which actively and stably alters the cuff pressure during inflating or deflating cycles. The variable volume chamber is operated with a closed-loop pressure control scheme, and it is activated by controlling the piston position of a single-acting cylinder driven by a screw motor. Therefore, the variable volume chamber can significantly eliminate the air turbulence disturbance during the air injection stage when compared to an air pump mechanism. Furthermore, the proposed active BP measurement approach is capable of measuring BP characteristics, including systolic blood pressure (SBP) and diastolic blood pressure (DBP), during the inflating cycle. Two modes, air injection measurement (AIM) and accurate dual-way measurement (ADM), were proposed. According to experimental results with healthy subjects, AIM reduced the measurement time by 34.21% and ADM by 15.78% compared with a commercial BP monitor. Furthermore, the ADM performed much more consistently (i.e., with a smaller standard deviation) in the measurements than the commercial BP monitor.
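For readers unfamiliar with oscillometric estimation, the following toy Python sketch shows the general characteristic-ratio idea behind such instruments: read MAP at the peak of the oscillation-amplitude envelope and SBP/DBP at fixed fractions of it. The envelope shape and the 0.55/0.80 ratios are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# Toy oscillometric estimation. During an inflating cycle the cuff pressure
# rises while pulse oscillations are superimposed; SBP/DBP are read off the
# oscillation-amplitude envelope with characteristic ratios (assumed values).
cuff = np.linspace(40, 180, 1401)                    # cuff pressure ramp (mmHg)
map_true = 95.0
envelope = np.exp(-((cuff - map_true) / 25.0) ** 2)  # synthetic amplitude envelope

i_max = np.argmax(envelope)
mean_ap = cuff[i_max]                                # MAP ~ pressure at peak amplitude
sys_ratio, dia_ratio = 0.55, 0.80                    # commonly cited ratios (assumed)

above, below = envelope[i_max:], envelope[:i_max]
sbp = cuff[i_max:][np.argmin(np.abs(above - sys_ratio * envelope[i_max]))]
dbp = cuff[:i_max][np.argmin(np.abs(below - dia_ratio * envelope[i_max]))]
print(f"MAP={mean_ap:.0f}  SBP={sbp:.0f}  DBP={dbp:.0f} mmHg")
```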
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu
2016-12-01
Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulation procedure is similar to real-life control and can easily be extended to a physical model platform.
Development and design of photovoltaic power prediction system
NASA Astrophysics Data System (ADS)
Wang, Zhijia; Zhou, Hai; Cheng, Xu
2018-02-01
In order to reduce the impact on power grid safety caused by the volatility and randomness of the energy produced in photovoltaic power plants, this paper puts forward a construction scheme for a photovoltaic power generation prediction system, introducing the technical requirements, the system configuration and the function of each module, and discussing the main technical features of the platform software development. The scheme has been applied in many PV power plants in the northwest of China. It shows that the system can produce reasonable prediction results, providing sound guidance for dispatching and efficient operation of PV power plants.
Fast and efficient wireless power transfer via transitionless quantum driving.
Paul, Koushik; Sarma, Amarendra K
2018-03-07
Shortcut to adiabaticity (STA) techniques have the potential to drive a system beyond the adiabatic limits. Here, we present a robust and efficient method for wireless power transfer (WPT) between two coils based on the so-called transitionless quantum driving (TQD) algorithm. We show that it is possible to transfer power between the coils significantly fast compared to its adiabatic counterpart. The scheme is fairly robust against the variations in the coupling strength and the coupling distance between the coils. Also, the scheme is found to be reasonably immune to intrinsic losses in the coils.
Modeling of confined turbulent fluid-particle flows using Eulerian and Lagrangian schemes
NASA Technical Reports Server (NTRS)
Adeniji-Fashola, A.; Chen, C. P.
1990-01-01
Two important aspects of fluid-particulate interaction in dilute gas-particle turbulent flows (the turbulent particle dispersion and the turbulence modulation effects) are addressed, using the Eulerian and Lagrangian modeling approaches to describe the particulate phase. Gradient-diffusion approximations are employed in the Eulerian formulation, while a stochastic procedure is utilized to simulate turbulent dispersion in the Lagrangian formulation. The k-epsilon turbulence model is used to characterize the time and length scales of the continuous phase turbulence. Models proposed for both schemes are used to predict turbulent fully-developed gas-solid vertical pipe flow with reasonable accuracy.
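A minimal sketch of the Lagrangian side of such a computation, assuming a standard eddy-interaction model in which particles sample fluctuating velocities from k-epsilon scales (the paper's exact stochastic procedure is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Eddy-interaction sketch of Lagrangian turbulent dispersion (assumed form):
# each particle samples a fluctuating eddy velocity with variance 2k/3 and
# keeps it for an eddy lifetime proportional to k/eps.
k, eps = 0.5, 2.0             # turbulent kinetic energy, dissipation (toy values)
sigma = np.sqrt(2.0 * k / 3.0)
t_eddy = 0.3 * k / eps        # eddy lifetime scale (constant 0.3 assumed)

dt, n_steps, n_p = 0.01, 500, 1000
y = np.zeros(n_p)             # lateral particle positions
v_fluc = rng.normal(0, sigma, n_p)
t_in_eddy = np.zeros(n_p)

for _ in range(n_steps):
    y += v_fluc * dt
    t_in_eddy += dt
    expired = t_in_eddy > t_eddy          # particle leaves its eddy: resample
    v_fluc[expired] = rng.normal(0, sigma, expired.sum())
    t_in_eddy[expired] = 0.0

print("lateral dispersion std:", y.std())
```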
Real-time qualitative reasoning for telerobotic systems
NASA Technical Reports Server (NTRS)
Pin, Francois G.
1993-01-01
This paper discusses the sensor-based telerobotic driving of a car in a-priori unknown environments using 'human-like' reasoning schemes implemented on custom-designed VLSI fuzzy inferencing boards. These boards use the Fuzzy Set theoretic framework to allow very fast (30 kHz) processing of full sets of information that are expressed in qualitative form using membership functions. The sensor-based and fuzzy inferencing system was incorporated on an outdoor test-bed platform to investigate two control modes for driving a car on the basis of very sparse and imprecise range data. In the first mode, the car navigates fully autonomously to a goal specified by the operator, while in the second mode, the system acts as a telerobotic driver's aid providing the driver with linguistic (fuzzy) commands to turn left or right, speed up, slow down, stop, or back up depending on the obstacles perceived by the sensors. Indoor and outdoor experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Sample results are presented that illustrate the feasibility of developing autonomous navigation modules and robust, safety-enhancing driver's aids for telerobotic systems using the new fuzzy inferencing VLSI hardware and 'human-like' reasoning schemes.
Gabriel, Adel; Violato, Claudio
2013-01-01
The purpose of this study was to examine and compare diagnostic success and its relationship with the diagnostic reasoning process between novices and experts in psychiatry. Nine volunteers, comprising five expert psychiatrists and four clinical clerks, completed a think-aloud protocol while attempting to make a DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) diagnosis of a selected case with both Axis I and Axis III diagnoses. Expert psychiatrists made significantly more successful diagnoses for both the primary psychiatric and medical diagnoses than clinical clerks. Expert psychiatrists also gave fewer differential options. Analyzing the think-aloud protocols, expert psychiatrists were much more organized, made fewer mistakes, and utilized significantly less time to access their knowledge than clinical clerks. Both novices and experts seemed to use the hypothetic-deductive and scheme-inductive approaches to diagnosis. However, experts utilized hypothetic-deductive approaches significantly more often than novices. The hypothetic-deductive diagnostic strategy was utilized more than the scheme-inductive approach by both expert psychiatrists and clinical clerks. However, a specific relationship between diagnostic reasoning and diagnostic success could not be identified in this small pilot study. The authors recommend a larger study that would include a detailed analysis of the think-aloud protocols.
Spurious sea ice formation caused by oscillatory ocean tracer advection schemes
NASA Astrophysics Data System (ADS)
Naughten, Kaitlin A.; Galton-Fenzi, Benjamin K.; Meissner, Katrin J.; England, Matthew H.; Brassington, Gary B.; Colberg, Frank; Hattermann, Tore; Debernard, Jens B.
2017-08-01
Tracer advection schemes used by ocean models are susceptible to artificial oscillations: a form of numerical error whereby the advected field alternates between overshooting and undershooting the exact solution, producing false extrema. Here we show that these oscillations have undesirable interactions with a coupled sea ice model. When oscillations cause the near-surface ocean temperature to fall below the freezing point, sea ice forms for no reason other than numerical error. This spurious sea ice formation has significant and wide-ranging impacts on Southern Ocean simulations, including the disappearance of coastal polynyas, stratification of the water column, erosion of Winter Water, and upwelling of warm Circumpolar Deep Water. This significantly limits the model's suitability for coupled ocean-ice and climate studies. Using the terrain-following-coordinate ocean model ROMS (Regional Ocean Modelling System) coupled to the sea ice model CICE (Community Ice CodE) on a circumpolar Antarctic domain, we compare the performance of three different tracer advection schemes, as well as two levels of parameterised diffusion and the addition of flux limiters to prevent numerical oscillations. The upwind third-order advection scheme performs better than the centered fourth-order and Akima fourth-order advection schemes, with far fewer incidents of spurious sea ice formation. The latter two schemes are less problematic with higher parameterised diffusion, although some supercooling artifacts persist. Spurious supercooling was eliminated by adding flux limiters to the upwind third-order scheme. We present this comparison as evidence of the problematic nature of oscillatory advection schemes in sea ice formation regions, and urge other ocean/sea-ice modellers to exercise caution when using such schemes.
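The failure/remedy pattern described above is easy to reproduce in one dimension. The sketch below (illustrative Python, not ROMS/CICE code) advects a top-hat tracer with an unlimited high-order reconstruction (a Fromm-type central slope), which over- and undershoots, and with a minmod flux limiter, which keeps the solution bounded:

```python
import numpy as np

# 1D linear advection of a top-hat profile: an unlimited high-order
# reconstruction oscillates near the discontinuity, while a minmod flux
# limiter keeps the solution within its initial bounds (toy sketch).
nx, c = 200, 0.5                       # cells, Courant number
u0 = np.where((np.arange(nx) > 40) & (np.arange(nx) < 80), 1.0, 0.0)

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step(u, limited):
    du_l = u - np.roll(u, 1)           # backward differences
    du_r = np.roll(u, -1) - u          # forward differences
    slope = minmod(du_l, du_r) if limited else 0.5 * (du_l + du_r)
    # upwind face value from (limited) linear reconstruction, flow to the right
    u_face = u + 0.5 * (1.0 - c) * slope
    flux = c * u_face
    return u + np.roll(flux, 1) - flux  # periodic update

ul, uu = u0.copy(), u0.copy()
for _ in range(100):
    ul, uu = step(ul, True), step(uu, False)
print("limited   min/max:", ul.min(), ul.max())   # stays within [0, 1]
print("unlimited min/max:", uu.min(), uu.max())   # overshoots and undershoots
```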
NASA Technical Reports Server (NTRS)
Wu, Di; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kennedy, Aaron; Mullendore, Gretchen; Gilmore, Matthew; Tao, Wei-Kuo
2013-01-01
This study investigates the impact of snow, graupel, and hail processes on simulated squall lines over the Southern Great Plains in the United States. The Weather Research and Forecasting (WRF) model is used to simulate two squall line events in Oklahoma during May 2007, and the simulations are validated against radar and surface observations. Several microphysics schemes are tested in this study, including the WRF 5-Class Microphysics (WSM5), WRF 6-Class Microphysics (WSM6), Goddard Cumulus Ensemble (GCE) Three Ice (3-ice) with graupel, Goddard Two Ice (2-ice), and Goddard 3-ice hail schemes. Simulated surface precipitation is sensitive to the microphysics scheme when the graupel or hail categories are included. All of the 3-ice schemes overestimate the total precipitation, with WSM6 having the largest bias. The 2-ice schemes, without a graupel/hail category, produce less total precipitation than the 3-ice schemes. By applying a radar-based convective/stratiform partitioning algorithm, we find that including graupel/hail processes increases the convective areal coverage, precipitation intensity, and updraft and downdraft intensities, and reduces the stratiform areal coverage and precipitation intensity. For vertical structures, simulations have higher reflectivity values distributed aloft than the observed values in both the convective and stratiform regions. Three-ice schemes produce more high reflectivity values in convective regions, while 2-ice schemes produce more high reflectivity values in stratiform regions. In addition, this study has demonstrated that the radar-based convective/stratiform partitioning algorithm can reasonably identify WRF-simulated precipitation, wind, and microphysical fields in both convective and stratiform regions.
The Mathematical Bases for Qualitative Reasoning
1990-01-01
Much qualitative reasoning is carried out without use of mathematical formalisms, but solely in terms of ordinary language. A good deal of such qualitative reasoning makes implicit use of the properties of ordinal variables (for instance, comparing the Fahrenheit or Celsius temperature on either day). If we are considering an equation connecting two variables, w = f(x), we…
CREST v2.1 Refined by a Distributed Linear Reservoir Routing Scheme
NASA Astrophysics Data System (ADS)
Shen, X.; Hong, Y.; Zhang, K.; Hao, Z.; Wang, D.
2014-12-01
Hydrologic modeling is important in water resources management and in flood disaster warning and management. The routing scheme is among the most important components of a hydrologic model. In this study, we replace the lumped LRR (linear reservoir routing) scheme used in previous versions of the distributed hydrological model CREST (coupled routing and excess storage) by a newly proposed distributed LRR method, which is theoretically more suitable for distributed hydrological models. Consequently, we have effectively solved the problems of: 1) low values of channel flow in daily simulation, 2) discontinuous flow values along the river network during flood events, and 3) irrational model parameters. The CREST model equipped with both routing schemes has been tested in the Gan River basin. The distributed LRR scheme has been confirmed to outperform the lumped counterpart by two comparisons, hydrograph validation and visual inspection of the continuity of stream flow along the river: 1) CREST v2.1 (version 2.1) with the distributed LRR achieved an excellent result of (NSCE (Nash coefficient), CC (correlation coefficient), bias) = (0.897, 0.947, -1.57%), while the original CREST v2.0 produced only negative NSCE, close-to-zero CC and large bias. 2) CREST v2.1 produced a more naturally smooth river flow pattern along the river network, while v2.0 simulated bumpy and discontinuous discharge along the mainstream. Moreover, we further observe that by using the distributed LRR method, 1) all model parameters fell within their reasonable region after an automatic optimization; and 2) CREST forced by satellite-based precipitation and PET products produces a reasonably good result, i.e., (NSCE, CC, bias) = (0.756, 0.871, -0.669%) in the case study, although there is still room for improvement regarding the low spatial resolution and underestimation of heavy rainfall events in the satellite products.
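As a rough illustration of the distributed idea (a sketch under assumed parameters, not the CREST v2.1 implementation), each grid cell along a flow path can be treated as its own linear reservoir S = K*Q and updated implicitly:

```python
import numpy as np

# Distributed linear-reservoir routing along a single flow path: each cell is
# its own reservoir S = K * Q, so outflow relaxes toward inflow cell by cell
# instead of lumping the whole basin into one reservoir (assumed minimal form).
n_cells, n_steps, dt = 10, 200, 1.0
K = np.full(n_cells, 5.0)          # per-cell storage coefficients (hours)
Q = np.zeros(n_cells)              # per-cell outflow
runoff = np.zeros((n_steps, n_cells))
runoff[:20, :] = 1.0               # a 20-step rainfall-excess pulse everywhere

outlet = []
for t in range(n_steps):
    inflow = np.concatenate(([0.0], Q[:-1])) + runoff[t]  # upstream outflow + local excess
    # implicit update of dS/dt = I - Q with S = K*Q, i.e. K dQ/dt = I - Q
    Q = (Q + dt * inflow / K) / (1.0 + dt / K)
    outlet.append(Q[-1])
print("peak outlet flow:", max(outlet))
```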
Investigation of combustion characteristics in a scramjet combustor using a modified flamelet model
NASA Astrophysics Data System (ADS)
Zhao, Guoyan; Sun, Mingbo; Wang, Hongbo; Ouyang, Hao
2018-07-01
In this study, the characteristics of supersonic combustion inside an ethylene-fueled scramjet combustor equipped with multiple cavities were investigated with different injection schemes. Experimental results showed that the flames concentrated in the cavity and in the separated boundary layer downstream of the cavity, and that they occupied the flow channel, further enhancing the bulk flow compression. The flame structure in the distributed injection scheme differed from that in the centralized injection scheme. In the numerical simulations, a modified flamelet model was introduced to account for the fact that the pressure distribution is far from homogeneous inside the scramjet combustor. Compared with the original flamelet model, numerical predictions based on the modified model showed better agreement with the experimental results, validating the reliability of the calculations. Based on the modified model, the simulations with different injection schemes were analysed. The predicted flame structure agreed reasonably with the experimental observations. The CO mass was concentrated in the cavity and in the subsonic region adjacent to the cavity shear layer, leading to intense heat release. Compared with the centralized scheme, the higher jet mixing efficiency of the distributed scheme induced intense combustion in the posterior upper cavity and downstream of the cavity. From streamlines and isolation surfaces, the combustion at the trailing edge of the lower cavity was suppressed since the bulk flow downstream of the cavity is pushed down.
Worries Teachers Should Forget.
ERIC Educational Resources Information Center
Frost, Joe L.
Worries that confront teachers in American schools are discussed, and reasons why these worries should be forgotten are provided. The worries are concerned with: breaking away from normative schemes of childhood education; grade level structure; promotion and retention; letter grades; standard test scores as instructional aids; and the search for…
The Temporal Logic of the Tower Chief System
NASA Technical Reports Server (NTRS)
Hazelton, Lyman R., Jr.
1990-01-01
The purpose is to describe the logic used in the reasoning scheme employed in the Tower Chief system, a runway configuration management system. First, a review of classical logic is given. Defensible logics, truth maintenance, default logic, temporally dependent propositions, and resource allocation and planning are discussed.
Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David
2013-05-21
We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
Broberg, Craig S; Mitchell, Julie; Rehel, Silven; Grant, Andrew; Gianola, Ann; Beninato, Peter; Winter, Christiane; Verstappen, Amy; Valente, Anne Marie; Weiss, Joseph; Zaidi, Ali; Earing, Michael G; Cook, Stephen; Daniels, Curt; Webb, Gary; Khairy, Paul; Marelli, Ariane; Gurvitz, Michelle Z; Sahn, David J
2015-10-01
The adoption of electronic health records (EHR) has created an opportunity for multicenter data collection, yet the feasibility and reliability of this methodology is unknown. The aim of this study was to integrate EHR data into a homogeneous central repository specifically addressing the field of adult congenital heart disease (ACHD). Target data variables were proposed and prioritized by consensus of investigators at five target ACHD programs. Database analysts determined which variables were available within their institutions' EHR and stratified their accessibility, and results were compared between centers. Data for patients seen in a single calendar year were extracted to a uniform database and subsequently consolidated. From 415 proposed target variables, only 28 were available in discrete formats at all centers. For variables of highest priority, 16/28 (57%) were available at all four sites, but only 11% for those of high priority. Integration was neither simple nor straightforward. Coding schemes in use for congenital heart diagnoses varied and would require additional user input for accurate mapping. There was considerable variability in procedure reporting formats and medication schemes, often with center-specific modifications. Despite the challenges, the final acquisition included limited data on 2161 patients, and allowed for population analysis of race/ethnicity, defect complexity, and body morphometrics. Large-scale multicenter automated data acquisition from EHRs is feasible yet challenging. Obstacles stem from variability in data formats, coding schemes, and adoption of non-standard lists within each EHR. The success of large-scale multicenter ACHD research will require institution-specific data integration efforts. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
Clinical reasoning of nursing students on clinical placement: Clinical educators' perceptions.
Hunter, Sharyn; Arthur, Carol
2016-05-01
Graduate nurses may have knowledge and adequate clinical psychomotor skills; however, they have been identified as lacking the clinical reasoning skills to deliver safe, effective care, suggesting that contemporary educational approaches do not always facilitate the development of nursing students' clinical reasoning. While the nursing literature explicates the concept of clinical reasoning and develops models that demonstrate clinical reasoning, there is very little published about nursing students and clinical reasoning during clinical placements. Semi-structured interviews were conducted with ten clinical educators to gain an understanding of how they recognised, developed and appraised nursing students' clinical reasoning while on clinical placement. This study found variability in the clinical educators' conceptualisation, recognition, and facilitation of students' clinical reasoning. Although most of the clinical educators conceptualised clinical reasoning as a process, those who did not demonstrated the greatest variability in the recognition and facilitation of students' clinical reasoning. The clinical educators in this study also described being unable to adequately appraise a student's clinical reasoning during clinical placement with the current performance assessment tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
Azbel-Jackson, Lena; Heffernan, Claire; Gunn, George; Brownlie, Joe
2018-01-01
The article describes the influence of a disease control scheme (the Norfolk-Suffolk Bovine Viral Diarrhoea Disease (BVD) Eradication scheme) on farmers' bio-security attitudes and behaviours. In 2010, a survey of 100 cattle farmers (53 scheme members vs. 47 non-scheme farmers) was undertaken among farmers residing in Norfolk and Suffolk counties in the UK. A cross-sectional independent measures design was employed. The main analytical tool was content analysis. The following farmer-level variables were explored: the specific BVD control measures adopted, livestock disease priorities, motivation for scheme membership, wider knowledge acquisition, biosecurity behaviours employed and training course attendance. The findings suggest that participation in the BVD scheme improved farmers' perception of the scheme benefits and participation in training courses. However, no association was found between taking part in the BVD scheme and livestock disease priorities, motivation for scheme participation, or knowledge about the BVD bio-security measures employed. Equally importantly, scheme membership did appear to influence the importance accorded to specific bio-security measures. Yet such ranking did not appear to reflect the actual behaviours undertaken. As such, disease control efforts alone, while necessary, are insufficient. Rather, to enhance farmer bio-security behaviours, significant effort must be made to address underlying attitudes to the specific disease threat involved. PMID:29432435
Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi
2015-03-15
Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have only focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which have typically been considered optically active variables. Remote sensing presents a challenge in estimating the phosphorus concentration in water. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using simple individual band ratios or their natural logarithms and the statistical regression method based on the field TP data and the spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for the TP concentration estimation in the large lake compared with the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed a clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu. These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate statistical modeling techniques, demonstrated advantages for estimating the TP concentration in a large lake and had a strong potential for universal application to TP concentration estimation in large lakes worldwide. Copyright © 2014 Elsevier Ltd. All rights reserved.
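The general band-combination regression approach reads, in schematic Python (synthetic bands and coefficients; not the paper's HJ-1A model for Lake Chaohu):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedged sketch of band combinations + multivariate regression; the bands,
# combination variables, and coefficients here are synthetic assumptions.
n = 60
b1, b2, b3, b4 = (rng.uniform(0.02, 0.2, n) for _ in range(4))  # band reflectances
tp = 0.05 + 0.8 * (b3 / b2) + 0.3 * (b4 - b1) + rng.normal(0, 0.01, n)  # field TP (toy)

# Candidate combination variables: ratios, differences, log-ratios, plus intercept.
X = np.column_stack([b3 / b2, b4 - b1, np.log(b3 / b1), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, tp, rcond=None)   # multivariate least squares
tp_hat = X @ coef
r2 = 1 - np.sum((tp - tp_hat) ** 2) / np.sum((tp - tp.mean()) ** 2)
print("R^2 =", round(r2, 3))
```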
Generating Researcher Networks with Identified Persons on a Semantic Service Platform
NASA Astrophysics Data System (ADS)
Jung, Hanmin; Lee, Mikyoung; Kim, Pyung; Lee, Seungwoo
This paper describes a Semantic Web-based method to acquire researcher networks by means of an identification scheme, an ontology, and reasoning. Three steps are required to realize it: resolving co-references, finding experts, and generating researcher networks. We adopt OntoFrame as the underlying semantic service platform and apply reasoning to create direct relations between far-off classes in the ontology schema. As a test set, 453,124 Elsevier journal articles with metadata and full-text documents in the information technology and biomedical domains have been loaded and served on the platform.
Interactive Classification Technology
NASA Technical Reports Server (NTRS)
deBessonet, Cary
2000-01-01
The investigators upgraded a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the more effective use of the technologies in automated reasoning and interactive classification systems. The overall goals of the project were: 1) the enhancement of the representation language SL to accommodate a wider range of meaning; 2) the development of a default inference scheme to operate over SL notation as it is encoded; and 3) the development of an interpreter for SL that would handle representations of some basic cognitive acts and perspectives.
Using new aggregation operators in rule-based intelligent control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.
1990-01-01
A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
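Yager's OWA operator itself is compact enough to state in a few lines. The sketch below shows how particular weight vectors recover the And (min) and Or (max) extremes, with intermediate weights giving quantifier-like aggregation (toy values, not the cart-pole controller from the paper):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort inputs in descending order, then take
    the weighted sum with fixed positional weights (Yager's OWA operator)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and len(v) == len(w)
    return float(v @ w)

x = [0.9, 0.4, 0.7]
print(owa(x, [0, 0, 1]))        # all weight on the minimum -> And (min) = 0.4
print(owa(x, [1, 0, 0]))        # all weight on the maximum -> Or (max) = 0.9
print(owa(x, [0.2, 0.5, 0.3]))  # intermediate weights: "Most"-like aggregation
```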
Homodyning and heterodyning the quantum phase
NASA Technical Reports Server (NTRS)
Dariano, Giacomo M.; Macchiavello, C.; Paris, M. G. A.
1994-01-01
The double-homodyne and the heterodyne detection schemes for phase shifts between two synchronous modes of the electromagnetic field are analyzed in the framework of quantum estimation theory. The probability operator-valued measures (POM's) of the detectors are evaluated and compared with the ideal one in the limit of a strong local reference oscillator. The present operational approach leads to a reasonable definition of phase measurement, whose sensitivity is actually related to the output r.m.s. noise of the photodetector. We emphasize that the simple-homodyne scheme does not correspond to a proper phase-shift measurement, as it is just a zero-point detector. The sensitivities of all detection schemes are optimized at fixed energy with respect to the input state of radiation. It is shown that the optimal sensitivity can actually be achieved using suitably chosen squeezed states.
Progressive compressive imager
NASA Astrophysics Data System (ADS)
Evladov, Sergei; Levi, Ofer; Stern, Adrian
2012-06-01
We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and reconstruction quality. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of gradually adding samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions at compression ratios of 1:20.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is composed of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code and the deployment package for installation on client machines.
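A minimal example of the belief/plausibility measures such a tool propagates (a generic Dempster-Shafer computation on a toy frame of discernment; not LinguisticBelief's rule-base machinery):

```python
# Minimal belief/plausibility computation over a small frame of discernment.
frame = frozenset({"low", "medium", "high"})
# Basic probability assignment (mass) on subsets; masses must sum to 1.
mass = {
    frozenset({"low"}): 0.2,
    frozenset({"medium", "high"}): 0.5,
    frame: 0.3,   # mass on the whole frame encodes ignorance
}

def belief(A):
    # Bel(A): total mass committed to subsets of A
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    # Pl(A): total mass consistent with A (i.e., intersecting it)
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"medium", "high"})
print("Bel =", belief(A), " Pl =", plausibility(A))  # Bel <= Pl brackets the uncertainty
```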
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
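For orientation, the conventional SRM that the unified formulation builds on can be sketched as follows (a toy one-dimensional power spectral density is assumed; the paper's dimension-reduction variants additionally constrain the random phases through random functions of a few elementary variables):

```python
import numpy as np

rng = np.random.default_rng(42)

def srm_sample(S, omega, t):
    """One realisation of a zero-mean stationary process via the classical
    spectral representation: a sum of cosines with random phases."""
    d_omega = omega[1] - omega[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, omega.size)   # independent random phases
    amp = np.sqrt(2.0 * S(omega) * d_omega)
    return (amp[:, None] * np.cos(omega[:, None] * t[None, :] + phi[:, None])).sum(axis=0)

S = lambda w: 1.0 / (1.0 + w**2)        # toy one-sided power spectral density
omega = np.linspace(0.01, 10.0, 400)    # discretised frequency band
t = np.linspace(0.0, 50.0, 2001)
x = srm_sample(S, omega, t)
# sanity check: sample variance ~ discretised integral of S over the band
print(x.var(), (S(omega) * (omega[1] - omega[0])).sum())
```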
Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble
NASA Astrophysics Data System (ADS)
Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong
2016-05-01
As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak event that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble system provided the initial and boundary conditions for a 3-km convective-scale ensemble system. Radial velocity and reflectivity observations from four WSR-88Ds were assimilated into the ensemble using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to data assimilation frequencies, in-cloud temperature adjustment schemes, and fixed- and mixed-microphysics ensembles. The results indicated that the experiment with 5-min assimilation frequency quickly built up the storm and produced a more accurate analysis compared with the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent heat scheme. Cycled data assimilation yielded good forecasts, where the ensemble probability of high vertical vorticity matched reasonably well with the observed tornado damage path. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.
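The generic 3DVAR analysis step underlying such systems can be written in a few lines. The sketch below solves the standard quadratic cost function for a toy state; the sizes, covariances, and observation operator are illustrative assumptions, not the ARPS 3DVAR configuration:

```python
import numpy as np

# Toy 3DVAR analysis step (generic form of the cost function such systems
# minimise; B, R, H, and the state size here are illustrative assumptions).
n, m = 4, 2
xb = np.array([1.0, 2.0, 0.5, -1.0])         # background state
B = 0.5 * np.eye(n)                          # background error covariance
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])         # observation operator (picks 2 variables)
R = 0.1 * np.eye(m)                          # observation error covariance
y = np.array([1.4, 0.2])                     # observations

# J(x) = (x-xb)^T B^-1 (x-xb) + (Hx-y)^T R^-1 (Hx-y); setting grad J = 0 gives
# (B^-1 + H^T R^-1 H) (x - xb) = H^T R^-1 (y - H xb)
A = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
rhs = H.T @ np.linalg.inv(R) @ (y - H @ xb)
xa = xb + np.linalg.solve(A, rhs)            # analysis state
print("analysis:", xa)
```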
NASA Astrophysics Data System (ADS)
Pu, Z.; Yu, Y.
2016-12-01
The prediction of Hurricane Joaquin's hairpin clockwise turn during 1 and 2 October 2015 presented a forecasting challenge for real-time numerical weather prediction, as the tracks predicted by several major numerical weather prediction models differed from each other. To investigate the large-scale environment and hurricane inner-core structures related to the hairpin turn of Joaquin, a series of high-resolution mesoscale numerical simulations of Hurricane Joaquin was performed with the Advanced Research version of the Weather Research and Forecasting (WRF) model. The outcomes were compared with observations obtained from the US Office of Naval Research's Tropical Cyclone Intensity (TCI) Experiment during the 2015 hurricane season. Specifically, five groups of sensitivity experiments with different cumulus, boundary layer, and microphysical schemes, as well as different initial and boundary conditions and initial times, were performed in the WRF simulations. It is found that the choice of cumulus parameterization scheme plays a significant role in reproducing a reasonable track forecast during Joaquin's hairpin turn. Differences in the mid-level environmental steering flows appear to explain the different tracks in the simulations with different cumulus schemes. In addition, differences in the distribution and amounts of the latent heating over the inner-core region are associated with discrepancies in the simulated intensity among different experiments. Detailed simulation results, comparison with TCI-2015 observations, and comprehensive diagnoses will be presented.
NASA Astrophysics Data System (ADS)
Liu, Mengqi; Liu, Haijun; Wang, Zhikai
2017-01-01
Traditional LCL grid-tied converters cannot limit the short-circuit fault current; the grid-connected converter can only be removed with a circuit breaker. However, VSC converters become uncontrollable once a short-circuit fault occurs, and the power switches may be damaged if the circuit breaker acts slowly. In contrast to the purely filtering role of the LCL passive components in traditional VSC converters, the novel LCL-VSC converter can limit the short-circuit fault current through reasonably designed LCL parameters. In this paper the mathematical model of the LCL converter is established and the characteristics of the short-circuit fault currents arising on the AC and DC sides are analyzed. On this basis, a design and optimization scheme for the LCL passive parameters is proposed that gives the LCL-VSC converter the ability to limit short-circuit fault currents. In addition to ensuring that the LCL passive components filter the high-frequency harmonics, the scheme also shapes the impedance characteristics so that the fault current flowing through the power switches under AC or DC short-circuit faults does not exceed the maximum allowable operating current, allowing the LCL converter to continue operating. Finally, a 200 kW simulation system is set up to prove the validity and feasibility of the theoretical analysis using the proposed design and optimization scheme.
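The kind of parameter check implied above can be illustrated with a toy impedance calculation (the component values and the assumed fault location are made up for illustration; this is not the paper's design procedure):

```python
import numpy as np

# Illustrative check of the idea that LCL parameters bound the AC-side fault
# current. Assumption: for a short at the grid terminals, the converter sees
# L1 in series with (C parallel to L2); fault current ~ V_conv / |Z(j*w)|.
L1, L2, C = 2e-3, 1e-3, 50e-6          # henry, henry, farad (toy values)
V_conv, f = 311.0, 50.0                # converter voltage amplitude (V), grid frequency (Hz)
w = 2 * np.pi * f

Z_L1, Z_L2, Z_C = 1j * w * L1, 1j * w * L2, 1.0 / (1j * w * C)
Z_fault = Z_L1 + (Z_C * Z_L2) / (Z_C + Z_L2)
I_fault = V_conv / abs(Z_fault)
print(f"|Z| = {abs(Z_fault):.2f} ohm, peak fault current ~ {I_fault:.0f} A")
# Design loop: increase L1 (or retune C and L2) until I_fault <= switch rating.
```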
ERIC Educational Resources Information Center
Nieminen, Pasi; Savinainen, Antti; Viiri, Jouni
2012-01-01
Previous physics education research has raised the question of "hidden variables" behind students' success in learning certain concepts. In the context of the force concept, it has been suggested that students' reasoning ability is one such variable. Strong positive correlations between students' preinstruction scores for reasoning…
Comparing Physics Scheme Performance for a Lake Effect Snowfall Event in Northern Lower Michigan
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Arnott, Justin M.
2012-01-01
High resolution forecast models, such as those used to predict severe convective storms, can also be applied to predictions of lake effect snowfall. A high-resolution WRF forecast model is provided to support operations at NWS WFO Gaylord, Michigan, using a 12-km and 4-km nested configuration. This is comparable to the simulations performed by other NWS WFOs adjacent to the Great Lakes, including offices in the NWS Eastern Region who participate in regional ensemble efforts. Ensemble efforts require diversity in initial conditions and physics configurations to emulate the plausible range of events in order to ascertain the likelihood of different forecast scenarios. In addition to providing probabilistic guidance, individual members can be evaluated to determine whether they appear to be biased in some way, or to better understand how certain physics configurations may impact the resulting forecast. On January 20-21, 2011, a lake effect snow event occurred in Northern Lower Michigan, with cooperative observing and CoCoRaHS stations reporting new snow accumulations between 2 and 8 inches and liquid equivalents of 0.1-0.25 in. The event of January 21, 2011 was particularly well observed, with numerous surface reports available. It was also well represented by the WRF configuration operated at NWS Gaylord. Given that the default configuration produced a reasonable prediction, it is used here to evaluate the impacts of other physics configurations on the resulting prediction of the primary lake effect band and resulting QPF. Emphasis here is on differences in planetary boundary layer and cloud microphysics parameterizations, given their likely role in determining the evolution of shallow convection and precipitation processes. Results from an ensemble of seven microphysics schemes and three planetary boundary layer schemes are presented to demonstrate variability in forecast evolution, with results used in an attempt to improve the forecasts in the 2011-2012 lake effect season.
Keyboard Proficiency: An Essential Skill in a Technological Age. Number 2.
ERIC Educational Resources Information Center
Gillmon, Eve
A structured keyboard skills training scheme for students in England should be included within school curricula. Negative attitudes toward keyboard training prevail in schools although employers value keyboard application skills. There are several reasons why keyboard proficiency, which facilitates the efficient input and retrieval of text and…
The Development of Multiplicative Reasoning in the Learning of Mathematics.
ERIC Educational Resources Information Center
Harel, Guershon, Ed.; Confrey, Jere, Ed.
This book is a compilation of recent research on the development of multiplicative concepts. The sections and chapters are: (1) Theoretical Approaches: "Children's Multiplying Schemes" (L. Steffe), "Multiplicative Conceptual Field: What and Why?" (G. Vergnaud), "Extending the Meaning of Multiplication and Division" (B. Greer); (2) The Role of the…
The alpha(3) Scheme - A Fourth-Order Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2007-01-01
The conservation element and solution element (CESE) development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new 4th-order neutrally stable CESE solver of the advection equation ∂u/∂t + a ∂u/∂x = 0. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables u_j^n, (u_x)_j^n, and (u_xx)_j^n (the numerical analogues of u, ∂u/∂x, and ∂²u/∂x², respectively) and four equations per mesh point, the new scheme is referred to as the alpha(3) scheme. As in the case of other similar CESE neutrally stable solvers, the alpha(3) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. These forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the alpha(3) scheme must be neutrally stable when it is stable. Moreover it is proved rigorously that all three amplification factors of the alpha(3) scheme are of unit magnitude for all phase angles if |ν| ≤ 1/2 (ν = a Δt/Δx). This theoretical result is consistent with the numerical stability condition |ν| ≤ 1/2. Through numerical experiments, it is established that the alpha(3) scheme generally is (i) 4th-order accurate for the mesh variables u_j^n and (u_x)_j^n, and (ii) 2nd-order accurate for (u_xx)_j^n. However, in some exceptional cases, the scheme can achieve perfect accuracy aside from round-off errors.
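The amplification-factor analysis mentioned above can be illustrated numerically. Since the alpha(3) coefficients are not given in this abstract, the sketch below applies the same von Neumann-style check to the classic Lax-Wendroff scheme, whose factor is not of unit magnitude (i.e., it is dissipative rather than neutrally stable):

```python
import numpy as np

nu = 0.4                                    # Courant number, nu = a*dt/dx
theta = np.linspace(0.0, 2.0 * np.pi, 721)  # phase angle k*dx
# Lax-Wendroff amplification factor g(theta) for the linear advection equation
g = 1.0 - 1j * nu * np.sin(theta) - nu**2 * (1.0 - np.cos(theta))
print("max |g| =", np.abs(g).max())         # <= 1: von Neumann stable
print("min |g| =", np.abs(g).min())         # < 1: dissipative, hence NOT neutrally stable
```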
Flap Gear for Airplanes : A New Scheme in Which Variation is Automatic
NASA Technical Reports Server (NTRS)
Tiltman, A Hessell
1927-01-01
A variable flap gear, which would function automatically and require no attention during flight, appeared to be an attractive idea even in its early stages of development. The advantages of variable camber are described, as well as the designs of these automatic flaps.
SIMINOFF, LAURA A.; STEP, MARY M.
2011-01-01
Many observational coding schemes have been offered to measure communication in health care settings. These schemes fall short of capturing multiple functions of communication among providers, patients, and other participants. After a brief review of observational communication coding, the authors present a comprehensive scheme for coding communication that is (a) grounded in communication theory, (b) accounts for instrumental and relational communication, and (c) captures important contextual features with tailored coding templates: the Siminoff Communication Content & Affect Program (SCCAP). To test SCCAP reliability and validity, the authors coded data from two communication studies. The SCCAP provided reliable measurement of communication variables including tailored content areas and observer ratings of speaker immediacy, affiliation, confirmation, and disconfirmation behaviors. PMID:21213170
Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions
NASA Astrophysics Data System (ADS)
Gordon, Dan; Gordon, Rachel; Turkel, Eli
2015-09-01
We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
NASA Astrophysics Data System (ADS)
Bai, Guang-Fu; Hu, Lin; Jiang, Yang; Tian, Jing; Zi, Yue-Jiao; Wu, Ting-Wei; Huang, Feng-Qin
2017-08-01
In this paper, a photonic microwave waveform generator based on a dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. In the reported scheme, only one radio frequency signal is used to drive the dual-parallel Mach-Zehnder modulator. Meanwhile, dispersive elements or filters are not required in the proposed scheme, which makes the scheme simpler and more stable. In this way, six variables can be adjusted. Through different combinations of these variables, basic waveforms with full and small duty cycles can be generated. Tunability of the generator can be achieved by adjusting the frequency of the RF signal and the optical carrier. The corresponding theoretical analysis and simulation have been conducted. With the guidance of theory and simulation, proof-of-concept experiments are carried out. The basic waveforms, including Gaussian, saw-up, and saw-down waveforms, with full and small duty cycles are generated at a repetition rate of 2 GHz. The theoretical and simulation results agree with the experimental results very well.
Unconditional security of entanglement-based continuous-variable quantum secret sharing
NASA Astrophysics Data System (ADS)
Kogias, Ioannis; Xiang, Yu; He, Qiongyi; Adesso, Gerardo
2017-01-01
The need for secrecy and security is essential in communication. Secret sharing is a conventional protocol to distribute a secret message to a group of parties, who cannot access it individually but need to cooperate in order to decode it. While several variants of this protocol have been investigated, including realizations using quantum systems, the security of quantum secret sharing schemes still remains unproven almost two decades after their original conception. Here we establish an unconditional security proof for entanglement-based continuous-variable quantum secret sharing schemes, in the limit of asymptotic keys and for an arbitrary number of players. We tackle the problem by resorting to the recently developed one-sided device-independent approach to quantum key distribution. We demonstrate theoretically the feasibility of our scheme, which can be implemented by Gaussian states and homodyne measurements, with no need for ideal single-photon sources or quantum memories. Our results contribute to validating quantum secret sharing as a viable primitive for quantum technologies.
Mixed coherent states in coupled chaotic systems: Design of secure wireless communication
NASA Astrophysics Data System (ADS)
Vigneshwaran, M.; Dana, S. K.; Padmanaban, E.
2016-12-01
A general coupling design is proposed to realize a mixed coherent (MC) state: the coexistence of complete synchronization, antisynchronization, and amplitude death in different pairs of similar state variables of the coupled chaotic system. The stability of the coupled system is ensured by a Lyapunov function, and the scaling of each variable is handled separately. When heterogeneity is introduced into the coupled system as a parameter mismatch, the coupling function allows the system to retain its coherence and displays global stability with a renewed scaling factor. The robust synchronization features afforded by an MC state enable the design of a dual modulation scheme for secure data transmission: binary phase shift keying (BPSK) and parameter mismatch shift keying (PMSK). Two classes of decoders (coherent and noncoherent) are discussed; the noncoherent decoder outperforms the coherent one, and noncoherent demodulators are generally preferred in biological implant applications. Both modulation schemes are demonstrated numerically using the Lorenz oscillator, and the BPSK scheme is demonstrated experimentally using radio signals.
NASA Astrophysics Data System (ADS)
Guo, Ying; Li, Renjie; Liao, Qin; Zhou, Jian; Huang, Duan
2018-02-01
Discrete modulation is proven to be beneficial to improving the performance of continuous-variable quantum key distribution (CVQKD) in long-distance transmission. In this paper, we propose a construction that improves the maximal secret key rate of discretely modulated eight-state CVQKD using an optical amplifier (OA), at a slight cost in transmission distance. In the proposed scheme, an optical amplifier is exploited to compensate for the imperfections of Bob's apparatus, so that the secret key rate of the eight-state protocol is enhanced. Specifically, we investigate two types of optical amplifiers, the phase-insensitive amplifier (PIA) and the phase-sensitive amplifier (PSA), and obtain approximately equivalent performance improvements for the eight-state CVQKD system with either amplifier. Numerical simulations show that the proposed scheme improves the secret key rate of eight-state CVQKD in both the asymptotic limit and the finite-size regime. We also show that the proposed scheme can achieve relatively high-rate transmission in a long-distance communication system.
ERIC Educational Resources Information Center
Foster, Geraldine R. K.; Tickle, Martin
2013-01-01
Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…
Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2015-10-01
The land-surface model (LSM) is one of the physics components of the Weather Research and Forecasting (WRF) model. The LSM takes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties, and provides heat and moisture fluxes over land and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM; it features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core coprocessor design well suited to parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared with the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.
Involution and Difference Schemes for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.
In the present paper we consider the Navier-Stokes equations for two-dimensional viscous incompressible fluid flows and apply to these equations our earlier-designed general algorithmic approach to the generation of finite-difference schemes. In doing so, we first complete the Navier-Stokes equations to involution by computing their Janet basis and discretize this basis by converting it into integral conservation law form. Then we again complete the obtained difference system to involution, eliminating the partial derivatives and extracting the minimal Gröbner basis from the Janet basis. The elements of the resulting difference Gröbner basis that do not contain partial derivatives of the dependent variables compose a conservative difference scheme. By exploiting the arbitrariness in the numerical integration approximation we derive two finite-difference schemes that are similar to the classical scheme by Harlow and Welch. Each of the two schemes is characterized by a 5×5 stencil on an orthogonal, uniform grid. We also demonstrate how an inconsistent difference scheme with a 3×3 stencil is generated by an inappropriate numerical approximation of the underlying integrals.
Design of fuzzy system by NNs and realization of adaptability
NASA Technical Reports Server (NTRS)
Takagi, Hideyuki
1993-01-01
The issue of designing and tuning fuzzy membership functions by neural networks (NNs) was initiated by NN-driven Fuzzy Reasoning in 1988. NN-driven fuzzy reasoning involves an NN embedded in the fuzzy system which generates membership values. In conventional fuzzy system design, the membership functions are hand-crafted by trial and error for each input variable individually. In contrast, NN-driven fuzzy reasoning considers several variables simultaneously and can design a multidimensional, nonlinear membership function for the entire subspace.
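A minimal numpy sketch of that idea follows; the architecture, sizes, and random weights are illustrative assumptions, not Takagi's network. It shows a small network mapping the joint input vector directly to one membership value per rule, instead of a hand-crafted function per input variable.

```python
# Sketch: a tiny network produces multidimensional membership values (one per
# rule) over the whole input subspace. Weights here are random placeholders;
# in NN-driven fuzzy reasoning they would be trained from data.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # one output per fuzzy rule

def memberships(x):
    """Membership values of input x for each of the 3 rules."""
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max())
    return e / e.sum()               # softmax: values in [0, 1], summing to 1

x = np.array([0.4, -1.2])            # two input variables considered jointly
mu = memberships(x)                  # nonlinear, multidimensional membership
y = mu @ np.array([1.0, 2.5, -0.7])  # membership-weighted rule consequents
```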
Gong, Yan-Xiao; Zhang, ShengLi; Xu, P; Zhu, S N
2016-03-21
We propose to generate a single-mode-squeezing two-mode squeezed vacuum state via a single χ(2) nonlinear photonic crystal. The state is favorable for existing Gaussian entanglement distillation schemes, since local squeezing operations can enhance both the final entanglement and the success probability. The crystal is designed to enable three concurrent quasi-phase-matched parametric down-conversions, and hence removes the need for auxiliary online local squeezing operations on both sides. The compact source opens up a way for continuous-variable quantum technologies and could find further applications in future large-scale quantum networks.
Compact continuous-variable entanglement distillation.
Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A
2012-02-10
We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time; our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module (an entanglement distillery) comprising only four quantum memories of at most 50% storage efficiency, allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.
Suppression of chaos via control of energy flow
NASA Astrophysics Data System (ADS)
Guo, Shengli; Ma, Jun; Alsaedi, Ahmed
2018-03-01
A continuous energy supply is critical for sustaining oscillatory behaviour; otherwise, the oscillator dies out. For nonlinear and chaotic circuits, a sufficient energy supply is likewise needed to keep the electric devices working. In this paper, the Hamilton energy is calculated for a dimensionless dynamical system (e.g., the chaotic Lorenz system) using Helmholtz's theorem. The Hamilton energy is treated as a new variable, and the dynamical system is then controlled using an energy-feedback scheme. It is found that chaos can be suppressed even when an intermittent feedback scheme is applied. This scheme is effective for controlling chaos and for stabilising other dynamical systems.
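A minimal sketch of the idea is given below, with a quadratic energy-like function standing in for the Hamilton energy obtained from Helmholtz's theorem; the gain, duty cycle, and the equation receiving the feedback are hypothetical choices, not the paper's exact scheme, and whether chaos is suppressed depends on those choices.

```python
# Sketch: intermittent energy-feedback control of the Lorenz system.
# H below is an energy-like stand-in, and K, PERIOD, ON_FRACTION are assumed.

import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
K, PERIOD, ON_FRACTION = 2.0, 1.0, 0.5     # hypothetical feedback parameters

def lorenz(t, u):
    x, y, z = u
    H = 0.5 * (x * x + y * y + z * z)      # energy-like auxiliary variable
    # Intermittent feedback: active only during part of each period.
    fb = -K * H * y if (t % PERIOD) < ON_FRACTION * PERIOD else 0.0
    return np.array([SIGMA * (y - x),
                     x * (RHO - z) - y + fb,
                     x * y - BETA * z])

def rk4(f, u0, t0, t1, dt):
    """Classical fourth-order Runge-Kutta integration."""
    u, t, out = np.array(u0, float), t0, []
    while t < t1:
        k1 = f(t, u); k2 = f(t + dt/2, u + dt/2*k1)
        k3 = f(t + dt/2, u + dt/2*k2); k4 = f(t + dt, u + dt*k3)
        u = u + dt/6 * (k1 + 2*k2 + 2*k3 + k4); t += dt
        out.append(u.copy())
    return np.array(out)

traj = rk4(lorenz, [1.0, 1.0, 1.0], 0.0, 50.0, 1e-3)
```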
Aerodynamic design optimization via reduced Hessian SQP with solution refining
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends the scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using the quasi-one-dimensional Euler equations show that solution refining significantly improves both the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heid, Matthias; Luetkenhaus, Norbert
2006-05-15
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.
NASA Astrophysics Data System (ADS)
Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke
2014-05-01
Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting model (WRF) and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One reason for developing such a scheme, rather than using 4D-Var for example, is that variational radar assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with that cloud water content is calculated through the saturation adjustment processes; and (iv) the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases from the summer of 2011/2012 in New Zealand were used to evaluate the new scheme; forecast scores showed that the new scheme improved precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
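Step (i) is the classical reflectivity-to-rain-water conversion. A generic sketch under stated assumptions is shown below: the Marshall-Palmer Z-R relation (Z = 200 R^1.6) and a fixed mean raindrop fall speed; the paper's actual reverse-Kessler relations may differ.

```python
# Sketch of step (i): observed reflectivity (dBZ) -> rain rate -> rain-water
# mixing ratio. The Z-R coefficients and the 5 m/s fall speed are assumptions.

import numpy as np

def dbz_to_rain_rate(dbz):
    """Rain rate R (mm/h) from reflectivity (dBZ) via Z = 200 R^1.6."""
    z = 10.0 ** (dbz / 10.0)                 # linear reflectivity, mm^6 m^-3
    return (z / 200.0) ** (1.0 / 1.6)

def rain_rate_to_mixing_ratio(rain_mmh, rho_air=1.2, v_fall=5.0):
    """Rain-water mixing ratio (kg/kg), assuming a mean fall speed v_fall."""
    m_kg_m3 = rain_mmh / (3600.0 * v_fall)   # 1 mm/h of rain = 1 kg m^-2 h^-1
    return m_kg_m3 / rho_air

dbz = np.array([20.0, 35.0, 50.0])
qr = rain_rate_to_mixing_ratio(dbz_to_rain_rate(dbz))
```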
NASA Astrophysics Data System (ADS)
Salzmann, M.; Ming, Y.; Golaz, J.-C.; Ginoux, P. A.; Morrison, H.; Gettelman, A.; Krämer, M.; Donner, L. J.
2010-08-01
A new stratiform cloud scheme, including a two-moment bulk microphysics module, a cloud cover parameterization allowing ice supersaturation, and an ice nucleation parameterization, has been implemented into the recently developed GFDL AM3 general circulation model (GCM) as part of an effort to treat aerosol-cloud-radiation interactions more realistically. Unlike the original scheme, the new scheme facilitates the study of cloud-ice-aerosol interactions via the influences of dust and sulfate on ice nucleation. While the liquid and cloud ice water paths associated with stratiform clouds are similar for the new and the original scheme, column-integrated droplet numbers and global frequency distributions (PDFs) of droplet effective radii differ significantly. This difference is in part due to a difference in the implementation of the Wegener-Bergeron-Findeisen (WBF) mechanism, which leads to a larger contribution from super-cooled droplets in the original scheme. Clouds are more likely to be either completely glaciated or completely liquid due to the WBF mechanism in the new scheme. Super-saturations over ice simulated with the new scheme are in qualitative agreement with observations, and PDFs of ice numbers and effective radii appear reasonable in the light of observations. In particular, the temperature dependence of ice numbers qualitatively agrees with in-situ observations. The global average long-wave cloud forcing decreases in comparison to the original scheme, as expected when super-saturation over ice is allowed. Anthropogenic aerosols lead to a larger decrease in short-wave absorption (SWABS) in the new model setup, but outgoing long-wave radiation (OLR) decreases as well, so that the net effect of including anthropogenic aerosols on the net radiation at the top of the atmosphere (netradTOA = SWABS - OLR) is of similar magnitude for the new and the original scheme.
Simulation of the West African Monsoon using the MIT Regional Climate Model
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Gianotti, Rebecca L.; Eltahir, Elfatih A. B.
2013-04-01
We test the performance of the MIT Regional Climate Model (MRCM) in simulating the West African Monsoon. MRCM introduces several improvements over Regional Climate Model version 3 (RegCM3), including coupling of the Integrated Biosphere Simulator (IBIS) land surface scheme, a new albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Using MRCM, we carried out a series of experiments implementing two different land surface schemes (IBIS and BATS) and three convection schemes (Grell with the Fritsch-Chappell closure, standard Emanuel, and a modified Emanuel that includes the new convective cloud scheme). Our analysis primarily focuses on comparing precipitation characteristics, the surface energy balance, and large-scale circulations against various observations. We document a significant sensitivity of the West African monsoon simulation to the choice of land surface and convection schemes. In spite of several deficiencies, the simulation combining the IBIS and modified Emanuel schemes performs best, reflected in a marked improvement of precipitation in terms of spatial distribution and monsoon features. In particular, the coupling of IBIS leads to representations of the surface energy balance and its partitioning that are consistent with observations. The major components of the surface energy budget (including radiation fluxes) in the IBIS simulations are therefore in better agreement with observations than those from our BATS simulation or from previous similar studies (e.g. Steiner et al., 2009), both qualitatively and quantitatively. The IBIS simulations also reasonably reproduce the vertically stratified structure of the atmospheric circulation, with three major components: the westerly monsoon flow, the African Easterly Jet (AEJ), and the Tropical Easterly Jet (TEJ). In addition, since the modified Emanuel scheme tends to reduce the precipitation amount, it improves precipitation over regions suffering from a systematic wet bias.
Implementation of the high-order schemes QUICK and LECUSSO in the COMMIX-1C Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, K.; Sun, J.G.; Sha, W.T.
Multidimensional analysis computer programs based on the finite volume method, such as COMMIX-1C, have been commonly used to simulate thermal-hydraulic phenomena in engineering systems such as nuclear reactors. In COMMIX-1C, first-order schemes in both space and time are used. In many situations, however, such as flow recirculations and stratifications with steep gradients in the velocity and temperature fields, high-order difference schemes are necessary for an accurate prediction of the fields. For these reasons, two second-order finite difference schemes, QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and LECUSSO (Local Exact Consistent Upwind Scheme of Second Order), have been implemented in the COMMIX-1C computer code. The formulations were derived for general three-dimensional flows with nonuniform grid sizes. Numerical oscillation analyses for QUICK and LECUSSO were performed. To damp the unphysical oscillations which occur in calculations with high-order schemes at high mesh Reynolds numbers, a new FRAM (Filtering Remedy and Methodology) scheme was developed and implemented. To be consistent with the high-order schemes, the pressure equation and the boundary conditions for all the conservation equations were also modified to be of second order. The new capabilities in the code are listed. Test calculations were performed to validate the implementation of the high-order schemes; they include the one-dimensional nonlinear Burgers equation, two-dimensional scalar transport in two impinging streams, von Kármán vortex shedding, shear-driven cavity flow, Couette flow, and circular pipe flow. The calculated results were compared with available data; the agreement is good.
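For orientation, the QUICK face interpolation has a simple closed form. The sketch below shows it for 1D advection at positive velocity on a uniform periodic grid; COMMIX-1C applies the scheme in three dimensions on nonuniform grids, so this is an illustration of the stencil, not the code's implementation.

```python
# QUICK face value for flow left-to-right: two upstream nodes, one downstream:
#   phi_face = 6/8 phi_C + 3/8 phi_D - 1/8 phi_U

import numpy as np

def quick_faces(phi):
    """East-face values on a uniform periodic grid, velocity > 0."""
    phi_U = np.roll(phi, 1)    # far upstream node (i-1)
    phi_C = phi                # upstream node adjacent to the face (i)
    phi_D = np.roll(phi, -1)   # downstream node (i+1)
    return 0.75 * phi_C + 0.375 * phi_D - 0.125 * phi_U

def advect(phi, c):
    """One explicit step of d(phi)/dt + u d(phi)/dx = 0, Courant number c."""
    f = quick_faces(phi)                   # flux through east faces
    return phi - c * (f - np.roll(f, 1))   # east minus west face
```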
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Bender, S.R.
Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.
Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walko, Robert
2016-11-07
The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.
NASA Astrophysics Data System (ADS)
Gómez-Aguilar, J. F.
2018-03-01
In this paper, we analyze an alcoholism model which incorporates the impact of Twitter via Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives of constant and variable order. Two fractional mathematical models are considered, with and without delay. Special solutions were obtained using an iterative scheme via the Laplace and Sumudu transforms. We studied the existence and uniqueness of the solutions employing the fixed-point theorem. The generalized variable-order model was solved numerically via the Adams method and the Adams-Bashforth-Moulton scheme. Stability and convergence of the numerical solutions are presented in detail, and numerical examples of the approximate solutions show that the numerical methods are computationally efficient. By including both fractional derivatives and finite time delays, we believe the model gives a more complete and more realistic description of alcoholism and of the spread of drinking.
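For readers unfamiliar with the numerical machinery, a compact sketch of the fractional Adams-Bashforth-Moulton predictor-corrector (in the standard Diethelm-Ford-Freed form, for the scalar case with constant order 0 < alpha <= 1) is given below; the test equation at the end is purely illustrative and is not the paper's alcoholism model.

```python
# Sketch: predictor-corrector Adams-Bashforth-Moulton for D^alpha y = f(t, y)
# with a Caputo derivative, y(0) = y0, on a uniform grid of step h.

from math import gamma
import numpy as np

def fabm(f, y0, alpha, h, n_steps):
    y = np.empty(n_steps + 1); y[0] = y0
    fy = np.empty(n_steps + 1); fy[0] = f(0.0, y0)
    c1, c2 = h**alpha / gamma(alpha + 1), h**alpha / gamma(alpha + 2)
    for n in range(n_steps):
        j = np.arange(n + 1)
        b = (n + 1 - j)**alpha - (n - j)**alpha            # predictor weights
        yp = y0 + c1 * np.dot(b, fy[:n + 1])               # predictor step
        a = ((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
             - 2.0 * (n - j + 1)**(alpha + 1))             # corrector weights
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        t1 = (n + 1) * h
        y[n + 1] = y0 + c2 * (np.dot(a, fy[:n + 1]) + f(t1, yp))
        fy[n + 1] = f(t1, y[n + 1])
    return y

# Illustrative use: fractional relaxation D^0.9 y = -y, y(0) = 1.
sol = fabm(lambda t, y: -y, 1.0, 0.9, 0.01, 1000)
```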
Optimal variable-grid finite-difference modeling for porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Li, Haishan
2014-12-01
Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid spacing and time step for seismic modeling in porous media. FD operators with small grid spacing and time step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid spacing and time step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane-wave theory, and the FD coefficients were then obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method attains higher accuracy at lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
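The Taylor-expansion route to FD coefficients mentioned above amounts to solving a small linear system. A generic sketch follows (the solver-based formulation and the example stencil are illustrative; the paper derives its coefficients from the dispersion relation of the staggered grid).

```python
# Sketch: derive FD weights by matching Taylor expansions. Given stencil
# offsets (in units of the grid spacing h), solve a Vandermonde-type system so
# the stencil reproduces the requested derivative to the stencil's order.

import numpy as np
from math import factorial

def fd_coefficients(offsets, deriv=1):
    """Weights w with sum_i w[i] f(x + offsets[i]*h) ~ h^deriv f^(deriv)(x)."""
    n = len(offsets)
    A = np.array([[x**p / factorial(p) for x in offsets] for p in range(n)])
    rhs = np.zeros(n); rhs[deriv] = 1.0
    return np.linalg.solve(A, rhs)

# Fourth-order staggered-grid stencil for the first derivative, evaluated
# halfway between nodes (offsets in units of h):
w = fd_coefficients([-1.5, -0.5, 0.5, 1.5])
# -> approximately [1/24, -9/8, 9/8, -1/24]
```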
NASA Technical Reports Server (NTRS)
Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.
1983-01-01
A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. The principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integrating stress in time, it has been demonstrated that classical schemes such as Euler and Runge-Kutta may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed, and their potential for suppressing frame-dependence of the numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
Signalling and obfuscation for congestion control
NASA Astrophysics Data System (ADS)
Mareček, Jakub; Shorten, Robert; Yu, Jia Yuan
2015-10-01
We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions.
NASA Astrophysics Data System (ADS)
Nicholls, Stephen D.; Decker, Steven G.; Tao, Wei-Kuo; Lang, Stephen E.; Shi, Jainn J.; Mohr, Karen I.
2017-03-01
This study evaluated the impact of five single- or double-moment bulk microphysics schemes (BMPSs) on Weather Research and Forecasting model (WRF) simulations of seven intense wintertime cyclones impacting the mid-Atlantic United States. Five-day-long WRF simulations were initialized roughly 24 h prior to the onset of coastal cyclogenesis off the North Carolina coastline. In all, 35 model simulations (five BMPSs and seven cases) were run and their associated microphysics-related storm properties (hydrometeor mixing ratios, precipitation, and radar reflectivity) were evaluated against model analysis and available gridded radar and ground-based precipitation products. Inter-BMPS comparisons of column-integrated mixing ratios and mixing ratio profiles reveal little variability in non-frozen hydrometeor species due to their shared programming heritage, yet their assumptions concerning snow and graupel intercepts, ice supersaturation, snow and graupel density maps, and terminal velocities led to considerable variability in both simulated frozen hydrometeor species and radar reflectivity. WRF-simulated precipitation fields exhibit minor spatiotemporal variability amongst BMPSs, yet their spatial extent is largely conserved. Compared to ground-based precipitation data, WRF simulations demonstrate low-to-moderate (0.217-0.414) threat scores and a rainfall distribution shifted toward higher values. Finally, an analysis of WRF and gridded radar reflectivity data via contoured frequency with altitude diagrams (CFADs) reveals notable variability amongst BMPSs, where better performing schemes favored lower graupel mixing ratios and better underlying aggregation assumptions.
Sahani, Ramesh
2013-10-01
The aim of this study is to examine the impact of forced settlement on the foraging Onges, which induced them to change from full-time foraging to a settled consumer economy. Anthropometric and dietary data were examined before and after forced settlement. The anthropometric variables and indices show a gradual increase among Onge males, though much less so among females, and a high prevalence of overweight and obesity is reported. Comparison with other Andaman Islanders indicates that the group placed under developmental schemes (forced to settle) shows higher mean values of anthropometric variables, with greater prevalence of overweight and obesity, than the group not under the influence of a developmental programme. Their dietary pattern and physical activity changed to a great extent: the protein content of the diet fell significantly, while fat and carbohydrates increased substantially. The contribution of protein to calories, once above 30%, has been reduced to only around 10%. Caloric intake more than doubled, while physical activity roughly halved. Decreased mobility and altered food habits are the probable reasons for the gradual increase in body dimensions and the prevalent overweight and obesity, which are outcomes of forced settlement. Copyright © 2013 Elsevier GmbH. All rights reserved.
Validation of Microphysical Schemes in a CRM Using TRMM Satellite
NASA Astrophysics Data System (ADS)
Li, X.; Tao, W.; Matsui, T.; Liu, C.; Masunaga, H.
2007-12-01
The microphysical scheme in the Goddard Cumulus Ensemble (GCE) model has been its most heavily developed component over the past decade. The cloud-resolving model now has microphysical schemes ranging from the original Lin-type bulk scheme, to improved bulk schemes, to a two-moment scheme, to a detailed bin spectral scheme. Even with the most sophisticated bin scheme, many uncertainties still exist, especially in the ice-phase microphysics. In this study, we take advantage of the long-term TRMM observations, especially the cloud profiles observed by the precipitation radar (PR), to validate microphysical schemes in simulations of mesoscale convective systems (MCSs). Two contrasting cases are studied: a midlatitude summertime continental MCS with leading convection and a trailing stratiform region, and an oceanic MCS in the tropical western Pacific. The simulated cloud structures and particle sizes are fed into a forward radiative transfer model to simulate the TRMM satellite sensors, i.e., the PR, the TRMM microwave imager (TMI) and the visible and infrared scanner (VIRS). MCS cases that match the structure and strength of the simulated systems over the 10-year period are used to construct statistics for the different sensors. These statistics are then compared with the synthetic satellite data obtained from the forward radiative transfer calculations. It is found that the GCE model simulates the contrast between the continental and oceanic cases reasonably well, with less ice scattering in the oceanic case than in the continental case. However, the simulated ice-scattering signals for both PR and TMI are generally stronger than the observations, especially for the bulk scheme and at upper levels in the stratiform region, indicating larger, denser snow/graupel particles at these levels. Adjusting the microphysical schemes in the GCE model according to the observations, especially the 3D cloud structure observed by the TRMM PR, results in much better agreement.
Computerized planning of prostate cryosurgery using variable cryoprobe insertion depth.
Rossi, Michael R; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2010-02-01
The current study presents a computerized planning scheme for prostate cryosurgery using a variable insertion depth strategy. This study is a part of an ongoing effort to develop computerized tools for cryosurgery. Based on typical clinical practices, previous automated planning schemes have required that all cryoprobes be aligned at a single insertion depth. The current study investigates the benefit of removing this constraint, in comparison with results based on uniform insertion depth planning as well as the so-called "pullback procedure". Planning is based on the so-called "bubble-packing method", and its quality is evaluated with bioheat transfer simulations. This study is based on five 3D prostate models, reconstructed from ultrasound imaging, and cryoprobe active length in the range of 15-35 mm. The variable insertion depth technique is found to consistently provide superior results when compared to the other placement methods. Furthermore, it is shown that both the optimal active length and the optimal number of cryoprobes vary among prostate models, based on the size and shape of the target region. Due to its low computational cost, the new scheme can be used to determine the optimal cryoprobe layout for a given prostate model in real time. Copyright 2008 Elsevier Inc. All rights reserved.
Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Gyongyosi, Laszlo; Imre, Sandor
2015-03-01
We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by a sophisticated sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels; this information is then used in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the terms single and collective adaptive quadrature detection, and we extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.
Living with a large reduction in permitted loading by using a hydrograph-controlled release scheme
Conrads, P.A.; Martello, W.P.; Sullins, N.R.
2003-01-01
The Total Maximum Daily Load (TMDL) for ammonia and biochemical oxygen demand for the Pee Dee, Waccamaw, and Atlantic Intracoastal Waterway system near Myrtle Beach, South Carolina, mandated a 60-percent reduction in point-source loading. For waters with naturally low background dissolved-oxygen concentrations, the South Carolina anti-degradation rules in the water-quality regulations allow a permitted discharger a reduction of dissolved oxygen of 0.1 milligram per liter (mg/L). This is known as the "0.1 rule." Permitted dischargers within this region of the State operate under the "0.1 rule" and cannot cause a cumulative impact greater than 0.1 mg/L on dissolved-oxygen concentrations. For municipal water-reclamation facilities to serve the rapidly growing resort and retirement community near Myrtle Beach, a variable-loading scheme was developed to allow dischargers to utilize the increased assimilative capacity during higher streamflow conditions while still meeting the requirements of the recently established TMDL. As part of the TMDL development, an extensive real-time data-collection network was established in the lower Waccamaw and Pee Dee River watershed, where continuous measurements of streamflow, water level, dissolved oxygen, temperature, and specific conductance are collected. In addition, the dynamic BRANCH/BLTM models were calibrated and validated to simulate the water quality and tidal dynamics of the system, and the assimilative capacities for various streamflows were analyzed. The variable-loading scheme established total loadings for three streamflow levels. Model simulations show that the additional loading produces less than a 0.1 mg/L reduction in dissolved oxygen. As part of the loading scheme, the real-time network was redesigned to monitor streamflow entering the study area and water-quality conditions at the locations of dissolved-oxygen "sags." The study shows how one group of permit holders used a variable-loading scheme to implement restrictive permit limits without incurring prohibitive capital expenditures or initiating a lengthy appeals process.
[Clinical reasoning in undergraduate nursing education: a scoping review].
Menezes, Sáskia Sampaio Cipriano de; Corrêa, Consuelo Garcia; Silva, Rita de Cássia Gengo E; Cruz, Diná de Almeida Monteiro Lopes da
2015-12-01
This study aimed to analyze the current state of knowledge on clinical reasoning in undergraduate nursing education. A systematic scoping review was conducted through a search strategy applied to the MEDLINE database, with the retrieved material analyzed by data extraction performed by two independent reviewers. The extracted data were analyzed and synthesized in a narrative manner. From the 1380 citations retrieved in the search, 23 were kept for review and their contents were summarized into five categories: 1) the experience of developing critical thinking/clinical reasoning/decision-making; 2) teaching strategies related to the development of critical thinking/clinical reasoning/decision-making; 3) measurement of variables related to critical thinking/clinical reasoning/decision-making; 4) relationships among variables involved in critical thinking/clinical reasoning/decision-making; and 5) theoretical models of the development of critical thinking/clinical reasoning/decision-making in students. The biggest challenge in developing knowledge on teaching clinical reasoning seems to be achieving consistency between theoretical perspectives on the development of clinical reasoning and the methodologies, methods, and procedures used in research initiatives in this field.
NASA Astrophysics Data System (ADS)
Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.
2015-10-01
Temporal availability of actual evapotranspiration for grapes is an emerging issue, since vineyard farms are increasingly converted from rainfed to irrigated agricultural systems. The manuscript aims to verify the accuracy of actual evapotranspiration retrieval by coupling a single-source energy balance approach with two different temporal upscaling schemes. The first scheme tests the temporal upscaling of the main input variables, namely NDVI, albedo and LST; the second tests the temporal upscaling of the energy balance output, the actual evapotranspiration. The temporal upscaling schemes were applied to: i) airborne remote sensing data acquired monthly during a whole irrigation season over a Sicilian vineyard; ii) low-resolution MODIS products released daily or weekly; iii) meteorological data acquired by standard gauge stations. Daily MODIS LST products (MOD11A1) were disaggregated using the DisTrad model, 8-day black- and white-sky albedo products (MCD43A) were used to model the total albedo, and 8-day NDVI products (MOD13Q1) were modeled using the Fisher approach. Results were validated in both time and space. The temporal validation used actual evapotranspiration measured in situ by a flux tower through the eddy covariance technique; the spatial validation involved airborne images acquired at different times from June to September 2008. The results test whether upscaling the energy balance inputs or its output performs better.
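A common output-upscaling device in this context, shown below as an assumption rather than necessarily the authors' exact formulation, is evaporative-fraction self-preservation: EF = LE/(Rn - G) at the satellite overpass is taken as constant through the day, so daily ET follows from EF and the daily available energy.

```python
# Sketch: daily ET from instantaneous fluxes via evaporative-fraction
# self-preservation. Input values below are invented for illustration.

LAMBDA = 2.45e6                     # latent heat of vaporization, J/kg

def daily_et_mm(le_inst, rn_inst, g_inst, rn_daily_mj, g_daily_mj):
    """Daily ET (mm/day) from instantaneous fluxes (W/m2) at overpass."""
    ef = le_inst / (rn_inst - g_inst)            # evaporative fraction
    avail_j = (rn_daily_mj - g_daily_mj) * 1e6   # daily available energy, J/m2
    return ef * avail_j / LAMBDA                 # kg/m2 of water == mm

et = daily_et_mm(le_inst=180.0, rn_inst=520.0, g_inst=60.0,
                 rn_daily_mj=14.0, g_daily_mj=1.0)
```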
NASA Astrophysics Data System (ADS)
Xu, Li; Weng, Peifen
2014-02-01
An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme, combined with the moving overset grid technique, has been developed to compute unsteady compressible viscous flows over a helicopter rotor in forward flight. In order to handle the periodic rotation and pitching of the rotor and the relative motion between rotor blades, the moving overset grid technique is extended, and a special judgement criterion is introduced near the surface of the blade grid during the search for donor cells using the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method with three layers of fringes for hole boundaries and artificial external boundaries is proposed to exchange flow information between the chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results for a fixed-pitch rotor and a periodically varying-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and reaches periodic solutions quickly.
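The WENO-Z reconstruction itself is compact enough to sketch. The version below follows the standard Borges et al. formulation that WENO-Z schemes build on; the power p = 2 and the epsilon are common choices, not values taken from this paper.

```python
# Sketch: fifth-order WENO-Z reconstruction at a cell face (left-biased).

import numpy as np

def weno_z(f):
    """f = (f[i-2], f[i-1], f[i], f[i+1], f[i+2]) -> value at face i+1/2."""
    fm2, fm1, f0, fp1, fp2 = f
    eps, p = 1e-40, 2
    # Jiang-Shu smoothness indicators of the three candidate stencils
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    tau5 = abs(b0 - b2)                      # global indicator (the "Z" part)
    d = np.array([0.1, 0.6, 0.3])            # ideal linear weights
    alpha = d * (1.0 + (tau5 / (np.array([b0, b1, b2]) + eps))**p)
    w = alpha / alpha.sum()                  # nonlinear WENO-Z weights
    # Third-order candidate reconstructions on each stencil
    q = np.array([(2*fm2 - 7*fm1 + 11*f0) / 6,
                  (-fm1 + 5*f0 + 2*fp1) / 6,
                  (2*f0 + 5*fp1 - fp2) / 6])
    return float(w @ q)
```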
Tegotae-based decentralised control scheme for autonomous gait transition of snake-like robots.
Kano, Takeshi; Yoshizawa, Ryo; Ishiguro, Akio
2017-08-04
Snakes change their locomotion patterns in response to the environment. This ability motivates the development of snake-like robots with highly adaptive functionality. In this study, a decentralised control scheme for snake-like robots that exhibits autonomous gait transition (i.e. the transition between concertina locomotion in narrow aisles and scaffold-based locomotion on unstructured terrain) was developed, and the control scheme was validated via simulations. A key insight is that these locomotion patterns are not preprogrammed but emerge by exploiting Tegotae, a concept that describes the extent to which a perceived reaction matches a generated action. Unlike previously proposed local reflexive mechanisms, the Tegotae-based feedback mechanism enabled the robot to 'selectively' exploit environments beneficial for propulsion and generated reasonable locomotion patterns. The results of this study can form the basis for designing robots that work in unpredictable and unstructured environments.
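A heavily simplified, hypothetical illustration of a Tegotae-style rule on a single phase oscillator is sketched below: the Tegotae value couples what a unit "intends" (its phase) with the reaction it senses, and the phase is nudged in the direction that increases Tegotae. The coupling form is an assumption borrowed from related Tegotae work on legged robots, not this paper's snake controller.

```python
# Sketch: one Euler step of a Tegotae-modulated phase oscillator.
# With T = reaction * sin(phi), the feedback term below is sigma * dT/dphi.

import math

def phase_step(phi, omega, sigma, reaction, dt):
    """d(phi)/dt = omega + sigma * reaction * cos(phi)  (illustrative form)."""
    return phi + dt * (omega + sigma * reaction * math.cos(phi))
```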
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a three-dimensional, pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization; it is based on second- and fourth-order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties, such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve, agreed reasonably well with the measurements. A parametric study of the effects of grid resolution, turbulence model, inlet boundary condition, and difference scheme for the convective terms has been performed. The results showed that grid resolution has a strong influence on the accuracy of the base flowfield prediction.
MEDICINAL CANNABIS LAW REFORM: LESSONS FROM CANADIAN LITIGATION.
Freckelton, Ian
2015-06-01
This editorial reviews medicinal cannabis litigation in Canada's superior courts between 1998 and 2015. It reflects upon the outcomes of the decisions and the reasoning within them, and identifies the issues that have driven Canada's jurisprudence in relation to access to medicinal cannabis, particularly insofar as it has engaged patients' rights to liberty and security of the person. It argues that the sequence of medicinal schemes adopted and refined in Canada provides constructive guidance for countries such as Australia that are contemplating the introduction of medicinal cannabis as a therapeutic option in compassionate circumstances. In particular, it contends that Canada's experience suggests that such schemes are most likely to be accepted by key stakeholders when they are introduced in a gradualist way, enable informed involvement by medical practitioners and pharmacists, and provide safe and inexpensive access to forms of medicinal cannabis that are clearly distinguished from recreational use and unlikely to be diverted criminally.
Alexandrowicz, Rainer W; Friedrich, Fabian; Jahn, Rebecca; Soulier, Nathalie
2015-01-01
The present study compares the 30-, 20-, and 12-item versions of the General Health Questionnaire (GHQ) in the original coding and four different recoding schemes (Bimodal, Chronic, Modified Likert, and a newly proposed Modified Chronic) with respect to their psychometric qualities. The dichotomized versions (i.e. Bimodal, Chronic, and Modified Chronic) were evaluated with the Rasch model, while the polytomous original version and the Modified Likert version were evaluated with the Partial Credit Model. In general, the versions under consideration agreed with the model assumptions, although the recoded versions exhibited some deficits with respect to the Outfit index. Because of these item deficits, and for theoretical reasons, we argue in favor of using any of the three length versions with the original four-category coding scheme. Nevertheless, any of the versions appears apt for clinical use from a psychometric perspective.
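For orientation, the commonly used GHQ recodings of the four response categories are easy to state; the sketch below uses the standard literature definitions of the Likert, bimodal, and chronic (CGHQ) codings. The paper's "Modified Likert" and newly proposed "Modified Chronic" codings are not spelled out in the abstract and are deliberately omitted.

```python
# Sketch: standard GHQ item recodings (response categories 0-3).

LIKERT  = {0: 0, 1: 1, 2: 2, 3: 3}            # original four-category scoring
BIMODAL = {0: 0, 1: 0, 2: 1, 3: 1}            # GHQ "case" scoring
CHRONIC_POSITIVE = {0: 0, 1: 0, 2: 1, 3: 1}   # CGHQ, positively worded items
CHRONIC_NEGATIVE = {0: 0, 1: 1, 2: 1, 3: 1}   # CGHQ, negatively worded items

def score(responses, scheme):
    """Total GHQ score for a list of item responses under one coding scheme."""
    return sum(scheme[r] for r in responses)

total = score([0, 1, 2, 3, 3], BIMODAL)   # -> 3
```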
Estimating the parasitaemia of Plasmodium falciparum: experience from a national EQA scheme
2013-01-01
Background: To examine performance in the identification and estimation of the percentage parasitaemia of Plasmodium falciparum in stained blood films distributed in the UK National External Quality Assessment Scheme (UKNEQAS) Blood Parasitology Scheme. Methods: Performance in the diagnosis and estimation of the percentage parasitaemia of P. falciparum in Giemsa-stained thin blood films was analysed over a 15-year period to look for trends. Results: On average, 25% of participants failed to estimate the percentage parasitaemia, 17% overestimated and 8% underestimated, whilst 5% misidentified the malaria species present. Conclusions: Although the results achieved by participants for other blood parasites have shown an overall improvement, the level of performance for estimation of the parasitaemia of P. falciparum has remained unchanged over 15 years. Possible reasons include incorrect calculation, not examining the correct part of the film, and not examining an adequate number of microscope fields.
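Since incorrect calculation is one cited failure mode, it is worth noting how simple the underlying arithmetic is. A minimal sketch follows; the counts are invented for illustration.

```python
# Sketch: percentage parasitaemia from cell counts on a thin blood film.

def percent_parasitaemia(infected_rbc, total_rbc):
    """Percentage of red cells parasitised on a thin film."""
    return 100.0 * infected_rbc / total_rbc

# e.g. 38 infected cells among 2000 red cells counted across several fields
p = percent_parasitaemia(38, 2000)   # -> 1.9 (%)
```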
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the “Database as a Service” model, where confidentiality and privacy are important issues for the client. Existing encryption approaches, however, are vulnerable to statistical attacks because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued Order Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to multiple different values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values, allowing comparison operations to be applied directly to encrypted data. Using a calculated distance (range), we propose a novel method that allows join queries between relations based on inequality over encrypted values. We also present techniques to offload query execution to the database server as much as possible, thereby making better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems, as it is designed to work with existing indexing structures. It is robust against statistical attacks and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
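A toy sketch of the core idea as described may be useful: encrypt each plaintext to one of many ciphertext values while preserving order between distinct plaintexts. The fixed disjoint intervals below are an illustrative stand-in; the real scheme derives keyed, randomized interval boundaries and supports range-based joins, neither of which is shown.

```python
# Sketch: multivalued order-preserving encryption via disjoint per-value
# ciphertext intervals (illustrative construction, not the MV-OPES keying).

import secrets

BUCKET = 1_000_000   # width of the ciphertext interval per plaintext value

def encrypt(v: int) -> int:
    """Map v to a random point in its private interval [v*BUCKET, (v+1)*BUCKET)."""
    return v * BUCKET + secrets.randbelow(BUCKET)

def decrypt(c: int) -> int:
    return c // BUCKET

a, b = encrypt(42), encrypt(42)   # same plaintext, (almost surely) different
                                  # ciphertexts: frequency analysis is defeated
assert encrypt(41) < encrypt(42) < encrypt(43)   # order survives encryption,
                                  # so inequality comparisons run server-side
```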
Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit
2013-11-01
The next generation of QbD-based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with efficient model-based feedback control systems, to achieve precise control of process variables, so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays in a number of process variables due to sensor placement, process equipment dimensions, and the flow characteristics of the solid material. A simple regulatory feedback control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control mandated by regulatory authorities. The process presented herein comprises coupled dynamics involving slow and fast responses, indicating the need for a hybrid control scheme such as a combined MPC-PID scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid MPC-PID scheme. An effective controller-parameter tuning strategy, involving an ITAE method coupled with an optimization strategy, has been used to tune both the MPC and the PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet simulated in gPROMS (Process Systems Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared with PID-only or MPC-only schemes, illustrating the potential of a hybrid control scheme for improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.
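The regulatory layer of such a hybrid scheme is PID; a minimal positional-form discrete PID is sketched below, with gains, sample time, and the setpoint being hypothetical values. The supervisory MPC layer, which would cascade onto or retune loops like this one, is beyond a short sketch.

```python
# Sketch: positional-form discrete PID, the regulatory layer under the MPC.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, None

    def update(self, setpoint, measurement):
        """Return the control output for the current sample."""
        e = setpoint - measurement
        self.integral += e * self.dt
        d = 0.0 if self.prev_error is None else (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * d

pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=1.0)          # hypothetical tuning
u = pid.update(setpoint=100.0, measurement=92.5)    # e.g. a press-force loop
```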
The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.
2008-01-01
Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two components of simulated convection on a large domain: sensitivities of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and to determine if these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2016-04-01
In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
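A minimal sketch of the probabilistic metric named above: the CRPS for a single ensemble forecast, via the standard kernel estimator CRPS = E|X − y| − ½E|X − X′|, where X and X′ are independent draws from the ensemble and y is the verifying observation. The ensemble values below are synthetic.

```python
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """Kernel (energy-form) CRPS estimator for one forecast/observation pair."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

ensemble = np.random.default_rng(0).normal(20.0, 0.5, size=60)  # e.g. 60 members
print(crps_ensemble(ensemble, obs=20.3))  # lower is better; 0 is a perfect point
```

Averaging this score over space and time, and decomposing reliability further with the RCRV, follows the validation chain the abstract describes.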
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.
2016-12-01
A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate water balance at a catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are distributed cross sections or equivalent cross sections (ECS) delineated in first-order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of a first-order sub-basin to further reduce computational time in SMART. Previous investigations using SMART have shown that temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach. However, spatial variability of soil moisture within a given HRU is ignored. Here, we examined a number of disaggregation schemes for soil moisture distribution in each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model that is coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work is focused on assessing the performance of this scheme in catchments with various topographic and climate settings.
LDFT-based watermarking resilient to local desynchronization attacks.
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong
2013-12-01
Designing a watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that the robust features used for watermark synchronization are only globally invariant rather than locally invariant. In this paper, we present a blind image watermarking resynchronization scheme against local transform attacks. First, we propose a new feature transform named local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, the binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transform, local transform, and cropping. Lastly, the watermarking sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation (QIM) embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
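A minimal sketch of the embedding primitive named at the end of the abstract: logarithmic quantization index modulation (QIM), where a pixel value is moved to the log domain and snapped to one of two interleaved quantizer lattices, one per watermark bit. The step size is illustrative; the BSP-tree partitioning and LDFT feature extraction are omitted.

```python
import numpy as np

DELTA = 0.08  # quantizer step in the log domain (illustrative)

def qim_embed(value: float, bit: int) -> float:
    x = np.log(value)
    d = bit * DELTA / 2.0                         # per-bit lattice offset
    return float(np.exp(np.round((x - d) / DELTA) * DELTA + d))

def qim_detect(value: float) -> int:
    x = np.log(value)
    dist = [abs(x - (np.round((x - b * DELTA / 2) / DELTA) * DELTA + b * DELTA / 2))
            for b in (0, 1)]
    return int(np.argmin(dist))                   # the closer lattice wins

marked = qim_embed(123.0, bit=1)
print(marked, qim_detect(marked))                 # recovers bit 1
```

Working in the log domain makes the quantization step multiplicative, which gives the embedding a degree of robustness to valumetric scaling of pixel intensities.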
Rationale for constructing waste-disposal plants at existing enterprises
NASA Astrophysics Data System (ADS)
Strelkov, Alexander; Teplykh, Svetlana; Gorshkalev, Pavel; Proshina, Elizaveta
2017-10-01
The Federal State Statistics Service of the Republic of Tatarstan collected data on registered organizations involved in the fabrication and dyeing of fur. This paper describes the wastewater characteristics of an existing enterprise, LLC “Melita”. This enterprise is a factory with a complete technological cycle, from fur manufacture and design through to sale. The maximum capacity of the factory is 1800 skins per day (excluding fur), and its average productivity is 1000 skins per day. A thorough examination of possible methods and schemes for purifying fur-production wastewater showed that it is most reasonable to use technological schemes that include mechanical, physico-chemical and biological purification stages. As a result, the study produced a new technological scheme for industrial wastewater purification. This scheme uses a complex of barium chloride and sodium hydrocarbonate as reagents. For LLC “Melita”, the receiving water body is the Volga River. Water quality indicators are taken from the data of the FGBU “Hydrometeorology and Environmental Monitoring Office”. The research also calculates allowable discharge rates and environmental charges for the city sewer networks and receiving waters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick
2014-09-15
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and varied the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.
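A minimal sketch of the thermometry computation underlying all of these sampling schemes: the proton resonance frequency (PRF) shift maps a phase difference between a dynamic image and a baseline to a temperature change, ΔT = Δφ / (2π · γ · α · B0 · TE), with α ≈ −0.01 ppm/°C. The field strength and echo time below are assumed values, not those of the paper.

```python
import numpy as np

GAMMA = 42.577e6   # proton gyromagnetic ratio [Hz/T]
ALPHA = -0.01e-6   # PRF thermal coefficient [1/degC]
B0 = 3.0           # main field [T] (assumed)
TE = 0.010         # echo time [s] (assumed)

def delta_temperature(phase: np.ndarray, phase_baseline: np.ndarray) -> np.ndarray:
    """Temperature change map [degC] from a phase-difference map [rad]."""
    dphi = phase - phase_baseline
    return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

print(delta_temperature(np.array([-0.12]), np.array([0.0])))  # ~ +15 degC
```

Subsampling schemes matter precisely because any bias they introduce into the reconstructed phase propagates linearly into this temperature estimate.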
Robust Integration Schemes for Generalized Viscoplasticity with Internal-State Variables
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Li, W.; Wilt, Thomas E.
1997-01-01
The scope of the work in this presentation focuses on the development of algorithms for the integration of rate-dependent constitutive equations. In view of their robustness, i.e., their superior stability and convergence properties for isotropic and anisotropic coupled viscoplastic-damage models, implicit integration schemes have been selected. The chosen scheme is the simplest in its class and is one of the most widely used implicit integrators at present.
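A minimal sketch of a fully implicit (backward Euler) update of the kind discussed, applied to a scalar Perzyna-type viscoplastic model; the material constants are illustrative and this is not the presentation's multiaxial formulation.

```python
import numpy as np

E, SIG_Y, ETA = 200e3, 250.0, 1e4  # modulus [MPa], yield stress [MPa], viscosity

def flow_rate(sig: float) -> float:
    """Perzyna overstress flow rule: viscoplastic flow only above yield."""
    return max(abs(sig) - SIG_Y, 0.0) / ETA * np.sign(sig)

def backward_euler_step(eps_total, eps_vp_old, dt, tol=1e-10, max_iter=50):
    """Newton solve of r(x) = x - eps_vp_old - dt*flow_rate(E*(eps_total-x)) = 0."""
    x = eps_vp_old
    for _ in range(max_iter):
        sig = E * (eps_total - x)
        r = x - eps_vp_old - dt * flow_rate(sig)
        drdx = 1.0 + dt * (E / ETA if abs(sig) > SIG_Y else 0.0)
        x_new = x - r / drdx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

eps_vp, dt = 0.0, 0.05                   # deliberately large, stable time step
for n in range(1, 101):                  # constant applied strain rate
    eps_vp = backward_euler_step(1e-4 * n, eps_vp, dt)
print(eps_vp)
```

The unconditional stability of the implicit update is what permits the large time step used here, which is the robustness argument made above.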
NASA Astrophysics Data System (ADS)
Ayalon, Michal; Watson, Anne; Lerman, Steve
2016-09-01
This study examines expressions of reasoning by some higher achieving 11 to 18 year-old English students responding to a survey consisting of function tasks developed in collaboration with their teachers. We report on 70 students, 10 from each of English years 7-13. Iterative and comparative analysis identified capabilities and difficulties of students and suggested conjectures concerning links between the affordances of the tasks, the curriculum, and students' responses. The paper focuses on five of the survey tasks and highlights connections between informal and formal expressions of reasoning about variables in learning. We introduce the notion of `schooled' expressions of reasoning, neither formal nor informal, to emphasise the role of the formatting tools introduced in school that shape future understanding and reasoning.
Cavanagh, Jorunn Pauline; Klingenberg, Claus; Hanssen, Anne-Merethe; Fredheim, Elizabeth Aarag; Francois, Patrice; Schrenzel, Jacques; Flægstad, Trond; Sollid, Johanna Ericson
2012-06-01
The notoriously multi-resistant Staphylococcus haemolyticus is an emerging pathogen causing serious infections in immunocompromised patients. Defining the population structure is important to detect outbreaks and the spread of antimicrobial resistant clones. Currently, the standard typing technique is pulsed-field gel electrophoresis (PFGE). In this study we describe novel molecular typing schemes for S. haemolyticus using multi locus sequence typing (MLST) and multi locus variable number of tandem repeats (VNTR) analysis. Seven housekeeping genes (MLST) and five VNTR loci (MLVF) were selected for the novel typing schemes. A panel of 45 human and veterinary S. haemolyticus isolates was investigated. The collection had diverse PFGE patterns (38 PFGE types) and was sampled over a 20-year period from eight countries. MLST resolved 17 sequence types (Simpson's index of diversity [SID] = 0.877) and MLVF resolved 14 repeat types (SID = 0.831). We found a low sequence diversity. Phylogenetic analysis clustered the isolates into three (MLST) and one (MLVF) clonal complexes, respectively. Taken together, neither the MLST nor the MLVF scheme was suitable for resolving the population structure of this S. haemolyticus collection. Future MLVF and MLST schemes will benefit from the addition of more variable core genome sequences identified by comparing different fully sequenced S. haemolyticus genomes. Copyright © 2012 Elsevier B.V. All rights reserved.
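A minimal sketch of the discriminatory-power statistic quoted above: Simpson's index of diversity, SID = 1 − Σ nⱼ(nⱼ−1) / (N(N−1)), where nⱼ is the number of isolates of type j and N is the panel size. The type assignments below are hypothetical, not the study's data.

```python
from collections import Counter

def simpsons_diversity(assignments) -> float:
    """Probability that two randomly drawn isolates have different types."""
    counts = Counter(assignments)
    n = sum(counts.values())
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical: 45 isolates resolved into 3 larger types plus 20 singletons.
types = ["ST1"] * 10 + ["ST2"] * 8 + ["ST3"] * 7 + [f"ST{i}" for i in range(4, 24)]
print(round(simpsons_diversity(types), 3))
```

Comparing such values across typing schemes, as the study does (0.877 versus 0.831), quantifies which scheme splits the collection more finely.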
Guidance and Control strategies for aerospace vehicles
NASA Technical Reports Server (NTRS)
Hibey, J. L.; Naidu, D. S.; Charalambous, C. D.
1989-01-01
A neighboring optimal guidance scheme was devised for a nonlinear dynamic system with stochastic inputs and perfect measurements, as applicable to fuel-optimal control of an aeroassisted orbital transfer vehicle. For the deterministic nonlinear dynamic system describing the atmospheric maneuver, a nominal trajectory was determined. Then, a neighboring optimal guidance scheme was obtained for open-loop and closed-loop control configurations. Taking modelling uncertainties into account, a linear, stochastic, neighboring optimal guidance scheme was devised. Finally, the optimal trajectory was approximated as the sum of the deterministic nominal trajectory and the stochastic neighboring optimal solution. Numerical results are presented for a typical vehicle. A fuel-optimal control problem in aeroassisted noncoplanar orbital transfer is also addressed. The equations of motion for the atmospheric maneuver are nonlinear, and the optimal (nominal) trajectory and control are obtained. In order to follow the nominal trajectory under actual conditions, a neighboring optimum guidance scheme is designed using linear quadratic regulator theory for onboard real-time implementation. One of the state variables is used as the independent variable in place of time. The weighting matrices in the performance index are chosen by a combination of a heuristic method and an optimal modal approach. The necessary feedback control law is obtained in order to minimize deviations from the nominal conditions.
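A minimal sketch of the linear-quadratic-regulator step of such a neighboring optimal guidance design: linearize the deviation dynamics about the nominal trajectory, pick weighting matrices, solve the Riccati equation, and feed back deviations. The double-integrator (A, B) pair and the weights below are illustrative stand-ins, not the vehicle model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],      # deviation dynamics about the nominal (assumed)
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])       # deviation penalty (heuristic choice, as above)
R = np.array([[1.0]])          # control penalty

P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)        # feedback gain: u = -K @ (x - x_nominal)
print(K)
```

Choosing Q and R by a mix of heuristics and modal criteria, as the abstract notes, is exactly the tuning freedom this formulation exposes.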
Clients' reasons for terminating psychotherapy: a quantitative and qualitative inquiry.
Roe, David; Dekel, Rachel; Harel, Galit; Fennig, Shmuel
2006-12-01
To study private-practice clients' perspective on reasons for psychotherapy termination and how these are related to demographic and treatment variables and to satisfaction with therapy. Eighty-four persons who had been in extended private-practice psychotherapy which ended in the preceding three years participated in the study. Mean number of months in treatment was 27.70 (SD = 18.70). Assessment included rating scales and open-ended questions assessing demographic variables, reasons for terminating therapy, and satisfaction with therapy. Quantitative results revealed that the most frequent reasons for termination were accomplishment of goals, circumstantial constraints and dissatisfaction with therapy, and that client satisfaction was positively related to positive reasons for termination. Qualitative results revealed two additional frequently mentioned reasons for termination: the client's need for independence and the client's involvement in new meaningful relationships. Findings suggest that psychotherapy termination may at times be required to facilitate the pursuit of personally meaningful goals.
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
NASA Astrophysics Data System (ADS)
Pagaran, Joseph; Weber, Mark; Burrows, John P.
The Sun's radiative output (total solar irradiance or TSI) determines the thermal structure of the Earth's atmosphere. Its variability is a strong function of wavelength, which drives the photochemistry and general circulation. Contributions to TSI variability from UV wavelengths below 400 nm, whether over the 27-day solar rotation or over the solar cycle, are estimated to be in the 40-60% range, based on three decades of UV and about a decade of vis-IR observations. Significant progress in the UV/vis-IR regions has been achieved with daily monitoring from SCIAMACHY aboard Envisat (ESA) since 2002 and by SIM aboard SORCE (NASA) about a year later. In this contribution, we intercompare SSI measurements from SCIAMACHY, SIM and the RGB filters of SPM/VIRGO aboard SoHO: (a) same-day spectra and (b) a few 27-day time series of spectral measurements, both as irradiance and as irradiance integrated over selected wavelength intervals. Finally, we show how SSI measurements from GOME and SOLSTICE, in addition to SCIAMACHY and SIM, can be modeled together with the solar proxies F10.7 cm, Mg II and the sunspot index (PSI) to derive daily SSI variability in the period 1947-2008. The derived variabilities are currently being used as solar input to Bremen's 3D-CTM and are recommended as an extended alternative to Berlin's FUBRaD radiation scheme. This proxy-based radiation scheme is compared with the SATIRE, NRLSSI (Lean et al.), SUSIM, SSAI (DeLand et al.) and SIP (Solar2000) models. The use of realistic, spectrally resolved solar input to CCMs serves to better understand the effects of solar variability on chemistry and temperature in the middle atmosphere over several decades.
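A minimal sketch of the proxy-modeling step: for each wavelength, regress observed spectral irradiance on the solar proxies (F10.7, Mg II, PSI) and use the fitted coefficients to reconstruct daily SSI where no measurements exist. All arrays below are random placeholders, not real observations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 500
proxies = np.column_stack([
    np.ones(n_days),            # offset term
    rng.normal(size=n_days),    # F10.7 (standardized placeholder)
    rng.normal(size=n_days),    # Mg II index (placeholder)
    rng.normal(size=n_days),    # PSI sunspot index (placeholder)
])
ssi_obs = proxies @ np.array([100.0, 0.5, 1.2, -0.8]) + rng.normal(0, 0.1, n_days)

coeffs, *_ = np.linalg.lstsq(proxies, ssi_obs, rcond=None)  # per-wavelength fit
ssi_model = proxies @ coeffs    # reconstructed daily SSI at this wavelength
print(np.round(coeffs, 2))
```

Repeating the fit wavelength by wavelength yields the spectrally resolved, proxy-based time series that can extend back to 1947, as described.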
New Approaches for DC Balanced SpaceWire
NASA Technical Reports Server (NTRS)
Kisin, Alex; Rakow, Glenn
2016-01-01
Direct Current (DC) line balanced SpaceWire is attractive for a number of reasons. Firstly, a DC line balanced interface provides the ability to isolate the physical layer with either a transformer or a capacitor, achieving higher common mode voltage rejection and/or complete galvanic isolation in the case of a transformer. Secondly, it makes it possible to halve the number of conductors and transceivers in the classical SpaceWire interface by eliminating the Strobe line. Depending on the modulation scheme and the Field Programmable Gate Array (FPGA) decoder design, the clock-data-recovery frequency requirement may be only twice the transmit clock, or may even match it. In this paper, several different implementation scenarios will be discussed. Two of these scenarios are backward compatible with the existing SpaceWire hardware standards except for changes at the character level. Three other scenarios, while halving the standard SpaceWire hardware components, will require changes at both the character and signal levels and work at fixed rates. Other scenarios with variable data rates will require an additional SpaceWire interface handshake initialization sequence.
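A minimal sketch of the bookkeeping any DC-balanced code must satisfy, not the paper's character-level encoding: track the running disparity (ones minus zeros transmitted) and require that it stays bounded, which is what keeps the line's DC component near zero and transformer coupling viable.

```python
def running_disparity(bits):
    """Cumulative ones-minus-zeros after each transmitted bit."""
    rd, trace = 0, []
    for b in bits:
        rd += 1 if b else -1
        trace.append(rd)
    return trace

def is_dc_balanced(bits, bound: int = 3) -> bool:
    return all(abs(rd) <= bound for rd in running_disparity(bits))

print(is_dc_balanced([1, 0, 1, 0, 1, 1, 0, 0]))  # True: disparity stays small
print(is_dc_balanced([1, 1, 1, 1, 1, 1, 0, 0]))  # False: drifts to +6
```

An encoder typically enforces such a bound by choosing between complementary code words according to the current disparity.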
Automatic efficiency optimization of an axial compressor with adjustable inlet guide vanes
NASA Astrophysics Data System (ADS)
Li, Jichao; Lin, Feng; Nie, Chaoqun; Chen, Jingyi
2012-04-01
The attack angle at the rotor blade inlet can be adjusted by changing the stagger angle of the inlet guide vane (IGV), which affects the efficiency at each operating condition. To improve efficiency, a DSP (Digital Signal Processor) controller is designed to adjust the stagger angle of the IGV automatically, in order to optimize the efficiency at any operating condition. The A/D signal collection includes inlet static pressure, outlet static pressure, outlet total pressure, rotor speed and torque; the efficiency is calculated in the DSP, and the angle signal for the stepping motor which controls the IGV is sent out from the D/A. Experimental investigations are performed in a three-stage, low-speed axial compressor with variable inlet guide vanes. It is demonstrated that the DSP can adjust the stagger angle of the IGV online, so that the efficiency under different conditions can be optimized. This DSP-based online adjustment scheme may provide a practical solution for improving the performance of a multi-stage axial flow compressor when its operating condition is varied.
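A minimal sketch of the efficiency figure such a DSP could compute online from the listed measurements, assuming a low-speed (incompressible) formulation: aerodynamic power as volume flow times total-pressure rise, divided by shaft power as torque times angular speed. The inlet-area and density constants, and the flow estimate from the inlet static depression, are assumptions for illustration.

```python
import math

RHO = 1.2        # air density [kg/m^3]
A_INLET = 0.5    # inlet annulus area [m^2] (assumed)

def compressor_efficiency(p_in_static, p_out_total, p_ambient, torque, rpm):
    # inlet velocity from the static depression at the inlet
    v_in = math.sqrt(max(2.0 * (p_ambient - p_in_static) / RHO, 0.0))
    q = A_INLET * v_in                       # volume flow [m^3/s]
    dp_total = p_out_total - p_ambient       # total-pressure rise [Pa]
    omega = rpm * 2.0 * math.pi / 60.0       # shaft speed [rad/s]
    return q * dp_total / (torque * omega)   # aerodynamic over shaft power

print(compressor_efficiency(p_in_static=100_800, p_out_total=102_500,
                            p_ambient=101_325, torque=95.0, rpm=2400))
```

With this figure evaluated each cycle, the controller can step the IGV stagger angle in the direction that raises it, which is one simple way to realize the online optimization described.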
Assimilation of Cloud Information in Numerical Weather Prediction Model in Southwest China
NASA Astrophysics Data System (ADS)
HENG, Z.
2016-12-01
Based on the ARPS Data Analysis System (ADAS) and the Weather Research and Forecasting (WRF) model, simulation experiments from July 1st to August 1st, 2015 are conducted over the region of Southwest China. In the assimilation experiment (EXP), surface observations are assimilated, and cloud information retrieved from weather Doppler radar and the Fengyun-2E (FY-2E) geostationary satellite using the complex cloud analysis scheme in ADAS is used to insert microphysical variables and adjust the humidity structure in the initial condition. In the control run (CTL), surface observations are assimilated, but no cloud information is used in ADAS. The simulation of a rainstorm caused by the Southwest Vortex during 14-15 July 2015 shows that the EXP run better captures the shape and intensity of precipitation, especially the rainstorm center. The one-month inter-comparison of the initial conditions and predictions between the EXP and CTL runs revealed that the EXP runs produce more realistic rain fields and achieve higher scores in rain prediction. Keywords: NWP, rainstorm, data assimilation
ERIC Educational Resources Information Center
Schwichow, Martin; Christoph, Simon; Boone, William J.; Härtig, Hendrik
2016-01-01
The so-called control-of-variables strategy (CVS) incorporates the important scientific reasoning skills of designing controlled experiments and interpreting experimental outcomes. As CVS is a prominent component of science standards appropriate assessment instruments are required to measure these scientific reasoning skills and to evaluate the…
An experiment-based comparative study of fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing
1989-01-01
An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
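A minimal sketch of the approximate-reasoning controller idea on the cart-pole task: triangular membership functions fuzzify the pole angle and angular velocity, a small rule table fires, and a weighted average defuzzifies to a push force. The ranges, rule table, and force levels are illustrative, not those used in POLE.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

LABELS = {"N": (-1.0, -0.5, 0.0), "Z": (-0.5, 0.0, 0.5), "P": (0.0, 0.5, 1.0)}
FORCE = {"N": -10.0, "Z": 0.0, "P": 10.0}
RULES = {("N", "N"): "N", ("N", "Z"): "N", ("N", "P"): "Z",
         ("Z", "N"): "N", ("Z", "Z"): "Z", ("Z", "P"): "P",
         ("P", "N"): "Z", ("P", "Z"): "P", ("P", "P"): "P"}

def fuzzy_force(angle, ang_vel):
    num = den = 0.0
    for (la, lv), lf in RULES.items():
        w = min(tri(angle, *LABELS[la]), tri(ang_vel, *LABELS[lv]))  # rule firing
        num += w * FORCE[lf]   # weighted-average (centroid-style) defuzzification
        den += w
    return num / den if den > 0 else 0.0

print(fuzzy_force(angle=0.2, ang_vel=-0.1))  # modest corrective push
```

No plant model appears anywhere in the rules, which is the point the comparison with conventional control schemes turns on.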
NASA Astrophysics Data System (ADS)
Murawski, Aline; Bürger, Gerd; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
The use of a weather pattern based approach for downscaling of coarse, gridded atmospheric data, as usually obtained from the output of general circulation models (GCM), allows for investigating the impact of anthropogenic greenhouse gas emissions on fluxes and state variables of the hydrological cycle, such as runoff in large river catchments. Here we aim at attributing changes in high flows in the Rhine catchment to anthropogenic climate change. Therefore we run an objective classification scheme (simulated annealing and diversified randomisation - SANDRA, available from the cost733 classification software) on ERA20C reanalysis data and apply the established classification to GCMs from the CMIP5 project. After deriving weather pattern time series from GCM runs using forcing from all greenhouse gases (All-Hist) and using natural greenhouse gas forcing only (Nat-Hist), a weather generator will be employed to obtain climate data time series for the hydrological model. The parameters of the weather pattern classification (i.e. spatial extent, number of patterns, classification variables) need to be selected in a way that allows for good stratification of the meteorological variables that are of interest for the hydrological modelling. We evaluate the skill of the classification in stratifying meteorological data using a multi-variable approach. This allows for estimating the stratification skill for all meteorological variables together, not separately as usually done in existing similar work. The advantage of the multi-variable approach is to properly account for situations where, e.g., two patterns are associated with similar mean daily temperature, but one pattern is dry while the other one is related to considerable amounts of precipitation. Thus, the separation of these two patterns would not be justified when considering temperature only, but is perfectly reasonable when accounting for precipitation as well. Besides that, the weather patterns derived from reanalysis data should be well represented in the All-Hist GCM runs in terms of, e.g., frequency, seasonality, and persistence. In this contribution we show how to select the most appropriate weather pattern classification and how the classes derived from it are reflected in the GCMs.
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are used often in many enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [McCarty, J.; Parrinello, M. J. Chem. Phys. 2017, 147, 204109]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational scheme. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all six dihedral angles are biased, we extract much better collective variables able to describe in exquisite detail the peptide's complex free energy surface in a low dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and by the introduction of a new basis set to describe the dihedral angle arrangement.
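A minimal numerical sketch of the time-lagged independent component analysis step reused here: build the instantaneous covariance C(0) and the symmetrized time-lagged covariance C(τ) from a trajectory of collective-variable values, then solve C(τ)v = λC(0)v; the leading eigenvectors define slower, better collective variables. The trajectory below is a synthetic random walk.

```python
import numpy as np
from scipy.linalg import eigh

def tica(X: np.ndarray, lag: int):
    """Generalized eigenproblem C(tau) v = lambda C(0) v on trajectory X."""
    X = X - X.mean(axis=0)
    A, B = X[:-lag], X[lag:]
    C0 = (A.T @ A + B.T @ B) / (2 * len(A))
    Ctau = (A.T @ B + B.T @ A) / (2 * len(A))   # symmetrized lagged covariance
    evals, evecs = eigh(Ctau, C0)
    order = np.argsort(evals)[::-1]             # slowest modes first
    return evals[order], evecs[:, order]

traj = np.cumsum(np.random.default_rng(1).normal(size=(5000, 6)), axis=0)
evals, evecs = tica(traj, lag=50)
print(np.round(evals[:2], 3))   # eigenvalues near 1 flag slowly decaying modes
```

In the paper's setting, X would hold the six biased dihedral angles (suitably reweighted), and the leading eigenvectors give the low-dimensional representation of the free energy surface.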
Morioka, Yushi; Doi, Takeshi; Behera, Swadhin K
2018-01-26
Decadal climate variability in the southern Indian Ocean has great influence on southern African climate through modulation of atmospheric circulation. Although many efforts have been made to understand the physical mechanisms, the predictability of decadal climate variability, in particular the internally generated variability independent of external atmospheric forcing, remains poorly understood. This study investigates the predictability of decadal climate variability in the southern Indian Ocean using a coupled general circulation model, called SINTEX-F. The ensemble members of the decadal reforecast experiments were initialized with a simple sea surface temperature (SST) nudging scheme. The observed positive and negative peaks during the late 1990s and late 2000s are well reproduced in the reforecast experiments initiated from 1994 and 1999, respectively. The experiments initiated from 1994 successfully capture warm SST and high sea level pressure anomalies propagating from the South Atlantic to the southern Indian Ocean. Also, the other experiments initiated from 1999 skillfully predict the phase change from a positive to a negative peak. These results suggest that the SST-nudging initialization captures the essence of the predictability of internally generated decadal climate variability in the southern Indian Ocean.
NASA Astrophysics Data System (ADS)
Zepka, G. D.; Pinto, O.
2010-12-01
The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best matches lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). To test the combinations of parameterizations at the times of lightning occurrence, a comparison was made between WRF grid-point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as the choice of convective scheme. The Betts-Miller-Janjic parameterization generally showed the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This fact can be attributed to the similar main assumptions used by these schemes, which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Unlike traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the most reliable convective scheme was Kain-Fritsch, since it provided a CAPE distribution more consistent with satellite images and lightning data.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C⁰ continuity of the grid across block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was code validation, performance assessment, and demonstration of flexibility.
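A one-dimensional scalar stand-in for the high-resolution machinery described above: a minmod flux-limited upwind step for linear advection, which keeps the non-oscillatory character that MUSCL-type differencing is chosen for. Roe/van Leer flux splitting, multistage time stepping, and the multi-block structure are all omitted.

```python
import numpy as np

def tvd_step(u, c, dt, dx):
    """One step of u_t + c u_x = 0 (c > 0), periodic, minmod flux limiter."""
    nu = c * dt / dx
    du = np.roll(u, -1) - u                            # u_{i+1} - u_i
    safe = np.where(np.abs(du) > 1e-14, du, 1e-14)
    r = (u - np.roll(u, 1)) / safe                     # upwind smoothness ratio
    phi = np.maximum(0.0, np.minimum(1.0, r))          # minmod limiter
    flux = c * u + 0.5 * c * (1.0 - nu) * phi * du     # limited interface flux
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)          # square wave
for _ in range(100):
    u = tvd_step(u, c=1.0, dt=0.002, dx=x[1] - x[0])   # CFL = 0.4
print(u.min(), u.max())   # stays essentially within [0, 1]: no over/undershoots
```

The limiter is what distinguishes this from classical second-order differencing: accuracy is retained in smooth regions while new extrema at discontinuities are suppressed.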
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-transfer models or hydrological models, high-resolution atmospheric forcing data is needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline-interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400m horizontal grid spacing). A novel automatic search-algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO-model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling step 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations, and a fully coupled model system.
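A minimal synthetic sketch of the three steps on gridded fields, with an assumed regression coefficient and AR(1) noise parameters standing in for the statistically derived rules:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
coarse = rng.normal(285.0, 1.0, size=(10, 10))      # coarse-grid temperature [K]

# Step 1: (bi-quadratic) spline interpolation to the fine grid.
fine_bg = zoom(coarse, 4, order=2)

# Step 2: "deterministic" correction from a high-resolution surface field,
# using an assumed linear rule with coefficient beta.
surface = rng.normal(0.0, 1.0, size=fine_bg.shape)
beta = 0.3
fine_det = fine_bg + beta * (surface - zoom(zoom(surface, 0.25), 4))

# Step 3: autoregressive noise carrying the remaining unexplained variance.
alpha, sigma = 0.8, 0.1
noise = np.zeros_like(fine_det)
for j in range(1, noise.shape[1]):
    noise[:, j] = alpha * noise[:, j - 1] + rng.normal(0, sigma, noise.shape[0])
fine = fine_det + noise

print(coarse.shape, fine.shape, float(fine.std() - fine_bg.std()))
```

Steps 1 and 2 aim at reducing errors; step 3 restores realistic sub-grid variance and temporal autocorrelation, mirroring the behaviour reported above.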
Stages in learning motor synergies: a view based on the equilibrium-point hypothesis.
Latash, Mark L
2010-10-01
This review describes a novel view of stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration), which allow these notions to be merged into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced, corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s) and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable ("good variability"). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the "good variability". Experimental support for the suggested scheme is reviewed. Copyright © 2009 Elsevier B.V. All rights reserved.
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justifications of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Measurement Models for Reasoned Action Theory.
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin
2012-03-01
Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.
Misrepresentation and amendment of soil moisture in conceptual hydrological modelling
NASA Astrophysics Data System (ADS)
Zhuo, Lu; Han, Dawei
2016-04-01
Although many conceptual models are very effective in simulating river runoff, their soil moisture schemes are generally unrealistic (i.e., they get the right answers for the wrong reasons). This study reveals two significant misrepresentations in those models through a case study using the Xinanjiang model, which is representative of many well-known conceptual hydrological models. The first is the setting of the upper limit of its soil moisture at the field capacity, due to the 'holding excess runoff' concept (i.e., runoff begins on repletion of its storage to the field capacity). The second is the neglect of capillary rise in water movement. A new scheme is therefore proposed to overcome these two issues. The amended model is as effective as its original form in flow modelling, but represents soil water processes more realistically. The purpose of the study is to enable the hydrological model to get the right answers for the right reasons. Therefore, the new model structure is better able to assimilate soil moisture observations to enhance its real-time flood forecasting accuracy. The new scheme is evaluated in the Pontiac catchment of the USA through a comparison with satellite-observed soil moisture. The correlation between the XAJ soil moisture and the observed soil moisture is enhanced significantly, from 0.64 to 0.70. In addition, a new soil moisture term called SMDS (Soil Moisture Deficit to Saturation) is proposed to complement the conventional SMD (Soil Moisture Deficit).
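A minimal sketch contrasting the two deficit measures, assuming the usual definitions (SMD as the shortfall below field capacity; SMDS, by its name, as the shortfall below saturation); the soil constants are illustrative:

```python
FIELD_CAPACITY = 0.30   # volumetric soil moisture [m3/m3] (illustrative)
POROSITY = 0.45         # saturation content [m3/m3] (illustrative)

def smd(theta: float) -> float:
    """Soil Moisture Deficit: shortfall below field capacity."""
    return max(FIELD_CAPACITY - theta, 0.0)

def smds(theta: float) -> float:
    """Soil Moisture Deficit to Saturation: shortfall below saturation."""
    return max(POROSITY - theta, 0.0)

theta = 0.33            # wetter than field capacity, drier than saturation
print(smd(theta), smds(theta))   # 0.0 vs 0.12: only SMDS registers a deficit
```

Removing the field-capacity ceiling is exactly what lets the amended model hold soil moisture states that the original scheme could not represent.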
Application of neuroanatomical ontologies for neuroimaging data annotation.
Turner, Jessica A; Mejino, Jose L V; Brinkley, James F; Detwiler, Landon T; Lee, Hyo Jong; Martone, Maryann E; Rubin, Daniel L
2010-01-01
The annotation of functional neuroimaging results for data sharing and re-use is particularly challenging, due to the diversity of terminologies of neuroanatomical structures and cortical parcellation schemes. To address this challenge, we extended the Foundational Model of Anatomy Ontology (FMA) to include cytoarchitectural Brodmann area labels and a morphological cortical labeling scheme (e.g., the part of Brodmann area 6 in the left precentral gyrus). This representation was also used to augment the neuroanatomical axis of RadLex, the ontology for clinical imaging. The resulting neuroanatomical ontology contains explicit relationships indicating which brain regions are "part of" which other regions, across cytoarchitectural and morphological labeling schemas. We annotated a large functional neuroimaging dataset with terms from the ontology and applied a reasoning engine to analyze this dataset in conjunction with the ontology, achieving successful inferences from the most specific level (e.g., how many subjects showed activation in a subpart of the middle frontal gyrus?) to the more general (how many activations were found in areas connected via a known white matter tract?). In summary, we have produced a neuroanatomical ontology that harmonizes several different terminologies of neuroanatomical structures and cortical parcellation schemes. This neuroanatomical ontology is publicly available as a view of the FMA at the BioPortal website. The ontological encoding of anatomic knowledge can be exploited by computer reasoning engines to make inferences about neuroanatomical relationships described in imaging datasets using different terminologies. This approach could ultimately enable knowledge discovery from large, distributed fMRI studies or medical record mining.
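A minimal sketch of the kind of inference the ontology supports: rolling specific activation annotations up a transitive "part of" hierarchy so that general queries count them. The tiny hierarchy is illustrative, not FMA content.

```python
PART_OF = {  # child region -> parent region (illustrative fragment)
    "BA6_left_precentral_gyrus": "left_precentral_gyrus",
    "left_precentral_gyrus": "frontal_lobe",
    "middle_frontal_gyrus": "frontal_lobe",
    "frontal_lobe": "cerebral_cortex",
}

def ancestors(region):
    while region in PART_OF:
        region = PART_OF[region]
        yield region

activations = {"BA6_left_precentral_gyrus": 12, "middle_frontal_gyrus": 5}

totals = dict(activations)
for region, count in activations.items():
    for parent in ancestors(region):
        totals[parent] = totals.get(parent, 0) + count

print(totals["frontal_lobe"])   # 17: specific annotations answer general queries
```

A production reasoner adds consistency checking and handles multiple parcellation schemes, but the transitive closure above is the core mechanism.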
Leverrier, Anthony; Grangier, Philippe
2009-05-08
We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.
Considerations in the weathering of wood-plastic composites
Nicole M. Stark
2007-01-01
During weathering, wood-plastic composites (WPCs) can fade and lose stiffness and strength. Weathering variables that induce these changes include exposure to UV light and water. Each variable degrades WPCs independently, but can also act synergistically. Recent efforts have highlighted the need to understand how WPCs weather, and to develop schemes for protection. The...
Reverse design and characteristic study of multi-range HMCVT
NASA Astrophysics Data System (ADS)
Zhu, Zhen; Chen, Long; Zeng, Falin
2017-09-01
The reduction of fuel consumption and the increase of transmission efficiency are key problems in agricultural machinery. Many promising technologies, such as hydromechanical continuously variable transmissions (HMCVT), are the focus of research and investment, but there is little technical documentation that describes the design principle and presents the design parameters. This paper presents the design approach and a characteristic study of an HMCVT, in order to find a suitable scheme for high-horsepower tractors. Based on an analysis of the kinematics and dynamics of a high-horsepower tractor and the resulting characteristic parameters, a hydro-mechanical continuously variable transmission has been designed. Comparison of the experimental and theoretical curves of the stepless speed regulation of the transmission illustrates the rationality of the design scheme.
NASA Astrophysics Data System (ADS)
Zhou, Jian; Guo, Ying
2017-02-01
A continuous-variable measurement-device-independent (CV-MDI) multipartite quantum communication protocol is designed to realize multipartite communication based on GHZ state analysis using Gaussian coherent states. It can remove detector-side attacks, as the multi-mode measurement is performed blindly in a suitable black box. The entanglement-based CV-MDI multipartite communication scheme and the equivalent prepare-and-measure scheme are proposed to analyze the security and to guide experiment, respectively. General eavesdropping and coherent attacks are considered for the security analysis. Subsequently, all attacks are reduced to a coherent attack against imperfect links. The asymptotic key rate of the asymmetric configuration is also derived, with numerical simulations illustrating the performance of the proposed protocol.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1995-01-01
A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
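A minimal sketch of the order-of-accuracy comparison at issue, using second- versus fourth-order central differences for a smooth test function (the axisymmetric magnet solver itself is not reproduced):

```python
import numpy as np

def d2_second_order(f, x, h):
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_fourth_order(f, x, h):
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

f, x0, exact = np.sin, 1.0, -np.sin(1.0)   # second derivative of sin is -sin
for h in (0.1, 0.05, 0.025):
    e2 = abs(d2_second_order(f, x0, h) - exact)
    e4 = abs(d2_fourth_order(f, x0, h) - exact)
    print(f"h={h:5.3f}  2nd-order err={e2:.2e}  4th-order err={e4:.2e}")
```

Halving h cuts the second-order error by about 4 and the fourth-order error by about 16, which is why the fourth-order scheme dominates once high accuracy is demanded; nonuniform grids complicate the wider stencil, consistent with the second-order scheme winning on complex configurations.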
Bond graph modelling of multibody dynamics and its symbolic scheme
NASA Astrophysics Data System (ADS)
Kawase, Takehiko; Yoshimura, Hiroaki
A bond graph method of modeling multibody dynamics is demonstrated. Specifically, a symbolic generation scheme which fully utilizes the bond graph information is presented. It is also demonstrated that structural understanding and representation in bond graph theory is quite powerful for the modeling of such large scale systems, and that the nonenergic multiport of junction structure, which is a multiport expression of the system structure, plays an important role, as first suggested by Paynter. The principal part of the proposed symbolic scheme, that is, the elimination of excess variables, is done through tearing and interconnection in the sense of Kron using newly defined causal and causal coefficient arrays.
Experimental evaluation of multiprocessor cache-based error recovery
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. K.
1991-01-01
Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, differing in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes into the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollably high variability in the checkpoint interval.
ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1990-01-01
For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-01-01
The use of Wireless Sensor Network (WSN) technologies is an attractive option to support wide-scale monitoring applications, such as the ones that can be found in precision agriculture, environmental monitoring and industrial automation. The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable topology to build wide-scale WSNs. Despite some of its known advantages, including timing synchronisation and duty-cycle operation, cluster-tree networks may suffer from severe network congestion problems due to the convergecast pattern of their communication traffic. Therefore, the careful adjustment of transmission opportunities (superframe durations) allocated to the cluster-heads is an important research issue. This paper proposes a set of proportional Superframe Duration Allocation (SDA) schemes, based on well-defined protocol and timing models and on the message load imposed by child nodes (Load-SDA scheme) or the number of descendant nodes (Nodes-SDA scheme) of each cluster-head. The underlying reasoning is to adequately allocate transmission opportunities (superframe durations) and parametrize buffer sizes, in order to improve the network throughput and avoid typical problems, such as network congestion, high end-to-end communication delays and discarded messages due to buffer overflows. Simulation assessments show how the proposed allocation schemes can clearly improve the operation of wide-scale cluster-tree networks. PMID:28134822
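A minimal sketch of the proportional idea shared by the Load-SDA and Nodes-SDA schemes: divide the available cycle time among cluster-heads in proportion to their offered message load, or to their descendant counts. Real IEEE 802.15.4 superframe durations are quantized to powers of two of a base duration; that rounding and the buffer parametrization are omitted here, and the numbers are illustrative.

```python
def allocate_superframes(weights: dict, total_duration_ms: float) -> dict:
    """Superframe duration per cluster-head, proportional to its weight."""
    total = sum(weights.values())
    return {ch: total_duration_ms * w / total for ch, w in weights.items()}

loads = {"CH1": 120, "CH2": 40, "CH3": 40}       # msgs per cycle  (Load-SDA)
descendants = {"CH1": 10, "CH2": 7, "CH3": 3}    # node counts     (Nodes-SDA)

print(allocate_superframes(loads, total_duration_ms=1000.0))
print(allocate_superframes(descendants, total_duration_ms=1000.0))
```

Weighting by load tracks actual traffic, while weighting by descendants needs only topology information; that is the practical difference between the two schemes.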
ERIC Educational Resources Information Center
Fleener, M. Jayne
Current research and learning theory suggest that a hierarchy of proportional reasoning exists that can be tested. Using G. Vergnaud's four complexity variables (structure, content, numerical characteristics, and presentation) and T. E. Kieren's model of rational number knowledge building, an epistemic model of proportional reasoning was…
Downscaling scheme to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, Annika; Venema, Victor; Lindau, Ralf; Ament, Felix; Simmer, Clemens
2010-05-01
The earth's surface is characterized by heterogeneity at a broad range of scales. Weather forecast models and climate models are not able to resolve this heterogeneity at the smaller scales. Many processes in the soil or at the surface, however, are highly nonlinear. This holds, for example, for evaporation processes, where stomata or aerodynamic resistances are nonlinear functions of the local micro-climate. Other examples are threshold-dependent processes, e.g., the generation of runoff or the melting of snow. It has been shown that using averaged parameters in the computation of these processes leads to errors and especially biases, due to the involved nonlinearities. Thus it is necessary to account for the sub-grid scale surface heterogeneities in atmospheric modeling. One approach to take the variability of the earth's surface into account is the mosaic approach. Here the soil-vegetation-atmosphere transfer (SVAT) model is run at an explicitly higher resolution than the atmospheric part of a coupled model, which is feasible due to the generally lower computational costs of a SVAT model compared to the atmospheric part. The question arises of how to deal with the scale differences at the interface between the two resolutions. Usually the assumption of a homogeneous forcing for all sub-pixels is made. However, over a heterogeneous surface, the boundary layer is usually also heterogeneous. Thus, by assuming a constant atmospheric forcing, biases in the turbulent heat fluxes may again occur due to the neglected forcing variability. Therefore we have developed and tested a downscaling scheme to disaggregate the atmospheric variables of the lower atmosphere that are used as input to force a SVAT model. Our downscaling scheme consists of three steps: 1) a bi-quadratic spline interpolation of the coarse-resolution field; 2) a "deterministic" part, where relationships between surface and near-surface variables are exploited; and 3) a noise-generation step, in which the still missing, unexplained variance is added as noise. The scheme has been developed and tested based on high-resolution (400 m) model output of the weather forecast (and regional climate) COSMO model. Downscaling steps 1 and 2 considerably reduce the error made by the homogeneity assumption, whereas the third step brings the sub-grid scale variance into close agreement with the reference. This is, however, achieved at the cost of higher root mean square errors. Thus, before applying the downscaling system to atmospheric data, a decision should be made as to whether the lowest possible errors (apply only downscaling steps 1 and 2) or the most realistic sub-grid scale variability (apply step 3 as well) is desired. This downscaling scheme is currently being implemented into the COSMO model, where it will be used in combination with the mosaic approach. However, this downscaling scheme can also be applied to drive stand-alone SVAT models or hydrological models, which usually also need high-resolution atmospheric forcing data.
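A minimal sketch of the three-step disaggregation is given below, assuming a single linear surface/near-surface relationship (here against high-resolution orography anomalies) and Gaussian residual noise; both are simplified stand-ins for the calibrated relationships used in the scheme.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def disaggregate(coarse, factor, slope, fine_orography, noise_std, rng):
    """Sketch of the three-step scheme. Step 1: bi-quadratic spline
    interpolation of the coarse field. Step 2: a 'deterministic' correction
    exploiting a surface/near-surface relationship (here a single linear
    coefficient against orography anomalies). Step 3: add the still
    unexplained variance as noise. All inputs except `coarse` are assumed,
    illustrative quantities."""
    ny, nx = coarse.shape
    spl = RectBivariateSpline(np.arange(ny), np.arange(nx), coarse, kx=2, ky=2)
    yf = np.linspace(0, ny - 1, ny * factor)
    xf = np.linspace(0, nx - 1, nx * factor)
    field = spl(yf, xf)                                         # step 1
    field += slope * (fine_orography - fine_orography.mean())   # step 2
    return field + rng.normal(0.0, noise_std, field.shape)      # step 3
```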
On some Approximation Schemes for Steady Compressible Viscous Flow
NASA Astrophysics Data System (ADS)
Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.
This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special "effective pressure" in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.
A family of compact high order coupled time-space unconditionally stable vertical advection schemes
NASA Astrophysics Data System (ADS)
Lemarié, Florian; Debreu, Laurent
2016-04-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except at just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high order accuracy in time and space has been presented so far in the literature. Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
Performance Assessment of Ga District Mutual Health Insurance Scheme, Greater Accra Region, Ghana.
Nsiah-Boateng, Eric; Aikins, Moses
This study assessed the performance of the Ga District Mutual Health Insurance Scheme over the period 2007-2009. The desk review method was used to collect secondary data on membership coverage, revenue, expenditure, and claims settlement patterns of the scheme. A household survey was also conducted in the Madina Township, using a self-administered semi-structured questionnaire, to determine community coverage of the scheme. The study showed membership coverage of 21.8% and community coverage of 22.2%. The main reasons respondents had not registered with the scheme were that contributions are high and that it does not offer the services needed. Financially, the scheme depended largely on subsidies and reinsurance from the National Health Insurance Authority for 89.8% of its revenue. Approximately 92% of the total revenue was spent on medical claims, and 99% of provider claims were settled beyond the stipulated 4-week period. There is an increasing trend in medical claims expenditure and a lengthy delay in claims settlement, with most claims being paid beyond the mandatory 4-week period. Introduction of cost-containment measures, including co-payment and a capitation payment mechanism, would be necessary to reduce the escalating cost of medical claims. Adherence to the stipulated 4-week period for payment of medical claims would be important to ensure that health care providers are financially resourced to deliver continuous health services to insured members. Furthermore, adequately resourcing the scheme would allow speedy vetting of claims, and community education on the National Health Insurance Scheme would help improve membership coverage and revenue from the informal sector. Copyright © 2013, International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc.
Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest
NASA Astrophysics Data System (ADS)
Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David
2009-02-01
Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
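A minimal sketch of the region-growing building block applied within one VOI is shown below, assuming airway voxels are identified by a fixed intensity threshold; in the scheme itself this threshold is adaptively determined per VOI and the VOIs are chained iteratively.

```python
import numpy as np
from collections import deque

def grow_region(vol, seed, thresh):
    """6-connected region growing below an intensity threshold (airway
    lumens are dark in CT); one building block of the VOI-by-VOI tracing."""
    grown = np.zeros(vol.shape, dtype=bool)
    q = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
               and not grown[n] and vol[n] < thresh:
                grown[n] = True
                q.append(n)
    return grown
```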
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
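The arc-equidistribution idea can be illustrated in one dimension: integrate a gradient-based weight along the grid and place the new nodes at equal increments of that integral. The weight function and clustering parameter below are assumptions, and the scheme's user-prescribed stretching and smoothing steps are omitted.

```python
import numpy as np

def equidistribute(x, f, alpha=1.0):
    """1-D sketch of arc/weight equidistribution: move nodes so that equal
    increments of the integrated weight fall between neighbors. alpha sets
    how strongly nodes cluster where |df/dx| is large."""
    w = 1.0 + alpha * np.abs(np.gradient(f, x))          # weight function
    s = np.concatenate(([0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    return np.interp(np.linspace(0.0, s[-1], x.size), s, x)
```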
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified parameterization of the entrainment rate, i.e., the proposed parameterization suppressed an excessive increase of entrainment, and thus an excessive increase of low-level clouds.
Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces
Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.
2015-01-01
Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627
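As a loose, toy stand-in for this pipeline (the paper's modulation depth is derived in a state-space framework, not from raw correlations), channel ranking and subset selection can be pictured as:

```python
import numpy as np

def rank_channels(spike_rates, covariate):
    """Score each channel by its squared correlation with a behavioral
    covariate and rank in descending order. spike_rates: (channels, time)
    array; covariate: (time,) array. The top-k channels would then be kept
    per a model order selection criterion."""
    scores = np.array([np.corrcoef(ch, covariate)[0, 1] ** 2
                       for ch in spike_rates])
    order = np.argsort(scores)[::-1]
    return order, scores[order]
```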
Evaluation of an improved intermediate complexity snow scheme in the ORCHIDEE land surface model
NASA Astrophysics Data System (ADS)
Wang, Tao; Ottlé, Catherine; Boone, Aaron; Ciais, Philippe; Brun, Eric; Morin, Samuel; Krinner, Gerhard; Piao, Shilong; Peng, Shushi
2013-06-01
Snow plays an important role in land surface models (LSMs) used for climate studies, but its current treatment as a single layer of constant density and thermal conductivity in ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) induces significant deficiencies. The intermediate complexity snow scheme ISBA-ES (Interaction between Soil, Biosphere and Atmosphere-Explicit Snow), which includes key snow processes, has been adapted and implemented into ORCHIDEE, referred to here as ORCHIDEE-ES. In this study, the adapted scheme is evaluated against observations from the alpine site Col de Porte (CDP), with a continuous 18 year data set, and from sites distributed across northern Eurasia. At CDP, comparisons of snow depth, snow water equivalent, surface temperature, snow albedo, and snowmelt runoff reveal that the improved scheme in ORCHIDEE is capable of simulating the internal snow processes better than the original one. Preliminary sensitivity tests indicate that the snow albedo parameterization is the main cause of the large difference in snow-related variables, but not in the soil temperature simulated by the two models. The ability of ORCHIDEE-ES to better simulate snow thermal conductivity mainly accounts for the differences in soil temperatures. This is confirmed by a sensitivity analysis of ORCHIDEE-ES parameters using the Morris method. These features enable us to more realistically investigate interactions between snow and soil thermal regimes (and related soil carbon decomposition). When the two models are compared over sites located in northern Eurasia from 1979 to 1993, snow-related variables and 20 cm soil temperature are better reproduced by ORCHIDEE-ES than by ORCHIDEE, revealing a more accurate representation of spatio-temporal variability.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems be solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of the latter three schemes, two require no solutions of large systems thereafter. For the other two, for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
Global land-atmosphere coupling associated with cold climate processes
NASA Astrophysics Data System (ADS)
Dutra, Emanuel
This dissertation constitutes an assessment of the role of cold processes, associated with snow cover, in controlling land-atmosphere coupling. The work was based on model simulations, including offline simulations with the land surface model HTESSEL and coupled atmosphere simulations with the EC-EARTH climate model. A revised snow scheme was developed and tested in HTESSEL and EC-EARTH. The snow scheme is currently operational in the European Centre for Medium-Range Weather Forecasts integrated forecast system, and in the default configuration of EC-EARTH. The improved representation of snowpack dynamics in HTESSEL resulted in improvements in the near-surface temperature simulations of EC-EARTH. The new snow scheme development was complemented with an optional multi-layer version that showed its potential in modeling thick snowpacks. A key process was snow thermal insulation, which led to significant improvements in the surface water and energy balance components. Similar findings were observed when coupling the snow scheme to lake ice, where the simulated lake ice duration was significantly improved. An assessment of the snow cover sensitivity to horizontal resolution, parameterizations and atmospheric forcing within HTESSEL highlighted the role of atmospheric forcing accuracy and snowpack parameterizations over that of horizontal resolution in flat regions. A set of experiments with and without free snow evolution was carried out with EC-EARTH to assess the impact of the interannual variability of snow cover on near-surface and soil temperatures. It was found that snow cover interannual variability explained up to 60% of the total interannual variability of near-surface temperature over snow-covered regions. Although these findings are model dependent, the results are consistent with previously published work. Furthermore, the detailed validation of the snow dynamics simulations in HTESSEL and EC-EARTH guarantees the consistency of the results.
Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li
2014-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphological and kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning pipeline, including feature subset selection and various classification schemes, was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features in predicting lesion type. Various measurements, including cross validation and receiver operating characteristics, were used to quantify the diagnostic performance of each feature as well as their combination. Seven features were all found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features comprise one pathological variable (age), one morphological variable (slope), three texture features (entropy, inverse difference and information correlation), one kinetic feature (SER) and one DWI feature (apparent diffusion coefficient, ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. Multi-sided variables that characterize the morphological, kinetic and pathological properties, together with the DWI measurement of ADC, can dramatically improve the discriminatory power for breast lesions.
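A minimal sketch of the cross-validated classification step is given below, with synthetic placeholder data standing in for the seven selected features; the logistic-regression pipeline is illustrative, not the study's exact classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: rows = lesions, columns = the seven selected features (age, slope,
# entropy, inverse difference, information correlation, SER, ADC);
# y: 1 = malignant, 0 = benign. Values here are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(234, 7))
y = rng.integers(0, 2, size=234)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(auc.mean())
```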
ERIC Educational Resources Information Center
Garfield, Joan; Le, Laura; Zieffler, Andrew; Ben-Zvi, Dani
2015-01-01
This paper describes the importance of developing students' reasoning about samples and sampling variability as a foundation for statistical thinking. Research on expert-novice thinking as well as statistical thinking is reviewed and compared. A case is made that statistical thinking is a type of expert thinking, and as such, research…
SIMULATIONS OF TRANSVERSE STACKING IN THE NSLS-II BOOSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fliller III, R.; Shaftan, T.
2011-03-28
The NSLS-II injection system consists of a 200 MeV linac and a 3 GeV booster. To achieve the current stability goals of the storage ring under top-off operation, the injectors must provide 7.5 nC in bunch trains 80-150 bunches long every minute; once losses in the injector chain are taken into consideration, the linac must deliver 15 nC. This is a very stringent requirement that has not been demonstrated at an operating light source. For this reason we have developed a scheme to transversely stack two bunch trains in the NSLS-II booster while maintaining the charge transport efficiency, in order to alleviate the charge requirements on the linac. This scheme has been outlined previously. In this paper we show particle tracking simulations of the stacking scheme. We first show simulations of the booster ramp with a single bunch train, then give a brief overview of the stacking scheme, and finally show the results of stacking two bunch trains in the booster for a variety of lattice errors, injected beam emittances and train separations. The behavior of the beam through the ramp shows that it is possible to stack two bunch trains in the booster, and in all cases the performance of the proposed stacking method is sufficient to reduce the required charge from the linac. For this reason the injection system of the NSLS-II booster is being designed to include this feature.
Assessment of Scientific Reasoning as an Institutional Outcome
2016-04-01
expertise in the outcome domain. Student achievement of the Scientific Reasoning and Principles of Science outcomes was assessed in the 2012-13 academic year by ... scientific reasoning assessment. Overall, students were weakest when answering questions related to (a) proportional reasoning, (b) isolation of variables, and (c) if-then reasoning. These findings are being incorporated into a redesign of the core curriculum to enhance continuity among science courses.
Modeling Pulse Transmission in the Monterey Bay Using Parabolic Equation Methods
1991-12-01
Collins 9-13 was chosen for this purpose due to its energy conservation scheme, and its ability to efficiently incorporate higher order terms in its ... pressure field generated by the PE model into normal modes. Additionally, this process provides increased physical understanding of mode coupling and ... separation of variables (i.e. normal modes or fast field), as well as pure numerical schemes such as the parabolic equation methods, can be used. However, as
Nonlinear secret image sharing scheme.
Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear-combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, a security threat such as the Tompa-Woll attack exists. Renvall and Ding proposed a new secret sharing technique based on nonlinear-combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean operation, define a new variable m, and change the range of the prime p in the sharing procedure. To evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
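The steganographic step can be pictured with a toy fragment: the stored LSB is the share bit XORed with a key bit, so each cover pixel changes by at most one grey level. The (t, n)-threshold nonlinear sharing polynomial itself is omitted, and all names are illustrative.

```python
def embed_bit(pixel, share_bit, key_bit):
    """Toy LSB-with-XOR embedding: store share_bit ^ key_bit in the LSB of
    an 8-bit cover pixel, so the pixel value changes by at most 1."""
    return (pixel & ~1) | (share_bit ^ key_bit)

def extract_bit(stego_pixel, key_bit):
    """Recover the share bit by XORing the stored LSB with the key bit."""
    return (stego_pixel & 1) ^ key_bit

assert extract_bit(embed_bit(200, 1, 1), 1) == 1
```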
A New Approach for Constructing Highly Stable High Order CESE Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2010-01-01
A new approach is devised to construct high order CESE schemes which avoid the common shortcomings of traditional high order schemes, including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their local implicit nature (i.e., at each mesh point, a system of linear/nonlinear equations involving all the mesh variables associated with that mesh point must be solved); (c) use of large and elaborate stencils, which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques which are needed to overcome stability problems but often cause undesirable side effects. In fact it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-, ... order versions which have the same stencil and same stability conditions as the 2nd-order scheme, and also retain all other advantages of the latter scheme. A sketch of multidimensional extensions will also be provided.
A novel hybrid approach with multidimensional-like effects for compressible flow computations
NASA Astrophysics Data System (ADS)
Kalita, Paragmoni; Dass, Anoop K.
2017-07-01
A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, such schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely AUSM and DRLLF, and coupling them through a novel shock switch that operates - unlike existing switches - on the gradient of the Mach number across the cell interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but is not as good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired crisp resolution of grid-aligned and non-grid-aligned discontinuities is not obtained. To circumvent this problem we design a shock switch based - for the first time - on the gradient of the cell-interface Mach number, with very impressive results. Thus the strategy of hybridizing two carefully selected schemes, together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme at a lower computational cost. It is further seen that hybridizing the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution of both shocks and boundary layers. The merits of the scheme are established through a carefully selected set of numerical experiments.
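A sketch of the blending strategy is given below, assuming a tanh-shaped switch driven by the jump in Mach number across the cell interface; the paper's exact switch function and constants are not given in the abstract, so this is a plausible stand-in only.

```python
import numpy as np

def hybrid_interface_flux(f_ausm, f_drllf, mach_l, mach_r, k=5.0):
    """Blend two candidate interface fluxes with a switch driven by the
    Mach-number jump across the interface (k is an assumed sharpness
    constant). Large |dM| -> lean on AUSM for crisp strong shocks;
    small |dM| -> lean on DRLLF for non-grid-aligned weaker shocks."""
    sigma = np.tanh(k * np.abs(mach_r - mach_l))    # switch in [0, 1)
    return sigma * f_ausm + (1.0 - sigma) * f_drllf
```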
Ring lens focusing and push-pull tracking scheme for optical disk systems
NASA Technical Reports Server (NTRS)
Gerber, R.; Zambuto, J.; Erwin, J. K.; Mansuripur, M.
1993-01-01
An experimental comparison of the ring lens and the astigmatic techniques for generating the focus-error signal (FES) in optical disk systems reveals that the ring lens generates an FES more than two times steeper than that produced by the astigmat. Partly due to this large slope, and partly because of its diffraction-limited behavior, the ring lens scheme exhibits superior performance characteristics. In particular, the undesirable signal known as 'feedthrough' (induced on the FES by track crossings during the seek operation) is lower by a factor of six compared with that observed with the astigmatic method. The ring lens is easy to align and has reasonable tolerance for positioning errors.
Asymmetric molecular-orbital tomography by manipulating electron trajectories
NASA Astrophysics Data System (ADS)
Wang, Bincheng; Zhang, Qingbin; Zhu, Xiaosong; Lan, Pengfei; Rezvani, Seyed Ali; Lu, Peixiang
2017-11-01
We present a scheme for tomographic imaging of asymmetric molecular orbitals based on high-order harmonic generation with a two-color orthogonally polarized multicycle laser field. With two-dimensional manipulation of the electron trajectories, the electrons can recollide with the target molecule from two noncollinear directions, and the dipole moment generated from a single direction can then be obtained to reconstruct the asymmetric molecular orbital. The recollision is independent of the molecular structure and the angular dependence of the ionization rate in the external field. For this reason, this scheme avoids the negative effects arising from the modification of the angle-dependent ionization rate induced by the Stark shift, and can be applied to various molecules.
Can the Principles of Cognitive Acceleration Be Used to Improve Numerical Reasoning in Science?
ERIC Educational Resources Information Center
Clowser, Anthony; Jones, Susan Wyn; Lewis, John
2018-01-01
This study investigates whether the Cognitive Acceleration through Science Education (CASE) scheme could be used to meet the demands of the Literacy and Numeracy Framework (LNF). The LNF is part of the Welsh Government's improvement strategy in response to perceived poor performance in the Programme for International Student Assessment (PISA)…
Parallel Logic Programming Architecture
1990-04-01
Section 3.1: A STATIC ALLOCATION SCHEME (SAS). Methods that have been used for decomposing distributed problems in artificial intelligence ... multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial ... for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during
Writing and Writer Identity: The Poor Relation and The Search for Voice in "Personal Literacy"
ERIC Educational Resources Information Center
Gardner, Paul
2018-01-01
The teaching of writing has been a relatively neglected aspect of research in literacy. Cultural and socio-economic reasons for this are suggested. In addition, teachers often readily acknowledge themselves as readers, but rarely as writers. Without a solid grasp of compositional processes, teachers are perhaps prone to adopt schemes that promote…
NASA Astrophysics Data System (ADS)
Abbaspour, S.; Mohammad Moosavi Nejad, S.
2018-05-01
Charged Higgs bosons are predicted by some non-minimal Higgs scenarios, such as models containing Higgs triplets and two-Higgs-doublet models, so their experimental observation would indicate physics beyond the Standard Model. In the present work, we introduce a channel for an indirect search for charged Higgses through the hadronic decay of polarized top quarks, in which a top quark decays into a charged Higgs H+ and a bottom-flavored meson B via the hadronization of the produced bottom quark, t (↑) → H+ + b (→ B + jet). To obtain the energy spectrum of the produced B mesons we present, for the first time, an analytical expression for the O(αs) corrections to the differential decay width of the process t → H+ b in the presence of a massive b-quark in the General-Mass Variable-Flavor-Number (GM-VFN) scheme. We find that the most reliable predictions for the B-hadron energy spectrum are made in the GM-VFN scheme, specifically when the Type-II 2HDM scenario is concerned.
Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers
NASA Technical Reports Server (NTRS)
Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
A method is proposed for minimizing numerical errors and improving the nonlinear stability and accuracy associated with low Mach number computational aeroacoustics (CAA). The method operates on two levels. At the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity between acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical scheme level, a stable sixth-order central interior scheme is employed, with third-order boundary schemes that satisfy the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (the summation-by-parts property).
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
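A minimal sketch of such an iterative proxy loop is shown below, using a SciPy RBF surrogate; the expensive_npv callable, bounds, and loop sizes are invented placeholders for the reservoir simulator and its controls.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def proxy_optimize(expensive_npv, bounds, n_init=8, n_iter=20, seed=0):
    """Iterative proxy scheme: fit an RBF surrogate to all evaluated
    control vectors, optimize the cheap surrogate, evaluate the expensive
    model at the proposal, refit, and repeat."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = list(rng.uniform(lo, hi, size=(n_init, lo.size)))   # initial designs
    y = [expensive_npv(x) for x in X]
    for _ in range(n_iter):
        proxy = RBFInterpolator(np.vstack(X), np.asarray(y))
        res = minimize(lambda x: -proxy(x[None, :])[0],
                       X[int(np.argmax(y))], bounds=bounds)
        x_new = np.clip(res.x + 1e-6 * rng.normal(size=lo.size), lo, hi)
        X.append(x_new)                    # tiny jitter avoids duplicate sites
        y.append(expensive_npv(x_new))
    best = int(np.argmax(y))
    return X[best], y[best]
```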
NASA Astrophysics Data System (ADS)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterizations schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent up and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of up and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
Inferring Cirrus Size Distributions Through Satellite Remote Sensing and Microphysical Databases
NASA Technical Reports Server (NTRS)
Mitchell, David; D'Entremont, Robert P.; Lawson, R. Paul
2010-01-01
Since cirrus clouds have a substantial influence on the global energy balance that depends on their microphysical properties, climate models should strive to realistically characterize the cirrus ice particle size distribution (PSD), at least in a climatological sense. To date, airborne in situ measurements of the cirrus PSD have contained large uncertainties due to errors in measuring small ice crystals (D < 60 μm). This paper presents a method to remotely estimate the concentration of the small ice crystals relative to the larger ones using the 11- and 12-μm channels aboard several satellites. By understanding the underlying physics producing the emissivity difference between these channels, this emissivity difference can be used to infer the relative concentration of small ice crystals. This is facilitated by enlisting temperature-dependent characterizations of the PSD (i.e., PSD schemes) based on in situ measurements. An average cirrus emissivity relationship between 12 and 11 μm is developed here using the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument and is used to retrieve the PSD based on six different PSD schemes. The PSDs from the measurement-based PSD schemes are compared with corresponding retrieved PSDs to evaluate differences in small ice crystal concentrations. The retrieved PSDs generally had lower concentrations of small ice particles, with total number concentration independent of temperature. In addition, the temperature dependence of the PSD effective diameter De and fall speed Vf for the retrieved PSD schemes exhibited less variability relative to the unmodified PSD schemes. The reduced variability in the retrieved De and Vf was attributed to the lower concentrations of small ice crystals in the retrieved PSDs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Jiwen; Han, Bin; Varble, Adam
A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research & Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes, to understand specific processes that lead to the large spread of simulated cloud and precipitation at cloud-resolving scales, with a focus of this paper on convective cores. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area than observed, but a much narrower stratiform area, with most bulk schemes overpredicting radar reflectivity. The magnitudes of the virtual potential temperature drop, pressure rise, and the peak wind speed associated with the passage of the gust front are significantly smaller compared with the observations, suggesting simulated cool pools are weaker. Simulations also overestimate the vertical velocity and Ze in convective cores as compared with observational retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes even in this strongly dynamically-driven system. The spread of updraft velocity is attributed to the combined effects of the low-level perturbation pressure gradient determined by cold pool intensity and buoyancy that is not necessarily well correlated to differences in latent heating among the simulations. Variability of updraft velocity between schemes is also related to differences in ice-related parameterizations, whereas precipitation variability increases in no-ice simulations because of scheme differences in collision-coalescence parameterizations.
Simulation of continuous variable quantum games without entanglement
NASA Astrophysics Data System (ADS)
Li, Shang-Bin
2011-07-01
A simulation scheme for a quantum version of Cournot's duopoly is proposed, in which there is a new Nash equilibrium that may also be Pareto optimal without any entanglement involved. The unique property of this simulation scheme is that it is decoherence-free against symmetric photon loss. Furthermore, we analyze the effects of asymmetric information on this simulation scheme and investigate the case of an asymmetric game caused by asymmetric photon loss. A second-order phase-transition-like behavior of the average profits of firms 1 and 2 in a Nash equilibrium can be observed with a change in the degree of asymmetry of the information or the degree of 'virtual cooperation'. It is also found that asymmetric photon loss in this simulation scheme plays a role similar to that of asymmetric entangled states in the quantum game.
NASA Astrophysics Data System (ADS)
Tayebi, A.; Shekari, Y.; Heydari, M. H.
2017-07-01
Several physical phenomena, such as the transport of pollutants, energy, particles and many other quantities, can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme, and in the space domain, the MLS approximation are employed to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, a so-called positive scheme, is observed in solving these concentration transport phenomena.
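The θ-weighted time discretization can be illustrated with a much-simplified 1-D finite-difference stand-in for the paper's 2-D MLS spatial discretization (and with an integer-order operator in place of the variable-order fractional derivative):

```python
import numpy as np

def theta_step(u, dt, dx, v, kappa, theta=0.5):
    """One θ-weighted step for 1-D advection-diffusion u_t + v u_x = kappa u_xx
    with fixed end values. theta=0 is explicit, 1 implicit, 0.5 Crank-Nicolson."""
    n = u.size
    A = np.zeros((n, n))          # A discretizes L[u] = v u_x - kappa u_xx
    for i in range(1, n - 1):
        A[i, i - 1] = -v / (2 * dx) - kappa / dx**2
        A[i, i]     = 2 * kappa / dx**2
        A[i, i + 1] =  v / (2 * dx) - kappa / dx**2
    I = np.eye(n)
    rhs = (I - (1 - theta) * dt * A) @ u
    return np.linalg.solve(I + theta * dt * A, rhs)
```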
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentrations of molecules can be changed by chemical reactions and thereby offer a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. We discuss how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process, while the logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates serve as the coefficients of the power series.
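The bimolecular-step-as-AND correspondence can be sketched numerically: integrate A + B -> C and read the product concentration as a continuous truth value. The rate constant and readout time below are assumptions.

```python
from scipy.integrate import solve_ivp

def and_gate(a0, b0, k=1.0, t_end=10.0):
    """Continuous-variable AND as the bimolecular reaction A + B -> C:
    the product concentration is appreciable only when both inputs are."""
    def rhs(t, y):
        a, b, c = y
        rate = k * a * b            # bimolecular step = logical AND
        return [-rate, -rate, rate]
    sol = solve_ivp(rhs, (0.0, t_end), [a0, b0, 0.0])
    return sol.y[2, -1]             # output: [C] at readout time

print(and_gate(1.0, 1.0), and_gate(1.0, 0.0))   # high vs. near-zero output
```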
Measurement Models for Reasoned Action Theory
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin
2012-01-01
Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach. PMID:23243315
Development of an establishment scheme for a DGVM
NASA Astrophysics Data System (ADS)
Song, Xiang; Zeng, Xiaodong; Zhu, Jiawen; Shao, Pu
2016-07-01
Environmental changes are expected to shift the distribution and abundance of vegetation by determining seedling establishment and success. However, most current ecosystem models only focus on the impacts of abiotic factors on biogeophysics (e.g., global distribution), ignoring their roles in the population dynamics (e.g., seedling establishment rate, mortality rate) of ecological communities. Such neglect may lead to biases in ecosystem population dynamics (such as changes in population density for woody species in forest ecosystems) and characteristics. In the present study, a new establishment scheme that introduces soil water as a functional dependence rather than a threshold was developed and validated, using version 1.0 of the IAP-DGVM as a test bed. The results showed that soil water in the establishment scheme had a remarkable influence on forest transition zones. Compared with the original scheme, the new scheme significantly improved simulations of tree population density, especially in the peripheral areas of forests and transition zones. Consequently, biases in forest fractional coverage were reduced in approximately 78.8% of the global grid cells. The globally simulated areas of tree, shrub, grass and bare soil performed better, with relative biases reduced from 34.3% to 4.8%, from 27.6% to 13.1%, from 55.2% to 9.2%, and from 37.6% to 3.6%, respectively. Furthermore, the new scheme produced more reasonable dependencies of plant functional types (PFTs) on mean annual precipitation and described the correct dominant PFTs in the tropical rainforest peripheral areas of the Amazon and central Africa.
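The design change described above, replacing a soil-water threshold by a smooth functional dependence, can be illustrated with a toy comparison. The functional form and all constants below are assumptions for illustration, not the IAP-DGVM parameterization.

```python
import numpy as np

def establishment_threshold(w, w_crit=0.3, e_max=0.1):
    """Threshold-style scheme: all-or-nothing above a critical wetness."""
    return e_max if w >= w_crit else 0.0

def establishment_function(w, e_max=0.1):
    """Function-style scheme: establishment rises smoothly with wetness
    (saturating response; hypothetical form)."""
    return e_max * w**2 / (w**2 + 0.3**2)

for w in np.linspace(0.0, 1.0, 6):
    print(f"w={w:.1f}  threshold={establishment_threshold(w):.3f}  "
          f"function={establishment_function(w):.3f}")
```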
Dry coolers and air-condensing units (Review)
NASA Astrophysics Data System (ADS)
Milman, O. O.; Anan'ev, P. A.
2016-03-01
The analysis of factors affecting the growing shortage of freshwater is performed. The state and dynamics of the global market of dry coolers used at electric power plants are investigated. The substantial increase in the number and maximum capacity of air-cooled condensers put into operation worldwide in recent years is noted. The key reasons facilitating developers' choice of dry coolers, in particular the independence of the thermal power plant's location from water sources, are enumerated. The main steam turbine heat removal schemes using air cooling are considered, their thermal efficiency is compared, and the changes in three important parameters (heat transfer surface area, condensate pump flow, and pressure losses in the steam exhaust system) are estimated. It is shown that the most effective is the scheme of direct steam condensation in the heat-exchange tubes, but other schemes also have certain advantages. The air-cooling efficiency may be enhanced much further by using a hybrid air-cooling system: a combination of dry and wet cooling. The basic applied constructive solutions are shown: the arrangement of heat-exchange modules and the types of fans. The optimal mounting design of a fully shop-assembled cooling system for heat-exchange modules is presented. Different types of heat-exchange tube ribbing that take into account the operational features of cooling systems are shown. Heat transfer coefficients of the plants from different manufacturers are compared, and the main reasons for their decline are named. When using evaporative air cooling, it is possible to improve the efficiency of air-cooling units. The factors affecting the faultless performance of dry coolers (DC) and air-condensing units (ACU) and ways of eliminating them are described. High-velocity winds reduce the efficiency of cooling systems and create preconditions for the development of wind-driven devices. It is noted that global trends have a significant influence on the application of dry coolers in Russia, given that some thermal power plants use a surface condenser arrangement. The reasons why these systems are currently less efficient than direct steam condensation in an air-cooled condenser are explained. It is shown that, in some cases, it is more reasonable to use mixing-type condensers in combination with a dry cooler. Measures for full import substitution of steam exhaust heat removal systems are mentioned.
Reasoning in people with obsessive-compulsive disorder.
Simpson, Jane; Cove, Jennifer; Fineberg, Naomi; Msetfi, Rachel M; J Ball, Linden
2007-11-01
The aim of this study was to investigate the inductive and deductive reasoning abilities of people with obsessive-compulsive disorder (OCD). Following previous research, it was predicted that people with OCD would show different abilities on inductive reasoning tasks but similar abilities to controls on deductive reasoning tasks. A two-group comparison was used with both groups matched on a range of demographic variables. Where appropriate, unmatched variables were entered into the analyses as covariates. Twenty-three people with OCD and 25 control participants were assessed on two tasks: an inductive reasoning task (the 20-questions task) and a deductive reasoning task (a syllogistic reasoning task with a content-neutral and content-emotional manipulation). While no group differences emerged on several of the parameters of the inductive reasoning task, the OCD group did differ on one, and arguably the most important, parameter by asking fewer correct direct-hypothesis questions. The syllogistic reasoning task results were analysed using both correct response and conclusion acceptance data. While no main effects of group were evident, significant interactions indicated important differences in the way the OCD group reasoned with content neutral and emotional syllogisms. It was argued that the OCD group's patterns of response on both tasks were characterized by the need for more information, states of uncertainty, and doubt and postponement of a final decision.
Motor Synergies and the Equilibrium-Point Hypothesis
Latash, Mark L.
2010-01-01
The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multi-joint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed. PMID:20702893
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km of standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10⁻³, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
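The aperiodic auto-correlation property the synchronization scheme exploits is easy to verify numerically: for a Golay complementary pair, the two auto-correlations cancel at every nonzero lag, leaving a single sharp peak to locate the frame start. A minimal sketch (sequence length chosen arbitrarily):

```python
import numpy as np

def golay_pair(n):
    """Recursively build a length-2^n Golay complementary pair."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)                       # length-64 pair
r = np.correlate(a, a, "full") + np.correlate(b, b, "full")
# r equals 2*len(a) at zero lag and vanishes at every other lag, giving a
# sharp correlation peak for estimating the start point of the frame.
assert abs(r[len(a) - 1] - 2 * len(a)) < 1e-9
assert np.all(np.abs(np.delete(r, len(a) - 1)) < 1e-9)
```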
A New Turbo-shaft Engine Control Law during Variable Rotor Speed Transient Process
NASA Astrophysics Data System (ADS)
Hua, Wei; Miao, Lizhen; Zhang, Haibo; Huang, Jinquan
2015-12-01
A closed-loop control law employing compressor guide vanes is first investigated to resolve the unacceptable fuel-flow transients that arise under fuel-only control of a turbo-shaft engine, especially for rotorcraft during variable rotor speed operation. Based on an Augmented Linear Quadratic Regulator (ALQR) algorithm, a dual-input, single-output robust control scheme is proposed for a turbo-shaft engine, involving closed-loop adjustment not only of the fuel flow but also of the compressor guide vanes. Furthermore, compared to fuel-only control, several digital simulations of variable rotor speed operation using this new scheme have been carried out on an integrated helicopter-and-engine model. The results show that command tracking of the free turbine rotor speed can be asymptotically realized. Moreover, the fuel flow transient process is significantly improved, and fuel consumption is cut down by more than 2% while the helicopter maintains unchanged level flight.
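The ALQR design itself is not reproduced here; as a hedged illustration of the underlying idea, the sketch below computes a standard LQR gain for a hypothetical one-state, two-input (fuel flow plus guide vanes) linearized plant. All matrices are placeholders, not engine data.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized plant: state = rotor-speed deviation,
# inputs = [fuel flow, guide-vane angle].
A = np.array([[-0.5]])
B = np.array([[0.8, 0.3]])
Q = np.array([[10.0]])        # penalize rotor-speed error
R = np.diag([1.0, 2.0])       # penalize fuel and vane activity

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # u = -K x drives both actuators together
print(K)
```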
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo
2008-06-01
In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) with dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA) and transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm; 16-view images are then synthesized by the IVR using VDE. Experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can guarantee upper limits on transmission delay per flow. The modeling and simulation results, including mathematical analyses, are also provided.
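The variable block-matching algorithm is not specified in the abstract; the sketch below shows the generic fixed-block sum-of-absolute-differences matching that such disparity estimators build on, with hypothetical parameter choices.

```python
import numpy as np

def match_block(left, right, y, x, bs=8, max_d=32):
    """Return the disparity minimizing SAD for the bs x bs block at (y, x)
    of the left image, searching leftwards in the right image."""
    ref = left[y:y + bs, x:x + bs].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_d, x) + 1):
        cand = right[y:y + bs, x - d:x - d + bs].astype(float)
        cost = np.abs(ref - cand).sum()      # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```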
The personal and workplace characteristics of uninsured expatriate males in Saudi Arabia.
Alkhamis, Abdulwahab; Cosgrove, Peter; Mohamed, Gamal; Hassan, Amir
2017-01-19
A major concern of health decision makers in Gulf Cooperation Council (GCC) countries is the burden of financing healthcare. While other GCC countries have been examining different options, Saudi Arabia has endeavoured to reform its private healthcare system and control expatriate access to government resources through the provision of Compulsory Employment-Based Health Insurance (CEBHI). The objective of this research was to investigate, in a natural setting, the personal and workplace characteristics of uninsured expatriates. Using a cross-sectional survey, data were collected from a sample of 4,575 male expatriate employees using a multi-stage stratified cluster sampling technique. Descriptive statistics were used to summarize all variables; the dependent variable was tabulated by access to health insurance and tested using Chi-square, and logistic analysis was performed, guided by the conceptual model. Of the survey respondents, 30% were either uninsured or not yet enrolled in a health insurance scheme; 79.4% of these uninsured expatriates did not have valid reasons for being uninsured, with Iqama renewal accounting for 20.6% of the uninsured. The study found both personal and workplace characteristics to be important factors influencing health insurance status. Compared with single expatriates, married expatriates (accompanied by their families) were 30% less likely to be uninsured. Moreover, workers occupying technical jobs requiring high school education or above were two-thirds more likely to be insured than unskilled workers. With regard to firm size, respondents employed in large companies (more than 50 employees) were more likely to be insured than those employed in small companies (fewer than ten employees). In relation to business type, workers in the industrial/manufacturing, construction, and trading sectors were, respectively, 76%, 85%, and 60% less likely to be uninsured than workers in the agricultural sector. Although CEBHI is mandatory, this study found that the personal and workplace characteristics of uninsured expatriates are similar to those of the uninsured in other private employment-sponsored health insurance schemes. Other factors influencing access to health insurance, besides employee and workplace characteristics, include the development and extent of the country's insurance industry.
Effective field theory dimensional regularization
NASA Astrophysics Data System (ADS)
Lehmann, Dirk; Prézeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
Innovative FEL schemes using variable-gap undulators
NASA Astrophysics Data System (ADS)
Schneidmiller, E. A.; Yurkov, M. V.
2017-06-01
We discuss the theoretical background and experimental verification of advanced schemes for X-ray FELs using variable-gap undulators (harmonic lasing self-seeded FEL, reverse taper, etc.). Harmonic lasing in XFELs is an opportunity to extend the operating range of existing and planned X-ray FEL user facilities. Contrary to nonlinear harmonic generation, harmonic lasing can provide a much more intense, stable, and narrow-band FEL beam, which is easier to handle due to the suppressed fundamental. Another interesting application of harmonic lasing is the Harmonic Lasing Self-Seeded (HLSS) FEL, which improves the longitudinal coherence and spectral power of a SASE FEL. Recently this concept was successfully tested at the soft X-ray FEL user facility FLASH in the wavelength range between 4.5 nm and 15 nm. That was also the first experimental demonstration of harmonic lasing in a high-gain FEL at a short wavelength (previously it had worked only in infrared FEL oscillators). Another innovative scheme tested at FLASH2 is reverse tapering, which can be used to produce circularly polarized radiation from a dedicated afterburner with strongly suppressed linearly polarized radiation from the main undulator. This scheme can also be used for efficient background-free production of harmonics in an afterburner. Experiments on frequency doubling, which reached the shortest wavelength achieved at FLASH, and on post-saturation tapering, which produced a record intensity in the XUV regime, are also discussed.
Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F
2003-03-20
Evaluation schemes, e.g., least-squares fitting, are not universally applicable to all types of experiments. If an evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline how statistical data evaluation schemes should be derived for any type of experiment, and we demonstrate the approach for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation involves the following steps. First, one has to provide a measurement model that accounts for the statistical variation of all included variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that its accuracy and precision be as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared to well-established schemes in terms of the fitting of power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.
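The recipe summarized above (measurement model first, estimator second) can be sketched generically. The toy model below, a signal proportional to a fluctuating laser power with power-dependent noise, is an illustrative stand-in for the paper's measurement model, fitted by minimizing the negative log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
power = rng.normal(1.0, 0.2, 200)                     # pulse-to-pulse laser power
a_true = 3.0
signal = a_true * power + rng.normal(0, 0.1 * power)  # noise scales with power

def neg_log_likelihood(params):
    a, sig = params
    resid = signal - a * power
    var = (sig * power) ** 2                          # heteroscedastic variance
    return 0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

fit = minimize(neg_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
print(fit.x)   # ML estimates of (a, sigma); compare with naive least squares
```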
Classification of Dust Days by Satellite Remotely Sensed Aerosol Products
NASA Technical Reports Server (NTRS)
Sorek-Hammer, M.; Cohen, A.; Levy, Robert C.; Ziv, B.; Broday, D. M.
2013-01-01
Considerable progress in satellite remote sensing (SRS) of dust particles has been made in the last decade. From an environmental health perspective, detection of such events, once linked to ground particulate matter (PM) concentrations, can serve as a proxy for acute exposure to respirable particles of certain properties (i.e. size, composition, and toxicity). Being affected considerably by atmospheric dust, previous studies in the Eastern Mediterranean, and in Israel in particular, have focused on mechanistic and synoptic prediction, classification, and characterization of dust events. In particular, a scheme for identifying dust days (DD) in Israel based on ground PM10 (particulate matter smaller than 10 μm) measurements has been suggested and validated by compositional analysis. This scheme requires information on ground PM10 levels, which is naturally limited in places with sparse ground-monitoring coverage. In such cases, SRS may be an efficient and cost-effective alternative to ground measurements. This work demonstrates a new model for identifying DD and non-DD (NDD) over Israel based on an integration of aerosol products from different satellite platforms (Moderate Resolution Imaging Spectroradiometer (MODIS) and Ozone Monitoring Instrument (OMI)). Analysis of ground-monitoring data from 2007 to 2008 in southern Israel revealed 67 DD, with more than 88 percent occurring during winter and spring. A Classification and Regression Tree (CART) model applied to a database containing ground monitoring records (the dependent variable) and SRS aerosol product records (the independent variables) revealed an optimal set of binary variables for the identification of DD. These variables are combinations of the following primary variables: the calendar month, ground-level relative humidity (RH), the aerosol optical depth (AOD) from MODIS, and the aerosol absorbing index (AAI) from OMI. A logistic regression using these variables, coded as binary variables, achieved 93.2 percent correct classification of DD and NDD. Evaluation of the combined CART-logistic regression scheme in an adjacent geographical region (Gush Dan) demonstrated good results. Using SRS aerosol products for DD and NDD identification may enable us to distinguish between health, ecological, and environmental effects that result from exposure to these distinct particle populations.
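A hedged sketch of the two-stage CART-plus-logistic-regression idea described above, on synthetic data; the column meanings mirror the primary variables named in the abstract, but the data, thresholds, and settings are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.integers(1, 13, n),       # calendar month
    rng.uniform(10, 90, n),       # relative humidity (%)
    rng.uniform(0, 1.5, n),       # MODIS AOD
    rng.uniform(-2, 4, n),        # OMI aerosol absorbing index
])
y = ((X[:, 2] > 0.6) & (X[:, 3] > 1.0)).astype(int)   # synthetic "dust day"

# Stage 1: a tree proposes binary splits of the satellite variables.
tree = DecisionTreeClassifier(max_leaf_nodes=8).fit(X, y)
leaves = tree.apply(X)
indicators = (leaves[:, None] == np.unique(leaves)[None, :]).astype(int)

# Stage 2: logistic regression on the binary indicators classifies DD/NDD.
logit = LogisticRegression().fit(indicators, y)
print(logit.score(indicators, y))
```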
Continuous-variable quantum authentication of physical unclonable keys
NASA Astrophysics Data System (ADS)
Nikolopoulos, Georgios M.; Diamanti, Eleni
2017-04-01
We propose a scheme for authentication of physical keys that are materialized by optical multiple-scattering media. The authentication relies on the optical response of the key when probed by randomly selected coherent states of light, and on the use of standard wavefront-shaping techniques that direct the scattered photons coherently to a specific target mode at the output. The quadratures of the electromagnetic field of the scattered light at the target mode are analysed using a homodyne detection scheme, and the acceptance or rejection of the key is decided based on the outcomes of the measurements. The proposed scheme can be implemented with current technology and offers collision resistance and robustness against key cloning.
NASA Astrophysics Data System (ADS)
Wang, Jun; Zhao, Jianlin; Di, Jianglei; Jiang, Biqiang
2015-04-01
A scheme for recording fast processes at the nanosecond scale by using digital holographic interferometry with a continuous wave (CW) laser is described and demonstrated experimentally. It employs delay fibers and an angular multiplexing technique, and can realize variable temporal resolution at the nanosecond scale and different measured depths of the object field at a given temporal resolution. The actual delay time is controlled by two delay fibers of different lengths. The object field information in two different states can be simultaneously recorded in a composite hologram. This scheme is also suitable for recording fast processes at the picosecond scale by using an electro-optic modulator.
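The delay obtainable from a length of fiber follows directly from the group index; a back-of-envelope check (assuming a typical group index for standard single-mode fiber) shows the scale of delays the two fibers control:

```python
# delay = n_group * length / c, roughly 5 ns per metre of fiber
c = 2.998e8          # vacuum speed of light, m/s
n_group = 1.468      # typical group index of standard single-mode fiber (assumed)
for length_m in (1.0, 10.0, 100.0):
    delay_ns = n_group * length_m / c * 1e9
    print(f"{length_m:6.1f} m of fiber -> {delay_ns:8.1f} ns delay")
```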
Cognitive balanced model: a conceptual scheme of diagnostic decision making.
Lucchiari, Claudio; Pravettoni, Gabriella
2012-02-01
Diagnostic reasoning is a critical aspect of clinical performance, with a high impact on the quality and safety of care. Although diagnosis is fundamental in medicine, we still have a poor understanding of the factors that determine its course. According to the traditional understanding, all information used in diagnostic reasoning is objective and logically driven. However, these conditions are not always met. Although we would be less likely to make an inaccurate diagnosis when following rational decision making, as described by normative models, the real diagnostic process works in a different way. Recent work has described the major cognitive biases in medicine as well as a number of strategies for reducing them, collectively called debiasing techniques. However, advances have encountered obstacles to implementation in clinical practice. While the traditional understanding of clinical reasoning has failed to consider contextual factors, most debiasing techniques seem to fail to produce sounder and safer medical practice. Technological solutions, being data driven, are fundamental to increasing care safety, but they need to consider human factors. Thus, balanced models, cognitively driven and technology based, are needed in day-to-day applications to actually improve the diagnostic process. The purpose of this article, then, is to provide insight into the cognitive influences that result in wrong, delayed or missed diagnoses. Using a cognitive approach, we describe the basis of medical error, with particular emphasis on diagnostic error. We then propose a conceptual scheme of the diagnostic process using fuzzy cognitive maps. © 2011 Blackwell Publishing Ltd.
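A minimal fuzzy cognitive map iteration, the formalism the authors propose; the concepts, weight matrix, and squashing function below are illustrative assumptions, not the paper's actual map.

```python
import numpy as np

concepts = ["symptom salience", "anchoring bias", "data gathering", "diagnosis"]
W = np.array([            # W[i, j]: signed influence of concept i on concept j
    [0.0, 0.6, 0.4, 0.3],
    [0.0, 0.0, -0.5, 0.4],
    [0.0, -0.3, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.0],
])

def step(state, W, lam=2.0):
    """One FCM update: propagate activations and squash into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * (state @ W)))

state = np.array([1.0, 0.2, 0.5, 0.0])
for _ in range(20):                # iterate towards a fixed point
    state = step(state, W)
print(dict(zip(concepts, state.round(2))))
```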
NASA Astrophysics Data System (ADS)
Ziaei, Vafa; Bredow, Thomas
2017-11-01
We propose a simple many-body-based screening mixing strategy to considerably enhance the performance of the Bethe-Salpeter equation (BSE) approach for the prediction of excitation energies of molecular systems. This strategy enables us to closely reproduce results of the highly correlated equation-of-motion coupled cluster singles and doubles (EOM-CCSD) method through optimal use of cancellation effects. We start from the Hartree-Fock (HF) reference state and take advantage of local density approximation (LDA)-based random phase approximation (RPA) screening, denoted W0-RPA@LDA, with W0 the dynamically screened interaction built upon LDA wave functions and energies. We then use this W0-RPA@LDA screening as an initial guess for the calculation of quasiparticle energies in the framework of G0W0@HF. The W0-RPA@LDA screening is further injected into the BSE. Applying this approach to a set of 22 molecules for which the traditional GW/BSE approaches fail, we observe good agreement with EOM-CCSD references. The reason for the observed accuracy of this mixing ansatz (scheme A) lies in an optimal damping of the HF exchange effect through the strong W0-RPA@LDA screening, leading to a substantial decrease of the typically overestimated HF electronic gap and hence to better excitation energies. Further, we present a second multiscreening ansatz (scheme B), which is similar to scheme A except that the W0-RPA@HF screening is used in the BSE, in order to further improve the overestimated excitation energies of carbonyl sulfide (COS) and disilane (Si2H6). The improvement of the excitation energies in scheme B arises because the W0-RPA@HF screening is less effective (weaker than W0-RPA@LDA), which gives rise to stronger electron-hole effects in the BSE.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
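The accuracy-per-step trade-off among implicit schemes of different temporal order can be seen on the linear test equation y' = -y, where both updates solve in closed form; this is only a sketch of the principle, not the paper's Navier-Stokes solver.

```python
import numpy as np

def backward_euler(y0, dt, n):
    y = y0
    for _ in range(n):
        y = y / (1 + dt)              # solves y_new = y + dt*(-y_new)
    return y

def bdf2(y0, dt, n):
    y_prev, y = y0, y0 / (1 + dt)     # bootstrap first step with backward Euler
    for _ in range(n - 1):
        y_prev, y = y, (4 * y - y_prev) / (3 + 2 * dt)  # BDF2 for y' = -y
    return y

exact = np.exp(-1.0)
for dt in (0.1, 0.05, 0.025):
    n = int(round(1.0 / dt))
    print(f"dt={dt}: BE err={abs(backward_euler(1.0, dt, n) - exact):.2e}  "
          f"BDF2 err={abs(bdf2(1.0, dt, n) - exact):.2e}")
# Halving dt roughly halves the BE error (1st order) but quarters the
# BDF2 error (2nd order), so fewer steps are needed for a given accuracy.
```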
KilBride, A L; Mason, S A; Honeyman, P C; Pritchard, D G; Hepple, S; Green, L E
2012-02-11
Animal Health (AH) defines the outcome of its inspections of livestock holdings as full compliance with the legislation and welfare code (A), compliance with the legislation but not the code (B), non-compliance with legislation but no pain, distress or suffering obvious in the animals (C), or evidence of unnecessary pain or unnecessary distress (D). The aim of the present study was to investigate whether membership of farm assurance or organic certification schemes was associated with compliance with animal welfare legislation as inspected by AH. Participating schemes provided details of their members, past and present, and these records were matched against inspection data from AH. Multivariable multilevel logistic binomial models were built to investigate the association between compliance with legislation and membership of a farm assurance/organic scheme. The percentages of inspections coded A, B, C and D were 37.1, 35.6, 20.2 and 7.1 per cent, respectively. Once adjusted for year, country, enterprise, herd size and reason for inspection, there was a pattern of significantly reduced risk of codes C and D, compared with A and B, in certified enterprises relative to enterprises not known to be certified, across all species.
Secure quantum signatures: a practical quantum technology (Conference Presentation)
NASA Astrophysics Data System (ADS)
Andersson, Erika
2016-10-01
Modern cryptography encompasses much more than encryption of secret messages. Signature schemes are widely used to guarantee that messages cannot be forged or tampered with, for example in e-mail, software updates and electronic commerce. Messages are also transferrable, which distinguishes digital signatures from message authentication. Transferability means that messages can be forwarded; in other words, a sender is unlikely to be able to make one recipient accept a message which is subsequently rejected by another recipient when the message is forwarded. Similar to public-key encryption, the security of commonly used signature schemes relies on the assumed computational difficulty of problems such as finding discrete logarithms or factoring large integers. With quantum computers, such assumptions would no longer be valid. Partly for this reason, it is desirable to develop signature schemes with unconditional or information-theoretic security. Quantum signature schemes are one possible solution. Similar to quantum key distribution (QKD), their unconditional security relies only on the laws of quantum mechanics. Quantum signatures can be realized with the same system components as QKD, but are so far less investigated. This talk aims to provide an introduction to quantum signatures and to review theoretical and experimental progress so far.
New method for estimating daily global solar radiation over sloped topography in China
NASA Astrophysics Data System (ADS)
Shi, Guoping; Qiu, Xinfa; Zeng, Yan
2018-03-01
A new scheme for the estimation of daily global solar radiation over sloped topography in China is developed based on the Iqbal model C and MODIS cloud fraction. The effects of topography are determined using a digital elevation model. The scheme is tested using observations of solar radiation at 98 stations in China, and the results show that the mean absolute bias error is 1.51 MJ m-2 d-1 and the mean relative absolute bias error is 10.57%. Based on calculations using this scheme, the distribution of daily global solar radiation over slopes in China on four days in the middle of each season (15 January, 15 April, 15 July and 15 October 2003) at a spatial resolution of 1 km × 1 km is analyzed. To investigate the effects of topography on global solar radiation, the results for four mountain areas (Tianshan, Kunlun Mountains, Qinling, and Nanling) are discussed, and the typical characteristics of solar radiation over sloped surfaces are revealed. In general, the new scheme can produce reasonable characteristics of solar radiation distribution at a high spatial resolution in mountain areas, which will be useful in analyses of mountain climate and planning for agricultural production.
Heather, Nick; Campion, Peter D.; Neville, Ronald G.; Maccabe, David
1987-01-01
Sixteen general practitioners participated in a controlled trial of the Scottish Health Education Group's DRAMS (drinking reasonably and moderately with self-control) scheme. The scheme was evaluated by randomly assigning 104 heavy or problem drinkers to three groups – a group participating in the DRAMS scheme (n = 34), a group given simple advice only (n = 32) and a non-intervention control group (n = 38). Six month follow-up information was obtained for 91 subjects (87.5% of initial sample). There were no significant differences between the groups in reduction in alcohol consumption, but patients in the DRAMS group showed a significantly greater reduction in a logarithmic measure of serum gamma-glutamyl transpeptidase than patients in the group receiving advice only. Only 14 patients in the DRAMS group completed the full DRAMS procedure. For the sample as a whole, there was a significant reduction in alcohol consumption, a significant improvement on a measure of physical health and well-being, and significant reductions in the logarithmic measure of serum gamma-glutamyl transpeptidase and in mean corpuscular volume. The implications of these findings for future research into controlled drinking minimal interventions in general practice are discussed. PMID:3448228
Sodt, Alexander J; Mei, Ye; König, Gerhard; Tao, Peng; Steele, Ryan P; Brooks, Bernard R; Shao, Yihan
2015-03-05
In combined quantum mechanical/molecular mechanical (QM/MM) free energy calculations, it is often advantageous to have a frozen geometry for the quantum mechanical (QM) region. For such multiple-environment single-system (MESS) cases, two schemes are proposed here for estimating the polarization energy: the first scheme, termed MESS-E, involves a Roothaan step extrapolation of the self-consistent field (SCF) energy; whereas the other scheme, termed MESS-H, employs a Newton-Raphson correction using an approximate inverse electronic Hessian of the QM region (which is constructed only once). Both schemes are extremely efficient, because the expensive Fock updates and SCF iterations in standard QM/MM calculations are completely avoided at each configuration. They produce reasonably accurate QM/MM polarization energies: MESS-E can predict the polarization energy within 0.25 kcal/mol in terms of the mean signed error for two of our test cases, solvated methanol and solvated β-alanine, using the M06-2X or ωB97X-D functionals; MESS-H can reproduce the polarization energy within 0.2 kcal/mol for these two cases and for the oxyluciferin-luciferase complex, if the approximate inverse electronic Hessians are constructed with sufficient accuracy.
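A generic sketch of the Newton-Raphson-style correction underlying MESS-H: with a gradient g and a fixed approximate inverse Hessian built once, a single quadratic step estimates the relaxation (polarization) energy without further SCF iterations. The dimensions and matrices below are toy stand-ins, not an actual electronic Hessian.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6                                   # toy dimension of the density update
H = np.diag(rng.uniform(1.0, 4.0, n))   # stand-in electronic Hessian
Hinv = np.linalg.inv(H)                 # constructed only once
g = rng.normal(size=n)                  # gradient at the new MM environment

delta = -Hinv @ g                       # Newton-Raphson step
dE = g @ delta + 0.5 * delta @ H @ delta   # quadratic energy estimate
print(dE, -0.5 * g @ Hinv @ g)          # both equal -1/2 g.Hinv.g
```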
Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.
2000-01-01
The age of air has recently emerged as a diagnostic of atmospheric transport unaffected by chemical parameterizations, and the features in the age distributions computed in models have been interpreted in terms of the models' large-scale circulation field. This study shows, however, that in addition to the simulated large-scale circulation, three-dimensional age calculations can also be affected by the choice of advection scheme employed in solving the tracer continuity equation. Specifically, using the 3.0° latitude × 3.6° longitude, 40-vertical-level version of the Geophysical Fluid Dynamics Laboratory SKYHI GCM and six online transport schemes ranging from Eulerian through semi-Lagrangian to fully Lagrangian, it will be demonstrated that the oldest ages are obtained using the nondiffusive centered-difference schemes while the youngest ages are computed with a semi-Lagrangian transport (SLT) scheme. The centered-difference schemes are capable of producing ages older than 10 years in the mesosphere, thus eliminating the "young bias" found in previous age-of-air calculations. At this stage, only limited intuitive explanations can be advanced for this sensitivity of age-of-air calculations to the choice of advection scheme. In particular, age distributions computed online with the National Center for Atmospheric Research Community Climate Model (MACCM3) using different varieties of the SLT scheme are substantially older than the SKYHI SLT distribution. The different varieties, including a noninterpolating-in-the-vertical version (which is essentially centered-difference in the vertical), also produce a narrower range of age distributions than the suite of advection schemes employed in the SKYHI model. While additional MACCM3 experiments with a wider range of schemes would be necessary to provide more definitive insights, the older and less variable MACCM3 age distributions can plausibly be interpreted as being due to the semi-implicit semi-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce the SLT's interpolation errors, which are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
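The diffusive-versus-nondiffusive contrast discussed above can be reproduced in one dimension: advecting a sharp tracer pulse once around a periodic domain with a diffusive upwind scheme versus a nondiffusive centered (leapfrog) scheme. A toy sketch, not the SKYHI or MACCM3 transport:

```python
import numpy as np

nx, c = 200, 0.5                      # grid points, Courant number
u0 = np.where(np.abs(np.arange(nx) - 50) < 10, 1.0, 0.0)

def upwind(u, steps):
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))
    return u

def leapfrog(u, steps):               # centered in space and time
    um, uc = u.copy(), u - c * (u - np.roll(u, 1))   # bootstrap with upwind
    for _ in range(steps - 1):
        um, uc = uc, um - c * (np.roll(uc, -1) - np.roll(uc, 1))
    return uc

steps = int(nx / c)                   # one full revolution of the domain
print("upwind peak:  ", upwind(u0, steps).max())    # strongly damped (diffusive)
print("leapfrog peak:", leapfrog(u0, steps).max())  # amplitude preserved,
                                                    # with dispersive ripples
```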
Quantitative Reasoning in Environmental Science: A Learning Progression
ERIC Educational Resources Information Center
Mayes, Robert Lee; Forrester, Jennifer Harris; Christus, Jennifer Schuttlefield; Peterson, Franziska Isabel; Bonilla, Rachel; Yestness, Nissa
2014-01-01
The ability of middle and high school students to reason quantitatively within the context of environmental science was investigated. A quantitative reasoning (QR) learning progression was created with three progress variables: quantification act, quantitative interpretation, and quantitative modeling. An iterative research design was used as it…
Role of memory errors in quantum repeaters
NASA Astrophysics Data System (ADS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.
2007-03-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in the standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two new operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.
Perfectly matched layers in a divergence preserving ADI scheme for electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, C.; Adelmann, A.
For numerical simulations of highly relativistic and transversely accelerated charged particles including radiation, fast algorithms are needed. While the radiation in particle accelerators has wavelengths on the order of 100 μm, the computational domain has dimensions roughly five orders of magnitude larger, resulting in very large mesh sizes. The particles are confined to only a small area of this domain. To resolve the smallest scales close to the particles, subgrids are envisioned. For reasons of stability, the alternating direction implicit (ADI) scheme for the Maxwell equations by Smithe et al. [D.N. Smithe, J.R. Cary, J.A. Carlsson, Divergence preservation in the ADI algorithms for electromagnetics, J. Comput. Phys. 228 (2009) 7289-7299] has been adopted. At the boundary of the domain, absorbing boundary conditions have to be employed to prevent reflection of the radiation. In this paper we show how the divergence preserving ADI scheme has to be formulated in perfectly matched layers (PML) and compare the performance in several scenarios.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second- and fourth-order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties, such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and the base pressure characteristic curve, agreed reasonably well with the measurements. A parametric study of the effects of grid resolution, turbulence model, inlet boundary condition, and the difference scheme for the convective terms has been performed. The results showed that grid resolution and turbulence model are the two primary factors influencing the accuracy of the base flowfield prediction.
A Theoretical Analysis: Physical Unclonable Functions and The Software Protection Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithyanand, Rishab; Solis, John H.
2011-09-01
Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This property of unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real world adversaries in offline settings. Our contributions are the following: We provide two real world adversary models (weak and strong variants) and present definitions for security against the adversaries. We continue by proposing schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.
The scheme and research of TV series multidimensional comprehensive evaluation on cross-platform
NASA Astrophysics Data System (ADS)
Chai, Jianping; Bai, Xuesong; Zhou, Hongjun; Yin, Fulian
2016-10-01
To address the shortcomings of the traditional comprehensive evaluation system for TV programs, such as reliance on a single data source, neglect of new media, and the high cost and difficulty of surveys, a new post-broadcast, cross-platform, multidimensional evaluation of TV series is proposed in this paper. The scheme takes data collected directly from cable television and the Internet as its research objects. Based on the TOPSIS principle, the data are preprocessed and transformed into primary indicators that reflect different profiles of the viewing of a TV series. After reasonable weighting and summation by six methods (PCA, AHP, etc.), the primary indicators form composite indices for different channels or websites. The scheme avoids the inefficiency and difficulty of surveys and manual scoring; at the same time, it not only reflects different dimensions of viewing but also combines traditional TV media with new media, completing a multidimensional comprehensive evaluation of TV series across platforms.
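A minimal TOPSIS ranking, the principle the scheme builds on: normalize the indicator matrix, weight it, and score each alternative by closeness to the ideal solution. The indicator matrix and weights below are made-up placeholders; rows are TV series, columns cross-platform viewing indicators.

```python
import numpy as np

X = np.array([[0.82, 1200, 4.1],     # e.g. rating share, web plays, buzz score
              [0.55, 4300, 4.6],
              [0.91,  800, 3.2]], dtype=float)
w = np.array([0.5, 0.3, 0.2])        # weights, e.g. from AHP/PCA-style methods

V = w * X / np.linalg.norm(X, axis=0)        # normalize columns, then weight
ideal, anti = V.max(axis=0), V.min(axis=0)   # all indicators benefit-type here
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)          # composite index per series
print(closeness.argsort()[::-1])             # ranking, best first
```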
ERIC Educational Resources Information Center
Egilmez, Hatice Onuray; Engur, Doruk
2017-01-01
In this study, the self-efficacy and motivation of Zeki Muren Fine Arts High School piano students were examined based on different variables as well as the reasons for their failure. The data on their self-efficacy were obtained through self-efficacy scale of piano performance and the data on their motivation were obtained through motivation…
Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.
Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk
2015-08-01
The purpose of this study was to evaluate grouping schemes for exposure to total dust among cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken at nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performance of the different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and the highest elasticity among the grouping schemes examined. Our findings suggest that a grouping method based on ranking of job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
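A hedged sketch of the variance-components model described above, using statsmodels with a patsy B-spline time trend as fixed effects and the grouping variable as a random intercept; the file and column names are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dust_measurements.csv")   # hypothetical file with columns:
# log_dust, year, expo_group (a job-by-factory grouping label)

model = smf.mixedlm("log_dust ~ bs(year, df=4)",   # B-spline time trend
                    data=df, groups=df["expo_group"])
result = model.fit()
# Between-group variance relative to residual variance underlies the
# elasticity comparison across grouping schemes.
print(result.cov_re, result.scale)
```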
NASA Astrophysics Data System (ADS)
Nkhoma, Bryson; Kayira, Gift
2016-04-01
Over the past two decades, Malawi has been adversely hit by climatic variability and changes, and irrigation schemes which rely mostly on water from rivers have been negatively affected. In the face of dwindling quantities of water, distribution and sharing of water for irrigation has been a source of contestations and conflicts. Women who constitute a significant section of irrigation farmers in schemes have been major culprits. The study seeks to analyze gender contestations and conflicts over the use of water in the schemes developed in the Lake Chilwa basin, in southern Malawi. Using oral and written sources as well as drawing evidence from participatory and field observations conducted at Likangala and Domasi irrigation schemes, the largest schemes in the basin, the study observes that women are not passive victims of male domination over the use of dwindling waters for irrigation farming. They have often used existing political and traditional structures developed in the management of water in the schemes to competitively gain monopoly over water. They have sometimes expressed their agency by engaging in irrigation activities that fall beyond the control of formal rules and regulations of irrigation agriculture. Other than being losers, women are winning the battle for water and land resources in the basin.
Saleem, M Rehan; Ashraf, Waqas; Zia, Saqib; Ali, Ishtiaq; Qamar, Shamsul
2018-01-01
This paper is concerned with the derivation of a well-balanced kinetic scheme to approximate a shallow flow model incorporating non-flat bottom topography and horizontal temperature gradients. The considered model equations, also called the Ripa system, are the non-homogeneous shallow water equations incorporating temperature gradients and non-uniform bottom topography. Due to the presence of temperature gradient terms, the steady state at rest is of primary interest from the physical point of view; however, capturing this steady state is a challenging task for numerical methods. The proposed well-balanced kinetic flux vector splitting (KFVS) scheme is non-oscillatory and second-order accurate. The second-order accuracy is obtained by a MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. The scheme is applied to solve the model equations in one and two space dimensions. Several numerical case studies are carried out to validate the proposed numerical algorithm. The numerical results are compared with those of the staggered central NT scheme. The results are also in good agreement with recently published results in the literature, verifying the potential, efficiency, accuracy and robustness of the suggested numerical scheme.
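A generic minmod-limited MUSCL reconstruction of the kind the scheme pairs with Runge-Kutta time stepping; this is a sketch of the reconstruction step only, not the paper's full well-balanced KFVS solver.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_interface_states(q):
    """Left/right states at interior cell interfaces from cell averages q."""
    dq = minmod(q[1:-1] - q[:-2], q[2:] - q[1:-1])   # limited slopes
    qL = q[1:-1] + 0.5 * dq    # left state at interface i+1/2
    qR = q[1:-1] - 0.5 * dq    # right state at interface i-1/2
    return qL, qR
```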