Sample records for validating aggregation schemes

  1. Improving a Spectral Bin Microphysical Scheme Using TRMM Satellite Observations

    NASA Technical Reports Server (NTRS)

    Li, Xiaowen; Tao, Wei-Kuo; Matsui, Toshihisa; Liu, Chuntao; Masunaga, Hirohiko

    2010-01-01

    Comparisons between cloud model simulations and observations are crucial for validating model performance and improving the physical processes represented in the model. These modeled physical processes are idealized representations and almost always leave considerable room for improvement. In this study, we use data from two different sensors onboard the TRMM (Tropical Rainfall Measuring Mission) satellite to improve the microphysical scheme in the Goddard Cumulus Ensemble (GCE) model. Mature-stage squall lines observed by TRMM over the central US during late spring and early summer of a 9-year period are compiled and compared with a case simulation by the GCE model. A unique aspect of the GCE model is its state-of-the-art spectral bin microphysical scheme, which uses 33 bins to represent the particle size distribution of each of the seven hydrometeor species. A forward radiative transfer model calculates TRMM Precipitation Radar (PR) reflectivity and TRMM Microwave Imager (TMI) 85 GHz brightness temperatures from the simulated particle size distributions. Comparisons between model outputs and observations reveal that the model overestimates the sizes of snow/aggregates in the stratiform region of the squall line. After the temperature-dependent collection coefficients among ice-phase particles are adjusted, the PR comparisons improve while the TMI comparisons worsen. Further investigation shows that the partitioning between graupel (a high-density form of aggregate) and snow (a low-density form of aggregate) must also be adjusted to obtain good agreement in both PR reflectivity and TMI brightness temperature. This study shows that long-term satellite observations, especially those from multiple sensors, can be very useful in constraining model microphysics. It is also the first study to validate and improve a sophisticated spectral bin microphysical scheme against long-term satellite observations.

  2. Formation and structure of stable aggregates in binary diffusion-limited cluster-cluster aggregation processes

    NASA Astrophysics Data System (ADS)

    López-López, J. M.; Moncho-Jordá, A.; Schmitt, A.; Hidalgo-Álvarez, R.

    2005-09-01

    Binary diffusion-limited cluster-cluster aggregation processes are studied as a function of the relative concentration of the two species. Both short- and long-time behaviors are investigated by means of three-dimensional off-lattice Brownian dynamics simulations. At short aggregation times, the validity of the Hogg-Healy-Fuerstenau approximation is shown. At long times, a single large cluster containing all initial particles forms when the relative concentration of the minority particles lies above a critical value. Below that value, stable aggregates remain in the system. These stable aggregates are composed of a few minority particles that are highly covered by majority ones. Our off-lattice simulations reveal a value of approximately 0.15 for the critical relative concentration. A qualitative scheme explaining the formation and growth of the stable aggregates is developed. The simulations also explain the phenomenon of monomer discrimination that was recently observed in single-cluster light scattering experiments.

  3. A Novel Magnetic Actuation Scheme to Disaggregate Nanoparticles and Enhance Passage across the Blood–Brain Barrier

    PubMed Central

    Le, Tuan-Anh; Amin, Faiz Ul; Kim, Myeong Ok

    2017-01-01

    The blood–brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogrammed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed, and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. Experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using electromagnetic actuation. PMID:29271927

  4. A Novel Magnetic Actuation Scheme to Disaggregate Nanoparticles and Enhance Passage across the Blood-Brain Barrier.

    PubMed

    Hoshiar, Ali Kafash; Le, Tuan-Anh; Amin, Faiz Ul; Kim, Myeong Ok; Yoon, Jungwon

    2017-12-22

    The blood-brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogrammed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed, and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. Experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using electromagnetic actuation.

  5. Enhanced DNA Sensing via Catalytic Aggregation of Gold Nanoparticles

    PubMed Central

    Huttanus, Herbert M.; Graugnard, Elton; Yurke, Bernard; Knowlton, William B.; Kuang, Wan; Hughes, William L.; Lee, Jeunghoon

    2014-01-01

    A catalytic colorimetric detection scheme that incorporates a DNA-based hybridization chain reaction into gold nanoparticles was designed and tested. While direct aggregation forms an inter-particle linkage from only one target DNA strand, catalytic aggregation forms multiple linkages from a single target DNA strand. Gold nanoparticles were functionalized with thiol-modified DNA strands capable of undergoing hybridization chain reactions. Changes in their absorption spectra were measured at different times and target concentrations and compared against direct aggregation. Catalytic aggregation showed a multifold increase in sensitivity at low target concentrations when compared to direct aggregation. Gel electrophoresis was performed to compare the DNA hybridization reactions in the catalytic and direct aggregation schemes, and product formation was confirmed in the catalytic aggregation scheme at low target concentrations. The catalytic aggregation scheme also showed high target specificity. This application of a DNA reaction network to gold nanoparticle-based colorimetric detection enables highly sensitive, field-deployable colorimetric readout systems capable of detecting a variety of biomolecules. PMID:23891867

  6. Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees

    PubMed Central

    Ren, Hao; Li, Hongwei; Liang, Xiaohui; He, Shibo; Dai, Yuanshun; Zhao, Lian

    2016-01-01

    With the rapid growth of the health data scale, the limited storage and computation resources of wireless body area sensor networks (WBANs) are becoming a barrier to their development. Outsourcing encrypted health data to the cloud has therefore been an appealing strategy, but it makes data aggregation difficult. Several recently proposed schemes attempt to address this problem; however, some functions and privacy issues remain unaddressed. In this paper, we propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. Specifically, we achieve a new aggregation function, weighted average (WAAS), and design a privacy-enhanced aggregation scheme (PAAS) to protect the aggregated data from cloud servers. In addition, a histogram aggregation scheme with high accuracy is proposed. PMHA-DP supports fault tolerance while preserving data privacy. The performance evaluation shows that the proposal incurs less communication overhead than the existing scheme. PMID:27626417

  7. Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees.

    PubMed

    Ren, Hao; Li, Hongwei; Liang, Xiaohui; He, Shibo; Dai, Yuanshun; Zhao, Lian

    2016-09-10

    With the rapid growth of the health data scale, the limited storage and computation resources of wireless body area sensor networks (WBANs) are becoming a barrier to their development. Outsourcing encrypted health data to the cloud has therefore been an appealing strategy, but it makes data aggregation difficult. Several recently proposed schemes attempt to address this problem; however, some functions and privacy issues remain unaddressed. In this paper, we propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. Specifically, we achieve a new aggregation function, weighted average (WAAS), and design a privacy-enhanced aggregation scheme (PAAS) to protect the aggregated data from cloud servers. In addition, a histogram aggregation scheme with high accuracy is proposed. PMHA-DP supports fault tolerance while preserving data privacy. The performance evaluation shows that the proposal incurs less communication overhead than the existing scheme.

  8. Adaptive Aggregation Routing to Reduce Delay for Multi-Layer Wireless Sensor Networks.

    PubMed

    Li, Xujing; Liu, Anfeng; Xie, Mande; Xiong, Neal N; Zeng, Zhiwen; Cai, Zhiping

    2018-04-16

    The quality of service (QoS) regarding delay, lifetime and reliability is the key to the application of wireless sensor networks (WSNs). Data aggregation is a method to effectively reduce the data transmission volume and improve the lifetime of a network. In previous studies, a common strategy required that data wait in a queue: when the length of the queue reaches the predetermined aggregation threshold (Nt) or the waiting time reaches the aggregation timer (Tt), data are forwarded, at the expense of an increase in delay. The primary contributions of the proposed Adaptive Aggregation Routing (AAR) scheme are the following: (a) the senders select the forwarding node dynamically according to the length of the data queue, which effectively reduces the delay. In the AAR scheme, the senders send data to the nodes with a long data queue. The advantages are that, first, nodes with a long data queue need only a small amount of additional data to perform aggregation, so the transmitted data can be fully utilized to make these nodes aggregate; second, the scheme balances the aggregating and data-sending load, which increases the lifetime. (b) An improved AAR scheme is proposed to improve the QoS. The aggregation deadline (Tt) and the aggregation threshold (Nt) are dynamically changed in the network. In WSNs, nodes far from the sink have residual energy because they transmit less data than other nodes. In the improved AAR scheme, nodes far from the sink are given small values of Tt and Nt to reduce delay, and nodes near the sink are given large values of Tt and Nt to reduce energy consumption. Thus, the end-to-end delay is reduced, a longer lifetime is achieved, and the residual energy is fully used. Simulation results demonstrate that, compared with the previous scheme, the AAR scheme reduces the delay by 14.91%, improves the lifetime by 30.91%, and increases energy efficiency by 76.40%.
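
    The improved AAR decision logic lends itself to a compact sketch. Below is a minimal, hypothetical rendering of the two forwarding triggers (queue length vs. Nt, waiting time vs. Tt) and of scaling both thresholds with hop distance from the sink; the linear scaling, constants, and names are illustrative assumptions, not the paper's exact design.

      from dataclasses import dataclass, field

      @dataclass
      class Node:
          hops_to_sink: int          # hop distance from the sink
          max_hops: int              # network radius, used to scale Tt and Nt
          queue: list = field(default_factory=list)

          def thresholds(self, base_nt=8, base_tt=2.0):
              # Improved AAR idea: far nodes (spare energy) get small Tt/Nt to
              # cut delay; near nodes get large Tt/Nt to save energy.
              far = self.hops_to_sink / self.max_hops   # ~1 far ... ~0 near
              nt = max(1, round(base_nt * (1.0 - far)) + 1)
              tt = base_tt * (1.0 - far) + 0.1
              return nt, tt

          def should_forward(self, waited_s: float) -> bool:
              nt, tt = self.thresholds()
              return len(self.queue) >= nt or waited_s >= tt

      def pick_forwarder(neighbors):
          # Basic AAR rule: prefer the neighbor with the longest queue, since it
          # needs the least extra data to reach its own aggregation threshold.
          return max(neighbors, key=lambda n: len(n.queue))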

  9. Adaptive Aggregation Routing to Reduce Delay for Multi-Layer Wireless Sensor Networks

    PubMed Central

    Li, Xujing; Xie, Mande; Zeng, Zhiwen; Cai, Zhiping

    2018-01-01

    The quality of service (QoS) regarding delay, lifetime and reliability is the key to the application of wireless sensor networks (WSNs). Data aggregation is a method to effectively reduce the data transmission volume and improve the lifetime of a network. In previous studies, a common strategy required that data wait in a queue: when the length of the queue reaches the predetermined aggregation threshold (Nt) or the waiting time reaches the aggregation timer (Tt), data are forwarded, at the expense of an increase in delay. The primary contributions of the proposed Adaptive Aggregation Routing (AAR) scheme are the following: (a) the senders select the forwarding node dynamically according to the length of the data queue, which effectively reduces the delay. In the AAR scheme, the senders send data to the nodes with a long data queue. The advantages are that, first, nodes with a long data queue need only a small amount of additional data to perform aggregation, so the transmitted data can be fully utilized to make these nodes aggregate; second, the scheme balances the aggregating and data-sending load, which increases the lifetime. (b) An improved AAR scheme is proposed to improve the QoS. The aggregation deadline (Tt) and the aggregation threshold (Nt) are dynamically changed in the network. In WSNs, nodes far from the sink have residual energy because they transmit less data than other nodes. In the improved AAR scheme, nodes far from the sink are given small values of Tt and Nt to reduce delay, and nodes near the sink are given large values of Tt and Nt to reduce energy consumption. Thus, the end-to-end delay is reduced, a longer lifetime is achieved, and the residual energy is fully used. Simulation results demonstrate that, compared with the previous scheme, the AAR scheme reduces the delay by 14.91%, improves the lifetime by 30.91%, and increases energy efficiency by 76.40%. PMID:29659535

  10. Area-averaged evapotranspiration over a heterogeneous land surface: aggregation of multi-point EC flux measurements with a high-resolution land-cover map and footprint analysis

    NASA Astrophysics Data System (ADS)

    Xu, Feinan; Wang, Weizhen; Wang, Jiemin; Xu, Ziwei; Qi, Yuan; Wu, Yueru

    2017-08-01

    The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of remote-sensing-based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and a high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation were carefully performed for the flux matrix data, comprising 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs). Secondly, the representativeness of each EC site was quantitatively evaluated, and footprint analysis was performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple linear regression. The area-averaged sensible heat fluxes obtained from the EC flux matrix were then validated against the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the simpler approaches used previously, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database but also has a solid grounding in physics and mathematics for integrating area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. Furthermore, this work will be extended to the water balance study of the whole Heihe River basin.
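
    The flux aggregation step admits a compact numerical sketch: regress each EC site's flux on the land-cover fractions within its footprint to obtain per-cover component fluxes, then weight those by the whole area's cover fractions. This is a hedged illustration of footprint-weighted multiple linear regression with made-up numbers, not HiWATER's exact procedure.

      import numpy as np

      # footprint_fracs[i, j]: fraction of land-cover class j in site i's footprint
      footprint_fracs = np.array([
          [0.7, 0.2, 0.1],   # site 1: mostly cropland
          [0.1, 0.8, 0.1],   # site 2: mostly orchard
          [0.2, 0.1, 0.7],   # site 3: mostly village/bare soil
          [0.4, 0.4, 0.2],
      ])
      site_flux = np.array([310.0, 260.0, 150.0, 265.0])  # e.g. latent heat, W m^-2

      # Least squares: site_flux ~ footprint_fracs @ f_cover (per-cover fluxes)
      f_cover, *_ = np.linalg.lstsq(footprint_fracs, site_flux, rcond=None)

      # Area average = per-cover fluxes weighted by the map's cover fractions
      area_fracs = np.array([0.5, 0.3, 0.2])
      print("area-averaged flux:", float(area_fracs @ f_cover))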

  11. Mapping Surface Cover Parameters Using Aggregation Rules and Remotely Sensed Cover Classes. Version 1.9

    NASA Technical Reports Server (NTRS)

    Arain, Altaf M.; Shuttleworth, W. James; Yang, Z-Liang; Michaud, Jene; Dolman, Johannes

    1997-01-01

    A coupled model, which combines the Biosphere-Atmosphere Transfer Scheme (BATS) with an advanced atmospheric boundary-layer model, was used to validate hypothetical aggregation rules for BATS-specific surface cover parameters. The model was initialized and tested with observations from the Anglo-Brazilian Amazonian Climate Observational Study and used to simulate surface fluxes for rain forest and pasture mixes at a site near Manaus in Brazil. The aggregation rules are shown to estimate parameters which give area-average surface fluxes similar to those calculated with explicit representation of forest and pasture patches for a range of meteorological and surface conditions relevant to this site, but the agreement deteriorates somewhat when there are large patch-to-patch differences in soil moisture. The aggregation rules, validated as above, were then applied to a remotely sensed 1 km land-cover data set to obtain grid-average values of BATS vegetation parameters for 2.8 deg x 2.8 deg and 1 deg x 1 deg grids within the conterminous United States. There are significant differences in key vegetation parameters (aerodynamic roughness length, albedo, leaf area index, and stomatal resistance) when aggregate parameters are compared to parameters for the single, dominant cover within the grid. However, the surface energy fluxes calculated by stand-alone BATS with the 2-year forcing data from the International Satellite Land Surface Climatology Project (ISLSCP) CD-ROM were reasonably similar using aggregate-vegetation parameters and dominant-cover parameters, although there were some significant differences, particularly in the western USA.

  12. A Novel Component Carrier Configuration and Switching Scheme for Real-Time Traffic in a Cognitive-Radio-Based Spectrum Aggregation System

    PubMed Central

    Fu, Yunhai; Ma, Lin; Xu, Yubin

    2015-01-01

    In spectrum aggregation (SA), two or more component carriers (CCs) of different bandwidths in different bands can be aggregated to support a wider transmission bandwidth. The scheduling delay is the most important design constraint for the broadband wireless trunking (BWT) system, especially under cognitive radio (CR) conditions. The current resource scheduling schemes for spectrum aggregation are therefore questionable and not suited to meeting the delay requirement. Consequently, the authors propose a novel component carrier configuration and switching scheme for real-time traffic (RT-CCCS) to satisfy the delay requirement in the CR-based SA system, considering a sensor-network-assisted CR network. The authors first introduce a resource scheduling structure for SA under CR conditions and then analyze the proposed scheme in detail. Finally, simulations are carried out to verify the analysis; the results prove that the proposed scheme can satisfy the delay requirement in the CR-based SA system. PMID:26393594

  13. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Aggregation Behaviors of a Two-Species System with Lose-Lose Interactions

    NASA Astrophysics Data System (ADS)

    Song, Mei-Xia; Lin, Zhen-Quan; Li, Xiao-Dong; Ke, Jian-Hong

    2010-06-01

    We propose an aggregation evolution model of two-species (A- and B-species) aggregates to study the prevalent aggregation phenomena in social and economic systems. In this model, A- and B-species aggregates undergo self-exchange-driven growth with the exchange rate kernels K(k,l) = Kkl and L(k,l) = Lkl, respectively, and self-birth processes with the rate kernels J1(k) = J1k and J2(k) = J2k; meanwhile, the interaction between aggregates of the different species A and B causes a lose-lose scheme with the rate kernel H(k,l) = Hkl. Based on mean-field theory, we investigate the evolution of the two species of aggregates to study the competition among the above three aggregate evolution schemes for distinct initial monomer concentrations A0 and B0 of the two species. The results show that the evolution of the A- and B-species is crucially dominated by the competition between the two self-birth processes, and the initial monomer concentrations A0 and B0 play important roles, while the lose-lose scheme plays an important role only in some special cases.
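
    For readers unfamiliar with this formalism, the mean-field rate equation for the A-species size distribution a_k(t) can be written schematically as below (in LaTeX). The bookkeeping of the exchange terms follows the standard donor/acceptor convention and is a reconstruction under those conventions, not the paper's exact equations; a symmetric equation, with L and J2, holds for the B-species distribution b_k(t).

      \begin{align*}
      \frac{\mathrm{d}a_k}{\mathrm{d}t}
        &= \sum_{l\ge 1}\bigl[K(k-1,l)\,a_{k-1}a_l - K(k,l)\,a_k a_l\bigr]
         + \sum_{l\ge 1}\bigl[K(l,k+1)\,a_l a_{k+1} - K(l,k)\,a_l a_k\bigr]
           && \text{(self-exchange)} \\
        &\quad + J_1(k-1)\,a_{k-1} - J_1(k)\,a_k
           && \text{(self-birth)} \\
        &\quad + \sum_{l\ge 1}\bigl[H(k+1,l)\,a_{k+1}b_l - H(k,l)\,a_k b_l\bigr]
           && \text{(lose-lose)}
      \end{align*}

    with K(k,l) = Kkl, J1(k) = J1k, and H(k,l) = Hkl as defined in the abstract.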

  14. Classifying Human Activity Patterns from Smartphone Collected GPS data: a Fuzzy Classification and Aggregation Approach.

    PubMed

    Wan, Neng; Lin, Ge

    2016-12-01

    Smartphones have emerged as a promising type of equipment for monitoring human activities in environmental health studies. However, degraded location accuracy and inconsistency of smartphone-measured GPS data have limited their effectiveness for classifying human activity patterns. This study proposes a fuzzy classification scheme for differentiating human activity patterns from smartphone-collected GPS data. Specifically, fuzzy logic reasoning was adopted to overcome the influence of location uncertainty by estimating the probability of different activity types for single GPS points. Based on that approach, a segment aggregation method was developed to infer activity patterns while adjusting for uncertainties in point attributes. Validations of the proposed methods were carried out on a convenience sample of three subjects with different types of smartphones. The results indicate desirable accuracy (e.g., up to 96% in activity identification) with this method. Two examples are provided in the appendix to illustrate how the proposed methods could be applied in environmental health studies. Researchers could tailor this scheme to fit a variety of research topics.
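
    The two-stage idea (fuzzy membership per GPS point, then aggregation over a segment) can be sketched briefly. The membership functions, activity names, and speed cutoffs below are invented for illustration; the paper's actual rules differ.

      import numpy as np

      ACTIVITIES = ("stationary", "walking", "vehicle")

      def point_memberships(speed_ms: float) -> np.ndarray:
          # Soft memberships over speed (m/s); crisp cutoffs are avoided so a
          # single noisy fix cannot flip the label outright.
          stationary = max(0.0, 1.0 - speed_ms / 1.0)
          walking = max(0.0, 1.0 - abs(speed_ms - 1.4) / 1.4)
          vehicle = min(1.0, max(0.0, (speed_ms - 3.0) / 7.0))
          m = np.array([stationary, walking, vehicle])
          return m / m.sum() if m.sum() > 0.0 else np.full(3, 1.0 / 3.0)

      def classify_segment(speeds) -> str:
          # Segment aggregation: average per-point memberships so isolated
          # noisy fixes are outvoted by the rest of the segment.
          mean_m = np.mean([point_memberships(s) for s in speeds], axis=0)
          return ACTIVITIES[int(np.argmax(mean_m))]

      print(classify_segment([0.2, 1.3, 1.6, 9.0, 1.2]))  # -> walking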

  15. Impact of spatial and temporal aggregation of input parameters on the assessment of irrigation scheme performance

    NASA Astrophysics Data System (ADS)

    Lorite, I. J.; Mateos, L.; Fereres, E.

    2005-01-01

    The simulations of dynamic, spatially distributed non-linear models are affected by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations occurred in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, temporal aggregation was found to have a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that, in this case, average representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results within 1% of those obtained by providing spatially specific values for about 800 parcels.

  16. A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports

    PubMed Central

    Oh, Sungmin; Jang, Ju Wook

    2017-01-01

    The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(p^n) time, whereas our heuristic scheme takes O(np) time, where n and p denote the number of sensors and the size of the periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, from the smallest to the largest period. This is based on the observation that sensors with large periods have more choices of offsets to avoid traffic peaks than sensors with smaller periods. A MATLAB simulation shows that our scheme outperforms the known scheme by M. Grenier et al. for a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to that of the straightforward optimization, which compares all possible permutations. We expect that our scheme will contribute greatly to smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet. PMID:28273831
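
    The heuristic is easy to sketch: process sensors from the smallest to the largest period, and let each pick the offset that keeps the running traffic profile flattest. The sketch below makes simplifying assumptions (unit time slots, peak load as the criterion, a single hyperperiod) and is not the paper's exact algorithm.

      import math

      def smooth_offsets(sensors):
          """sensors: list of (period, frame_size); returns (offsets, peak load)."""
          hyper = math.lcm(*[p for p, _ in sensors])
          load = [0] * hyper                    # traffic per slot over one hyperperiod
          offsets = []
          for period, size in sorted(sensors):  # smallest period first
              best_off, best_peak = 0, float("inf")
              for off in range(period):         # p candidate offsets per sensor
                  peak = max(load[t] + size for t in range(off, hyper, period))
                  if peak < best_peak:
                      best_off, best_peak = off, peak
              for t in range(best_off, hyper, period):
                  load[t] += size
              offsets.append(best_off)
          return offsets, max(load)

      print(smooth_offsets([(4, 1), (4, 1), (8, 2), (8, 2)]))  # ([0, 1, 2, 3], 2)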

  17. A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports.

    PubMed

    Oh, Sungmin; Jang, Ju Wook

    2017-03-03

    The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(p^n) time, whereas our heuristic scheme takes O(np) time, where n and p denote the number of sensors and the size of the periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, from the smallest to the largest period. This is based on the observation that sensors with large periods have more choices of offsets to avoid traffic peaks than sensors with smaller periods. A MATLAB simulation shows that our scheme outperforms the known scheme by M. Grenier et al. for a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to that of the straightforward optimization, which compares all possible permutations. We expect that our scheme will contribute greatly to smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet.

  18. Channel and Timeslot Co-Scheduling with Minimal Channel Switching for Data Aggregation in MWSNs

    PubMed Central

    Yeoum, Sanggil; Kang, Byungseok; Lee, Jinkyu; Choo, Hyunseung

    2017-01-01

    Collision-free transmission and efficient data transfer between nodes can be achieved through a set of channels in multichannel wireless sensor networks (MWSNs). While using multiple channels, we have to carefully consider channel interference, channel and time slot (resource) optimization, channel switching delay, and energy consumption. Since sensor nodes operate on low battery power, the energy consumed in channel switching becomes an important challenge. In this paper, we propose channel and time slot scheduling for minimal channel switching in MWSNs while achieving efficient and collision-free transmission between nodes. The proposed scheme constructs a duty-cycled tree while reducing the amount of channel switching. As a next step, collision-free time slots are assigned to every node based on the minimal data collection delay. The experimental results demonstrate the validity of our scheme: compared to existing schemes, it reduces the amount of channel switching by 17.5%, the energy consumed for channel switching by 28%, and the schedule length by 46%. PMID:28471416

  19. Channel and Timeslot Co-Scheduling with Minimal Channel Switching for Data Aggregation in MWSNs.

    PubMed

    Yeoum, Sanggil; Kang, Byungseok; Lee, Jinkyu; Choo, Hyunseung

    2017-05-04

    Collision-free transmission and efficient data transfer between nodes can be achieved through a set of channels in multichannel wireless sensor networks (MWSNs). While using multiple channels, we have to carefully consider channel interference, channel and time slot (resource) optimization, channel switching delay, and energy consumption. Since sensor nodes operate on low battery power, the energy consumed in channel switching becomes an important challenge. In this paper, we propose channel and time slot scheduling for minimal channel switching in MWSNs while achieving efficient and collision-free transmission between nodes. The proposed scheme constructs a duty-cycled tree while reducing the amount of channel switching. As a next step, collision-free time slots are assigned to every node based on the minimal data collection delay. The experimental results demonstrate the validity of our scheme: compared to existing schemes, it reduces the amount of channel switching by 17.5%, the energy consumed for channel switching by 28%, and the schedule length by 46%.

  20. The Aggregate Representation of Terrestrial Land Covers Within Global Climate Models (GCM)

    NASA Technical Reports Server (NTRS)

    Shuttleworth, W. James; Sorooshian, Soroosh

    1996-01-01

    This project had four initial objectives: (1) to create a realistic coupled surface-atmosphere model to investigate the aggregate description of heterogeneous surfaces; (2) to develop a simple heuristic model of surface-atmosphere interactions; (3) using the above models, to test aggregation rules for a variety of realistic cover and meteorological conditions; and (4) to reconcile Biosphere-Atmosphere Transfer Scheme (BATS) land covers with those that can be recognized from space. Our progress in meeting these objectives can be summarized as follows. Objective 1: The first objective was achieved in the first year of the project by coupling BATS with a proven two-dimensional model of the atmospheric boundary layer. The resulting model, BATS-ABL, is described in detail in a Masters thesis and reported in a paper in the Journal of Hydrology. Objective 2: The potential value of the heuristic model was re-evaluated early in the project and a decision was made to focus subsequent research around modeling studies with the BATS-ABL model. The value of using such coupled surface-atmosphere models in this research area was further confirmed by the success of the Tucson Aggregation Workshop. Objective 3: There was excellent progress in using the BATS-ABL model to test aggregation rules for a variety of realistic covers. The foci of attention were the site of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE) in Kansas and one of the study sites of the Anglo-Brazilian Amazonian Climate Observational Study (ABRACOS) near the city of Manaus, Amazonas, Brazil. These two sites were selected because of the ready availability of relevant field data to validate and initialize the BATS-ABL model. The results of these tests are given in a Masters thesis and reported in two papers. Objective 4: Progress far exceeded original expectations, not only in reconciling BATS land covers with those that can be recognized from space, but also in then applying remotely sensed land cover data to map aggregate values of BATS parameters for heterogeneous covers and interpreting these parameters in terms of surface-atmosphere exchanges.

  1. Automating the expert consensus paradigm for robust lung tissue classification

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts, and the aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.

  2. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites with, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is the assumption of full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.

  3. RiPPAS: A Ring-Based Privacy-Preserving Aggregation Scheme in Wireless Sensor Networks

    PubMed Central

    Zhang, Kejia; Han, Qilong; Cai, Zhipeng; Yin, Guisheng

    2017-01-01

    Recently, data privacy in wireless sensor networks (WSNs) has received increased attention. The characteristics of WSNs determine that users’ queries are mainly aggregation queries. In this paper, the problem of processing aggregation queries in WSNs with data privacy preservation is investigated, and a Ring-based Privacy-Preserving Aggregation Scheme (RiPPAS) is proposed. RiPPAS adopts a ring structure to perform aggregation. It uses a pseudonym mechanism for anonymous communication and a homomorphic encryption technique to add noise to data that could easily be disclosed. RiPPAS can handle both sum() queries and min()/max() queries, whereas the existing privacy-preserving aggregation methods can only deal with sum() queries. For processing sum() queries, RiPPAS has advantages over the existing methods in both privacy preservation and communication efficiency, as shown by theoretical analysis and simulation results. For processing min()/max() queries, RiPPAS provides effective privacy preservation and has low communication overhead. PMID:28178197
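
    As a toy illustration of why aggregation can proceed without exposing individual readings: RiPPAS itself uses a homomorphic encryption technique on a ring, but the same cancellation effect can be sketched with pairwise additive masking, a deliberately simpler stand-in technique used here only for illustration.

      import random

      MOD = 2**32

      def mask_readings(readings):
          # Each pair (i, j) shares a random mask r: i adds it, j subtracts it,
          # so every mask cancels in the sum while hiding individual values.
          masked = list(readings)
          n = len(masked)
          for i in range(n):
              for j in range(i + 1, n):
                  r = random.randrange(MOD)
                  masked[i] = (masked[i] + r) % MOD
                  masked[j] = (masked[j] - r) % MOD
          return masked

      readings = [17, 42, 8, 23]
      masked = mask_readings(readings)
      print(sum(masked) % MOD == sum(readings))  # True: masks cancel in the aggregate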

  4. Secure and Cost-Effective Distributed Aggregation for Mobile Sensor Networks

    PubMed Central

    Guo, Kehua; Zhang, Ping; Ma, Jianhua

    2016-01-01

    Secure data aggregation (SDA) schemes are widely used in distributed applications, such as mobile sensor networks, to reduce communication cost, prolong the network life cycle and provide security. However, most SDA schemes are suited only for a single type of statistic (i.e., summation-based or comparison-based statistics) and are not applicable to obtaining multiple statistical results; most are also inefficient for dynamic networks. This paper presents multi-functional secure data aggregation (MFSDA), in which a mapping step and a coding step are introduced to provide value preservation and order preservation and, in turn, to enable support for arbitrary statistics in the same query. MFSDA is suited for dynamic networks because active nodes can be counted directly from the aggregated data. The proposed scheme is tolerant to many types of attacks. The network load of the proposed scheme is balanced, and no significant bottleneck exists. MFSDA comes in two versions: MFSDA-I and MFSDA-II. The first can obtain accurate results, while the second is a more generalized version that can significantly reduce network traffic at the expense of some loss of accuracy. PMID:27120599

  5. Understanding latent structures of clinical information logistics: A bottom-up approach for model building and validating the workflow composite score.

    PubMed

    Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes

    2017-01-01

    Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision that drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the as-yet-unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature-based, top-down procedure. This approach was also intended to test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor-analytic approach (confirmatory followed by exploratory factor analysis) was pursued, and a weighting scheme based on the factor loadings obtained in the analyses was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance: "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics, both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another indication of its validity and therefore its usefulness.
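
    Turning factor loadings into a weighting scheme is straightforward to sketch. The item names, loadings, and normalization below are illustrative assumptions, not the study's published values.

      import numpy as np

      items = ["data_flow", "mobility", "decision_support", "epr", "integration"]
      loadings = np.array([0.81, 0.62, 0.74, 0.69, 0.77])  # hypothetical factor loadings
      weights = loadings / loadings.sum()                  # normalize to sum to 1

      hospital = np.array([0.9, 0.4, 0.7, 0.8, 0.6])       # per-item IT support, 0..1
      print("workflow composite score:", round(float(weights @ hospital), 3))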

  6. A smart checkpointing scheme for improving the reliability of clustering routing protocols.

    PubMed

    Min, Hong; Jung, Jinman; Kim, Bongjae; Cho, Yookun; Heo, Junyoung; Yi, Sangho; Hong, Jiman

    2010-01-01

    In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based on data aggregation schemes aimed at minimizing packet numbers have been proposed to meet these requirements. In clustering routing protocols, the cluster head plays an important role. The cluster head collects data from its member nodes and aggregates the collected data. To improve reliability and reduce recovery latency, we propose a checkpointing scheme for the cluster head. In the proposed scheme, backup nodes monitor and checkpoint the current state of the cluster head periodically. We also derive the checkpointing interval that maximizes reliability while using the same amount of energy consumed by clustering routing protocols that operate without checkpointing. Experimental comparisons with existing non-checkpointing schemes show that our scheme reduces both energy consumption and recovery latency.

  7. A Smart Checkpointing Scheme for Improving the Reliability of Clustering Routing Protocols

    PubMed Central

    Min, Hong; Jung, Jinman; Kim, Bongjae; Cho, Yookun; Heo, Junyoung; Yi, Sangho; Hong, Jiman

    2010-01-01

    In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based on data aggregation schemes aimed at minimizing packet numbers have been proposed to meet these requirements. In clustering routing protocols, the cluster head plays an important role. The cluster head collects data from its member nodes and aggregates the collected data. To improve reliability and reduce recovery latency, we propose a checkpointing scheme for the cluster head. In the proposed scheme, backup nodes monitor and checkpoint the current state of the cluster head periodically. We also derive the checkpointing interval that maximizes reliability while using the same amount of energy consumed by clustering routing protocols that operate without checkpointing. Experimental comparisons with existing non-checkpointing schemes show that our scheme reduces both energy consumption and recovery latency. PMID:22163389

  8. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application

    PubMed Central

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice, but most existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce communication cost by capturing the data characteristics; different ranges use different aggregation strategies. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: a Random-based SEDAR (REDAR) and a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost at the cost of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenarios of real data, show that all of them perform excellently on cost and accuracy. PMID:27551747

  9. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application.

    PubMed

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice, but most existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce communication cost by capturing the data characteristics; different ranges use different aggregation strategies. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: a Random-based SEDAR (REDAR) and a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost at the cost of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenarios of real data, show that all of them perform excellently on cost and accuracy.

  10. Demonstration of flexible multicasting and aggregation functionality for TWDM-PON

    NASA Astrophysics Data System (ADS)

    Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Zhu, Jinglong; Tian, Yu; Wu, Zhongying; Peng, Huangfa; Xu, Yongchi; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan

    2017-06-01

    The time- and wavelength-division multiplexed passive optical network (TWDM-PON) has been recognized as an attractive solution for providing broadband access in next-generation networks. In this paper, we propose flexible service multicasting and aggregation functionality for TWDM-PON utilizing multiple-pump four-wave mixing (FWM) and a cyclic arrayed waveguide grating (AWG). With the proposed scheme, multiple TWDM-PON links share a single optical line terminal (OLT), which can greatly reduce the network deployment expense and achieve efficient network resource utilization through load balancing among different optical distribution networks (ODNs). The proposed scheme is compatible with the existing TDM-PON infrastructure with a fixed-wavelength OLT transmitter, so a smooth service upgrade can be achieved. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment with a 10-Gb/s OOK signal and a 10-Gb/s QPSK orthogonal frequency division multiplexing (OFDM) signal multicast and aggregated to seven PON links. Compared with the back-to-back (BTB) channel, the newly generated multicast OOK and OFDM signals have power penalties of 1.6 dB and 2 dB, respectively, at a BER of 10^-3. For the aggregation of multiple channels, no obvious power penalty is observed. Moreover, to verify the flexibility of the proposed scheme, we reconfigure the wavelength selective switch (WSS) and adjust the number of pumps to realize flexible multicasting functionality: one-to-three, one-to-seven, one-to-thirteen and one-to-twenty-one multicasting are achieved without modifying the OLT structure.

  11. Secure Data Aggregation in Wireless Sensor Network-Fujisaki Okamoto(FO) Authentication Scheme against Sybil Attack.

    PubMed

    Nirmal Raja, K; Maraline Beno, M

    2017-07-01

    In wireless sensor networks (WSNs), security is a major issue, and several network security schemes have been proposed in the research literature. Malicious nodes obstruct the performance of the network, and the network is vulnerable to the Sybil attack: when a node illicitly asserts multiple identities or claims fake IDs, the WSN suffers from this attack. The Sybil attack threatens wireless sensor networks in data aggregation, synchronization, routing, fair resource allocation and misbehavior detection. Hence, this research is carried out to prevent the Sybil attack and increase the performance of the network. This paper presents a novel security mechanism based on the Fujisaki-Okamoto algorithm, together with an application of the work. The Fujisaki-Okamoto (FO) algorithm is an ID-based cryptographic scheme and provides strong authentication against the Sybil attack. The scheme is simulated using Network Simulator 2 (NS2). For the proposed scheme, the broadcasting key, the time taken for different key sizes, energy consumption, packet delivery ratio and throughput were analyzed.

  12. Aerial cooperative transporting and assembling control using multiple quadrotor-manipulator systems

    NASA Astrophysics Data System (ADS)

    Qi, Yuhua; Wang, Jianan; Shan, Jiayuan

    2018-02-01

    In this paper, a fully distributed control scheme for aerial cooperative transporting and assembling is proposed using multiple quadrotor-manipulator systems, with each quadrotor equipped with a robotic manipulator. First, the kinematic and dynamic models of a quadrotor with a multi-Degree-of-Freedom (DOF) robotic manipulator are established together using Euler-Lagrange equations. Based on the aggregated dynamic model, a control scheme consisting of a position controller, an attitude controller and a manipulator controller is presented. For cooperative transporting and assembling, multiple quadrotor-manipulator systems should be able to form a desired formation from any initial position without collision among quadrotors. The desired formation is achieved by the distributed position and attitude controllers, while collision avoidance is guaranteed by an artificial potential function method. The transporting and assembling tasks then require the manipulators to reach the desired angles cooperatively, which is achieved by the distributed manipulator controller. The overall stability of the closed-loop system is proven by a Lyapunov method and Matrosov's theorem. In the end, the proposed control scheme is simplified for real application and validated by two formation flying missions of four quadrotors with 2-DOF manipulators.
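
    The artificial-potential collision avoidance mentioned above can be sketched as a repulsive gradient that activates inside a safety radius; the potential shape, gains, and names here are assumptions for illustration, not the paper's design.

      import numpy as np

      def repulsive_accel(p_i, p_j, d_safe=1.5, k_rep=2.0):
          """Acceleration pushing quadrotor i away from quadrotor j."""
          diff = np.asarray(p_i, float) - np.asarray(p_j, float)
          d = float(np.linalg.norm(diff))
          if d >= d_safe or d == 0.0:
              return np.zeros(3)          # outside the safety radius: no repulsion
          # Negative gradient of U(d) = 0.5*k_rep*(1/d - 1/d_safe)^2
          return k_rep * (1.0 / d - 1.0 / d_safe) * (1.0 / d**2) * (diff / d)

      print(repulsive_accel([0.0, 0.0, 1.0], [1.0, 0.0, 1.0]))  # pushes along -x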

  13. EPPRD: An Efficient Privacy-Preserving Power Requirement and Distribution Aggregation Scheme for a Smart Grid.

    PubMed

    Zhang, Lei; Zhang, Jing

    2017-08-07

    A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users' private information. Improving the efficiency of individual power requirement and distribution to ensure communication reliability while preserving user privacy is therefore a new challenge for SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying individual requirements in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes.

  14. EPPRD: An Efficient Privacy-Preserving Power Requirement and Distribution Aggregation Scheme for a Smart Grid

    PubMed Central

    Zhang, Lei; Zhang, Jing

    2017-01-01

    A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users’ private information. Improving the efficiency of individual power requirement and distribution to ensure communication reliability while preserving user privacy is therefore a new challenge for SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying individual requirements in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes. PMID:28783122

  15. Evaluating the Performance of Single and Double Moment Microphysics Schemes During a Synoptic-Scale Snowfall Event

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.

    2011-01-01

    Increases in computing resources have allowed for the utilization of high-resolution weather forecast models capable of resolving cloud microphysical and precipitation processes among varying numbers of hydrometeor categories. Several microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, ranging from single-moment predictions of precipitation content to double-moment predictions that also include particle number concentrations. Each scheme incorporates several assumptions related to the size distribution, shape, and fall speed relationships of ice crystals in order to simulate cold-cloud processes and the resulting precipitation. Field campaign data offer a means of evaluating the assumptions present within each scheme. The Canadian CloudSat/CALIPSO Validation Project (C3VP) represented collaboration among the CloudSat, CALIPSO, and NASA Global Precipitation Measurement mission communities to observe cold-season precipitation processes relevant to forecast model evaluation and the eventual development of satellite retrievals of cloud properties and precipitation rates. During the C3VP campaign, widespread snowfall occurred on 22 January 2007, sampled by aircraft and surface instrumentation that provided particle size distributions, ice water content, and fall speed estimates along with traditional surface measurements of temperature and precipitation. In this study, four single-moment and two double-moment microphysics schemes were used to generate WRF forecasts of the event, with C3VP data used to evaluate their varying assumptions. Schemes that incorporate flexibility in size distribution parameters and density assumptions are shown to be preferable to those using fixed constants, and a double-moment representation of the snow category may be beneficial when representing the effects of aggregation. These results may guide forecast centers toward optimal configurations of their forecast models for winter weather and identify best practices present within these various schemes.

  16. Stability of fluctuating and transient aggregates of amphiphilic solutes in aqueous binary mixtures: Studies of dimethylsulfoxide, ethanol, and tert-butyl alcohol

    NASA Astrophysics Data System (ADS)

    Banerjee, Saikat; Bagchi, Biman

    2013-10-01

    In aqueous binary mixtures, amphiphilic solutes such as dimethylsulfoxide (DMSO), ethanol, and tert-butyl alcohol (TBA) are known to form aggregates (or large clusters) at small to intermediate solute concentrations. These aggregates are transient in nature. Although the system remains homogeneous on macroscopic length and time scales, the microheterogeneous aggregation may profoundly affect the properties of the mixture in several distinct ways, particularly if the survival times of the aggregates are longer than the density relaxation times of the binary liquid. Here we propose a theoretical scheme to quantify the lifetime, and thus the stability, of these microheterogeneous clusters, and apply it to water-ethanol, water-DMSO, and water-TBA mixtures. We show that the lifetime of these clusters can range from less than a picosecond (ps) for ethanol clusters to a few tens of ps for DMSO and TBA clusters. This helps explain the absence of a strong composition-dependent anomaly in water-ethanol mixtures and the presence of such an anomaly in water-DMSO and water-TBA mixtures.

  17. Equity in health care financing in Palestine: the value-added of the disaggregate approach.

    PubMed

    Abu-Zaineh, Mohammad; Mataria, Awad; Luchini, Stéphane; Moatti, Jean-Paul

    2008-06-01

    This paper analyzes the redistributive effect and progressivity associated with the current health care financing schemes in the Occupied Palestinian Territory, using data from the first Palestinian Household Health Expenditure Survey conducted in 2004. The paper goes beyond the commonly used "aggregate summary index approach" to apply a more detailed "disaggregate approach". Such an approach is borrowed from the general economic literature on taxation, and examines redistributive and vertical effects over specific parts of the income distribution using the dominance criterion. In addition, the paper employs a bootstrap method to test for the statistical significance of the inequality measures. While both the aggregate and disaggregate approaches confirm the pro-rich and regressive character of out-of-pocket payments, the aggregate approach does not ascertain the potential progressive feature of any of the available insurance schemes. The disaggregate approach, however, reveals a significantly progressive aspect, for over half of the population, of the government health insurance scheme, and demonstrates that the regressivity of the out-of-pocket payments is most pronounced among the worst-off classes of the population. Recommendations are advanced to improve the performance of the government insurance scheme and enhance its capacity to limit inequalities in health care financing in the Occupied Palestinian Territory.

  18. Diabatization for Time-Dependent Density Functional Theory: Exciton Transfers and Related Conical Intersections.

    PubMed

    Tamura, Hiroyuki

    2016-11-23

    Intermolecular exciton transfers and related conical intersections are analyzed by diabatization for time-dependent density functional theory. The diabatic states are expressed as a linear combination of the adiabatic states so as to emulate the well-defined reference states. The singlet exciton coupling calculated by the diabatization scheme includes contributions from the Coulomb (Förster) and electron exchange (Dexter) couplings. For triplet exciton transfers, the Dexter coupling, charge transfer integral, and diabatic potentials of stacked molecules are calculated for analyzing direct and superexchange pathways. We discuss some topologies of molecular aggregates that induce conical intersections at the vanishing points of the exciton coupling, namely the boundary of H- and J-aggregates and T-shaped aggregates, as well as canceled exciton coupling to the bright state of an H-aggregate, i.e., selective exciton transfer to the dark state. The diabatization scheme automatically accounts for the Berry phase by fixing the signs of the reference states while scanning the coordinates.

  19. A Secure Trust Establishment Scheme for Wireless Sensor Networks

    PubMed Central

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2014-01-01

    Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
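
    The "modified one-step M-estimator" is only named, not specified, in this record. As a rough illustration of how a one-step M-estimator resists outlying (e.g., bad-mouthing) recommendations, here is a minimal Python sketch using the textbook Huber influence function; the tuning constant, MAD scale, and median starting point are standard choices, not the authors' modification.

        import numpy as np

        def one_step_m_estimate(recs, c=1.345):
            """One Newton-type step from the median with the Huber influence
            function, so outlying recommendations are down-weighted."""
            x = np.asarray(recs, dtype=float)
            mu0 = np.median(x)                        # robust starting point
            s = np.median(np.abs(x - mu0)) / 0.6745   # MAD scale estimate
            if s == 0:
                return mu0
            r = (x - mu0) / s
            psi = np.clip(r, -c, c)                   # Huber psi
            dpsi = (np.abs(r) <= c).astype(float)     # Huber psi'
            return mu0 if dpsi.sum() == 0 else mu0 + s * psi.sum() / dpsi.sum()

        # honest recommendations near 0.8 plus one bad-mouthing outlier
        print(one_step_m_estimate([0.81, 0.79, 0.82, 0.80, 0.10]))  # ~0.80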

  20. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Mohan, Rakesh (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor); Kim, Michelle Yoonk Yung (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content, together with a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics, or metadata, and facilitates content-based search, indexing, and retrieval, among other capabilities, for both streamed and aggregated multimedia objects.

  1. AN AGGREGATION AND EPISODE SELECTION SCHEME FOR EPA'S MODELS-3 CMAQ

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation for use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700 hPa u and v wind field compo...

  2. Assessing a Reclaimed Concrete Up-Cycling Scheme through Life-Cycle Analysis

    NASA Astrophysics Data System (ADS)

    Guignot, Sylvain; Bru, Kathy; Touzé, Solène; Ménard, Yannick

    The present study evaluates the environmental impacts of a recycling scheme for gravel from waste building concrete, in which the liberated aggregates are reused in structural concrete while the residual mortar fines are sent to the raw mill of a clinker kiln.

  3. A cell-based assay for aggregation inhibitors as therapeutics of polyglutamine-repeat disease and validation in Drosophila

    NASA Astrophysics Data System (ADS)

    Apostol, Barbara L.; Kazantsev, Alexsey; Raffioni, Simona; Illes, Katalin; Pallos, Judit; Bodai, Laszlo; Slepko, Natalia; Bear, James E.; Gertler, Frank B.; Hersch, Steven; Housman, David E.; Marsh, J. Lawrence; Michels Thompson, Leslie

    2003-05-01

    The formation of polyglutamine-containing aggregates and inclusions is a hallmark of pathogenesis in Huntington's disease that can be recapitulated in model systems. Although the contribution of inclusions to pathogenesis is unclear, cell-based assays can be used to screen for chemical compounds that affect aggregation and may provide therapeutic benefit. We have developed inducible PC12 cell-culture models to screen for loss of visible aggregates. To test the validity of this approach, compounds that inhibit aggregation in the PC12 cell-based screen were tested in a Drosophila model of polyglutamine-repeat disease. The disruption of aggregation in PC12 cells strongly correlates with suppression of neuronal degeneration in Drosophila. Thus, the engineered PC12 cells coupled with the Drosophila model provide a rapid and effective method to screen and validate compounds.

  4. Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks.

    PubMed

    Li, Xing; Chen, Dexin; Li, Chunyan; Wang, Liangmin

    2015-07-03

    With the rapid development of wireless communication technology, sensor technology, and information acquisition and processing technology, sensor networks will ultimately have a deep influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation represents an important method to remove redundancy as well as unnecessary data transmission and hence cut down the energy used in communication. As sensor nodes are deployed in hostile environments, the security of sensitive information, such as its confidentiality and integrity, should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs, which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), the scheme can also verify data integrity during data aggregation and forwarding processes, so that false data can be detected as early as possible. Although FHE increases the computation overhead due to its large public key size, simulation results show that the scheme is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme.
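
    FESA itself relies on fully homomorphic encryption, whose details are beyond a short sketch. The additive half of the idea, aggregating ciphertexts so that no intermediate node sees a plaintext, can be illustrated with a much simpler additively homomorphic stream cipher in the style of Castelluccia et al.; the modulus and key handling below are toy choices for illustration only.

        import secrets

        M = 2**32  # modulus; must exceed the largest possible aggregate sum

        def encrypt(m, k):
            return (m + k) % M           # c = m + k (mod M), k a one-time pad

        def aggregate(ciphertexts):
            return sum(ciphertexts) % M  # aggregator adds ciphertexts blindly

        def decrypt_sum(c_agg, keys):
            return (c_agg - sum(keys)) % M  # sink removes all pads at once

        readings = [17, 23, 5]                           # node measurements
        keys = [secrets.randbelow(M) for _ in readings]  # pads shared with sink
        cts = [encrypt(m, k) for m, k in zip(readings, keys)]
        assert decrypt_sum(aggregate(cts), keys) == sum(readings)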

  5. Using partial site aggregation to reduce bias in random utility travel cost models

    NASA Astrophysics Data System (ADS)

    Lupi, Frank; Feather, Peter M.

    1998-12-01

    We propose a "partial aggregation" strategy for defining the recreation sites that enter choice sets in random utility models. Under the proposal, the most popular sites and sites that will be the subject of policy analysis enter choice sets as individual sites, while the remaining sites are aggregated into groups of similar sites. The scheme balances the desire to include all potential substitute sites in the choice sets with practical data and modeling constraints. Our analysis and empirical applications suggest that, unlike fully aggregate models, the partial aggregation approach reasonably approximates the results of a disaggregate model. The partial aggregation approach offers all of the data and computational advantages of models with aggregate sites but does not suffer from the same degree of bias as fully aggregate models.

  6. A Linguistic Model in Component Oriented Programming

    NASA Astrophysics Data System (ADS)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2016-12-01

    Component-oriented programming, when well organized, can bring a large increase in efficiency in the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. The paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model, an aggregation application is a word in a language.

  7. Using new aggregation operators in rule-based intelligent control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.

    1990-01-01

    A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
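
    The OWA operator itself is compactly defined: the weights are applied to the argument values after sorting, so different weight vectors sweep the operator from And (min) to Or (max). A minimal sketch of that definition (the weight vectors are illustrative, not the controller's):

        import numpy as np

        def owa(values, weights):
            """Ordered weighted averaging: weights attach to rank positions
            of the sorted values, not to the values themselves."""
            v = np.sort(np.asarray(values, dtype=float))[::-1]  # descending
            w = np.asarray(weights, dtype=float)
            assert len(w) == len(v) and np.isclose(w.sum(), 1.0)
            return float(np.dot(w, v))

        a = [0.3, 0.9, 0.6]
        print(owa(a, [0.0, 0.0, 1.0]))  # And-like -> min = 0.3
        print(owa(a, [1.0, 0.0, 0.0]))  # Or-like  -> max = 0.9
        print(owa(a, [1/3, 1/3, 1/3]))  # plain average = 0.6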

  8. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and the visual data make WVSNs more complex than the conventional Wireless Sensor Network (WSN). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, while local-balance problems are generally neglected. To fill this gap, the Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, the directional virtual backbone construction scheme is designed by considering the local-balanced factor, and the associated network coding mechanism is utilized to construct DVBDAS. Finally, both theoretical analysis of the proposed DVBDAS and simulations are given for evaluating the performance. The experimental results show that the proposed DVBDAS achieves higher performance than existing methods in terms of both energy preservation and network lifetime extension.

  9. Validation of source approval of HMA surface mix aggregate : final report.

    DOT National Transportation Integrated Search

    2016-04-01

    The main focus of this research project was to develop methodologies for the validation of source approval of hot : mix asphalt surface mix aggregate. In order to further enhance the validation process, a secondary focus was also to : create a spectr...

  10. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  11. Comparing MCDA Aggregation Methods in Constructing Composite Indicators Using the Shannon-Spearman Measure

    ERIC Educational Resources Information Center

    Zhou, P.; Ang, B. W.

    2009-01-01

    Composite indicators have been increasingly recognized as a useful tool for performance monitoring, benchmarking comparisons and public communication in a wide range of fields. The usefulness of a composite indicator depends heavily on the underlying data aggregation scheme where multiple criteria decision analysis (MCDA) is commonly used. A…

  12. DEVELOPMENT OF AN AGGREGATION AND EPISODE SELECTION SCHEME TO SUPPORT THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY MODEL

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation for use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700-hPa east-west and north-south...

  13. Validation of source approval of HMA surface mix aggregate using spectrometer : final report.

    DOT National Transportation Integrated Search

    2016-04-01

    The main focus of this research project was to develop methodologies for the validation of source approval of hot : mix asphalt surface mix aggregate. In order to further enhance the validation process, a secondary focus was also to : create a spectr...

  14. Adaptive Data Aggregation and Compression to Improve Energy Utilization in Solar-Powered Wireless Sensor Networks

    PubMed Central

    Yoon, Ikjune; Kim, Hyeok; Noh, Dong Kun

    2017-01-01

    A node in a solar-powered wireless sensor network (WSN) collects energy when the sun shines and stores it in a battery or capacitor for use when no solar power is available, in particular at night. In our scheme, each tiny node in a WSN periodically determines its energy budget, which takes into account its residual energy and its likely acquisition and consumption. If it expects to acquire more energy than it can store, the data it has sensed is aggregated with data from other nodes, compressed, and transmitted. Otherwise, the node continues to sense data but turns off its wireless communication to reduce energy consumption. We compared several schemes by simulation. Our scheme reduced the number of nodes forced to black out due to lack of energy, so that more data arrives at the sink node. PMID:28555010
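
    The decision rule described above fits in a few lines. A hedged sketch follows; the budget terms and the comparison against storage capacity are our reading of the abstract, not the paper's exact formulation.

        def choose_action(residual, expected_harvest, battery_capacity,
                          expected_consumption):
            """If the node expects to harvest more than it can store, spend
            the surplus on aggregation, compression, and transmission;
            otherwise keep sensing with the radio off to save energy."""
            budget = residual + expected_harvest - expected_consumption
            if budget > battery_capacity:      # surplus would be wasted
                return "aggregate_compress_transmit"
            return "sense_only_radio_off"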

  15. Adaptive Data Aggregation and Compression to Improve Energy Utilization in Solar-Powered Wireless Sensor Networks.

    PubMed

    Yoon, Ikjune; Kim, Hyeok; Noh, Dong Kun

    2017-05-27

    A node in a solar-powered wireless sensor network (WSN) collects energy when the sun shines and stores it in a battery or capacitor for use when no solar power is available, in particular at night. In our scheme, each tiny node in a WSN periodically determines its energy budget, which takes into account its residual energy and its likely acquisition and consumption. If it expects to acquire more energy than it can store, the data it has sensed is aggregated with data from other nodes, compressed, and transmitted. Otherwise, the node continues to sense data but turns off its wireless communication to reduce energy consumption. We compared several schemes by simulation. Our scheme reduced the number of nodes forced to black out due to lack of energy, so that more data arrives at the sink node.

  16. Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Xing; Chen, Dexin; Li, Chunyan; Wang, Liangmin

    2015-01-01

    With the rapid development of wireless communication technology, sensor technology, and information acquisition and processing technology, sensor networks will ultimately have a deep influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation represents an important method to remove redundancy as well as unnecessary data transmission and hence cut down the energy used in communication. As sensor nodes are deployed in hostile environments, the security of sensitive information, such as its confidentiality and integrity, should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs, which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), the scheme can also verify data integrity during data aggregation and forwarding processes, so that false data can be detected as early as possible. Although FHE increases the computation overhead due to its large public key size, simulation results show that the scheme is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme. PMID:26151208

  17. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, with the k-means clustering algorithm adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code-modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
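
    The quantization step described above can be sketched with a generic k-means library; the vector dimension, bit depth, and the noisy sine stand-in for the OFDM waveform below are illustrative assumptions, not the experimental parameters.

        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_quantize(samples, dim=4, n_bits=4, seed=0):
            """Group correlated neighboring samples into dim-dimensional
            vectors, learn a 2**n_bits-entry codebook, and transmit only
            codeword indices; the receiver looks up the centroids."""
            n = len(samples) - len(samples) % dim
            vecs = np.asarray(samples[:n], dtype=float).reshape(-1, dim)
            km = KMeans(n_clusters=2**n_bits, n_init=10,
                        random_state=seed).fit(vecs)
            idx = km.predict(vecs)                    # what the link carries
            recon = km.cluster_centers_[idx].ravel()  # receiver-side waveform
            return idx, recon

        t = np.linspace(0.0, 1.0, 4096)
        x = np.sin(2 * np.pi * 40 * t) + 0.05 * np.random.randn(t.size)
        idx, xr = kmeans_quantize(x)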

  18. An improved snow scheme for the ECMWF land surface model: Description and offline validation

    Treesearch

    Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder

    2010-01-01

    A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...

  19. A Delay-Aware and Reliable Data Aggregation for Cyber-Physical Sensing

    PubMed Central

    Zhang, Jinhuan; Long, Jun; Zhang, Chengyuan; Zhao, Guihu

    2017-01-01

    Physical information sensed by various sensors in a cyber-physical system should be collected for further operation. In many applications, data aggregation should take reliability and delay into consideration. To address these problems, a novel Tiered Structure Routing-based Delay-Aware and Reliable Data Aggregation scheme named TSR-DARDA for spherical physical objects is proposed. By dividing the spherical network constructed by dispersed sensor nodes into circular tiers with specifically designed widths and cells, TSR-DARDA tries to enable as many nodes as possible to transmit data simultaneously. In order to ensure transmission reliability, lost packets are retransmitted. Moreover, to minimize the latency while maintaining reliability for data collection, in-network aggregation and broadcast techniques are adopted to deal with the transmission between data collecting nodes in the outer layer and their parent data collecting nodes in the inner layer. Thus, the optimization problem is transformed to minimize the delay under reliability constraints by controlling the system parameters. To demonstrate the effectiveness of the proposed scheme, we have conducted extensive theoretical analysis and comparisons to evaluate the performance of TSR-DARDA. The analysis and simulations show that TSR-DARDA leads to lower delay with reliability satisfaction. PMID:28218668

  20. A Secure-Enhanced Data Aggregation Based on ECC in Wireless Sensor Networks

    PubMed Central

    Zhou, Qiang; Yang, Geng; He, Liwen

    2014-01-01

    Data aggregation is an important technique for reducing the energy consumption of sensor nodes in wireless sensor networks (WSNs). However, compromised aggregators may forge false values as the aggregated results of their child nodes in order to conduct stealthy attacks or steal other nodes' privacy. This paper proposes a Secure-Enhanced Data Aggregation based on Elliptic Curve Cryptography (SEDA-ECC). The design of SEDA-ECC is based on the principles of privacy homomorphic encryption (PH) and divide-and-conquer. An aggregation tree disjoint method is first adopted to divide the tree into three subtrees of similar sizes, and a PH-based aggregation is performed in each subtree to generate an aggregated subtree result. Then the forged result can be identified by the base station (BS) by comparing the aggregated count value. Finally, the aggregated result can be calculated by the BS according to the remaining results that have not been forged. Extensive analysis and simulations show that SEDA-ECC can achieve the highest security level on the aggregated result with appropriate energy consumption compared with other asymmetric schemes. PMID:24732099

  1. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

    Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts of rough set theory, we first calculate the importance of the attributes in alerts. From the attribute importance we then compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated. Time intervals are also taken into consideration: the allowed time interval is computed individually for each alert type, since different types of alerts may have different time gaps between occurrences. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of the security system can avoid wasting time on useless alerts.
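
    The aggregation test reduces to a weighted attribute match plus a type-specific time window. A minimal sketch follows; the attribute weights would come from the rough-set importance computation (omitted here), and the weights, threshold, and gap below are placeholders.

        def alert_similarity(a, b, weights):
            """Weighted fraction of matching attributes between two alerts;
            weights encode rough-set attribute importance."""
            matched = sum(w for attr, w in weights.items()
                          if a.get(attr) == b.get(attr))
            return matched / sum(weights.values())

        def can_aggregate(a, b, weights, threshold=0.8, max_gap=60.0):
            similar = alert_similarity(a, b, weights) >= threshold
            close = abs(a["time"] - b["time"]) <= max_gap  # per-type window
            return similar and close

        w = {"src_ip": 0.4, "dst_ip": 0.3, "signature": 0.3}  # illustrative
        a1 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
              "signature": "portscan", "time": 100.0}
        a2 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
              "signature": "portscan", "time": 130.0}
        print(can_aggregate(a1, a2, w))  # True: redundant alerts merge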

  2. Evaluation of Model Microphysics within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Yu, Ruyi; Molthan, Andrew L.; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Measurement (GPM) mission Cold-season Precipitation Experiment (GCPEx), as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, and snow density, together with radar measurements, at Stony Brook, NY (SBNY, on the north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than the spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is approx 0.25 m/s too slow in its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were approx 0.25 m/s too slow, while the SBU-YLIN was 0.25 to 0.5 m/s too fast. Overall, the BMPs simulate a size distribution close to the observed for D < 4 mm in the dendritic, plate, and mixed habit periods. The model BMPs underestimate the size distribution when large aggregates were observed. For D > 6 mm in the dendrite, side plane, and mixed habit periods, the BMPs are likely not simulating enough aggregation to create a larger size distribution, although the MORR (double moment) scheme seemed to perform best. These SBNY results will be compared with results from GCPEx for a warm frontal snow band observed on 18 February 2012.

  3. Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Molthan, Andrew; Yu, Ruyi; Stark, David; Yuter, Sandra; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Measurement (GPM) mission Cold-season Precipitation Experiment (GCPEx), as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, and snow density, together with radar measurements, at Stony Brook, NY (SBNY, on the north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than the spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is 0.25 meters per second too slow in its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were 0.25 meters per second too slow, while the SBU-YLIN was 0.25 to 0.5 meters per second too fast. Overall, the BMPs simulate a size distribution close to the observed for D < 4 mm in the dendritic, plate, and mixed habit periods. The model BMPs underestimate the size distribution when large aggregates were observed. For D > 6 mm in the dendrite, side plane, and mixed habit periods, the BMPs are likely not simulating enough aggregation to create a larger size distribution, although the MORR (double moment) scheme seemed to perform best. These SBNY results will be compared with results from GCPEx for a warm frontal snow band observed on 18 February 2012.

  4. A Comprehensive Comparison of Multiparty Secure Additions with Differential Privacy

    PubMed Central

    Goryczka, Slawomir; Xiong, Li

    2016-01-01

    This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well known security schemes: Shamir’s secret sharing, perturbation-based, and various encryptions. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant. Differential privacy of the final result is achieved by either the distributed Laplace or Geometric mechanism (respectively DLPA or DGPA), while approximated differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, which draw random variables from one of several distributions: Gamma, Gauss, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise while remaining efficient. We compare complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison on their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with privacy. PMID:28919841

  5. A Comprehensive Comparison of Multiparty Secure Additions with Differential Privacy.

    PubMed

    Goryczka, Slawomir; Xiong, Li

    2017-01-01

    This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well known security schemes: Shamir's secret sharing, perturbation-based, and various encryptions. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant. Differential privacy of the final result is achieved by either the distributed Laplace or Geometric mechanism (respectively DLPA or DGPA), while approximated differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, which draw random variables from one of several distributions: Gamma, Gauss, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise while remaining efficient. We compare complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison on their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with privacy.

  6. A risk-based classification scheme for genetically modified foods. I: Conceptual development.

    PubMed

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    The predominant paradigm for the premarket assessment of genetically modified (GM) foods reflects heightened public concern by focusing on foods modified by recombinant deoxyribonucleic acid (rDNA) techniques, while foods modified by other methods of genetic modification are generally not assessed for safety. To determine whether a GM product requires less or more regulatory oversight and testing, we developed and evaluated a risk-based classification scheme (RBCS) for crop-derived GM foods. The results of this research are presented in three papers. This paper describes the conceptual development of the proposed RBCS that focuses on two categories of adverse health effects: (1) toxic and antinutritional effects, and (2) allergenic effects. The factors that may affect the level of potential health risks of GM foods are identified. For each factor identified, criteria for differentiating health risk potential are developed. The extent to which a GM food satisfies applicable criteria for each factor is rated separately. A concern level for each category of health effects is then determined by aggregating the ratings for the factors using predetermined aggregation rules. An overview of the proposed scheme is presented, as well as the application of the scheme to a hypothetical GM food.
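
    The paper's predetermined aggregation rules are not reproduced in this record; as a hedged sketch of the general mechanism, per-factor ratings can be aggregated conservatively (any high-risk factor dominates) or leniently (averaging). The factor names and ratings below are purely illustrative.

        def concern_level(ratings, rule="max"):
            """Aggregate per-factor risk ratings (1=low .. 3=high) into a
            single concern level for one category of health effects."""
            if rule == "max":
                return max(ratings)  # conservative: worst factor dominates
            return round(sum(ratings) / len(ratings))  # lenient: average

        toxicity = {"novelty_of_trait": 3, "dietary_exposure": 1,
                    "history_of_safe_use": 2}           # illustrative factors
        print(concern_level(list(toxicity.values())))   # -> 3 (high concern)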

  7. Kinetics of insulin aggregation in aqueous solutions upon agitation in the presence of hydrophobic surfaces.

    PubMed Central

    Sluzky, V; Tamada, J A; Klibanov, A M; Langer, R

    1991-01-01

    The stability of protein-based pharmaceuticals (e.g., insulin) is important for their production, storage, and delivery. To gain an understanding of insulin's aggregation mechanism in aqueous solutions, the effects of agitation rate, interfacial interactions, and insulin concentration on the overall aggregation rate were examined. Ultraviolet absorption spectroscopy, high-performance liquid chromatography, and quasielastic light scattering analyses were used to monitor the aggregation reaction and identify intermediate species. The reaction proceeded in two stages; insulin stability was enhanced at higher concentration. Mathematical modeling of proposed kinetic schemes was employed to identify possible reaction pathways and to explain the greater stability at higher insulin concentration. PMID:1946348
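
    The record does not give the kinetic scheme the authors fit; as a generic illustration of the mathematical-modeling step, here is a minimal two-stage scheme (native monomer -> intermediate -> aggregate) integrated with SciPy. The species, rate laws, and rate constants are placeholders, not the paper's mechanism.

        import numpy as np
        from scipy.integrate import solve_ivp

        def two_stage(t, y, k1, k2):
            """N -> I (first order), 2I -> A (second order): a common shape
            for aggregation schemes with an intermediate species."""
            N, I, A = y
            return [-k1 * N, k1 * N - 2 * k2 * I**2, k2 * I**2]

        sol = solve_ivp(two_stage, (0.0, 100.0), [1.0, 0.0, 0.0],
                        args=(0.05, 0.5), dense_output=True)
        N_end, I_end, A_end = sol.y[:, -1]  # compare against UV/HPLC/QLS data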

  8. Top-d Rank Aggregation in Web Meta-search Engine

    NASA Astrophysics Data System (ADS)

    Fang, Qizhi; Xiao, Han; Zhu, Shanfeng

    In this paper, we consider the rank aggregation problem for information retrieval over the Web, making use of a metric, the coherence, which considers both the normalized Kendall-τ distance and the size of the overlap between two partial rankings. In general, the top-d coherence aggregation problem is defined as: given a collection of partial rankings Π = {τ_1, τ_2, ..., τ_K}, find a final ranking π of specified length d that maximizes the total coherence Φ(π, Π) = Σ_{i=1}^{K} Φ(π, τ_i). The corresponding complexity and algorithmic issues are discussed in this paper. Our main technical contribution is a polynomial time approximation scheme (PTAS) for a restricted top-d coherence aggregation problem.
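
    One concrete reading of the coherence (the paper's exact weighting may differ) scores a candidate ranking π against each partial ranking τ by the size of their overlap, discounted by the normalized Kendall-τ disagreement on that overlap:

        from itertools import combinations

        def coherence(pi, tau):
            """Overlap size minus a Kendall-tau disagreement penalty,
            normalized so full agreement on z common items scores z."""
            tau_set = set(tau)
            common = [x for x in pi if x in tau_set]
            z = len(common)
            if z < 2:
                return float(z)
            pos_pi = {x: i for i, x in enumerate(pi)}
            pos_tau = {x: i for i, x in enumerate(tau)}
            bad = sum(1 for a, b in combinations(common, 2)
                      if (pos_pi[a] - pos_pi[b]) * (pos_tau[a] - pos_tau[b]) < 0)
            return z * (1.0 - bad / (z * (z - 1) / 2))

        def total_coherence(pi, rankings):
            return sum(coherence(pi, tau) for tau in rankings)

        # [a,c] agrees fully (2.0); [b,a,d] reverses a,b (0.0) -> total 2.0
        print(total_coherence(["a", "b", "c"], [["a", "c"], ["b", "a", "d"]]))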

  9. A Note on the Incremental Validity of Aggregate Predictors.

    ERIC Educational Resources Information Center

    Day, H. D.; Marshall, David

    Three computer simulations were conducted to show that very high aggregate predictive validity coefficients can occur when the across-case variability in absolute score stability occurring in both the predictor and criterion matrices is quite small. In light of the increase in internal consistency reliability achieved by the method of aggregation…

  10. A surface hydrology model for regional vector borne disease models

    NASA Astrophysics Data System (ADS)

    Tompkins, Adrian; Asare, Ernest; Bomblies, Arne; Amekudzi, Leonard

    2016-04-01

    Small, sun-lit temporary pools that form during the rainy season are important breeding sites for many key mosquito vectors responsible for the transmission of malaria and other diseases. The representation of this surface hydrology in mathematical disease models is challenging, due to the pools' small scale, their dependence on the terrain, and the difficulty of setting soil parameters. Here we introduce a model that represents the temporal evolution of the aggregate statistics of breeding sites with a single pond fractional coverage parameter. The model is based on a simple geometrical assumption concerning the terrain, and accounts for the processes of surface runoff, pond overflow, infiltration, and evaporation. Soil moisture, soil properties, and large-scale terrain slope are accounted for using a calibration parameter that sets the equivalent catchment fraction. The model is calibrated and then evaluated using in situ pond measurements in Ghana and ultra-high-resolution (10 m) explicit simulations for a village in Niger. Despite the model's simplicity, it is shown to reproduce the variability and mean of the aggregate pond water coverage well for both locations and validation techniques. Example malaria simulations for Uganda will be shown using this new scheme with a generic calibration setting, evaluated using district malaria case data. Possible methods for implementing regional calibration will be briefly discussed.
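
    The governing equation is not given in this record; a hedged caricature of the water balance it describes might evolve the pond fraction w as dw/dt = w[(1+k)P - (E+I)]/h, with precipitation P, evaporation E, and infiltration I (all in m/day), an equivalent-catchment factor k, and a reference pond depth h. Every constant below is an illustrative stand-in, not a calibrated value.

        def step_pond_fraction(w, precip, evap, infil, dt=1.0,
                               k=3.0, h=0.1, w_max=0.02, w_min=1e-6):
            """One explicit Euler step (dt in days) of an aggregate
            pond-fraction balance: rain plus catchment runoff fill the
            ponds; evaporation and infiltration drain them."""
            fill = (1.0 + k) * precip / h     # direct rain + runoff, 1/day
            drain = (evap + infil) / h        # losses, 1/day
            w = w + dt * w * (fill - drain)
            return min(max(w, w_min), w_max)  # seed fraction and overflow cap

        w = 0.005
        for rain in [0.02, 0.0, 0.0, 0.01]:   # m/day over four days
            w = step_pond_fraction(w, rain, evap=0.005, infil=0.004)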

  11. Rotational Brownian Dynamics simulations of clathrin cage formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede

    2014-08-14

    The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods, to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithm to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.

  12. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE PAGES

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...

    2017-05-17

    This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by understanding during which time of day they will need flexibility, from which specific area, and in which amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performances with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error), compared with the very high errors associated with forecasts of individual consumer demand.

  13. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte

    This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by understanding during which time of day they will need flexibility, from which specific area, and in which amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performances with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error), compared with the very high errors associated with forecasts of individual consumer demand.

  14. A Secure Routing Protocol for Wireless Sensor Networks Considering Secure Data Aggregation.

    PubMed

    Rahayu, Triana Mugia; Lee, Sang-Gon; Lee, Hoon-Jae

    2015-06-26

    The commonly unattended and hostile deployments of WSNs and their resource-constrained sensor devices have led to an increasing demand for secure energy-efficient protocols. Routing and data aggregation receive the most attention since they are among the daily network routines. With the awareness of such demand, we found that so far there has been no work that lays out a secure routing protocol as the foundation for a secure data aggregation protocol. We argue that the secure routing role would be rendered useless if the data aggregation scheme built on it is not secure. Conversely, the secure data aggregation protocol needs a secure underlying routing protocol as its foundation in order to be effectively optimal. As an attempt at a solution, we devise an energy-aware protocol based on LEACH and ESPDA that combines a secure routing protocol and a secure data aggregation protocol. We then evaluate its security effectiveness and its energy-efficiency aspects, knowing that there is always a trade-off between the two.

  15. A Secure Routing Protocol for Wireless Sensor Networks Considering Secure Data Aggregation

    PubMed Central

    Rahayu, Triana Mugia; Lee, Sang-Gon; Lee, Hoon-Jae

    2015-01-01

    The commonly unattended and hostile deployments of WSNs and their resource-constrained sensor devices have led to an increasing demand for secure energy-efficient protocols. Routing and data aggregation receive the most attention since they are among the daily network routines. With the awareness of such demand, we found that so far there has been no work that lays out a secure routing protocol as the foundation for a secure data aggregation protocol. We argue that the secure routing role would be rendered useless if the data aggregation scheme built on it is not secure. Conversely, the secure data aggregation protocol needs a secure underlying routing protocol as its foundation in order to be effectively optimal. As an attempt at a solution, we devise an energy-aware protocol based on LEACH and ESPDA that combines a secure routing protocol and a secure data aggregation protocol. We then evaluate its security effectiveness and its energy-efficiency aspects, knowing that there is always a trade-off between the two. PMID:26131669

  16. Benthic impacts of intertidal oyster culture, with consideration of taxonomic sufficiency.

    PubMed

    Forrest, Barrie M; Creese, Robert G

    2006-01-01

    An investigation of the impacts from elevated intertidal Pacific oyster culture in a New Zealand estuary showed enhanced sedimentation beneath culture racks compared with other sites. Seabed elevation beneath racks was generally lower than between them, suggesting that topographic patterns more likely result from a local effect of rack structures on hydrodynamic processes than from enhanced deposition. Compared with control sites, seabed sediments within the farm had a greater silt/clay and organic content, and a lower redox potential and shear strength. While a marked trend in macrofaunal species richness was not evident, species composition and dominance patterns were consistent with a disturbance gradient, with farm effects not evident 35 m from the perimeter of the racks. Of the environmental variables measured, sediment shear strength was most closely associated with the distribution and density of macrofauna, suggesting that human-induced disturbance from farming operations may have contributed to the biological patterns. To evaluate the taxonomic sufficiency needed to document impacts, aggregation to the family level based on Linnean classification was compared with an aggregation scheme based on 'general groups' identifiable with limited taxonomic expertise. Compared with species-level analyses, spatial patterns of impact were equally discernible at both aggregation levels used, provided density rather than presence/absence data were used. Once baseline conditions are established and the efficacy of taxonomic aggregation demonstrated, a 'general group' scheme provides an appropriate and increasingly relevant tool for routine monitoring.

  17. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
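
    One of the simplest pitfalls to reproduce is resubstitution bias: with pure-noise features and random labels the true classification accuracy is 50%, yet testing on the training data reports far more. A short sketch with scikit-learn (dataset sizes arbitrary):

        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 200))       # pure-noise "features"
        y = rng.integers(0, 2, size=60)      # random labels: truth is 50%

        clf = KNeighborsClassifier(n_neighbors=3)
        resub = clf.fit(X, y).score(X, y)    # biased: tests on training data
        loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()  # honest
        print(f"resubstitution {resub:.2f} vs leave-one-out {loo:.2f}")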

  18. Simplification of a dust emission scheme and comparison with data

    NASA Astrophysics Data System (ADS)

    Shao, Yaping

    2004-05-01

    A simplification of a dust emission scheme is proposed, which takes into account saltation bombardment and aggregate disintegration. The scheme states that dust emission is proportional to the streamwise saltation flux, but the proportionality depends on soil texture and soil plastic pressure p. For small p values (loose soils), the dust emission rate is proportional to u*^4 (u* is friction velocity), but not necessarily so in general. The dust emission predictions using the scheme are compared with several data sets published in the literature. The comparison enables the estimation of a model parameter and soil plastic pressure for various soils. While more data are needed for further verification, a general guideline for choosing model parameters is recommended.
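
    Numerically, the scheme's headline statement is easy to sketch: dust flux F = alpha * Q, with Q an Owen-type streamwise saltation flux (proportional to u*^3 above threshold) and the efficiency alpha decreasing with plastic pressure p; letting alpha grow linearly in u* for loose soils recovers the quoted F ~ u*^4. All constants below are placeholders, not the scheme's fitted parameters.

        def saltation_flux(ustar, ustar_t=0.25, rho=1.23, g=9.81, c=1.0):
            """Owen-type streamwise saltation flux, kg m^-1 s^-1:
            Q = c (rho/g) u*^3 (1 - u*t^2 / u*^2) above threshold."""
            if ustar <= ustar_t:
                return 0.0
            return c * rho / g * ustar**3 * (1.0 - (ustar_t / ustar)**2)

        def dust_emission(ustar, p_soil, alpha0=1e-4, p_ref=1e4, **kw):
            """F = alpha * Q; alpha falls with soil plastic pressure p and,
            for loose soils, scales with u* so that F ~ u*^4."""
            alpha = alpha0 * (p_ref / p_soil) * ustar
            return alpha * saltation_flux(ustar, **kw)

        print(dust_emission(0.6, p_soil=2e4))  # loose-ish soil, u* = 0.6 m/s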

  19. Distributed Trust Management for Validating SLA Choreographies

    NASA Astrophysics Data System (ADS)

    Haq, Irfan Ul; Alnemr, Rehab; Paschke, Adrian; Schikuta, Erich; Boley, Harold; Meinel, Christoph

    For business workflow automation in a service-enriched environment such as a grid or a cloud, services scattered across heterogeneous Virtual Organizations (VOs) can be aggregated in a producer-consumer manner, building hierarchical structures of added value. In order to preserve the supply chain, the Service Level Agreements (SLAs) corresponding to the underlying choreography of services should also be incrementally aggregated. This cross-VO hierarchical SLA aggregation requires validation, for which a distributed trust system becomes a prerequisite. Elaborating on our previous work on rule-based SLA validation, we propose a hybrid distributed trust model. This new model is based on Public Key Infrastructure (PKI) and reputation-based trust systems. It helps prevent SLA violations by identifying violation-prone services at the service selection stage and actively contributes to breach management at the time of penalty enforcement.

  20. Model Validation and Site Characterization for Early Deployment MHK Sites and Establishment of Wave Classification Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcher, Levi F

    Model Validation and Site Characterization for Early Deployment Marine and Hydrokinetic Energy Sites and Establishment of Wave Classification Scheme: presentation from the Water Power Technologies Office Peer Review, FY14-FY16.

  1. Real Time Land-Surface Hydrologic Modeling Over Continental US

    NASA Technical Reports Server (NTRS)

    Houser, Paul R.

    1998-01-01

    The land surface component of the hydrological cycle is fundamental to the overall functioning of atmospheric and climate processes. Spatially and temporally variable rainfall and available energy, combined with land surface heterogeneity, cause complex variations in all processes related to surface hydrology. The characterization of the spatial and temporal variability of water and energy cycles is critical to improve our understanding of land surface-atmosphere interaction and the impact of land surface processes on climate extremes. Because accurate knowledge of these processes and their variability is important for climate predictions, most Numerical Weather Prediction (NWP) centers have incorporated land surface schemes in their models. However, errors in the NWP forcing accumulate in the surface water and energy stores, leading to incorrect surface water and energy partitioning and related processes. This has motivated NWP centers to impose ad hoc corrections to the land surface states to prevent this drift. A proposed methodology is to develop Land Data Assimilation Schemes (LDAS), which are uncoupled models forced with observations and therefore not affected by NWP forcing biases. The proposed research is being implemented as a real-time operation using an existing Surface Vegetation Atmosphere Transfer Scheme (SVATS) model at 40-km resolution across the United States to evaluate these critical science questions. The model will be forced with real-time output from numerical prediction models, satellite data, and radar precipitation measurements. Model parameters will be derived from the existing GIS vegetation and soil coverages. The model results will be aggregated to various scales to assess water and energy balances, and these will be validated with various in-situ observations.

  2. Modified hybrid subcarrier/amplitude/ phase/polarization LDPC-coded modulation for 400 Gb/s optical transmission and beyond.

    PubMed

    Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting

    2010-06-21

    In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond-400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme exploits the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that allows 11 bits/symbol and can achieve 440 Gb/s transmission using components operating at 50 GigaSymbols/s (GS/s).

  3. Enabling technologies for millimeter-wave radio-over-fiber systems in next generation heterogeneous mobile access networks

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Wang, Jing; Xu, Mu; Cheng, Lin; Lu, Feng; Shen, Shuyi; Yan, Yan; Cho, Hyunwoo; Guidotti, Daniel; Chang, Gee-kung

    2017-01-01

    The fifth-generation (5G) wireless access network promises to support higher access data rates with more than 1,000 times the capacity of current long-term evolution (LTE) systems. New radio access technologies (RATs) based on higher carrier frequencies up to millimeter-wave (MMW) radio-over-fiber (RoF), together with carrier aggregation (CA) of multi-band resources, are being intensively studied to support high-data-rate access and effective use of frequency resources in heterogeneous mobile networks (Het-Nets). In this paper, we investigate several enabling technologies for MMW RoF systems in 5G Het-Nets. Efficient mobile fronthaul (MFH) solutions for 5G centralized radio access networks (C-RAN) and beyond are proposed, analyzed and experimentally demonstrated based on the analog scheme. Digital predistortion based on memory polynomials for analog MFH linearization is presented, with improved EVM performance and receiver sensitivity. We also propose and experimentally demonstrate a novel inter-/intra-RAT CA scheme for 5G Het-Nets. A real-time standard 4G-LTE signal is carrier-aggregated with three broadband 60-GHz MMW signals based on the proposed optical-domain band-mapping method. RATs based on new waveforms are also studied here to achieve higher spectral efficiency (SE) in asynchronous environments. A full-duplex asynchronous quasi-gapless carrier aggregation scheme for MMW RoF inter-/intra-RAT based on FBMC is also presented with 4G-LTE signals. Compared with OFDM-based signals with large guard bands, FBMC achieves higher spectral efficiency with better EVM performance at lower received power and smaller guard bands.

  4. Antibody quantum dot conjugates developed via copper-free click chemistry for rapid analysis of biological samples using a microfluidic microsphere array system.

    PubMed

    Kotagiri, Nalinikanth; Li, Zhenyu; Xu, Xiaoxiao; Mondal, Suman; Nehorai, Arye; Achilefu, Samuel

    2014-07-16

    Antibody-based proteomics is an enabling technology that has significant implications for cancer biomarker discovery, diagnostic screening, prognostic and pharmacodynamic evaluation of disease state, and targeted therapeutics. Quantum-dot-based fluoro-immunoconjugates possess promising features toward realizing this goal, such as high photostability, brightness, and multispectral tunability. However, current strategies to generate such conjugates are riddled with complications such as improper orientation of the antigen-binding sites of the antibody, aggregation, and stability issues. We report a facile yet effective strategy for conjugating anti-epidermal growth factor receptor (EGFR) antibody to quantum dots using the copper-free click reaction, and compared the resulting conjugates to similar constructs prepared using traditional strategies such as the succinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (SMCC) and biotin-streptavidin schemes. The Fc and Fab regions of the conjugates retain their binding potential, compared to those generated through the traditional schemes. We further applied the conjugates in testing a novel microsphere array device designed to carry out sensitive detection of cancer biomarkers through fluoroimmunoassays. Using purified EGFR, we determined the limit of detection of the microscopy-centric system to be 12.5 ng/mL. The biological assay, in silico, was successfully tested and validated using tumor cell lysates, as well as human serum from breast cancer patients, and the results were compared to normal serum. A pattern consistent with established clinical data was observed, which further validates the effectiveness of the developed conjugates and their successful implementation in both in vitro and in silico fluoroimmunoassays. The results suggest the potential development of a high-throughput in silico paradigm for predicting the class of patient cancer based on EGFR expression levels relative to normal reference levels in blood.

  5. Validity of Particle-Counting Method Using Laser-Light Scattering for Detecting Platelet Aggregation in Diabetic Patients

    NASA Astrophysics Data System (ADS)

    Nakadate, Hiromichi; Sekizuka, Eiichi; Minamitani, Haruyuki

    We aimed to study the validity of a new analytical approach that reflects the phase from platelet activation to the formation of small platelet aggregates. We hoped that this new approach would enable us to use the particle-counting method with laser-light scattering to measure platelet aggregation in healthy controls and in diabetic patients without complications. We measured agonist-induced platelet aggregation for 10 min. Agonist was added to the platelet-rich plasma 1 min after measurement started. We compared the total scattered-light intensity from small aggregates over a 10-min period (the established analytical approach) with that over a 2-min period from 1 to 3 min after measurement started (the new analytical approach). Consequently, platelet aggregation in diabetics with HbA1c ≥ 6.5% was significantly greater than in healthy controls by both analytical approaches. However, platelet aggregation in diabetics with HbA1c < 6.5%, i.e. patients in the early stages of diabetes, was significantly greater than in healthy controls only by the new analytical approach, not by the established one. These results suggest that platelet aggregation as detected by the particle-counting method using laser-light scattering could be applied in clinical examinations through our new analytical approach.

  6. Producing custom regional climate data sets for impact assessment with xarray

    NASA Astrophysics Data System (ADS)

    Simcock, J. G.; Delgado, M.; Greenstone, M.; Hsiang, S. M.; Kopp, R. E.; Carleton, T.; Hultgren, A.; Jina, A.; Nath, I.; Rising, J. A.; Rode, A.; Yuan, J.; Chong, T.; Dobbels, G.; Hussain, A.; Song, Y.; Wang, J.; Mohan, S.; Larsen, K.; Houser, T.

    2017-12-01

    Research in the field of climate impact assessment and valuation frequently requires the pairing of economic observations with historical or projected weather variables. Impact assessments with large geographic scope or spatially aggregated data frequently require climate variables to be prepared for use with administrative/political regions, economic districts such as utility service areas, physical regions such as watersheds, or other larger, non-gridded shapes. Approaches to preparing such data in the literature vary from methods developed out of convenience to more complex measures intended to account for spatial heterogeneity. But more sophisticated methods are difficult to implement, from both a theoretical and a technical standpoint. We present a new python package designed to assist researchers in the preparation of historical and projected climate data for arbitrary spatial definitions. Users specify transformations by providing (a) sets of regions in the form of shapefiles, (b) gridded data to be transformed, and, optionally, (c) gridded weights to use in the transformation. By default, aggregation to regions is conducted such that the resulting regional data draws from each grid cell according to the cell's share of total region area. However, researchers can provide alternative weighting schemes, such that the regional data is weighted by, for example, the population or planted agricultural area within each cell. An advantage of this method is that it enables easy preparation of nonlinear transformations of the climate data before aggregation to regions, allowing aggregated variables to more accurately capture the spatial heterogeneity within a region in the transformed data. At this session, we will allow attendees to view transformed climate projections, examining the effect of various weighting schemes and nonlinear transformations on aggregate regional values, highlighting the implications for climate impact assessment work.
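
    The workflow the abstract describes (mask a gridded field by regions, optionally weight, transform before aggregating) can be sketched directly in xarray. The sketch below is illustrative only; the variable names and synthetic inputs are assumptions, not the authors' package API.

    ```python
    # Hedged sketch of weighted regional aggregation with xarray.
    import numpy as np
    import xarray as xr

    # Gridded daily field, a region-ID mask on the same grid, and gridded
    # weights (e.g. population) -- all synthetic placeholders here.
    tas = xr.DataArray(np.random.rand(365, 90, 180), dims=("time", "lat", "lon"))
    region_mask = xr.DataArray(np.random.randint(0, 5, (90, 180)), dims=("lat", "lon"))
    pop = xr.DataArray(np.random.rand(90, 180), dims=("lat", "lon"))

    # Nonlinear transformation applied BEFORE aggregation, so the regional
    # series preserves within-region heterogeneity of the transformed field.
    tas2 = tas ** 2

    # Population-weighted mean of the transformed field for each region.
    region_ids = np.unique(region_mask.values)
    series = []
    for rid in region_ids:
        w = pop.where(region_mask == rid, 0.0)
        series.append((tas2 * w).sum(("lat", "lon")) / w.sum())
    regional = xr.concat(series, dim="region").assign_coords(region=region_ids)
    ```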

  7. Reduced-Order Models for Load Management in the Power Grid

    NASA Astrophysics Data System (ADS)

    Alizadeh, Mahnoosh

    In recent years, considerable research efforts have been directed towards designing control schemes that can leverage the inherent flexibility of electricity demand that is not tapped into in today's electricity markets. It is expected that these control schemes will be carried out by for-profit entities referred to as aggregators that operate at the edge of the power grid network. While the aggregator control problem is receiving much attention, higher-level questions of how these aggregators should plan their market participation and interact with the main grid and with each other remain rather understudied. Answering these questions requires a large-scale model for the aggregate flexibility that can be harnessed from a population of customers, particularly residences and small businesses. The contribution of this thesis towards this goal is divided into three parts. In Chapter 3, a reduced-order model for a large population of heterogeneous appliances is provided by clustering load profiles that share similar degrees of freedom together. The use of such a reduced-order model for system planning and optimal market decision making requires a foresighted approximation of the number of appliances that will join each cluster. Thus, Chapter 4 provides a systematic framework to generate such forecasts for the case of electric vehicles, based on real-world battery charging data. While these two chapters set aside the economic side that is naturally involved with participation in demand response programs and mainly focus on the control problem, Chapter 5 is dedicated to the study of optimal pricing mechanisms to recruit heterogeneous customers into a demand response program in which an aggregator can directly manage their appliances' load under their specified preferences. Prices are proportional to the wholesale market savings that can result from each recruitment event.

  8. Problem of the thermodynamic status of the mixed-layer minerals

    USGS Publications Warehouse

    Zen, E.-A.

    1962-01-01

    Minerals that show mixed layering, particularly with the component layers in random sequence, pose problems because they may behave thermodynamically as single phases or as polyphase aggregates. Two operational criteria are proposed for their distinction. The first scheme requires two samples of mixed-layer material which differ only in the proportions of the layers. If each of these two samples is allowed to equilibrate with the same suitably chosen monitoring solution, then the intensive parameters of the solution will be invariant if the mixed-layer sample is a polyphase aggregate, but not otherwise. The second scheme makes use of the fact that portions of many titration curves of clay minerals show constancy of the chemical activities of the components in the equilibrating solutions, suggesting phase separation. If such phase separation occurs for a mixed-layer material, then, knowing the number of independent components in the system, it should be possible to decide on the number of phases the mixed-layer material represents. Knowledge of the phase status of mixed-layer material is essential to the study of the equilibrium relations of mineral assemblages involving such material, because a given mixed-layer mineral will be plotted and treated differently on a phase diagram depending on whether it is a single phase or a polyphase aggregate. Extension of the titration technique to minerals other than the mixed-layer type is possible. In particular, this method may be used to determine whether cryptoperthites and peristerites are polyphase aggregates. In general, for any high-order phase separation, the method may be used to decide just at what point in this continuous process the system must be regarded operationally as a polyphase aggregate. © 1962.

  9. The Impact of Hospital Payment Schemes on Healthcare and Mortality: Evidence from Hospital Payment Reforms in OECD Countries.

    PubMed

    Wubulihasimu, Parida; Brouwer, Werner; van Baal, Pieter

    2016-08-01

    In this study, aggregate-level panel data from 20 Organization for Economic Cooperation and Development countries over three decades (1980-2009) were used to investigate the impact of hospital payment reforms on healthcare output and mortality. Hospital payment schemes were classified as fixed-budget (i.e. not directly based on activities), fee-for-service (FFS) or patient-based payment (PBP) schemes. The data were analysed using a difference-in-difference model that allows for a structural change in outcomes due to payment reform. The results suggest that FFS schemes increase the growth rate of healthcare output, whereas PBP schemes positively affect life expectancy at age 65 years. However, these results should be interpreted with caution, as results are sensitive to model specification. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements on analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the major interest was the determination of oligosaccharide losses within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.

  11. Agent Based Simulation Design for Aggregation and Disaggregation

    DTIC Science & Technology

    2011-12-01

    Finding conditions under which aggregation equations might be reasonably valid (requires theoretical analysis).

  12. The Mechanisms of Aberrant Protein Aggregation

    NASA Astrophysics Data System (ADS)

    Cohen, Samuel; Vendruscolo, Michele; Dobson, Chris; Knowles, Tuomas

    2012-02-01

    We discuss the development of a kinetic theory for understanding the aberrant loss of solubility of proteins. The failure to maintain protein solubility results often in the assembly of organized linear structures, commonly known as amyloid fibrils, the formation of which is associated with over 50 clinical disorders including Alzheimer's and Parkinson's diseases. A true microscopic understanding of the mechanisms that drive these aggregation processes has proved difficult to achieve. To address this challenge, we apply the methodologies of chemical kinetics to the biomolecular self-assembly pathways related to protein aggregation. We discuss the relevant master equation and analytical approaches to studying it. In particular, we derive the underlying rate laws in closed-form using a self-consistent solution scheme; the solutions that we obtain reveal scaling behaviors that are very generally present in systems of growing linear aggregates, and, moreover, provide a general route through which to relate experimental measurements to mechanistic information. We conclude by outlining a study of the aggregation of the Alzheimer's amyloid-beta peptide. The study identifies the dominant microscopic mechanism of aggregation and reveals previously unidentified therapeutic strategies.
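
    For orientation, the closed-form rate laws referred to here are typically derived from moment equations of the master equation for the fibril number concentration P(t) and mass concentration M(t); in the schematic form standard in this literature (our notation, not necessarily the paper's exact equations):

    \[
    \frac{dP}{dt} = k_n\, m(t)^{n_c} + k_2\, m(t)^{n_2} M(t), \qquad
    \frac{dM}{dt} = 2\, k_+\, m(t)\, P(t),
    \]

    where m(t) is the free monomer concentration, k_n and k_2 are primary and secondary nucleation rate constants, k_+ is the elongation rate constant, and the factor 2 accounts for growth at both fibril ends.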

  13. An Inherent-Optical-Property-Centered Approach to Correct the Angular Effects in Water-Leaving Radiance

    DTIC Science & Technology

    2011-07-01

    …10%. These results demonstrate that the IOP-based BRDF correction scheme (which is composed of the Rrs model along with the IOP retrieval…) … The IOP-based BRDF correction scheme is applied to both… oceanic and coastal waters were very consistent qualitatively and quantitatively and thus validate the IOP-based BRDF correction system, at least…

  14. Aggregating the response in time series regression models, applied to weather-related cardiovascular mortality

    NASA Astrophysics Data System (ADS)

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.

    2018-07-01

    In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship.
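
    As a concrete illustration of the kind of kernel-based temporal aggregation the paper compares, the sketch below applies a one-sided (asymmetric) Epanechnikov kernel to a noisy daily mortality series. Names and parameters are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def epanechnikov_weights(h):
        # One-sided Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on u in [0, 1),
        # normalized to sum to one.
        u = np.arange(h) / h
        w = 0.75 * (1.0 - u ** 2)
        return w / w.sum()

    def aggregate_response(y, h):
        # y[t] becomes a kernel-weighted sum of y[t], y[t-1], ..., y[t-h+1],
        # with the largest weight on the current day.
        w = epanechnikov_weights(h)
        out = np.full(len(y), np.nan)
        for t in range(h - 1, len(y)):
            out[t] = np.dot(w, y[t - h + 1 : t + 1][::-1])
        return out

    daily_deaths = np.random.poisson(30, 365).astype(float)
    smoothed = aggregate_response(daily_deaths, h=7)
    ```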

  15. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to significantly reduce the complexity of cost aggregation in stereo matching. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.

  16. Role of small oligomers on the amyloidogenic aggregation free-energy landscape.

    PubMed

    He, Xianglan; Giurleo, Jason T; Talaga, David S

    2010-01-08

    We combine atomic-force-microscopy particle-size-distribution measurements with earlier measurements on 1-anilino-8-naphthalene sulfonate, thioflavin T, and dynamic light scattering to develop a quantitative kinetic model for the aggregation of beta-lactoglobulin into amyloid. We directly compare our simulations to the population distributions provided by dynamic light scattering and atomic force microscopy. We combine species in the simulation according to structural type for comparison with fluorescence fingerprint results. The kinetic model of amyloidogenesis leads to an aggregation free-energy landscape. We define the roles of and propose a classification scheme for different oligomeric species based on their location in the aggregation free-energy landscape. We relate the different types of oligomers to the amyloid cascade hypothesis and the toxic oligomer hypothesis for amyloid-related diseases. We discuss existing kinetic mechanisms in terms of the different types of oligomers. We provide a possible resolution to the toxic oligomer-amyloid coincidence.

  17. Improved Monte Carlo Scheme for Efficient Particle Transfer in Heterogeneous Systems in the Grand Canonical Ensemble: Application to Vapor-Liquid Nucleation.

    PubMed

    Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin

    2015-09-08

    Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy-biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy-biasing scheme allows a particle to be inserted into (or removed from) a region that is more acceptable. As a result, this new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
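
    To make the idea concrete, the sketch below shows a generic energy-biased insertion for grand canonical Monte Carlo: candidate sites are proposed with Boltzmann-weighted probability and the acceptance rule is corrected for the bias. This is a generic illustration under assumed reduced units and a discrete candidate-site approximation, not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    beta = 1.0  # 1/kT in reduced units (thermal wavelength factors omitted)

    def test_particle_energies(config, n_sites=100):
        # Placeholder: energy of a test particle at each candidate site.
        return rng.normal(0.0, 1.0, size=n_sites)

    def biased_insertion(config, mu, volume, n_particles):
        u = test_particle_energies(config)
        w = np.exp(-beta * u)
        p = w / w.sum()
        site = rng.choice(len(u), p=p)      # energy-biased proposal
        # Correct the acceptance for the non-uniform proposal: divide out
        # the bias p[site] relative to a uniform choice 1/len(u).
        bias = p[site] * len(u)
        acc = min(1.0, np.exp(beta * mu) * volume / (n_particles + 1)
                  * np.exp(-beta * u[site]) / bias)
        return site, acc
    ```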

  18. Enhancing Privacy in Participatory Sensing Applications with Multidimensional Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrest, Stephanie; He, Wenbo; Groat, Michael

    2013-01-01

    Participatory sensing applications rely on individuals to share personal data to produce aggregated models and knowledge. In this setting, privacy concerns can discourage widespread adoption of new applications. We present a privacy-preserving participatory sensing scheme based on negative surveys for both continuous and multivariate categorical data. Without relying on encryption, our algorithms enhance the privacy of sensed data in an energy- and computation-efficient manner. Simulations and an implementation on Android smart phones illustrate how multidimensional data can be aggregated in a useful and privacy-enhancing manner.
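
    The core negative-survey trick for categorical data is compact enough to sketch: each participant reports a category that is not theirs, and the server inverts the expected distortion. This is a generic illustration of the idea, not the authors' exact protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    k = 4
    true_counts = np.array([500, 300, 150, 50])

    # Each individual reports one uniformly random category that is NOT theirs.
    reports = np.zeros(k, dtype=int)
    for cat, n in enumerate(true_counts):
        others = [c for c in range(k) if c != cat]
        reports += np.bincount(rng.choice(others, size=n), minlength=k)

    # Unbiased reconstruction: E[q_i] = (1 - p_i) / (k - 1), hence
    # p_i = 1 - (k - 1) * q_i, with q_i the observed report share.
    q = reports / true_counts.sum()
    p_hat = 1.0 - (k - 1) * q
    print(np.round(p_hat * true_counts.sum()), true_counts)
    ```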

  19. Sensitivity of effective rainfall amount to land use description using GIS tool. Case of a small mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Payraudeau, S.; Tournoud, M. G.; Cernesson, F.

    Distributed modelling in hydrology requires catchment subdivision to take physical characteristics into account. In this paper, we test the effect of the land use aggregation scheme on the catchment hydrological response. The evolution of intra-subcatchment land use is studied using statistical and entropy methods. The SCS-CN method is used to calculate effective rainfall, which is here assimilated to the hydrological response. Our purpose is to determine the existence of a critical threshold area appropriate for the application of hydrological modelling. Land use aggregation effects on effective rainfall are assessed on a small Mediterranean catchment. The results show that land use aggregation and the land use classification type have significant effects on hydrological modelling, and in particular on effective rainfall modelling.
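
    For reference, the SCS-CN computation of effective rainfall mentioned above follows a standard closed-form formula; the sketch below uses an illustrative curve number, not values from the study.

    ```python
    def scs_cn_effective_rainfall(p_mm, cn):
        """Effective rainfall (runoff) Q in mm for storm rainfall P in mm."""
        s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
        ia = 0.2 * s                   # initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # A 60 mm storm on a catchment whose aggregated land use gives CN = 75:
    print(scs_cn_effective_rainfall(60.0, 75))   # ~14.5 mm of effective rainfall
    ```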

  20. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
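
    The two probabilistic scores named here have standard empirical forms, sketched below for a single ensemble-observation pair (textbook formulas, not code from the study).

    ```python
    import numpy as np

    def crps_ensemble(members, obs):
        # CRPS = E|X - y| - 0.5 * E|X - X'| for ensemble draws X and observation y.
        m = np.asarray(members, dtype=float)
        return np.mean(np.abs(m - obs)) - 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))

    def rcrv(members, obs, obs_err=0.0):
        # Reduced centred random variable: y = (obs - mean) / sqrt(sigma_o^2 + s^2).
        # Over many cases, mean(y) estimates bias and std(y) dispersion (ideally 0, 1).
        m = np.asarray(members, dtype=float)
        return (obs - m.mean()) / np.sqrt(obs_err**2 + m.var(ddof=1))

    ens = np.random.default_rng(2).normal(20.0, 0.5, size=50)
    print(crps_ensemble(ens, 20.3), rcrv(ens, 20.3, obs_err=0.1))
    ```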

  1. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement of the previous scheme that the particle collision time be less than the time step for the BGK Navier-Stokes solution to be valid is removed. Therefore, the applicable regime of the current method is much enlarged, and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, valid only under that limiting condition. Also, in this paper, the appropriate implementation of boundary conditions for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial-dissipation central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
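
    For reference, the BGK model underlying the scheme replaces the Boltzmann collision integral with relaxation of the distribution function f toward a local equilibrium g over a collision time τ (standard form, independent of this paper's particular discretization):

    \[
    \frac{\partial f}{\partial t} + \mathbf{u}\cdot\nabla_{\mathbf{x}} f = \frac{g - f}{\tau}.
    \]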

  2. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems

    PubMed Central

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-01

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer, which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulation is then conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance considering the required QoS. PMID:28134853
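
    The aggregation step itself is simple to sketch: many small sensor packets are greedily packed into one A-MPDU-style aggregate up to a size limit, amortizing per-frame channel-access overhead. The size constants below are illustrative assumptions, not values from the paper.

    ```python
    from typing import List

    MAX_AGGREGATE_BYTES = 2304   # assumed aggregate size limit
    DELIMITER_BYTES = 4          # assumed per-subframe delimiter overhead

    def aggregate_packets(packets: List[bytes]) -> List[List[bytes]]:
        """Greedily pack small packets into A-MPDU-like aggregates."""
        aggregates, current, size = [], [], 0
        for p in packets:
            need = len(p) + DELIMITER_BYTES
            if size + need > MAX_AGGREGATE_BYTES and current:
                aggregates.append(current)
                current, size = [], 0
            current.append(p)
            size += need
        if current:
            aggregates.append(current)
        return aggregates

    # 100 vital-sign packets of 40 bytes each -> 2 channel accesses instead of 100.
    print(len(aggregate_packets([b"x" * 40] * 100)))
    ```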

  3. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    PubMed

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer, which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulation is then conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance considering the required QoS.

  4. Validation of TxDOT flexible pavement skid prediction model : workshop : student guide.

    DOT National Transportation Integrated Search

    2017-05-01

    Course materials: background summary of Research Project 0-5627; short presentation of research tasks and findings from Research Project 0-6746; aggregate characterization with the Aggregate Imaging Measurement System (AIMS) and Micro-D...

  5. Intrusion-aware alert validation algorithm for cooperative distributed intrusion detection schemes of wireless sensor networks.

    PubMed

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  6. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    PubMed Central

    Shaikh, Riaz Ahmed; Jameel, Hassan; d’Auriol, Brian J.; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm. PMID:22454568

  7. An alternative resource sharing scheme for land mobile satellite services

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Sue, Miles K.

    1990-01-01

    A preliminary comparison between the two competing channelization concepts for the Land Mobile Satellite Services (LMSS), namely frequency division (FD) and code division (CD), is presented. Both random access and demand-assigned approaches are considered under these concepts. The CD concept is compared with the traditional FD concept based on the system consideration and a projected traffic model. It is shown that CD is not particularly attractive for the first generation Mobile Satellite Services because of the spectral occupancy of the network bandwidth. However, the CD concept is a viable alternative for future systems such as the personal access satellite system (PASS) in the Ka-band spectrum where spectral efficiency is not of prime concern. The effects of power robbing and voice activity factor are incorporated. It was shown that the traditional rule of thumb of dividing the number of raw channels by the voice activity factor to obtain the effective number of channels is only valid asymptotically as the aggregated traffic approaches infinity.

  8. An alternative resource sharing scheme for land mobile satellite services

    NASA Astrophysics Data System (ADS)

    Yan, Tsun-Yee; Sue, Miles K.

    A preliminary comparison between the two competing channelization concepts for the Land Mobile Satellite Services (LMSS), namely frequency division (FD) and code division (CD), is presented. Both random access and demand-assigned approaches are considered under these concepts. The CD concept is compared with the traditional FD concept based on the system consideration and a projected traffic model. It is shown that CD is not particularly attractive for the first generation Mobile Satellite Services because of the spectral occupancy of the network bandwidth. However, the CD concept is a viable alternative for future systems such as the personal access satellite system (PASS) in the Ka-band spectrum where spectral efficiency is not of prime concern. The effects of power robbing and voice activity factor are incorporated. It was shown that the traditional rule of thumb of dividing the number of raw channels by the voice activity factor to obtain the effective number of channels is only valid asymptotically as the aggregated traffic approaches infinity.

  9. A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function

    PubMed Central

    Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and for mutual authentication the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using passwords, biometrics, and smart cards have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient because it uses only an efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform a simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently and correctly supports the password change phase, which is always performed locally without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and the features it provides. PMID:24892078
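
    As a generic illustration of the hash-and-XOR primitives such schemes are built from (not the authors' protocol), a card can store a masked verifier and later prove knowledge of the password with a nonce challenge, using nothing heavier than a hash:

    ```python
    import hashlib, os, secrets

    def h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"|".join(parts)).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Registration: the card stores V = h(ID, PW) XOR h(server_key, ID);
    # neither the password nor the server key appears in the clear on the card.
    server_key = os.urandom(32)
    ID, PW = b"alice", b"correct horse"
    V = xor(h(ID, PW), h(server_key, ID))

    # Login: the card recomputes h(ID, PW), unmasks the shared secret, and
    # answers a nonce challenge with a hash-based proof.
    nonce = secrets.token_bytes(16)
    shared = xor(V, h(ID, PW))        # equals h(server_key, ID)
    proof = h(shared, nonce)

    # The server verifies with its long-term key alone.
    assert proof == h(h(server_key, ID), nonce)
    ```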

  10. A robust and effective smart-card-based remote user authentication mechanism using hash function.

    PubMed

    Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and for mutual authentication the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using passwords, biometrics, and smart cards have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient because it uses only an efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform a simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently and correctly supports the password change phase, which is always performed locally without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and the features it provides.

  11. 3D simulations of early blood vessel formation

    NASA Astrophysics Data System (ADS)

    Cavalli, F.; Gamba, A.; Naldi, G.; Semplice, M.; Valdembri, D.; Serini, G.

    2007-08-01

    Blood vessel networks form by spontaneous aggregation of individual cells migrating toward vascularization sites (vasculogenesis). A successful theoretical model of two-dimensional experimental vasculogenesis has recently been proposed, showing the relevance of percolation concepts and of cell cross-talk (the chemotactic autocrine loop) to the understanding of this self-aggregation process. Here we study the natural 3D extension of the computational model proposed earlier, which is relevant for the investigation of the genuinely three-dimensional process of vasculogenesis in vertebrate embryos. The computational model is based on a multidimensional Burgers equation coupled with a reaction-diffusion equation for a chemotactic factor and a mass conservation law. The numerical approximation of the computational model is obtained by high-order relaxed schemes. Space and time discretizations are performed using TVD and IMEX schemes, respectively. Due to the computational cost of realistic simulations, we have implemented the numerical algorithm on a cluster for parallel computation. Starting from initial conditions mimicking the experimentally observed ones, numerical simulations produce network-like structures qualitatively similar to those observed in the early stages of in vivo vasculogenesis. We develop the computation of critical percolative indices as a robust measure of the network geometry, as a first step towards the comparison of computational and experimental data.
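
    In schematic form (our notation; the paper's exact equations may differ in the force and dissipation terms), the model couples mass conservation and a Burgers-type momentum equation for the cell density ρ and velocity v with reaction-diffusion of the chemoattractant c:

    \[
    \partial_t \rho + \nabla\cdot(\rho\,\mathbf{v}) = 0, \qquad
    \partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\,\mathbf{v} = \mu\,\nabla c, \qquad
    \partial_t c = D\,\Delta c + \alpha\,\rho - \frac{c}{\tau},
    \]

    where cells both emit the chemoattractant (the source term αρ) and drift up its gradient, closing the autocrine loop.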

  12. An Updated Scheme for Categorizing Foods Implicated in Foodborne Disease Outbreaks: A Tri-Agency Collaboration.

    PubMed

    Richardson, LaTonia Clay; Bazaco, Michael C; Parker, Cary Chen; Dewey-Mattia, Daniel; Golden, Neal; Jones, Karen; Klontz, Karl; Travis, Curtis; Kufel, Joanna Zablotsky; Cole, Dana

    2017-12-01

    Foodborne disease data collected during outbreak investigations are used to estimate the percentage of foodborne illnesses attributable to specific food categories. Current food categories do not reflect whether or how the food has been processed and exclude many multiple-ingredient foods. Representatives from three federal agencies worked collaboratively in the Interagency Food Safety Analytics Collaboration (IFSAC) to develop a hierarchical scheme for categorizing foods implicated in outbreaks, which accounts for the type of processing and provides more specific food categories for regulatory purposes. IFSAC also developed standard assumptions for assigning foods to specific food categories, including some multiple-ingredient foods. The number and percentage of outbreaks assignable to each level of the hierarchy were summarized. The IFSAC scheme is a five-level hierarchy for categorizing implicated foods with increasingly specific subcategories at each level, resulting in a total of 234 food categories. Subcategories allow distinguishing features of implicated foods to be reported, such as pasteurized versus unpasteurized fluid milk, shell eggs versus liquid egg products, ready-to-eat versus raw meats, and five different varieties of fruit categories. Twenty-four aggregate food categories contained a sufficient number of outbreaks for source attribution analyses. Among 9791 outbreaks reported from 1998 to 2014 with an identified food vehicle, 4607 (47%) were assignable to food categories using this scheme. Among these, 4218 (92%) were assigned to one of the 24 aggregate food categories, and 840 (18%) were assigned to the most specific category possible. Updates to the food categorization scheme and new methods for assigning implicated foods to specific food categories can help increase the number of outbreaks attributed to a single food category. The increased specificity of food categories in this scheme may help improve source attribution analyses, eventually leading to improved foodborne illness source attribution estimates and enhanced food safety and regulatory efforts.

  13. Pneumatic Distension of Ventricular Mural Architecture Validated Histologically.

    PubMed

    Burg, M C; Lunkenheimer, P; Niederer, P; Brune, C; Redmann, K; Smerup, M; Spiegel, U; Becker, F; Maintz, D; Heindel, W; Anderson, R H

    2016-11-01

    Purpose: There are ongoing arguments as to how cardiomyocytes are aggregated together within the ventricular walls. We used pneumatic distension through the coronary arteries to exaggerate the gaps between the aggregated cardiomyocytes, analyzing the pattern revealed using computed tomography and validating our findings by histology. Methods: We distended 10 porcine hearts, arresting 4 in diastole by infusion of cardioplegic solutions, and 4 in systole by injection of barium chloride. Mural architecture was revealed by computed tomography, from which we also measured the angulations of the long chains of cardiomyocytes. We prepared the remaining 2 hearts for histology by perfusion with formaldehyde. Results: Increasing pressures of pneumatic distension elongated the ventricular walls but produced insignificant changes in mural thickness. The distension exaggerated the spaces between the aggregated cardiomyocytes, compartmenting the walls into epicardial, central, and endocardial regions, with a feathered arrangement of transitions between them. Marked variation was noted in the thicknesses of the parts in the different ventricular segments, with no visible anatomical boundaries between them. Measurements of angulations revealed intruding and extruding populations of cardiomyocytes that deviated from a surface-parallel alignment. Scrolling through the stacks of tomographic images revealed marked spiraling of the aggregated cardiomyocytes when traced from base to apex. Conclusion: Our findings call into question the current assumption that cardiomyocytes are uniformly aggregated together in a tangential fashion. There is marked heterogeneity in the architecture of the different ventricular segments, with the aggregated units never extending in a fully transmural fashion. Key Points: • Pneumographic computed tomography reveals an organized structure of the ventricular walls. • Aggregated cardiomyocytes form a structured continuum, with marked regional heterogeneity. • Global ventricular function results from antagonistic forces generated by aggregated cardiomyocytes. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Reproducibility and Variability of I/O Performance on BG/Q: Lessons Learned from a Data Aggregation Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tessier, Francois; Vishwanath, Venkatram

    2017-11-28

    Reading and writing data efficiently from different tiers of storage is necessary for most scientific simulations to achieve good performance at scale. Many software solutions have been developed to reduce the I/O bottleneck. One well-known strategy, in the context of collective I/O operations, is the two-phase I/O scheme. This strategy consists of selecting a subset of processes to aggregate contiguous pieces of data before performing reads/writes. In our previous work, we implemented the two-phase I/O scheme with an MPI-based topology-aware algorithm. Our algorithm showed very good performance at scale compared to standard I/O libraries such as POSIX I/O and MPI I/O. However, the algorithm had several limitations hindering a satisfying reproducibility of our experiments. In this paper, we extend our work by (1) identifying the obstacles we face in reproducing our experiments and (2) discovering solutions that reduce the unpredictability of our results.
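
    A minimal sketch of the two-phase idea in mpi4py (illustrative only; the authors' topology-aware aggregator placement is not reproduced, and the layout assumes the rank count divides evenly into groups):

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    RANKS_PER_AGGREGATOR = 4                      # assumed group size

    local = np.full(1024, rank, dtype=np.int32)   # each rank's contiguous chunk

    # Phase 1: gather the group's chunks onto its aggregator (group rank 0).
    color = rank // RANKS_PER_AGGREGATOR
    group = comm.Split(color=color, key=rank)
    gathered = group.gather(local, root=0)

    # Phase 2: only aggregators touch the file, each writing one large block.
    fh = MPI.File.Open(comm, "out.bin", MPI.MODE_CREATE | MPI.MODE_WRONLY)
    if group.Get_rank() == 0:
        block = np.concatenate(gathered)
        fh.Write_at(color * block.nbytes, block)
    fh.Close()
    ```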

  15. On the validation of cloud parametrization schemes in numerical atmospheric models with satellite data from ISCCP

    NASA Astrophysics Data System (ADS)

    Meinke, I.

    2003-04-01

    A new method is presented to validate cloud parametrization schemes in numerical atmospheric models with satellite data from scanning radiometers. This method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). The limited reliability of former validations has created a need for a new validation method: up to now, differences between simulated and measured cloud properties have mostly been declared deficiencies of the cloud parametrization scheme without further investigation. Other uncertainties connected with the model or with the measurements have not been taken into account. Changes to a cloud parametrization scheme based on such validations might therefore not be realistic. The new method estimates the uncertainties of the model and the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies, simulated clouds are classified according to their parametrization. With this classification, the localized model deficiencies are allocated to a specific parametrization scheme. Applying this method to the regional model HRM, the quality of cloud property forecasts is assessed in detail. The overestimation of simulated clouds at low emissivity heights, especially during the night, is localized as a model deficiency. It is caused by subscale cloudiness. As the simulation of subscale clouds in the regional model HRM is described by a relative humidity parametrization, these deficiencies are connected with this parametrization.

  16. Developing a contributing factor classification scheme for Rasmussen's AcciMap: Reliability and validity evaluation.

    PubMed

    Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F

    2017-10-01

    One factor potentially limiting the uptake of Rasmussen's (1997) AcciMap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of AcciMap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system-level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M_T1 = 68.8%; M_T2 = 73.9%), and were poor at the descriptor level (M_T1 = 58.5%; M_T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M_T1 = 73.9%; M_T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M_T1 = 67.6%; M_T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    NASA Astrophysics Data System (ADS)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent need for more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open-access data. We compare the individual and aggregated flood extent outputs from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012 and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open-access validation flood events, with associated observational data and descriptions, that provides a standard set of tests across different climates and hydraulic conditions.
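
    A sketch of the kind of fit test involved, using the critical success index (a common flood-extent skill score; whether the study used this exact metric is our assumption) on binary rasters, with a simple at-least-k-models aggregation:

    ```python
    import numpy as np

    def critical_success_index(model: np.ndarray, observed: np.ndarray) -> float:
        """CSI = hits / (hits + misses + false alarms) for boolean extent rasters."""
        hits = np.sum(model & observed)
        misses = np.sum(~model & observed)
        false_alarms = np.sum(model & ~observed)
        return hits / (hits + misses + false_alarms)

    rng = np.random.default_rng(3)
    obs = rng.random((100, 100)) > 0.7                         # observed extent
    models = [rng.random((100, 100)) > 0.7 for _ in range(6)]  # six model extents

    # Aggregate extent: cells flagged by at least 2 of the 6 models.
    agg = np.sum(models, axis=0) >= 2
    print([round(critical_success_index(m, obs), 3) for m in models])
    print(round(critical_success_index(agg, obs), 3))
    ```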

  18. BagMOOV: A novel ensemble for heart disease prediction bootstrap aggregation with multi-objective optimized voting.

    PubMed

    Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan

    2015-06-01

    Conventional clinical decision support systems are based on individual classifiers or simple combinations of these classifiers, which tend to show moderate performance. This research paper presents a novel classifier ensemble framework based on an enhanced bagging approach with a multi-objective weighted voting scheme for the prediction and analysis of heart disease. The proposed model overcomes the limitations of conventional approaches by utilizing an ensemble of five heterogeneous classifiers: naive Bayes, linear regression, quadratic discriminant analysis, instance-based learner and support vector machines. Five different datasets, obtained from publicly available data repositories, are used for experimentation, evaluation and validation. The effectiveness of the proposed ensemble is investigated by comparing its results with those of several classifiers. Prediction results of the proposed ensemble model are assessed by ten-fold cross validation and ANOVA statistics. The experimental evaluation shows that the proposed framework deals with all types of attributes and achieves a high diagnosis accuracy of 84.16%, 93.29% sensitivity, 96.70% specificity, and 82.15% f-measure. An F-ratio higher than the critical value and p-values less than 0.05 at the 95% confidence level indicate that the results are statistically significant for most of the datasets.
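
    In scikit-learn terms, the architecture is roughly "bag each heterogeneous base learner, then combine with weighted soft voting". The sketch below stands in plain cross-validated accuracy for the paper's multi-objective weights and uses a public dataset; both are assumptions (requires scikit-learn >= 1.2 for the estimator= argument).

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.ensemble import BaggingClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    bases = [
        ("nb", GaussianNB()),
        ("lr", LogisticRegression(max_iter=5000)),  # stands in for linear regression
        ("qda", QuadraticDiscriminantAnalysis()),
        ("knn", KNeighborsClassifier()),            # instance-based learner
        ("svm", SVC(probability=True)),
    ]
    # Bag each base learner, then weight its vote by cross-validated accuracy.
    bagged = [(n, BaggingClassifier(estimator=m, n_estimators=10, random_state=0))
              for n, m in bases]
    weights = [cross_val_score(m, X_tr, y_tr, cv=5).mean() for _, m in bagged]
    ensemble = VotingClassifier(bagged, voting="soft", weights=weights)
    print(ensemble.fit(X_tr, y_tr).score(X_te, y_te))
    ```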

  19. Recovering Protein-Protein and Domain-Domain Interactions from Aggregation of IP-MS Proteomics of Coregulator Complexes

    PubMed Central

    Mazloom, Amin R.; Dannenfelser, Ruth; Clark, Neil R.; Grigoryan, Arsen V.; Linder, Kathryn M.; Cardozo, Timothy J.; Bond, Julia C.; Boran, Aislyn D. W.; Iyengar, Ravi; Malovannaya, Anna; Lanz, Rainer B.; Ma'ayan, Avi

    2011-01-01

    Coregulator proteins (CoRegs) are part of multi-protein complexes that transiently assemble with transcription factors and chromatin modifiers to regulate gene expression. In this study we analyzed data from 3,290 immuno-precipitations (IP) followed by mass spectrometry (MS) applied to human cell lines aimed at identifying CoRegs complexes. Using the semi-quantitative spectral counts, we scored binary protein-protein and domain-domain associations with several equations. Unlike previous applications, our methods scored prey-prey protein-protein interactions regardless of the baits used. We also predicted domain-domain interactions underlying predicted protein-protein interactions. The quality of predicted protein-protein and domain-domain interactions was evaluated using known binary interactions from the literature, whereas one protein-protein interaction, between STRN and CTTNBP2NL, was validated experimentally; and one domain-domain interaction, between the HEAT domain of PPP2R1A and the Pkinase domain of STK25, was validated using molecular docking simulations. The scoring schemes presented here recovered known, and predicted many new, complexes, protein-protein, and domain-domain interactions. The networks that resulted from the predictions are provided as a web-based interactive application at http://maayanlab.net/HT-IP-MS-2-PPI-DDI/. PMID:22219718
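
    One simple way to score prey-prey associations from a spectral-count matrix, in the spirit of (but not identical to) the paper's equations, is cosine similarity between prey profiles across all IP experiments:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    counts = rng.poisson(2.0, size=(50, 300)).astype(float)  # 50 preys x 300 IPs

    # Normalize each prey's spectral-count profile, then score all pairs at once.
    norms = np.linalg.norm(counts, axis=1, keepdims=True)
    unit = counts / np.where(norms == 0.0, 1.0, norms)
    score = unit @ unit.T                     # prey-prey cosine similarity matrix

    # Rank candidate prey-prey interactions (upper triangle, highest first).
    i, j = np.triu_indices(len(score), k=1)
    order = np.argsort(score[i, j])[::-1]
    print([(int(i[k]), int(j[k]), round(float(score[i[k], j[k]]), 3))
           for k in order[:5]])
    ```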

  20. Validation of a Proposed Tumor Regression Grading Scheme for Pancreatic Ductal Adenocarcinoma After Neoadjuvant Therapy as a Prognostic Indicator for Survival.

    PubMed

    Lee, Sun Mi; Katz, Matthew H G; Liu, Li; Sundar, Manonmani; Wang, Hua; Varadhachary, Gauri R; Wolff, Robert A; Lee, Jeffrey E; Maitra, Anirban; Fleming, Jason B; Rashid, Asif; Wang, Huamin

    2016-12-01

    Neoadjuvant therapy has been increasingly used to treat patients with potentially resectable pancreatic ductal adenocarcinoma (PDAC). Although the College of American Pathologists (CAP) grading scheme for tumor response in posttherapy specimens has been used, its clinical significance has not been validated. Previously, we proposed a 3-tier histologic tumor regression grading (HTRG) scheme (HTRG 0, no viable tumor; HTRG 1, <5% viable tumor cells; HTRG 2, ≥5% viable tumor cells) and showed that the 3-tier HTRG scheme correlated with prognosis. In this study, we sought to validate our proposed HTRG scheme in a new cohort of 167 consecutive PDAC patients who completed neoadjuvant therapy and pancreaticoduodenectomy. We found that HTRG 0 or 1 was associated with a lower frequency of lymph node metastasis (P=0.004) and recurrence (P=0.01), lower ypT (P<0.001) and AJCC stage (P<0.001), and longer disease-free survival (DFS, P=0.004) and overall survival (OS, P=0.02) than HTRG 2. However, there was no difference in either DFS or OS between the groups with CAP grade 2 and those with CAP grade 3 (P>0.05). In multivariate analysis, HTRG 0 or 1 was an independent prognostic factor for better DFS (P=0.03), but not OS. We therefore validated the HTRG scheme proposed in our previous study. The scheme is simple, easy for pathologists to apply in practice, and might serve as a surrogate for longer DFS in patients with potentially resectable PDAC who complete neoadjuvant therapy and surgery.

  1. An Introduction to "Benefit of the Doubt" Composite Indicators

    ERIC Educational Resources Information Center

    Cherchye, Laurens; Moesen, Willem; Rogge, Nicky; Van Puyenbroeck, Tom

    2007-01-01

    Despite their increasing use, composite indicators remain controversial. The undesirable dependence of countries' rankings on the preliminary normalization stage, and the disagreement among experts/stakeholders on the specific weighting scheme used to aggregate sub-indicators, are often invoked to undermine the credibility of composite indicators.…
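
    "Benefit of the doubt" composite indicators are conventionally computed with a DEA-style linear program that lets each country choose the weights most favourable to itself. A hedged sketch of that construction, with invented data and SciPy's linprog:

        # DEA-style "benefit of the doubt" score: each country maximizes its
        # own weighted sum of normalized sub-indicators, subject to no country
        # scoring above 1 under those weights. Data below are made up.
        import numpy as np
        from scipy.optimize import linprog

        Y = np.array([[0.8, 0.6, 0.9],    # rows: countries
                      [0.5, 0.9, 0.7],    # cols: normalized sub-indicators
                      [0.9, 0.4, 0.6]])

        def bod_score(i):
            # maximize Y[i] @ w  <=>  minimize -Y[i] @ w, s.t. Y @ w <= 1
            res = linprog(c=-Y[i], A_ub=Y, b_ub=np.ones(len(Y)),
                          bounds=(0, None))
            return -res.fun

        for i in range(len(Y)):
            print(f"country {i}: BoD score = {bod_score(i):.3f}")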

  2. Aggregating the response in time series regression models, applied to weather-related cardiovascular mortality.

    PubMed

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B M J

    2018-07-01

    In environmental epidemiology studies, health response data (e.g. hospitalization or mortality counts) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on the health data and then using the aggregate as the response in regression analysis. A general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with the regression models commonly applied in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show an increase in fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among the various aggregation schemes investigated, an aggregation with an asymmetric Epanechnikov kernel was found to be best suited for studying the temperature-mortality relationship.
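
    A small sketch of the kind of kernel-weighted temporal aggregation the abstract refers to, here a one-sided Epanechnikov kernel over the current and past days as a stand-in for the paper's asymmetric kernel; the window length and synthetic data are arbitrary:

        # Aggregate a noisy daily health response with one-sided
        # Epanechnikov kernel weights over the current and past days.
        import numpy as np

        def epanechnikov(u):
            return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

        def aggregate_response(y, width=7):
            lags = np.arange(width)            # 0 = today, width-1 days back
            w = epanechnikov(lags / width)     # one-sided (asymmetric) weights
            w /= w.sum()
            # weighted sum of current and past values at each time t
            return np.convolve(y, w, mode="full")[: len(y)]

        rng = np.random.default_rng(0)
        deaths = rng.poisson(30, size=365).astype(float)  # synthetic counts
        smoothed = aggregate_response(deaths)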

  3. Innovative Equipment and Production Method for Mixed Fodder in the Conditions of Agricultural Enterprises

    NASA Astrophysics Data System (ADS)

    Sabiev, U. K.; Demchuk, E. V.; Myalo, V. V.; Soyunov, A. S.

    2017-07-01

    Cattle and poultry are best fed grain fodder in the form of a feed mixture balanced in its nutrient content; feeding unprocessed grain fodder is inefficient and economically unreasonable. The article is devoted to a topical problem: the preparation of mixed fodder on agricultural enterprises. A review and critical analysis of mixed-fodder machines and units is given. Structural and technical schemes were developed for a small-scale mixed-fodder unit with intensified vibrating and percussive attachments for preparing bulk feed mixtures on agricultural enterprises. A mixed-fodder unit is also suggested for preparation at the place of direct consumption, using the farm's own grain fodder together with purchased protein and vitamin supplements. The unit produces mixed fodder of high uniformity at low energy and production costs, making it profitable for livestock breeding. A model line-up of the suggested units, with different capacities for both small and large agricultural enterprises, is considered.

  4. Spatial aggregation query in dynamic geosensor networks

    NASA Astrophysics Data System (ADS)

    Yi, Baolin; Feng, Dayang; Xiao, Shisong; Zhao, Erdun

    2007-11-01

    Wireless sensor networks have been widely used for civilian and military applications, such as environmental monitoring and vehicle tracking. In many of these applications, research has mainly aimed at building sensor-network-based systems that deliver the sensed data to applications. Existing work, however, has seldom addressed spatial aggregation queries that account for the dynamic characteristics of sensor networks. In this paper, we investigate how to process spatial aggregation queries over dynamic geosensor networks in which both the sink node and the sensor nodes are mobile, and we propose several novel improvements to the enabling techniques. Sensor mobility makes existing routing protocols that rely on fixed infrastructure or neighborhood information infeasible. We present an improved location-based stateless implicit geographic forwarding (IGF) protocol for routing a query toward the area specified by the query window, a diameter-based window aggregation query (DWAQ) algorithm for query propagation and data aggregation within the query window, and, to cope with the changing location of the sink node, two schemes for forwarding the result back to the sink. Simulation results show that the proposed algorithms improve query latency and query accuracy.
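
    The aggregation step in such queries typically relies on mergeable partial aggregates, so that an AVG can be combined at relay nodes en route to the sink; a minimal sketch of that building block (not the DWAQ algorithm itself):

        # Nodes ship partial (count, sum) pairs toward the sink and merge
        # them en route; only the sink computes the final AVG.
        from dataclasses import dataclass

        @dataclass
        class Partial:
            count: int = 0
            total: float = 0.0

            def add_reading(self, value: float) -> None:
                self.count += 1
                self.total += value

            def merge(self, other: "Partial") -> None:
                # merging is associative and commutative, so any routing
                # order inside the query window yields the same answer
                self.count += other.count
                self.total += other.total

            def average(self) -> float:
                return self.total / self.count if self.count else float("nan")

        # two sensors inside the query window, merged at a relay node
        a, b = Partial(), Partial()
        a.add_reading(21.5); a.add_reading(22.0)
        b.add_reading(19.8)
        a.merge(b)
        print(a.average())   # 21.1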

  5. Oral health finance and expenditure in South Africa.

    PubMed

    Naidoo, L C; Stephen, L X

    1997-12-01

    The objective of this paper was to examine the cost of oral health in South Africa over the past decade. Particular emphasis was placed on the contribution made by medical schemes, which are the main source of private health care funding; some of the problems facing this huge industry were also briefly explored. Primary aggregate data on oral health expenditure were obtained from the Department of Health, Pretoria, and from the offices of the Registrar of Medical Schemes, Pretoria. The results show that in 1994, 4.7 per cent of the total health care budget was allocated to oral health. Of this amount, 14.2 per cent came from the state, 71.9 per cent from medical schemes, and the remainder was calculated to come from direct out-of-pocket payments. Furthermore, real expenditure on oral health by medical schemes grew robustly and almost continuously from 1984 through to 1994, generally outstripping medical inflation.

  6. A Suboptimal Power-Saving Transmission Scheme in Multiple Component Carrier Networks

    NASA Astrophysics Data System (ADS)

    Chung, Yao-Liang; Tsai, Zsehong

    Power consumption due to transmissions in base stations (BSs) has been a major contributor to communication-related CO2 emissions. In this study, a power optimization model is developed with respect to radio resource allocation and activation in a multiple Component Carrier (CC) environment. We formulate and solve the power-minimization problem of the BS transceivers for multiple-CC networks with carrier aggregation, while maintaining the overall system and respective users' utilities above minimum levels. The optimized power consumption based on this model can be viewed as a lower bound for that of other algorithms employed in practice. A suboptimal scheme with low computational complexity is proposed. Numerical results show that the power consumption of our scheme is much lower than that of the conventional approach in which all CCs are always active, when both schemes maintain the same required utilities.

  7. Secure Cluster Head Sensor Elections Using Signal Strength Estimation and Ordered Transmissions

    PubMed Central

    Wang, Gicheol; Cho, Gihwan

    2009-01-01

    In clustered sensor networks, electing CHs (Cluster Heads) in a secure manner is very important because they collect data from sensors and send the aggregated data to the sink. If a compromised node is elected as a CH, it can illegally acquire data from all the members and even send forged data to the sink. Nevertheless, most existing CH election schemes have not treated the problem of secure CH election. Recently, random-value-based protocols have been proposed to resolve the secure CH election problem. However, these schemes cannot prevent an attacker from suppressing its contribution in order to change the CH election result, or from selectively forwarding its contribution so that nodes disagree on the result. In this paper, we propose a modified random value scheme to prevent these disturbances. Our scheme dynamically adjusts the forwarding order of contributions and discards a received contribution when its signal strength is lower than a specified level. The simulation results show that our scheme effectively prevents attackers from changing and splitting the agreement on the CH election result, and that it is more energy-efficient than other schemes. PMID:22408550
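
    The random-value election idea rests on a commit-then-reveal pattern; a toy sketch of that pattern (the paper's additions, ordered transmissions and signal-strength filtering, are not shown):

        # Each node first broadcasts a hash commitment to its random
        # contribution, then reveals it; the CH index is derived from the
        # XOR of all contributions, so no node can change the outcome
        # after committing.
        import hashlib, secrets

        def commit(value: bytes) -> str:
            return hashlib.sha256(value).hexdigest()

        nodes = {f"n{i}": secrets.token_bytes(16) for i in range(5)}
        commitments = {nid: commit(v) for nid, v in nodes.items()}  # phase 1

        agreed = b"\x00" * 16
        for nid, value in nodes.items():                            # phase 2
            assert commit(value) == commitments[nid], f"{nid} cheated"
            agreed = bytes(x ^ y for x, y in zip(agreed, value))

        cluster_head = sorted(nodes)[int.from_bytes(agreed, "big") % len(nodes)]
        print("elected CH:", cluster_head)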

  8. A data-driven weighting scheme for multivariate phenotypic endpoints recapitulates zebrafish developmental cascades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guozhu, E-mail: gzhang6@ncsu.edu

    Zebrafish have become a key alternative model for studying health effects of environmental stressors, partly due to their genetic similarity to humans, fast generation time, and the efficiency of generating high-dimensional systematic data. Studies aiming to characterize adverse health effects in zebrafish typically include several phenotypic measurements (endpoints). While there is a solid biomedical basis for capturing a comprehensive set of endpoints, making summary judgments regarding health effects requires thoughtful integration across endpoints. Here, we introduce a Bayesian method to quantify the informativeness of 17 distinct zebrafish endpoints as a data-driven weighting scheme for a multi-endpoint summary measure, called weighted Aggregate Entropy (wAggE). We implement wAggE using high-throughput screening (HTS) data from zebrafish exposed to five concentrations of all 1060 ToxCast chemicals. Our results show that our empirical weighting scheme provides better performance in terms of the Receiver Operating Characteristic (ROC) curve for identifying significant morphological effects and improves robustness over traditional curve-fitting approaches. From a biological perspective, our results suggest that developmental cascade effects triggered by chemical exposure can be recapitulated by analyzing the relationships among endpoints. Thus, wAggE offers a powerful approach for analysis of multivariate phenotypes that can reveal underlying etiological processes.

    Highlights:
    • Introduced a data-driven weighting scheme for multiple phenotypic endpoints.
    • Weighted Aggregate Entropy (wAggE) implies differential importance of endpoints.
    • Endpoint relationships reveal developmental cascade effects triggered by exposure.
    • wAggE is generalizable to multi-endpoint data of different shapes and scales.
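
    A loose sketch of entropy-based endpoint weighting in the spirit, though not the exact Bayesian formulation, of wAggE: endpoints whose hit/no-hit distribution is far from uniform carry more weight in the aggregate score. The data below are synthetic:

        # Weight endpoints by informativeness (1 - binary entropy of the
        # per-endpoint hit rate), then form a weighted aggregate score.
        import numpy as np

        rng = np.random.default_rng(1)
        hits = rng.random((200, 17)) < rng.random(17) * 0.3  # chems x endpoints

        p = hits.mean(axis=0).clip(1e-9, 1 - 1e-9)           # hit rates
        entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        weights = (1.0 - entropy) / (1.0 - entropy).sum()    # informativeness

        agg_score = hits @ weights                           # score per chemical
        print(agg_score[:5])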

  9. Field performance evaluations of Illinois aggregates for subgrade replacement and subbase : phase II.

    DOT National Transportation Integrated Search

    2013-04-01

    The project objective was to validate the results from ICT Project R27-1, which characterized in the : laboratory the strength, stiffness, and deformation behaviors of three different aggregate types : commonly used in Illinois for subgrade replaceme...

  10. An Efficient and Practical Smart Card Based Anonymity Preserving User Authentication Scheme for TMIS using Elliptic Curve Cryptography.

    PubMed

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Kumar, Neeraj

    2015-11-01

    In the last few years, numerous remote user authentication and session key agreement schemes have been put forward for the Telecare Medical Information System (TMIS), in which patients and the medical server exchange medical information over the Internet. We have found that most of these schemes are not usable in practical applications due to known security weaknesses. It is also worth noting that an unrestricted number of patients worldwide log in to the single medical server; the computation and maintenance overhead would therefore be high, and the server may fail to provide services. In this article, we design a medical system architecture and a standard mutual authentication scheme for a single medical server, whereby a patient can securely exchange medical data with the doctor(s) via a trusted central medical server over any insecure network. We then explore the security of the scheme and its resilience to attacks. Moreover, we formally validate the proposed scheme through simulation using the Automated Validation of Internet Security Protocols and Applications (AVISPA) software, whose outcomes confirm that the scheme is protected against active and passive attacks. The performance comparison demonstrates that the proposed scheme has a lower communication cost than existing schemes in the literature, while its computation cost is nearly equal to theirs. The proposed scheme is not only robust against various security attacks, but also provides efficient login, mutual authentication, session key agreement and verification, password update, and password recovery phases.
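
    The ECC ingredient of such schemes is typically an elliptic-curve Diffie-Hellman exchange followed by key derivation; a minimal sketch with the Python cryptography package (the smart-card, password, and login phases of the actual scheme are omitted):

        # ECDH key agreement plus HKDF: both sides derive the same
        # session key without ever transmitting it.
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        patient_priv = ec.generate_private_key(ec.SECP256R1())
        server_priv = ec.generate_private_key(ec.SECP256R1())

        # each side combines its private key with the peer's public key
        shared_p = patient_priv.exchange(ec.ECDH(), server_priv.public_key())
        shared_s = server_priv.exchange(ec.ECDH(), patient_priv.public_key())
        assert shared_p == shared_s

        session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                           salt=None, info=b"tmis-session").derive(shared_p)
        print(session_key.hex())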

  11. Unconditionally secure commitment in position-based quantum cryptography.

    PubMed

    Nadeem, Muhammad

    2014-10-27

    A new commitment scheme based on position verification and non-local quantum correlations is presented here for the first time in the literature. The only credentials for unconditional security are the position of the committer and the non-local correlations generated; the receiver neither has any pre-shared data with the committer nor requires trusted and authenticated quantum/classical channels between himself and the committer. In the proposed scheme, the receiver trusts the commitment only if the scheme itself verifies the position of the committer and validates her commitment through non-local quantum correlations in a single round. The position-based commitment scheme binds the committer to reveal a valid commitment within the allocated time and guarantees that the receiver cannot obtain information about the commitment unless the committer reveals it. The scheme works for the commitment of both bits and qubits and is equally secure against committer and receiver, as well as against any third party who may have an interest in destroying the commitment. Our proposed scheme is unconditionally secure in general and evades the Mayers and Lo-Chau attacks in particular.

  12. Enhancement of the Open National Combustion Code (OpenNCC) and Initial Simulation of Energy Efficient Engine Combustor

    NASA Technical Reports Server (NTRS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2016-01-01

    In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply OpenNCC to model a realistic combustor configuration, the Energy Efficient Engine (E3). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up), achieving good agreement with the analytical/experimental data of the validation tests. In the steady-state E3 cold flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference between the flow fields calculated by the two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and that it predicts a much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could yield different flame-holding predictions and overall flame structures.

  13. Attack and improvements of fair quantum blind signature schemes

    NASA Astrophysics Data System (ADS)

    Zou, Xiangfu; Qiu, Daowen

    2013-06-01

    Blind signature schemes allow users to obtain the signature of a message while the signer learns neither the message nor the resulting signature. Therefore, blind signatures have been used to realize cryptographic protocols providing the anonymity of some participants, such as secure electronic payment systems and electronic voting systems. A fair blind signature is a form of blind signature in which the anonymity can be removed with the help of a trusted entity when this is required for legal reasons. Recently, a fair quantum blind signature scheme was proposed and thought to be safe. In this paper, we first point out that there exists a new attack on fair quantum blind signature schemes. The attack shows that, if any sender has intercepted any valid signature, he or she can counterfeit a valid signature for any message and cannot be traced through the counterfeit blind signature. We then construct a fair quantum blind signature scheme by improving the existing one; the proposed scheme can resist the preceding attack. Furthermore, we demonstrate the security of the proposed fair quantum blind signature scheme and compare it with the other one.

  14. 75 FR 70818 - Traffic Separation Schemes: In the Strait of Juan de Fuca and Its Approaches; in Puget Sound and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-19

    ...-AA48 Traffic Separation Schemes: In the Strait of Juan de Fuca and Its Approaches; in Puget Sound and..., the Coast Guard codifies traffic separation schemes in the Strait of Juan de Fuca and its approaches.... These traffic separation schemes (TSSs) were validated by a Port Access Route Study (PARS) conducted...

  15. Rational design of mutations that change the aggregation rate of a protein while maintaining its native structure and stability

    NASA Astrophysics Data System (ADS)

    Camilloni, Carlo; Sala, Benedetta Maria; Sormanni, Pietro; Porcari, Riccardo; Corazza, Alessandra; De Rosa, Matteo; Zanini, Stefano; Barbiroli, Alberto; Esposito, Gennaro; Bolognesi, Martino; Bellotti, Vittorio; Vendruscolo, Michele; Ricagno, Stefano

    2016-05-01

    A wide range of human diseases is associated with mutations that destabilize the native state of proteins and thereby promote their aggregation. However, the mechanisms leading from folded to aggregated states are still incompletely understood. To investigate these mechanisms, we used a combination of NMR spectroscopy and molecular dynamics simulations to compare the native-state dynamics of beta-2 microglobulin (β2m), whose aggregation is associated with dialysis-related amyloidosis, and its aggregation-resistant mutant W60G. Our results indicate that the low aggregation propensity of W60G can be explained, beyond its higher stability, by an increased average protection of the aggregation-prone residues at its surface. To validate these findings, we designed β2m variants that alter the aggregation-prone exposed surface of wild-type and W60G β2m, modifying their aggregation propensity. These results allowed us to pinpoint the role of dynamics in β2m aggregation and to provide a new strategy to tune protein aggregation by modulating the exposure of aggregation-prone residues.

  16. The analytical $\mathscr{O}(a_s^4)$ expression for the polarized Bjorken sum rule in the miniMOM scheme and the consequences for the generalized Crewther relation

    NASA Astrophysics Data System (ADS)

    Kataev, A. L.; Molokoedov, V. S.

    2017-12-01

    The analytical $\mathscr{O}(a_s^4)$ perturbative QCD expression for the flavour non-singlet contribution to the Bjorken polarized sum rule is obtained in the gauge-dependent miniMOM scheme, which is widely used at present. For the three values of the gauge parameter considered, namely ξ = 0 (Landau gauge), ξ = -1 (anti-Feynman gauge) and ξ = -3 (Stefanis-Mikhailov gauge), the scheme-dependent coefficients are considerably smaller than the gauge-independent $\overline{\mathrm{MS}}$ results. It is found that the fundamental property of the factorization of the QCD renormalization group β-function in the generalized Crewther relation, which is valid in the gauge-invariant $\overline{\mathrm{MS}}$ scheme up to the $\mathscr{O}(a_s^4)$ level at least, unexpectedly holds at the same level in the miniMOM scheme for ξ = 0, and in part for ξ = -1 and ξ = -3.

  17. Dietary Screener in the 2009 CHIS: Validation

    Cancer.gov

    In the Eating at America's Table Study and the Observing Protein and Energy Nutrition Study, Risk Factors Branch staff assessed the validity of aggregate variables created from the 2009 CHIS Dietary Screener.

  18. Validation of an aggregate exposure model for substances in consumer products: a case study of diethyl phthalate in personal care products

    PubMed Central

    Delmaar, Christiaan; Bokkers, Bas; ter Burg, Wouter; Schuur, Gerlienke

    2015-01-01

    As personal care products (PCPs) are used in close contact with a person, they are a major source of consumer exposure to chemical substances contained in these products. The estimation of realistic consumer exposure to substances in PCPs is currently hampered by the lack of appropriate data and methods. To estimate aggregate exposure of consumers to substances contained in PCPs, a person-oriented consumer exposure model has been developed (the Probabilistic Aggregate Consumer Exposure Model, PACEM). The model simulates daily exposure in a population based on product use data collected from a survey among the Dutch population. The model is validated by comparing diethyl phthalate (DEP) dose estimates to dose estimates based on biomonitoring data. It was found that the model's estimates compared well with the estimates based on biomonitoring data. This suggests that the person-oriented PACEM model is a practical tool for assessing realistic aggregate exposures to substances in PCPs. In the future, PACEM will be extended with use pattern data on other product groups. This will allow for assessing aggregate exposure to substances in consumer products across different product groups. PMID:25352161
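
    A person-oriented Monte Carlo sketch in the spirit of PACEM: simulate individuals with their own product-use patterns and sum per-product doses into an aggregate daily dose. All use probabilities and dose parameters below are invented:

        # Aggregate exposure = sum over products of (uses product?) x dose.
        import numpy as np

        rng = np.random.default_rng(42)
        n_people = 10_000
        products = {             # (use probability, median dose ug/kg bw/day)
            "deodorant": (0.7, 1.2),
            "body lotion": (0.5, 2.0),
            "perfume": (0.3, 0.8),
        }

        aggregate_dose = np.zeros(n_people)
        for prob_use, median_dose in products.values():
            users = rng.random(n_people) < prob_use       # who uses it today
            dose = rng.lognormal(np.log(median_dose), 0.5, n_people)
            aggregate_dose += users * dose

        print(f"median {np.median(aggregate_dose):.2f}, "
              f"p95 {np.percentile(aggregate_dose, 95):.2f} ug/kg bw/day")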

  19. Principles for problem aggregation and assignment in medium scale multiprocessors

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
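
    A toy sketch of parameterized aggregation with uniform assignment: fine-grained units are grouped into aggregates of tunable granularity g and dealt cyclically to processors, so that local workload spikes are spread out; g trades balance against communication cost:

        # Coarser g lowers communication/synchronization cost; finer g
        # balances an unknown workload better.
        def map_workload(n_units, g, n_procs):
            aggregates = [list(range(i, min(i + g, n_units)))
                          for i in range(0, n_units, g)]
            assignment = {p: [] for p in range(n_procs)}
            for k, agg in enumerate(aggregates):   # cyclic (uniform) dealing
                assignment[k % n_procs].extend(agg)
            return assignment

        print({p: len(u) for p, u in map_workload(103, g=8, n_procs=4).items()})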

  20. Simulations of forest mortality in Colorado River basin

    NASA Astrophysics Data System (ADS)

    Wei, L.; Xu, C.; Johnson, D. J.; Zhou, H.; McDowell, N.

    2017-12-01

    The Colorado River Basin (CRB) has experienced multiple severe forest mortality events under the recent changing climate. Such forest mortality events may have great impacts on ecosystem services and the water budget of the watershed. It is hence important to estimate and predict forest mortality in the CRB under climate change. We simulated forest mortality in the CRB with a model of plant hydraulics within FATES (the Functionally Assembled Terrestrial Ecosystem Simulator) coupled to the DOE Earth System model (ACME: Accelerated Climate Model of Energy) at a 0.5 x 0.5 degree resolution. Moreover, we incorporated a stable carbon isotope (δ13C) module into ACME(FATES) and used it as a new predictor of forest mortality. The δ13C values of plants with the C3 photosynthetic pathway (almost all trees are C3 plants) indicate the water stress the plants experience: the more intensive the stress, the less negative the δ13C value. We set a δ13C threshold in the model simulation, above which forest mortality initiates. We validate the mortality simulations against field observations from the Forest Inventory and Analysis (FIA) program, aggregated to the same spatial resolution as the model simulations. Different mortality schemes in the model (carbon starvation, hydraulic failure, and δ13C) were tested and compared. Each scheme demonstrated its strengths, and the plant hydraulics module provided more reliable simulations of forest mortality than the earlier ACME(FATES) version. Further testing is required for better forest mortality modelling.

  1. Dual-phase steel sheets under cyclic tension-compression to large strains: Experiments and crystal plasticity modeling

    NASA Astrophysics Data System (ADS)

    Zecevic, Milovan; Korkolis, Yannis P.; Kuwabara, Toshihiko; Knezevic, Marko

    2016-11-01

    In this work, we develop a physically-based crystal plasticity model for the prediction of cyclic tension-compression deformation of multi-phase materials, specifically dual-phase (DP) steels. The model is elasto-plastic in nature and integrates a hardening law based on statistically stored dislocation density, localized hardening due to geometrically necessary dislocations (GNDs), slip-system-level kinematic backstresses, and annihilation of dislocations. The model further features a two-level homogenization scheme, where the first level is the overall response of a two-phase polycrystalline aggregate and the second level is the homogenized response of the martensite polycrystalline regions. The model is applied to simulate the cyclic tension-compression-tension deformation behavior of DP590 steel sheets. From experiments, we observe that the material exhibits a typical decreasing hardening rate during forward loading, followed by a linear and then a non-linear unloading upon load reversal, the Bauschinger effect, and changes in hardening rate during strain reversals. To predict these effects, we identify the model parameters using a portion of the measured data and validate and verify them using the remaining data. The developed model is capable of predicting all the particular features of the cyclic deformation of DP590 steel with great accuracy. From the predictions, we infer and discuss the effects of the GNDs, the backstresses, dislocation annihilation, and the two-level homogenization scheme on capturing the cyclic deformation behavior of the material.
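
    One ingredient the abstract names, a slip-system-level kinematic backstress, is commonly evolved with an Armstrong-Frederick-type law; a generic sketch with illustrative constants (not the DP590 calibration):

        # One explicit Armstrong-Frederick step per slip increment:
        #   chi' = chi + h*dgamma - r*chi*|dgamma|
        # The dynamic-recovery term makes chi saturate and reverse sign on
        # load reversal, which is what produces the Bauschinger effect.
        def update_backstress(chi, dgamma, h=50.0, r=10.0):
            return chi + h * dgamma - r * chi * abs(dgamma)

        chi = 0.0
        # forward shear then reversal on one slip system (toy increments)
        for dg in [0.001] * 200 + [-0.001] * 200:
            chi = update_backstress(chi, dg)
        print(f"backstress after reversal: {chi:.3f}")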

  2. PAVS: A New Privacy-Preserving Data Aggregation Scheme for Vehicle Sensing Systems.

    PubMed

    Xu, Chang; Lu, Rongxing; Wang, Huaxiong; Zhu, Liehuang; Huang, Cheng

    2017-03-03

    Air pollution has become one of the most pressing environmental issues in recent years. According to a World Health Organization (WHO) report, air pollution has led to the deaths of millions of people worldwide. Accordingly, expensive and complex air-monitoring instruments have been exploited to measure air pollution. Comparatively, a vehicle sensing system (VSS), which can be used effectively for many purposes and can bring huge financial benefits by reducing high maintenance and repair costs, has received considerable attention. However, the privacy issues of VSS, including vehicles' location privacy, have not been well addressed. Therefore, in this paper, we propose a new privacy-preserving data aggregation scheme, called PAVS, for VSS. Specifically, PAVS combines privacy-preserving classification and privacy-preserving statistics on both the mean E(·) and variance Var(·), which makes VSS more promising, as, with minimal privacy leakage, more vehicles are willing to participate in sensing. Detailed analysis shows that the proposed PAVS can achieve the properties of privacy preservation, data accuracy and scalability. In addition, performance evaluations via extensive simulations also demonstrate its efficiency.
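
    One common building block for this kind of privacy-preserving aggregation (not necessarily the exact PAVS construction) is pairwise additive masking, where masks cancel in the sum so the aggregator learns only the total; a toy sketch:

        # Vehicles add pairwise random masks that cancel in the sum, so the
        # aggregator recovers the total (hence the mean) but never sees an
        # individual reading.
        import random

        readings = [42.0, 37.5, 51.2]           # private per-vehicle values
        n = len(readings)

        # m[i][j] is a mask shared between vehicles i and j
        m = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                m[i][j] = random.uniform(-100, 100)
                m[j][i] = -m[i][j]              # antisymmetric: cancels in sum

        masked = [readings[i] + sum(m[i]) for i in range(n)]
        total = sum(masked)                     # masks cancel exactly
        print(f"mean = {total / n:.2f}")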

  3. Using Human iPSC-Derived Neurons to Model TAU Aggregation

    PubMed Central

    Verheyen, An; Diels, Annick; Dijkmans, Joyce; Oyelami, Tutu; Meneghello, Giulia; Mertens, Liesbeth; Versweyveld, Sofie; Borgers, Marianne; Buist, Arjan; Peeters, Pieter; Cik, Miroslav

    2015-01-01

    Alzheimer's disease and frontotemporal dementia are amongst the most common forms of dementia, characterized by the formation and deposition of abnormal TAU in the brain. In order to develop a translational human TAU aggregation model suitable for screening, we transduced TAU harboring the pro-aggregating P301L mutation into control hiPSC-derived neural progenitor cells, followed by differentiation into cortical neurons. TAU aggregation and phosphorylation were quantified using AlphaLISA technology. Although no spontaneous aggregation was observed upon expressing TAU-P301L in neurons, seeding with preformed aggregates consisting of the TAU microtubule-binding repeat domain triggered robust TAU aggregation and hyperphosphorylation after only 2 weeks, without affecting general cell health. To validate our model, the activity of two autophagy inducers was tested. Both rapamycin and trehalose significantly reduced TAU aggregation levels, suggesting that iPSC-derived neurons allow for the generation of a biologically relevant human tauopathy model, highly suitable for screening for compounds that modulate TAU aggregation. PMID:26720731

  4. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
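
    A quick numerical illustration of the contrast drawn above, with an arbitrary power-law scale distribution rather than the paper's exact construction: deterministically scaled i.i.d. sums look Gaussian, while randomly scaled sums grow heavy tails:

        # Compare kurtosis of sums under deterministic vs. random scaling.
        import numpy as np

        rng = np.random.default_rng(0)
        n, trials = 1000, 20_000
        x = rng.standard_normal((trials, n))

        det = x.sum(axis=1) / np.sqrt(n)            # classic CLT scaling
        s = rng.pareto(1.5, (trials, n)) + 1.0      # random power-law scales
        rand = (s * x).sum(axis=1) / np.sqrt(n)

        def kurtosis(z):
            return ((z - z.mean()) ** 4).mean() / z.var() ** 2

        print("deterministic:", kurtosis(det))      # close to 3 (Gaussian)
        print("randomized:  ", kurtosis(rand))      # much larger: heavy tails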

  5. A back-fitting algorithm to improve real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan

    2018-07-01

    Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the parameters obtained are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate the hydrological and AR model parameters. The performance of the five schemes is compared in a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods, and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
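
    A toy end-to-end sketch of the scheme V idea, with a one-parameter linear model standing in for the hydrological model: alternate between refitting the model on AR-corrected targets and refitting the AR(1) error coefficient:

        # Back-fitting: alternately recalibrate the "hydrological" parameter
        # theta and the AR(1) error coefficient phi on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        rain = rng.gamma(2.0, 5.0, 300)                   # forcing
        e_true = np.zeros(300)
        for t in range(1, 300):                           # AR(1) errors
            e_true[t] = 0.6 * e_true[t - 1] + rng.normal(0, 1)
        q_obs = 2.5 * rain + e_true                       # "observed" flow

        def hydro(theta):                                 # toy stand-in model
            return theta * rain

        theta, phi = 1.0, 0.0
        for _ in range(10):                               # back-fitting loop
            # step 1: refit theta on targets with the AR part removed
            target = q_obs.copy()
            target[1:] -= phi * (q_obs[:-1] - hydro(theta)[:-1])
            theta = (rain @ target) / (rain @ rain)
            # step 2: refit the AR(1) coefficient on the new residuals
            e = q_obs - hydro(theta)
            phi = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])
        print(f"theta = {theta:.2f}, phi = {phi:.2f}")    # near 2.5 and 0.6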

  6. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    DTIC Science & Technology

    2016-01-01

    Moulder, William F.; Krieger, James D.; Maurais-Galejs, Denise T.; Huy...

    ...described and validated experimentally with the formation of high quality microwave images. It is further shown that the scheme is more than two orders of... ...scheme (wherein transmitters and receivers are co-located) which require NTNR transmit-receive elements to achieve the same sampling. The second...

  7. Spectral cumulus parameterization based on cloud-resolving model

    NASA Astrophysics Data System (ADS)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more dilute convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed and the results compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified entrainment-rate parameterization, which suppresses an excessive increase of entrainment and thus an excessive increase of low-level clouds.

  8. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern; for example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order for design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description is given of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

  9. Reconciling global mammal prioritization schemes into a strategy.

    PubMed

    Rondinini, Carlo; Boitani, Luigi; Rodrigues, Ana S L; Brooks, Thomas M; Pressey, Robert L; Visconti, Piero; Baillie, Jonathan E M; Baisero, Daniele; Cabeza, Mar; Crooks, Kevin R; Di Marco, Moreno; Redford, Kent H; Andelman, Sandy A; Hoffmann, Michael; Maiorano, Luigi; Stuart, Simon N; Wilson, Kerrie A

    2011-09-27

    The huge conservation interest that mammals attract and the large datasets that have been collected on them have propelled a diversity of global mammal prioritization schemes, but no comprehensive global mammal conservation strategy. We highlight some of the potential discrepancies between the schemes presented in this theme issue, including: conservation of species or areas, reactive versus proactive conservation approaches, conservation knowledge versus action, levels of aggregation of indicators of trend, and issues of scale. We propose that recently collected global mammal data and many of the mammal prioritization schemes now available could be incorporated into a comprehensive global strategy for the conservation of mammals. The task of developing such a strategy should be coordinated by a super-partes, authoritative institution (e.g. the International Union for Conservation of Nature, IUCN). The strategy would make it easier for funding agencies, conservation organizations and national institutions to rapidly identify a number of short-term and long-term global conservation priorities, and to act complementarily in achieving them.

  10. Dyadic coping in Latino couples: validity of the Spanish version of the Dyadic Coping Inventory.

    PubMed

    Falconier, Mariana Karin; Nussbeck, Fridtjof; Bodenmann, Guy

    2013-01-01

    This study seeks to validate the Spanish version of the Dyadic Coping Inventory (DCI) in a Latino population, with data from 113 heterosexual couples. Results for both partners confirm the factorial structure of the Spanish version (subscales: Stress Communication; Emotion- and Problem-Focused Supportive, Delegated, and Negative Dyadic Coping; Emotion- and Problem-Focused Common Dyadic Coping; and Evaluation of Dyadic Coping; aggregated scales: Dyadic Coping by Oneself and by Partner) and support the discriminant validity of its subscales as well as the concurrent and criterion validity of the subscales and aggregated scales. These results not only indicate that the Spanish version of the DCI can be used reliably as a measure of coping in Spanish-speaking Latino couples, but also suggest that this group relies on dyadic coping frequently and that this type of coping is associated with positive relationship functioning and individual coping. Limitations and implications are discussed.

  11. A rapid boundary integral equation technique for protein electrostatics

    NASA Astrophysics Data System (ADS)

    Grandison, Scott; Penfold, Robert; Vanden-Broeck, Jean-Marc

    2007-06-01

    A new boundary integral formulation is proposed for the solution of electrostatic field problems involving piecewise uniform dielectric continua. Direct Coulomb contributions to the total potential are treated exactly and Green's theorem is applied only to the residual reaction field generated by surface polarisation charge induced at dielectric boundaries. The implementation shows significantly improved numerical stability over alternative schemes involving the total field or its surface normal derivatives. Although strictly respecting the electrostatic boundary conditions, the partitioned scheme does introduce a jump artefact at the interface. Comparison against analytic results in canonical geometries, however, demonstrates that simple interpolation near the boundary is a cheap and effective way to circumvent this characteristic in typical applications. The new scheme is tested in a naive model to successfully predict the ground state orientation of biomolecular aggregates comprising the soybean storage protein, glycinin.

  12. Aggregate Timber Supply: From the Forest to the Market

    Treesearch

    David N. Wear; Subhrendu K. Pattanayak

    2003-01-01

    Timber supply modeling is a means of formalizing the production behavior of heterogeneous landowners managing a wide variety of forest types and vintages within a region. The critical challenge of timber supply modeling is constructing theoretically valid and empirically practical aggregate descriptions of harvest behavior. Understanding timber supply is essential for...

  13. A Paradigm Regained: Conflict Perspective on Language Use in Bilingual Educational and Social Contexts.

    ERIC Educational Resources Information Center

    Williams, Eddie

    The validity of the consensus paradigm dominant in sociolinguistics is questioned. Social scientists working in this paradigm take the perspective of society as an aggregate operating through agreement between its constituent elements, working to the benefit of the aggregate. The best-known of the consensus-oriented theories is…

  14. Aggregate Size Dependence of Amyloid Adsorption onto Charged Interfaces

    PubMed Central

    2017-01-01

    Amyloid aggregates are associated with a range of human neurodegenerative disorders, and it has been shown that neurotoxicity is dependent on aggregate size. Combining molecular simulation with analytical theory, a predictive model is proposed for the adsorption of amyloid aggregates onto oppositely charged surfaces, where the interaction is governed by an interplay between electrostatic attraction and entropic repulsion. Predictions are experimentally validated against quartz crystal microbalance–dissipation experiments of amyloid beta peptides and fragmented fibrils in the presence of a supported lipid bilayer. Assuming amyloids as rigid, elongated particles, we observe nonmonotonic trends for the extent of adsorption with respect to aggregate size and preferential adsorption of smaller aggregates over larger ones. Our findings describe a general phenomenon with implications for stiff polyions and rodlike particles that are electrostatically attracted to a surface. PMID:29284092

  15. 17 CFR 240.13d-1 - Filing of Schedules 13D and 13G.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... regulatory scheme applicable to the equivalent U.S. institution; and (K) A group, provided that all the... influencing the control of the issuer, nor in connection with or as a participant in any transaction having... or control person, provided the aggregate amount held directly by the parent or control person, and...

  16. 17 CFR 240.13d-1 - Filing of Schedules 13D and 13G.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... regulatory scheme applicable to the equivalent U.S. institution; and (K) A group, provided that all the... control of the issuer, nor in connection with or as a participant in any transaction having such purpose... or control person, provided the aggregate amount held directly by the parent or control person, and...

  17. Modeling variability and scale integration of LAI measurements

    Treesearch

    Kris Nackaerts; Pol Coppin

    2000-01-01

    Rapid and reliable estimation of leaf area at various scales is important for research on change detection of leaf area index (LAI) as an indicator of ecosystem condition. It is of utmost importance to know to what extent boundary and illumination conditions, data aggregation method, and sampling scheme influence the relative accuracy of stand-level LAI measurements....

  18. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that the effect of uncertainties associated with the geostatistical parameters on the spatial prediction might be significantly alleviated (by up to 80% of the prior uncertainty in K and by 90% of the prior uncertainty in H) by sampling evenly distributed measurements with a spatial measurement density of more than 1 observation per 60 m × 60 m grid block. In addition, exploration of the interaction of objective functions indicates that the ability of head measurements to reduce the uncertainty associated with the correlation scale is comparable to the effect of hydraulic conductivity measurements.
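
    The MOEA side of such a framework reduces, at each generation, to identifying non-dominated designs; a small self-contained helper for that step, assuming all objectives (e.g., prediction variances and sampling cost) are minimized:

        # Return indices of Pareto-optimal rows of F (all objectives
        # minimized): a row is kept if no other row is at least as good on
        # every objective and strictly better on one.
        import numpy as np

        def pareto_front(F):
            keep = np.ones(len(F), dtype=bool)
            for i in range(len(F)):
                if keep[i]:
                    worse = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
                    keep[worse] = False
            return np.flatnonzero(keep)

        # columns: var(K) prediction, var(H) prediction, sampling cost
        F = np.array([[0.9, 0.8, 10.0],
                      [0.5, 0.9, 25.0],
                      [0.6, 0.7, 25.0],
                      [0.5, 0.9, 30.0]])
        print(pareto_front(F))   # [0 1 2]: design 3 is dominated by design 1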

  19. The Epidemiology of Modern Test Score Use: Anticipating Aggregation, Adjustment, and Equating

    ERIC Educational Resources Information Center

    Ho, Andrew

    2013-01-01

    In his thoughtful focus article, Haertel (this issue) pushes testing experts to broaden the scope of their validation efforts and to invite scholars from other disciplines to join them. He credits existing validation frameworks for helping the measurement community to identify incomplete or nonexistent validity arguments. However, he notes his…

  20. GEOMETRIC CROSS SECTIONS OF DUST AGGREGATES AND A COMPRESSION MODEL FOR AGGREGATE COLLISIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suyama, Toru; Wada, Koji; Tanaka, Hidekazu

    2012-07-10

    Geometric cross sections of dust aggregates determine their coupling with disk gas, which governs their motions in protoplanetary disks. Collisional outcomes also depend on the geometric cross sections of the initial aggregates. In a previous paper, we performed three-dimensional N-body simulations of sequential collisions of aggregates composed of a number of sub-micron-sized icy particles and examined the radii of gyration (and bulk densities) of the obtained aggregates. We showed that collisional compression of aggregates is not efficient and that aggregates remain fluffy. In the present study, we examine the geometric cross sections of the aggregates. Their cross sections, as well as their gyration radii, decrease due to compression. It is found that a relation between the cross section and the gyration radius proposed by Okuzumi et al. is valid for the compressed aggregates. We also refine the compression model proposed in our previous paper. The refined model enables us to calculate the evolution of both the gyration radii and the cross sections of growing aggregates and reproduces well our numerical results of sequential aggregate collisions. The refined model can describe non-equal-mass collisions as well as equal-mass cases. Although we do not take into account oblique collisions in the present study, oblique collisions would further hinder the compression of aggregates.

  1. Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films

    PubMed Central

    Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.

    2014-01-01

    When starved, a swarm of millions of Myxococcus xanthus cells coordinates its movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome-shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model of aggregation that accurately simulates which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction fundamental to this model by tracking individual fluorescent cells as they moved between aggregates, demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319

  2. AggModel: A soil organic matter model with measurable pools for use in incubation studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segoli, Moran; De Gryze, S.; Dou, Fugen

    2013-01-01

    Current soil organic matter (SOM) models are empirical in nature by employing few conceptual SOM pools that have a specific turnover time, but that are not measurable and have no direct relationship with soil structural properties. Most soil particles are held together in aggregates and the number, size and stability of these aggregates significantly affect the size and amount of organic matter contained in these aggregates, and its susceptibility to decomposition. While it has been shown that soil aggregates and their dynamics can be measured directly in the laboratory and in the field, the impact of soil aggregate dynamics on SOM decomposition has not been explicitly incorporated in ecosystem models. Here, we present AggModel, a conceptual and simulation model that integrates soil aggregate and SOM dynamics. In AggModel, we consider unaggregated and microaggregated soil that can exist within or external to macroaggregated soil. Each of the four aggregate size classes contains particulate organic matter and mineral-associated organic matter fractions. We used published data from laboratory incubations to calibrate and validate the biological and environmental effects on the rate of formation and breakdown of macroaggregates and microaggregates, and the organic matter dynamics within these different aggregate fractions. After calibration, AggModel explained more than 70% of the variation in aggregate masses and over 90% of the variation in aggregate-associated carbon. The model estimated the turnover time of macroaggregates as 32 days and 166 days for microaggregates. Sensitivity analysis of AggModel parameterization supported the notion that macroaggregate turnover rate has a strong control over microaggregate masses and, hence, carbon sequestration. In addition to AggModel being a proof-of-concept, the advantage of a model that is based on measurable SOM fractions is that its internal structure and dynamics can be directly calibrated and validated by using experimental data. In conclusion, AggModel successfully incorporates the explicit representation for the turnover of soil aggregates and their influence on SOM dynamics and can form the basis for new SOM modules within existing ecosystem models.
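
    A toy reduction of the aggregate dynamics described above to two exchanging first-order pools, using the fitted turnover times; the real AggModel tracks four aggregate classes plus their organic matter fractions:

        # Two first-order pools exchanging mass: macroaggregate breakdown at
        # rate 1/32 per day, microaggregate-to-macroaggregate formation at
        # rate 1/166 per day (the quoted turnover times).
        dt = 1.0                              # day
        macro, micro = 0.5, 0.5               # relative soil mass fractions
        k_break = 1 / 32.0
        k_form = 1 / 166.0
        for day in range(2 * 365):
            f_break = k_break * macro * dt    # macro -> micro
            f_form = k_form * micro * dt      # micro -> macro
            macro += f_form - f_break
            micro += f_break - f_form
        # steady state: macro fraction = k_form / (k_form + k_break) ~ 0.16
        print(f"steady state: macro={macro:.2f}, micro={micro:.2f}")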

  3. Understanding crowd-powered search groups: a social network perspective.

    PubMed

    Zhang, Qingpeng; Wang, Fei-Yue; Zeng, Daniel; Wang, Tao

    2012-01-01

    Crowd-powered search is a new form of search and problem-solving scheme that involves collaboration among a potentially large number of voluntary Web users. Human flesh search (HFS), a particular form of crowd-powered search that originated in China, has seen tremendous growth since its inception in 2001. HFS presents a valuable test-bed for scientists to validate existing and new theories in social computing, sociology, behavioral sciences, and so forth. In this research, we construct an aggregated HFS group, consisting of the participants and their relationships in a comprehensive set of identified HFS episodes. We study the topological properties and the evolution of the aggregated network and of different sub-groups in the network. We also identify the key HFS participants according to a variety of measures. We found that, compared with other online social networks, the HFS participant network shares the power-law degree distribution and small-world property, but has a looser and more distributed organizational structure, leading to the diversity, decentralization, and independence of HFS participants. In addition, the HFS group has become increasingly decentralized. Comparisons of different HFS sub-groups reveal that HFS participants collaborated more often when they conducted searches on local platforms or searches requiring a certain level of professional background knowledge. On the contrary, HFS participants did not collaborate much when they performed search tasks on national platforms or searches on general topics that did not require specific information and learning. We also observed that the key HFS information contributors, carriers, and transmitters came from different groups of HFS participants.

  4. [Succession caused by beaver (Castor fiber L.) life activity: I. What is learnt from the calibration of a simple Markov model].

    PubMed

    Logofet, D O; Evstigneev, O I; Aleĭnikov, A A; Morozova, A O

    2014-01-01

    A homogeneous Markov chain of three aggregated states, "pond-swamp-wood", is proposed as a model of cyclic zoogenic successions caused by beaver (Castor fiber L.) life activity in a forest biogeocoenosis. To calibrate the chain transition matrix, the data gained from field studies undertaken in the "Bryanskii Les" Reserve in 2002-2008 proved sufficient. Major outcomes of the calibrated model ensue from the formulae of finite homogeneous Markov chain theory: the stationary probability distribution of states, the matrix (T) of mean first passage times, and the mean durations (M(j)) of succession stages. The former illustrates the distribution of relative areas under succession stages if the current trends and transition rates of succession are conserved in the long term; it has appeared close to the observed distribution. Matrix T provides quantitative characteristics of the cyclic process, specifying the ranges the experts proposed for the duration of stages in the conceptual scheme of succession. The calculated values of M(j) detect potential discrepancies between empirical data, the expert knowledge that summarizes the data, and the postulates accepted in the mathematical model. The calculated M2 value falls outside the expert range, which gives a reason to doubt the validity of the expert estimation proposed, the aggregation mode chosen for chain states, and/or the accuracy of the data available, i.e., to draw certain "lessons" from the partially successful calibration. Refusal to postulate the time homogeneity or the Markov property of the chain is also discussed among possible ways to improve the model.
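
    The quantities named above are directly computable for any calibrated chain. A hedged sketch in Python: the transition matrix is invented for illustration, not the calibrated "Bryanskii Les" matrix; the code returns the stationary distribution and the matrix T of mean first passage times.

      import numpy as np

      # Illustrative 3-state chain (states: pond, swamp, wood); rows sum to 1.
      P = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1],
                    [0.05, 0.05, 0.9]])

      # Stationary distribution: left eigenvector of P for eigenvalue 1.
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmax(np.real(w))])
      pi = pi / pi.sum()

      # Mean first passage times T[i, j]: solve (I - Q) m = 1, where Q is P
      # with row and column j removed; the diagonal is the mean recurrence time.
      n = P.shape[0]
      T = np.zeros((n, n))
      for j in range(n):
          idx = [i for i in range(n) if i != j]
          Q = P[np.ix_(idx, idx)]
          m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
          for row, i in enumerate(idx):
              T[i, j] = m[row]
          T[j, j] = 1.0 / pi[j]
      print(pi)
      print(T)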

  5. Using the Johns Hopkins' Aggregated Diagnosis Groups (ADGs) to predict 1-year mortality in population-based cohorts of patients with diabetes in Ontario, Canada.

    PubMed

    Austin, P C; Shah, B R; Newman, A; Anderson, G M

    2012-09-01

    There are limited validated methods to ascertain comorbidities for risk adjustment in ambulatory populations of patients with diabetes using administrative health-care databases. The objective was to examine the ability of the Johns Hopkins' Aggregated Diagnosis Groups to predict mortality in population-based ambulatory samples of both incident and prevalent subjects with diabetes. Retrospective cohorts were constructed using population-based administrative data. The incident cohort consisted of all 346,297 subjects diagnosed with diabetes between 1 April 2004 and 31 March 2008. The prevalent cohort consisted of all 879,849 subjects with pre-existing diabetes on 1 January 2007. The outcome was death within 1 year of the subject's index date. A logistic regression model consisting of age, sex and indicator variables for 22 of the 32 Johns Hopkins' Aggregated Diagnosis Group categories had excellent discrimination for predicting mortality in incident diabetes patients: the c-statistic was 0.87 in an independent validation sample. A similar model had excellent discrimination for predicting mortality in prevalent diabetes patients: the c-statistic was 0.84 in an independent validation sample. Both models demonstrated very good calibration, denoting good agreement between observed and predicted mortality across the range of predicted mortality in which the large majority of subjects lay. For comparative purposes, regression models incorporating (i) the Charlson comorbidity index, age and sex, (ii) age and sex, and (iii) age alone had poorer discrimination than the model that incorporated the Johns Hopkins' Aggregated Diagnosis Groups. Logistic regression models using age, sex and the Johns Hopkins' Aggregated Diagnosis Groups were able to accurately predict 1-year mortality in population-based samples of patients with diabetes. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
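
    A hedged sketch of the validation logic described above: a logistic regression on age, sex and binary comorbidity-group indicators, scored by the c-statistic (area under the ROC curve) on a held-out sample. The data are simulated stand-ins, with 22 indicator columns playing the role of the ADG categories; nothing here reproduces the Ontario cohorts.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000
      age = rng.uniform(30, 90, n)
      sex = rng.integers(0, 2, n)
      adg = rng.integers(0, 2, (n, 22))                 # 22 assumed comorbidity indicators
      logit = -8 + 0.06 * age + 0.2 * sex + adg @ rng.normal(0.15, 0.1, 22)
      death = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated 1-year mortality

      X = np.column_stack([age, sex, adg])
      X_tr, X_va, y_tr, y_va = train_test_split(X, death, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      # The c-statistic is the AUC evaluated on the independent validation sample.
      print("c-statistic:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))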

  6. Validation of the use of synthetic imagery for camouflage effectiveness assessment

    NASA Astrophysics Data System (ADS)

    Newman, Sarah; Gilmore, Marilyn A.; Moorhead, Ian R.; Filbee, David R.

    2002-08-01

    CAMEO-SIM was developed as a laboratory method to assess the effectiveness of aircraft camouflage schemes. It is a physically accurate synthetic image generator, rendering in any waveband between 0.4 and 14 microns. Camouflage schemes are assessed by displaying imagery to observers under controlled laboratory conditions or by analyzing the digital image and calculating the contrast statistics between the target and background. Code verification has taken place during development. However, validation of CAMEO-SIM is essential to ensure that the imagery produced is suitable to be used for camouflage effectiveness assessment. Real world characteristics are inherently variable, so exact pixel to pixel correlation is unnecessary. For camouflage effectiveness assessment it is more important to be confident that the comparative effects of different schemes are correct, but prediction of detection ranges is also desirable. Several different tests have been undertaken to validate CAMEO-SIM for the purpose of assessing camouflage effectiveness. Simple scenes have been modeled and measured. Thermal and visual properties of the synthetic and real scenes have been compared. This paper describes the validation tests and discusses the suitability of CAMEO-SIM for camouflage assessment.

  7. The Development and Validation of a New Land Surface Model for Regional and Global Climate Modeling

    NASA Astrophysics Data System (ADS)

    Lynch-Stieglitz, Marc

    1995-11-01

    A new land-surface scheme intended for use in mesoscale and global climate models has been developed and validated. The ground scheme consists of 6 soil layers. Diffusion and a modified tipping bucket model govern heat and water flow respectively. A 3 layer snow model has been incorporated into a modified BEST vegetation scheme. TOPMODEL equations and Digital Elevation Model data are used to generate baseflow which supports lowland saturated zones. Soil moisture heterogeneity represented by saturated lowlands subsequently impacts watershed evapotranspiration, the partitioning of surface fluxes, and the development of the storm hydrograph. Five years of meteorological and hydrological data from the Sleepers river watershed located in the eastern highlands of Vermont where winter snow cover is significant were then used to drive and validate the new scheme. Site validation data were sufficient to evaluate model performance with regard to various aspects of the watershed water balance, including snowpack growth/ablation, the spring snowmelt hydrograph, storm hydrographs, and the seasonal development of watershed evapotranspiration and soil moisture. By including topographic effects, not only are the main spring hydrographs and individual storm hydrographs adequately resolved, but the mechanisms generating runoff are consistent with current views of hydrologic processes. The seasonal movement of the mean water table depth and the saturated area of the watershed are consistent with site data and the overall model hydroclimatology, including the surface fluxes, seems reasonable.

  8. On the Multilevel Solution Algorithm for Markov Chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.
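
    A hedged sketch of one aggregation/disaggregation step of the kind such multilevel solvers build on: the coarse transition matrix is formed Galerkin-style from an aggregate-membership restriction and a prolongation weighted by the current iterate (the solution-dependent prolongation noted above). The chain, the aggregates and the iterate are toy values, not the paper's algorithm in full.

      import numpy as np

      P = np.array([[0.5, 0.5, 0.0, 0.0],
                    [0.3, 0.4, 0.3, 0.0],
                    [0.0, 0.3, 0.4, 0.3],
                    [0.0, 0.0, 0.5, 0.5]])   # fine-level chain (row-stochastic)
      agg = np.array([0, 0, 1, 1])           # states 0,1 -> aggregate 0; states 2,3 -> aggregate 1
      x = np.full(4, 0.25)                   # current approximation to the stationary vector

      n_c = agg.max() + 1
      R = np.zeros((4, n_c))                 # aggregate-membership indicator
      W = np.zeros((4, n_c))                 # iterate-weighted prolongation
      for i, a in enumerate(agg):
          R[i, a] = 1.0
          W[i, a] = x[i] / x[agg == a].sum()

      P_c = W.T @ P @ R                      # coarse-level (Galerkin) transition matrix
      w_eig, v_eig = np.linalg.eig(P_c.T)    # coarse stationary distribution
      x_c = np.real(v_eig[:, np.argmax(np.real(w_eig))])
      x_c /= x_c.sum()
      x_new = W @ x_c                        # disaggregate: corrected fine-level iterate
      print(P_c)
      print(x_new)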

  9. A Quantum Proxy Blind Signature Scheme Based on Genuine Five-Qubit Entangled State

    NASA Astrophysics Data System (ADS)

    Zeng, Chuan; Zhang, Jian-Zhong; Xie, Shu-Cui

    2017-06-01

    In this paper, a quantum proxy blind signature scheme based on controlled quantum teleportation is proposed. This scheme uses a genuine five-qubit entangled state as the quantum channel and adopts the classical Vernam algorithm to blind the message. We use the physical characteristics of quantum mechanics to implement delegation, signature and verification. Security analysis shows that our scheme is valid and satisfies the properties of a proxy blind signature, such as blindness, verifiability, unforgeability, and undeniability.

  10. Moving beyond Means: Revealing Features of the Learning Environment by Investigating the Consensus among Student Ratings

    ERIC Educational Resources Information Center

    Schweig, Jonathan David

    2016-01-01

    Student ratings, a critical component in policy efforts to assess and improve teaching, are often collected using questionnaires, and inferences about teachers are then based on aggregated student survey responses. While considerable attention has been paid to the reliability and validity of these aggregates, much less attention has been paid to…

  11. Multicenter validation study of a transplantation-specific cytogenetics grouping scheme for patients with myelodysplastic syndromes.

    PubMed

    Armand, P; Deeg, H J; Kim, H T; Lee, H; Armistead, P; de Lima, M; Gupta, V; Soiffer, R J

    2010-05-01

    Cytogenetics is an important prognostic factor for patients with myelodysplastic syndromes (MDS). However, existing cytogenetics grouping schemes are based on patients treated with supportive care, and may not be optimal for patients undergoing allo-SCT. We proposed earlier an SCT-specific cytogenetics grouping scheme for patients with MDS and AML arising from MDS, based on an analysis of patients transplanted at the Dana-Farber Cancer Institute/Brigham and Women's Hospital. Under this scheme, abnormalities of chromosome 7 and complex karyotype are considered adverse risk, whereas all others are considered standard risk. In this retrospective study, we validated this scheme on an independent multicenter cohort of 546 patients. Adverse cytogenetics was the strongest prognostic factor for outcome in this cohort. The 4-year relapse-free survival and OS were 42 and 46%, respectively, in the standard-risk group, vs 21 and 23% in the adverse group (P<0.0001 for both comparisons). This grouping scheme retained its prognostic significance irrespective of patient age, disease type, earlier leukemogenic therapy and conditioning intensity. Therapy-related disease was not associated with increased mortality in this cohort, after taking cytogenetics into account. We propose that this SCT-specific cytogenetics grouping scheme be used for patients with MDS or AML arising from MDS who are considering or undergoing SCT.

  12. Application of kinetic flux vector splitting scheme for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of macroscopic flux functions of the system on the cell interfaces. The second-order accuracy of the scheme is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from the splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.

  13. On Classification in the Study of Failure, and a Challenge to Classifiers

    NASA Technical Reports Server (NTRS)

    Wasson, Kimberly S.

    2003-01-01

    Classification schemes are abundant in the literature of failure. They serve a number of purposes, some more successfully than others. We examine several classification schemes constructed for various purposes relating to failure and its investigation, and discuss their values and limits. The analysis results in a continuum of uses for classification schemes, that suggests that the value of certain properties of these schemes is dependent on the goals a classification is designed to forward. The contrast in the value of different properties for different uses highlights a particular shortcoming: we argue that while humans are good at developing one kind of scheme: dynamic, flexible classifications used for exploratory purposes, we are not so good at developing another: static, rigid classifications used to trap and organize data for specific analytic goals. Our lack of strong foundation in developing valid instantiations of the latter impedes progress toward a number of investigative goals. This shortcoming and its consequences pose a challenge to researchers in the study of failure: to develop new methods for constructing and validating static classification schemes of demonstrable value in promoting the goals of investigations. We note current productive activity in this area, and outline foundations for more.

  14. Evaluating resilience of DNP3-controlled SCADA systems against event buffer flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Guanhua; Nicol, David M; Jin, Dong

    2010-12-16

    The DNP3 protocol is widely used in SCADA systems (particularly electrical power) as a means of communicating observed sensor state information back to a control center. Typical architectures using DNP3 have a two-level hierarchy, where a specialized data aggregator device receives observed state from devices within a local region, and the control center collects the aggregated state from the data aggregator. The DNP3 communication between control center and data aggregator is asynchronous with the DNP3 communication between data aggregator and relays; this leads to the possibility of completely filling a data aggregator's buffer of pending events when a relay is compromised or spoofed and sends too many (false) events to the data aggregator. This paper investigates how a real-world SCADA device responds to event buffer flooding. A Discrete-Time Markov Chain (DTMC) model is developed for understanding this. The DTMC model is validated by a Moebius simulation model and data collected on a real SCADA testbed.

  15. High-fidelity and low-latency mobile fronthaul based on segment-wise TDM and MIMO-interleaved arraying.

    PubMed

    Li, Longsheng; Bi, Meihua; Miao, Xin; Fu, Yan; Hu, Weisheng

    2018-01-22

    In this paper, we first demonstrate an advanced arraying scheme in the TDM-based analog mobile fronthaul system to enhance signal fidelity, in which a segment of the antenna carrier signal (AxC) of appropriate length serves as the granularity for TDM aggregation. Without introducing extra processing, the entire system can be realized with simple DSP. A theoretical analysis is presented to verify the feasibility of this scheme and, to evaluate its effectiveness, an experiment with ~7-GHz bandwidth and 20 8 × 8 MIMO group signals is conducted. Results show that segment-wise TDM is completely compatible with MIMO-interleaved arraying, which is employed in an existing TDM scheme to improve bandwidth efficiency. Moreover, compared to existing TDM schemes, our scheme not only satisfies the latency requirement of 5G but also significantly reduces the multiplexed signal bandwidth, hence providing higher signal fidelity in the bandwidth-limited fronthaul system. The experimental EVM results verify that 256-QAM is supportable using segment-wise TDM arraying with only 250-ns latency, whereas with ordinary TDM arraying only 64-QAM is achievable.

  16. NEAMS-IPL MOOSE Framework Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaughter, Andrew Edward; Permann, Cody James; Kong, Fande

    The Multiapp Picard iteration Milestone’s purpose was to support a framework-level “tight-coupling” method within the hierarchical Multiapp execution scheme. This new solution scheme gives developers new choices for running multiphysics applications, particularly those with very strong nonlinear effects or those requiring coupling across disparate time or spatial scales. Figure 1 shows a typical Multiapp setup in MOOSE. Each node represents a separate simulation containing a separate equation system. MOOSE solves the equation system on each node in turn, in a user-controlled manner. Information can be aggregated or split and transferred from parent to child or child to parent as needed between solves. Performing a tightly coupled execution scheme using this method wasn't possible in the original implementation. This was due to the inability to back up to a previous state once a converged solution was accepted at a particular Multiapp level.

  17. Validation of Twitter opinion trends with national polling aggregates: Hillary Clinton vs Donald Trump.

    PubMed

    Bovet, Alexandre; Morone, Flaviano; Makse, Hernán A

    2018-06-06

    Measuring and forecasting opinion trends from real-time social media is a long-standing goal of big-data analytics. Despite the large amount of work addressing this question, there has been no clear validation of online social media opinion trends against traditional surveys. Here we develop a method to infer the opinion of Twitter users by using a combination of statistical physics of complex networks and machine learning based on hashtag co-occurrence to build an in-domain training set on the order of a million tweets. We validate our method in the context of the 2016 US Presidential Election by comparing the Twitter opinion trend with the New York Times National Polling Average, representing an aggregate of hundreds of independent traditional polls. The Twitter opinion trend follows the aggregated NYT polls with remarkable accuracy. We investigate the dynamics of the social network formed by the interactions among millions of Twitter supporters and infer the support of each user for the presidential candidates. Our analytics unleash the power of Twitter to uncover social trends, from elections and brands to political movements, at a fraction of the cost of traditional surveys.

  18. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
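
    The decomposition has a simple arithmetic identity behind it: total bias splits exactly into hit bias, missed precipitation and false precipitation. A sketch with toy rain fields (the zero-rain thresholding convention is an assumption):

      import numpy as np

      est = np.array([0.0, 2.0, 5.0, 0.0, 1.0])   # satellite estimate
      ref = np.array([1.0, 2.5, 4.0, 0.0, 0.0])   # reference observation

      hits   = (est > 0) & (ref > 0)
      missed = (est == 0) & (ref > 0)
      false  = (est > 0) & (ref == 0)

      hit_bias   = (est[hits] - ref[hits]).sum()   # error where both detect rain
      missed_p   = -ref[missed].sum()              # rain the estimate failed to see
      false_p    = est[false].sum()                # rain reported but not observed
      total_bias = (est - ref).sum()
      assert np.isclose(total_bias, hit_bias + missed_p + false_p)
      print(hit_bias, missed_p, false_p, total_bias)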

  19. On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.

    PubMed

    Haneuse, Sebastien; Rivera-Rodriguez, Claudia

    2018-01-01

    In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.

  20. Optically-synchronized encoder and multiplexer scheme for interleaved photonics analog-to-digital conversion

    NASA Astrophysics Data System (ADS)

    Villa, Carlos; Kumavor, Patrick; Donkor, Eric

    2008-04-01

    Photonic analog-to-digital converters (ADCs) utilize a train of optical pulses to sample an electrical input waveform applied to an electro-optic modulator or a reverse-biased photodiode. In the former, the resulting train of amplitude-modulated optical pulses is detected (converted to electrical form) and quantized using a conventional electronic ADC, as at present there are no practical, cost-effective optical quantizers available with performance that rivals electronic quantizers. In the latter, the electrical samples are directly quantized by the electronic ADC. In both cases, however, the sampling rate is limited by the speed with which the electronic ADC can quantize the electrical samples. One way to increase the sampling rate by a factor N is the time-interleaved technique, which consists of a parallel array of N electrical ADCs that have the same sampling rate but different sampling phases, each operating at a quantization rate of fs/N, where fs is the aggregate sampling rate. In a system with no real-time operation, the N channels' digital outputs are stored in memory and then aggregated (multiplexed) to obtain the digital representation of the analog input waveform. Alternatively, for real-time systems, reducing the storage time in the multiplexing process is desired to improve the time response of the ADC. The complete elimination of memories comes at the expense of concurrent timing and synchronization in the aggregation of the digital signal, which become critical for a good digital representation of the analog waveform. In this paper we propose and demonstrate a novel optically synchronized encoder and multiplexer scheme for interleaved photonic ADCs that utilizes the N optical signals used to sample different phases of an analog input signal to synchronize the multiplexing of the resulting N digital output channels into a single digital output port. As a proof of concept, four 320-Megasample/s, 12-bit digital signals were multiplexed to form an aggregated 1.28-Gigasample/s single digital output signal.
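
    A hedged sketch of the time-interleaving arithmetic described above: N channels sample the same waveform at fs/N with staggered phases, and the aggregate stream is recovered by multiplexing the channel outputs sample by sample. The rates and the test tone are toy values, not the experimental hardware.

      import numpy as np

      # Channel k takes samples k, k+N, k+2N, ... of the aggregate stream,
      # i.e. it runs at fs/N with a phase offset of k sample periods.
      N, fs, duration = 4, 1.28e9, 1e-6           # 4 channels, 1.28 GS/s aggregate
      t = np.arange(int(fs * duration)) / fs
      signal = np.sin(2 * np.pi * 10e6 * t)       # 10 MHz test tone

      channels = [signal[k::N] for k in range(N)]           # demultiplexed channel streams
      aggregate = np.empty(sum(len(c) for c in channels))   # reassembled output
      for k, c in enumerate(channels):
          aggregate[k::N] = c                               # multiplex sample-by-sample
      assert np.allclose(aggregate, signal)                 # perfect reconstruction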

  1. Pre-K-8 Prospective Teachers' Understanding of Fractions: An Extension of Fractions Schemes and Operations Research

    ERIC Educational Resources Information Center

    Lovin, LouAnn H.; Stevens, Alexis L.; Siegfried, John; Wilkins, Jesse L. M.; Norton, Anderson

    2018-01-01

    In an effort to expand our knowledge base pertaining to pre-K-8 prospective teachers' understanding of fractions, the present study was designed to extend the work on fractions schemes and operations to this population. One purpose of our study was to validate the fractions schemes and operations hierarchy with the pre-K-8 prospective teacher…

  2. On the validity of the modified equation approach to the stability analysis of finite-difference methods

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1987-01-01

    The validity of the modified equation stability analysis introduced by Warming and Hyett was investigated. It is shown that the procedure used in the derivation of the modified equation is flawed and generally leads to invalid results. Moreover, the interpretation of the modified equation as the exact partial differential equation solved by a finite-difference method generally cannot be justified even if spatial periodicity is assumed. For a two-level scheme, due to a series of mathematical quirks, the connection between the modified equation approach and the von Neumann method established by Warming and Hyett turns out to be correct despite its questionable original derivation. However, this connection is only partially valid for a scheme involving more than two time levels. In the von Neumann analysis, the complex error multiplication factor associated with a wave number generally has (L-1) roots for an L-level scheme. It is shown that the modified equation provides information about only one of these roots.
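
    For a concrete two-level example, the single amplification factor of the von Neumann analysis can be evaluated directly; a sketch for first-order upwind differencing of u_t + a u_x = 0 (the choice of scheme is illustrative, not taken from the paper):

      import numpy as np

      def upwind_amplification(c, theta):
          # Scheme: u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n), c = a*dt/dx.
          # Insert the Fourier mode exp(i*j*theta) to get the factor G(theta).
          return 1 - c * (1 - np.exp(-1j * theta))

      theta = np.linspace(0, np.pi, 5)
      for c in (0.5, 1.2):
          # Stable only if |G| <= 1 for all theta: holds for c = 0.5, fails for c = 1.2.
          print(c, np.abs(upwind_amplification(c, theta)).max())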

  3. Validation of sea ice models using an uncertainty-based distance metric for multiple model variables: NEW METRIC FOR SEA ICE MODEL VALIDATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.

    Here, we implement a variance-based distance metric (Dn) to objectively assess the skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observation and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward establishing a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.

  4. Validation of sea ice models using an uncertainty-based distance metric for multiple model variables: NEW METRIC FOR SEA ICE MODEL VALIDATION

    DOE PAGES

    Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.; ...

    2017-04-01

    Here, we implement a variance-based distance metric (Dn) to objectively assess the skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observation and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward establishing a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.

  5. Aggregated N-of-1 randomized controlled trials: modern data analytics applied to a clinically valid method of intervention effectiveness.

    PubMed

    Cushing, Christopher C; Walters, Ryan W; Hoffman, Lesa

    2014-03-01

    Aggregated N-of-1 randomized controlled trials (RCTs) combined with multilevel modeling represent a methodological advancement that may help bridge science and practice in pediatric psychology. The purpose of this article is to offer a primer for pediatric psychologists interested in conducting aggregated N-of-1 RCTs. An overview of N-of-1 RCT methodology is provided and 2 simulated data sets are analyzed to demonstrate the clinical and research potential of the methodology. The simulated data example demonstrates the utility of aggregated N-of-1 RCTs for understanding the clinical impact of an intervention for a given individual and the modeling of covariates to explain why an intervention worked for one patient and not another. Aggregated N-of-1 RCTs hold potential for improving the science and practice of pediatric psychology.
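
    A hedged sketch of the analysis pattern described above, using statsmodels: simulated alternating-period N-of-1 trials pooled in a multilevel (mixed-effects) model with a random treatment slope, so the treatment effect can differ between patients. All numbers are invented and do not reproduce the article's simulated data sets.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      rows = []
      for pid in range(12):                    # 12 hypothetical N-of-1 trials
          effect = rng.normal(2.0, 1.0)        # patient-specific treatment effect
          for period in range(8):              # alternating on/off treatment periods
              tx = period % 2
              rows.append({"patient": pid, "tx": tx,
                           "outcome": 10 + effect * tx + rng.normal(0, 1)})
      data = pd.DataFrame(rows)

      # Fixed effect of tx = average benefit across patients;
      # random slope on tx = between-patient variation in benefit.
      model = smf.mixedlm("outcome ~ tx", data, groups=data["patient"],
                          re_formula="~tx").fit()
      print(model.summary())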

  6. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  7. Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Yan, Weizhong

    2001-03-01

    UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level that is comparable to that of a piloted aircraft. This paper attempts to apply multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the fault detection of the UAV are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The three fusion schemes (except the majority vote, which gives the average performance of the three classifiers) show classification performance that is better than or equal to that of the best individual classifier. The unavoidable correlation between classifiers with binary outputs is observed in this study. We conclude that it is the correlation between the classifiers that prevents the fusion schemes from achieving an even better performance.
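
    A hedged sketch of the three fusion rules named above for binary classifier outputs; the decisions, the accuracy estimates and the independence assumption behind the Naive Bayes combination are illustrative, not values from the LMSW system.

      import numpy as np

      decisions = np.array([1, 0, 1])            # binary outputs of three classifiers
      acc = np.array([0.90, 0.80, 0.70])         # each classifier's estimated accuracy

      maj = int(decisions.sum() >= 2)            # simple majority vote

      w = np.log(acc / (1 - acc))                # weighted majority: log-odds weights
      wmaj = int((w * (2 * decisions - 1)).sum() > 0)

      # Naive Bayes combination: multiply per-classifier likelihoods P(d_k | class),
      # assuming conditional independence of the classifier outputs.
      prior = np.array([0.5, 0.5])
      post = prior.copy()
      for d, a in zip(decisions, acc):
          post *= np.array([a if d == c else 1 - a for c in (0, 1)])
      nb = int(np.argmax(post / post.sum()))
      print(maj, wmaj, nb)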

  8. Validation and augmentation of Inrix arterial travel time data using independent sources : [research summary].

    DOT National Transportation Integrated Search

    2015-02-01

    Although the freeway travel time data has been validated extensively in recent : years, the quality of arterial travel time data is not well known. This project : presents a comprehensive validation scheme for arterial travel time data based : on GPS...

  9. Preparation and measurement methods for studying nanoparticle aggregate surface chemistry.

    PubMed

    Szakal, Christopher; McCarthy, James A; Ugelow, Melissa S; Konicek, Andrew R; Louis, Kacie; Yezer, Benjamin; Herzing, Andrew A; Hamers, Robert J; Holbrook, R David

    2012-07-01

    Despite best efforts at controlling nanoparticle (NP) surface chemistries, the environment surrounding nanomaterials is always changing and can impart a permanent chemical memory. We present a set of preparation and measurement methods to be used as the foundation for studying the surface chemical memory of engineered NP aggregates. We attempt to bridge the gap between controlled lab studies and real-world NP samples, specifically TiO2, by using well-characterized and consistently synthesized NPs, controllably producing NP aggregates with precision drop-on-demand inkjet printing for subsequent chemical measurements, monitoring the physical morphology of the NP aggregate depositions with scanning electron microscopy (SEM), acquiring "surface-to-bulk" mass spectra of the NP aggregate surfaces with time-of-flight secondary ion mass spectrometry (ToF-SIMS), and developing a data analysis scheme to interpret chemical signatures more accurately from thousands of data files. We present differences in mass spectral peak ratios for bare TiO2 NPs compared to NPs mixed separately with natural organic matter (NOM) or pond water. The results suggest that subtle changes in the local environment can alter the surface chemistry of TiO2 NPs, as monitored by Ti+/TiO+ and Ti+/C3H5+ peak ratios. The subtle changes in the absolute surface chemistry of NP aggregates vs. that of the subsurface are explored. It is envisioned that the methods developed herein can be adapted for monitoring the surface chemistries of a variety of engineered NPs obtained from diverse natural environments.

  10. Renormalization scheme and gauge (in)dependence of the generalized Crewther relation: what are the real grounds of the β-factorization property?

    NASA Astrophysics Data System (ADS)

    Garkusha, A. V.; Kataev, A. L.; Molokoedov, V. S.

    2018-02-01

    The problem of scheme and gauge dependence of the factorization property of the renormalization group β-function in the SU(N_c) QCD generalized Crewther relation (GCR), which connects the flavor non-singlet contributions to the Adler and Bjorken polarized sum rule functions, is investigated at the O(a_s^4) level of perturbation theory. It is known that in the gauge-invariant MS-bar scheme this property holds in the QCD GCR at least at this order. To study whether this factorization property is true in all gauge-invariant schemes, we consider the MS-like schemes in QCD and the QED limit of the GCR in the MS-bar scheme and in two other gauge-independent subtraction schemes, namely the momentum MOM and the on-shell OS schemes. In these schemes we confirm the existence of the β-function factorization in the QCD and QED variants of the GCR. The problem of possible β-factorization in gauge-dependent renormalization schemes in QCD is studied. To investigate this problem we consider the gauge non-invariant mMOM and MOMgggg schemes. We demonstrate that in the mMOM scheme at the O(a_s^3) level the β-factorization is valid for three values of the gauge parameter ξ only, namely ξ = -3, -1 and ξ = 0. At the O(a_s^4) order of PT it remains valid only in the case of the Landau gauge ξ = 0. The consideration of these two gauge-dependent schemes for the QCD GCR allows us to conclude that the factorization of the RG β-function will always be implemented in any MOM-like renormalization scheme with linear covariant gauge at ξ = 0 and ξ = -3 at the O(a_s^3) approximation. It is demonstrated that if the factorization property for the MS-like schemes is true in all orders of PT, as theoretically indicated in several works on the subject, then the factorization will also occur in an arbitrary MOM-like scheme in the Landau gauge in all orders of perturbation theory as well.

  11. Development of a novel coding scheme (SABICS) to record nurse-child interactive behaviours in a community dental preventive intervention.

    PubMed

    Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry

    2012-08-01

    To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video recorded interactions; and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded through administering the scheme on The Observer XT8.0 system. Two visualization results of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours, such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027), predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3-5 years old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings. Its development procedure may be helpful for other similar coding scheme development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
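
    Cohen's kappa, the reliability statistic reported above, corrects raw agreement for agreement expected by chance. A minimal sketch with invented code sequences standing in for two coders' SABICS annotations of the same events:

      import numpy as np

      coder1 = np.array(["instruction", "praise", "instruction", "other", "praise"])
      coder2 = np.array(["instruction", "praise", "other", "other", "praise"])

      p_obs = np.mean(coder1 == coder2)             # observed agreement
      labels = np.union1d(coder1, coder2)
      # Chance agreement: product of each coder's marginal label proportions.
      p_exp = sum(np.mean(coder1 == c) * np.mean(coder2 == c) for c in labels)
      kappa = (p_obs - p_exp) / (1 - p_exp)         # chance-corrected agreement
      print(round(kappa, 2))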

  12. A Numerical Study of Cirrus Clouds. Part I: Model Description.

    NASA Astrophysics Data System (ADS)

    Liu, Hui-Chun; Wang, Pao K.; Schlesinger, Robert E.

    2003-04-01

    This article, the first of a two-part series, presents a detailed description of a two-dimensional numerical cloud model directed toward elucidating the physical processes governing the evolution of cirrus clouds. The two primary scientific purposes of this work are (a) to determine the evolution and maintenance mechanisms of cirrus clouds and try to explain why some cirrus can persist for a long time; and (b) to investigate the influence of certain physical factors such as radiation, ice crystal habit, latent heat, ventilation effects, and aggregation mechanisms on the evolution of cirrus. The second part will discuss sets of model experiments that were run to address objectives (a) and (b), respectively.As set forth in this paper, the aforementioned two-dimensional numerical model, which comprises the research tool for this study, is organized into three modules that embody dynamics, microphysics, and radiation. The dynamic module develops a set of equations to describe shallow moist convection, also parameterizing turbulence by using a 1.5-order closure scheme. The microphysical module uses a double-moment scheme to simulate the evolution of the size distribution of ice particles. Heterogeneous and homogeneous nucleation of haze particles are included, along with other ice crystal processes such as diffusional growth, sedimentation, and aggregation. The radiation module uses a two-stream radiative transfer scheme to determine the radiative fluxes and heating rates, while the cloud optical properties are determined by the modified anomalous diffraction theory (MADT) for ice particles. One of the main advantages of this cirrus model is its explicit formulation of the microphysical and radiative properties as functions of ice crystal habit.

  13. Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.

    PubMed

    Banks, R F; Baldasano, J M

    2016-12-01

    Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well with daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperatures closer to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against network urban, suburban, and rural background stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m-3) with respect to surface ozone at urban stations, while the YSU scheme performed best with simulated nitrogen dioxide (-6.48 μg m-3). The poorest results were with simulated particulate matter, with similar results found with all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Testing two temporal upscaling schemes for the estimation of the time variability of the actual evapotranspiration

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.

    2015-10-01

    Temporal availability of grape actual evapotranspiration is an emerging issue, since vineyard farms are increasingly converted from rainfed to irrigated agricultural systems. The manuscript aims to verify the accuracy of actual evapotranspiration retrieval by coupling a single-source energy balance approach with two different temporal upscaling schemes. The first scheme tests the temporal upscaling of the main input variables, namely the NDVI, albedo and LST; the second scheme tests the temporal upscaling of the energy balance output, the actual evapotranspiration. The temporal upscaling schemes were implemented on: i) airborne remote sensing data acquired monthly during a whole irrigation season over a Sicilian vineyard; ii) low-resolution MODIS products released daily or weekly; iii) meteorological data acquired by standard gauge stations. Daily MODIS LST products (MOD11A1) were disaggregated using the DisTrad model, 8-day black- and white-sky albedo products (MCD43A) allowed modeling the total albedo, and 8-day NDVI products (MOD13Q1) were modeled using the Fisher approach. Results were validated both in time and space. The temporal validation was carried out using the actual evapotranspiration measured in situ by a flux tower through the eddy covariance technique. The spatial validation involved airborne images acquired at different times from June to September 2008. The results test whether the upscaling of the energy balance input or output data performs better.

  15. Smart Grid Privacy through Distributed Trust

    NASA Astrophysics Data System (ADS)

    Lipton, Benjamin

    Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
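
    A minimal sketch of the additive secret-sharing idea such systems rely on: each meter splits its reading into random shares, one per server, so no single server learns an individual reading, yet the servers' share totals combine to the exact aggregate. The modulus and server count are illustrative assumptions, not the parameters of the proposed systems.

      import secrets

      MOD = 2**61 - 1                 # arithmetic modulus (illustrative choice)
      N_SERVERS = 3

      def share(reading):
          # Split a reading into N_SERVERS additive shares that sum to it mod MOD.
          shares = [secrets.randbelow(MOD) for _ in range(N_SERVERS - 1)]
          shares.append((reading - sum(shares)) % MOD)
          return shares

      readings = [17, 42, 8]                        # three meters' readings (kWh)
      server_totals = [0] * N_SERVERS
      for r in readings:
          for s, sh in enumerate(share(r)):
              server_totals[s] = (server_totals[s] + sh) % MOD

      total = sum(server_totals) % MOD              # utility combines the server sums
      assert total == sum(readings)                 # only the aggregate is revealed
      print(total)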

  16. McrEngine: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression

    DOE PAGES

    Islam, Tanzima Zerin; Mohror, Kathryn; Bagchi, Saurabh; ...

    2013-01-01

    High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. McrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely-used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.
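
    A hedged sketch of why data-aware aggregation can help compression: merging the same named variable across process checkpoints before compressing can expose cross-process redundancy that plain per-process concatenation hides. The synthetic arrays and any gains here are illustrative; mcrEngine itself works through HDF5/netCDF data semantics, which this sketch does not reproduce.

      import zlib
      import numpy as np

      def checkpoint(seed):
          # Stand-in for one process's checkpoint: two named variables.
          rng = np.random.default_rng(seed)
          return {"temperature": rng.normal(300, 1, 1000).astype("f4"),
                  "pressure": rng.normal(101, 1, 1000).astype("f4")}

      ckpts = [checkpoint(s) for s in range(4)]     # four processes' checkpoints

      # Scheme A: simple concatenation, process by process.
      concat = b"".join(c[name].tobytes() for c in ckpts for name in c)
      # Scheme B: data-aware grouping, the same variable merged across processes.
      merged = b"".join(c[name].tobytes()
                        for name in ("temperature", "pressure") for c in ckpts)
      print(len(zlib.compress(concat)), len(zlib.compress(merged)))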

  17. SERS detection of Biomolecules at Physiological pH via aggregation of Gold Nanorods mediated by Optical Forces and Plasmonic Heating

    NASA Astrophysics Data System (ADS)

    Fazio, Barbara; D'Andrea, Cristiano; Foti, Antonino; Messina, Elena; Irrera, Alessia; Donato, Maria Grazia; Villari, Valentina; Micali, Norberto; Maragò, Onofrio M.; Gucciardi, Pietro G.

    2016-06-01

    Strategies for in-liquid molecular detection via Surface Enhanced Raman Scattering (SERS) are currently based on chemically-driven aggregation or optical trapping of metal nanoparticles in the presence of the target molecules. Such strategies allow the formation of SERS-active clusters that efficiently embed the molecule at the “hot spots” of the nanoparticles and enhance its Raman scattering by orders of magnitude. Here we report on a novel scheme that exploits the radiation pressure to locally push gold nanorods and induce their aggregation in buffered solutions of biomolecules, achieving biomolecular SERS detection at almost neutral pH. The sensor is applied to detect non-resonant amino acids and proteins, namely Phenylalanine (Phe), Bovine Serum Albumin (BSA) and Lysozyme (Lys), reaching detection limits in the μg/mL range. Being a chemical-free and contactless technique, our methodology is easy to implement, fast to operate, needs small sample volumes and has potential for integration in microfluidic circuits for biomarker detection.

  18. SERS detection of Biomolecules at Physiological pH via aggregation of Gold Nanorods mediated by Optical Forces and Plasmonic Heating

    PubMed Central

    Fazio, Barbara; D’Andrea, Cristiano; Foti, Antonino; Messina, Elena; Irrera, Alessia; Donato, Maria Grazia; Villari, Valentina; Micali, Norberto; Maragò, Onofrio M.; Gucciardi, Pietro G.

    2016-01-01

    Strategies for in-liquid molecular detection via Surface Enhanced Raman Scattering (SERS) are currently based on chemically-driven aggregation or optical trapping of metal nanoparticles in the presence of the target molecules. Such strategies allow the formation of SERS-active clusters that efficiently embed the molecule at the “hot spots” of the nanoparticles and enhance its Raman scattering by orders of magnitude. Here we report on a novel scheme that exploits the radiation pressure to locally push gold nanorods and induce their aggregation in buffered solutions of biomolecules, achieving biomolecular SERS detection at almost neutral pH. The sensor is applied to detect non-resonant amino acids and proteins, namely Phenylalanine (Phe), Bovine Serum Albumin (BSA) and Lysozyme (Lys), reaching detection limits in the μg/mL range. Being a chemical-free and contactless technique, our methodology is easy to implement, fast to operate, needs small sample volumes and has potential for integration in microfluidic circuits for biomarker detection. PMID:27246267

  19. Inverting pump-probe spectroscopy for state tomography of excitonic systems.

    PubMed

    Hoyer, Stephan; Whaley, K Birgitta

    2013-04-28

    We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.

  20. Comparison of different assimilation schemes in an operational assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2016-04-01

    In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and the current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
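
    A minimal sketch of the CRPS for a single ensemble forecast and observation, using the standard identity CRPS = E|X - y| - (1/2)E|X - X'| estimated over ensemble members; the 60-member ensemble here is simulated, not the ocean model output.

      import numpy as np

      def crps(ensemble, obs):
          # First term: mean absolute error of members against the observation.
          # Second term: half the mean absolute difference between member pairs.
          ens = np.asarray(ensemble, dtype=float)
          term1 = np.mean(np.abs(ens - obs))
          term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
          return term1 - term2

      ensemble = np.random.default_rng(2).normal(20.0, 0.5, 60)  # 60 members, as above
      print(crps(ensemble, 20.3))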

  1. Validation of a multi-criteria evaluation model for animal welfare.

    PubMed

    Martín, P; Czycholl, I; Buxadé, C; Krieter, J

    2017-04-01

    The aim of this paper was to validate an alternative multi-criteria evaluation system to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. This alternative methodology aimed to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data in different farms in Schleswig-Holstein, Germany. In total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step aggregation process. Measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps. The MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used for utility function determination and the Choquet integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model was divided into two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level, and second, a sensitivity analysis of our model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using the MAUT, similar results were obtained to those obtained when applying the WQ protocol aggregation methods, both at criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs. Furthermore, this methodology could also be used as a framework in order to produce an overall assessment of welfare for other livestock species. Two main findings are obtained from the sensitivity analysis: first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level, and second, the MAUT model was not very sensitive to an improvement in or a worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
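
    A hedged sketch of the aggregation operator used above: a discrete Choquet integral over three criteria, where a capacity on coalitions of criteria replaces simple weights and can express interaction between criteria. The criteria names and capacity values are invented for illustration and are not the WQ parameterization.

      criteria = ["feeding", "housing", "health"]
      capacity = {frozenset(): 0.0,
                  frozenset(["feeding"]): 0.3, frozenset(["housing"]): 0.3,
                  frozenset(["health"]): 0.5,
                  frozenset(["feeding", "housing"]): 0.5,
                  frozenset(["feeding", "health"]): 0.8,
                  frozenset(["housing", "health"]): 0.8,
                  frozenset(criteria): 1.0}

      def choquet(scores):
          # Sort criteria by score; weight each score increment by the capacity
          # of the coalition of criteria whose score reaches that level.
          order = sorted(criteria, key=lambda c: scores[c])
          total, prev = 0.0, 0.0
          for i, c in enumerate(order):
              total += (scores[c] - prev) * capacity[frozenset(order[i:])]
              prev = scores[c]
          return total

      print(choquet({"feeding": 60, "housing": 40, "health": 80}))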

  2. Scoring Rubric Development: Validity and Reliability.

    ERIC Educational Resources Information Center

    Moskal, Barbara M.; Leydens, Jon A.

    2000-01-01

    Provides clear definitions of the terms "validity" and "reliability" in the context of developing scoring rubrics and illustrates these definitions through examples. Also clarifies how validity and reliability may be addressed in the development of scoring rubrics, defined as descriptive scoring schemes developed to guide the analysis of the…

  3. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMillan, K; Bostani, M; McNitt-Gray, M

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not. Funding Support: NIH Grant R01-EB017095; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski; Disclosures - Cynthia McCollough: Research Grant, Siemens Healthcare.

  4. Degradation of surfactant-associated protein B (SP-B) during in vitro conversion of large to small surfactant aggregates.

    PubMed Central

    Veldhuizen, R A; Inchley, K; Hearn, S A; Lewis, J F; Possmayer, F

    1993-01-01

    Pulmonary surfactant obtained from lung lavages can be separated by differential centrifugation into two distinct subfractions known as large surfactant aggregates and small surfactant aggregates. The large-aggregate fraction is the precursor of the small-aggregate fraction. The ratio of the small non-surface-active to large surface-active surfactant aggregates increases after birth and in several types of lung injury. We have utilized an in vitro system, surface area cycling, to study the conversion of large into small aggregates. Small aggregates generated by surface area cycling were separated from large aggregates by centrifugation at 40,000 g for 15 min rather than by the normal sucrose gradient centrifugation. This new separation method was validated by morphological studies. Surface-tension-reducing activity of total surfactant extracts, as measured with a pulsating-bubble surfactometer, was impaired after surface area cycling. This impairment was related to the generation of small aggregates. Immunoblot analysis of large and small aggregates separated by sucrose gradient centrifugation revealed the presence of detectable amounts of surfactant-associated protein B (SP-B) in large aggregates but not in small aggregates. SP-A was detectable in both large and small aggregates. PAGE of cycled and non-cycled surfactant showed a reduction in SP-B after surface area cycling. We conclude that SP-B is degraded during the formation of small aggregates in vitro and that a change in surface area appears to be necessary for exposing SP-B to protease activity. PMID:8216208

  5. Optimization study on multiple train formation scheme of urban rail transit

    NASA Astrophysics Data System (ADS)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the uneven distribution of passenger flow, but research on this aspect is still incomplete. This paper introduced the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model was established with the minimization of passenger cost and operation cost as objectives, and operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed marshalling operation model, the scheme reduces the two cost components by 9.24% and 4.43%, respectively. This result not only validates the model but also illustrates the advantages of the multiple train formation scheme.
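
    As a rough illustration of the solution approach, the ideal point method scalarizes the two objectives by measuring each candidate's normalized distance to the per-objective minima. The candidate schemes and cost figures below are hypothetical, not the paper's data.

        import numpy as np

        # Candidate formation schemes with (passenger cost, operation cost);
        # names and numbers are invented for illustration.
        schemes = {
            "fixed 4-car":   (135.0, 62.0),
            "fixed 6-car":   (120.0, 80.0),
            "mixed 4/6-car": (112.0, 70.0),
        }
        costs = np.array(list(schemes.values()))

        ideal = costs.min(axis=0)                      # per-objective minima
        span = costs.max(axis=0) - ideal               # for normalization
        dist = np.linalg.norm((costs - ideal) / span, axis=1)

        best = list(schemes)[int(dist.argmin())]       # closest to the ideal point
        print(best, dict(zip(schemes, dist.round(3))))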

  6. Experimental demonstration of an OFDM based visible light communication system using inter-block precoding and superimposed pilots

    NASA Astrophysics Data System (ADS)

    Zhang, Junwei; Hong, Xuezhi; Liu, Jie; Guo, Changjian

    2018-04-01

    In this work, we investigate and experimentally demonstrate an orthogonal frequency division multiplexing (OFDM) based high-speed wavelength-division multiplexed (WDM) visible light communication (VLC) system using an inter-block data precoding and superimposed pilots (DP-SP) based channel estimation (CE) scheme. The residual signal-to-pilot interference (SPI) can be eliminated by using inter-block data precoding, resulting in a significant improvement in estimation accuracy and overall system performance compared with the uncoded SP based CE scheme. We also study the training power allocation/overhead problem for the DP-SP, uncoded SP and conventional preamble based CE schemes, from which we obtain the optimum signal-to-pilot power ratio (SPR)/overhead percentage for all of the above cases. Intra-symbol frequency-domain averaging (ISFA) is also adopted to further enhance the accuracy of CE. By using the DP-SP based CE scheme, aggregate data rates of 1.87-Gbit/s and 1.57-Gbit/s are experimentally demonstrated over 0.8-m and 2-m indoor free space transmission, respectively, using a commercially available red, green and blue (RGB) light emitting diode (LED) with WDM. Experimental results show that the DP-SP based CE scheme is comparable to the conventional preamble based CE scheme in terms of received Q factor and data rate while entailing a much smaller overhead.

  7. The use of index tests to determine the mechanical properties of crushed aggregates from Precambrian basement complex rocks, Ado-Ekiti, SW Nigeria

    NASA Astrophysics Data System (ADS)

    Afolagboye, Lekan Olatayo; Talabi, Abel Ojo; Oyelami, Charles Adebayo

    2017-05-01

    This study assessed the possibility of using index tests to determine the mechanical properties of crushed aggregates. The aggregates used in this study were derived from major Precambrian basement rocks in Ado-Ekiti, Nigeria. Regression analyses were performed to determine the empirical relations that the mechanical properties of the aggregates may have with the point load strength (IS(50)), Schmidt rebound hammer value (SHR) and unconfined compressive strength (UCS) of the rocks. For all the data, strong correlation coefficients were found between IS(50), SHR, UCS, and the mechanical properties of the aggregates. Regression analysis conducted on the different rocks separately showed that the correlation coefficients obtained between IS(50), SHR, UCS and the mechanical properties of the aggregates were stronger than those for the grouped rocks. T-tests and F-tests showed that the derived models were valid. This study has shown that the mechanical properties of the aggregates can be estimated from IS(50), SHR and UCS, but the influence of rock type on the relationships should be taken into consideration.
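
    A minimal sketch of the kind of single-predictor regression described, assuming scipy is available; the IS(50) and aggregate crushing value pairs are invented for illustration only.

        import numpy as np
        from scipy import stats

        # Illustrative pairs (not measured data): point load strength IS(50)
        # in MPa versus aggregate crushing value (ACV) in percent.
        is50 = np.array([4.1, 5.3, 6.0, 6.8, 7.5, 8.2])
        acv = np.array([27.0, 24.5, 23.1, 21.8, 20.2, 19.0])

        res = stats.linregress(is50, acv)              # least-squares fit
        print(f"ACV = {res.slope:.2f} * IS50 + {res.intercept:.2f}")
        print(f"r = {res.rvalue:.3f}, p (t-test on slope) = {res.pvalue:.2e}")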

  8. A novel quantum group signature scheme without using entangled states

    NASA Astrophysics Data System (ADS)

    Xu, Guang-Bao; Zhang, Ke-Jia

    2015-07-01

    In this paper, we propose a novel quantum group signature scheme. It allows a signer to sign a message on behalf of the group without the help of a group manager (the arbitrator), which distinguishes it from previous schemes. In addition, a signature can be verified again when its signer disavows having generated it. We analyze the validity and the security of the proposed signature scheme. Moreover, we discuss the advantages and disadvantages of the new scheme and the existing ones. The results show that our scheme satisfies all the characteristics of a group signature and has more advantages than the previous ones. Like its classic counterpart, our scheme can be used in many application scenarios, such as e-government and e-business.

  9. Evolution of Snow-Size Spectra in Cyclonic Storms. Part I: Snow Growth by Vapor Deposition and Aggregation.

    NASA Astrophysics Data System (ADS)

    Mitchell, David L.

    1988-11-01

    Based on the stochastic collection equation, height- and time-dependent snow growth models were developed for unrimed stratiform snowfall. Moment conservation equations were parameterized and solved by constraining the size distribution to be of the form N(D)dD = N0 exp(-λD)dD, yielding expressions for the slope parameter, λ, and the y-intercept parameter, N0, as functions of height or time. The processes of vapor deposition and aggregation were treated analytically without neglecting changes in ice crystal habits, while the ice particle breakup process was dealt with empirically. The models were compared against vertical profiles of snow-size spectra, obtained from aircraft measurements, for three case studies. The predicted spectra are in good agreement with the observed evolution of snow-size spectra in all three cases, indicating the proposed scheme for ice particle aggregation was successful. The temperature dependence of aggregation was assumed to result from differences in ice crystal habit. Using data from an earlier study, the aggregation efficiency between two levels in a cloud was calculated. Finally, other height-dependent, steady-state snowfall models in the literature were compared against spectra from one of the above case studies. The agreement between predicted and observed spectra for these models was less favorable than for the models presented here.
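
    The constrained form N(D) = N0 exp(-λD) can be fitted to an observed spectrum by log-linear least squares, which is one simple way to recover the slope and y-intercept parameters from aircraft spectra. A short numpy sketch with synthetic data:

        import numpy as np

        def fit_exponential_psd(D, N):
            """Fit N(D) = N0 * exp(-lam * D) by least squares on log N(D)."""
            slope, intercept = np.polyfit(D, np.log(N), 1)
            return np.exp(intercept), -slope           # (N0, lam)

        # Synthetic spectrum with known parameters, to check the recovery
        rng = np.random.default_rng(1)
        D = np.linspace(0.5, 5.0, 20)                  # diameters (mm)
        N = 8000.0 * np.exp(-1.6 * D) * rng.lognormal(0.0, 0.05, D.size)
        print(fit_exponential_psd(D, N))               # ~ (8000, 1.6)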

  10. Emissive H-Aggregates of an Ultrafast Molecular Rotor: A Promising Platform for Sensing Heparin.

    PubMed

    Mudliar, Niyati H; Singh, Prabhat K

    2016-11-23

    Constructing "turn on" fluorescent probes for heparin, a most widely used anticoagulant in clinics, from commercially available materials is of great importance, but remains challenging. Here, we report the formation of a rarely observed emissive H-aggregate of an ultrafast molecular rotor dye, Thioflavin-T, in the presence of heparin, which provides an excellent platform for simple, economic and rapid fluorescence turn-on sensing of heparin. Generally, H-aggregates are considered as serious problem in the field of biomolecular sensing, owing to their poorly emissive nature resulting from excitonic interaction. To the best of our knowledge, this is the first report, where contrastingly, the turn-on emission from the H-aggregates has been utilized in the biomolecule sensing scheme, and enables a very efficient and selective detection of a vital biomolecule and a drug with its extensive medical applications, i.e., heparin. Our sensor system offers several advantages including, emission in the biologically advantageous red-region, dual sensing, i.e., both by fluorimetry and colorimetry, and most importantly constructed from in-expensive commercially available dye molecule, which is expected to impart a large impact on the sensing field of heparin. Our system displays good performance in complex biological media of serum samples. The novel Thioflavin-T aggregate emission could be also used to probe the interaction of heparin with its only clinically approved antidote, Protamine.

  11. A probabilistic mechanical model for prediction of aggregates’ size distribution effect on concrete compressive strength

    NASA Astrophysics Data System (ADS)

    Miled, Karim; Limam, Oualid; Sab, Karam

    2012-06-01

    To predict aggregates' size distribution effect on the concrete compressive strength, a probabilistic mechanical model is proposed. Within this model, a Voronoi tessellation of a set of non-overlapping and rigid spherical aggregates is used to describe the concrete microstructure. Moreover, aggregates' diameters are defined as statistical variables and their size distribution function is identified to the experimental sieve curve. Then, an inter-aggregate failure criterion is proposed to describe the compressive-shear crushing of the hardened cement paste when concrete is subjected to uniaxial compression. Using a homogenization approach based on statistical homogenization and on geometrical simplifications, an analytical formula predicting the concrete compressive strength is obtained. This formula highlights the effects of cement paste strength and aggregates' size distribution and volume fraction on the concrete compressive strength. According to the proposed model, increasing the concrete strength for the same cement paste and the same aggregates' volume fraction is obtained by decreasing both aggregates' maximum size and the percentage of coarse aggregates. Finally, the validity of the model has been discussed through a comparison with experimental results (15 concrete compressive strengths ranging between 46 and 106 MPa) taken from literature and showing a good agreement with the model predictions.

  12. Microbiological Validation of the IVGEN System

    NASA Technical Reports Server (NTRS)

    Porter, David A.

    2013-01-01

    The principal purpose of this report is to describe a validation process that can be performed in part on the ground prior to launch, and in space for the IVGEN system. The general approach taken is derived from standard pharmaceutical industry validation schemes modified to fit the special requirements of in-space usage.

  13. Development of a stress-mode sensitive viscoelastic constitutive relationship for asphalt concrete: experimental and numerical modeling

    NASA Astrophysics Data System (ADS)

    Karimi, Mohammad M.; Tabatabaee, Nader; Jahanbakhsh, H.; Jahangiri, Behnam

    2017-08-01

    Asphalt binder is responsible for the thermo-viscoelastic mechanical behavior of asphalt concrete. Upon application of pure compressive stress to an asphalt concrete specimen, the stress is transferred by mechanisms such as aggregate interlock and the adhesion/cohesion properties of asphalt mastic. In the pure tensile stress mode, aggregate interlock plays a limited role in stress transfer, and the mastic phase plays the dominant role through its adhesive/cohesive and viscoelastic properties. Under actual combined loading patterns, any coordinate direction may experience different stress modes; therefore, the mechanical behavior is not the same in the different directions and the asphalt specimen behaves as an anisotropic material. The present study developed an anisotropic nonlinear viscoelastic constitutive relationship that is sensitive to the tension/compression stress mode by extending Schapery's nonlinear viscoelastic model. The proposed constitutive relationship was implemented in Abaqus using a user material (UMAT) subroutine in an implicit scheme. Uniaxial compression and indirect tension (IDT) testing were used to characterize the viscoelastic properties of the bituminous materials and to calibrate and validate the proposed constitutive relationship. Compressive and tensile creep compliances were calculated using uniaxial compression, as well as IDT test results, for different creep-recovery loading patterns at intermediate temperature. The results showed that both tensile creep compliance and its rate were greater than those of compression. The calculated deflections based on these IDT test simulations were compared with experimental measurements and were deemed acceptable. This suggests that the proposed viscoelastic constitutive relationship correctly demonstrates the viscoelastic response and is more accurate for analysis of asphalt concrete in the laboratory or in situ.

  14. Numerical Analysis Using WRF-SBM for the Cloud Microphysical Structures in the C3VP Field Campaign: Impacts of Supercooled Droplets and Resultant Riming on Snow Microphysics

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Matsui, Toshihisa; Shi, Jainn J.; Tao, Wei-Kuo; Khain, Alexander P.; Hao, Arthur; Cifelli, Robert; Heymsfield, Andrew; Tokay, Ali

    2012-01-01

    Two distinct snowfall events are observed over the region near the Great Lakes during 19-23 January 2007 under the intensive measurement campaign of the Canadian CloudSat/CALIPSO validation project (C3VP). These events are numerically investigated using the Weather Research and Forecasting model coupled with a spectral bin microphysics (WRF-SBM) scheme that allows a smooth calculation of the riming process by predicting the rimed mass fraction on snow aggregates. The fundamental structures of the two observed snowfall systems are distinctly characterized by a localized intense lake-effect snowstorm in one case and widely distributed moderate snowfall from a synoptic-scale system in the other. Furthermore, the observed microphysical structures are distinguished by differences in the bulk density of solid-phase particles, which are probably linked to the presence or absence of supercooled droplets. The WRF-SBM coupled with the Goddard Satellite Data Simulator Unit (G-SDSU) has successfully simulated these distinctive structures in a three-dimensional weather prediction run with a horizontal resolution of 1 km. In particular, riming of snow aggregates by supercooled droplets is considered to be important in reproducing the specialized microphysical structures in the case studies. Additional sensitivity tests for the lake-effect snowstorm case are conducted utilizing different planetary boundary layer (PBL) models or the same SBM but without the riming process. The PBL process has a large impact on determining the cloud microphysical structure of the lake-effect snowstorm as well as the surface precipitation pattern, whereas the riming process has little influence on the surface precipitation because of the small height of the system.

  15. An improved and effective secure password-based authentication and key agreement scheme using smart cards for the telecare medicine information system.

    PubMed

    Das, Ashok Kumar; Bruhadeshwar, Bezawada

    2013-10-01

    Recently, Lee and Liu proposed an efficient password-based authentication and key agreement scheme using smart cards for the telecare medicine information system [J. Med. Syst. (2013) 37:9933]. In this paper, we show that though their scheme is efficient, it still has two security weaknesses: (1) design flaws in the authentication phase and (2) design flaws in the password change phase. In order to withstand these flaws in Lee-Liu's scheme, we propose an improvement of their scheme. Our improved scheme also keeps the original merits of Lee-Liu's scheme. We show that our scheme is efficient compared to Lee-Liu's scheme. Further, through security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that it is secure against passive and active attacks.

  16. Invisibly Sanitizable Digital Signature Scheme

    NASA Astrophysics Data System (ADS)

    Miyazaki, Kunihiko; Hanaoka, Goichiro; Imai, Hideki

    A digital signature does not allow any alteration of the document to which it is attached. Appropriate alteration of some signed documents, however, should be allowed because there are security requirements other than the integrity of the document. In the disclosure of official information, for example, sensitive information such as personal information or national secrets is masked when an official document is sanitized so that its nonsensitive information can be disclosed when it is requested by a citizen. If this disclosure is done digitally by using the current digital signature schemes, the citizen cannot verify the disclosed information because it has been altered to prevent the leakage of sensitive information. The confidentiality of official information is thus incompatible with the integrity of that information, and this is called the digital document sanitizing problem. Conventional solutions such as content extraction signatures and digitally signed document sanitizing schemes with disclosure condition control can either let the sanitizer assign disclosure conditions or hide the number of sanitized portions. The digitally signed document sanitizing scheme we propose here is based on the aggregate signature derived from bilinear maps and can do both. Moreover, the proposed scheme can sanitize a signed document invisibly, that is, no one can distinguish whether the signed document has been sanitized or not.

  17. Emerging Security Mechanisms for Medical Cyber Physical Systems.

    PubMed

    Kocabas, Ovunc; Soyata, Tolga; Aktas, Mehmet K

    2016-01-01

    The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data is one of the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.

  18. A Secure and Efficient Threshold Group Signature Scheme

    NASA Astrophysics Data System (ADS)

    Zhang, Yansheng; Wang, Xueming; Qiu, Gege

    The paper presents a secure and efficient threshold group signature scheme aimed at two problems of current threshold group signature schemes: conspiracy attacks and inefficiency. The scheme proposed in this paper separates the designated clerk, who is responsible for collecting and authenticating each individual signature, from the group; the designated clerk does not participate in the distribution of the group secret key, has his own public and private keys, and signs part of the information of the threshold group signature after collecting the individual signatures. Thus the verifier has to verify the signature of the group after validating the signature of the designated clerk. The scheme is proved to be secure against conspiracy attacks and is shown, by comparison with other schemes, to be more efficient.

  19. Intercomparison of land-surface parameterizations launched

    NASA Astrophysics Data System (ADS)

    Henderson-Sellers, A.; Dickinson, R. E.

    One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating land surface process parameterizations used in climate models. There is not, necessarily, a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the available (and proposed) schemes' performance is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single “best” scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.

  20. Data-driven agent-based modeling, with application to rooftop solar adoption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua

    Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends and provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income households. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas the policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.

  1. Detection of chewing from piezoelectric film sensor signals using ensemble classifiers.

    PubMed

    Farooq, Muhammad; Sazonov, Edward

    2016-08-01

    Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (piezoelectric sensor, accelerometer, and hand-to-mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10-second epochs, and for each epoch a combination of time- and frequency-domain features was computed. In this work, we present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging) and stacking, each trained with 3 different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA) and Logistic Regression). The type of feature normalization used can also impact the classification results. For each ensemble method, three feature normalization techniques (no normalization, z-score normalization, and min-max normalization) were tested. A 12-fold cross-validation scheme was used to evaluate the performance of each model, where performance was evaluated in terms of precision, recall, and accuracy. The best results achieved here show an improvement of about 4% over our previous algorithms.
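
    A compact scikit-learn sketch of the comparison described above (bagging, AdaBoost and stacking over weak learners, z-score normalization inside the pipeline, 12-fold cross-validation); the feature matrix here is random placeholder data rather than the chewing-sensor features.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                                      StackingClassifier)
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.tree import DecisionTreeClassifier

        # Placeholder epoch features/labels; a real X would hold time- and
        # frequency-domain features per 10-second epoch, y chewing/non-chewing.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 20))
        y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)

        ensembles = {
            "bagging":  BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
            "adaboost": AdaBoostClassifier(n_estimators=50),
            "stacking": StackingClassifier(
                estimators=[("lda", LinearDiscriminantAnalysis()),
                            ("tree", DecisionTreeClassifier(max_depth=3))],
                final_estimator=LogisticRegression()),
        }
        for name, clf in ensembles.items():
            # z-score normalization inside the pipeline, 12-fold CV as in the paper
            model = make_pipeline(StandardScaler(), clf)
            scores = cross_val_score(model, X, y, cv=12, scoring="accuracy")
            print(f"{name}: {scores.mean():.3f}")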

  2. Data-driven agent-based modeling, with application to rooftop solar adoption

    DOE PAGES

    Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua; ...

    2016-01-25

    Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends and provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income households. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas the policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.

  3. Transactive control of fast-acting demand response based on thermostatic loads in real-time retail electricity markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    Coordinated operation of distributed thermostatic loads such as heat pumps and air conditioners can reduce energy costs and prevent grid congestion, while maintaining room temperatures in the comfort range set by consumers. This paper furthers efforts towards enabling thermostatically controlled loads (TCLs) to participate in real-time retail electricity markets under a transactive control paradigm. An agent-based approach is used to develop an effective and low-complexity demand response control scheme for TCLs. The proposed scheme adjusts aggregated thermostatic loads according to real-time grid conditions under both heating and cooling modes. Here, a case study is presented showing the method reduces consumer electricity costs by over 10% compared to uncoordinated operation.

  4. Transactive control of fast-acting demand response based on thermostatic loads in real-time retail electricity markets

    DOE PAGES

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...

    2017-07-29

    Coordinated operation of distributed thermostatic loads such as heat pumps and air conditioners can reduce energy costs and prevent grid congestion, while maintaining room temperatures in the comfort range set by consumers. This paper furthers efforts towards enabling thermostatically controlled loads (TCLs) to participate in real-time retail electricity markets under a transactive control paradigm. An agent-based approach is used to develop an effective and low-complexity demand response control scheme for TCLs. The proposed scheme adjusts aggregated thermostatic loads according to real-time grid conditions under both heating and cooling modes. Here, a case study is presented showing the method reduces consumer electricity costs by over 10% compared to uncoordinated operation.

  5. Channel Estimation and Pilot Design for Massive MIMO Systems with Block-Structured Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua

    2018-03-01

    By utilizing the technology of compressive sensing (CS), channel estimation methods can reduce the pilot overhead and improve spectrum efficiency. In this work, channel estimation and pilot design are explored with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed pilot design and the BSAMP algorithm provide better channel estimation than existing methods.
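
    The exact BSAMP algorithm is not reproduced in the abstract, but a plain block orthogonal matching pursuit conveys the block-sparse recovery idea: greedily pick the block of columns most correlated with the residual, then re-fit by least squares. The numpy sketch below is a simplified stand-in, not the authors' algorithm.

        import numpy as np

        def block_omp(A, y, block_size, n_active):
            """Greedy block-sparse recovery (simplified stand-in for BSAMP)."""
            m, n = A.shape
            blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
            support, residual = [], y.copy()
            for _ in range(n_active):
                # score each unused block by the correlation of its columns
                scores = [-1.0 if j in support else np.linalg.norm(A[:, b].T @ residual)
                          for j, b in enumerate(blocks)]
                support.append(int(np.argmax(scores)))
                cols = np.concatenate([blocks[j] for j in support])
                h_ls, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
                residual = y - A[:, cols] @ h_ls
            h = np.zeros(n)
            h[cols] = h_ls
            return h

        # Demo: recover a channel with 2 active blocks of length 4 out of 16
        rng = np.random.default_rng(2)
        A = rng.normal(size=(40, 64)) / np.sqrt(40.0)
        h_true = np.zeros(64)
        h_true[8:12], h_true[40:44] = rng.normal(size=4), rng.normal(size=4)
        y = A @ h_true
        h_hat = block_omp(A, y, block_size=4, n_active=2)
        print(np.linalg.norm(h_hat - h_true))          # should be near zero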

  6. Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients

    NASA Astrophysics Data System (ADS)

    Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.

    2017-09-01

    We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty-one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.

  7. Effects of macroeconomic trends on social security spending due to sickness and disability.

    PubMed

    Khan, Jahangir; Gerdtham, Ulf-G; Jansson, Bjarne

    2004-11-01

    We analyzed the relationship between macroeconomic conditions, measured as the unemployment rate, and social security spending from 4 social security schemes as well as total spending due to sickness and disability. We obtained aggregated panel data from 13 Organization for Economic Cooperation and Development member countries for 1980-1996. We used regression analysis and fixed effect models to examine spending on sickness benefits, disability pensions, occupational-injury benefits, survivor's pensions, and total spending. A decline in unemployment increased sickness benefit spending and reduced disability pension spending. These effects reversed direction after 4 years of unemployment. Inclusion of the mortality rate as an additional variable in the analysis did not affect the findings. Macroeconomic conditions influence some reimbursements from social security schemes but not total spending.

  8. A scheme based on ICD-10 diagnoses and drug prescriptions to stage chronic kidney disease severity in healthcare administrative records.

    PubMed

    Friberg, Leif; Gasparini, Alessandro; Carrero, Juan Jesus

    2018-04-01

    Information about renal function is important for drug safety studies using administrative health databases. However, serum creatinine values are seldom available in these registries. Our aim was to develop and test a simple scheme for stratification of renal function without access to laboratory test results. Our scheme uses registry data about diagnoses, contacts, dialysis and drug use. We validated the scheme in the Stockholm CREAtinine Measurements (SCREAM) project using information on approximately 1.1 million individuals residing in Stockholm County who underwent calibrated creatinine testing during 2006-11, linked with data about health care contacts and filled drug prescriptions. Estimated glomerular filtration rate (eGFR) was calculated with the CKD-EPI formula and used as the gold standard for validation of the scheme. When the scheme classified patients as having eGFR <30 mL/min/1.73 m2, it was correct in 93.5% of cases. The specificity of the scheme was close to 100% in all age groups. The sensitivity was poor, ranging from 68.2% in the youngest age quartile down to 10.7% in the oldest age quartile. Age-related decline in renal function makes a large proportion of elderly patients fall into the chronic kidney disease (CKD) range without receiving CKD diagnoses, as this is often seen as part of normal ageing. In the absence of renal function tests, our scheme may be of value for identifying patients with moderate and severe CKD on the basis of diagnostic and prescription data for use in studies of large healthcare databases.
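
    For context, the gold standard against which the scheme was validated is the CKD-EPI estimate of GFR. The sketch below implements the 2009 creatinine-based CKD-EPI equation as stated in the general literature, not as taken from this paper; the inputs and staging cut-offs are illustrative.

        def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
            """2009 CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2.

            Coefficients from the published 2009 CKD-EPI study; shown only to
            make the gold standard concrete. The registry scheme itself uses
            no lab values, only diagnoses and prescriptions.
            """
            kappa = 0.7 if female else 0.9
            alpha = -0.329 if female else -0.411
            ratio = scr_mg_dl / kappa
            egfr = (141.0 * min(ratio, 1.0) ** alpha
                    * max(ratio, 1.0) ** -1.209 * 0.993 ** age)
            if female:
                egfr *= 1.018
            if black:
                egfr *= 1.159
            return egfr

        egfr = ckd_epi_egfr(scr_mg_dl=1.8, age=75, female=True)
        print(egfr, "-> CKD stage 4" if 15 <= egfr < 30 else "")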

  9. Validity and reliability of Patient-Reported Outcomes Measurement Information System (PROMIS) Instruments in Osteoarthritis

    PubMed Central

    Broderick, Joan E.; Schneider, Stefan; Junghaenel, Doerte U.; Schwartz, Joseph E.; Stone, Arthur A.

    2013-01-01

    Objective Evaluation of known group validity, ecological validity, and test-retest reliability of four domain instruments from the Patient-Reported Outcomes Measurement Information System (PROMIS) in osteoarthritis (OA) patients. Methods Recruitment of an osteoarthritis sample and a comparison general population (GP) sample through an Internet survey panel. Pain intensity, pain interference, physical functioning, and fatigue were assessed for 4 consecutive weeks with PROMIS short forms on a daily basis and compared with same-domain Computer Adaptive Test (CAT) instruments that use a 7-day recall. Known group validity (comparison of OA and GP), ecological validity (comparison of aggregated daily measures with CATs), and test-retest reliability were evaluated. Results The recruited samples matched (age, sex, race, ethnicity) the demographic characteristics of the U.S. sample for arthritis and the 2009 Census for the GP. Compliance with repeated measurements was excellent: > 95%. Known group validity for CATs was demonstrated with large effect sizes (pain intensity: 1.42, pain interference: 1.25, and fatigue: .85). Ecological validity was also established through high correlations between aggregated daily measures and weekly CATs (≥ .86). Test-retest reliability (7-day) was very good (≥ .80). Conclusion PROMIS CAT instruments demonstrated known group and ecological validity in a comparison of osteoarthritis patients with a general population sample. Adequate test-retest reliability was also observed. These data provide encouraging initial evidence on the utility of these PROMIS instruments for clinical and research outcomes in osteoarthritis patients. PMID:23592494

  10. Prediction of Protein Aggregation in High Concentration Protein Solutions Utilizing Protein-Protein Interactions Determined by Low Volume Static Light Scattering.

    PubMed

    Hofmann, Melanie; Winzer, Matthias; Weber, Christian; Gieseler, Henning

    2016-06-01

    The development of highly concentrated protein formulations is more demanding than for conventional concentrations due to an elevated protein aggregation tendency. Predictive protein-protein interaction parameters, such as the second virial coefficient B22 or the interaction parameter kD, have already been used to predict aggregation tendency and optimize protein formulations. However, these parameters can only be determined in diluted solutions of up to 20 mg/mL, and their validity at high concentrations is currently a matter of debate. This work presents a μ-scale screening approach adapted to early industrial project needs. The procedure is based on static light scattering to directly determine protein-protein interactions at concentrations up to 100 mg/mL. Three different therapeutic molecules were formulated, varying in pH, salt content, and addition of excipients (e.g., sugars, amino acids, polysorbates, or other macromolecules). Validity of the predicted aggregation tendency was confirmed by stability data of selected formulations. Based on the results obtained, the new prediction method is a promising screening tool for fast and easy formulation development of highly concentrated protein solutions, consuming only microliters of sample volume.
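
    In dilute-solution light scattering, B22 is conventionally read off a Debye plot, Kc/R = 1/Mw + 2*B22*c; the sketch below fits that line to invented data for illustration. The paper's μ-scale method extends the accessible concentration range, but the sign convention for interpreting B22 is the same.

        import numpy as np

        # Illustrative static light scattering data: concentration c (g/mL)
        # and Kc/R (mol/g); a real screen would span up to ~0.1 g/mL.
        c = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
        KcR = np.array([7.4e-6, 8.1e-6, 8.7e-6, 9.4e-6, 1.00e-5])

        # Debye plot: Kc/R = 1/Mw + 2*B22*c
        slope, intercept = np.polyfit(c, KcR, 1)
        Mw = 1.0 / intercept                  # apparent molecular weight (g/mol)
        B22 = slope / 2.0                     # second virial coefficient (mol mL/g^2)
        print(f"Mw ~ {Mw:.3g} g/mol, B22 ~ {B22:.3g} mol mL/g^2")
        # B22 > 0 suggests net repulsive protein-protein interactions (lower
        # aggregation propensity); B22 < 0 suggests net attraction.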

  11. Validation of Microphysical Schemes in a CRM Using TRMM Satellite

    NASA Astrophysics Data System (ADS)

    Li, X.; Tao, W.; Matsui, T.; Liu, C.; Masunaga, H.

    2007-12-01

    The microphysical scheme in the Goddard Cumulus Ensemble (GCE) model has been the most heavily developed component in the past decade. The cloud-resolving model now has microphysical schemes ranging from the original Lin-type bulk scheme, to improved bulk schemes, to a two-moment scheme, to a detailed bin spectral scheme. Even with the most sophisticated bin scheme, many uncertainties still exist, especially in ice-phase microphysics. In this study, we take advantage of the long-term TRMM observations, especially the cloud profiles observed by the precipitation radar (PR), to validate microphysical schemes in simulations of Mesoscale Convective Systems (MCSs). Two contrasting cases, a midlatitude summertime continental MCS with leading convection and a trailing stratiform region, and an oceanic MCS in the tropical western Pacific, are studied. The simulated cloud structures and particle sizes are fed into a forward radiative transfer model to simulate the TRMM satellite sensors, i.e., the PR, the TRMM Microwave Imager (TMI) and the Visible and Infrared Scanner (VIRS). MCS cases that match the structure and strength of the simulated systems over the 10-year period are used to construct statistics for the different sensors. These statistics are then compared with the synthetic satellite data obtained from the forward radiative transfer calculations. It is found that the GCE model simulates the contrast between the continental and oceanic cases reasonably well, with less ice scattering in the oceanic case compared with the continental case. However, the simulated ice scattering signals for both PR and TMI are generally stronger than the observations, especially for the bulk scheme and at the upper levels of the stratiform region. This indicates larger, denser snow/graupel particles at these levels. Adjusting the microphysical schemes in the GCE model according to the observations, especially the 3D cloud structure observed by the TRMM PR, results in much better agreement.

  12. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    NASA Astrophysics Data System (ADS)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the development and intensification of the STC to its various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed; the combinations including the Tiedtke cumulus scheme were again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  13. Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes

    NASA Astrophysics Data System (ADS)

    Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.

    1980-08-01

    A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.

  14. The Construct Validity of Language Aptitude: A Meta-Analysis

    ERIC Educational Resources Information Center

    Li, Shaofeng

    2016-01-01

    A meta-analysis was conducted to examine the construct validity of language aptitude by synthesizing the existing research that has been accumulated over the past five decades. The study aimed to provide a thorough understanding of the construct by aggregating the data reported in the primary research on its correlations with other individual…

  15. A Second Dystopia in Education: Validity Issues in Authentic Assessment Practices

    ERIC Educational Resources Information Center

    Hathcoat, John D.; Penn, Jeremy D.; Barnes, Laura L.; Comer, Johnathan C.

    2016-01-01

    Authentic assessments used in response to accountability demands in higher education face at least two threats to validity. First, a lack of interchangeability between assessment tasks introduces bias when using aggregate-based scores at an institutional level. Second, reliance on written products to capture constructs such as critical thinking…

  16. Cocited Author Mapping as a Valid Representation of Intellectual Structure.

    ERIC Educational Resources Information Center

    McCain, Katherine W.

    1986-01-01

    To test validity of cocitation studies as representations of intellectual structure, five-six years of aggregate cocitation data for 41 authors in macroeconomics and 49 authors in genetics of fruit flies were compared with independent judgments of interauthor similarity collected from 14 macroeconomists and 15 geneticists via a card-sorting…

  17. DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2014-01-01

    Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and of turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.
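
    For readers unfamiliar with the first validation case, the Taylor-Green vortex starts from a simple analytic velocity field; the numpy sketch below constructs the standard initial condition on a periodic box (the grid size and amplitude are arbitrary choices, unrelated to the paper's setup).

        import numpy as np

        def taylor_green_ic(n=64, V0=1.0):
            """Standard Taylor-Green initial velocity on the periodic box [0, 2*pi)^3."""
            x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
            u = V0 * np.sin(X) * np.cos(Y) * np.cos(Z)
            v = -V0 * np.cos(X) * np.sin(Y) * np.cos(Z)
            w = np.zeros_like(u)
            return u, v, w

        u, v, w = taylor_green_ic()
        print(0.5 * np.mean(u**2 + v**2 + w**2))       # initial mean kinetic energy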

  18. Fast Proton Titration Scheme for Multiscale Modeling of Protein Solutions.

    PubMed

    Teixeira, Andre Azevedo Reis; Lund, Mikael; da Silva, Fernando Luís Barroso

    2010-10-12

    Proton exchange between titratable amino acid residues and the surrounding solution gives rise to exciting electric processes in proteins. We present a proton titration scheme for studying acid-base equilibria in Metropolis Monte Carlo simulations where salt is treated at the Debye-Hückel level. The method, rooted in the Kirkwood model of impenetrable spheres, is applied on the three milk proteins α-lactalbumin, β-lactoglobulin, and lactoferrin, for which we investigate the net-charge, molecular dipole moment, and charge capacitance. Over a wide range of pH and salt conditions, excellent agreement is found with more elaborate simulations where salt is explicitly included. The implicit salt scheme is orders of magnitude faster than the explicit analog and allows for transparent interpretation of physical mechanisms. It is shown how the method can be expanded to multiscale modeling of aqueous salt solutions of many biomolecules with nonstatic charge distributions. Important examples are protein-protein aggregation, protein-polyelectrolyte complexation, and protein-membrane association.
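
    A toy version of such a titration move set is easy to write down: each sweep proposes flipping the protonation state of every site, with an intrinsic cost of ln(10)(pKa - pH) per deprotonation plus a screened Coulomb term. The sketch below is a minimal illustration under those assumptions, not the authors' Kirkwood-model implementation; the geometry and pKa values are invented.

        import numpy as np

        LN10 = np.log(10.0)

        def titrate(pH, pKa, pos, n_sweeps=2000, lB=7.0, kappa=0.1, seed=0):
            """Metropolis titration of acidic sites with Debye-Hueckel screening.

            Energies in kT. Deprotonating site i (charge 0 -> -1) costs
            ln(10)*(pKa_i - pH) plus the change in screened Coulomb energy;
            lB is the Bjerrum length, 1/kappa the Debye length (Angstrom).
            """
            rng = np.random.default_rng(seed)
            n = len(pKa)
            q = np.zeros(n)                            # all sites protonated
            r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
            np.fill_diagonal(r, np.inf)                # no self-interaction
            J = lB * np.exp(-kappa * r) / r            # pair energy per unit charge
            frac = np.zeros(n)
            for _ in range(n_sweeps):
                for i in rng.permutation(n):
                    dq = -1.0 if q[i] == 0.0 else 1.0  # toggle protonation
                    dE = -dq * LN10 * (pKa[i] - pH) + dq * (J[i] @ q)
                    if dE <= 0.0 or rng.random() < np.exp(-dE):
                        q[i] += dq
                frac -= q                              # accumulate deprotonated fraction
            return frac / n_sweeps

        # Three acidic sites on a line, 10 Angstrom apart (invented geometry)
        pos = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
        print(titrate(pH=5.0, pKa=np.array([4.0, 4.4, 4.8]), pos=pos))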

  19. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optics communication systems, fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with a soft-decision decoding (SDD) algorithm, rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique, and demonstrate its flexibility and good performance, exhibiting no error floor at BER down to 10^-15 over the entire code rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.

  20. Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06

    NASA Astrophysics Data System (ADS)

    Park, Jong Hwan; Lee, Dong Hoon

    In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme which can support an unbounded number of identity levels. This property is particularly useful in providing forward secrecy by embedding time components within hierarchical identities. In this paper we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt the ciphertext using his or her own private key. The analysis is similarly applied to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.

  1. Linearly first- and second-order, unconditionally energy stable schemes for the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu

    2017-02-01

    In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank–Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.

  2. Shear-induced aggregation dynamics in a polymer microrod suspension

    NASA Astrophysics Data System (ADS)

    Kumar, Pramukta S.

    A non-Brownian suspension of micron-scale rods is found to exhibit reversible shear-driven formation of disordered aggregates, resulting in dramatic viscosity enhancement at low shear rates. Aggregate formation is imaged at low magnification using a combined rheometer and fluorescence microscope system. The size and structure of these aggregates are found to depend on shear rate and concentration, with larger aggregates present at lower shear rates and higher concentrations. Quantitative measurements of the early-stage aggregation process are modeled by collision-driven growth of porous structures, which shows that the aggregate density increases with shear rate. A Krieger-Dougherty type constitutive relation and steady-state viscosity measurements are used to estimate the intrinsic viscosity of the complex structures developed under shear. Higher-magnification images are collected and used to validate the aggregate size versus density relationship, as well as to obtain particle flow fields via PIV. The flow fields provide a tantalizing view of the fluctuations involved in the aggregation process. Interaction strength is estimated via contact force measurements and JKR theory and is found to be extremely strong in comparison to the shear forces present in the system, estimated using hydrodynamic arguments. All of the results are then combined to produce a consistent conceptual model of aggregation in the system that features testable consequences. These results represent a direct, quantitative, experimental study of aggregation and viscosity enhancement in a rod suspension, and demonstrate a strategy for inferring inaccessible microscopic geometric properties of a dynamic system through the combination of quantitative imaging and rheology.
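
    The constitutive relation referred to has the Krieger-Dougherty form, eta = eta0 * (1 - phi/phi_max)^(-[eta]*phi_max). A small sketch follows, with parameter values chosen purely for illustration rather than fitted to the paper's data.

        import numpy as np

        def krieger_dougherty(phi, eta0=1.0e-3, phi_max=0.64, intrinsic=2.5):
            """Krieger-Dougherty viscosity: eta0 * (1 - phi/phi_max)**(-intrinsic*phi_max).

            intrinsic = 2.5 recovers Einstein's dilute limit for hard spheres;
            fitting a larger effective value mimics porous aggregates that
            immobilize fluid and raise the effective volume fraction.
            """
            phi = np.asarray(phi, dtype=float)
            return eta0 * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

        # Viscosity diverges as the effective volume fraction of porous
        # aggregates approaches phi_max:
        for phi in (0.05, 0.20, 0.40, 0.60):
            print(f"phi = {phi:.2f}: eta = {krieger_dougherty(phi) * 1e3:.2f} mPa s")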

  3. Program scheme using common source lines in channel stacked NAND flash memory with layer selection by multilevel operation

    NASA Astrophysics Data System (ADS)

    Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook

    2018-02-01

    To obtain high channel boosting potential and reduce program disturbance in channel-stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme can be achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. The simulations reveal that the program disturbance characteristics are effectively improved by the proposed scheme.

  4. X-ray tests of a two-dimensional stigmatic imaging scheme with variable magnifications

    DOE PAGES

    Lu, J.; Bitter, M.; Hill, K. W.; ...

    2014-07-22

    A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. In this scheme, the Bragg angles and the radii of curvature of the two crystals are matched to eliminate astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as verified by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d-spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. Finally, this imaging scheme is expected to be of interest for the two-dimensional imaging of laser-produced plasmas.

  5. Large space structure model reduction and control system design based upon actuator and sensor influence functions

    NASA Technical Reports Server (NTRS)

    Yam, Y.; Lang, J. H.; Johnson, T. L.; Shih, S.; Staelin, D. H.

    1983-01-01

    A model reduction procedure based on aggregation with respect to sensor and actuator influences rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the aggregated and residual states are derived. These expressions lead to the development of control system design constraints which are sufficient to guarantee, to within the validity of the perturbations, that the residual states are not destabilized by control systems designed from the reduced model. A numerical example is provided to illustrate the application of the aggregation and control system design method.
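
    As a generic illustration of influence-based reduction (as opposed to modal truncation), the sketch below projects a second-order system onto an orthonormal basis spanned by the actuator and sensor influence columns. The matrices are random stand-ins, and the plain Galerkin projection shown is an assumption for illustration; the paper's exact aggregation and its spillover perturbation analysis are not reproduced here.

```python
# Sketch: reduce M q'' + K q = B u, y = C q by projecting onto a basis
# spanned by the actuator (B) and sensor (C^T) influence vectors.
# Matrices are random stand-ins; the projection step is the point.
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 20, 2, 2                       # states, actuators, sensors
M = np.eye(n)                            # mass matrix
K = rng.standard_normal((n, n))
K = K @ K.T + n * np.eye(n)              # symmetric positive-definite stiffness
B = rng.standard_normal((n, m))          # actuator influence
C = rng.standard_normal((p, n))          # sensor influence

# Orthonormal basis for span{B, C^T} via QR decomposition
V, _ = np.linalg.qr(np.hstack([B, C.T]))

# Galerkin-projected (aggregated) reduced model
M_r = V.T @ M @ V
K_r = V.T @ K @ V
B_r = V.T @ B
C_r = C @ V
print("reduced order:", M_r.shape[0])
```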

  6. Transient shear viscosity of weakly aggregating polystyrene latex dispersions

    NASA Astrophysics Data System (ADS)

    de Rooij, R.; Potanin, A. A.; van den Ende, D.; Mellema, J.

    1994-04-01

    The transient behavior of the viscosity (stress growth) of a weakly aggregating polystyrene latex dispersion after a step from a high shear rate to a lower shear rate has been measured and modeled. Single particles cluster together into spherical fractal aggregates. The steady state size of these aggregates is determined by the shear stresses exerted on the latter by the flow field. The restructuring process taking place when going from a starting situation with monodisperse spherical aggregates to larger monodisperse spherical aggregates is described by the capture of primary fractal aggregates by growing aggregates until a new steady state is reached. It is assumed that the aggregation mechanism is diffusion limited. The model is valid if the radii of primary aggregates Rprim are much smaller than the radii of the growing aggregates. Fitting the model to experimental data at two volume fractions and a number of step sizes in shear rate yielded physically reasonable values of Rprim at fractal dimensions 2.1≤df≤2.2. The latter range is in good agreement with the range 2.0≤df≤2.3 obtained from steady shear results. The experimental data have also been fitted to a numerical solution of the diffusion equation for primary aggregates for a cell model with moving boundary, also yielding 2.1≤df≤2.2. The range for df found from both approaches agrees well with the range df≊2.1-2.2 determined from computer simulations on diffusion-limited aggregation including restructuring or thermal breakup after formation of bonds. Thus a simple model has been put forward which may capture the basic features of the aggregating model dispersion on a microstructural level and leads to physically acceptable parameter values.

  7. Fluid-structure interaction with the entropic lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.

    2018-02-01

    We propose a fluid-structure interaction (FSI) scheme using the entropic multi-relaxation time lattice Boltzmann (KBC) model for the fluid domain in combination with a nonlinear finite element solver for the structural part. We show the validity of the proposed scheme for various challenging setups by comparison to literature data. Beyond validation, we extend the KBC model to multiphase flows and couple it with a finite element method (FEM) solver. Robustness and viability of the entropic multi-relaxation time model for complex FSI applications is shown by simulations of droplet impact on elastic superhydrophobic surfaces.

  8. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A. M.; Lowry, R. K.

    2012-12-01

    The Natural Environment Research Council (NERC) Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to the marine environmental sciences since 2006 (version 0), with version 1 being introduced in 2007. It has been used for metadata mark-up with verifiable content; populating dynamic drop-down lists; semantic cross-walk between metadata schemata; so-called smart search; and the semantic enablement of Open Geospatial Consortium (OGC) Web Processing Services in the NERC Data Grid and the European Commission SeaDataNet, Geo-Seas, and European Marine Observation and Data Network (EMODnet) projects. The NVS is based on the Simple Knowledge Organization System (SKOS) model. SKOS is built around the "concept", which it defines as a "unit of thought", that is, an idea or notion such as "oil spill". Following a version change for SKOS in 2009, there was a desire to upgrade the NVS to incorporate the changes. This version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri, and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included: the removal of the potential for multiple identifiers for the same concept, to ensure consistent addressing of concepts; the addition of content and technical governance information in the payload documents, to provide an audit trail to users of NVS content; the removal of XML snippets from concept definitions, in order to correctly validate XML serializations of the SKOS; the addition of the ability to map into external knowledge organization systems, in order to extend the knowledge base; a more truly RESTful approach to URL access to the NVS, to make the development of applications on top of the NVS easier; and support for multiple human languages, to increase the user base of the NVS. Version 2 of the NVS (NVS2.0) underpins the semantic layer for the Open Service Network for Marine Environmental Data (NETMAR) project, funded by the European Commission under the Seventh Framework Programme. Within NETMAR, NVS2.0 has been used for semantic validation of inputs to chained OGC Web Processing Services, smart discovery of data and services, and integration of data from distributed nodes of the International Coastal Atlas Network. Since its deployment, NVS2.0 has been adopted within the European SeaDataNet community's software products, which has significantly increased the usage of the NVS2.0 Application Programming Interface (API), as illustrated in Table 1. Here we present the results of upgrading the NVS to version 2 and show applications which have been built on top of the NVS2.0 API, including a SPARQL endpoint and a hierarchical catalogue of oceanographic hardware. Table 1. NVS2.0 API usage by month from 467 unique IP addresses.

  9. Systematic Review of Methods in Low-Consensus Fields: Supporting Commensuration through `Construct-Centered Methods Aggregation' in the Case of Climate Change Vulnerability Research.

    PubMed

    Delaney, Aogán; Tamás, Peter A; Crane, Todd A; Chesterman, Sabrina

    2016-01-01

    There is increasing interest in using systematic review to synthesize evidence on the social and environmental effects of and adaptations to climate change. Use of systematic review for evidence in this field is complicated by the heterogeneity of methods used and by uneven reporting. In order to facilitate synthesis of results and the design of subsequent research, a method, construct-centered methods aggregation, was designed to 1) provide a transparent, valid and reliable description of research methods, 2) support comparability of primary studies and 3) contribute to a shared empirical basis for improving research practice. Rather than taking research reports at face value, research designs are reviewed through inductive analysis. This involves bottom-up identification of constructs, definitions and operationalizations; assessment of concepts' commensurability through comparison of definitions; identification of theoretical frameworks through patterns of construct use; and integration of transparently reported and valid operationalizations into ideal-type research frameworks. Through the integration of reliable bottom-up inductive coding from operationalizations and top-down coding driven from stated theory with expert interpretation, construct-centered methods aggregation enabled both resolution of heterogeneity within identically named constructs and merging of differently labeled but identical constructs. These two processes allowed transparent, rigorous and contextually sensitive synthesis of the research presented in an uneven set of reports undertaken in a heterogeneous field. If adopted more broadly, construct-centered methods aggregation may contribute to the emergence of a valid, empirically grounded description of methods used in primary research. These descriptions may function as a set of expectations that improves the transparency of reporting and as an evolving comprehensive framework that supports both interpretation of existing and design of future research.

  10. Energy efficient data representation and aggregation with event region detection in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Torsha

    Unlike conventional networks, wireless sensor networks (WSNs) are limited in power, have much smaller memory buffers, and possess relatively slower processing speeds. These characteristics necessitate minimum transfer and storage of information in order to prolong the network lifetime. In this dissertation, we exploit the spatio-temporal nature of sensor data to approximate the current values of the sensors based on readings obtained from neighboring sensors and from each sensor itself. We propose a Tree-based polynomial REGression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. Instead of aggregated data, a polynomial function (P) is computed by the regression function, TREG. The coefficients of P are then passed on to achieve the following goals: (i) the sink can obtain attribute values in regions devoid of sensor nodes, and (ii) readings over any portion of the region can be obtained at one time by querying the root of the tree. As the size of the data packet from each tree node to its parent remains constant, the proposed scheme scales very well with growing network density or increased coverage area. Since physical attributes exhibit a gradual change over time, we propose an iterative scheme, UPDATE_COEFF, which obviates the need to perform the regression function repeatedly and uses approximations based on previous readings. Extensive simulations are performed on real-world data to demonstrate the effectiveness of our proposed aggregation algorithm, TREG. Results reveal that for a network density of 0.0025 nodes/m², a complete binary tree of depth 4 keeps the absolute error below 6%. A data compression ratio of about 0.02 is achieved using our proposed algorithm, which is almost independent of the tree depth. In addition, our proposed updating scheme makes the aggregation process faster while maintaining the desired error bounds. We also propose a Polynomial-based scheme that addresses the problem of Event Region Detection (PERD) for WSNs. When a single event occurs, a child of the tree sends a Flagged Polynomial (FP) to its parent if the readings approximated by it fall outside the data range defining the existing phenomenon. After the aggregation process is over, the root, holding the two polynomials P and FP, can be queried for FP (approximating the new event region) instead of flooding the whole network. For multiple such events, instead of computing a polynomial corresponding to each new event, areas with the same data range are combined by the corresponding tree nodes and the aggregated coefficients are passed on. Results reveal that a new event can be detected by PERD while the error in detection remains constant and below a threshold of 10%. As the node density increases, the accuracy and delay of event detection remain almost constant, making PERD highly scalable. Whenever an event occurs in a WSN, data are generated by nearby sensors, and relaying the data to the base station (BS) makes sensors closer to the BS run out of energy at a much faster rate than sensors in other parts of the network. This gives rise to an unequal distribution of residual energy in the network and makes those sensors with lower remaining energy die at a much faster rate than others. We propose a scheme for enhancing network lifetime using mobile cluster heads (CHs) in a WSN. To keep the remaining energy distributed more evenly, some energy-rich nodes are designated as CHs, which move in a controlled manner towards sensors rich in energy and data. This eliminates the multihop transmission required by the static sensors and thus increases the overall lifetime of the WSN. We combine the ideas of clustering and mobile CHs to first form clusters of static sensor nodes. A collaborative strategy among the CHs further increases the lifetime of the network. The time taken for transmitting data to the BS is reduced further by making the CHs follow a connectivity strategy that always maintains a connected path to the BS. Spatial correlation of sensor data can be further exploited for dynamic channel selection in cellular communication. In such a scenario, within a licensed band, wireless sensors can be deployed (each sensor tuned to a frequency of the channel at a particular time) to sense the interference power of the frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary spatially with the frequency of the sub-channel. We propose a scheme for fitting the sub-channel frequencies and corresponding ITs to a regression model for calculating the IT of a random sub-channel, for further analysis of the channel interference at the base station. Our scheme, based on the readings reported by sensors, helps in dynamic channel selection (S-DCS) in the extended C-band for assignment to unlicensed secondary users. S-DCS proves to be economic from the energy consumption point of view, and it achieves accuracy with an error bound within 6.8%. Moreover, users are assigned empty sub-channels without actually probing them, incurring minimum delay in the process. The overall channel throughput is maximized along with fairness to individual users.
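
    A minimal sketch of the polynomial-regression idea underlying TREG: fit a low-order bivariate polynomial to (x, y, reading) triples and transmit only the coefficients, so the sink can evaluate the field anywhere, including regions devoid of sensors. The polynomial order, field, and data below are illustrative assumptions, and the hierarchical tree aggregation is omitted.

```python
# Sketch: approximate a sensor field z(x, y) by a bivariate polynomial
# and transmit only the coefficients (the TREG idea; tree logic omitted).
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)     # sensor positions
z = 20 + 0.1 * x - 0.05 * y + 0.001 * x * y + rng.normal(0, 0.5, 50)

# Design matrix for P(x, y) = c0 + c1*x + c2*y + c3*x*y
A = np.column_stack([np.ones_like(x), x, y, x * y])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

# The sink can now query any location, including regions with no sensors
def predict(px, py, c=coeffs):
    return c[0] + c[1] * px + c[2] * py + c[3] * px * py

print("estimated reading at (30, 70):", round(predict(30.0, 70.0), 2))
```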

  11. Benefits of a 4th Ice Class in the Simulated Radar Reflectivities of Convective Systems Using a Bulk Microphysics Scheme

    NASA Technical Reports Server (NTRS)

    Lang, Stephen E.; Tao, Wei-Kuo; Chern, Jiun-Dar; Wu, Di; Li, Xiaowen

    2015-01-01

    Numerous cloud microphysical schemes designed for cloud and mesoscale models are currently in use, ranging from simple bulk to multi-moment, multi-class to explicit bin schemes. This study details the benefits of adding a 4th ice class (hail) to an already-improved 3-class ice bulk microphysics scheme developed for the Goddard Cumulus Ensemble model based on Rutledge and Hobbs (1983, 1984). Besides the addition and modification of several hail processes from Lin et al. (1983), further modifications were made to the 3-ice processes, including allowing greater ice supersaturation and mitigating spurious evaporation/sublimation in the saturation adjustment scheme, allowing graupel/hail to become snow via vapor growth and hail to become graupel via riming, and the inclusion of a rain evaporation correction and vapor diffusivity factor. The improved 3-ice snow/graupel size-mapping schemes were adjusted to be more stable at higher mixing ratios and to increase the aggregation effect for snow. A snow density mapping was also added. The new scheme was applied to an intense continental squall line and a weaker, loosely organized continental case using three different hail intercepts. Peak simulated reflectivities agree well with radar for both the intense and the weaker case and were better than earlier 3-ice versions when using a moderate and a large intercept for hail, respectively. Simulated reflectivity distributions versus height were also improved relative to radar in both cases compared to earlier 3-ice versions. The bin-based rain evaporation correction affected the squall line case more but did not change the overall agreement in reflectivity distributions.

  12. Interference between Coulombic and CT-mediated couplings in molecular aggregates: H- to J-aggregate transformation in perylene-based π-stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hestand, Nicholas J.; Spano, Frank C.

    2015-12-28

    The spectroscopic differences between J- and H-aggregates are traditionally attributed to the spatial dependence of the Coulombic coupling, as originally proposed by Kasha. However, in tightly packed molecular aggregates, wave functions on neighboring molecules overlap, leading to an additional charge-transfer (CT) mediated exciton coupling with a vastly different spatial dependence. The latter is governed by the nodal patterns of the molecular LUMOs and HOMOs, from which the electron (t_e) and hole (t_h) transfer integrals derive. The sign of the CT-mediated coupling depends on the sign of the product t_e t_h and is therefore highly sensitive to small (sub-Angstrom) transverse displacements or slips. Given that Coulombic and CT-mediated couplings exist simultaneously in tightly packed molecular systems, the interference between the two must be considered when defining J- and H-aggregates. Generally, such π-stacked aggregates do not abide by the traditional classification scheme of Kasha: for example, even when the Coulomb coupling is strong, the presence of a similarly strong but destructively interfering CT-mediated coupling results in "null-aggregates" which spectroscopically resemble uncoupled molecules. Based on a Frenkel/CT Holstein Hamiltonian that takes into account both sources of electronic coupling as well as intramolecular vibrations, vibronic spectral signatures are developed for integrated Frenkel/CT systems in both the perturbative and resonance regimes. In the perturbative regime, the sign of the lowest exciton band curvature, which rigorously defines J- and H-aggregation, is directly tracked by the ratio of the first two vibronic peak intensities. Even in the resonance regime, the vibronic ratio remains a useful tool to evaluate the J or H nature of the system. The theory developed is applied to the reversible H- to J-aggregate transformations recently observed in several perylene bisimide systems.

  13. Real-time amyloid aggregation monitoring with a photonic crystal-based approach.

    PubMed

    Santi, Sara; Musi, Valeria; Descrovi, Emiliano; Paeder, Vincent; Di Francesco, Joab; Hvozdara, Lubos; van der Wal, Peter; Lashuel, Hilal A; Pastore, Annalisa; Neier, Reinhard; Herzig, Hans Peter

    2013-10-21

    We propose the application of a new label-free optical technique based on photonic nanostructures to monitor in real time the amyloid-beta 1-42 (Aβ(1-42)) fibrillization, including the early stages of the aggregation process, which are related to the onset of Alzheimer's Disease (AD). The aggregation of Aβ peptides into amyloid fibrils has commonly been associated with neuronal death, which culminates in the clinical features of the incurable degenerative AD. Recent studies revealed that cell toxicity is determined by the formation of soluble oligomeric forms of Aβ peptides in the early stages of aggregation. At this phase, classical amyloid detection techniques lack sensitivity. Upon chemical passivation of the sensing surface by means of polyethylene glycol, the proposed approach allows an accurate, real-time monitoring of the refractive index variation of the solution in which Aβ(1-42) peptides are aggregating. This measurement is directly related to the aggregation state of the peptide throughout oligomerization and subsequent fibrillization. Our findings open new perspectives in the understanding of the dynamics of amyloid formation, and validate this approach as a new and powerful method to screen aggregation at early stages. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A hypothetical hierarchical mechanism of the self-assembly of the Escherichia coli RNA polymerase σ(70) subunit.

    PubMed

    Koroleva, O N; Dubrovin, E V; Tolstova, A P; Kuzmina, N V; Laptinskaya, T V; Yaminsky, I V; Drutsa, V L

    2016-02-21

    The diverse morphology of aggregates of amyloidogenic proteins has been attracting much attention in the last few years, and there is still no complete understanding of the relationships between the various types of aggregates. In this work, we propose a model that universally explains the formation of morphologically different (wormlike and rodlike) aggregates, using as an example the σ(70) subunit of RNA polymerase, which has recently been shown to form amyloid fibrils. Aggregates were studied using AFM in solution and depolarized dynamic light scattering. The obtained results demonstrate comparably low Young's moduli of the wormlike structures (7.8-12.3 MPa), indicating a less structured aggregation of monomeric proteins than that typical for β-sheet formation. To shed light on the molecular interactions of the protein during aggregation, the early stages of fibrillization of the σ(70) subunit were modeled using all-atom molecular dynamics. Simulations have shown that the σ(70) subunit is able to form quasi-symmetric extended dimers, which may further interact with each other and grow linearly. The proposed general model explains the different pathways of σ(70) subunit aggregation and may be valid for other amyloid proteins.

  15. Partner aggression and problem drinking across the lifespan: how much do they decline?

    PubMed

    O'Leary, K Daniel; Woodin, Erica M

    2005-11-01

    Cross-sectional analyses from nationally representative samples demonstrate significant age-related trends in partner aggression and problem drinking. Both behaviors are most prevalent in the early to mid-twenties and increasingly less common thereafter. Aggregate associations based on the percentage of individuals displaying the behavior in each age range are dramatically stronger than those found when correlating individuals' ages and behavior. Multilevel modeling demonstrates that group-level effects do not mask associations found at the level of the individual for either problem drinking or partner aggression. An analysis of recent abstracts from psychology journals showed that issues of aggregate and individual data are rarely if ever discussed, and even well-known statistics books in psychology rarely discuss such issues. The interpretation of aggregate data will become increasingly important as psychologists themselves, and in collaboration with epidemiologists and sociologists, gain access to large data sets that allow for data aggregation. Both aggregate and individual analyses are valid, although they provide answers to different questions. Individual analyses are necessary for predicting individual behavior; aggregate analyses are useful in policy planning for large-scale prevention and intervention. Strengths and limitations of cross-sectional community samples and aggregate data are also discussed.

  16. Acid-induced aggregation propensity of nivolumab is dependent on the Fc.

    PubMed

    Liu, Boning; Guo, Huaizu; Xu, Jin; Qin, Ting; Xu, Lu; Zhang, Junjie; Guo, Qingcheng; Zhang, Dapeng; Qian, Weizhu; Li, Bohua; Dai, Jianxin; Hou, Sheng; Guo, Yajun; Wang, Hao

    2016-01-01

    Nivolumab, an anti-programmed death (PD)1 IgG4 antibody, has shown notable success as a cancer treatment. Here, we report that nivolumab was susceptible to aggregation during manufacturing, particularly in routine purification steps. Our experimental results showed that exposure to low pH caused aggregation of nivolumab, and the Fc was primarily responsible for an acid-induced unfolding phenomenon. To compare the intrinsic propensity of acid-induced aggregation for other IgGs subclasses, tocilizumab (IgG1), panitumumab (IgG2) and atezolizumab (aglyco-IgG1) were also investigated. The accurate pH threshold of acid-induced aggregation for individual IgG Fc subclasses was identified and ranked as: IgG1 < aglyco-IgG1 < IgG2 < IgG4. This result was cross-validated by thermostability and conformation analysis. We also assessed the effect of several protein stabilizers on nivolumab, and found mannitol ameliorated the acid-induced aggregation of the molecule. Our results provide valuable insight into downstream manufacturing process development, especially for immune checkpoint modulating molecules with a human IgG4 backbone.

  17. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics, namely non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource-constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required, with consideration of sensor node constraints. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
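
    PIYAS's on-flash format is not specified in this abstract, so the sketch below only illustrates the log-structured principle it builds on: writes are never performed in place; each update appends to the next free page and remaps the logical page, leaving the stale copy for garbage collection. All names and sizes here are illustrative.

```python
# Toy log-structured page store: writes always append to the next free page;
# a mapping table tracks each logical page's latest physical location.
# Illustrates the general principle only, not the PIYAS on-flash format.

class LogStructuredStore:
    def __init__(self, num_pages):
        self.flash = [None] * num_pages   # physical pages
        self.mapping = {}                 # logical page -> physical page
        self.next_free = 0

    def write(self, logical_page, data):
        if self.next_free >= len(self.flash):
            raise RuntimeError("log full: garbage collection needed")
        stale = self.mapping.get(logical_page)   # old copy becomes stale
        self.flash[self.next_free] = data
        self.mapping[logical_page] = self.next_free
        self.next_free += 1
        return stale  # stale physical page, reclaimable by the GC

    def read(self, logical_page):
        return self.flash[self.mapping[logical_page]]

store = LogStructuredStore(8)
store.write(0, "t=1 temp=21.5")
store.write(0, "t=2 temp=21.7")   # update appends; logical page 0 remaps
print(store.read(0))              # -> "t=2 temp=21.7"
```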

  18. A Novel Position Compensation Scheme for Cable-Pulley Mechanisms Used in Laparoscopic Surgical Robots

    PubMed Central

    Liang, Yunlei; Du, Zhijiang; Sun, Lining

    2017-01-01

    The tendon-driven mechanism using a cable and pulley to transmit power is adopted by many surgical robots. However, backlash hysteresis objectively exists in cable-pulley mechanisms, and this nonlinear problem is a great challenge for precise position control during surgical procedures. Previous studies mainly focused on the transmission characteristics of the cable-driven system and constructed transmission models under particular assumptions to solve nonlinear problems. However, these approaches are limited because the modeling process is complex and the transmission models lack general applicability. This paper presents a novel position compensation control scheme to reduce the impact of backlash hysteresis on the positioning accuracy of surgical robots' end-effectors. A position compensation scheme using a support vector machine based on feedforward control is presented to reduce the position tracking error. To validate the proposed approach, experimental validations are conducted on our cable-pulley system and comparative experiments are carried out. The results show that the proposed scheme remarkably improves performance in reducing the positioning error. PMID:28974011

  19. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  20. Public-key quantum digital signature scheme with one-time pad private-key

    NASA Astrophysics Data System (ADS)

    Chen, Feng-Lin; Liu, Wan-Fang; Chen, Su-Gen; Wang, Zhi-Hua

    2018-01-01

    A quantum digital signature scheme based on a public-key quantum cryptosystem is proposed for the first time. In the scheme, the verification public-key is derived from the signer's identity information (such as an e-mail address) on the foundation of identity-based encryption, and the signature private-key is generated by a one-time pad (OTP) protocol. The public-key and private-key pair consists of classical bits, but the signature cipher consists of quantum qubits. After the signer announces the public-key and generates the final quantum signature, each verifier can publicly verify whether the signature is valid with the public-key and the quantum digital digest. Analysis results show that the proposed scheme satisfies non-repudiation and unforgeability. The information-theoretic security of the scheme is ensured by the indistinguishability of quantum states and the OTP protocol. Being based on a public-key cryptosystem, the proposed scheme is easier to realize than other quantum signature schemes under current technical conditions.
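
    The classical one-time-pad component of the private key is the easiest part of the scheme to make concrete. The sketch below shows OTP encryption and decryption of a classical bit string by XOR, assuming a truly random, single-use key as long as the message; the quantum signature and quantum digital digest stages are not modeled.

```python
# One-time pad over classical bits: c = m XOR k, m = c XOR k.
# The key must be truly random, as long as the message, and used once.
import secrets

def otp(bits: bytes, key: bytes) -> bytes:
    assert len(key) == len(bits), "OTP key must match message length"
    return bytes(b ^ k for b, k in zip(bits, key))

message = b"sign me"
key = secrets.token_bytes(len(message))   # fresh single-use key
cipher = otp(message, key)
assert otp(cipher, key) == message        # decryption recovers the message
print(cipher.hex())
```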

  1. An upwind space-time conservation element and solution element scheme for solving dusty gas flow model

    NASA Astrophysics Data System (ADS)

    Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul

    An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.
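
    For orientation, a generic first-order upwind numerical flux for a scalar conservation law u_t + f(u)_x = 0 is shown below; the paper derives its fluxes through the inner boundaries of conservation elements within the CE/SE framework, so this is only the underlying upwinding idea, not the paper's formula.

```latex
% Generic first-order upwind flux at the interface between cells L and R
% (illustrative; not the CE/SE construction used in the paper):
\hat{F}_{L/R} =
\begin{cases}
  f(u_L), & a \ge 0,\\
  f(u_R), & a < 0,
\end{cases}
\qquad
a = f'\!\left(\tfrac{u_L + u_R}{2}\right)
```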

  2. An Under-frequency Load Shedding Scheme with Continuous Load Control Proportional to Frequency Deviation

    NASA Astrophysics Data System (ADS)

    Li, Changgang; Sun, Yanli; Yu, Yawei

    2017-05-01

    Under-frequency load shedding (UFLS) is an important measure to counter the frequency drop caused by load-generation imbalance. In existing schemes, loads are shed by relays in a discontinuous way, which is the major reason for under-shedding and over-shedding problems. With the application of power electronics technology, some loads can be controlled continuously, and it is possible to improve UFLS with continuous loads. This paper proposes a UFLS scheme that sheds loads continuously. The load shedding amount is proportional to the frequency deviation before the frequency reaches its minimum during the transient process. The feasibility of the proposed scheme is analysed with an analytical system frequency response model. The impacts of governor droop, system inertia, and frequency threshold on the performance of the proposed UFLS scheme are discussed. Cases are demonstrated to validate the proposed scheme by comparing it with conventional UFLS schemes.
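
    A minimal sketch of the continuous shedding rule described above: below a frequency threshold, the shed amount grows in proportion to the frequency deviation, up to the ceiling set by the controllable load. The gain, threshold, and ceiling values are illustrative assumptions, not values from the paper.

```python
# Continuous under-frequency load shedding: shed an amount proportional
# to the frequency deviation below a threshold. All numbers illustrative.

F_THRESHOLD = 49.5   # shedding starts below this frequency (Hz)
GAIN = 0.4           # per-unit load shed per Hz of deviation
MAX_SHED = 0.3       # ceiling: controllable fraction of total load

def shed_fraction(freq_hz: float) -> float:
    """Fraction of total load to shed at the measured frequency."""
    deviation = F_THRESHOLD - freq_hz
    if deviation <= 0.0:
        return 0.0                        # frequency healthy: no shedding
    return min(GAIN * deviation, MAX_SHED)

for f in (49.8, 49.4, 49.0, 48.5):
    print(f"{f:.1f} Hz -> shed {shed_fraction(f):.0%} of load")
```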

  3. A Principal-Agent Perspective on Counterinsurgency Situations

    DTIC Science & Technology

    2011-06-01

    complicated because the agent knows about the schemes that have been offered by all principals at the time that he communicates with any of them. Bernheim … If there exists an equilibrium, each principal must select an aggregate offer that implements the equilibrium action at minimum cost. Bernheim … [21] B.D. Bernheim and M.D. Whinston. "Common Agency". Econometrica, 54(4):923-942, July 1986. [22] M. Peters. "Common Agency and the revelation

  4. Formation of active inclusion bodies induced by hydrophobic self-assembling peptide GFIL8.

    PubMed

    Wang, Xu; Zhou, Bihong; Hu, Weike; Zhao, Qing; Lin, Zhanglin

    2015-06-16

    In the last few decades, several groups have observed that proteins expressed as inclusion bodies (IBs) in bacteria could still be biologically active when terminally fused to an appropriate aggregation-prone partner such as pyruvate oxidase from Paenibacillus polymyxa (PoxB). More recently, we have demonstrated that three amphipathic self-assembling peptides, an alpha helical peptide 18A, a beta-strand peptide ELK16, and a surfactant-like peptide L6KD, have properties that induce target proteins into active IBs. We have developed an efficient protein expression and purification approach for these active IBs by introducing a self-cleavable intein molecule. In this study, the self-assembling peptide GFIL8 (GFILGFIL) with only hydrophobic residues was analyzed, and this peptide effectively induced the formation of cytoplasmic IBs in Escherichia coli when terminally attached to lipase A and amadoriase II. The protein aggregates in cells were confirmed by transmission electron microscopy analysis and retained ~50% of their specific activities relative to the native counterparts. We constructed an expression and separation coupled tag (ESCT) by incorporating an intein molecule, the Mxe GyrA intein. Soluble target proteins were successfully released from active IBs upon cleavage of the intein between the GFIL8 tag and the target protein, which was mediated by dithiothreitol. A variant of GFIL8, GFIL16 (GFILGFILGFILGFIL), improved the ESCT scheme by efficiently eliminating interference from the soluble intein-GFIL8 molecule. The yields of target proteins at the laboratory scale were 3.0-7.5 μg/mg wet cell pellet, which is comparable to the yields from similar ESCT constructs using 18A, ELK16, or the elastin-like peptide tag scheme. The all-hydrophobic self-assembling peptide GFIL8 induced the formation of active IBs in E. coli when terminally attached to target proteins. GFIL8 and its variant GFIL16 can act as a "pull-down" tag to produce purified soluble proteins with reasonable quantity and purity from active aggregates. Owing to the structural simplicity, strong hydrophobicity, and high aggregating efficiency, these peptides can be further explored for enzyme production and immobilization.

  5. A secure and efficient chaotic map-based authenticated key agreement scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav

    2014-10-01

    Advancements in network technology provide new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS usually faces various attacks, as its services are provided over the public network. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory. It enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that it is vulnerable to a denial-of-service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. Further, our aim is to propose a new chaotic map-based anonymous user authentication scheme for TMIS that overcomes the weaknesses of Jiang et al.'s scheme while retaining its original merits. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.

  6. Pulse design for multilevel systems by utilizing Lie transforms

    NASA Astrophysics Data System (ADS)

    Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan

    2018-03-01

    We put forward a scheme to design pulses to manipulate multilevel systems with Lie transforms. A formula for the reverse construction of a control Hamiltonian is given and is applied to pulse design in three- and four-level systems as examples. To demonstrate the validity of the scheme, we perform numerical simulations, which show that the population transfers for cascaded three-level and N-type four-level Rydberg atoms can be completed successfully with high fidelities. Therefore, the scheme may benefit quantum information tasks based on multilevel systems.

  7. A Novel Quantum Blind Signature Scheme with Four-Particle Cluster States

    NASA Astrophysics Data System (ADS)

    Fan, Ling

    2016-03-01

    In an arbitrated quantum signature scheme, the signer signs the message and the receiver verifies the signature's validity with the assistance of the arbitrator. We present an arbitrated quantum blind signature scheme based on measuring four-particle cluster states and coding. By using the special relationship of four-particle cluster states, we can not only ensure the security of the quantum signature, but also guarantee the anonymity of the message owner. The scheme has wide application to E-payment systems, E-government, E-business, etc.

  8. Erratum: Development, appraisal, validation and implementation of a consensus protocol for the assessment of cerebral amyloid angiopathy in post-mortem brain tissue.

    PubMed

    Love, Seth; Chalmers, Katy; Ince, Paul; Esiri, Margaret; Attems, Johannes; Kalaria, Raj; Jellinger, Kurt; Yamada, Masahito; McCarron, Mark; Minett, Thais; Matthews, Fiona; Greenberg, Steven; Mann, David; Kehoe, Patrick Gavin

    2015-01-01

    In a collaboration involving 11 groups with research interests in cerebral amyloid angiopathy (CAA), we used a two-stage process to develop and in turn validate a new consensus protocol and scoring scheme for the assessment of CAA and associated vasculopathic abnormalities in post-mortem brain tissue. Stage one used an iterative Delphi-style survey to develop the consensus protocol. The resultant scoring scheme was tested on a series of digital images and paraffin sections that were circulated blind to a number of scorers. The scoring scheme and choice of staining methods were refined by open-forum discussion. The agreed protocol scored parenchymal and meningeal CAA on a 0-3 scale, capillary CAA as present/absent, and vasculopathy on a 0-2 scale, in the four cortical lobes, which were scored separately. A further assessment involving three centres was then undertaken. Neuropathologists in three centres (Bristol, Oxford and Sheffield) independently scored sections from 75 cases (25 from each centre) and high inter-rater reliability was demonstrated. Stage two used the results of the three-centre assessment to validate the protocol by investigating previously described associations between APOE genotype (previously determined) and both CAA and vasculopathy. The association of capillary CAA, with or without arteriolar CAA, with APOE ε4 was confirmed. However, APOE ε2 was also found to be a strong risk factor for the development of CAA, not only in AD but also in elderly non-demented controls. Further validation of this protocol and scoring scheme is encouraged, to aid its wider adoption and to facilitate collaborative and replication studies of CAA. [This corrects the article on p. 19 in vol. 3, PMID: 24754000.]

  9. Development, appraisal, validation and implementation of a consensus protocol for the assessment of cerebral amyloid angiopathy in post-mortem brain tissue

    PubMed Central

    Love, Seth; Chalmers, Katy; Ince, Paul; Esiri, Margaret; Attems, Johannes; Kalaria, Raj; Jellinger, Kurt; Yamada, Masahito; McCarron, Mark; Minett, Thais; Matthews, Fiona; Greenberg, Steven; Mann, David; Kehoe, Patrick Gavin

    2015-01-01

    In a collaboration involving 11 groups with research interests in cerebral amyloid angiopathy (CAA), we used a two-stage process to develop and in turn validate a new consensus protocol and scoring scheme for the assessment of CAA and associated vasculopathic abnormalities in post-mortem brain tissue. Stage one used an iterative Delphi-style survey to develop the consensus protocol. The resultant scoring scheme was tested on a series of digital images and paraffin sections that were circulated blind to a number of scorers. The scoring scheme and choice of staining methods were refined by open-forum discussion. The agreed protocol scored parenchymal and meningeal CAA on a 0-3 scale, capillary CAA as present/absent, and vasculopathy on a 0-2 scale, in the four cortical lobes, which were scored separately. A further assessment involving three centres was then undertaken. Neuropathologists in three centres (Bristol, Oxford and Sheffield) independently scored sections from 75 cases (25 from each centre) and high inter-rater reliability was demonstrated. Stage two used the results of the three-centre assessment to validate the protocol by investigating previously described associations between APOE genotype (previously determined) and both CAA and vasculopathy. The association of capillary CAA, with or without arteriolar CAA, with APOE ε4 was confirmed. However, APOE ε2 was also found to be a strong risk factor for the development of CAA, not only in AD but also in elderly non-demented controls. Further validation of this protocol and scoring scheme is encouraged, to aid its wider adoption and to facilitate collaborative and replication studies of CAA. PMID:26807344

  10. Development, appraisal, validation and implementation of a consensus protocol for the assessment of cerebral amyloid angiopathy in post-mortem brain tissue

    PubMed Central

    Love, Seth; Chalmers, Katy; Ince, Paul; Esiri, Margaret; Attems, Johannes; Jellinger, Kurt; Yamada, Masahito; McCarron, Mark; Minett, Thais; Matthews, Fiona; Greenberg, Steven; Mann, David; Kehoe, Patrick Gavin

    2014-01-01

    In a collaboration involving 11 groups with research interests in cerebral amyloid angiopathy (CAA), we used a two-stage process to develop and in turn validate a new consensus protocol and scoring scheme for the assessment of CAA and associated vasculopathic abnormalities in post-mortem brain tissue. Stage one used an iterative Delphi-style survey to develop the consensus protocol. The resultant scoring scheme was tested on a series of digital images and paraffin sections that were circulated blind to a number of scorers. The scoring scheme and choice of staining methods were refined by open-forum discussion. The agreed protocol scored parenchymal and meningeal CAA on a 0-3 scale, capillary CAA as present/absent, and vasculopathy on a 0-2 scale, in the four cortical lobes, which were scored separately. A further assessment involving three centres was then undertaken. Neuropathologists in three centres (Bristol, Oxford and Sheffield) independently scored sections from 75 cases (25 from each centre) and high inter-rater reliability was demonstrated. Stage two used the results of the three-centre assessment to validate the protocol by investigating previously described associations between APOE genotype (previously determined) and both CAA and vasculopathy. The association of capillary CAA, with or without arteriolar CAA, with APOE ε4 was confirmed. However, APOE ε2 was also found to be a strong risk factor for the development of CAA, not only in AD but also in elderly non-demented controls. Further validation of this protocol and scoring scheme is encouraged, to aid its wider adoption and to facilitate collaborative and replication studies of CAA. PMID:24754000

  11. Time cycle analysis and simulation of material flow in MOX process layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Saraswat, A.; Danny, K.M.

    The (U,Pu)O2 MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO2. The presence of high percentages of reprocessed PuO2 necessitates the design of an optimized fuel fabrication process line which addresses both production needs and regulatory norms regarding radiological safety criteria. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of software that simulates the material movement through the optimized process layout. Different material processing schemes have been devised, and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove box location are considered invalid. A valid scheme ensures adequate spacing between the production batches and at the same time meets the production target. The software can be further improved by accurately calculating material movement time through the glove box train. One important factor is considering material handling time with automation systems in place.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, J., E-mail: jlu@pppl.gov; Bitter, M.; Hill, K. W.

    A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. The Bragg angles and the radii of curvature of the two crystals of this imaging scheme are matched to eliminate the astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as determined by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d-spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. This imaging scheme is expected to be of interest for the two-dimensional imaging of laser-produced plasmas.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, J.; Bitter, M.; Hill, K. W.

    A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. We report that the Bragg angles and the radii of curvature of the two crystals of this imaging scheme are matched to eliminate the astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as determined by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d-spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. Finally, this imaging scheme is expected to be of interest for the two-dimensional imaging of laser-produced plasmas.

  14. An exponential time-integrator scheme for steady and unsteady inviscid flows

    NASA Astrophysics Data System (ADS)

    Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili

    2018-07-01

    An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) scheme, and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme does, while maintaining the expected speedup. Moreover, the PCEXP scheme is shown to achieve a computational efficiency comparable to that of the implicit schemes for steady flows.
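
    The paper's PCEXP scheme is Krylov-based and tailored to discontinuous Galerkin discretizations; as a self-contained stand-in, the sketch below implements a generic second-order exponential predictor-corrector of the ETD2RK (Cox-Matthews) type for dy/dt = A y + g(y), using dense matrix exponentials and assuming A is invertible.

```python
# Generic second-order exponential predictor-corrector (ETD2RK-style) for
# dy/dt = A y + g(y). Small dense-matrix sketch, not the paper's Krylov-based
# PCEXP; A is assumed invertible.
import numpy as np
from scipy.linalg import expm

def etd2rk_step(y, dt, A, g):
    I = np.eye(len(y))
    E = expm(dt * A)                           # matrix exponential e^{dt A}
    Ainv = np.linalg.inv(A)
    P1 = Ainv @ (E - I)                        # = dt * phi1(dt A)
    P2 = Ainv @ Ainv @ (E - I - dt * A) / dt   # = dt * phi2(dt A)
    y_pred = E @ y + P1 @ g(y)                 # exponential Euler predictor
    return y_pred + P2 @ (g(y_pred) - g(y))    # second-order corrector

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])                    # stiff linear part (invertible)
def g(y):
    return 0.1 * np.sin(y)                     # mild nonlinearity

y = np.array([1.0, 0.5])
for _ in range(10):
    y = etd2rk_step(y, 0.1, A, g)
print(y)
```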

  15. Scaling field data to calibrate and validate moderate spatial resolution remote sensing models

    USGS Publications Warehouse

    Baccini, A.; Friedl, M.A.; Woodcock, C.E.; Zhu, Z.

    2007-01-01

    Validation and calibration are essential components of nearly all remote sensing-based studies. In both cases, ground measurements are collected and then related to the remote sensing observations or model results. In many situations, and particularly in studies that use moderate resolution remote sensing, a mismatch exists between the sensor's field of view and the scale at which in situ measurements are collected. The use of in situ measurements for model calibration and validation, therefore, requires a robust and defensible method to spatially aggregate ground measurements to the scale at which the remotely sensed data are acquired. This paper examines this challenge and specifically considers two different approaches for aggregating field measurements to match the spatial resolution of moderate spatial resolution remote sensing data: (a) landscape stratification; and (b) averaging of fine spatial resolution maps. The results show that an empirically estimated stratification based on a regression tree method provides a statistically defensible and operational basis for performing this type of procedure. 
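
    The second aggregation option, averaging of fine spatial resolution maps, reduces to block-averaging a fine raster onto the coarse sensor grid when the two grids nest exactly. The sketch below assumes an exact 10x nesting; real sensors would additionally require handling the point spread function and grid misalignment.

```python
# Aggregate a fine-resolution map (e.g., 1 m) to a coarse grid (e.g., 10 m)
# by block averaging. Factor and array sizes are illustrative.
import numpy as np

def block_average(fine: np.ndarray, factor: int) -> np.ndarray:
    rows, cols = fine.shape
    assert rows % factor == 0 and cols % factor == 0
    return fine.reshape(rows // factor, factor,
                        cols // factor, factor).mean(axis=(1, 3))

fine_map = np.random.default_rng(2).uniform(0, 1, size=(100, 100))
coarse_map = block_average(fine_map, 10)   # 100x100 -> 10x10
print(coarse_map.shape)
```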

  16. Shaken, but not stirred: how vortical flow drives small-scale aggregations of gyrotactic phytoplankton

    NASA Astrophysics Data System (ADS)

    Barry, Michael; Durham, William; Climent, Eric; Stocker, Roman

    2011-11-01

    Coastal ocean observations reveal that motile phytoplankton form aggregations at the Kolmogorov scale (mm-cm), whereas non-motile cells do not. We propose a new mechanism for the formation of this small-scale patchiness based on the interplay of turbulence and gyrotactic motility. Counterintuitively, turbulence does not stir a plankton suspension to homogeneity but drives aggregations instead. Through controlled laboratory experiments we show that the alga Heterosigma akashiwo rapidly forms aggregations in a cavity-driven vortical flow that approximates Kolmogorov eddies. Gyrotactic motility is found to be the key ingredient for aggregation, as non-motile cells remain randomly distributed. Observations are in remarkable agreement with a 3D model, and the validity of this mechanism for generating patchiness has been extended to realistic turbulent flows using Direct Numerical Simulations. Because small-scale patchiness influences rates of predation, sexual reproduction, infection, and nutrient competition, this result indicates that gyrotactic motility can profoundly affect phytoplankton ecology.
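
    The abstract does not state the model equations; for orientation, the commonly used Pedley-Kessler evolution equation for the swimming direction p of a gyrotactic cell is reproduced below, where B is the gyrotactic reorientation timescale, k the vertical unit vector, and ω the local fluid vorticity.

```latex
% Standard Pedley-Kessler gyrotaxis model for the swimming direction p
% (commonly used form; the abstract itself does not give the equations):
\frac{d\mathbf{p}}{dt}
  = \frac{1}{2B}\left[\mathbf{k} - (\mathbf{k}\cdot\mathbf{p})\,\mathbf{p}\right]
  + \frac{1}{2}\,\boldsymbol{\omega}\times\mathbf{p}
```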

  17. Residual Mechanical Properties of Concrete Made with Crushed Clay Bricks and Roof Tiles Aggregate after Exposure to High Temperatures

    PubMed Central

    Miličević, Ivana; Štirmer, Nina; Banjad Pečur, Ivana

    2016-01-01

    This paper presents the residual mechanical properties of concrete made with crushed brick and clay roof tile aggregates after exposure to high temperatures. One reference mixture and eight mixtures with different percentages of replacement of natural aggregate by crushed bricks and roof tiles are experimentally tested. The properties of the concrete were measured before and after exposure to 200, 400, 600 and 800 °C. In order to evaluate the basic residual mechanical properties of concrete with crushed bricks and roof tiles after exposure to high temperatures, ultrasonic pulse velocity is used as a non-destructive test method and the results are compared with those of a destructive method for validation. The mixture with the highest percentage of replacement of natural aggregate by crushed brick and roof tile aggregate has the best physical, mechanical, and thermal properties for the application of such concrete in precast concrete elements exposed to high temperatures. PMID:28773420

  18. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
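
    For reference, the standard form of the Pennes Bioheat Equation is given below, with T the tissue temperature, T_a the arterial blood temperature, ω_b the blood perfusion rate, ρ_b and c_b the density and specific heat of blood, and Q the combined metabolic and external heat sources; the conformal correction proposed in the paper acts on the discretised conduction term at material boundaries.

```latex
% Pennes Bioheat Equation (standard form):
\rho c \,\frac{\partial T}{\partial t}
  = \nabla\cdot\bigl(k\,\nabla T\bigr)
  + \rho_b c_b \omega_b \,(T_a - T) + Q
```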

  19. Clinical risk scoring for predicting non-alcoholic fatty liver disease in metabolic syndrome patients (NAFLD-MS score).

    PubMed

    Saokaew, Surasak; Kanchanasuwan, Shada; Apisarnthanarak, Piyaporn; Charoensak, Aphinya; Charatcharoenwitthaya, Phunchai; Phisalprapa, Pochamana; Chaiyakunapruk, Nathorn

    2017-10-01

    Non-alcoholic fatty liver disease (NAFLD) can progress from simple steatosis to hepatocellular carcinoma. No tools have been developed specifically for high-risk patients. This study aimed to develop a simple risk score to predict NAFLD in patients with metabolic syndrome (MetS). A total of 509 patients with MetS were recruited. All were diagnosed by clinicians, with ultrasonography used to confirm whether they had NAFLD. Patients were randomly divided into derivation (n=400) and validation (n=109) cohorts. To develop the risk score, clinical risk indicators measured at the time of recruitment were modelled by logistic regression. Regression coefficients were transformed into item scores and added up to a total score. A risk scoring scheme was developed from five clinical predictors: BMI ≥25, AST/ALT ≥1, ALT ≥40, type 2 diabetes mellitus, and central obesity. The scoring scheme was applied to the validation cohort to test its performance. The scheme discriminated NAFLD with an area under the receiver operating characteristic curve (AuROC) of 76.8%, with good calibration (Hosmer-Lemeshow χ2=4.35; P=.629). The positive likelihood ratios of NAFLD in patients with low risk (scores below 3) and high risk (scores 5 and over) were 2.32 (95% CI: 1.90-2.82) and 7.77 (95% CI: 2.47-24.47), respectively. When applied to the validation cohort, the score showed good performance with an AuROC of 76.7%, and gave 84% and 100% certainty in the low- and high-risk groups, respectively. A simple and non-invasive scoring scheme of five predictors provides good prediction indices for NAFLD in MetS patients. This scheme may help clinicians decide on further appropriate action. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
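
    The abstract names the five predictors and the decision cut-offs but not the per-item point values, so the weights in the sketch below are placeholders; only the structure (sum the item scores, then compare with the low-risk <3 and high-risk ≥5 cut-offs) follows the abstract.

```python
# Structure of the NAFLD-MS risk score: sum per-predictor item scores and
# compare with the cut-offs reported in the abstract (<3 low, >=5 high).
# The point values per predictor are PLACEHOLDERS; the paper's actual item
# scores are not given in the abstract.

ITEM_POINTS = {          # hypothetical weights for illustration only
    "bmi_ge_25": 1,
    "ast_alt_ratio_ge_1": 1,
    "alt_ge_40": 1,
    "type2_diabetes": 1,
    "central_obesity": 1,
}

def nafld_ms_score(flags: dict) -> tuple:
    total = sum(ITEM_POINTS[k] for k, present in flags.items() if present)
    if total < 3:
        risk = "low"
    elif total >= 5:
        risk = "high"
    else:
        risk = "intermediate"
    return total, risk

patient = {"bmi_ge_25": True, "ast_alt_ratio_ge_1": False,
           "alt_ge_40": True, "type2_diabetes": True, "central_obesity": True}
print(nafld_ms_score(patient))   # -> (4, 'intermediate')
```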

  20. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Williams, McKay D.

    Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.

  1. Experimental validation of convection-diffusion discretisation scheme employed for computational modelling of biological mass transport

    PubMed Central

    2010-01-01

    Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass transport in the vasculature. A principal consideration for computational modelling of blood-side mass transport is convection-diffusion discretisation scheme selection. Because numerous discretisation schemes are available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The species has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared with the experimental findings. Average errors of 140% and 116% were found between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high Peclet number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order Upwind or QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to use computationally inexpensive discretisation schemes at the cost of accuracy in the resultant species concentration. PMID:20642816
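
    The quoted Peclet number is easy to sanity-check. In the sketch below the diffusivity is the study's value, while the characteristic velocity and length are assumptions chosen only to reproduce Pe = 2,560,000 (the abstract does not state them).

    ```python
    D = 3.125e-10   # species diffusivity in water, m^2/s (from the study)
    U = 0.08        # assumed characteristic velocity, m/s (not stated in abstract)
    L = 0.01        # assumed characteristic length, m (not stated in abstract)

    Pe = U * L / D  # Peclet number: ratio of convective to diffusive transport
    print(f"Pe = {Pe:.3g}")  # -> Pe = 2.56e+06, strongly convection-dominated
    ```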

  2. Rank-based methods for modeling dependence between loss triangles.

    PubMed

    Côté, Marie-Pier; Genest, Christian; Abdallah, Anas

    2016-01-01

    In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.
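
    The rank-based step at the heart of the approach is compact: standardized residuals from each margin are converted to pseudo-observations before a copula is selected and validated. A minimal sketch, assuming the residuals arrive as an n × d array with one column per line of business:

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def pseudo_observations(residuals):
        """Map each margin's standardized residuals to (0, 1) via ranks,
        u_ij = rank(e_ij) / (n + 1): the usual rank-based input for
        copula selection and validation."""
        residuals = np.asarray(residuals, dtype=float)
        n = residuals.shape[0]
        return np.apply_along_axis(rankdata, 0, residuals) / (n + 1)

    # e.g. residuals from GLMs fitted to two loss triangles (synthetic here):
    u = pseudo_observations(np.random.default_rng(1).normal(size=(50, 2)))
    ```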

  3. Rigorous-two-Steps scheme of TRIPOLI-4® Monte Carlo code validation for shutdown dose rate calculation

    NASA Astrophysics Data System (ADS)

    Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime

    2017-09-01

    After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are well developed for fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; results are in good agreement with those of the other participants.

  4. Self-match based on polling scheme for passive optical network monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua

    2018-06-01

    We propose a self-match scheme based on polling for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. The simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity requirements of the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.
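
    The time-domain relation rests on ordinary round-trip delays: a pulse reflected by a grating at the end of a fibre of length L returns after 2nL/c. A minimal sketch; the effective index and the per-user lengths below are illustrative assumptions, not values from the paper.

    ```python
    C = 2.998e8      # speed of light in vacuum, m/s
    N_EFF = 1.468    # assumed effective refractive index of the fibre

    def round_trip_delay(length_m):
        """Round-trip delay of a pulse reflected at distance length_m."""
        return 2 * N_EFF * length_m / C

    # Assumed drop-fibre + patchcord lengths (m) for three end-users; the
    # specific lengths used in the paper are not reproduced here.
    for user, L in {"user1": 1000.0, "user2": 1005.0, "user3": 1010.0}.items():
        print(user, f"{round_trip_delay(L) * 1e6:.4f} us")
    ```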

  5. Melt transport - a personal cashing-up

    NASA Astrophysics Data System (ADS)

    Renner, J.

    2005-12-01

    The flow of fluids through rocks transports heat and material and changes bulk composition. The large-scale chemical differentiation of the Earth is related to the flow of partial melts. From the perspective of current understanding of tectonic processes, prominent examples of such transport processes are the formation of oceanic crust from ascending basic melts at mid-ocean ridges, melt segregation involved in the solidification of the Earth's core, and dissolution-precipitation creep in subduction channels. Transport and deformation cannot be separated for partially molten aggregates. Permeability is only defined as an instantaneous parameter in the sense that Darcy's law is assumed to be valid; it is not an explicit parameter in the fundamental mechanical conservation laws but can be derived from them in certain circumstances as a result of averaging schemes. The governing, explicit physical properties in the mechanical equations are the shear and bulk viscosities of the solid framework and the fluid viscosity and compressibility. Constraints on the magnitude of these properties are available today from experiments at specific loading configurations, i.e., more or less well constrained initial and boundary conditions. The melt pressure remains the least controlled parameter. While the fluid viscosity is often much lower than that of the solid, the two-phase aggregate may exhibit considerable strength owing to the difficulty of moving the fluid through the branched pore network. The extremes in behavior depend on the time scale of loading, as known from daily life experience (sponge, Danish coffee-pot, human tissue between neighboring bones). Several theoretical approaches have attempted to formulate mechanical constitutive equations for two-phase aggregates. An important issue is the handling of internal variables in these equations. At experimental conditions, grain size, melt pocket orientation and crystallographic orientation (prime candidates for internal variables) change considerably and potentially contribute significantly to the total dissipation of the external work. Theoretically founded evolution equations for these internal variables are lacking. In experiments, both the kinetics of grain growth and the resultant shape of grains are affected by the presence of melt. The latter is linked to the alignment of melt pockets with the maximum principal stress. Thus, the melt redistribution causes anisotropy directly, but also indirectly through a shape-preferred orientation of solid grains. Notably, the foliation is parallel to the maximum principal stress, in contrast to deformation controlled by crystal defects alone. Extremum principles developed for dissipation potentials in the framework of irreversible thermodynamics may allow us to postulate evolution equations. Owing to their significant effect on aggregate viscosities, understanding the evolution of internal variables is mandatory for substantial large-scale modeling.

  6. Classification and assessment of retrieved electron density maps in coherent X-ray diffraction imaging using multivariate analysis.

    PubMed

    Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi

    2016-01-01

    Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size from materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and are missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed that selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map, composed of J pixels, is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe characteristics in the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, the distribution is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess their quality. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected. The validity of the proposed scheme is examined by application to diffraction data that were obtained from an aggregate of metal particles and a biological specimen at the XFEL facility SACLA using custom-made diffraction apparatus.
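
    The classification stage maps directly onto standard tooling. A minimal sketch, assuming each retrieved map is flattened to a J-pixel row vector; the numbers of principal components and clusters below are illustrative choices, not values prescribed by the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def classify_maps(maps, n_components=3, n_clusters=4):
        """maps: (n_maps, J) array, each row one retrieved electron density
        map flattened to J pixels. Returns cluster labels and PCA scores."""
        scores = PCA(n_components=n_components).fit_transform(maps)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
        return labels, scores

    # e.g. 1000 maps from different random seeds, as in the ASURA scheme:
    # labels, scores = classify_maps(retrieved_maps.reshape(1000, -1))
    ```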

  7. Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters

    NASA Astrophysics Data System (ADS)

    Masullo, Alessandro; Theunissen, Raf

    2016-03-01

    The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance consequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector is subsequently compared. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuations in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
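
    For reference, the baseline these authors extend, the normalized median test of Westerweel and Scarano, is short enough to sketch for one velocity component on a structured grid; ε = 0.1 and a threshold of 2 are the commonly quoted universal values.

    ```python
    import numpy as np

    def normalized_median_test(U, eps=0.1, thresh=2.0):
        """Westerweel-Scarano normalized median test on one velocity
        component of a structured PIV field (2-D array). Returns a boolean
        outlier mask. eps absorbs background noise; thresh = 2 is the
        commonly quoted universal threshold."""
        ny, nx = U.shape
        mask = np.zeros_like(U, dtype=bool)
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                nb = U[j-1:j+2, i-1:i+2].flatten()
                nb = np.delete(nb, 4)              # 8 neighbours, centre removed
                med = np.median(nb)
                rm = np.median(np.abs(nb - med))   # median residual of neighbours
                mask[j, i] = abs(U[j, i] - med) / (rm + eps) > thresh
        return mask
    ```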

  8. Evolution of mixed surfactant aggregates in solutions and at solid/solution interfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Rui

    Surfactant systems have been widely used in fields such as enhanced oil recovery, waste treatment and metallurgy, in order to help address the global energy crisis, remove pollutants and generate novel energy resources. Almost all surfactant systems are invariably mixtures due to beneficial and economic considerations. The sizes and shapes of aggregates in solutions and at solid/solution interfaces become important, since the nanostructures of mixed aggregates determine solution and adsorption properties. A major hurdle in the field is the lack of information on the types of complexes and aggregates formed by mixtures and the lack of techniques for deriving such information. Using techniques such as analytical ultracentrifugation, small angle neutron scattering, surface tension, fluorescence, cryo-TEM, light scattering and ultrafiltration, the nanostructures of aggregates of sugar-based n-dodecyl-beta-D-maltoside (DM) and nonionic pentaethyleneglycol monododecyl ether or nonyl phenol ethoxylated decyl ether (NP-10) and their mixtures have been investigated to test the hypothesis that the aggregation behavior is linked to the packing of the surfactant, governed by the molecular interactions as well as the molecular structures. The results from both sedimentation velocity and sedimentation equilibrium experiments suggest the coexistence of two types of micelles in nonyl phenol ethoxylated decyl ether solutions and its mixtures with n-dodecyl-beta-D-maltoside, while only one micellar species is present in n-dodecyl-beta-D-maltoside solutions, in good agreement with results from small angle neutron scattering, cryo-TEM, light scattering and ultrafiltration. Type I micelles were primary micelles at the cmc while type II micelles were elongated micelles. On the other hand, the nanostructures of mixed surface aggregates have been quantitatively predicted for the first time using a modified packing index. As a continuation of the Somasundaran-Fuerstenau adsorption model, a modified one-step model has been developed to fully understand the adsorption behavior of surfactant mixtures and to obtain thermodynamic information on the aggregation number and standard free energy of surface aggregation. The findings are expected to provide a fundamental basis for the design of optimal surfactant schemes for desired purposes.

  9. Outcomes of Quality Assurance: A Discussion of Knowledge, Methodology and Validity

    ERIC Educational Resources Information Center

    Stensaker, Bjorn

    2008-01-01

    A common characteristic in many quality assurance schemes around the world is their implicit and often narrowly formulated understanding of how organisational change is to take place as a result of the process. By identifying some of the underlying assumptions related to organisational change in current quality assurance schemes, the aim of this…

  10. Investigation and Rehabilitation to Extend Service Life of DSS-13 Antenna Concrete Foundation

    NASA Technical Reports Server (NTRS)

    Riewe, A. A., Jr.

    1984-01-01

    An investigation to establish the cause of deterioration and devise a repair technique to maintain the serviceability of the DSS-13 26 meter antenna is described. Core samples are obtained from the concrete and various laboratory tests conducted. In-place nondestructive tests are also performed. The tests established that the concrete is deteriorating because of alkali aggregate reactivity. This is a phenomenon wherein certain siliceous constituents present in some aggregates react with alkalies in the portland cement to produce a silica gel which, in turn, imbibes water, swells, and cracks the concrete. The repair scheme consists of a supplemental steel frame and friction-pile-anchored grade beam encircling the existing foundation. This system provides adequate bracing against base shear and overturning due to seismic loading. Larger cracks are sealed using a pressure-injected two-component epoxy.

  11. Bond graph modeling and experimental verification of a novel scheme for fault diagnosis of rolling element bearings in special operating conditions

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-09-01

    Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time intervals between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response strength is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of a few signal processing techniques. The proposed scheme initially represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft by using the interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of the resampled vibration signal, which is followed by thresholding of IMFs and signal reconstruction to de-noise the signal, and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of a rolling element bearing which is developed using the bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be experimentally recreated are then generated through simulations and analyzed through the developed diagnosis scheme.
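
    The first step, resampling the vibration signal at uniform shaft-angle increments from interpolated instantaneous angular position measurements, can be sketched in a few lines; the linear interpolation and the samples-per-revolution value below are illustrative assumptions rather than the paper's exact settings.

    ```python
    import numpy as np

    def angular_resample(t, x, t_enc, theta_enc, samples_per_rev=512):
        """Resample vibration signal x(t) at uniform shaft-angle increments.

        t, x            : time stamps and vibration samples
        t_enc, theta_enc: encoder time stamps and cumulative shaft angle (rad)
        """
        theta = np.interp(t, t_enc, theta_enc)            # angle at each sample
        dtheta = 2 * np.pi / samples_per_rev
        theta_u = np.arange(theta[0], theta[-1], dtheta)  # uniform angle grid
        return theta_u, np.interp(theta_u, theta, x)
    ```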

  12. Active Inference and Learning in the Cerebellum.

    PubMed

    Friston, Karl; Herreros, Ivan

    2016-09-01

    This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.

  13. The reliability and validity of the SF-8 with a conflict-affected population in northern Uganda.

    PubMed

    Roberts, Bayard; Browne, John; Ocaka, Kaducu Felix; Oyok, Thomas; Sondorp, Egbert

    2008-12-02

    The SF-8 is a health-related quality of life instrument that could provide a useful means of assessing general physical and mental health amongst populations affected by conflict. The purpose of this study was to test the validity and reliability of the SF-8 with a conflict-affected population in northern Uganda. A cross-sectional multi-staged, random cluster survey was conducted with 1206 adults in camps for internally displaced persons in Gulu and Amuru districts of northern Uganda. Data quality was assessed by analysing the number of incomplete responses to SF-8 items. Response distribution was analysed using aggregate endorsement frequency. Test-retest reliability was assessed in a separate smaller survey using the intraclass correlation test. Construct validity was measured using principal component analysis, and the Pearson Correlation test for item-summary score correlation and inter-instrument correlations. Known groups validity was assessed using a two-sample t-test to evaluate the ability of the SF-8 to discriminate between groups known to have, and not to have, physical and mental health problems. The SF-8 showed excellent data quality. It showed acceptable item response distribution based upon analysis of aggregate endorsement frequencies. Test-retest showed a good intraclass correlation of 0.61 for PCS and 0.68 for MCS. The principal component analysis indicated strong construct validity and concurred with the results of the validity tests by the SF-8 developers. The SF-8 also showed strong construct validity between the 8 items and PCS and MCS summary scores, moderate inter-instrument validity, and strong known groups validity. This study provides evidence on the reliability and validity of the SF-8 amongst IDPs in northern Uganda.

  14. The reliability and validity of the SF-8 with a conflict-affected population in northern Uganda

    PubMed Central

    Roberts, Bayard; Browne, John; Ocaka, Kaducu Felix; Oyok, Thomas; Sondorp, Egbert

    2008-01-01

    Background The SF-8 is a health-related quality of life instrument that could provide a useful means of assessing general physical and mental health amongst populations affected by conflict. The purpose of this study was to test the validity and reliability of the SF-8 with a conflict-affected population in northern Uganda. Methods A cross-sectional multi-staged, random cluster survey was conducted with 1206 adults in camps for internally displaced persons in Gulu and Amuru districts of northern Uganda. Data quality was assessed by analysing the number of incomplete responses to SF-8 items. Response distribution was analysed using aggregate endorsement frequency. Test-retest reliability was assessed in a separate smaller survey using the intraclass correlation test. Construct validity was measured using principal component analysis, and the Pearson Correlation test for item-summary score correlation and inter-instrument correlations. Known groups validity was assessed using a two-sample t-test to evaluate the ability of the SF-8 to discriminate between groups known to have, and not to have, physical and mental health problems. Results The SF-8 showed excellent data quality. It showed acceptable item response distribution based upon analysis of aggregate endorsement frequencies. Test-retest showed a good intraclass correlation of 0.61 for PCS and 0.68 for MCS. The principal component analysis indicated strong construct validity and concurred with the results of the validity tests by the SF-8 developers. The SF-8 also showed strong construct validity between the 8 items and PCS and MCS summary scores, moderate inter-instrument validity, and strong known groups validity. Conclusion This study provides evidence on the reliability and validity of the SF-8 amongst IDPs in northern Uganda. PMID:19055716

  15. Modeling the impact of soil aggregate size on selenium immobilization

    NASA Astrophysics Data System (ADS)

    Kausch, M. F.; Pallud, C. E.

    2013-03-01

    Soil aggregates are mm- to cm-sized microporous structures separated by macropores. Whereas fast advective transport prevails in macropores, advection is inhibited by the low permeability of intra-aggregate micropores. This can lead to mass transfer limitations and the formation of aggregate scale concentration gradients affecting the distribution and transport of redox sensitive elements. Selenium (Se) mobilized through irrigation of seleniferous soils has emerged as a major aquatic contaminant. In the absence of oxygen, the bioavailable oxyanions selenate, Se(VI), and selenite, Se(IV), can be microbially reduced to solid, elemental Se, Se(0), and anoxic microzones within soil aggregates are thought to promote this process in otherwise well-aerated soils. To evaluate the impact of soil aggregate size on selenium retention, we developed a dynamic 2-D reactive transport model of selenium cycling in a single idealized aggregate surrounded by a macropore. The model was developed based on flow-through-reactor experiments involving artificial soil aggregates (diameter: 2.5 cm) made of sand and containing Enterobacter cloacae SLD1a-1 that reduces Se(VI) via Se(IV) to Se(0). Aggregates were surrounded by a constant flow providing Se(VI) and pyruvate under oxic or anoxic conditions. In the model, reactions were implemented with double-Monod rate equations coupled to the transport of pyruvate, O2, and Se species. The spatial and temporal dynamics of the model were validated with data from experiments, and predictive simulations were performed covering aggregate sizes 1-2.5 cm in diameter. Simulations predict that selenium retention scales with aggregate size. Depending on O2, Se(VI), and pyruvate concentrations, selenium retention was 4-23 times higher in 2.5 cm aggregates compared to 1 cm aggregates. Under oxic conditions, aggregate size and pyruvate concentrations were found to have a positive synergistic effect on selenium retention. Promoting soil aggregation on seleniferous agricultural soils, through organic matter amendments and conservation tillage, may thus help decrease the impacts of selenium contaminated drainage water on downstream aquatic ecosystems.
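
    The rate law named in the abstract is compact enough to state directly. A minimal sketch of a double-Monod rate, with placeholder parameters rather than the study's calibrated values:

    ```python
    def double_monod(s_donor, s_acceptor, vmax, k_donor, k_acceptor):
        """Double-Monod rate: reduction rate limited jointly by the electron
        donor (pyruvate) and acceptor (e.g. Se(VI)) concentrations."""
        return vmax * (s_donor / (k_donor + s_donor)) \
                    * (s_acceptor / (k_acceptor + s_acceptor))

    # e.g. rate of Se(VI) reduction (placeholder parameters, not the
    # calibrated values from the study):
    r = double_monod(s_donor=1.0, s_acceptor=0.1, vmax=1e-6,
                     k_donor=0.5, k_acceptor=0.05)
    ```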

  16. Modeling the impact of soil aggregate size on selenium immobilization

    NASA Astrophysics Data System (ADS)

    Kausch, M. F.; Pallud, C. E.

    2012-09-01

    Soil aggregates are mm- to cm-sized microporous structures separated by macropores. Whereas fast advective transport prevails in macropores, advection is inhibited by the low permeability of intra-aggregate micropores. This can lead to mass transfer limitations and the formation of aggregate-scale concentration gradients affecting the distribution and transport of redox sensitive elements. Selenium (Se) mobilized through irrigation of seleniferous soils has emerged as a major aquatic contaminant. In the absence of oxygen, the bioavailable oxyanions selenate, Se(VI), and selenite, Se(IV), can be microbially reduced to solid, elemental Se, Se(0), and anoxic microzones within soil aggregates are thought to promote this process in otherwise well aerated soils. To evaluate the impact of soil aggregate size on selenium retention, we developed a dynamic 2-D reactive transport model of selenium cycling in a single idealized aggregate surrounded by a macropore. The model was developed based on flow-through-reactor experiments involving artificial soil aggregates (diameter: 2.5 cm) made of sand and containing Enterobacter cloacae SLD1a-1 that reduces Se(VI) via Se(IV) to Se(0). Aggregates were surrounded by a constant flow providing Se(VI) and pyruvate under oxic or anoxic conditions. In the model, reactions were implemented with double-Monod rate equations coupled to the transport of pyruvate, O2, and Se-species. The spatial and temporal dynamics of the model were validated with data from experiments and predictive simulations were performed covering aggregate sizes between 1 and 2.5 cm diameter. Simulations predict that selenium retention scales with aggregate size. Depending on O2, Se(VI), and pyruvate concentrations, selenium retention was 4-23 times higher in 2.5-cm-aggregates compared to 1-cm-aggregates. Under oxic conditions, aggregate size and pyruvate-concentrations were found to have a positive synergistic effect on selenium retention. Promoting soil aggregation on seleniferous agricultural soils, through organic matter amendments and conservation tillage, may thus help decrease the impacts of selenium contaminated drainage water on downstream aquatic ecosystems.

  17. On the security of two remote user authentication schemes for telecare medical information systems.

    PubMed

    Kim, Kee-Won; Lee, Jae-Dong

    2014-05-01

    Telecare medical information systems (TMISs) support convenient and rapid health-care services. A secure and efficient authentication scheme for a TMIS safeguards patients' electronic patient records (EPRs) and helps health-care workers and medical personnel rapidly make correct clinical decisions. Recently, Kumari et al. proposed a password-based user authentication scheme using smart cards for TMIS, and claimed that the proposed scheme could resist various malicious attacks. However, we point out that their scheme is still vulnerable to lost smart card attacks and cannot provide forward secrecy. Subsequently, Das and Goswami proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. They simulated their scheme for formal security verification using the widely accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool to ensure that their scheme is secure against passive and active attacks. However, we show that their scheme is still vulnerable to smart card loss attacks and cannot provide the forward secrecy property. The proposed cryptanalysis discourages any practical use of the two schemes under investigation and reveals some subtleties and challenges in designing this type of scheme.

  18. A privacy preserving secure and efficient authentication scheme for telecare medical information systems.

    PubMed

    Mishra, Raghavendra; Barnwal, Amit Kumar

    2015-05-01

    The telecare medical information system (TMIS) presents effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. It was later proved insecure against various active and passive attacks. To remedy the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To provide an authentication scheme with pre-smart-card authentication, we present an improved anonymous smart-card-based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, the proposed scheme presents efficient login and password change phases where incorrect input can be quickly detected and a user can freely change his password without server assistance. We demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with relevant schemes.

  19. On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation

    NASA Astrophysics Data System (ADS)

    Qian, ZhanSen; Lee, Chun-Hian

    2012-08-01

    A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking a larger time step than the original one. Following the modified strategy, LTS TVD schemes for Yee's upwind TVD scheme and Yee-Roe-Davis's symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time-splitting procedure, and the associated boundary condition treatment suitable for the LTS scheme is also imposed. Numerical experiments on Sod's shock tube problem, inviscid flows over a NACA0012 airfoil and an ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies of the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared with the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.

  20. An enhanced biometric authentication scheme for telecare medicine information systems with nonce using chaotic hash function.

    PubMed

    Das, Ashok Kumar; Goswami, Adrijit

    2014-06-01

    Recently, Awasthi and Srivastava proposed a novel biometric remote user authentication scheme for the telecare medicine information system (TMIS) with nonce. Their scheme is very efficient as it is based on an efficient chaotic one-way hash function and bitwise XOR operations. In this paper, we first analyze Awasthi-Srivastava's scheme and then show that it has several drawbacks: (1) an incorrect password change phase, (2) failure to preserve the user anonymity property, (3) failure to establish a secret session key between a legal user and the server, (4) failure to protect against strong replay attacks, and (5) a lack of rigorous formal security analysis. We then propose a novel and secure biometric-based remote user authentication scheme in order to withstand the security flaws found in Awasthi-Srivastava's scheme and enhance the features required of an ideal user authentication scheme. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that our scheme is secure against passive and active attacks, including replay and man-in-the-middle attacks. Our scheme is also efficient compared with Awasthi-Srivastava's scheme.

  1. Validation of an Instrument and Testing Protocol for Measuring the Combinatorial Analysis Schema.

    ERIC Educational Resources Information Center

    Staver, John R.; Harty, Harold

    1979-01-01

    Designs a testing situation to examine the presence of combinatorial analysis, to establish construct validity in the use of an instrument, Combinatorial Analysis Behavior Observation Scheme (CABOS), and to investigate the presence of the schema in young adolescents. (Author/GA)

  2. Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).

    PubMed

    Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K

    2013-02-01

    We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS were examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents, and this complex information can be obtained relatively quickly.

  3. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in the breeding of new crop varieties; e.g., longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from field-based classical root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed on a virtual experiment in which a root architectural model was used to simulate root system development in a field, parameterized for winter wheat. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes, coring, trenching, and rhizotubes, were virtually applied and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods; a sketch of the elementary-effects computation is shown below. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters, and their interactions with the other parameters, have a highly nonlinear effect on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
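
    A minimal sketch of the screening step follows: a simplified one-at-a-time elementary-effects computation on inputs scaled to [0, 1]. The full Morris method uses structured trajectories over a discretized grid, so treat this as an illustrative approximation, with the number of base points and the step size as arbitrary choices.

    ```python
    import numpy as np

    def morris_elementary_effects(f, lo, hi, n_traj=20, delta=0.1, seed=0):
        """Crude Morris-style OAT screening: mean and std of elementary
        effects EE_i = (f(x + delta*e_i) - f(x)) / delta over random base
        points, with inputs scaled to [0, 1]. (The full Morris design uses
        structured trajectories; this is a simplified sketch.)"""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        k = lo.size
        ee = np.empty((n_traj, k))
        for t in range(n_traj):
            x = rng.uniform(0, 1 - delta, size=k)   # random base point
            fx = f(lo + x * (hi - lo))
            for i in range(k):
                xp = x.copy()
                xp[i] += delta                       # perturb one input
                ee[t, i] = (f(lo + xp * (hi - lo)) - fx) / delta
        return ee.mean(axis=0), ee.std(axis=0)
    ```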

  4. Optical detection of glyphosate in water

    NASA Astrophysics Data System (ADS)

    de Góes, R. E.; Possetti, G. R. C.; Muller, M.; Fabris, J. L.

    2017-04-01

    This work shows preliminary results on the detection of glyphosate in water using optical fiber spectroscopy. A colloid with citrate-capped silver nanoparticles was employed as the substrate for the measurements. A cross-analysis between optical absorption and inelastic scattering evidenced a controlled aggregation of the sample constituents, leading to the possibility of quantitative detection of the analyte. The estimated limit of detection for glyphosate in water with the proposed sensing scheme was about 1.7 mg/L.

  5. Defect Categorization: Making Use of a Decade of Widely Varying Historical Data

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Seaman, Carolyn; Godfrey, Sara H.; Guo, Yuepu

    2008-01-01

    This paper describes our experience in aggregating a number of historical datasets containing inspection defect data using different categorizing schemes. Our goal was to make use of the historical data by creating models to guide future development projects. We describe our approach to reconciling the different choices used in the historical datasets to categorize defects, and the challenges we faced. We also present a set of recommendations for others involved in classifying defects.

  6. Natural sampling strategy

    NASA Technical Reports Server (NTRS)

    Hallum, C. R.; Basu, J. P. (Principal Investigator)

    1979-01-01

    A natural stratum-based sampling scheme and the aggregation procedures for estimating wheat area, yield, and production and their associated prediction error estimates are described. The methodology utilizes LANDSAT imagery and agrophysical data to permit an improved stratification in foreign areas by ignoring political boundaries and restratifying along boundaries that are more homogeneous with respect to the distribution of agricultural density, soil characteristics, and average climatic conditions. A summary of test results is given including a discussion of the various problems encountered.

  7. Enhancing Privacy in Participatory Sensing Applications with Multidimensional Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groat, Michael; Forrest, Stephanie; Horey, James L

    2012-01-01

    Participatory sensing applications rely on individuals to share local and personal data with others to produce aggregated models and knowledge. In this setting, privacy is an important consideration, and lack of privacy could discourage widespread adoption of many exciting applications. We present a privacy-preserving participatory sensing scheme for multidimensional data which uses negative surveys. Multidimensional data, such as vectors of attributes that include location and environment fields, pose a particular challenge for privacy protection and are common in participatory sensing applications. When reporting data in a negative survey, an individual participant randomly selects a value from the set complement of the sensed data value, once for each dimension, and returns the negative values to a central collection server. Using algorithms described in this paper, the server can reconstruct the probability density functions of the original distributions of sensed values without knowing the participants' actual data. As a consequence, complicated encryption and key management schemes are avoided, conserving energy. We study trade-offs between accuracy and privacy, and their relationships to the number of dimensions, categories, and participants. We introduce dimensional adjustment, a method that reduces the magnification of error associated with earlier work. Two simulation scenarios illustrate how the approach can protect the privacy of a participant's multidimensional data while allowing useful population information to be aggregated.
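
    The reconstruction idea for a single categorical dimension is standard for negative surveys and can be sketched directly; the dimensional adjustment introduced by the paper is not reproduced here. Assuming each participant reports a category chosen uniformly among the k - 1 categories they did not sense:

    ```python
    import numpy as np

    def reconstruct_negative_survey(reports, k):
        """Estimate the true category distribution from negative-survey
        reports for one dimension with k categories. Each participant
        reports a category they did NOT sense, chosen uniformly among the
        other k-1, so E[f_i] = (1 - p_i)/(k - 1) and p_i = 1 - (k-1)*f_i."""
        counts = np.bincount(reports, minlength=k).astype(float)
        f = counts / counts.sum()
        p = 1.0 - (k - 1) * f
        return np.clip(p, 0.0, None)  # sampling noise can push estimates below 0

    # e.g. 10,000 reports over k = 5 categories (synthetic data):
    rng = np.random.default_rng(2)
    true = rng.integers(0, 5, size=10000)
    reports = (true + rng.integers(1, 5, size=true.size)) % 5  # uniform complement
    print(reconstruct_negative_survey(reports, k=5))           # all approx. 0.2
    ```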

  8. Antimicrobial preservatives induce aggregation of interferon alpha-2a: The order in which preservatives induce protein aggregation is independent of the protein

    PubMed Central

    Bis, Regina L.; Mallela, Krishna M.G.

    2014-01-01

    Antimicrobial preservatives (APs) are included in liquid multi-dose protein formulations to combat the growth of microbes and bacteria. These compounds have been shown to cause protein aggregation, which leads to serious immunogenic and toxic side-effects in patients. Our earlier work on a model protein cytochrome c (Cyt c) demonstrated that APs cause protein aggregation in a specific manner. The aim of this study is to validate the conclusions obtained from our model protein studies on a pharmaceutical protein. Interferon α-2a (IFNA2) is available as a therapeutic treatment for numerous immune-compromised disorders including leukemia and hepatitis C, and APs have been used in its multi-dose formulation. Similar to Cyt c, APs induced IFNA2 aggregation, demonstrated by the loss of soluble monomer and increase in solution turbidity. The extent of IFNA2 aggregation increased with the increase in AP concentration. IFNA2 aggregation also depended on the nature of AP, and followed the order m-cresol > phenol > benzyl alcohol > phenoxyethanol. This specific order exactly matched with that observed for the model protein Cyt c. These and previously published results on antibodies and other recombinant proteins suggest that the general mechanism by which APs induce protein aggregation may be independent of the protein. PMID:24974985

  9. Antimicrobial preservatives induce aggregation of interferon alpha-2a: the order in which preservatives induce protein aggregation is independent of the protein.

    PubMed

    Bis, Regina L; Mallela, Krishna M G

    2014-09-10

    Antimicrobial preservatives (APs) are included in liquid multi-dose protein formulations to combat the growth of microbes and bacteria. These compounds have been shown to cause protein aggregation, which leads to serious immunogenic and toxic side-effects in patients. Our earlier work on a model protein cytochrome c (Cyt c) demonstrated that APs cause protein aggregation in a specific manner. The aim of this study is to validate the conclusions obtained from our model protein studies on a pharmaceutical protein. Interferon α-2a (IFNA2) is available as a therapeutic treatment for numerous immune-compromised disorders including leukemia and hepatitis C, and APs have been used in its multi-dose formulation. Similar to Cyt c, APs induced IFNA2 aggregation, demonstrated by the loss of soluble monomer and increase in solution turbidity. The extent of IFNA2 aggregation increased with the increase in AP concentration. IFNA2 aggregation also depended on the nature of AP, and followed the order m-cresol > phenol > benzyl alcohol > phenoxyethanol. This specific order exactly matched with that observed for the model protein Cyt c. These and previously published results on antibodies and other recombinant proteins suggest that the general mechanism by which APs induce protein aggregation may be independent of the protein. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Not just fractal surfaces, but surface fractal aggregates: Derivation of the expression for the structure factor and its applications

    NASA Astrophysics Data System (ADS)

    Besselink, R.; Stawski, T. M.; Van Driessche, A. E. S.; Benning, L. G.

    2016-12-01

    Densely packed surface fractal aggregates form in systems with high local volume fractions of particles with very short diffusion lengths, which effectively means that particles have little space to move. However, there were no prior mathematical models that describe scattering from such surface fractal aggregates and that allow the subdivision between inter- and intraparticle interferences of such aggregates. Here, we show that by including a form factor function of the primary particles building the aggregate, a finite size of the surface fractal interfacial sub-surfaces can be derived from a structure factor term. This formalism allows us to define both a finite specific surface area for fractal aggregates and the fraction of particle interfacial sub-surfaces at the perimeter of an aggregate. The derived surface fractal model is validated by comparing it with an ab initio approach that involves the generation of "brick-in-a-wall" von Koch-type contour fractals. Moreover, we show that this approach explains observed scattering intensities from in situ experiments that followed gypsum (CaSO4·2H2O) precipitation from highly supersaturated solutions. Our model of densely packed "brick-in-a-wall" surface fractal aggregates may well be the key precursor step in the formation of several types of mosaic- and meso-crystals.

  11. A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network

    PubMed Central

    Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue

    2016-01-01

    Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency. PMID:26907272

  12. A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network.

    PubMed

    Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue

    2016-02-19

    Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency.

  13. Compressive strength and hydration processes of concrete with recycled aggregates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koenders, Eduardus A.B., E-mail: e.a.b.koenders@coc.ufrj.br; Microlab, Delft University of Technology; Pepe, Marco, E-mail: mapepe@unisa.it

    2014-02-15

    This paper deals with the correlation between the time evolution of the degree of hydration and the compressive strength of Recycled Aggregate Concrete (RAC) for different water to cement ratios and initial moisture conditions of the Recycled Concrete Aggregates (RCAs). Particularly, the influence of such moisture conditions is investigated by monitoring the hydration process and determining the compressive strength development of fully dry or fully saturated recycled aggregates in four RAC mixtures. Hydration processes are monitored via temperature measurements in hardening concrete samples and the time evolution of the degree of hydration is determined through a 1D hydration and heat flow model. The effect of the initial moisture condition of RCAs employed in the considered concrete mixtures clearly emerges from this study. In fact, a novel conceptual method is proposed to predict the compressive strength of RAC systems from the initial mixture parameters and the hardening conditions. -- Highlights: •The concrete industry is more and more concerned with sustainability issues. •The use of recycled aggregates is a promising solution to enhance sustainability. •Recycled aggregates affect both hydration processes and compressive strength. •A fundamental approach is proposed to unveil the influence of recycled aggregates. •Some experimental comparisons are presented to validate the proposed approach.

  14. Measurement of the temperature-dependent threshold shear-stress of red blood cell aggregation.

    PubMed

    Lim, Hyun-Jung; Nam, Jeong-Hun; Lee, Yong-Jin; Shin, Sehyun

    2009-09-01

    Red blood cell (RBC) aggregation is becoming an important hemorheological parameter, which typically exhibits temperature dependence. Quite recently, a critical shear-stress was proposed as a new dimensional index to represent the aggregative and disaggregative behaviors of RBCs. The present study investigated the effect of temperature on the critical shear-stress that is required to keep RBC aggregates dispersed. The critical shear-stress was measured at various temperatures (4, 10, 20, 30, and 37 degrees C) using transient microfluidic aggregometry. The critical shear-stress increased significantly as the blood temperature was lowered, which accorded with the increase in low-shear blood viscosity at lower temperatures. Furthermore, the critical shear-stress also showed good agreement with the threshold shear-stress measured in a rotational Couette flow. These findings help to rheologically validate the critical shear-stress as defined in microfluidic aggregometry.

  15. Part I: Steady States in Two-Species Particle Aggregation. Part II: Sparse Representations for Multiscale PDE

    DTIC Science & Technology

    2015-03-01

    University of California Los Angeles.

  16. Ligand-promoted protein folding by biased kinetic partitioning.

    PubMed

    Hingorani, Karan S; Metcalf, Matthew C; Deming, Derrick T; Garman, Scott C; Powers, Evan T; Gierasch, Lila M

    2017-04-01

    Protein folding in cells occurs in the presence of high concentrations of endogenous binding partners, and exogenous binding partners have been exploited as pharmacological chaperones. A combined mathematical modeling and experimental approach shows that a ligand improves the folding of a destabilized protein by biasing the kinetic partitioning between folding and alternative fates (aggregation or degradation). Computationally predicted inhibition of test protein aggregation and degradation as a function of ligand concentration are validated by experiments in two disparate cellular systems.

  17. Ligand-Promoted Protein Folding by Biased Kinetic Partitioning

    PubMed Central

    Hingorani, Karan S.; Metcalf, Matthew C.; Deming, Derrick T.; Garman, Scott C.; Powers, Evan T.; Gierasch, Lila M.

    2017-01-01

    Protein folding in cells occurs in the presence of high concentrations of endogenous binding partners, and exogenous binding partners have been exploited as pharmacological chaperones. A combined mathematical modeling and experimental approach shows that a ligand improves the folding of a destabilized protein by biasing the kinetic partitioning between folding and alternative fates (aggregation or degradation). Computationally predicted inhibition of test protein aggregation and degradation as a function of ligand concentration are validated by experiments in two disparate cellular systems. PMID:28218913

  18. A Novel IEEE 802.15.4e DSME MAC for Wireless Sensor Networks

    PubMed Central

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-01-01

    The IEEE 802.15.4e standard proposes the Deterministic and Synchronous Multichannel Extension (DSME) mode for wireless sensor networks (WSNs) to support industrial, commercial and health care applications. In this paper, a new channel access scheme and beacon scheduling schemes are designed for IEEE 802.15.4e enabled WSNs in star topology to reduce the network discovery time and energy consumption. In addition, a new dynamic guaranteed retransmission slot allocation scheme is designed for devices with failed Guaranteed Time Slot (GTS) transmissions to reduce the retransmission delay. To evaluate our schemes, analytical models are designed to analyze the performance of WSNs in terms of reliability, delay, throughput and energy consumption. Our schemes are validated with simulation and analytical results, and the simulation results are observed to match the analytical ones well. The results show that the designed schemes can significantly improve reliability, throughput, delay, and energy consumption. PMID:28275216

  19. A Novel IEEE 802.15.4e DSME MAC for Wireless Sensor Networks.

    PubMed

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-01-16

    The IEEE 802.15.4e standard proposes the Deterministic and Synchronous Multichannel Extension (DSME) mode for wireless sensor networks (WSNs) to support industrial, commercial and health care applications. In this paper, a new channel access scheme and beacon scheduling schemes are designed for IEEE 802.15.4e enabled WSNs in star topology to reduce the network discovery time and energy consumption. In addition, a new dynamic guaranteed retransmission slot allocation scheme is designed for devices with failed Guaranteed Time Slot (GTS) transmissions to reduce the retransmission delay. To evaluate our schemes, analytical models are designed to analyze the performance of WSNs in terms of reliability, delay, throughput and energy consumption. Our schemes are validated with simulation and analytical results, and the simulation results are observed to match the analytical ones well. The results show that the designed schemes can significantly improve reliability, throughput, delay, and energy consumption.

  20. A Gas-Kinetic Scheme for Reactive Flows

    NASA Technical Reports Server (NTRS)

    Lian, Yong-Sheng; Xu, Kun

    1998-01-01

    In this paper, the gas-kinetic BGK scheme for the compressible flow equations is extended to chemically reactive flow. The mass fraction of the unburnt gas is implemented into the gas-kinetic equation by assigning a new internal degree of freedom to the particle distribution function. The new variable can also be used to describe fluid trajectories for nonreactive flows. Due to the gas-kinetic BGK model, the current scheme basically solves the Navier-Stokes chemically reactive flow equations. Numerical tests validate the accuracy and robustness of the current kinetic method.

  1. Collar grids for intersecting geometric components within the Chimera overlapped grid scheme

    NASA Technical Reports Server (NTRS)

    Parks, Steven J.; Buning, Pieter G.; Chan, William M.; Steger, Joseph L.

    1991-01-01

    A method for overcoming problems with using the Chimera overset grid scheme in the region of intersecting geometry components is presented. A 'collar grid' resolves the intersection region and provides communication between the component grids. This approach is validated by comparing computed and experimental data for a flow about a wing/body configuration. Application of the collar grid scheme to the Orbiter fuselage and vertical tail intersection in a computation of the full Space Shuttle launch vehicle demonstrates its usefulness for simulation of flow about complex aerospace vehicles.

  2. A Novel Quantum Blind Signature Scheme with Four-particle GHZ States

    NASA Astrophysics Data System (ADS)

    Fan, Ling; Zhang, Ke-Jia; Qin, Su-Juan; Guo, Fen-Zhuo

    2016-02-01

    In an arbitrated quantum signature scheme, the signer signs the message and the receiver verifies the signature's validity with the assistance of the arbitrator. We present an arbitrated quantum blind signature scheme using four-particle entangled Greenberger-Horne-Zeilinger (GHZ) states. By exploiting the special correlations of four-particle GHZ states, we can not only ensure the security of the quantum signature but also guarantee the anonymity of the message owner. The scheme has wide applications in E-payment systems, E-government, and E-business.

  3. Unconditionally Secure Blind Signatures

    NASA Astrophysics Data System (ADS)

    Hara, Yuki; Seito, Takenobu; Shikata, Junji; Matsumoto, Tsutomu

    The blind signature scheme introduced by Chaum allows a user to obtain a valid signature for a message from a signer such that the message is kept secret from the signer. Blind signature schemes have mainly been studied from the viewpoint of computational security so far. In this paper, we study blind signatures in the unconditional setting. Specifically, we newly introduce a model of unconditionally secure blind signature schemes (USBS, for short). We also propose security notions and their formalization in our model. Finally, we propose a construction method for USBS that is provably secure under our security notions.

  4. Numerical scoring for the Classic BILAG index.

    PubMed

    Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Isenberg, David A; Gordon, Caroline

    2009-12-01

    To develop an additive numerical scoring scheme for the Classic BILAG index. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0.
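    The proposed additive scheme is simple enough to state as code. Here is a minimal sketch, assuming the eight Classic BILAG systems are graded independently and summed (the system names are illustrative):

```python
# Additive Classic BILAG score proposed above: A = 12, B = 5, C = 1, D = 0, E = 0,
# summed over the eight systems.
GRADE_POINTS = {"A": 12, "B": 5, "C": 1, "D": 0, "E": 0}

def bilag_numeric_score(grades):
    """grades: mapping of system name -> grade letter for the 8 BILAG systems."""
    return sum(GRADE_POINTS[g] for g in grades.values())

example = {"general": "B", "mucocutaneous": "A", "neurological": "D",
           "musculoskeletal": "C", "cardiorespiratory": "E", "vasculitis": "D",
           "renal": "B", "haematological": "C"}
print(bilag_numeric_score(example))  # 12 + 5 + 5 + 1 + 1 = 24
```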

  5. Numerical scoring for the Classic BILAG index

    PubMed Central

    Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N.; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D’Cruz, David; Khamashta, Munther A.; Maddison, Peter; Isenberg, David A.

    2009-01-01

    Objective. To develop an additive numerical scoring scheme for the Classic BILAG index. Methods. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. Results. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. Conclusions. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0. PMID:19779027

  6. Sleep Bruxism-Tooth Grinding Prevalence, Characteristics and Familial Aggregation: A Large Cross-Sectional Survey and Polysomnographic Validation

    PubMed Central

    Khoury, Samar; Carra, Maria Clotilde; Huynh, Nelly; Montplaisir, Jacques; Lavigne, Gilles J.

    2016-01-01

    Study Objectives: Sleep bruxism (SB) is characterized by tooth grinding and jaw clenching during sleep. Familial factors may contribute to the occurrence of SB. The aims of this study are to: (1) revisit the prevalence and characteristics of SB in a large cross-sectional survey and assess familial aggregation of SB, (2) assess comorbidities such as insomnia and pain, (3) compare survey data in a subset of subjects diagnosed using polysomnography research criteria. Methods: A sample of 6,357 individuals from the general population in Quebec, Canada, undertook an online survey to assess the prevalence of SB, comorbidities, and familial aggregation. Data on familial aggregation were compared to 111 SB subjects diagnosed using polysomnography. Results: Regularly occurring SB was reported by 8.6% of the general population; it decreases with age, with no gender difference. SB awareness is concomitant with complaints of difficulty maintaining sleep in 47.6% of cases. A third of SB-positive probands reported pain. A risk ratio of 2.5 of having a first-degree family member with SB was found for SB-positive probands. The risk of reporting SB in first-degree family members ranges from 1.4 to 2.9 with increasing severity of reported SB. Polysomnographic data show that 37% of SB subjects had at least one first-degree relative with reported SB, with a relative risk ratio of 4.625. Conclusions: Our results support the heritability of SB-tooth grinding and indicate that sleep quality complaints and pain are concomitant in a significant number of SB subjects. Citation: Khoury S, Carra MC, Huynh N, Montplaisir J, Lavigne GJ. Sleep bruxism-tooth grinding prevalence, characteristics and familial aggregation: a large cross-sectional survey and polysomnographic validation. SLEEP 2016;39(11):2049–2056. PMID:27568807

  7. Quantitative evaluation of morphological changes in activated platelets in vitro using digital holographic microscopy.

    PubMed

    Kitamura, Yutaka; Isobe, Kazushige; Kawabata, Hideo; Tsujino, Tetsuhiro; Watanabe, Taisuke; Nakamura, Masayuki; Toyoda, Toshihisa; Okudera, Hajime; Okuda, Kazuhiro; Nakata, Koh; Kawase, Tomoyuki

    2018-06-18

    Platelet activation and aggregation have been conventionally evaluated using an aggregometer. However, this method is suitable for short-term but not long-term quantitative evaluation of platelet aggregation, morphological changes, and/or adhesion to specific materials. The recently developed digital holographic microscopy (DHM) has enabled the quantitative evaluation of cell size and morphology without labeling or destruction. Thus, we aimed to validate its applicability to quantitatively evaluating changes in cell morphology, especially the aggregation and spreading of activated platelets, modifying typical image analysis procedures to suit aggregated platelets. Freshly prepared platelet-rich plasma was washed with phosphate-buffered saline and treated with 0.1% CaCl2. Platelets were then fixed and subjected to DHM, scanning electron microscopy (SEM), atomic force microscopy, optical microscopy, and flow cytometry (FCM). Tightly aggregated platelets were identified as single cells. Data obtained from time-course experiments were plotted two-dimensionally according to average optical thickness versus attachment area and divided into four regions. The majority of the control platelets, which supposedly contained small and round platelets, were distributed in the lower left region. As activation time increased, however, this population dispersed toward the upper right region. The distribution shift demonstrated by DHM was essentially consistent with data obtained from SEM and FCM. Therefore, DHM was validated as a promising device for testing platelet function, given that it allows the quantitative evaluation of activation-dependent morphological changes in platelets. DHM technology will be applicable to the quality assurance of platelet concentrates, as well as diagnosis and drug discovery related to platelet functions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. A generalized approach for producing, quantifying, and validating citizen science data from wildlife images.

    PubMed

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Packer, Craig

    2016-06-01

    Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics (level of agreement among classifications [evenness], fraction of classifications supporting the aggregated answer [fraction support], and fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal [fraction blank]) to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
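    The plurality aggregation and the three certainty metrics can be sketched directly from their definitions above. In this sketch, evenness is computed as Pielou's index, one common choice; the paper's exact formula may differ:

```python
from collections import Counter
from math import log

def aggregate_classifications(votes):
    """Aggregate volunteer species votes for one image, as described above.

    votes: list of labels, e.g. ["zebra", "zebra", "nothing here", ...].
    Returns the plurality answer plus the three certainty metrics named in
    the abstract (evenness here is Pielou's index, one common choice).
    """
    counts = Counter(votes)
    n = len(votes)
    plurality, top = counts.most_common(1)[0]
    # Pielou evenness: Shannon entropy of the vote distribution / max entropy.
    if len(counts) > 1:
        h = -sum((c / n) * log(c / n) for c in counts.values())
        evenness = h / log(len(counts))
    else:
        evenness = 0.0  # unanimous vote: no disagreement
    fraction_support = top / n
    fraction_blank = counts.get("nothing here", 0) / n
    return plurality, evenness, fraction_support, fraction_blank

print(aggregate_classifications(
    ["zebra"] * 20 + ["wildebeest"] * 5 + ["nothing here"] * 2))
```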

  9. Determination of soluble immunoglobulin G in bovine colostrum products by Protein G affinity chromatography-turbidity correction and method validation.

    PubMed

    Holland, Patrick T; Cargill, Anne; Selwood, Andrew I; Arnold, Kate; Krammer, Jacqueline L; Pearce, Kevin N

    2011-05-25

    Immunoglobulin-containing food products and nutraceuticals such as bovine colostrum are of interest to consumers as they may provide health benefits. Commercial scale colostrum products are valued for their immunoglobulin G (IgG) content and therefore require accurate analysis. One of the most commonly used methods for determining total soluble IgG in colostrum products is based on affinity chromatography using a Protein G column and UV detection. This paper documents improvements to the accuracy of the Protein G analysis of IgG in colostrum products, especially those containing aggregated forms of IgG. Capillary electrophoresis-sodium dodecyl sulfate (CE-SDS) analysis confirmed that aggregated IgG measured by Protein G does not contain significant amounts of casein or other milk proteins. Size exclusion chromatography identified the content of soluble IgG as mainly monomeric IgG and aggregated material MW > 450 kDa with small amounts of dimer and trimer. The turbidity of the eluting IgG, mainly associated with aggregated IgG, had a significant effect on the quantitative results. Practical techniques were developed to correct affinity LC data for turbidity on an accurate, consistent, and efficient basis. The method was validated in two laboratories using a variety of colostrum powders. Precision for IgG was 2-3% (RSD(r)) and 3-12% (RSD(R)). Recovery was 100.2 ± 2.4% (mean ± RSD, n = 10). Greater amounts of aggregated IgG were solubilized by a higher solution:sample ratio and extended times of mixing or sonication, especially for freeze-dried material. It is concluded that the method without acid precipitation and with turbidity correction provides accurate, precise, and robust data for total soluble IgG and is suitable for product specification and quality control of colostrum products.

  10. A generalized approach for producing, quantifying, and validating citizen science data from wildlife images

    PubMed Central

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Packer, Craig

    2016-01-01

    Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large‐scale camera‐trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics—level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported “nothing here” for an image that was ultimately classified as containing an animal (fraction blank)—to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert‐verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post‐hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large‐scale monitoring of African wildlife. PMID:27111678

  11. Acid-induced aggregation propensity of nivolumab is dependent on the Fc

    PubMed Central

    Liu, Boning; Guo, Huaizu; Xu, Jin; Qin, Ting; Xu, Lu; Zhang, Junjie; Guo, Qingcheng; Zhang, Dapeng; Qian, Weizhu; Li, Bohua; Dai, Jianxin; Hou, Sheng; Guo, Yajun; Wang, Hao

    2016-01-01

    ABSTRACT Nivolumab, an anti-programmed death (PD)1 IgG4 antibody, has shown notable success as a cancer treatment. Here, we report that nivolumab was susceptible to aggregation during manufacturing, particularly in routine purification steps. Our experimental results showed that exposure to low pH caused aggregation of nivolumab, and the Fc was primarily responsible for an acid-induced unfolding phenomenon. To compare the intrinsic propensity of acid-induced aggregation for other IgGs subclasses, tocilizumab (IgG1), panitumumab (IgG2) and atezolizumab (aglyco-IgG1) were also investigated. The accurate pH threshold of acid-induced aggregation for individual IgG Fc subclasses was identified and ranked as: IgG1 < aglyco-IgG1 < IgG2 < IgG4. This result was cross-validated by thermostability and conformation analysis. We also assessed the effect of several protein stabilizers on nivolumab, and found mannitol ameliorated the acid-induced aggregation of the molecule. Our results provide valuable insight into downstream manufacturing process development, especially for immune checkpoint modulating molecules with a human IgG4 backbone. PMID:27310175

  12. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  13. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  14. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    PubMed

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., log sigmoid and Bernstein polynomial with unknown parameters, is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in close agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is conducted to investigate the stability and reliability of the presented scheme.
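    A minimal sketch of the trial-solution idea follows, with several stand-ins: a toy nonlinear ODE instead of a heat transfer equation, a purely log-sigmoid basis, and SciPy's differential evolution in place of the paper's GA + IPA hybrid:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy problem: y'(x) = -y(x)^2 with y(0) = 1 on [0, 1]; exact solution 1/(1+x).
x = np.linspace(0.0, 1.0, 25)           # collocation points
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def trial(params, x):
    a, w, b = params.reshape(3, -1)      # weights of 3 log-sigmoid neurons
    s = sig(np.outer(x, w) + b)          # shape (len(x), 3)
    n = (a * s).sum(axis=1)
    dn = (a * w * s * (1.0 - s)).sum(axis=1)
    y = 1.0 + x * n                      # satisfies y(0) = 1 by construction
    dy = n + x * dn
    return y, dy

def fitness(params):
    y, dy = trial(params, x)
    return np.mean((dy + y ** 2) ** 2)   # squared ODE residual

result = differential_evolution(fitness, bounds=[(-5, 5)] * 9, seed=0, tol=1e-10)
y, _ = trial(result.x, x)
print(np.max(np.abs(y - 1.0 / (1.0 + x))))  # error vs. exact solution
```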

  15. An interlaboratory comparison programme on radio frequency electromagnetic field measurements: the second round of the scheme.

    PubMed

    Nicolopoulou, E P; Ztoupis, I N; Karabetsos, E; Gonos, I F; Stathopulos, I A

    2015-04-01

    The second round of an interlaboratory comparison scheme on radio frequency electromagnetic field measurements has been conducted in order to evaluate the overall performance of laboratories that perform measurements in the vicinity of mobile phone base stations and broadcast antenna facilities. The participants recorded the electric field strength produced by two high-frequency signal generators inside an anechoic chamber in three measurement scenarios, with the antennas transmitting different signals each time in the FM, VHF, UHF and GSM frequency bands. In each measurement scenario, the participants also used their measurements to calculate the relative exposure ratios. The results were evaluated at each test level by calculating performance statistics (z-scores and En numbers). Subsequently, possible sources of error for each participating laboratory were discussed, and the overall evaluation of their performance was determined using an aggregated performance statistic. A comparison between the two rounds proves the necessity of the scheme. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
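    The two performance statistics named above have standard definitions in proficiency testing (e.g., ISO 13528-style conventions); a sketch under that assumption, since the paper's exact conventions are not given in the abstract:

```python
from math import sqrt

def z_score(x_lab, x_assigned, sigma_pt):
    """|z| <= 2 is conventionally 'satisfactory', |z| >= 3 'unsatisfactory'."""
    return (x_lab - x_assigned) / sigma_pt

def en_number(x_lab, u_lab, x_ref, u_ref):
    """|En| <= 1 indicates agreement within the expanded uncertainties."""
    return (x_lab - x_ref) / sqrt(u_lab ** 2 + u_ref ** 2)

print(z_score(3.2, 3.0, 0.15))           # 1.33 -> satisfactory
print(en_number(3.2, 0.25, 3.0, 0.10))   # 0.74 -> acceptable
```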

  16. Demonstration of micro-projection enabled short-range communication system for 5G.

    PubMed

    Chou, Hsi-Hsir; Tsai, Cheng-Yu

    2016-06-13

    A liquid crystal on silicon (LCoS) based polarization modulated image (PMI) system architecture using red-, green- and blue-based light-emitting diodes (LEDs), which offers simultaneous micro-projection and high-speed data transmission at nearly a gigabit per second, serving as an alternative short-range communication (SRC) approach for personal communication device (PCD) applications in 5G, is proposed and experimentally demonstrated. In order to make the proposed system architecture transparent to future wireless data modulation formats, baseband modulation schemes such as multilevel pulse amplitude modulation (M-PAM), M-ary phase shift keying (M-PSK) and M-ary quadrature amplitude modulation (M-QAM), which can be further employed by more advanced multicarrier modulation schemes (such as DMT, OFDM and CAP), were used to investigate the highest possible data transmission rate of the proposed system architecture. The results demonstrated that aggregate data transmission rates of 892 Mb/s and 900 Mb/s at a BER of 10^(-3) can be achieved using the 16-QAM baseband modulation scheme when data transmission is performed with and without simultaneous micro-projection, respectively.

  17. Case studies in configuration control for redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.

    1989-01-01

    A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.

  18. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  19. Pavement performance testing : research implementation plan.

    DOT National Transportation Integrated Search

    2005-01-01

    STATEMENT OF NEED: Validate the effect of materials variables in the Superpave mix design system as it affects rutting and fatigue performance. RESEARCH OBJECTIVES: 1. Determine the effect of aggregate characteristics and gradation and polymer modifier ...

  20. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  1. A Bulk Microphysics Parameterization with Multiple Ice Precipitation Categories.

    NASA Astrophysics Data System (ADS)

    Straka, Jerry M.; Mansell, Edward R.

    2005-04-01

    A single-moment bulk microphysics scheme with multiple ice precipitation categories is described. It has 2 liquid hydrometeor categories (cloud droplets and rain) and 10 ice categories that are characterized by habit, size, and density—two ice crystal habits (column and plate), rimed cloud ice, snow (ice crystal aggregates), three categories of graupel with different densities and intercepts, frozen drops, small hail, and large hail. The concept of riming history is implemented for conversions among the graupel and frozen drops categories. The multiple precipitation ice categories allow a range of particle densities and fall velocities for simulating a variety of convective storms with minimal parameter tuning. The scheme is applied to two cases—an idealized continental multicell storm that demonstrates the ice precipitation process, and a small Florida maritime storm in which the warm rain process is important.

  2. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
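    A minimal sketch of the segment-wise "validate, then update" idea on a deliberately simple linear model; the validation gate, segment boundaries, noise model and injected model error are all invented for illustration, not taken from the rotorcraft application:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 0.3
model = lambda theta, t: theta * t            # computational model prediction
t_obs = np.linspace(0.1, 1.0, 40)
y_obs = model(theta_true, t_obs) + rng.normal(0, sigma, t_obs.size)
y_obs[t_obs > 0.8] += 2.0                     # model-form error in the last segment

grid = np.linspace(0.0, 4.0, 401)             # prior: uniform over the grid
log_post = np.zeros_like(grid)
for lo, hi in [(0.1, 0.4), (0.4, 0.7), (0.7, 1.01)]:   # observation segments
    mask = (t_obs >= lo) & (t_obs < hi)
    t_seg, y_seg = t_obs[mask], y_obs[mask]
    # crude validation gate: skip segments where even the best-fitting theta
    # leaves a large residual (model-form error suspected, data not trusted)
    best_rmse = min(np.sqrt(np.mean((y_seg - model(g, t_seg)) ** 2)) for g in grid)
    if best_rmse > 2 * sigma:
        continue
    for i, g in enumerate(grid):              # Gaussian likelihood update
        log_post[i] += -0.5 * np.sum((y_seg - model(g, t_seg)) ** 2) / sigma ** 2

print("posterior mode:", grid[np.argmax(log_post)])
```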

  3. Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm

    NASA Astrophysics Data System (ADS)

    Xia, Meimei

    2018-04-01

    Fuzzy game theory has been applied to many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorms and t-norms. The existing matrix games with IVIFNs are all based on the Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorms and t-norms. In this paper, intuitionistic fuzzy aggregation operators based on Archimedean t-conorms and t-norms are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorms and t-norms. The proposed models can be transformed into a pair of primal-dual linear programming models, based on which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems valid in the existing matrix games with IVIFNs still hold when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and applicability of the proposed method.
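    For the Algebraic t-conorm/t-norm special case mentioned above, the interval-valued intuitionistic fuzzy weighted average has a well-known closed form; a sketch follows (the payoff values and weights are illustrative):

```python
import numpy as np

# Interval-valued intuitionistic fuzzy weighted average under the Algebraic
# t-conorm/t-norm. Each IVIFN is ([mu_lo, mu_up], [nu_lo, nu_up]) with
# mu_up + nu_up <= 1.
def iivfwa(ivifns, weights):
    mu = np.array([v[0] for v in ivifns])     # membership intervals, shape (n, 2)
    nu = np.array([v[1] for v in ivifns])     # non-membership intervals
    w = np.asarray(weights)[:, None]
    agg_mu = 1.0 - np.prod((1.0 - mu) ** w, axis=0)   # probabilistic sum
    agg_nu = np.prod(nu ** w, axis=0)                 # algebraic product
    return agg_mu, agg_nu

payoffs = [([0.4, 0.5], [0.2, 0.3]),
           ([0.5, 0.6], [0.1, 0.2]),
           ([0.3, 0.4], [0.3, 0.5])]
print(iivfwa(payoffs, [0.5, 0.3, 0.2]))
```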

  4. A comprehensive evaluation of two MODIS evapotranspiration products over the conterminous United States: using point and gridded FLUXNET and water balance ET

    USGS Publications Warehouse

    Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.

    2013-01-01

    Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. Results from this research would guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.

  5. Natural Aggregation Approach based Home Energy Manage System with User Satisfaction Modelling

    NASA Astrophysics Data System (ADS)

    Luo, F. J.; Ranzi, G.; Dong, Z. Y.; Murata, J.

    2017-07-01

    With the prevalence of advanced sensing and two-way communication technologies, the Home Energy Management System (HEMS) has attracted much attention in recent years. This paper proposes a HEMS that optimally schedules controllable Residential Energy Resources (RERs) in a Time-of-Use (TOU) pricing and high solar power penetration environment. The HEMS aims to minimize the overall operational cost of the home, and the user’s satisfaction with and requirements on the operation of different household appliances are modelled and considered in the HEMS. Further, a new biological self-aggregation intelligence based optimization technique previously proposed by the authors, i.e., the Natural Aggregation Algorithm (NAA), is applied to solve the proposed HEMS optimization model. Simulations are conducted to validate the proposed method.
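    A toy sketch of the cost-versus-satisfaction trade-off a HEMS resolves for a single shiftable appliance; exhaustive search stands in for the NAA optimizer, and the tariff, appliance data, and dissatisfaction weight are invented:

```python
# Hypothetical 24-hour TOU tariff ($/kWh): cheap nights, two peak blocks.
TOU_PRICE = [0.08] * 7 + [0.22] * 4 + [0.12] * 6 + [0.28] * 4 + [0.08] * 3

def best_start(power_kw, duration_h, window, preferred_start, discomfort_w):
    """Pick the start hour minimizing energy cost + user-dissatisfaction penalty."""
    lo, hi = window
    best = None
    for start in range(lo, hi - duration_h + 1):
        cost = power_kw * sum(TOU_PRICE[start:start + duration_h])
        penalty = discomfort_w * abs(start - preferred_start)
        score = cost + penalty
        if best is None or score < best[0]:
            best = (score, start)
    return best

# washing machine: 1.5 kW for 2 h, allowed 08:00-22:00, user prefers 18:00
print(best_start(1.5, 2, (8, 22), 18, 0.02))
```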

  6. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
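    The 67x3 design can be examined with a few lines of Monte Carlo. In this sketch, clusters follow a beta-binomial model whose Beta parameters are chosen so the intracluster correlation equals rho; the decision threshold is hypothetical, not the paper's rule:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_classification_rate(p, rho, d, n_clusters=67, m=3, trials=10_000):
    """P(total cases > d) when true GAM prevalence is p, intracluster corr. rho."""
    # Beta(a, b) with a + b = (1 - rho)/rho gives intraclass correlation rho.
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    pj = rng.beta(a, b, size=(trials, n_clusters))   # cluster-level prevalences
    cases = rng.binomial(m, pj).sum(axis=1)          # cases among 67 x 3 children
    return np.mean(cases > d)

# classify "high prevalence" when more than d = 25 of 201 children are wasted
print(simulate_classification_rate(p=0.10, rho=0.05, d=25))  # false alarms at 10%
print(simulate_classification_rate(p=0.15, rho=0.05, d=25))  # power at 15%
```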

  7. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A.; Lowry, R.; Clements, O.

    2012-04-01

    The NERC Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to the marine environmental sciences domain since 2006 (version 0) with version 1 being introduced in 2007. It has been used for • metadata mark-up with verifiable content • populating dynamic drop down lists • semantic cross-walk between metadata schemata • so-called smart search • and the semantic enablement of Open Geospatial Consortium Web Processing Services in projects including: the NERC Data Grid; SeaDataNet; Geo-Seas; and the European Marine Observation and Data Network (EMODnet). The NVS is based on the Simple Knowledge Organization System (SKOS) model, and following a version change for SKOS in 2009, there was a desire to upgrade the NVS to incorporate the changes in this standard. SKOS is based on the "concept", which it defines as a "unit of thought", that is an idea or notion such as "oil spill". The latest version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included: • the removal of the potential for multiple Uniform Resource Names for the same concept to ensure consistent identification of concepts • the addition of content and technical governance information in the payload documents to provide an audit trail to users of NVS content • the removal of XML snippets from concept definitions in order to correctly validate XML serializations of the SKOS • the addition of the ability to map into external knowledge organization systems in order to extend the knowledge base • a more truly RESTful approach to URL access to the NVS to make the development of applications on top of the NVS easier • and support for multiple human languages to increase the user base of the NVS. Version 2 of the NVS underpins the semantic layer for the Open Service Network for Marine Environmental Data (NETMAR) project, funded by the European Commission under the Seventh Framework Programme. Here we present the results of upgrading the NVS from version 1 to 2 and show applications which have been built on top of the NVS using its Application Programming Interface, including a demonstration version of a SPARQL interface.
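    The SKOS constructs described above (concepts, collections for addressing, schemes for discovery) can be illustrated with rdflib; the URIs below are invented for illustration, not real NVS identifiers:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")
g = Graph()
concept = EX["oil_spill"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("oil spill", lang="en")))
g.add((concept, SKOS.definition,
       Literal("Release of liquid petroleum into the environment.", lang="en")))
g.add((EX["collection1"], RDF.type, SKOS.Collection))
g.add((EX["collection1"], SKOS.member, concept))     # publication / addressing
g.add((EX["scheme1"], RDF.type, SKOS.ConceptScheme))
g.add((concept, SKOS.inScheme, EX["scheme1"]))       # hierarchical discovery
print(g.serialize(format="turtle"))
```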

  8. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
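    The following toy illustrates the relaxed-synchronization setting rather than the AT schemes themselves: a periodic 1D heat equation split across two notional PEs whose interface halo values are refreshed only every other time step, so the interface stencils sometimes use stale data:

```python
import numpy as np

nx = 64
dx = 1.0 / nx
nu = 1.0
dt = 0.4 * dx ** 2 / nu                 # stable FTCS step
steps = 400
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x)
half = nx // 2
halo_left, halo_right = u[half - 1], u[half]   # copies held by the other PE

for step in range(steps):
    if step % 2 == 0:                   # "communication" happens every 2nd step,
        halo_left, halo_right = u[half - 1], u[half]   # so halos can be stale
    full = np.concatenate(([u[-1]], u, [u[0]]))        # periodic ghost cells
    lap = (full[2:] - 2.0 * full[1:-1] + full[:-2]) / dx ** 2
    # interface stencils use the (possibly delayed) halo values instead
    lap[half - 1] = (halo_right - 2.0 * u[half - 1] + u[half - 2]) / dx ** 2
    lap[half] = (u[half + 1] - 2.0 * u[half] + halo_left) / dx ** 2
    u = u + dt * nu * lap

exact = np.exp(-nu * (2 * np.pi) ** 2 * steps * dt) * np.sin(2 * np.pi * x)
print("max error with stale halos:", np.max(np.abs(u - exact)))
```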

  9. Implementation of a Cross-Layer Sensing Medium-Access Control Scheme.

    PubMed

    Su, Yishan; Fu, Xiaomei; Han, Guangyao; Xu, Naishen; Jin, Zhigang

    2017-04-10

    In this paper, compressed sensing (CS) theory is utilized in a medium-access control (MAC) scheme for wireless sensor networks (WSNs). We propose a new cross-layer compressed sensing medium-access control (CL CS-MAC) scheme, combining the physical layer and data link layer, where the wireless transmission in the physical layer is considered as a compression process of the requested packets in the data link layer according to compressed sensing (CS) theory. We first introduce the use of compressive complex requests to identify the exact active sensor nodes, which makes the scheme more efficient. Moreover, because the reconstruction process is executed in the complex field of the physical layer, where no bit or frame synchronization is needed, an asynchronous and random request scheme can be implemented without synchronization payload. We set up a testbed based on software-defined radio (SDR) to implement the proposed CL CS-MAC scheme practically and to demonstrate its validity. For large-scale WSNs, the simulation results show that the proposed CL CS-MAC scheme provides higher throughput and robustness than the carrier sense multiple access (CSMA) and compressed sensing medium-access control (CS-MAC) schemes.
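    The request-identification step rests on standard sparse recovery. Here is a generic sketch using orthogonal matching pursuit; the paper's exact reconstruction in the complex field is not reproduced, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 64, 20, 3                     # nodes, measurements, active nodes
A = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0   # active-node indicator
y = A @ x                                       # "compressed" request channel

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("recovered active nodes:", np.flatnonzero(x_hat > 0.5))
```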

  10. Effects of Pump-turbine S-shaped Characteristics on Transient Behaviours: Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Yang, Jiandong; Hu, Jinhong; Tang, Renbo

    2017-05-01

    A pumped storage station model was set up and introduced in a previous paper. In the model station, the S-shaped characteristic curves were measured under the load rejection condition with the guide vanes stalled. Load rejection tests in which the guide vanes closed linearly were performed to validate the effect of the S-shaped characteristics on hydraulic transients. Load rejection experiments with different guide vane closing schemes were also performed to determine a suitable scheme considering the S-shaped characteristics. The condition of one pump turbine rejecting its load after another, defined as one-after-another (OAA) load rejection, was tested to examine the possibility of S-induced extreme draft tube pressures.

  11. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

    In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of the material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design-sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent circuit method.

  12. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of multiplicative nature. Many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
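    A sketch of the parameter-identification route in its simplest form: scalar recursive least squares tracking a damping coefficient and flagging a fault when the estimate drifts. The model, thresholds, and fault scenario are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
c_nominal, n = 1200.0, 400                      # N*s/m, number of samples
v = rng.normal(0.0, 0.2, n)                     # measured damper velocity
c_true = np.where(np.arange(n) < 200, c_nominal, 0.6 * c_nominal)  # fault at k=200
F = c_true * v + rng.normal(0.0, 5.0, n)        # measured damper force, F = c * v

c_hat, P, lam = 0.0, 1e6, 0.98                  # RLS with forgetting factor
for k in range(n):
    phi = v[k]                                  # regressor
    gain = P * phi / (lam + phi * P * phi)
    c_hat = c_hat + gain * (F[k] - phi * c_hat)
    P = (P - gain * phi * P) / lam
    if k in (150, 250, 399):
        fault = abs(c_hat - c_nominal) / c_nominal > 0.2
        print(f"k={k}: c_hat={c_hat:.0f}, fault={fault}")
```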

  13. Nonvolatile memory with Co-SiO2 core-shell nanocrystals as charge storage nodes in floating gate

    NASA Astrophysics Data System (ADS)

    Liu, Hai; Ferrer, Domingo A.; Ferdousi, Fahmida; Banerjee, Sanjay K.

    2009-11-01

    In this letter, we report a nanocrystal floating gate memory with Co-SiO2 core-shell nanocrystals as charge storage nodes. By using a water-in-oil microemulsion scheme, Co-SiO2 core-shell nanocrystals were synthesized and closely packed to achieve a high-density matrix in the floating gate without aggregation. The insulator shell can also help to increase the thermal stability of the nanocrystal metal core during the fabrication process and thereby improve memory performance.

  14. Analysis of an ABE Scheme with Verifiable Outsourced Decryption.

    PubMed

    Liao, Yongjian; He, Yichuan; Li, Fagen; Jiang, Shaoquan; Zhou, Shijie

    2018-01-10

    Attribute-based encryption (ABE) is a popular cryptographic technology to protect the security of users' data in cloud computing. In order to reduce its decryption cost, outsourcing the decryption of ciphertexts is an available method, which enables users to outsource a large number of decryption operations to the cloud service provider. To guarantee the correctness of transformed ciphertexts computed by the cloud server via the outsourced decryption, it is necessary to check the correctness of the outsourced decryption to ensure security for the data of users. Recently, Li et al. proposed an ABE scheme with full verifiability of the outsourced decryption (ABE-VOD) for authorized and unauthorized users, which can simultaneously check the correctness of the transformed ciphertext for both of them. However, in this paper we show that their ABE-VOD scheme cannot achieve the results they claimed, such as finding all invalid ciphertexts and checking the correctness of the transformed ciphertext for the authorized user via checking it for the unauthorized user. We first construct some invalid ciphertexts which can pass the validity check in the decryption algorithm. That means their "verify-then-decrypt" approach does not work. Next, we show that the method of checking the validity of the outsourced decryption for authorized users via checking it for unauthorized users is not always correct. That is to say, there exist some invalid ciphertexts which can pass the validity check for the unauthorized user but cannot pass the validity check for the authorized user.

  15. Analysis of an ABE Scheme with Verifiable Outsourced Decryption

    PubMed Central

    He, Yichuan; Li, Fagen; Jiang, Shaoquan; Zhou, Shijie

    2018-01-01

    Attribute-based encryption (ABE) is a popular cryptographic technology to protect the security of users’ data in cloud computing. In order to reduce its decryption cost, outsourcing the decryption of ciphertexts is an available method, which enables users to outsource a large number of decryption operations to the cloud service provider. To guarantee the correctness of transformed ciphertexts computed by the cloud server via the outsourced decryption, it is necessary to check the correctness of the outsourced decryption to ensure security for the data of users. Recently, Li et al. proposed an ABE scheme with full verifiability of the outsourced decryption (ABE-VOD) for authorized and unauthorized users, which can simultaneously check the correctness of the transformed ciphertext for both of them. However, in this paper we show that their ABE-VOD scheme cannot achieve the results they claimed, such as finding all invalid ciphertexts and checking the correctness of the transformed ciphertext for the authorized user via checking it for the unauthorized user. We first construct some invalid ciphertexts which can pass the validity check in the decryption algorithm. That means their “verify-then-decrypt” approach does not work. Next, we show that the method of checking the validity of the outsourced decryption for authorized users via checking it for unauthorized users is not always correct. That is to say, there exist some invalid ciphertexts which can pass the validity check for the unauthorized user, but cannot pass the validity check for the authorized user. PMID:29320418

  16. SGC Tests for Influence of Material Composition on Compaction Characteristic of Asphalt Mixtures

    PubMed Central

    Chen, Qun

    2013-01-01

    The compaction characteristics of a surface-layer asphalt mixture (13-type gradation) were studied using Superpave gyratory compactor (SGC) simulative compaction tests. Based on analysis of the gyratory compaction densification curve, the influence of the contents of mineral aggregates of all sizes, and of asphalt, on the compaction characteristics of asphalt mixtures was determined. The SGC tests show that the density of a mixture with a higher asphalt content increases faster, that there is an optimal amount of fine aggregates for optimal compaction, and that an appropriate amount of mineral powder improves the workability of mixtures, while excessive mineral powder makes them dry and hard. These conclusions provide a basis for adjusting material composition to improve compaction performance; for a designed asphalt mixture, compaction performance can also be predicted from them, which aids the choice of compaction schemes. PMID:23818830
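
    The densification analysis above can be summarized numerically: SGC densification curves are commonly fit as density against the logarithm of gyration count, C(N) = C1 + k*ln(N), with the slope k indicating how readily a mixture densifies. The following sketch fits that relation to made-up gyration data (the values are illustrative, not from the paper):

    ```python
    import numpy as np

    # Fit the SGC densification curve C(N) = C1 + k*ln(N).
    # Gyration counts and %Gmm densities below are hypothetical sample data.
    gyrations = np.array([5, 8, 15, 30, 50, 80, 100, 130, 160, 205])
    density = np.array([85.1, 86.8, 89.0, 91.3, 92.9, 94.3, 95.0, 95.7, 96.3, 96.9])  # %Gmm

    k, c1 = np.polyfit(np.log(gyrations), density, 1)
    print(f"densification slope k = {k:.2f} %Gmm per ln(N), C1 = {c1:.2f}")
    # Per the abstract's trend, a mixture with a higher asphalt content would
    # show a larger slope k (its density increases faster with gyrations).
    ```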

  17. SGC tests for influence of material composition on compaction characteristic of asphalt mixtures.

    PubMed

    Chen, Qun; Li, Yuzhi

    2013-01-01

    The compaction characteristics of a surface-layer asphalt mixture (13-type gradation) were studied using Superpave gyratory compactor (SGC) simulative compaction tests. Based on analysis of the gyratory compaction densification curve, the influence of the contents of mineral aggregates of all sizes, and of asphalt, on the compaction characteristics of asphalt mixtures was determined. The SGC tests show that the density of a mixture with a higher asphalt content increases faster, that there is an optimal amount of fine aggregates for optimal compaction, and that an appropriate amount of mineral powder improves the workability of mixtures, while excessive mineral powder makes them dry and hard. These conclusions provide a basis for adjusting material composition to improve compaction performance; for a designed asphalt mixture, compaction performance can also be predicted from them, which aids the choice of compaction schemes.

  18. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    PubMed

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example: several estimators have been proposed, each with its own (dis-)advantages. Experimenters must therefore choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms into so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
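
    To make the aggregation idea concrete, here is a minimal Python sketch of a "synthetic problem solver" in the portfolio-selection spirit described above: two hypothetical steady-state estimators are scored on training runs with known limits, and the overall winner is applied to new series. The estimators and data are toy stand-ins, not the James II implementation:

    ```python
    import numpy as np

    def estimator_mean_tail(series, frac=0.5):
        """Estimate the steady-state mean from the last `frac` of the series."""
        cut = int(len(series) * (1 - frac))
        return series[cut:].mean()

    def estimator_mser(series):
        """MSER-like rule: drop the transient prefix that minimizes a
        variance-per-sample proxy, then average the remainder."""
        best, best_score = 0, np.inf
        for d in range(len(series) // 2):
            tail = series[d:]
            score = tail.var(ddof=1) / len(tail)
            if score < best_score:
                best, best_score = d, score
        return series[best:].mean()

    class SyntheticSolver:
        """Portfolio that scores each estimator on training runs with known
        true means and applies the overall winner to new data."""
        def __init__(self, estimators):
            self.estimators, self.choice = estimators, None

        def fit(self, training_runs, true_means):
            errors = np.zeros(len(self.estimators))
            for run, mu in zip(training_runs, true_means):
                for i, est in enumerate(self.estimators):
                    errors[i] += abs(est(run) - mu)
            self.choice = self.estimators[int(np.argmin(errors))]
            return self

        def estimate(self, series):
            return self.choice(series)

    # toy usage: AR(1)-like transient series with known steady-state mean 1.0
    rng = np.random.default_rng(0)
    def make_run(n=2000):
        x, out = 5.0, []
        for _ in range(n):
            x = 0.9 * x + 0.1 * 1.0 + 0.05 * rng.standard_normal()
            out.append(x)
        return np.array(out)

    solver = SyntheticSolver([estimator_mean_tail, estimator_mser]).fit(
        [make_run() for _ in range(5)], [1.0] * 5)
    print("steady-state estimate:", round(solver.estimate(make_run()), 3))
    ```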

  19. Validation of a RANS transition model using a high-order weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang

    2013-04-01

    A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors and thus mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.

  20. Direct reconstruction in CT-analogous pharmacokinetic diffuse fluorescence tomography: two-dimensional simulative and experimental validations

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Zhang, Yanqi; Zhang, Limin; Li, Jiao; Zhou, Zhongxing; Zhao, Huijuan; Gao, Feng

    2016-04-01

    We present a generalized strategy for direct reconstruction in pharmacokinetic diffuse fluorescence tomography (DFT) with a CT-analogous scanning mode, which can accomplish one-step reconstruction of indocyanine-green pharmacokinetic-rate images within in vivo small animals by incorporating the compartmental kinetic model into an adaptive extended Kalman filtering scheme and using an instantaneous sampling dataset. This scheme, compared with the established indirect and direct methods, eliminates the interim error of the DFT inversion and relaxes the costly instrumentation requirement of acquiring highly time-resolved data-sets of complete 360 deg projections. The scheme is validated by two-dimensional simulations for the two-compartment model and pilot phantom experiments for the one-compartment model, suggesting that the proposed method can estimate the compartmental concentrations and the pharmacokinetic rates simultaneously with fair quantitative and localization accuracy, and is well suited for cost-effective and dense-sampling instrumentation based on the highly sensitive photon counting technique.

  1. Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data

    NASA Technical Reports Server (NTRS)

    Deschamps, P.-Y.; Frouin, R.

    1997-01-01

    The investigation focuses on two key issues in satellite ocean color remote sensing, namely the presence of whitecaps on the sea surface and the validity of the aerosol models selected for the atmospheric correction of SeaWiFS data. Experiments were designed and conducted at the Scripps Institution of Oceanography to measure the optical properties of whitecaps and to study the aerosol optical properties in a typical mid-latitude coastal environment. CIMEL Electronique sunphotometers, now integrated in the AERONET network, were also deployed permanently in Bermuda and in Lanai, calibration/validation sites for SeaWiFS and MODIS. Original results were obtained on the spectral reflectance of whitecaps and on the choice of aerosol models for atmospheric correction schemes and the type of measurements that should be made to verify those schemes. Bio-optical algorithms to remotely sense primary productivity from space were also evaluated, as well as current algorithms to estimate PAR at the earth's surface.

  2. Efficient and accurate numerical schemes for a hydro-dynamically coupled phase field diblock copolymer model

    NASA Astrophysics Data System (ADS)

    Cheng, Qing; Yang, Xiaofeng; Shen, Jie

    2017-07-01

    In this paper, we consider numerical approximations of a hydro-dynamically coupled phase field diblock copolymer model, in which the free energy contains a kinetic potential, a gradient entropy, a Ginzburg-Landau double well potential, and a long range nonlocal type potential. We develop a set of second order time marching schemes for this system using the "Invariant Energy Quadratization" approach for the double well potential, the projection method for the Navier-Stokes equation, and a subtle implicit-explicit treatment for the stress and convective term. The resulting schemes are linear and lead to symmetric positive definite systems at each time step, thus they can be efficiently solved. We further prove that these schemes are unconditionally energy stable. Various numerical experiments are performed to validate the accuracy and energy stability of the proposed schemes.

  3. Adaptive elimination of synchronization in coupled oscillator

    NASA Astrophysics Data System (ADS)

    Zhou, Shijie; Ji, Peng; Zhou, Qing; Feng, Jianfeng; Kurths, Jürgen; Lin, Wei

    2017-08-01

    We present here an adaptive control scheme with a feedback delay to achieve elimination of synchronization in a large population of coupled and synchronized oscillators. We validate the feasibility of this scheme not only in coupled Kuramoto oscillators with a unimodal or bimodal natural frequency distribution, but also in two representative models of neuronal networks, namely, the FitzHugh-Nagumo spiking oscillators and the Hindmarsh-Rose bursting oscillators. More significantly, we analytically illustrate the feasibility of the proposed scheme with a feedback delay and reveal how the exact topological form of the bimodal natural frequency distribution influences the scheme's performance. We anticipate that our developed scheme will deepen the understanding and refinement of those controllers, e.g. techniques of deep brain stimulation, which have been implemented in remedying synchronization-induced mental disorders including Parkinson's disease and epilepsy.
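
    A minimal numerical sketch of the control idea, assuming a globally coupled Kuramoto population with a delayed mean-field feedback whose gain grows adaptively while coherence persists (the gain dynamics and all parameters are illustrative choices, not the paper's exact scheme):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, K, dt, T, tau = 200, 2.0, 0.01, 60.0, 0.3
    steps, lag = int(T / dt), int(tau / dt)
    omega = rng.normal(0.0, 0.5, N)                # unimodal natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)
    k = 0.0                                        # adaptive feedback gain
    hist = [theta.copy() for _ in range(lag + 1)]  # delay buffer
    r_trace = []

    for step in range(steps):
        z = np.exp(1j * theta).mean()              # current mean field
        z_d = np.exp(1j * hist[0]).mean()          # mean field tau seconds ago
        r = abs(z)
        # delayed mean-field feedback opposing the synchronizing drive
        control = -k * abs(z_d) * np.sin(np.angle(z_d) - theta)
        theta = theta + dt * (omega + K * r * np.sin(np.angle(z) - theta) + control)
        if step * dt > 20.0:                       # switch adaptation on at t = 20 s
            k += dt * 5.0 * r**2                   # gain grows while coherence persists
        hist.append(theta.copy()); hist.pop(0)
        r_trace.append(r)

    r = np.array(r_trace)
    print(f"order parameter r: synchronized ~ {r[1500:2000].mean():.2f}, "
          f"controlled ~ {r[-500:].mean():.2f}, final gain k = {k:.2f}")
    ```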

  4. Energy efficient strategy for throughput improvement in wireless sensor networks.

    PubMed

    Jabbar, Sohail; Minhas, Abid Ali; Imran, Muhammad; Khalid, Shehzad; Saleem, Kashif

    2015-01-23

    Network lifetime and throughput are among the prime concerns when designing routing protocols for wireless sensor networks (WSNs). However, most of the existing schemes are geared towards either prolonging network lifetime or improving throughput. This paper presents an energy efficient routing scheme for throughput improvement in WSNs. The proposed scheme exploits a multilayer cluster design for energy efficient forwarding-node selection, cluster-head rotation and both inter- and intra-cluster routing. To improve throughput, we rotate the role of cluster head among various nodes based on two threshold levels, which reduces the number of dropped packets. We conducted simulations in the NS2 simulator to validate the performance of the proposed scheme. Simulation results demonstrate the performance efficiency of the proposed scheme in terms of various metrics compared to similar approaches published in the literature.

  5. Energy Efficient Strategy for Throughput Improvement in Wireless Sensor Networks

    PubMed Central

    Jabbar, Sohail; Minhas, Abid Ali; Imran, Muhammad; Khalid, Shehzad; Saleem, Kashif

    2015-01-01

    Network lifetime and throughput are among the prime concerns when designing routing protocols for wireless sensor networks (WSNs). However, most of the existing schemes are geared towards either prolonging network lifetime or improving throughput. This paper presents an energy efficient routing scheme for throughput improvement in WSNs. The proposed scheme exploits a multilayer cluster design for energy efficient forwarding-node selection, cluster-head rotation and both inter- and intra-cluster routing. To improve throughput, we rotate the role of cluster head among various nodes based on two threshold levels, which reduces the number of dropped packets. We conducted simulations in the NS2 simulator to validate the performance of the proposed scheme. Simulation results demonstrate the performance efficiency of the proposed scheme in terms of various metrics compared to similar approaches published in the literature. PMID:25625902

  6. A second step in development of a checklist for screening risk for violence in acute psychiatric patients: evaluation of interrater reliability of the Preliminary Scheme 33.

    PubMed

    Bjørkly, Stål; Moger, Tron A

    2007-12-01

    The Acute Project is a research project conducted on acute psychiatric admission wards in Norway. The objective is to develop and validate a structured, easy-to-use screening checklist for assessment of risk for violence in patients both during their stay in the ward and after discharge. The Preliminary Scheme 33 is a 33-item screening checklist with content domain inspired by the Historical-Clinical-Risk Management Scheme (HCR-20), the Brøset Violence Checklist, and eight risk factors extracted from the literature on risk assessment. The Preliminary Scheme 33 was designed and tested in two steps by a research group which includes the authors. The common aim of both steps was to develop this into a time economical, reliable, and valid checklist. In the first step in 2006 the predictive validity of the individual items was tested. The present work presents results from the second step, a study conducted to assess the interrater reliability of the 33 items. Eight clinicians working in an acute psychiatric unit volunteered to be raters and were trained to score the 33 items on a three-point scale in relation to 15 clinical vignettes, which contained information from 15 acute psychiatric patients' files. Analysis showed high interrater reliability for the total score with an intraclass correlation coefficient (ICC) of .86 (95% CI: 0.74-0.94). However, a substantial proportion of the items had medium to low ICCs. Consequences of this finding for further development of these items into a brief screen are discussed.

  7. Best Practicable Aggregation of Species: a step forward for species surrogacy in environmental assessment and monitoring

    PubMed Central

    Bevilacqua, Stanislao; Claudet, Joachim; Terlizzi, Antonio

    2013-01-01

    The available taxonomic expertise and knowledge of species is still inadequate to cope with the urgent need for cost-effective methods of quantifying community response to natural and anthropogenic drivers of change. So far, the mainstream approach to overcoming these impediments has focused on using higher taxa as surrogates for species. However, the use of such taxonomic surrogates often limits inferences about the causality of community patterns, which in turn is essential for effective environmental management strategies. Here, we propose an alternative approach to species surrogacy, the "Best Practicable Aggregation of Species" (BestAgg), in which surrogates are freed from fixed taxonomic schemes. The approach uses null models from random aggregations of species to minimize the number of surrogates without causing significant losses of information on community patterns. Surrogate types are then selected to maximize ecological information. We applied the approach to real case studies on natural and human-driven gradients in marine benthic communities. Outcomes from BestAgg were also compared with those obtained using classic taxonomic surrogates. Results showed that BestAgg surrogates are effective in detecting community changes. In contrast to classic taxonomic surrogates, BestAgg surrogates retain significantly more information on species-level community patterns than expected by chance, with potential time savings of up to 25% during sample processing. Our findings showed that BestAgg surrogates from a pilot study can be used successfully in similar environmental investigations in the same area, or for subsequent long-term monitoring programs. BestAgg is virtually applicable to any environmental context, allowing multiple surrogacy schemes to be exploited beyond perspectives that rely strictly on taxonomic relatedness among species. This flexibility is crucial for extending the concept of species surrogacy to the ecological traits of species, leading to ecologically meaningful surrogates that, while cost-effective in reflecting community patterns, may also help unveil underlying processes. A specific R code for BestAgg is provided. PMID:24198939
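
    The paper itself provides R code; the Python sketch below only illustrates the null-model step at the heart of the approach: species are pooled into random aggregates, and the correlation between sample dissimilarities before and after aggregation, collected over many random groupings, forms the null distribution against which a candidate surrogate scheme could be judged. The community matrix is synthetic:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    # Null model from random aggregations: how much species-level pattern
    # survives when n_species are pooled into g random groups?
    rng = np.random.default_rng(9)
    n_samples, n_species, g = 30, 120, 15
    community = rng.poisson(rng.gamma(1.0, 2.0, n_species), (n_samples, n_species))
    d_full = pdist(community, metric="braycurtis")   # species-level pattern

    def random_aggregation_corr():
        groups = rng.integers(0, g, n_species)       # random species -> group map
        agg = np.stack([community[:, groups == k].sum(axis=1) for k in range(g)],
                       axis=1)
        return spearmanr(d_full, pdist(agg, metric="braycurtis"))[0]

    null = np.array([random_aggregation_corr() for _ in range(200)])
    print(f"null correlation: mean {null.mean():.2f}, "
          f"95th percentile {np.percentile(null, 95):.2f}")
    ```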

  8. Historical extension of operational NDVI products for livestock insurance in Kenya

    NASA Astrophysics Data System (ADS)

    Vrieling, Anton; Meroni, Michele; Shee, Apurba; Mude, Andrew G.; Woodard, Joshua; de Bie, C. A. J. M. (Kees); Rembold, Felix

    2014-05-01

    Droughts induce livestock losses that severely affect Kenyan pastoralists. Recent index insurance schemes have the potential of being a viable tool for insuring pastoralists against drought-related risk. Such schemes require as input a forage scarcity (or drought) index that can be reliably updated in near real-time and that relates strongly to livestock mortality. Generally, a long record (>25 years) of the index is needed to correctly estimate mortality risk and calculate the related insurance premium. Data from current operational satellites used for large-scale vegetation monitoring span a maximum of 15 years, a period considered insufficient for accurate premium computation. This study examines how operational NDVI datasets compare to, and could be combined with, the non-operational, recently constructed 30-year GIMMS AVHRR record (1981-2011) to provide a near real-time drought index with a long-term archive for the arid lands of Kenya. We compared six freely available, near real-time NDVI products: five from MODIS and one from SPOT-VEGETATION. Prior to comparison, all datasets were averaged in time for the two vegetative seasons in Kenya, and aggregated spatially at the administrative division level at which the insurance is offered. The feasibility of extending the resulting aggregated drought indices back in time was assessed using jackknifed R2 statistics (leave-one-year-out) for the overlapping period 2002-2011. We found that division-specific models were more effective than a global model for linking the division-level temporal variability of the index between NDVI products. Based on our results, good scope exists for historically extending the aggregated drought index, thus providing a longer operational record for insurance purposes. We showed that this extension may have a large effect on the calculated insurance premium. Finally, we discuss several possible improvements to the drought index.
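
    The leave-one-year-out skill assessment can be sketched directly: for each held-out year, a linear model linking the two division-level index series is refit on the remaining years, and the pooled out-of-sample predictions yield the jackknifed R2. The toy series below are hypothetical stand-ins for the MODIS- and GIMMS-based indices:

    ```python
    import numpy as np

    def jackknife_r2(x, y):
        """Leave-one-year-out R^2 for predicting y from x with y ~ a + b*x.

        Each year is predicted from a model fit on the remaining years;
        the skill of the pooled predictions is reported as R^2.
        """
        n, preds = len(x), np.empty(len(x))
        for i in range(n):
            keep = np.arange(n) != i
            b, a = np.polyfit(x[keep], y[keep], 1)
            preds[i] = a + b * x[i]
        ss_res = np.sum((y - preds) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    # toy overlap period (e.g. 2002-2011): division-level seasonal NDVI
    rng = np.random.default_rng(2)
    gimms = rng.normal(0.35, 0.05, 10)                    # hypothetical GIMMS index
    modis = 0.9 * gimms + 0.02 + rng.normal(0, 0.01, 10)  # hypothetical MODIS index
    print(f"jackknifed R2 = {jackknife_r2(modis, gimms):.2f}")
    ```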

  9. Scaling Linguistic Characterization of Precipitation Variability

    NASA Astrophysics Data System (ADS)

    Primo, C.; Gutierrez, J. M.

    2003-04-01

    Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters; however, different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic precipitation sequence is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed at that site during a period representative of the climatology: the more variable the observed weekly precipitation sequences, the more complex the resulting language. To characterize these languages, we first applied Zipf's method, obtaining scaling histograms of rank-ordered frequencies. However, to obtain significant exponents the scaling must hold over several orders of magnitude, requiring long sequences of daily precipitation which are not available at individual stations, so this analysis is not suitable for applications involving particular stations (such as regionalization). We therefore introduce an alternative fractal method applicable to data from local stations. The so-called chaos-game method uses Iterated Function Systems (IFS) to represent rainfall languages graphically, in such a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian peninsula, comparing them with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs for a clustering regionalization method, and the resulting clusters are compared.
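
    A compact illustration of the chaos-game step for binary rainfall words, using the one-dimensional IFS x -> (x + s)/2 so that distinct weekly words land in distinct dyadic intervals, followed by a box-counting estimate of the pattern's dimension. The two synthetic "sites" stand in for real station data:

    ```python
    import numpy as np

    def chaos_game_1d(words):
        """Map each binary word (tuple of 0/1, e.g. a week of rain/no-rain
        days) to a point in [0,1) via the IFS x -> (x + s)/2."""
        pts = []
        for w in words:
            x = 0.5
            for s in w:
                x = (x + s) / 2.0
            pts.append(x)
        return np.array(pts)

    def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64, 128)):
        """Slope of log N(eps) versus log(1/eps) over the occupied boxes."""
        logN, logInv = [], []
        for k in scales:
            occupied = len(np.unique(np.floor(points * k).astype(int)))
            logN.append(np.log(occupied))
            logInv.append(np.log(k))
        slope, _ = np.polyfit(logInv, logN, 1)
        return slope

    # toy "languages": a site with highly variable weeks versus a site where
    # only a single run of three consecutive wet days ever occurs
    rng = np.random.default_rng(3)
    variable_site = [tuple(rng.integers(0, 2, 7)) for _ in range(2000)]
    clustered_site = [tuple([0] * s + [1] * 3 + [0] * (4 - s))
                      for s in rng.integers(0, 5, 2000)]
    print("variable site D ~", round(box_counting_dimension(chaos_game_1d(variable_site)), 2))
    print("clustered site D ~", round(box_counting_dimension(chaos_game_1d(clustered_site)), 2))
    ```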

  10. A robust uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care.

    PubMed

    Wen, Fengtong

    2013-12-01

    User authentication plays an important role in protecting resources or services from being accessed by unauthorized users. In a recent paper, Das et al. proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. This scheme uses three factors, i.e., biometrics, password, and smart card, to protect security. It protects user privacy and is believed to resist a range of network attacks, even if the secret information stored in the smart card is compromised. In this paper, we analyze the security of Das et al.'s scheme and show that it is in fact insecure against replay attacks, user impersonation attacks and off-line guessing attacks. We then propose a robust uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. Compared with existing schemes, our protocol uses a different user authentication mechanism to resist replay attacks. We show that our proposed scheme provides stronger security than previous protocols. Furthermore, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.

  11. A splitting scheme based on the space-time CE/SE method for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    2016-08-01

    Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes the charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing the relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage on the one-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the present model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.

  12. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.

    PubMed

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung

    2017-01-01

    A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic-maps-based password-authenticated key agreement scheme that uses smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic-map-based Diffie-Hellman problem, and is proven in the real-or-random model and the sequence-of-games model. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, thereby avoiding the weaknesses of previous schemes.
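
    The "extended chaotic map" machinery underlying such schemes is the semigroup property of Chebyshev polynomials over a finite field, T_a(T_b(x)) = T_ab(x) (mod p), which enables a Diffie-Hellman-style exchange. A minimal sketch with toy parameters follows (not the paper's protocol, which additionally involves passwords, timestamps and smartcard secrets):

    ```python
    def cheby(n, x, p):
        """T_n(x) mod p via fast 2x2 matrix exponentiation of the recurrence
        T_k = 2x*T_{k-1} - T_{k-2}, with T_0 = 1 and T_1 = x."""
        def mat_mul(A, B):
            return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p,
                     (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
                    [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p,
                     (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p]]
        M, R = [[2 * x % p, p - 1], [1, 0]], [[1, 0], [0, 1]]
        while n:
            if n & 1:
                R = mat_mul(R, M)
            M, n = mat_mul(M, M), n >> 1
        # M^n maps (T_1, T_0) to (T_{n+1}, T_n), so T_n = R[1][0]*x + R[1][1]
        return (R[1][0] * x + R[1][1]) % p

    p, x = 2**127 - 1, 123456789             # public modulus and seed (toy values)
    a, b = 977711, 135577                     # Alice's and Bob's secret exponents
    A, B = cheby(a, x, p), cheby(b, x, p)     # exchanged over the public channel
    assert cheby(a, B, p) == cheby(b, A, p)   # both sides derive T_{ab}(x) mod p
    print("shared key material:", cheby(a, B, p))
    ```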

  13. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps

    PubMed Central

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han

    2017-01-01

    A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic-maps-based password-authenticated key agreement scheme that uses smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic-map-based Diffie-Hellman problem, and is proven in the real-or-random model and the sequence-of-games model. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, thereby avoiding the weaknesses of previous schemes. PMID:28759615

  14. Novel Directional Protection Scheme for the FREEDM Smart Grid System

    NASA Astrophysics Data System (ADS)

    Sharma, Nitish

    This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases involving power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and power-electronics-based solid state transformers adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition the Solid State Transformer (SST) limits the fault current at 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as unsuitable for long distances. Hence, a new wireless protection scheme is presented in this thesis. The use of wireless communication is extended to protect the large-scale meshed distribution system with distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), an electronic circuit breaker, opening the FIDs. The trip signal must also be received and accepted by the SST, which blocks SST operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.

  15. Negative refraction using Raman transitions and chirality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikes, D. E.; Yavuz, D. D.

    2011-11-15

    We present a scheme that achieves negative refraction with low absorption in far-off resonant atomic systems. The scheme utilizes Raman resonances and does not require the simultaneous presence of an electric-dipole transition and a magnetic-dipole transition near the same wavelength. We show that two interfering Raman transitions coupled to a magnetic-dipole transition can achieve a negative index of refraction with low absorption through magnetoelectric cross-coupling. We confirm the validity of the analytical results with exact numerical simulations of the density matrix. We also discuss possible experimental implementations of the scheme in rare-earth metal atomic systems.

  16. A Comprehensive Observational Coding Scheme for Analyzing Instrumental, Affective, and Relational Communication in Health Care Contexts

    PubMed Central

    SIMINOFF, LAURA A.; STEP, MARY M.

    2011-01-01

    Many observational coding schemes have been offered to measure communication in health care settings. These schemes fall short of capturing multiple functions of communication among providers, patients, and other participants. After a brief review of observational communication coding, the authors present a comprehensive scheme for coding communication that is (a) grounded in communication theory, (b) accounts for instrumental and relational communication, and (c) captures important contextual features with tailored coding templates: the Siminoff Communication Content & Affect Program (SCCAP). To test SCCAP reliability and validity, the authors coded data from two communication studies. The SCCAP provided reliable measurement of communication variables including tailored content areas and observer ratings of speaker immediacy, affiliation, confirmation, and disconfirmation behaviors. PMID:21213170

  17. Solution of the 2-D steady-state radiative transfer equation in participating media with specular reflections using SUPG and DG finite elements

    NASA Astrophysics Data System (ADS)

    Le Hardy, D.; Favennec, Y.; Rousseau, B.

    2016-08-01

    The 2D radiative transfer equation coupled with specular reflection boundary conditions is solved using finite element schemes. Both Discontinuous Galerkin and Streamline-Upwind Petrov-Galerkin variational formulations are fully developed. These two schemes are validated step-by-step for all involved operators (transport, scattering, reflection) using analytical formulations. Numerical comparison of the two schemes, in terms of convergence rate, reveals that the quadratic SUPG scheme proves efficient for solving such problems. This comparison constitutes the main contribution of the paper. Moreover, the solution process is accelerated using block SOR-type iterative methods, for which the optimal parameter is determined in a very cheap way.

  18. A Comparative Analysis of the Validity of US State- and County-Level Social Capital Measures and Their Associations with Population Health

    ERIC Educational Resources Information Center

    Lee, Chul-Joo; Kim, Daniel

    2013-01-01

    The goals of this study were to validate a number of available collective social capital measures at the US state and county levels, and to examine the relative extent to which these social capital measures are associated with population health outcomes. Measures of social capital at the US state level included aggregate indices based on the…

  19. A meta-analysis of an implicit measure of personality functioning: the Mutuality of Autonomy Scale.

    PubMed

    Graceffo, Robert A; Mihura, Joni L; Meyer, Gregory J

    2014-01-01

    The Mutuality of Autonomy scale (MA) is a Rorschach variable designed to capture the degree to which individuals mentally represent self and other as mutually autonomous versus pathologically destructive (Urist, 1977). Discussions of the MA's validity found in articles and chapters usually claim good support, which we evaluated by a systematic review and meta-analysis of its construct validity. Overall, in a random effects analysis across 24 samples (N = 1,801) and 91 effect sizes, the MA scale was found to maintain a relationship of r = .20, 95% CI [.16, .25], with relevant validity criteria. We hypothesized that MA summary scores that aggregate more MA response-level data would maintain the strongest relationship with relevant validity criteria. Results supported this hypothesis (aggregated scoring method: r = .24, k = 57, S = 24; nonaggregated scoring methods: r = .15, k = 34, S = 10; p = .039, 2-tailed). Across 7 exploratory moderator analyses, only 1 (criterion method) produced significant results. Criteria derived from the Thematic Apperception Test produced smaller effects than clinician ratings, diagnostic differentiation, and self-attributed characteristics; criteria derived from observer reports produced smaller effects than clinician ratings and self-attributed characteristics. Implications of the study's findings are discussed in terms of both research and clinical work.

  20. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 2; Aggregation

    NASA Technical Reports Server (NTRS)

    Schamschula, Marius; Crosson, William L.; Inguva, Ramarao; Yates, Thomas; Laymen, Charles A.; Caulfield, John

    1998-01-01

    This is a follow-up to the preceding presentation by Crosson. The grid size of remote microwave measurements is much coarser than the hydrological model's computational grid. To validate the hydrological models with measurements, we propose mechanisms to aggregate the modeled soil moisture outputs for comparison with the observations. Weighted neighborhood averaging methods are proposed to facilitate the comparison. We also discuss such complications as misalignment, rotation and other distortions introduced by a generalized sensor image.
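
    A minimal sketch of the weighted neighborhood averaging idea: fine-grid modeled soil moisture is averaged up to a coarse radiometer footprint using a Gaussian weight that de-emphasizes pixels far from the footprint center. Grid size, footprint location and kernel width are illustrative assumptions:

    ```python
    import numpy as np

    def aggregate_footprint(fine, center, sigma_pix):
        """Weighted average of a 2-D fine-grid field around `center` (row, col),
        with Gaussian weights of width `sigma_pix` (in fine-grid pixels)."""
        rows, cols = np.indices(fine.shape)
        d2 = (rows - center[0])**2 + (cols - center[1])**2
        w = np.exp(-d2 / (2.0 * sigma_pix**2))
        return float((w * fine).sum() / w.sum())

    # toy modeled soil moisture field (volumetric fraction) on a 100x100 grid
    rng = np.random.default_rng(4)
    model_sm = np.clip(0.25 + 0.05 * rng.standard_normal((100, 100)), 0.0, 0.5)

    # one coarse microwave pixel spanning ~40 fine cells, centered at (50, 50)
    print("aggregated soil moisture:",
          round(aggregate_footprint(model_sm, (50, 50), 20), 3))
    ```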

  1. From Sub-basin to Grid Scale Soil Moisture Disaggregation in SMART, A Semi-distributed Hydrologic Modeling Framework

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.

    2016-12-01

    A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate the water balance at catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are distributed cross sections or equivalent cross sections (ECSs) delineated in first-order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of a first-order sub-basin, further reducing computational time in SMART. Previous investigations using SMART have shown that the temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach; however, the spatial variability of soil moisture within a given HRU is ignored. Here, we examine a number of disaggregation schemes for soil moisture distribution in each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating the soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work will focus on assessing the performance of this scheme in catchments with various topographic and climate settings.
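
    The statistical disaggregation idea can be sketched in a simplified, mean-preserving form: per-pixel weights are learned from a reference distributed simulation as each pixel's share of the long-term HRU mean, then applied to distribute new HRU-average values. The toy fields below stand in for ParFlow.CLM output; the paper's scheme additionally exploits the full covariance structure:

    ```python
    import numpy as np

    # reference simulation: days x pixels, with persistent wet/dry spots
    rng = np.random.default_rng(8)
    ref = 0.25 + 0.04 * rng.standard_normal((365, 50))
    ref += np.linspace(-0.03, 0.03, 50)           # stable spatial pattern

    # pixel weights = each pixel's share of the HRU mean (weights average to 1)
    weights = ref.mean(axis=0) / ref.mean()

    hru_mean_today = 0.31                         # new HRU-level simulated value
    pixels_today = hru_mean_today * weights       # disaggregated field
    print("mean preserved:", np.isclose(pixels_today.mean(), hru_mean_today))
    ```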

  2. Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform

    NASA Astrophysics Data System (ADS)

    Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail

    2014-06-01

    Distance relays are equipped with an out-of-step tripping scheme to ensure correct relay operation during power swings. The out-of-step condition is a consequence of an unstable power swing. It requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing unstable swings from stable swings is a challenging task. This paper presents an intelligent approach to detecting power swings based on the S-Transform signal processing tool. The proposed scheme is based on the S-Transform features of the active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect and discriminate unstable swings from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out with the IEEE 39-bus system, and its performance has been compared with a wavelet-transform-based power swing detection scheme.

  3. Central Upwind Scheme for a Compressible Two-Phase Flow Model

    PubMed Central

    Ahmed, Munshoor; Saleem, M. Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamic work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory, upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to the KFVS scheme. PMID:26039242

  4. Central upwind scheme for a compressible two-phase flow model.

    PubMed

    Ahmed, Munshoor; Saleem, M Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamic work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory, upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to the KFVS scheme.

  5. Shear-induced reaction-limited aggregation kinetics of Brownian particles at arbitrary concentrations

    NASA Astrophysics Data System (ADS)

    Zaccone, Alessio; Gentili, Daniele; Wu, Hua; Morbidelli, Massimo

    2010-04-01

    The aggregation of interacting Brownian particles in sheared concentrated suspensions is an important issue in colloid and soft matter science per se. Also, it serves as a model to understand biochemical reactions occurring in vivo where both crowding and shear play an important role. We present an effective medium approach within the Smoluchowski equation with shear which allows one to calculate the encounter kinetics through a potential barrier under shear at arbitrary colloid concentrations. Experiments on a model colloidal system in simple shear flow support the validity of the model in the concentration range considered. By generalizing Kramers' rate theory to the presence of shear and collective hydrodynamics, our model explains the significant increase in the shear-induced reaction-limited aggregation kinetics upon increasing the colloid concentration.

  6. Method of recommending items to a user based on user interest

    DOEpatents

    Bollen, John; Van De Sompel, Herbert

    2013-11-05

    Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. A technical, standards-based architecture for sharing usage information is presented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service.

  7. Usage based indicators to assess the impact of scholarly works: architecture and method

    DOEpatents

    Bollen, Johan [Santa Fe, NM]; Van De Sompel, Herbert [Santa Fe, NM]

    2012-03-13

    Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. A technical, standards-based architecture for sharing usage information is presented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service.

  8. Avoiding Deontic Explosion by Contextually Restricting Aggregation

    NASA Astrophysics Data System (ADS)

    Meheus, Joke; Beirlaen, Mathieu; van de Putte, Frederik

    In this paper, we present an adaptive logic for deontic conflicts, called P2.1r, that is based on Goble's logic SDLaPe, a bimodal extension of Goble's logic P that invalidates aggregation for all prima facie obligations. The logic P2.1r has several advantages with respect to SDLaPe. For consistent sets of obligations it yields the same results as Standard Deontic Logic, and for inconsistent sets of obligations it validates aggregation "as much as possible". It thus leads to a richer consequence set than SDLaPe. The logic P2.1r avoids Goble's criticisms against other non-adjunctive systems of deontic logic. Moreover, it can handle all the 'toy examples' from the literature as well as more complex ones.

  9. Performance and Maqasid al-Shari'ah's Pentagon-Shaped Ethical Measurement.

    PubMed

    Bedoui, Houssem Eddine; Mansour, Walid

    2015-06-01

    Business performance is traditionally viewed from a one-dimensional financial angle. This paper develops a new approach that links performance to the ethical vision of Islam based on maqasid al-shari'ah (i.e., the objectives of Islamic law). The approach involves a pentagon-shaped performance scheme built on five pillars, namely wealth, posterity, intellect, faith, and the human self. Such a scheme ensures that any firm or organization can ethically contribute to the promotion of human welfare, prevent corruption, and enhance social and economic stability, rather than merely maximizing its own performance in terms of financial return. A quantitative measure of ethical performance is developed. It shows, perhaps surprisingly, that a firm or organization pursuing only the financial aspect at the expense of the others performs poorly. The paper further discusses practical instances of the quantitative measurement of the ethical aspects of the system taken at an aggregate level.

  10. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    A high-order accurate shock-capturing scheme is developed for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately at a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
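
    A much-reduced illustration of the ingredients named above, MUSCL reconstruction with a minmod limiter feeding a Rusanov-type flux, here applied to 1-D inviscid Burgers with forward Euler stepping (the paper's scheme is 5th order in space, 4th order in time, and blends three fluxes; this second-order sketch only shows the basic mechanics):

    ```python
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

    def rusanov_step(u, dx, dt):
        """One forward-Euler step of 1-D Burgers with MUSCL + Rusanov flux,
        periodic boundaries."""
        f = lambda q: 0.5 * q * q
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        uL = u + 0.5 * slope                     # left state at face i+1/2
        uR = np.roll(u - 0.5 * slope, -1)        # right state at face i+1/2
        alpha = np.maximum(abs(uL), abs(uR))     # local max wave speed
        flux = 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)
        return u - dt / dx * (flux - np.roll(flux, 1))

    N = 400
    dx = 1.0 / N
    x = (np.arange(N) + 0.5) * dx
    u = np.sin(2 * np.pi * x) + 0.5              # profile steepens into a shock
    t, dt = 0.0, 0.4 * dx / 1.5                  # CFL-limited time step
    while t < 0.3:
        u = rusanov_step(u, dx, dt)
        t += dt
    print("min/max after shock formation:",
          round(float(u.min()), 3), round(float(u.max()), 3))
    ```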

  11. Low-complexity and modulation-format-independent carrier phase estimation scheme using linear approximation for elastic optical networks

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Xue; Shi, Sheping; Sun, Erkun; Shi, Chen

    2018-03-01

    We propose a low-complexity and modulation-format-independent carrier phase estimation (CPE) scheme based on a two-stage modified blind phase search (MBPS) with linear approximation to compensate the phase noise of arbitrary m-ary quadrature amplitude modulation (m-QAM) signals in elastic optical networks (EONs). Comprehensive numerical simulations are carried out for the case in which the highest possible modulation format in the EON is 256-QAM. The simulation results not only verify its advantages of higher estimation accuracy and modulation-format independence, i.e., universality, but also demonstrate that the implementation complexity is reduced by at least one-fourth in comparison with the traditional BPS scheme. In addition, the proposed scheme shows laser linewidth tolerance similar to that of the traditional BPS scheme. The slightly better OSNR performance of the scheme is also experimentally validated for PM-QPSK and PM-16QAM systems. The combined advantages of low complexity and modulation-format independence could make the proposed scheme an attractive candidate for a flexible receiver-side DSP unit in EONs.
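
    For reference, the baseline blind phase search that the two-stage MBPS refines can be written in a few lines: rotate a block of received symbols by each of B test phases, take hard decisions, and keep the phase minimizing the summed squared distance. The sketch below does this for QPSK, with illustrative block length, noise level and number of test phases:

    ```python
    import numpy as np

    def qpsk_decide(z):
        """Hard decision onto the unit-energy QPSK constellation."""
        return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

    def bps_estimate(block, n_test=32):
        """Blind phase search over the pi/2 ambiguity range of QPSK."""
        phases = np.arange(n_test) / n_test * (np.pi / 2)
        costs = []
        for p in phases:
            rot = block * np.exp(-1j * p)
            costs.append(np.sum(np.abs(rot - qpsk_decide(rot))**2))
        return phases[int(np.argmin(costs))]

    rng = np.random.default_rng(5)
    bits = rng.integers(0, 4, 1024)
    tx = np.exp(1j * (np.pi / 4 + bits * np.pi / 2))   # QPSK symbols
    true_phase = 0.3                                    # slow carrier phase offset
    noise = 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
    rx = tx * np.exp(1j * true_phase) + noise

    est = bps_estimate(rx)
    print(f"estimated phase {est:.3f} rad vs true {true_phase:.3f} rad")
    ```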

  12. Outage Performance Analysis of Relay Selection Schemes in Wireless Energy Harvesting Cooperative Networks over Non-Identical Rayleigh Fading Channels †

    PubMed Central

    Do, Nhu Tri; Bao, Vo Nguyen Quoc; An, Beongku

    2016-01-01

    In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source's radio-frequency radiation and then use that energy to forward the source information. Considering the power-splitting receiver architecture used at the relays to harvest energy, we examine the performance of two popular relay selection schemes, namely the partial relay selection (PRS) and optimal relay selection (ORS) schemes. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identical (i.n.i.d.) Rayleigh fading channels. We derive closed-form approximations for the system outage probabilities of both schemes and validate the analysis by Monte-Carlo simulation. The numerical results provide a comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performance of both schemes. Additionally, we show the advantages and drawbacks of wireless energy harvesting cooperative networks in comparison with conventional cooperative networks. PMID:26927119
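
    The Monte-Carlo validation of the PRS/ORS comparison can be sketched in simplified form, omitting the power-splitting energy harvesting details: K decode-and-forward relays with non-identical mean link gains, with PRS choosing on the first hop only and ORS on the end-to-end (min of two hops) SNR. All gains and the rate threshold are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    K, trials, snr, rate = 4, 200_000, 10.0, 1.0
    th = 2**rate - 1                                  # SNR outage threshold
    mean_sr = np.array([1.0, 0.8, 1.2, 0.6])          # non-identical S->R gains
    mean_rd = np.array([0.7, 1.1, 0.9, 1.3])          # non-identical R->D gains

    # Rayleigh fading => exponentially distributed channel power gains
    g_sr = rng.exponential(mean_sr, (trials, K)) * snr
    g_rd = rng.exponential(mean_rd, (trials, K)) * snr

    e2e = np.minimum(g_sr, g_rd)                      # DF end-to-end SNR per relay
    prs_pick = np.argmax(g_sr, axis=1)                # partial: first hop only
    prs_out = np.mean(e2e[np.arange(trials), prs_pick] < th)
    ors_out = np.mean(e2e.max(axis=1) < th)           # optimal: best end-to-end
    print(f"outage PRS = {prs_out:.4f}, ORS = {ors_out:.4f}")
    ```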

  13. Outage Performance Analysis of Relay Selection Schemes in Wireless Energy Harvesting Cooperative Networks over Non-Identical Rayleigh Fading Channels.

    PubMed

    Do, Nhu Tri; Bao, Vo Nguyen Quoc; An, Beongku

    2016-02-26

    In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source's radio-frequency radiation and then use that energy to forward the source information. Considering the power-splitting receiver architecture used at the relays to harvest energy, we examine the performance of two popular relay selection schemes, namely the partial relay selection (PRS) and optimal relay selection (ORS) schemes. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identical (i.n.i.d.) Rayleigh fading channels. We derive closed-form approximations for the system outage probabilities of both schemes and validate the analysis by Monte-Carlo simulation. The numerical results provide a comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performance of both schemes. Additionally, we show the advantages and drawbacks of wireless energy harvesting cooperative networks in comparison with conventional cooperative networks.

  14. An Energy Efficient Mutual Authentication and Key Agreement Scheme Preserving Anonymity for Wireless Sensor Networks.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2016-06-08

    WSNs (wireless sensor networks) are nowadays viewed as a vital portion of the IoT (Internet of Things). Security is a significant issue in WSNs, especially in resource-constrained environments. AKA (authentication and key agreement) enhances the security of WSNs against adversaries attempting to obtain sensitive sensor data. Various AKA schemes have been developed for verifying the legitimate users of a WSN. Firstly, we scrutinize Amin and Biswas's recent scheme and demonstrate major security loopholes in their work. Next, we propose a lightweight AKA scheme using symmetric key cryptography based on smart cards, which is resilient against all well-known security attacks. Furthermore, we prove that the scheme accomplishes mutual handshake and session key agreement securely between the participants involved under BAN (Burrows, Abadi and Needham) logic. Moreover, formal security analysis and simulations are also conducted using AVISPA (Automated Validation of Internet Security Protocols and Applications) to show that our scheme is secure against active and passive attacks. Additionally, performance analysis shows that our proposed scheme is secure and efficient for resource-constrained WSNs.
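
    In the same spirit as the scheme's goals (mutual authentication plus a fresh session key from symmetric primitives), here is a toy transcript of a nonce-based HMAC handshake; the message formats are hypothetical and deliberately much simpler than the paper's protocol:

    ```python
    import hmac, hashlib, os

    K = os.urandom(32)                     # pre-shared long-term key (toy setup)

    # 1. sensor -> user: fresh nonce_s
    nonce_s = os.urandom(16)
    # 2. user -> sensor: nonce_u plus a MAC over both nonces (proves user knows K)
    nonce_u = os.urandom(16)
    mac_u = hmac.new(K, b"U" + nonce_s + nonce_u, hashlib.sha256).digest()
    # 3. sensor verifies, answers with its own MAC (proves sensor knows K)
    assert hmac.compare_digest(
        mac_u, hmac.new(K, b"U" + nonce_s + nonce_u, hashlib.sha256).digest())
    mac_s = hmac.new(K, b"S" + nonce_u + nonce_s, hashlib.sha256).digest()
    # 4. user verifies; both sides derive the same fresh session key
    assert hmac.compare_digest(
        mac_s, hmac.new(K, b"S" + nonce_u + nonce_s, hashlib.sha256).digest())
    session_key = hmac.new(K, b"SK" + nonce_s + nonce_u, hashlib.sha256).digest()
    print("session key established:", session_key.hex()[:16], "...")
    ```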

  15. An Energy Efficient Mutual Authentication and Key Agreement Scheme Preserving Anonymity for Wireless Sensor Networks

    PubMed Central

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2016-01-01

    WSNs (wireless sensor networks) are nowadays viewed as a vital portion of the IoT (Internet of Things). Security is a significant issue in WSNs, especially in resource-constrained environments. AKA (authentication and key agreement) enhances the security of WSNs against adversaries attempting to obtain sensitive sensor data. Various AKA schemes have been developed for verifying the legitimate users of a WSN. Firstly, we scrutinize Amin and Biswas's recent scheme and demonstrate major security loopholes in their work. Next, we propose a lightweight AKA scheme using symmetric key cryptography based on smart cards, which is resilient against all well-known security attacks. Furthermore, we prove that the scheme accomplishes mutual handshake and session key agreement securely between the participants involved under BAN (Burrows, Abadi and Needham) logic. Moreover, formal security analysis and simulations are also conducted using AVISPA (Automated Validation of Internet Security Protocols and Applications) to show that our scheme is secure against active and passive attacks. Additionally, performance analysis shows that our proposed scheme is secure and efficient for resource-constrained WSNs. PMID:27338382

  16. Development and Application of a Structural Health Monitoring System Based on Wireless Smart Aggregates

    PubMed Central

    Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin

    2017-01-01

    Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved a great deal. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring: the cumbersome wiring requires considerable material and labor and, more importantly, heavy ongoing maintenance. Targeting practical large-scale concrete crack detection (CCD), a smart-aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated its effectiveness. This work provides important support for practical CCD applications using wireless smart aggregates. PMID:28714927

  17. Development and Application of a Structural Health Monitoring System Based on Wireless Smart Aggregates.

    PubMed

    Yan, Shi; Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin

    2017-07-17

    Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved a great deal. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring: the cumbersome wiring requires considerable material and labor and, more importantly, heavy ongoing maintenance. Targeting practical large-scale concrete crack detection (CCD), a smart-aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated its effectiveness. This work provides important support for practical CCD applications using wireless smart aggregates.

  18. Multiswitching compound antisynchronization of four chaotic systems

    NASA Astrophysics Data System (ADS)

    Khan, Ayub; Khattar, Dinesh; Prajapati, Nitish

    2017-12-01

    Based on a three-drive, one-response system model, the authors investigate in this article a novel synchronization scheme for a class of chaotic systems. The new scheme, multiswitching compound antisynchronization (MSCoAS), is a notable extension of earlier multiswitching schemes concerning only a one-drive, one-response system model. The concept of multiswitching synchronization is extended to a compound synchronization scheme such that the state variables of the three drive systems antisynchronize, simultaneously, with different state variables of the response system. A study involving multiswitching of three drive systems and one response system is the first of its kind. Various switched modified function projective antisynchronization schemes are obtained as special cases of MSCoAS for suitable choices of scaling factors. Using suitable controllers and Lyapunov stability theory, a sufficient condition is obtained to achieve MSCoAS between four chaotic systems, and the corresponding theoretical proof is given. Numerical simulations are performed using the Lorenz system in MATLAB to demonstrate the validity of the presented method.

  19. An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption

    NASA Astrophysics Data System (ADS)

    Sun, Yanhua; Hao, Zhe; Zhang, Yanhua

    2018-01-01

    With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance the user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for a MEC system in 5G heterogeneous networks. An optimization problem is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of both transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of our proposed scheme. The effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy efficiency gains of our proposed scheme compared with a scheme from the prior literature.
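
    The record names an iterative greedy algorithm over a joint delay-energy objective but gives no details. Below is a hypothetical sketch of such a greedy offloading loop; the cost model and every parameter (CPU speeds, uplink rate, transmit power, weight w) are invented for illustration and are not taken from the paper.

        # Hypothetical greedy task-offloading sketch: each task runs locally or on
        # the MEC server; we greedily offload the task whose move most reduces a
        # weighted delay + energy cost. Cost model and parameters are illustrative.

        def local_cost(task, w=0.5):
            delay = task["cycles"] / 1.0e9          # local CPU at 1 GHz (assumed)
            energy = 1e-9 * task["cycles"]          # joules per cycle (assumed)
            return w * delay + (1 - w) * energy

        def edge_cost(task, w=0.5, rate=5e6):
            tx_delay = task["bits"] / rate          # uplink at 5 Mbit/s (assumed)
            tx_energy = 0.3 * tx_delay              # 0.3 W transmit power (assumed)
            exec_delay = task["cycles"] / 4.0e9     # edge CPU at 4 GHz (assumed)
            return w * (tx_delay + exec_delay) + (1 - w) * tx_energy

        def greedy_offload(tasks, w=0.5):
            offloaded = set()
            while True:
                gains = {i: local_cost(t, w) - edge_cost(t, w)
                         for i, t in enumerate(tasks) if i not in offloaded}
                best = max(gains, key=gains.get, default=None)
                if best is None or gains[best] <= 0:
                    return offloaded                # no further improvement
                offloaded.add(best)                 # offload the best task

        tasks = [{"cycles": 2e8, "bits": 4e5}, {"cycles": 5e9, "bits": 1e6}]
        print(greedy_offload(tasks))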

  20. Generation of steady entanglement via unilateral qubit driving in bad cavities.

    PubMed

    Jin, Zhao; Su, Shi-Lei; Zhu, Ai-Dong; Wang, Hong-Fu; Shen, Li-Tuo; Zhang, Shou

    2017-12-15

    We propose a scheme for generating an entangled state for two atoms trapped in two separate cavities coupled to each other. The scheme is based on the competition between the unitary dynamics induced by the classical fields and the collective decays induced by the dissipation of two non-local bosonic modes. In this scheme, only one qubit is driven by external classical fields, whereas the other need not be manipulated via classical driving. This is meaningful for experimental implementation between separate nodes of a quantum network. The steady entanglement can be obtained regardless of the initial state, and the robustness of the scheme against parameter fluctuations is numerically demonstrated. We also give an analytical derivation of the stationary fidelity to enable a discussion of the validity of this regime. Furthermore, based on the dissipative entanglement preparation scheme, we construct a quantum state transfer setup with multiple nodes as a practical application.

  1. Stress and Fracture Analyses Under Elastic-plastic and Creep Conditions: Some Basic Developments and Computational Approaches

    NASA Technical Reports Server (NTRS)

    Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integrating stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta's may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed, and the potential of the new schemes for suppressing frame dependence of numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
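
    To make the accuracy remark concrete, here is a small self-contained comparison of forward Euler and classical fourth-order Runge-Kutta on a scalar test equation. It illustrates the generic order-of-accuracy gap the abstract alludes to, not the paper's stress-integration equations.

        import math

        # Integrate y' = -y, y(0) = 1 on [0, 1]; the exact solution is exp(-t).
        # Forward Euler is first order; classical RK4 is fourth order.

        def euler(f, y, t, h):
            return y + h * f(t, y)

        def rk4(f, y, t, h):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h * k1 / 2)
            k3 = f(t + h / 2, y + h * k2 / 2)
            k4 = f(t + h, y + h * k3)
            return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        f = lambda t, y: -y
        for n in (10, 100):
            h, ye, yr = 1.0 / n, 1.0, 1.0
            for i in range(n):
                ye = euler(f, ye, i * h, h)
                yr = rk4(f, yr, i * h, h)
            exact = math.exp(-1.0)
            print(f"n={n:4d}  Euler error={abs(ye - exact):.2e}  "
                  f"RK4 error={abs(yr - exact):.2e}")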

  2. Direct adaptive control of a PUMA 560 industrial robot

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Lee, Thomas; Delpech, Michel

    1989-01-01

    The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot are described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX, which acts as a digital controller for the PUMA robot; the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme in spite of strong interactions between joint motions. The experimental results validate the capabilities of the proposed control scheme, which is extremely simple and computationally fast enough for concurrent processing with high sampling rates.
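
    As a rough illustration of the control structure described above (a fixed-gain PID auxiliary signal plus an adaptive PD loop with adjustable gains), here is a hypothetical single-joint sketch; the adaptation law and all numerical gains are invented for the example and are not Seraji's published laws.

        # Hypothetical single-joint direct adaptive controller sketch:
        # u = PID auxiliary term + adaptive PD term whose gains are updated from
        # the tracking error. All gains and adaptation rates are illustrative.

        class AdaptiveJointController:
            def __init__(self, kp0=5.0, kv0=1.0, gamma_p=20.0, gamma_v=2.0):
                self.kp, self.kv = kp0, kv0          # adjustable PD gains
                self.gp, self.gv = gamma_p, gamma_v  # adaptation rates
                self.integ = 0.0                     # PID integrator state

            def update(self, e, edot, dt):
                # Fixed-gain PID auxiliary signal.
                self.integ += e * dt
                aux = 2.0 * e + 0.5 * self.integ + 0.1 * edot
                # Gradient-type gain adaptation driven by the tracking error.
                self.kp += self.gp * e * e * dt
                self.kv += self.gv * e * edot * dt
                return aux + self.kp * e + self.kv * edot

        # One simulated step: desired position 1.0 rad, measured 0.8 rad.
        ctrl = AdaptiveJointController()
        torque = ctrl.update(e=0.2, edot=-0.1, dt=0.001)
        print(f"commanded torque: {torque:.3f}")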

  3. The 7 up 7 down inventory: a 14-item measure of manic and depressive tendencies carved from the General Behavior Inventory.

    PubMed

    Youngstrom, Eric A; Murray, Greg; Johnson, Sheri L; Findling, Robert L

    2013-12-01

    The aim of this study was to develop and validate manic and depressive scales carved from the full-length General Behavior Inventory (GBI). The brief version was designed to be applicable for youths and adults and to improve separation between mania and depression dimensions. Data came from 9 studies (2 youth clinical samples, aggregate N = 738, and 7 nonclinical adult samples, aggregate N = 1,756). Items with high factor loadings on the 2 extracted dimensions of mania and depression were identified from both data sets, and final item selection was based on internal reliability criteria. Confirmatory factor analyses described the 2-factor model's fit. Criterion validity was compared between mania and depression scales, and with the full-length GBI scales. For both mania and depression factors, 7 items produced a psychometrically adequate measure applicable across both aggregate samples. Internal reliability of the Mania scale was .81 (youth) and .83 (adult) and for Depression was .93 (youth) and .95 (adult). By design, the brief scales were less strongly correlated with each other than were the original GBI scales. Construct validity of the new instrument was supported in observed discriminant and convergent relationships with external correlates and discrimination of diagnostic groups. The new brief GBI, the 7 Up 7 Down Inventory, demonstrates sound psychometric properties across a wide age range, showing expected relationships with external correlates. The new instrument provides a clearer separation of manic and depressive tendencies than the original. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
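
    The internal-reliability figures quoted above are Cronbach's alpha coefficients. For reference, a minimal implementation of the standard alpha formula on an item-score matrix follows; the example data are fabricated.

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Cronbach's alpha for a (respondents x items) score matrix."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Fabricated 7-item example: 5 respondents answering on a 0-3 scale.
        scores = np.array([[3, 2, 3, 3, 2, 3, 3],
                           [1, 1, 0, 1, 1, 1, 0],
                           [2, 2, 2, 1, 2, 2, 2],
                           [0, 1, 0, 0, 1, 0, 0],
                           [3, 3, 2, 3, 3, 2, 3]])
        print(f"alpha = {cronbach_alpha(scores):.2f}")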

  4. Proposal of an environmental performance index to assess solid waste treatment technologies.

    PubMed

    Coelho, Hosmanny Mauro Goulart; Lange, Liséte Celina; Coelho, Lineker Max Goulart

    2012-07-01

    Although concern about sustainable development and environmental protection has grown considerably in recent years, the majority of decision-making models and tools are still either excessively tied to economic aspects or geared to the production process. Moreover, existing models focus on the priority steps of solid waste management rather than on waste energy recovery and disposal. To address this lack of models and tools aimed at waste treatment and final disposal, a new concept is proposed: Cleaner Treatment, which is based on Cleaner Production principles. This paper focuses on the development and validation of the Cleaner Treatment Index (CTI), which assesses the environmental performance of waste treatment technologies based on the Cleaner Treatment concept. The index is formed by aggregation (summation or product) of several indicators consisting of operational parameters. The weights of the indicators were established by the Delphi method and Brazilian environmental laws. In addition, sensitivity analyses were carried out comparing both aggregation methods. Finally, index validation was carried out by applying the CTI to data from 10 waste-to-energy plants. The sensitivity analysis and validation results indicate that the summation model is the more suitable aggregation method. For the summation method, CTI results were above 0.5 (on a scale from 0 to 1) for most facilities evaluated. This study therefore demonstrates that the CTI is a simple and robust tool to assess and compare the environmental performance of different treatment plants, and an excellent quantitative tool to support Cleaner Treatment implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.
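
    The record describes forming the index by weighted summation or weighted product of normalized indicators. The sketch below shows both aggregation rules in their generic textbook form; the indicator values and weights are invented for illustration.

        import math

        def weighted_sum(values, weights):
            """Additive aggregation: sum of weight * indicator, indicators in [0, 1]."""
            return sum(w * v for v, w in zip(values, weights))

        def weighted_product(values, weights):
            """Multiplicative aggregation: product of indicator ** weight."""
            return math.prod(v ** w for v, w in zip(values, weights))

        # Invented example: three normalized indicators with Delphi-style weights.
        indicators = [0.8, 0.6, 0.9]
        weights = [0.5, 0.3, 0.2]          # weights sum to 1

        print(f"summation index: {weighted_sum(indicators, weights):.3f}")
        print(f"product index:   {weighted_product(indicators, weights):.3f}")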

  5. Sensitivity of a Cloud-Resolving Model to the Bulk and Explicit Bin Microphysical Schemes. Part 1; Validations with a PRE-STORM Case

    NASA Technical Reports Server (NTRS)

    Li, Xiao-Wen; Tao, Wei-Kuo; Khain, Alexander P.; Simpson, Joanne; Johnson, Daniel E.

    2004-01-01

    A cloud-resolving model is used to study the sensitivities of two different microphysical schemes, one a bulk type and the other an explicit bin scheme, in simulating a mid-latitude squall line case (PRE-STORM, June 10-11, 1985). Simulations using the different microphysical schemes are compared with each other and with the observations. Both the bulk and bin models reproduce the general features of the developing and mature stages of the system. The leading convective zone, the trailing stratiform region, the horizontal wind flow patterns, the pressure perturbation associated with the storm dynamics, and the cool pool in front of the system all agree well with the observations. Both the observations and the bulk scheme simulation serve as validations for the newly incorporated bin scheme. However, the bulk and bin simulations also show distinct differences, most notably in the stratiform region. Weak convective cells exist in the stratiform region in the bulk simulation, but not in the bin simulation. These weak convective cells are remnants of previously stronger convection at the leading edge of the system. The bin simulation, on the other hand, produces a horizontally homogeneous stratiform cloud structure, which agrees better with the observations. Preliminary examination of the downdraft core strength, the potential temperature perturbation, and the evaporative cooling rate shows that the differences between the bulk and bin models are due mainly to the stronger low-level evaporative cooling in the convective zone simulated by the bulk model. Further quantitative analysis and sensitivity tests for this case using both the bulk and bin models will be presented in a companion paper.

  6. Distributed multi-criteria model evaluation and spatial association analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Laura; Pfister, Stephan

    2015-04-01

    Model performance, if evaluated at all, is often communicated by a single indicator and at an aggregated level; this, however, does not capture the trade-offs between different indicators or the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations; the time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration, which was available as a continuous raster based on remote sensing. Model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), and 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to poor model performance include aridity, very flat or very steep relief, snowfall, and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure that combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. To assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, and 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to the efficiency criterion. The model performed much better in the humid eastern region than in the arid western region, which was confirmed by the high spatial association with the aridity index (the ratio of mean annual precipitation to mean annual potential evapotranspiration). This association remained significant when controlling for slope, which showed the second-highest spatial association. In line with these findings, the overall model efficiency of the entire Mississippi watershed appeared better when weighted by mean observed river discharge. Furthermore, the model received the highest rating with regard to PBIAS and was judged worst when considering NSE as the most comprehensive indicator. No universal performance indicator exists that considers all aspects of a hydrograph; therefore, sound model evaluation must take multiple criteria into account. Since model efficiency varies in space, and this variation is masked by aggregated ratings, spatially explicit model goodness should be communicated as standard practice, at least as a measure of the spatial variability of indicators. Furthermore, transparent documentation of the evaluation procedure, including the weighting used for aggregated model performance, is crucial but often lacking in published research. Finally, the high spatial association between model performance and aridity highlights the need to prioritize improving modelling schemes for arid conditions over other aspects that might weaken model goodness.
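
    Three of the four criteria named above (NSE, PBIAS, RSR) have standard closed forms. For reference, a compact implementation on paired observed/simulated series follows; the sample arrays are fabricated.

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean."""
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def pbias(obs, sim):
            """Percent bias; positive values indicate underestimation (by convention)."""
            return 100 * np.sum(obs - sim) / np.sum(obs)

        def rsr(obs, sim):
            """RMSE normalized by the standard deviation of the observations."""
            rmse = np.sqrt(np.mean((obs - sim) ** 2))
            return rmse / obs.std()

        obs = np.array([10.0, 12.0, 9.0, 14.0, 11.0])   # fabricated discharge
        sim = np.array([ 9.5, 12.5, 8.0, 13.0, 11.5])
        print(f"NSE={nse(obs, sim):.3f}  PBIAS={pbias(obs, sim):.1f}%  "
              f"RSR={rsr(obs, sim):.3f}")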

  7. Validation of source approval of HMA surface mix aggregate : research summary.

    DOT National Transportation Integrated Search

    2016-04-01

    Pavement surfaces must maintain an adequate level of friction in order to provide a safe surface for vehicles. The Maryland State Highway Administration (SHA) is responsible for ensuring that flexible pavement construction, using hot mix asphalt ...

  8. A hydrological emulator for global applications - HE v1.0.0

    NASA Astrophysics Data System (ADS)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong

    2018-03-01

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computational cost. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the trade-off between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding the annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the 235 global basins (e.g., the correlation coefficient r between the annual total runoff from either of the two schemes and that of the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computational efficiency of the lumped scheme is 2 orders of magnitude higher than that of the distributed scheme and 7 orders higher than that of the VIC model. A case study of uncertainty analysis for the 16 basins with the largest annual streamflow worldwide, conducted with 100 000 model simulations, demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs suitable for broad practical use, and that the distributed scheme is an efficient alternative if spatial heterogeneity is of greater interest.
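
    The Kling-Gupta efficiency used above decomposes agreement into correlation, a variability ratio and a bias ratio. A minimal implementation of the standard formula, with fabricated data, is:

        import numpy as np

        def kge(obs, sim):
            """Kling-Gupta efficiency: 1 is perfect agreement."""
            r = np.corrcoef(obs, sim)[0, 1]        # linear correlation
            alpha = sim.std() / obs.std()          # variability ratio
            beta = sim.mean() / obs.mean()         # bias ratio
            return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

        obs = np.array([3.1, 4.0, 2.5, 5.2, 4.4, 3.8])  # fabricated monthly runoff
        sim = np.array([2.9, 4.2, 2.8, 4.9, 4.6, 3.5])
        print(f"KGE = {kge(obs, sim):.2f}")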

  9. Casein Aggregates Built Step-by-Step on Charged Polyelectrolyte Film Surfaces Are Calcium Phosphate-cemented*

    PubMed Central

    Nagy, Krisztina; Pilbat, Ana-Maria; Groma, Géza; Szalontai, Balázs; Cuisinier, Frédéric J. G.

    2010-01-01

    The possible mechanism of casein aggregation and micelle buildup was studied in a new approach by letting α-casein adsorb from low concentration (0.1 mg·ml−1) solutions onto the charged surfaces of polyelectrolyte films. It was found that α-casein could adsorb onto both positively and negatively charged surfaces. However, only when its negative phosphoseryl clusters remained free, i.e. when it adsorbed onto a negative surface, could calcium phosphate (CaP) nanoclusters bind to the casein molecules. Once the CaP clusters were in place, step-by-step building of multilayered casein architectures became possible. The presence of CaP was essential; neither Ca2+ nor phosphate could alone facilitate casein aggregation. Thus, it seems that CaP is the organizing motive in the casein micelle formation. Atomic force microscopy revealed that even a single adsorbed casein layer was composed of very small (in the range of tens of nanometers) spherical forms. The stiffness of the adsorbed casein layer largely increased in the presence of CaP. On this basis, we can imagine that casein micelles emerge according to the following scheme. The amphipathic casein monomers aggregate into oligomers via hydrophobic interactions even in the absence of CaP. Full scale, CaP-carrying micelles could materialize by interlocking these casein oligomers with CaP nanoclusters. Such a mechanism would not contradict former experimental results and could offer a synthesis between the submicelle and the block copolymer models of casein micelles. PMID:20921229

  10. Inclusion of aggregation effect to evaluate the performance of organic dyes in dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Sun, Kenan; Zhang, Weiyi; Heng, Panpan; Wang, Li; Zhang, Jinglai

    2018-05-01

    Two new indoline-based D-A-π-A dyes, D3F and D3F2 (see Scheme 1), are developed on the basis of the reported D3 by insertion of one or two F atoms on the benzothiadiazole group. Our central aim is to explore high-efficiency organic dyes for dye-sensitized solar cells by inclusion of a simple group rather than by employment of new complicated groups. The performance of the two newly designed organic dyes, D3F and D3F2, is compared with that of D3 in various respects, including the absorption spectrum, light-harvesting efficiency, driving force, and open-circuit voltage. Besides the isolated dye, the interfacial property between dye and TiO2 surface is studied. D3F and D3F2 do not show absolute superiority over D3, either as isolated dyes or in the monomeric adsorption system. However, D3F and D3F2 would effectively reduce the influence of aggregation, resulting in much smaller intermolecular electronic coupling. Although aggregation has attracted much attention recently, it is studied in isolation in most studies. To comprehensively evaluate the performance of dye-sensitized solar cells, it is necessary to consider aggregation together with the electron injection time from dye into TiO2, rather than only static quantities such as the band gap and absorption region.

  11. Modelling tephra dispersal and ash aggregation: The 26th April 1979 eruption, La Soufrière St. Vincent

    NASA Astrophysics Data System (ADS)

    Poret, M.; Costa, A.; Folch, A.; Martí, A.

    2017-11-01

    On the 26th April 1979, La Soufrière St. Vincent volcano (West Indies) erupted, producing a tephra fallout that blanketed the main island and the neighboring Bequia Island, located to the south. Using deposit measurements and the available observations reported in Brazier et al. (1982), we estimated the optimal Eruption Source Parameters, such as the Mass Eruption Rate (MER), the Total Erupted Mass (TEM) and the Total Grain-Size Distribution (TGSD), by means of a computational inversion method. Tephra transport and deposition were simulated using the 3D Eulerian model FALL3D. The field-based TGSD reconstructed by Brazier et al. (1982) shows a bi-modal pattern having a coarse and a fine population with modes around 0.5 and 0.06 mm, respectively. A significant amount of aggregates was observed during the eruption. To quantify the relevance of aggregation processes to the bulk tephra deposit, we performed a comparative study in which we accounted for aggregation using three different schemes, computing ash aggregation within the plume under wet conditions, i.e. considering the effects of both air moisture and magmatic water, consistent with the phreatomagmatic features of the eruption. The sensitivity to the driving meteorological model (WRF/ARW) was also investigated by considering two different spatial resolutions (5 and 1 km) and model output frequencies. Results show that, for such short-lived explosive eruptions, high-resolution meteorological data are critical. The optimal results best fitting all available observations indicate a column height of 12 km above the vent and a MER of 7.8 × 10^6 kg/s which, for an eruption duration of 370 s, gives a TEM of 2.8 × 10^9 kg. The optimal aggregate mean diameter obtained is 1.5Φ with a density of 350 kg/m3, contributing 22% of the deposit mass.

  12. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate partial differential equations (PDEs). The code post-processes model results to produce V&V and UQ information, which can be used to assess model performance. Automated information on code performance allows a systematic methodology for assessing the quality of model approximations. The software implements common and accepted code verification schemes: the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), cross-code verification, and Richardson extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed-order schemes. Four examples demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification: code verification of a mixed-order compact-difference heat transport solver; solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; model validation of a two-phase flow computation in a hydraulic jump against experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
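
    Richardson extrapolation and the GCI mentioned above follow standard formulas: given solutions on three systematically refined grids, the observed order of accuracy p and the fine-grid GCI can be estimated as below. The grid solutions in the example are fabricated.

        import math

        def observed_order(f_coarse, f_medium, f_fine, r):
            """Observed order of accuracy from three solutions, refinement ratio r."""
            return math.log(abs((f_coarse - f_medium) /
                                (f_medium - f_fine))) / math.log(r)

        def gci_fine(f_medium, f_fine, r, p, fs=1.25):
            """Grid Convergence Index on the fine grid (fs = safety factor)."""
            eps = abs((f_medium - f_fine) / f_fine)   # relative change between grids
            return fs * eps / (r ** p - 1)

        # Fabricated solutions on grids refined by a factor of 2.
        f3, f2, f1 = 0.9713, 0.9702, 0.9700           # coarse, medium, fine
        p = observed_order(f3, f2, f1, r=2)
        print(f"observed order p = {p:.2f}")
        print(f"GCI_fine = {100 * gci_fine(f2, f1, r=2, p=p):.3f}%")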

  13. Proposed new classification scheme for chemical injury to the human eye.

    PubMed

    Bagley, Daniel M; Casterton, Phillip L; Dressler, William E; Edelhauser, Henry F; Kruszewski, Francis H; McCulley, James P; Nussenblatt, Robert B; Osborne, Rosemarie; Rothenstein, Arthur; Stitzel, Katherine A; Thomas, Karluss; Ward, Sherry L

    2006-07-01

    Various ocular alkali burn classification schemes have been published and used to grade human chemical eye injuries for the purpose of identifying treatments and forecasting outcomes. The ILSI chemical eye injury classification scheme was developed for the additional purpose of collecting detailed human eye injury data to provide information on the mechanisms associated with chemical eye injuries. This information will have clinical application, as well as use in the development and validation of new methods to assess ocular toxicity. A panel of ophthalmic researchers proposed the new classification scheme based upon current knowledge of the mechanisms of eye injury, and their collective clinical and research experience. Additional ophthalmologists and researchers were surveyed to critique the scheme. The draft scheme was revised, and the proposed scheme represents the best consensus from at least 23 physicians and scientists. The new scheme classifies chemical eye injury into five categories based on clinical signs, symptoms, and expected outcomes. Diagnostic classification is based primarily on two clinical endpoints: (1) the extent (area) of injury at the limbus, and (2) the degree of injury (area and depth) to the cornea. The new classification scheme provides a uniform system for scoring eye injury across chemical classes, and provides enough detail for the clinician to collect data that will be relevant to identifying the mechanisms of ocular injury.

  14. Student-Centered Reliability, Concurrent Validity and Instructional Sensitivity in Scoring of Students' Concept Maps in a University Science Laboratory

    ERIC Educational Resources Information Center

    Kaya, Osman Nafiz; Kilic, Ziya

    2004-01-01

    The student-centered approach to scoring the concept maps consisted of three elements, namely a symbol system, an individual portfolio, and a scoring scheme. We scored student-constructed concept maps against five concept map criteria: validity of concepts, adequacy of propositions, significance of cross-links, relevancy of examples, and interconnectedness. With…

  15. Developing Scale for Determining the Social Participation Skills for Children and Analyzing Its Psychometric Characteristics

    ERIC Educational Resources Information Center

    Samanci, Osman; Ocakci, Ebru; Seçer, Ismail

    2018-01-01

    The purpose of this research is to conduct validity and reliability studies of the Scale for Determining Social Participation for Children, developed to measure the social participation skills of children aged 7-10 years. During the development of the scale, pilot schemes, validity analyses, and reliability analyses were conducted. In this…

  16. Meaningful Understanding and Systems Thinking in Organic Chemistry: Validating Measurement and Exploring Relationships

    ERIC Educational Resources Information Center

    Vachliotis, Theodoros; Salta, Katerina; Tzougraki, Chryssa

    2014-01-01

    The purpose of this study was twofold: first, to develop and validate assessment schemes for assessing 11th-grade students' meaningful understanding of organic chemistry concepts, as well as their systems thinking skills in the domain; second, to explore the relationship between the two constructs of interest based on students' performance…

  17. A High Affinity Red Fluorescence and Colorimetric Probe for Amyloid β Aggregates

    NASA Astrophysics Data System (ADS)

    Rajasekhar, K.; Narayanaswamy, Nagarjun; Murugan, N. Arul; Kuang, Guanglin; Ågren, Hans; Govindaraju, T.

    2016-04-01

    A major challenge in Alzheimer's disease (AD) is its timely diagnosis. Amyloid β (Aβ) aggregates have been proposed as the most viable biomarker for the diagnosis of AD. Here, we demonstrate a hemicyanine-based benzothiazole-coumarin (TC) as a potential probe for the detection of highly toxic Aβ42 aggregates through switch-on, enhanced (~30-fold) red fluorescence (Emax = 654 nm) and a characteristic colorimetric (light red to purple) optical output. Interestingly, TC exhibits selectivity towards Aβ42 fibrils compared to other abnormal protein aggregates. The TC probe shows nanomolar binding affinity (Ka = 1.72 × 10^7 M^-1) towards Aβ42 aggregates and also displaces ThT bound to Aβ42 fibrils owing to its higher binding affinity. The Aβ42 fibril-specific red shift in the absorption spectra of TC, responsible for the observed colorimetric output, has been attributed to a change in the micro-environment around the probe from hydrophilic-like to hydrophobic-like. The binding site, binding energy and changes in the optical properties observed for TC upon interaction with Aβ42 fibrils have been further validated by molecular docking and time-dependent density functional theory studies.

  18. Unfolding energy spectra of double-periodicity two-dimensional systems: Twisted bilayer graphene and MoS2 on graphene

    NASA Astrophysics Data System (ADS)

    Matsushita, Yu-ichiro; Nishi, Hirofumi; Iwata, Jun-ichi; Kosugi, Taichi; Oshiyama, Atsushi

    2018-01-01

    We propose an unfolding scheme, within the framework of density-functional theory, to analyze the energy spectra of complex large-scale systems that are inherently of double periodicity. Applying our method to a twisted bilayer graphene (tBLG) and a stack of monolayer MoS2 on graphene (MoS2/graphene) as examples, we first show that the conventional unfolding scheme used in the past, based on a single primitive-cell representation, causes serious problems in analyses of the energy spectra. We then introduce our multispace representation scheme in the unfolding method and clarify its validity. Velocity renormalization of Dirac electrons in tBLG and mini gaps of Dirac cones in MoS2/graphene are elucidated in the present unfolding scheme.

  19. PBT assessment under REACH: Screening for low aquatic bioaccumulation with QSAR classifications based on physicochemical properties to replace BCF in vivo testing on fish.

    PubMed

    Nendza, Monika; Kühne, Ralph; Lombardo, Anna; Strempel, Sebastian; Schüürmann, Gerrit

    2018-03-01

    Aquatic bioconcentration factors (BCFs) are critical in PBT (persistent, bioaccumulative, toxic) and risk assessment of chemicals. High costs and the use of more than 100 fish per standard BCF study (OECD 305) call for alternative methods to replace as much in vivo testing as possible. The BCF waiving scheme is a screening tool combining QSAR classifications based on physicochemical properties related to the distribution (hydrophobicity, ionisation), persistence (biodegradability, hydrolysis), solubility and volatility (Henry's law constant) of substances in water bodies and aquatic biota, used to predict substances with low aquatic bioaccumulation (nonB, BCF < 2000). The BCF waiving scheme was developed with a dataset of reliable BCFs for 998 compounds and externally validated with another 181 substances. It performs with 100% sensitivity (no false negatives) and >50% efficacy (waiving potential), and complies with the OECD principles for valid QSARs. The chemical applicability domain of the BCF waiving scheme is given by the structures of the training set, with some compound classes explicitly excluded, such as organometallics, poly- and perfluorinated compounds, aromatic triphenylphosphates, and surfactants. The prediction confidence of the BCF waiving scheme is based on applicability-domain compliance, consensus modelling, and structural similarity with known nonB and B/vB substances. Compounds classified as nonB by the BCF waiving scheme are candidates for waiving of BCF in vivo testing on fish, owing to low concern with regard to the B criterion. The BCF waiving scheme supports the 3Rs with a possible reduction of >50% of BCF in vivo testing on fish. If the target chemical is outside the applicability domain of the BCF waiving scheme or is not classified as nonB, further assessments with in silico, in vitro or in vivo methods are necessary to either confirm or reject bioaccumulative behaviour. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes were modified successfully to achieve better performance with upwind differencing, and a technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility; the only requirement is C0 continuity of the grid across block interfaces. The algorithm was validated on a diverse set of three-dimensional test cases of increasing complexity: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.
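
    As a generic illustration of the MUSCL-type interface reconstruction mentioned above (not this report's exact formulation), the following sketch computes limited left and right interface states for a one-dimensional scalar field using the minmod limiter.

        import numpy as np

        def minmod(a, b):
            """Minmod slope limiter: zero at extrema, smallest slope otherwise."""
            return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

        def muscl_interface_states(u):
            """Limited left/right states at the faces of interior cells of u."""
            du_minus = u[1:-1] - u[:-2]            # backward differences
            du_plus = u[2:] - u[1:-1]              # forward differences
            slope = minmod(du_minus, du_plus)      # limited slope per interior cell
            u_left = u[1:-1] + 0.5 * slope         # state at the right face of cell i
            u_right = u[1:-1] - 0.5 * slope        # state at the left face of cell i
            return u_left, u_right

        u = np.array([1.0, 1.0, 0.9, 0.2, 0.0, 0.0])   # step-like profile
        uL, uR = muscl_interface_states(u)
        print(uL, uR)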

  1. Investigating the Willingness to Pay for a Contributory National Health Insurance Scheme in Saudi Arabia: A Cross-sectional Stated Preference Approach.

    PubMed

    Al-Hanawi, Mohammed Khaled; Vaidya, Kirit; Alsharqi, Omar; Onwujekwe, Obinna

    2018-04-01

    The Saudi Healthcare System is universal, financed entirely from government revenue principally derived from oil, and is 'free at the point of delivery' (non-contributory). However, this system is unlikely to be sustainable in the medium to long term. This study investigates the feasibility and acceptability of healthcare financing reform by examining households' willingness to pay (WTP) for a contributory national health insurance scheme. Using the contingent valuation method, a pre-tested, interviewer-administered questionnaire was used to collect data from 1187 heads of household in Jeddah province over a 5-month period. Multi-stage sampling was employed to select the study sample. Using a double-bounded dichotomous choice with follow-up elicitation method, respondents were asked to state their WTP for a hypothetical contributory national health insurance scheme. Tobit regression analysis was used to examine the factors associated with WTP and to assess the construct validity of the elicited WTP. Over two-thirds (69.6%) indicated that they were willing to participate in and pay for a contributory national health insurance scheme. The mean WTP was 50 Saudi Riyal (US$13.33) per household member per month. Tobit regression analysis showed that household size, satisfaction with the quality of public healthcare services, perceptions about financing healthcare, education and income were the main determinants of WTP. This study demonstrates a theoretically valid WTP for a contributory national health insurance scheme among Saudi people. The research shows that willingness to participate in and pay for a contributory national health insurance scheme depends on participant characteristics. Identifying and understanding the main factors associated with WTP is important to help facilitate establishing and implementing the national health insurance scheme. The results could assist policy-makers in developing and setting insurance premiums, thus providing an additional source of healthcare financing.

  2. Generating spatially optimized habitat in a trade-off between social optimality and budget efficiency.

    PubMed

    Drechsler, Martin

    2017-02-01

    Auctions have been proposed as alternatives to payments for environmental services when spatial interactions and costs are better known to landowners than to the conservation agency (asymmetric information). Recently, an auction scheme was proposed that delivers optimal conservation in the sense that social welfare is maximized. I examined the social welfare and the budget efficiency delivered by this scheme, where social welfare represents the difference between the monetized ecological benefit and the conservation cost incurred by the landowners, and budget efficiency means maximizing the ecological benefit for a given conservation budget. For the analysis, I considered a stylized landscape with land patches that can be used for agriculture or conservation. The ecological benefit was measured by an objective function that increases with increasing number and spatial aggregation of conserved land patches. I compared the social welfare and the budget efficiency of the auction scheme with those of an agglomeration payment, a recently proposed policy scheme that considers spatial interactions. The auction delivered a higher level of social welfare than the agglomeration payment. However, the agglomeration payment was more budget-efficient than the auction, so the comparative performance of the two schemes depended on the chosen policy criterion: social welfare or budget efficiency. Both policy criteria are relevant for conservation. Which one should be chosen depends on the problem at hand, for example, whether social preferences should be taken into account in deciding how much money to invest in conservation, or whether the available conservation budget is strictly limited. © 2016 Society for Conservation Biology.

  3. Why is the simulated climatology of tropical cyclones so sensitive to the choice of cumulus parameterization scheme in the WRF model?

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Wang, Yuqing

    2018-01-01

    The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted-moisture space. By transporting moist static energy from dry to moist regions, the low-level circulation is important to convective self-aggregation, which is believed to be related to the genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation, but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress the genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and the horizontal distribution of precipitation are trivial, indicating that the CMT modulates TCLV/TC activity in the model by mechanisms other than the horizontal transport of moist static energy.

  4. Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks

    PubMed Central

    Tham, Mau-Luen

    2016-01-01

    This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to playback delay and resource constraints, by exploiting multiuser diversity and the video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit error rate (BER), the importance level of each packet, and a resource-consumption efficiency factor. The lower level, in turn, provides unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum-distortion minimization problem, where each user's weight reflects the total importance level of the packets scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme significantly outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak signal-to-noise ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature. PMID:26906398
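
    Water-filling, mentioned above, pours a total power budget over subcarriers so that stronger channels receive more power. A generic bisection-based sketch (not the paper's exact algorithm) follows; the channel gains are fabricated.

        import numpy as np

        def waterfill(gains, p_total, tol=1e-9):
            """Classic water-filling: p_i = max(0, mu - 1/g_i), sum(p_i) = p_total."""
            lo, hi = 0.0, p_total + 1.0 / gains.min()   # bracket the water level mu
            while hi - lo > tol:
                mu = 0.5 * (lo + hi)
                used = np.maximum(0.0, mu - 1.0 / gains).sum()
                if used > p_total:
                    hi = mu
                else:
                    lo = mu
            return np.maximum(0.0, lo - 1.0 / gains)

        gains = np.array([2.0, 1.0, 0.25, 0.1])   # fabricated subcarrier gains
        p = waterfill(gains, p_total=1.0)
        print(p, p.sum())                          # strong subcarriers get more power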

  5. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories; it has not yet emerged as a tool for end-product release testing, despite its ability to generate in situ results and improve efficiency. One potential reason may be the lack of clear validation guidelines that can be applied to assess the suitability of fiber optics. This article describes a comprehensive validation scheme and the development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied to characterize the dissolution behavior of a 40-mg immediate-release tablet dosage form under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual-sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, further supporting the validity of the method. The results demonstrate that fiber optics technology can be successfully validated for end-product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association

  6. Real-time validation of receiver state information in optical space-time block code systems.

    PubMed

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free-space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by the close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information about the optical channel to reconstruct transmitted data. The STBC system depends on accurate channel state information (CSI) for optimal performance. Because optical channels change dynamically, a system in operation needs updated CSI; validating the CSI in real time is therefore necessary to keep FSOI systems operating efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum-likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
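
    The idea of tracking a decoder statistic with a moving average to flag stale CSI can be illustrated generically. This is not the Letter's algorithm; the metric, window length and alarm threshold are invented for the example.

        from collections import deque

        class CsiValidator:
            """Flag stale CSI when a moving average of a decoder metric drifts.

            The metric could be, e.g., a maximum-likelihood decision distance;
            the window length and alarm threshold are illustrative choices.
            """
            def __init__(self, window=64, threshold=1.5):
                self.buf = deque(maxlen=window)
                self.threshold = threshold
                self.baseline = None

            def update(self, metric: float) -> bool:
                self.buf.append(metric)
                avg = sum(self.buf) / len(self.buf)
                if self.baseline is None and len(self.buf) == self.buf.maxlen:
                    self.baseline = avg        # calibrate on the first full window
                # True -> CSI looks stale; trigger re-estimation.
                return (self.baseline is not None
                        and avg > self.threshold * self.baseline)

        v = CsiValidator(window=4, threshold=1.2)
        for m in [1.0, 1.1, 0.9, 1.0, 1.0, 2.0, 2.2, 2.1]:
            if v.update(m):
                print(f"metric {m}: CSI re-estimation triggered")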

  7. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation of the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically affect the classification error associated with LQAS analysis. PMID:20011037
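
    A simulation of the kind described can be set up compactly: draw cluster-level prevalences with intracluster correlation (here via a beta-binomial construction) and apply an LQAS decision rule. All parameters below (prevalence, ICC, decision threshold) are illustrative, not the study's values.

        import numpy as np

        rng = np.random.default_rng(42)

        def lqas_classify(n_clusters=67, per_cluster=3, prev=0.10, icc=0.1,
                          d_threshold=25):
            """One simulated LQAS survey: True if prevalence is classified 'high'.

            Cluster prevalences are beta-distributed so that the binary GAM
            outcome has roughly the requested intracluster correlation.
            """
            a = prev * (1 - icc) / icc             # beta parameters matching
            b = (1 - prev) * (1 - icc) / icc       # the mean and the ICC
            p_cluster = rng.beta(a, b, size=n_clusters)
            cases = rng.binomial(per_cluster, p_cluster).sum()
            return cases > d_threshold             # rule: more than d cases

        # Estimate how often a 10%-prevalence population is classified as high.
        runs = [lqas_classify() for _ in range(2000)]
        print(f"P(classified high) = {np.mean(runs):.3f}")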

  8. A new uniformly valid asymptotic integration algorithm for elasto-plastic creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Walker, Kevin P.

    1991-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  9. A new uniformly valid asymptotic integration algorithm for elasto-plastic-creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, A.; Walker, K. P.

    1989-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  10. The synchronisation of fractional-order hyperchaos compound system

    NASA Astrophysics Data System (ADS)

    Noghredani, Naeimadeen; Riahi, Aminreza; Pariz, Naser; Karimpour, Ali

    2018-02-01

    This paper presents a new compound synchronisation scheme among four hyperchaotic memristor systems with incommensurate fractional-order derivatives. First, a new controller was designed, based on an adaptive technique, to minimise the errors and guarantee compound synchronisation of the four fractional-order memristor chaotic systems. Given the suitability of compound synchronisation as a reliable solution for secure communication, we then examined the application of the proposed adaptive compound synchronisation scheme in the presence of noise for secure communication. In addition, the unpredictability and complexity of the drive systems enhance the security of the communication. The corresponding theoretical analysis and simulation results in MATLAB validated the effectiveness of the proposed synchronisation scheme.

  11. Validation of RAP and/or RAS in hydraulic cement concrete : technical report.

    DOT National Transportation Integrated Search

    2017-05-01

    The increasing maintenance and rehabilitation actions lead to considerable amounts of reclaimed asphalt pavement (RAP) left in stockpiles in the United States. The possible use of RAP in Portland cement concrete (PCC) as aggregate replacement not...

  12. Multi-zonal Navier-Stokes code with the LU-SGS scheme

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Yoon, S.

    1993-01-01

    The LU-SGS (lower-upper symmetric Gauss-Seidel) algorithm has been implemented into the Compressible Navier-Stokes, Finite Volume (CNSFV) code and validated with a multizonal Navier-Stokes simulation of transonic turbulent flow around an ONERA M6 transport wing. The convergence rate and robustness of the code have been improved, and the computational cost has been reduced by at least a factor of 2 compared with the diagonal Beam-Warming scheme.

  13. An international measure of awareness and beliefs about cancer: development and testing of the ABC

    PubMed Central

    Simon, Alice E; Forbes, Lindsay J L; Boniface, David; Warburton, Fiona; Brain, Kate E; Dessaix, Anita; Donnelly, Michael; Haynes, Kerry; Hvidberg, Line; Lagerlund, Magdalena; Petermann, Lisa; Tishelman, Carol; Vedsted, Peter; Vigmostad, Maria Nyre; Wardle, Jane; Ramirez, Amanda J

    2012-01-01

    Objectives To develop an internationally validated measure of cancer awareness and beliefs: the awareness and beliefs about cancer (ABC) measure. Design and setting Items modified from existing measures were assessed by a working group in six countries (Australia, Canada, Denmark, Norway, Sweden and the UK). Validation studies were completed in the UK, and cross-sectional surveys of the general population were carried out in the six participating countries. Participants Testing in UK English included cognitive interviewing for face validity (N=10), calculation of content validity indexes (six assessors), and assessment of test-retest reliability (N=97). Conceptual and cultural equivalence of modified (Canadian and Australian) and translated (Danish, Norwegian, Swedish and Canadian French) ABC versions were tested quantitatively for equivalence of meaning (≥4 assessors per country) and in bilingual cognitive interviews (three interviews per translation). Response patterns were assessed in surveys of adults aged 50+ years (N≥2000) in each country. Main outcomes Psychometric properties were evaluated through tests of validity and reliability, conceptual and cultural equivalence, and systematic item analysis. Test-retest reliability used weighted-κ and intraclass correlations. Aggregate scores were constructed and validated by factor analysis for (1) beliefs about cancer outcomes and (2) beliefs about barriers to symptomatic presentation, and by item summation for (3) awareness of cancer symptoms and (4) awareness of cancer risk factors. Results The English ABC had acceptable test-retest reliability and content validity. International assessments of equivalence identified a small number of items whose wording needed adjustment. Survey response patterns showed that items performed well in terms of difficulty and discrimination across countries, except for awareness of cancer outcomes in Australia. Aggregate scores had consistent factor structures across countries. Conclusions The ABC is a reliable and valid international measure of cancer awareness and beliefs. The methods used to validate and harmonise the ABC may serve as a methodological guide in international survey research. PMID:23253874

  14. Memristive device based learning for navigation in robots.

    PubMed

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities are difficult to achieve in robots, especially if done in real time with ultra-low energy consumption. Here, we present a novel memristive-device-based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities to a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement-learning-based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  15. Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC

    NASA Astrophysics Data System (ADS)

    Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.

    2017-07-01

    The Achromatic Telescopic Squeezing (ATS) scheme offers new techniques to deliver an unprecedentedly small beam spot size at the interaction points of the ATLAS and CMS experiments of the LHC, while perfectly controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics was achieved during LHC Run I (2011/2012) for a first validation of the basics of the scheme at small intensity. In 2016, a new generation of better-performing ATS optics was developed and tested more extensively in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at nominal β* (40 cm) and beyond (33 cm), and to analyse the robustness of these optics in terms of collimation and machine protection. The paper highlights the most relevant and conclusive results obtained during this second series of ATS tests.

  16. Individual bioaerosol particle discrimination by multi-photon excited fluorescence.

    PubMed

    Kiselev, Denis; Bonacina, Luigi; Wolf, Jean-Pierre

    2011-11-21

    Femtosecond laser induced multi-photon excited fluorescence (MPEF) from individual airborne particles is tested for the first time for discriminating bioaerosols. The fluorescence spectra, analysed in 32 channels, exhibit a composite character originating from simultaneous two-photon and three-photon excitation at 790 nm. Simulants of bacteria aggregates (clusters of dyed polystyrene microspheres) and different pollen particles (Ragweed, Pecan, Mulberry) are clearly discriminated by their MPEF spectra. This demonstration experiment opens the way to more sophisticated spectroscopic schemes like pump-probe and coherent control. © 2011 Optical Society of America

  17. Mapping Reef Fish and the Seascape: Using Acoustics and Spatial Modeling to Guide Coastal Management

    PubMed Central

    Costa, Bryan; Taylor, J. Christopher; Kracker, Laura; Battista, Tim; Pittman, Simon

    2014-01-01

    Reef fish distributions are patchy in time and space, with some coral reef habitats supporting higher densities (i.e., aggregations) of fish than others. Identifying and quantifying fish aggregations (particularly during spawning events) are often top priorities for coastal managers. However, the rapid mapping of these aggregations using conventional survey methods (e.g., non-technical SCUBA diving and remotely operated cameras) is limited by depth, visibility and time. Acoustic sensors (i.e., splitbeam and multibeam echosounders) are not constrained by these same limitations, and were used to concurrently map and quantify the location, density and size of reef fish along with seafloor structure in two separate locations in the U.S. Virgin Islands. Reef fish aggregations were documented along the shelf edge, an ecologically important ecotone in the region. Fish were grouped into three classes according to body size, and relationships with the benthic seascape were modeled in one area using Boosted Regression Trees. These models were validated in a second area to test their predictive performance in locations where fish had not been mapped. Models predicting the density of large fish (≥29 cm) performed well (i.e., AUC = 0.77). Water depth and standard deviation of depth were the most influential predictors at two spatial scales (100 and 300 m). Models of small (≤11 cm) and medium (12–28 cm) fish performed poorly (i.e., AUC = 0.49 to 0.68) due to the high prevalence (45–79%) of smaller fish in both locations, and the unequal prevalence of smaller fish in the training and validation areas. Integrating acoustic sensors with spatial modeling offers a new and reliable approach to rapidly identify fish aggregations and to predict the density of large fish in un-surveyed locations. This integrative approach will help coastal managers to prioritize sites, and focus their limited resources on areas that may be of higher conservation value. PMID:24454886
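
    A minimal sketch of the train-in-one-area, validate-in-another workflow described above, using scikit-learn's gradient boosting as a stand-in for Boosted Regression Trees; the predictor columns and all data are synthetic placeholders, not the study's:

      # Train on survey area A, validate on area B, score by AUC.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X_a = rng.normal(size=(500, 2))     # columns: depth, depth_sd (area A)
      y_a = (X_a[:, 0] + 0.5 * X_a[:, 1] + rng.normal(size=500)) > 0
      X_b = rng.normal(size=(300, 2))     # same predictors, area B
      y_b = (X_b[:, 0] + 0.5 * X_b[:, 1] + rng.normal(size=300)) > 0

      model = GradientBoostingClassifier().fit(X_a, y_a)
      auc = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
      print(f"transfer AUC: {auc:.2f}")   # the paper reports 0.77 for large fish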

  18. The quality of the new birth certificate data: a validation study in North Carolina.

    PubMed Central

    Buescher, P A; Taylor, K P; Davis, M H; Bowling, J M

    1993-01-01

    A random sample of 395 December 1989 North Carolina birth certificates and the corresponding maternal hospital medical records were examined to validate selected items. Reporting was very accurate for birth-weight, Apgar score, and method of delivery; fair to good for tobacco use, prenatal care, weight gain during pregnancy, obstetrical procedures, and events of labor and delivery; and poor for medical history and alcohol use. This study suggests that many of the new birth certificate items will support valid aggregate analyses for maternal and child health research and evaluation. PMID:8342728

  19. The influence of local mechanisms on large scale seismic vulnerability estimation of masonry building aggregates

    NASA Astrophysics Data System (ADS)

    Formisano, Antonio; Chieffo, Nicola; Milo, Bartolomeo; Fabbrocino, Francesco

    2016-12-01

    The current paper deals with the seismic vulnerability evaluation of masonry constructions grouped in aggregates through an "ad hoc" quick vulnerability form based on new assessment parameters that account for local collapse mechanisms. First, a parametric kinematic analysis of masonry walls with different height (h) / thickness (t) ratios has been developed with the purpose of identifying the collapse load multiplier for activation of the four main first-order failure mechanisms. Subsequently, a form initially conceived for building aggregates suffering second-mode collapse mechanisms has been expanded on the basis of the achieved results. The proposed quick vulnerability technique has been applied to one case study within the territory of Arsita (Teramo, Italy) and, finally, it has also been validated by comparing its results with those derived from application of the well-known FaMIVE procedure.

  20. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.

  1. Spectral relationships for atmospheric correction. I. Validation of red and near infra-red marine reflectance relationships.

    PubMed

    Goyens, C; Jamet, C; Ruddick, K G

    2013-09-09

    The present study provides an extensive overview of red and near infra-red (NIR) spectral relationships found in the literature and used to constrain red or NIR-modeling schemes in current atmospheric correction (AC) algorithms, with the aim of improving water-leaving reflectance retrievals, ρw(λ), in turbid waters. However, most of these spectral relationships have been developed with restricted datasets and, subsequently, may not be globally valid, explaining the need for an accurate validation exercise. Spectral relationships are validated here with turbid in situ data for ρw(λ). Functions estimating ρw(λ) in the red were only valid for moderately turbid waters (ρw(λNIR) < 3 × 10^-3). In contrast, bounding equations used to limit ρw(667) retrievals according to the water signal at 555 nm appeared to be valid for all turbidity ranges present in the in situ dataset. In the NIR region of the spectrum, the constant NIR reflectance ratio suggested by Ruddick et al. (2006) (Limnol. Oceanogr. 51, 1167-1179) was valid for moderately to very turbid waters (ρw(λNIR) < 10^-2), while the polynomial function, initially developed by Wang et al. (2012) (Opt. Express 20, 741-753) with remote sensing reflectances over the Western Pacific, was also valid for extremely turbid waters (ρw(λNIR) > 10^-2). The results of this study suggest using the red bounding equations and the polynomial NIR function to constrain red or NIR-modeling schemes in AC processes with the aim of improving ρw(λ) retrievals where current AC algorithms fail.

  2. A biophysical insight into the formation of aggregates upon trifluoroethanol induced structural and conformational changes in garlic cystatin.

    PubMed

    Siddiqui, Mohd Faizan; Bano, Bilqees

    2018-06-06

    Intrinsic and extrinsic factors are responsible for the transition of soluble proteins into aggregated forms. Trifluoroethanol (TFE) is one such potent extrinsic factor, which facilitates the formation of aggregated structures. It disrupts the interactive forces and destabilizes the native structure of the protein. The present study investigates the effect of TFE on garlic cystatin. Garlic phytocystatin (GPC) was incubated with increasing concentrations of TFE (0-90% v/v) for 4 h. Incubation of GPC with TFE induces structural changes, thereby resulting in the formation of aggregates. Inactivation of garlic phytocystatin was confirmed by its cysteine proteinase inhibitory activity. Garlic cystatin at 30% TFE exhibits a native-like secondary structure and high ANS fluorescence, suggesting the presence of a molten globule state. Circular dichroism and FTIR confirmed the transition of the native alpha-helical structure of garlic cystatin to a beta-sheet structure at 60% TFE. Furthermore, increased ThT fluorescence and a redshift in the Congo red absorbance assay confirmed the presence of aggregates. Rayleigh scattering and turbidity assays were also performed to validate the aggregation results. Scanning electron microscopy was used to analyze the morphological changes, confirming the presence of sheath-like structures at 60% TFE. The study sheds light on the conformational behavior of a plant protein kept under stress conditions induced by an extrinsic factor. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Accreditation the Education Development Centers of Medical-Sciences Universities: Another Step toward Quality Improvement in Education

    PubMed Central

    Haghdoost, AA; Momtazmanesh, N; Shoghi, F; Mohagheghi, M; Mehrolhassani, MH

    2013-01-01

    Background: In order to improve the quality of education in universities of medical sciences (UMS), and because of the key role of education development centers (EDCs), an accreditation scheme was developed to evaluate their performance. Method: A group of experts in the medical education field was selected based on pre-defined criteria by the EDC of the Ministry of Health and Medical Education. The team worked intensively for 6 months to develop a list of essential standards to assess the performance of EDCs. Having checked the content validity of the standards, clear and measurable indicators were created via consensus. Then, the required information was collected from UMS EDCs; the first round of accreditation was carried out just to check the acceptability of this scheme and to push universities to prepare themselves for the next, actual round of accreditation. Results: Five standards domains were developed as the conceptual framework for defining the main categories of indicators: governing and leadership, educational planning, faculty development, assessment and examination, and research in education. Nearly all UMS filled in all required data forms precisely with minimal confusion, which shows the practicality of this accreditation scheme. Conclusion: It seems that the UMS have enough interest to provide the required information for this accreditation scheme. However, in order to achieve promising results, most universities will have to work intensively to reach minimum levels in all required standards. In the long term, implementation of a valid accreditation scheme should play an important role in improving the quality of medical education around the country. PMID:23865031

  4. Accreditation the Education Development Centers of Medical-Sciences Universities: Another Step toward Quality Improvement in Education.

    PubMed

    Haghdoost, Aa; Momtazmanesh, N; Shoghi, F; Mohagheghi, M; Mehrolhassani, Mh

    2013-01-01

    In order to improve the quality of education in universities of medical sciences (UMS), and because of the key role of education development centers (EDCs), an accreditation scheme was developed to evaluate their performance. A group of experts in the medical education field was selected based on pre-defined criteria by the EDC of the Ministry of Health and Medical Education. The team worked intensively for 6 months to develop a list of essential standards to assess the performance of EDCs. Having checked the content validity of the standards, clear and measurable indicators were created via consensus. Then, the required information was collected from UMS EDCs; the first round of accreditation was carried out just to check the acceptability of this scheme and to push universities to prepare themselves for the next, actual round of accreditation. Five standards domains were developed as the conceptual framework for defining the main categories of indicators: governing and leadership, educational planning, faculty development, assessment and examination, and research in education. Nearly all UMS filled in all required data forms precisely with minimal confusion, which shows the practicality of this accreditation scheme. It seems that the UMS have enough interest to provide the required information for this accreditation scheme. However, in order to achieve promising results, most universities will have to work intensively to reach minimum levels in all required standards. In the long term, implementation of a valid accreditation scheme should play an important role in improving the quality of medical education around the country.

  5. Qualification of APOLLO2 BWR calculation scheme on the BASALA mock-up

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaglio-Gaudard, C.; Santamarina, A.; Sargeni, A.

    2006-07-01

    A new neutronic APOLLO2/MOC/SHEM/CEA2005 calculation scheme for BWR applications has been developed by the French 'Commissariat a l'Energie Atomique'. This scheme is based on the latest calculation methodology (accurate mutual and self-shielding formalism, MOC treatment of the transport equation) and the recent JEFF3.1 nuclear data library. This paper presents the experimental validation of this new calculation scheme on the BASALA BWR mock-up. The BASALA programme is devoted to the measurement of the physical parameters of high moderation 100% MOX BWR cores, in hot and cold conditions. The experimental validation of the calculation scheme deals with core reactivity, fission rate maps, reactivity worth of void and absorbers (cruciform control blades and Gd pins), as well as the temperature coefficient. Results of the analysis using APOLLO2/MOC/SHEM/CEA2005 show an overestimation of the core reactivity by 600 pcm for BASALA-Hot and 750 pcm for BASALA-Cold. Reactivity worths of gadolinium poison pins and hafnium or B4C control blades are predicted by the APOLLO2 calculation within 2% accuracy. Furthermore, the radial power map is well predicted for every core configuration, including the Void configuration and the Hf / B4C configurations: fission rates in the central assembly are calculated within the ±2% experimental uncertainty for the reference cores. The C/E bias on the isothermal Moderator Temperature Coefficient, using the CEA2005 library based on the JEFF3.1 file, amounts to -1.7 ± 0.3 pcm/°C over the range 10-80 °C. (authors)

  6. Effect of climate change on the irrigation and discharge scheme for winter wheat in Huaibei Plain, China

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Ren, L.; Lü, H.

    2017-12-01

    On the Huaibei Plain of Anhui Province, China, winter wheat (WW) is the most prominent crop. The study area has a transitional climate and a shallow water table, and global warming adds further complexity to an already complex pattern of climate variability. The winter wheat growing period, from October to June, coincides with the dry season, so WW growth always depends in part on irrigation water. Under such complex climate change, rainfall varies between growing seasons and water table elevations vary with it; the water table thus mediates a variable moisture exchange between soil water and groundwater, which affects the irrigation and discharge scheme required for plant growth and yield. On the Huaibei Plain, environmental pollution is serious because of the agricultural use of chemical fertilizers, pesticides, herbicides and other inputs. To protect river water and groundwater from pollution, the irrigation and discharge scheme should be estimated accurately. Determining the irrigation and discharge scheme for winter wheat under climate change is therefore important for crop management decision-making. Based on field observations and local weather data for 2004-2005 and 2005-2006, the numerical model HYDRUS-1D was calibrated and validated by comparing simulated and measured root-zone soil water contents. The validated model was then used to estimate the irrigation and discharge scheme for 2010-2090 under the scenarios described by HadCM3 (the 1970-2000 climate states are taken as baselines), with winter wheat growth in an optimum state as indicated by growth height and LAI.

  7. Cryptosporidium genotyping in Europe: The current status and processes for a harmonised multi-locus genotyping scheme.

    PubMed

    Chalmers, Rachel M; Pérez-Cordón, Gregorio; Cacció, Simone M; Klotz, Christian; Robertson, Lucy J

    2018-06-13

    Due to the occurrence of genetic recombination, a reliable and discriminatory method to genotype Cryptosporidium isolates at the intra-species level requires the analysis of multiple loci, but a standardised scheme is not currently available. A workshop was held at the Robert Koch Institute, Berlin in 2016 that gathered 23 scientists with appropriate expertise (in either Cryptosporidium genotyping and/or surveillance, epidemiology or outbreaks) to discuss the processes for the development of a robust, standardised, multi-locus genotyping (MLG) scheme and propose an approach. The background evidence and main conclusions were outlined in a previously published report; the objectives of this further report are to describe 1) the current use of Cryptosporidium genotyping, 2) the elicitation and synthesis of the participants' opinions, and 3) the agreed processes and criteria for the development, evaluation and validation of a standardised MLG scheme for Cryptosporidium surveillance and outbreak investigations. Cryptosporidium was characterised to the species level in 7/12 (58%) participating European countries, mostly for human outbreak investigations. Further genotyping was mostly by sequencing the gp60 gene. A ranking exercise of performance and convenience criteria found that portability, biological robustness, typeability, and discriminatory power were considered by participants as the most important attributes in developing a multilocus scheme. The major barrier to implementation was lack of funding. A structured process for marker identification, evaluation, validation, implementation, and maintenance was proposed and outlined for application to Cryptosporidium, with prioritisation of Cryptosporidium parvum to support investigation of transmission in Europe. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. A Self-Assembled Aggregate Composed of a Fatty Acid Membrane and the Building Blocks of Biological Polymers Provides a First Step in the Emergence of Protocells

    PubMed Central

    Black, Roy A.; Blosser, Matthew C.

    2016-01-01

    We propose that the first step in the origin of cellular life on Earth was the self-assembly of fatty acids with the building blocks of RNA and protein, resulting in a stable aggregate. This scheme provides explanations for the selection and concentration of the prebiotic components of cells; the stabilization and growth of early membranes; the catalysis of biopolymer synthesis; and the co-localization of membranes, RNA and protein. In this article, we review the evidence and rationale for the formation of the proposed aggregate: (i) the well-established phenomenon of self-assembly of fatty acids to form vesicles; (ii) our published evidence that nucleobases and sugars bind to and stabilize such vesicles; and (iii) the reasons why amino acids likely do so as well. We then explain how the conformational constraints and altered chemical environment due to binding of the components to the membrane could facilitate the formation of nucleosides, oligonucleotides and peptides. We conclude by discussing how the resulting oligomers, even if short and random, could have increased vesicle stability and growth more than their building blocks did, and how competition among these vesicles could have led to longer polymers with complex functions. PMID:27529283

  9. Mesoscale Fracture Analysis of Multiphase Cementitious Composites Using Peridynamics

    PubMed Central

    Yaghoobi, Amin; Chorzepa, Mi G.; Kim, S. Sonny; Durham, Stephan A.

    2017-01-01

    Concrete is a complex heterogeneous material, and thus, it is important to develop numerical modeling methods to enhance the prediction accuracy of the fracture mechanism. In this study, a two-dimensional mesoscale model is developed using a non-ordinary state-based peridynamic (NOSBPD) method. Fracture in a concrete cube specimen subjected to pure tension is studied. The presence of heterogeneous materials consisting of coarse aggregates, interfacial transition zones, air voids and cementitious matrix is characterized as particle points in a two-dimensional mesoscale model. Coarse aggregates and voids are generated using uniform probability distributions, while a statistical study is provided to capture the effect of random distributions of the constituent materials. In obtaining the steady-state response, an incremental and iterative solver is adopted for the dynamic relaxation method. Load-displacement curves and damage patterns are compared with available experimental and finite element analysis (FEA) results. Although the proposed model uses much simpler material damage models and discretization schemes, the load-displacement curves show no difference from the FEA results. Furthermore, no mesh refinement is necessary, as fracture is inherently characterized by bond breakages. Finally, a sensitivity study is conducted to understand the effect of aggregate volume fraction and porosity on the load capacity of the proposed mesoscale model. PMID:28772518

  10. Effects of aggregation of drug and diagnostic codes on the performance of the high-dimensional propensity score algorithm: an empirical example.

    PubMed

    Le, Hoa V; Poole, Charles; Brookhart, M Alan; Schoenbach, Victor J; Beach, Kathleen J; Layton, J Bradley; Stürmer, Til

    2013-11-19

    The High-Dimensional Propensity Score (hd-PS) algorithm can select and adjust for baseline confounders of treatment-outcome associations in pharmacoepidemiologic studies that use healthcare claims data. How hd-PS performance is affected by aggregating medications or medical diagnoses has not been assessed. We evaluated the effects of aggregating medications or diagnoses on hd-PS performance in an empirical example using resampled cohorts with small sample size, rare outcome incidence, or low exposure prevalence. In a cohort study comparing the risk of upper gastrointestinal complications in celecoxib or traditional NSAIDs (diclofenac, ibuprofen) initiators with rheumatoid arthritis and osteoarthritis, we (1) aggregated medications and International Classification of Diseases-9 (ICD-9) diagnoses into hierarchies of the Anatomical Therapeutic Chemical classification (ATC) and the Clinical Classification Software (CCS), respectively, and (2) sampled the full cohort using techniques validated by simulations to create 9,600 samples to compare 16 aggregation scenarios across 50% and 20% samples with varying outcome incidence and exposure prevalence. We applied hd-PS to estimate relative risks (RR) using 5 dimensions, predefined confounders, ≤ 500 hd-PS covariates, and propensity score deciles. For each scenario, we calculated: (1) the geometric mean RR; (2) the difference between the scenario mean ln(RR) and the ln(RR) from published randomized controlled trials (RCT); and (3) the proportional difference in the degree of estimated confounding between that scenario and the base scenario (no aggregation). Compared with the base scenario, aggregations of medications into ATC level 4 alone or in combination with aggregation of diagnoses into CCS level 1 improved the hd-PS confounding adjustment in most scenarios, reducing residual confounding compared with the RCT findings by up to 19%. Aggregation of codes using hierarchical coding systems may improve the performance of the hd-PS to control for confounders. The balance of advantages and disadvantages of aggregation is likely to vary across research settings.
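
    The three per-scenario summaries can be written compactly; the sketch below uses placeholder relative-risk values and one plausible reading of (3), in which estimated confounding is the gap between a scenario's mean ln(RR) and the RCT benchmark:

      import numpy as np

      rr_scenario = np.array([1.10, 1.25, 1.18])   # RRs from one aggregation scenario
      rr_base     = np.array([1.30, 1.35, 1.28])   # RRs from the base scenario
      rr_rct      = 1.45                           # RCT benchmark

      geo_mean_rr = np.exp(np.mean(np.log(rr_scenario)))          # (1)
      conf_scen = np.mean(np.log(rr_scenario)) - np.log(rr_rct)   # (2)
      conf_base = np.mean(np.log(rr_base)) - np.log(rr_rct)
      prop_diff = (conf_scen - conf_base) / conf_base             # (3)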

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprik, Samuel; Kurtz, Jennifer M; Ainscough, Christopher D

    In this presentation, the National Renewable Energy Laboratory presented aggregated analysis results for existing hydrogen stations, covering performance, operation, utilization, maintenance, safety, hydrogen quality, and cost. The U.S. Department of Energy funds technology validation work at NREL through its National Fuel Cell Technology Evaluation Center (NFCTEC).

  12. Children's Behavior in the Postanesthesia Care Unit: The Development of the Child Behavior Coding System-PACU (CBCS-P)

    PubMed Central

    Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.

    2012-01-01

    Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123

  13. Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding

    NASA Astrophysics Data System (ADS)

    Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool

    2017-12-01

    In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure as compared to intensity images because of non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. The scheme has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key-space is large enough to resist brute-force attack, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and an analysis based on the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results indicate that the proposed encryption scheme possesses a high level of security.
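
    The double-random-mask core of such a scheme is easy to prototype; the numpy sketch below substitutes a seeded pseudo-random generator for the 3D-Lorenz keystream that drives the masks in the paper, so it illustrates the structure rather than the exact cryptosystem:

      import numpy as np

      rng = np.random.default_rng(42)            # stand-in for the Lorenz keystream
      img = rng.random((64, 64))                 # test grayscale image in [0, 1]
      phase_image = np.exp(1j * np.pi * img)     # encode the image as phase

      amp_mask = 0.5 + 0.5 * rng.random((64, 64))             # spatial amplitude mask
      phase_mask = np.exp(2j * np.pi * rng.random((64, 64)))  # frequency phase mask

      cipher = np.fft.ifft2(np.fft.fft2(phase_image * amp_mask) * phase_mask)

      # Decryption with the correct masks inverts each step exactly.
      recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase_mask)) / amp_mask
      assert np.allclose(recovered, phase_image)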

  14. Numerical solution of special ultra-relativistic Euler equations using central upwind scheme

    NASA Astrophysics Data System (ADS)

    Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul

    2018-06-01

    This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations. These equations describe perfect fluid flow in terms of the particle density, the four-velocity and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme exploits specific information on local propagation speeds. By using a Runge-Kutta time stepping method and MUSCL-type initial reconstruction, the proposed scheme attains second-order accuracy. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, our proposed scheme demonstrates very good agreement with the results obtained by well-established algorithms, even in the case of highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The robustness and efficiency of the central upwind scheme are demonstrated by the numerical results.
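
    The flavor of the central upwind flux is captured by the scalar sketch below, which applies a first-order Kurganov-type flux to Burgers' equation as a stand-in for the ultra-relativistic Euler system (the paper adds MUSCL reconstruction and Runge-Kutta stepping for second-order accuracy):

      import numpy as np

      def f(u):                                  # Burgers flux, f(u) = u^2 / 2
          return 0.5 * u * u

      def central_upwind_step(u, dx, dt):
          uL, uR = u[:-1], u[1:]                 # left/right interface states
          ap = np.maximum.reduce([uL, uR, np.zeros_like(uL)])   # a+ local speed
          am = np.minimum.reduce([uL, uR, np.zeros_like(uL)])   # a- local speed
          denom = np.where(ap - am > 1e-14, ap - am, 1.0)
          H = (ap * f(uL) - am * f(uR) + ap * am * (uR - uL)) / denom
          u_new = u.copy()
          u_new[1:-1] -= dt / dx * (H[1:] - H[:-1])   # conservative update
          return u_new

      x = np.linspace(0.0, 1.0, 201)
      u = np.sin(2 * np.pi * x)                  # smooth initial data
      for _ in range(100):
          u = central_upwind_step(u, dx=x[1] - x[0], dt=0.002)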

  15. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    NASA Astrophysics Data System (ADS)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.

  16. An enhanced password authentication scheme for session initiation protocol with perfect forward secrecy.

    PubMed

    Qiu, Shuming; Xu, Guoai; Ahmad, Haseeb; Guo, Yanhui

    2018-01-01

    The Session Initiation Protocol (SIP) is an extensive and esteemed communication protocol employed to regulate signaling as well as to control multimedia communication sessions. Recently, Kumari et al. proposed an improved smart card based authentication scheme for SIP based on Farash's scheme. Farash claimed that his protocol is resistant against various known attacks. However, we observe some notable flaws in Farash's protocol. We point out that Farash's protocol is prone to key-compromise impersonation attacks and is unable to provide pre-verification in the smart card, efficient password change and perfect forward secrecy. To overcome these limitations, in this paper we present an enhanced authentication mechanism based on Kumari et al.'s scheme. We prove that the proposed protocol not only overcomes the issues in Farash's scheme, but can also resist all known attacks. We also provide a security analysis of the proposed scheme with the help of the widespread AVISPA (Automated Validation of Internet Security Protocols and Applications) software. Finally, comparing with earlier proposals in terms of security and efficiency, we conclude that the proposed protocol is efficient and more secure.

  17. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure built on a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipeline processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we employ software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
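
    The numerical core that the DSP pipelines is the classic four-step phase-shifting calculation; a minimal sketch (with a synthetic phase ramp in place of captured fringe images) is:

      import numpy as np

      def wrapped_phase(I1, I2, I3, I4):
          """Four-step phase shifting with shifts of 0, pi/2, pi, 3*pi/2."""
          return np.arctan2(I4 - I2, I1 - I3)

      # Synthetic test: a known phase ramp is recovered up to 2*pi wrapping.
      phi = np.tile(np.linspace(0, 6 * np.pi, 256), (128, 1))
      I = [0.5 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
      phi_wrapped = wrapped_phase(*I)
      phi_unwrapped = np.unwrap(phi_wrapped, axis=1)   # row-wise unwrapping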

  18. Advection of Microphysical Scalars in Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The Terminal Area Simulation System (TASS) is a large eddy scale atmospheric flow model with extensive turbulence and microphysics packages. It has been applied successfully in the past to a diverse set of problems, ranging from the prediction of severe convective events (Proctor et al. 2002) and the tracking of storms to the simulation of weapons effects such as the dispersion and fallout of fission debris (Bacon and Sarma 1991). More recently, TASS has been used for predicting the transport and decay of wake vortices behind aircraft (Proctor 2009). An essential part of the TASS model is its comprehensive microphysics package, which relies on the accurate computation of microphysical scalar transport. This paper describes an evaluation of the Leonard scheme implemented in the TASS model for transporting microphysical scalars. The scheme is validated against benchmark cases with exact solutions and compared with two other schemes - a Monotone Upstream-centered Scheme for Conservation Laws (MUSCL)-type scheme after van Leer and LeVeque's high-resolution wave propagation method. Finally, a comparison between the schemes is made against a case of severe tornadic supercell convection near Del City, Oklahoma.
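
    As a flavor of the class of schemes being compared (though not the Leonard scheme itself), a MUSCL-type scalar advection step after van Leer, with a minmod limiter on a periodic 1-D grid, can be sketched as:

      import numpy as np

      def minmod(a, b):
          return np.where(a * b > 0,
                          np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def muscl_advect(q, a, dx, dt):
          """One step of limited second-order upwind advection, a > 0."""
          slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
          q_face = q + 0.5 * (1 - a * dt / dx) * slope   # right-face value
          flux = a * q_face
          return q - dt / dx * (flux - np.roll(flux, 1))

      x = np.linspace(0, 1, 200, endpoint=False)
      q = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)      # square-wave test
      for _ in range(200):
          q = muscl_advect(q, a=1.0, dx=x[1] - x[0], dt=0.0025)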

  19. Systematic Review of Methods in Low-Consensus Fields: Supporting Commensuration through 'Construct-Centered Methods Aggregation' in the Case of Climate Change Vulnerability Research

    PubMed Central

    Crane, Todd A.; Chesterman, Sabrina

    2016-01-01

    There is increasing interest in using systematic review to synthesize evidence on the social and environmental effects of and adaptations to climate change. Use of systematic review for evidence in this field is complicated by the heterogeneity of methods used and by uneven reporting. In order to facilitate synthesis of results and design of subsequent research a method, construct-centered methods aggregation, was designed to 1) provide a transparent, valid and reliable description of research methods, 2) support comparability of primary studies and 3) contribute to a shared empirical basis for improving research practice. Rather than taking research reports at face value, research designs are reviewed through inductive analysis. This involves bottom-up identification of constructs, definitions and operationalizations; assessment of concepts’ commensurability through comparison of definitions; identification of theoretical frameworks through patterns of construct use; and integration of transparently reported and valid operationalizations into ideal-type research frameworks. Through the integration of reliable bottom-up inductive coding from operationalizations and top-down coding driven from stated theory with expert interpretation, construct-centered methods aggregation enabled both resolution of heterogeneity within identically named constructs and merging of differently labeled but identical constructs. These two processes allowed transparent, rigorous and contextually sensitive synthesis of the research presented in an uneven set of reports undertaken in a heterogeneous field. If adopted more broadly, construct-centered methods aggregation may contribute to the emergence of a valid, empirically-grounded description of methods used in primary research. These descriptions may function as a set of expectations that improves the transparency of reporting and as an evolving comprehensive framework that supports both interpretation of existing and design of future research. PMID:26901409

  20. Neural Network Autopilot System for a Mathematical Model of the Boeing 747

    DTIC Science & Technology

    1998-08-04

    [9] Napolitano, M.R., Neppach, C., Casdorph, V., Naylor, S., "On-Line ... Validation Schemes for Implementation on the NASA/Aurora Theseus", Thesis, WVU MAE Dept., Morgantown, WV, June 1996.

  1. Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.

    PubMed

    Mulaveesala, Ravibabu; Venkata Ghali, Subbarao

    2011-05-01

    This paper proposes Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is demonstrated with a recently introduced correlation-based post-processing approach and compared with existing phase-based analysis in terms of the signal-to-noise ratio. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat bottom holes located at different depths.
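
    The correlation-based post-processing idea reduces to matched filtering of the recorded thermal response against the Barker-coded reference; the sketch below uses synthetic signals, since the actual thermograms are not given here:

      import numpy as np

      barker7 = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)   # 7-bit Barker code
      ref = np.repeat(barker7, 50)               # coded excitation waveform

      rng = np.random.default_rng(1)
      response = np.convolve(ref, np.exp(-np.arange(200) / 60.0))[:ref.size]
      response += 0.2 * rng.normal(size=ref.size)          # noisy "thermal" response

      corr = np.correlate(response, ref, mode="full")      # matched-filter output
      lag = corr.argmax() - (ref.size - 1)                 # delay of the best match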

  2. Simulation study on combination of GRACE monthly gravity field solutions

    NASA Astrophysics Data System (ADS)

    Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian

    2016-04-01

    The GRACE monthly gravity fields from different processing centers are combined within the framework of the EGSIEM project. This combination is done at the solution level first, to define weights which will then be used for a combination at the normal-equation level. The applied weights are based on the deviation of the individual gravity fields from the arithmetic mean of all involved gravity fields. This weighting scheme relies on the assumption that the true gravity field is close to the arithmetic mean of the involved individual gravity fields. However, the arithmetic mean can be affected by systematic errors in individual gravity fields, which consequently results in inappropriate weights. For the future operational scientific combination service of GRACE monthly gravity fields, it is necessary to examine the validity of the weighting scheme also in possible extreme cases. To investigate this, we perform a simulation study of the combination of gravity fields. First, we show how a deviating gravity field can affect the combined solution in terms of signal and noise in the spatial domain. We also show the impact of systematic errors in individual gravity fields on the resulting combined solution. Then, we investigate whether the weighting scheme still works in the presence of outliers. The results of this simulation study will help to understand and validate the weighting scheme applied to the combination of the monthly gravity fields.
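
    A toy version of the deviation-based weighting is easy to state: each centre's solution is compared with the arithmetic mean of all solutions, and weights are taken inversely proportional to the variance of that deviation. Everything below is synthetic; real solutions are sets of spherical-harmonic coefficients:

      import numpy as np

      rng = np.random.default_rng(7)
      truth = rng.normal(size=100)                        # "true" field (unknown)
      centres = [truth + rng.normal(scale=s, size=100)    # three centres with
                 for s in (0.5, 1.0, 2.0)]                # different noise levels

      mean_field = np.mean(centres, axis=0)               # arithmetic mean
      w = np.array([1.0 / np.var(c - mean_field) for c in centres])
      w /= w.sum()                                        # normalised weights

      combined = sum(wi * c for wi, c in zip(w, centres)) # weighted combination

    Note that a systematic error in any one centre shifts mean_field itself, which is exactly the failure mode the simulation study probes.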

  3. Effectiveness of a quality management program in dental care practices.

    PubMed

    Goetz, Katja; Campbell, Stephen M; Broge, Björn; Brodowski, Marc; Wensing, Michel; Szecsenyi, Joachim

    2014-04-28

    Structured quality management is an important aspect of improving patient dental care outcomes, but reliable evidence validating its effects is lacking. We aimed to examine the effectiveness of a quality management program in primary dental care settings in Germany. This was an exploratory study with a before-after design. 45 dental care practices that had completed the European Practice Assessment (EPA) accreditation scheme twice (intervention group) were selected for the study. The mean interval between the before and after assessments was 36 months. The comparison group comprised 56 dental practices that had undergone their first assessment at the same time as the follow-up assessment in the intervention group. Aggregated scores for five EPA domains, 'infrastructure', 'information', 'finance', 'quality and safety' and 'people', were calculated. In the intervention group, small non-significant improvements were found in the EPA domains. At follow-up, the intervention group had higher scores on the EPA domains than the comparison group (the range of differences was 4.2 to 10.8 across domains). These differences were all significant in regression analyses, which controlled for relevant dental practice characteristics. Dental care practices that implemented a quality management program had better organizational quality than a comparison group. This may reflect both improvements in the intervention group and a selection effect of dental practices volunteering for the first round of EPA practice assessment.

  4. Maximum Power Point Tracking with Dichotomy and Gradient Method for Automobile Exhaust Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Fang, W.; Quan, S. H.; Xie, C. J.; Tang, X. F.; Wang, L. L.; Huang, L.

    2016-03-01

    In this study, a direct-current/direct-current (DC/DC) converter with maximum power point tracking (MPPT) is developed to down-convert the high voltage DC output from a thermoelectric generator to the lower voltage required to charge batteries. To improve the tracking accuracy and speed of the converter, a novel MPPT control scheme characterized by an aggregated dichotomy and gradient (ADG) method is proposed. In the first stage, the dichotomy algorithm is used as a fast search method to find the approximate region of the maximum power point. The gradient method is then applied for rapid and accurate tracking of the maximum power point. To validate the proposed MPPT method, a test bench composed of an automobile exhaust thermoelectric generator was constructed for harvesting automotive exhaust heat energy. Steady-state and transient tracking experiments under five different load conditions were carried out using a DC/DC converter with the proposed ADG method and with three traditional methods. The experimental results show that the ADG method can track the maximum power within 140 ms with a 1.1% error rate when the engine operates at 3300 rpm and 71 N·m, which is superior to the single dichotomy method, the single gradient method, and the perturb and observe method in terms of tracking accuracy and speed.
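
    The two-stage ADG idea can be sketched on a synthetic power-voltage curve: bisection on the sign of dP/dV first brackets the maximum power point, then gradient steps refine it. The P(V) model and all constants below are illustrative, not the paper's test bench:

      import numpy as np

      def power(v):                        # synthetic single-peak P(V) curve
          return v * (8.0 - 0.5 * v ** 1.5)

      def dP(v, h=1e-4):                   # numerical derivative dP/dV
          return (power(v + h) - power(v - h)) / (2 * h)

      lo, hi = 0.1, 10.0
      for _ in range(10):                  # stage 1: dichotomy bracketing
          mid = 0.5 * (lo + hi)
          if dP(mid) > 0:
              lo = mid                     # peak lies to the right
          else:
              hi = mid

      v = 0.5 * (lo + hi)
      for _ in range(50):                  # stage 2: gradient refinement
          v += 0.05 * dP(v)
      print(f"MPP near V = {v:.2f}, P = {power(v):.2f}")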

  5. Indirect measurement of three-photon correlation in nonclassical light sources

    NASA Astrophysics Data System (ADS)

    Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon

    2016-06-01

    We observe the three-photon correlation in nonclassical light sources by using an indirect measurement scheme based on the dead-time effect of photon-counting detectors. We first develop a general theory which enables us to extract the three-photon correlation from the two-photon correlation of an arbitrary light source measured with detectors with finite dead times. We then confirm the validity of our measurement scheme in experiments done with a cavity-QED microlaser operating with a large intracavity mean photon number exhibiting both sub- and super-Poissonian photon statistics. The experimental results are in good agreement with the theoretical expectation. Our measurement scheme provides an alternative approach for N -photon correlation measurement employing (N -1 ) detectors and thus a reduced measurement time for a given signal-to-noise ratio, compared to the usual scheme requiring N detectors.

  6. A recursive field-normalized bibliometric performance indicator: an application to the field of library and information science.

    PubMed

    Waltman, Ludo; Yan, Erjia; van Eck, Nees Jan

    2011-10-01

    Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighing (as in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighing are combined.
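
    One toy reading of combining the two ideas: a paper's score is the score-weighted sum of its citations, renormalized by the mean of that quantity within the paper's field. The damping term below is our addition to keep the toy stable; it is not part of the authors' indicator:

      import numpy as np

      cites = np.array([[0, 1, 1],     # cites[i, j] = 1 if paper j cites paper i
                        [0, 0, 1],
                        [0, 0, 0]], dtype=float)
      field = np.array([0, 0, 1])      # field label of each paper

      score = np.ones(3)
      for _ in range(50):              # recursive citation weighing
          raw = cites @ score          # citations weighted by citer scores
          expected = np.array([raw[field == f].mean() for f in field])
          safe = np.where(expected > 0, expected, 1.0)
          score = 0.5 * raw / safe + 0.5   # damped, field-normalized update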

  7. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    NASA Astrophysics Data System (ADS)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key elements of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization sensitive characteristic of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs helps to achieve a preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of effective nonlinearity suppression and provides better SNR performance even over a large frequency range. The proposed scheme is proved to be effective and easily implemented for real-time photonic applications.

  8. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
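
    In pixel-based form, the modified measures reduce to weighted recall and precision; the sketch below uses uniform weights, whereas the paper derives weight maps that diminish evaluation bias:

      import numpy as np

      def weighted_recall_precision(B, G, W):
          """B: binarization output, G: ground truth, W: per-pixel weights."""
          tp = (B & G).astype(float)
          recall = (W * tp).sum() / (W * G).sum()
          precision = (W * tp).sum() / (W * B).sum()
          return recall, precision

      G = np.zeros((8, 8), dtype=bool); G[2:6, 2:6] = True   # ground-truth text
      B = np.zeros_like(G);             B[2:6, 2:5] = True   # detected text
      W = np.ones(G.shape)                                   # uniform weights
      print(weighted_recall_precision(B, G, W))              # (0.75, 1.0)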

  9. Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.

    PubMed

    Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu

    2017-03-15

    The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk when switching between the transmission and calibration modes. Here, we proposed a single-photon level continuously working PBTS using only sifted key bits revealed during an error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system in a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% and a standard derivation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.

  10. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicate that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  11. Aggregation of nanoparticles in endosomes and lysosomes produces surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Lucas, Leanne J.; Chen, Xiaoke K.; Smith, Aaron J.; Korbelik, Mladen; Zeng, Haishan; Lee, Patrick W. K.; Hewitt, Kevin Cecil

    2015-01-01

    The purpose of this study was to explore the use of surface-enhanced Raman spectroscopy (SERS) to image the distribution of epidermal growth factor receptor (EGFR) in cells. To accomplish this task, 30-nm gold nanoparticles (AuNPs) tagged with antibodies to EGFR (10^12 per mL) were incubated with cells (10^6 per mL) of the A431 human epidermoid carcinoma and normal human bronchial epithelial cell lines. Using the 632.8-nm excitation line of a He-Ne laser, Raman spectroscopy measurements were performed using a point mapping scheme. Normal cells show little to no enhancement. SERS signals were observed inside the cytoplasm of A431 cells with an overall enhancement of 4 to 7 orders of magnitude. Raman intensity maps of the 1450 and 1583 cm^-1 peaks correlate well with the expected distribution of EGFR and AuNPs, aggregated following uptake by endosomes and lysosomes. Spectral features from tyrosine and tryptophan residues dominate the SERS signals.

  12. Chain architecture and micellization: A mean-field coarse-grained model for poly(ethylene oxide) alkyl ether surfactants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García Daza, Fabián A.; Mackie, Allan D., E-mail: allan.mackie@urv.cat; Colville, Alexander J.

    2015-03-21

    Microscopic modeling of surfactant systems is expected to be an important tool to describe, understand, and take full advantage of the micellization process for different molecular architectures. Here, we implement a single chain mean field theory to study the relevant equilibrium properties such as the critical micelle concentration (CMC) and aggregation number for three sets of surfactants with different geometries maintaining constant the number of hydrophobic and hydrophilic monomers. The results demonstrate the direct effect of the block organization for the surfactants under study by means of an analysis of the excess energy and entropy which can be accurately determined from the mean-field scheme. Our analysis reveals that the CMC values are sensitive to branching in the hydrophilic head part of the surfactant and can be observed in the entropy-enthalpy balance, while aggregation numbers are also affected by splitting the hydrophobic tail of the surfactant and are manifested by slight changes in the packing entropy.

  13. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C.

    2010-01-01

    This paper presents a novel classification via aggregated regression algorithm – dubbed CAVIAR – and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature – the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain – and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS. PMID:21151847

  14. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE.

    PubMed

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C

    2010-04-14

    This paper presents a novel classification via aggregated regression algorithm - dubbed CAVIAR - and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature - the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain - and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS.

  15. Chain architecture and micellization: A mean-field coarse-grained model for poly(ethylene oxide) alkyl ether surfactants

    NASA Astrophysics Data System (ADS)

    García Daza, Fabián A.; Colville, Alexander J.; Mackie, Allan D.

    2015-03-01

    Microscopic modeling of surfactant systems is expected to be an important tool to describe, understand, and take full advantage of the micellization process for different molecular architectures. Here, we implement a single chain mean field theory to study the relevant equilibrium properties such as the critical micelle concentration (CMC) and aggregation number for three sets of surfactants with different geometries maintaining constant the number of hydrophobic and hydrophilic monomers. The results demonstrate the direct effect of the block organization for the surfactants under study by means of an analysis of the excess energy and entropy which can be accurately determined from the mean-field scheme. Our analysis reveals that the CMC values are sensitive to branching in the hydrophilic head part of the surfactant and can be observed in the entropy-enthalpy balance, while aggregation numbers are also affected by splitting the hydrophobic tail of the surfactant and are manifested by slight changes in the packing entropy.

  16. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and investigate the connection between the low-frequency image and the defocused image. The NSCT decomposes the detail information of an image, which resides at different scales and in different directions, into the bandpass subband coefficients. To correctly select the prefused bandpass directional coefficients, we introduce the multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly identifies the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by the inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
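
    A minimal sketch of the bandpass selection step, assuming the NSCT decomposition has already been computed by an external tool: per-pixel local energy stands in for the paper's multiscale curvature as the focus measure, and the function names are hypothetical.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_bandpass(coef_a, coef_b, window=7):
          # coef_a, coef_b: same-shape NSCT bandpass coefficients of the two
          # source images; keep, per pixel, the coefficient with the higher
          # local activity (a crude stand-in for multiscale curvature)
          act_a = uniform_filter(coef_a ** 2, size=window)
          act_b = uniform_filter(coef_b ** 2, size=window)
          return np.where(act_a >= act_b, coef_a, coef_b)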

  17. Influences of different land use spatial control schemes on farmland conversion and urban development.

    PubMed

    Zhou, Min; Tan, Shukui; Zhang, Lu

    2015-01-01

    Land use planning is officially implemented as a tool to control urban development and protect farmland, yet its impact on land use change remains untested in China. Using a case study of the Hang-Jia-Hu region, the main objective of this paper was to investigate the influence of different land use spatial control schemes on farmland conversion and urban development. Farmland conversion and urban development patterns in the urban planning area and the non-urban planning area were compared using remote sensing, geographical information systems, and landscape metrics. Results indicated that farmland conversion in the non-urban planning area was more intensive than in the urban planning area, and that farmland patterns there were more fragmented. Built-up land patterns in the non-urban planning area showed a trend of aggregation, while those in the urban planning area showed a dual trend of fragmentation and aggregation. Existing built-up areas had less influence on built-up land sprawl in the non-urban planning area than in the urban planning area. Built-up land sprawl in the form of continuous development in the urban planning area led to farmland conversion, while in the non-urban planning area, built-up land sprawl in the form of leapfrogging development resulted in farmland areal declines and fragmentation. We argue that integrating land use plans across urban and non-urban planning areas is a basic requirement for land use planning and management.

  18. Influences of Different Land Use Spatial Control Schemes on Farmland Conversion and Urban Development

    PubMed Central

    Zhou, Min; Tan, Shukui; Zhang, Lu

    2015-01-01

    Land use planning is officially implemented as a tool to control urban development and protect farmland, yet its impact on land use change remains untested in China. Using a case study of the Hang-Jia-Hu region, the main objective of this paper was to investigate the influence of different land use spatial control schemes on farmland conversion and urban development. Farmland conversion and urban development patterns in the urban planning area and the non-urban planning area were compared using remote sensing, geographical information systems, and landscape metrics. Results indicated that farmland conversion in the non-urban planning area was more intensive than in the urban planning area, and that farmland patterns there were more fragmented. Built-up land patterns in the non-urban planning area showed a trend of aggregation, while those in the urban planning area showed a dual trend of fragmentation and aggregation. Existing built-up areas had less influence on built-up land sprawl in the non-urban planning area than in the urban planning area. Built-up land sprawl in the form of continuous development in the urban planning area led to farmland conversion, while in the non-urban planning area, built-up land sprawl in the form of leapfrogging development resulted in farmland areal declines and fragmentation. We argue that integrating land use plans across urban and non-urban planning areas is a basic requirement for land use planning and management. PMID:25915897

  19. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component of aerospace vehicle design, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single- and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that, in order to take full advantage of the sparsity in the problem, it is vital to parallelize both the nonlinear analysis evaluations and the derivative computations. The constraint aggregation results revealed a significant numerical challenge due to the difficulty of achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
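
    The abstract does not say which aggregation function was used; a common choice in trajectory optimization is the Kreisselmeier-Steinhauser (KS) function, sketched below with the usual max-shift for numerical stability. The convergence difficulty the authors report is consistent with the conservatism of such smooth maxima at finite rho.

      import numpy as np

      def ks_aggregate(g, rho=50.0):
          # g: array of constraint values with feasibility meaning g_i <= 0.
          # Returns a smooth, conservative upper bound on max(g); larger rho
          # tightens the bound at the cost of worse numerical conditioning.
          g = np.asarray(g, dtype=float)
          g_max = g.max()
          return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho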

  20. A framework for the probabilistic analysis of meteotsunamis

    USGS Publications Warehouse

    Geist, Eric L.; ten Brink, Uri S.; Gove, Matthew D.

    2014-01-01

    A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling are performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, in which a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregating the results of the Monte Carlo scheme yields a meteotsunami hazard curve that plots the annualized rate of exceedance against maximum event amplitude for a particular location along the coast. Using multiple synthetic catalogs, resampled from the parent parameter distributions, yields mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
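
    A toy version of the Monte Carlo aggregation, with made-up parameter distributions and a placeholder amplitude model standing in for the hydrodynamic simulation; only the structure (sample a catalog, compute amplitudes, count exceedances) follows the framework described above.

      import numpy as np

      rng = np.random.default_rng(0)

      def hazard_curve(rate_per_year, n_events=100_000,
                       levels=np.linspace(0.05, 1.0, 40)):
          # Synthetic catalog of squall-line parameters (illustrative
          # distributions, not those fitted to the ASOS record)
          dp = rng.lognormal(mean=0.5, sigma=0.4, size=n_events)  # pressure jump, hPa
          speed = rng.uniform(15.0, 35.0, size=n_events)          # disturbance speed, m/s
          # Placeholder amplitude model: grows with the pressure jump and
          # peaks near an assumed long-wave resonance speed
          amp = 0.05 * dp * (1.0 + 4.0 * np.exp(-((speed - 28.0) / 6.0) ** 2))
          # Annualized exceedance rate at each amplitude level
          return levels, rate_per_year * (amp[None, :] >= levels[:, None]).mean(axis=1)

      levels, rates = hazard_curve(rate_per_year=5.0)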

  1. Interaction of two cylinders in shear flow at low Wi

    NASA Astrophysics Data System (ADS)

    Brown, M. J.; Leal, L. G.

    2001-11-01

    Experiments [Lyon et al., 2001; Boussima et al., 1996; Michelle et al., 1977] have shown that non-Brownian, non-colloidal, charge-neutral particles, when suspended in viscoelastic media and subjected to shear, will aggregate and flow-align above a critical shear rate Wi ~ O(10). Giesekus [1978] proposed a mechanism for aggregation based on the attractive hoop thrusts about two particles in viscoelastic flow. This pairwise mechanism of attraction is borne out in studies of sedimenting particles [Feng & Joseph, 1996; Joseph et al., 1994] and seems a valid explanation for the aggregation observed in sedimenting suspensions over all Wi > Re [Joseph et al., 1994; Phillips, 1996]. Consideration of the flow around two particles in shear would lead one to expect attraction by this hoop-thrust mechanism as well. However, it remains unclear why shear-induced aggregation occurs only above a critical Wi. A first step in understanding this criticality is to establish the low-Wi behavior of two particles in shear. In this talk, we report on the interaction of two freely mobile cylinders as predicted by an n-th-order fluid computation.

  2. 40 CFR 721.537 - Organosilane ester.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the substance beyond the aggregate production volume limit, unless that person conducts this study on.... Scientifically valid means that the study was conducted according to: (1) The test guidelines specified in... weeks prior to exceeding the applicable production volume limit. The final report shall contain the...

  3. A secure and robust password-based remote user authentication scheme using smart cards for the integrated EPR information system.

    PubMed

    Das, Ashok Kumar

    2015-03-01

    An integrated EPR (Electronic Patient Record) information system provides medical institutions and academia with detailed patient information for making corrective and clinical decisions in order to maintain and analyze patients' health. In such a system, illegal access must be restricted and information theft during transmission over the insecure Internet must be prevented. Lee et al. proposed an efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Their scheme is very efficient because it uses only one-way hash functions and bitwise exclusive-or (XOR) operations. However, in this paper we show that, despite its efficiency, their scheme has three security weaknesses: (1) it has design flaws in the password change phase, (2) it fails to protect against the privileged insider attack, and (3) it lacks formal security verification. We also find that another recently proposed scheme by Wen has the same security drawbacks as Lee et al.'s scheme. In order to remedy these security weaknesses, we propose a secure and efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Our scheme is as efficient as Lee et al.'s and Wen's schemes, as it uses only one-way hash functions and bitwise XOR operations. Through security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks.
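
    To make the hash-and-XOR style concrete, here is a hypothetical smart-card message flow in that spirit; it is not the protocol proposed in the paper, just an illustration of why such schemes are cheap to compute.

      import hashlib, os

      def h(*parts):
          # One-way hash (SHA-256 standing in for the scheme's hash function)
          return hashlib.sha256(b"".join(parts)).digest()

      def xor(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      # Registration (hypothetical): the card stores a value that hides a
      # server-derived secret behind the user's password
      server_secret = os.urandom(32)
      ID, PW = b"patient42", b"correct-horse"
      A = xor(h(ID, PW), h(server_secret, ID))

      # Login: the card strips the password mask and binds the result to a
      # fresh nonce, so M1 = h(h(server_secret || ID) || nonce)
      nonce = os.urandom(16)
      M1 = h(xor(A, h(ID, PW)), nonce)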

  4. Active module identification in intracellular networks using a memetic algorithm with a new binary decoding scheme.

    PubMed

    Li, Dong; Pan, Zhisong; Hu, Guyu; Zhu, Zexuan; He, Shan

    2017-03-14

    Active modules are connected regions in a biological network which show significant changes in expression under particular conditions. The identification of such modules is important since it may reveal the regulatory and signaling mechanisms associated with a given cellular response. In this paper, we propose a novel active module identification algorithm based on a memetic algorithm. We propose a novel encoding/decoding scheme to ensure the connectedness of the identified active modules. Based on this scheme, we also design and incorporate a local search operator into the memetic algorithm to improve its performance. The effectiveness of the proposed algorithm is validated on both small and large protein interaction networks.
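
    A minimal sketch of a connectedness-preserving decoding step, assuming networkx is available: the selected nodes are reduced to their largest connected component so that every decoded individual is a valid module. The paper's actual encoding/decoding scheme may differ.

      import networkx as nx

      def decode(bits, graph):
          # bits: 0/1 selection over graph.nodes (in iteration order).
          # Keep only the largest connected component of the induced
          # subgraph, guaranteeing a connected active module.
          selected = [n for n, b in zip(graph.nodes, bits) if b]
          sub = graph.subgraph(selected)
          if sub.number_of_nodes() == 0:
              return set()
          return max(nx.connected_components(sub), key=len)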

  5. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault, and Lyapunov stability analysis shows that the reconstruction error converges to zero in finite time. Precise and fast reconstruction performance is thus provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the type of faults, or their time profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  6. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, a computer-generated hologram containing the information of the three primitive images is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted given the fractional orders and the MLCA rules. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
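
    For orientation, a plain Gerchberg-Saxton loop with ordinary FFTs is sketched below; the paper's modification replaces these with fractional Fourier transforms of three different orders, one per primitive image, which is not reproduced here.

      import numpy as np

      def gs_hologram(target_amp, n_iter=100, seed=1):
          # Iterate between the hologram and image planes, enforcing a
          # phase-only hologram and the target image amplitude
          rng = np.random.default_rng(seed)
          field = target_amp * np.exp(2j * np.pi * rng.random(target_amp.shape))
          for _ in range(n_iter):
              holo = np.exp(1j * np.angle(np.fft.ifft2(field)))  # phase-only constraint
              field = np.fft.fft2(holo)
              field = target_amp * np.exp(1j * np.angle(field))  # amplitude constraint
          return np.angle(holo)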

  7. Longitudinal phase-space coating of beam in a storage ring

    NASA Astrophysics Data System (ADS)

    Bhat, C. M.

    2014-06-01

    In this Letter, I report on a novel scheme for beam stacking without any beam emittance dilution, using a barrier rf system in synchrotrons. The general principle of the scheme, called longitudinal phase-space coating, its validation via multi-particle beam dynamics simulations applied to the Fermilab Recycler, and its experimental demonstration are presented. In addition, it is shown and illustrated that the rf gymnastics involved in this scheme can be used to measure the incoherent synchrotron tune spectrum of the beam in barrier buckets and to produce a clean hollow beam in longitudinal phase space. The method of beam stacking in synchrotrons presented here is the first of its kind.

  8. A Derivation of the Analytical Relationship between the Projected Albedo-Area Product of a Space Object and its Aggregate Photometric Measurements

    DTIC Science & Technology

    2013-09-01

    model, they are, for all intents and purposes, simply unit-less linear weights. Although this equation is technically valid for a Lambertian... modeled as a single flat facet, the same model cannot be assumed equally valid for the body. The body, after all, is a complex, three-dimensional... facet (termed the "body") and the solar tracking parts of the object as another facet (termed the solar panels). This comprises the two-facet model

  9. Magnetic Resonance Characterization of Hepatic Storage Iron in Transfusional Iron Overload

    PubMed Central

    Tang, Haiying; Jensen, Jens H.; Sammet, Christina L.; Sheth, Sujit; Swaminathan, Srirama V.; Hultman, Kristi; Kim, Daniel; Wu, Ed X.; Brown, Truman R.; Brittenham, Gary M.

    2013-01-01

    Purpose: To quantify the two principal forms of hepatic storage iron, diffuse, soluble iron (primarily ferritin) and aggregated, insoluble iron (primarily hemosiderin), using a new MRI method in patients with transfusional iron overload. Materials and Methods: Six healthy volunteers and twenty patients with transfusion-dependent thalassemia syndromes and iron overload were examined. Ferritin- and hemosiderin-like iron were determined based on the measurement of two distinct relaxation parameters: the "reduced" transverse relaxation rate, RR2, and the "aggregation index," A, using three sets of Carr-Purcell-Meiboom-Gill (CPMG) datasets with different interecho spacings. Agarose phantoms, simulating the relaxation and susceptibility properties of tissue with different concentrations of dispersed (ferritin-like) and aggregated (hemosiderin-like) iron, were employed for validation. Results: Both phantom and in vivo human data confirmed that transverse relaxation components associated with the dispersed and aggregated iron could be separated using the two-parameter (RR2, A) method. The MRI-determined total hepatic storage iron was highly correlated (r = 0.95) with measurements derived from biopsy or biosusceptometry. As total hepatic storage iron increased, the proportion stored as aggregated iron became greater. Conclusion: This method provides a new means for non-invasive MRI determination of the partition of hepatic storage iron between ferritin and hemosiderin in iron overload disorders. PMID:23720394

  10. MR characterization of hepatic storage iron in transfusional iron overload.

    PubMed

    Tang, Haiying; Jensen, Jens H; Sammet, Christina L; Sheth, Sujit; Swaminathan, Srirama V; Hultman, Kristi; Kim, Daniel; Wu, Ed X; Brown, Truman R; Brittenham, Gary M

    2014-02-01

    To quantify the two principal forms of hepatic storage iron, diffuse, soluble iron (primarily ferritin), and aggregated, insoluble iron (primarily hemosiderin) using a new MRI method in patients with transfusional iron overload. Six healthy volunteers and 20 patients with transfusion-dependent thalassemia syndromes and iron overload were examined. Ferritin- and hemosiderin-like iron were determined based on the measurement of two distinct relaxation parameters: the "reduced" transverse relaxation rate, RR2, and the "aggregation index," A, using three sets of Carr-Purcell-Meiboom-Gill (CPMG) datasets with different interecho spacings. Agarose phantoms, simulating the relaxation and susceptibility properties of tissue with different concentrations of dispersed (ferritin-like) and aggregated (hemosiderin-like) iron, were used for validation. Both phantom and in vivo human data confirmed that transverse relaxation components associated with the dispersed and aggregated iron could be separated using the two-parameter (RR2, A) method. The MRI-determined total hepatic storage iron was highly correlated (r = 0.95) with measurements derived from biopsy or biosusceptometry. As total hepatic storage iron increased, the proportion stored as aggregated iron became greater. This method provides a new means for noninvasive MRI determination of the partition of hepatic storage iron between ferritin and hemosiderin in iron overload disorders. Copyright © 2013 Wiley Periodicals, Inc.
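
    Schematically, the two parameters can be thought of as the intercept and the spacing-dependent part of the apparent relaxation rate across the three CPMG acquisitions. The linear dependence on interecho spacing used below is an illustrative assumption; the published model's exact functional form may differ.

      import numpy as np

      def two_parameter_fit(tau_ms, r2_apparent):
          # Regress apparent R2 (1/s) against interecho spacing (ms): the
          # intercept estimates the spacing-independent "reduced" rate RR2
          # (ferritin-like iron); the slope term plays the role of the
          # aggregation index A (hemosiderin-like iron).
          slope, intercept = np.polyfit(np.asarray(tau_ms),
                                        np.asarray(r2_apparent), 1)
          return {"RR2": intercept, "A": slope}

      # Three CPMG datasets with different interecho spacings (made-up values)
      print(two_parameter_fit([2.0, 6.0, 12.0], [35.0, 47.0, 65.0]))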

  11. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is customary to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or sliding windows. However, it is not straightforward to choose grouping criteria, and the results may depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined regions, it might be a practical alternative to conventionally used methods of aggregating p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
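
    A toy version of grouping-free aggregation in this spirit: each p-value contributes a score, and a Lindley-type running sum forms "peaks" over runs of small p-values, whose heights could then be assessed by resampling. The scoring rule is an illustrative choice, not necessarily the paper's.

      import numpy as np

      def landscape(pvals, p0=0.05):
          # Score is positive iff p < p0; the reflected cumulative sum
          # L[i] = max(0, L[i-1] + s[i]) rises over runs of small p-values
          s = np.log(p0 / np.asarray(pvals, dtype=float))
          L = np.zeros(len(s))
          for i in range(len(s)):
              L[i] = max(0.0, (L[i - 1] if i else 0.0) + s[i])
          return L

      print(landscape([0.5, 0.01, 0.003, 0.2, 0.8, 0.04]))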

  12. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, a linear programming model was applied. The decision variables cover general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model considers a manufacturer with up to N product types over a total planning horizon of T periods. Results: An industrial case study from Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The model is suited to stable environmental conditions. Overall, the proven linear programming model can be recommended for adaptation to production planning in the Malaysian flat panel display industry.
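
    A pared-down version of such a model, assuming scipy is available: a single product, regular and overtime production, and inventory carrying cost; the subcontracting, backorder, and labour-level terms of the full model are omitted, and all costs are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      def aggregate_plan(demand, cap, c_reg=10.0, c_ot=16.0, c_inv=2.0):
          # x = [p_1..p_T, o_1..o_T, i_1..i_T]: regular production,
          # overtime, and end-of-period inventory for each period
          T = len(demand)
          c = np.r_[np.full(T, c_reg), np.full(T, c_ot), np.full(T, c_inv)]
          A = np.zeros((T, 3 * T))
          for t in range(T):
              A[t, t] = A[t, T + t] = 1.0        # p_t + o_t
              A[t, 2 * T + t] = -1.0             # - i_t
              if t:
                  A[t, 2 * T + t - 1] = 1.0      # + i_{t-1}: balance = demand
          bounds = [(0, cap)] * T + [(0, None)] * (2 * T)
          res = linprog(c, A_eq=A, b_eq=np.asarray(demand, float), bounds=bounds)
          return res.x[:T], res.x[T:2 * T], res.x[2 * T:]

      production, overtime, inventory = aggregate_plan([100, 140, 90, 160], cap=120)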

  13. Authenticated sensor interface device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Jody Rustyn; Poland, Richard W.

    A system and method for the secure storage and transmission of data is provided. A data aggregate device can be configured to receive secure data from a data source, such as a sensor, and encrypt the secure data using a suitable encryption technique, such as a shared private key technique, a public key encryption technique, a Diffie-Hellman key exchange technique, or other suitable encryption technique. The encrypted secure data can be provided from the data aggregate device to different remote devices over a plurality of segregated or isolated data paths. Each of the isolated data paths can include an optoisolator that is configured to provide one-way transmission of the encrypted secure data from the data aggregate device over the isolated data path. External data can be received through a secure data filter which, by validating the external data, allows for key exchange and other various adjustments from an external source.

  14. Authenticated sensor interface device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Jody Rustyn; Poland, Richard W.

    A system and method for the secure storage and transmission of data is provided. A data aggregate device can be configured to receive secure data from a data source, such as a sensor, and encrypt the secure data using a suitable encryption technique, such as a shared private key technique, a public key encryption technique, a Diffie-Hellman key exchange technique, or other suitable encryption technique. The encrypted secure data can be provided from the data aggregate device to different remote devices over a plurality of segregated or isolated data paths. Each of the isolated data paths can include an optoisolator that is configured to provide one-way transmission of the encrypted secure data from the data aggregate device over the isolated data path. External data can be received through a secure data filter which, by validating the external data, allows for key exchange and other various adjustments from an external source.
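
    A software-only sketch of the encrypt-then-fan-out pattern, using Fernet from the cryptography package as a stand-in for whichever "suitable encryption technique" is chosen; the one-way optoisolated paths are modelled simply as write-only callables.

      from cryptography.fernet import Fernet

      class DataAggregateDevice:
          def __init__(self, tx_paths):
              # Shared-private-key variant; key distribution is not shown
              self.cipher = Fernet(Fernet.generate_key())
              self.tx_paths = tx_paths   # write-only senders behind optoisolators

          def forward(self, sensor_reading: bytes):
              token = self.cipher.encrypt(sensor_reading)
              for tx in self.tx_paths:   # same ciphertext down every isolated path
                  tx(token)

      device = DataAggregateDevice(tx_paths=[print])   # stand-in transmitter
      device.forward(b"temp=291.4K")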

  15. Decision Making Based on Fuzzy Aggregation Operators for Medical Diagnosis from Dental X-ray images.

    PubMed

    Ngan, Tran Thi; Tuan, Tran Manh; Son, Le Hoang; Minh, Nguyen Hai; Dey, Nilanjan

    2016-12-01

    Medical diagnosis is an important step in dental treatment, assisting clinicians in making decisions about a patient's diseases. The accuracy of medical diagnosis, which is strongly influenced by the clinician's experience and knowledge, plays an important role in effective treatment therapies. In this paper, we propose a novel decision making method based on fuzzy aggregation operators for medical diagnosis from dental X-ray images. It first divides a dental X-ray image into segments and identifies the corresponding diseases using a classification method called Affinity Propagation Clustering (APC+). The most likely disease is then found using fuzzy aggregation operators. Experimental validation on real dental datasets from Hanoi Medical University Hospital, Vietnam, showed the superiority of the proposed method over relevant methods in terms of accuracy.
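
    As one concrete example of a fuzzy aggregation operator usable in this final step, an ordered weighted averaging (OWA) operator is sketched below; the paper may use a different operator family, and the membership values and weights here are invented.

      import numpy as np

      def owa(values, weights):
          # OWA weights the *sorted* membership degrees, largest first
          return float(np.dot(np.sort(values)[::-1], weights))

      def diagnose(membership, weights):
          # membership: disease -> per-segment fuzzy memberships in [0, 1];
          # return the disease with the highest aggregated score
          return max(membership, key=lambda d: owa(membership[d], weights))

      scores = {"caries": [0.8, 0.4, 0.6], "abscess": [0.3, 0.9, 0.2]}
      print(diagnose(scores, weights=[0.5, 0.3, 0.2]))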

  16. Biosynthesis and computational analysis of amine-ended dual thiol ligand functionalized gold nanoparticles for conventional spectroscopy detection of melamine.

    PubMed

    Anand, K; Singh, Thishana; Madhumitha, G; Phulukdaree, A; Gengan, Robert M; Chuturgoon, A A

    2017-04-01

    The bio-synthesized DTAuNPs have an average size of 21 nm. The extent of aggregation depends on the concentration of melamine, which was validated by UV-vis spectra, and a visual method for melamine detection was developed. The key observation in this method was the color change of the DTAuNPs from red to purple, caused by the aggregation of the ligand-capped gold nanoparticles induced by melamine; the color change results from hydrogen bonding between the nanoparticles and melamine. Conventional UV-vis and FTIR spectroscopy, together with DFT studies of the electron density of the ligand, were performed using computational methods, yielding theoretical and experimental data for the energy transitions and molar extinction coefficients of the ligands studied. Further, the ligand-capped gold nanoparticles were assessed for cytotoxicity against A549 cells; a significant decrease in cell viability was noted in cells treated with 50 μg/mL DTAu, 4-ATP, and AXT at 2 h (85% and 66%) and 6 h (83% and 36%), respectively (p<0.01). Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Implementing forward recovery using checkpointing in distributed systems

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1991-01-01

    The paper describes the implementation of a forward recovery scheme using checkpoints and replicated tasks. The implementation is based on the concept of lookahead execution and rollback validation. In the experiment, two tasks are selected for normal execution and one for rollback validation. It is shown that the recovery strategy achieves nearly error-free execution time with an average redundancy lower than that of TMR.
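
    A minimal sketch of the two-plus-one task arrangement: two replicas compute the next checkpoint and, on agreement, execution proceeds immediately (lookahead); on disagreement a third run arbitrates (rollback validation). The names and the majority rule are assumptions for illustration, not the paper's implementation.

      from concurrent.futures import ThreadPoolExecutor

      def forward_recovery(step, state):
          # Run two replicas of the task in parallel
          with ThreadPoolExecutor(max_workers=2) as pool:
              results = list(pool.map(step, [state, state]))
          if results[0] == results[1]:
              return results[0]            # agreement: commit and look ahead
          # Disagreement: a validation replica arbitrates by majority vote
          candidates = results + [step(state)]
          for c in candidates:
              if candidates.count(c) >= 2:
                  return c
          raise RuntimeError("no majority: roll back to previous checkpoint")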

  18. Field-deployable colorimetric biosensor system for the rapid detection of pathogenic organisms

    NASA Astrophysics Data System (ADS)

    Duy, Janice

    The rapid identification of pathogenic organisms is necessary for recognizing and managing human and environmental health risks. Numerous detection schemes are available, but most are difficult to employ in non-laboratory settings due to their need for bulky, specialized equipment, multiple reagents, or highly trained personnel. To address this problem, a rapid, field-compatible biosensor system based on the colorimetric detection of nucleic acid hybrids was developed. Peptide nucleic acid (PNA) probes were used to capture ribosomal RNA sequences from environmental samples. Non-target nucleic acids, including single-base mismatches flanked by adenines and uracils, were removed with a micrococcal nuclease digestion step. Matched PNA-RNA hybrids remained intact and were indicated by the cyanine dye DiSC2(5). PNA-containing duplexes function as templates for the aggregation of DiSC2(5), visualized as a change in solution color from blue to purple. This transition can be measured as an increase in the solution absorbance at 540 nm (dye aggregate) at the expense of the dye monomer peak at 650 nm. These concomitant spectral changes were used to calculate a "hybridization signal" using the ratio A_aggregate/A_monomer ≈ A_540/A_650. Testing with pathogenic environmental samples was accomplished using two model organisms: the harmful-algal-bloom-causing dinoflagellate Alexandrium species, and the potato-wart-disease-causing fungus Synchytrium endobioticum. In both cases, the colorimetric approach was able to distinguish the targets with sensitivities rivaling those of established techniques, but with the advantages of decreased hands-on time and cost. Assay fieldability was tested with a portable colorimeter designed to quantify the dye-indicated hybridization signal and assembled from commercially available components. Side-by-side testing revealed no difference in the sensing performance of the colorimeter compared to a laboratory spectrophotometer (Pearson's r = 0.99935). Assay results were obtained within 15 minutes, with a limit of detection down to 10^-17 mole. This quick, inexpensive, and robust system has the potential to replace laborious pathogen identification schemes in field environments, and is easily adapted for the detection of different organisms.
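
    The hybridization signal itself is just the aggregate-to-monomer absorbance ratio; a one-line helper makes the arithmetic explicit (the readings and any decision threshold would come from the instrument and negative controls, not from this sketch).

      def hybridization_signal(a540, a650):
          # A_aggregate / A_monomer, approximated by A_540 / A_650
          return a540 / a650

      print(hybridization_signal(0.42, 0.21))   # illustrative absorbance readings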

  19. An Investigation of Aggregation in Synergistic Solvent Extraction Systems

    NASA Astrophysics Data System (ADS)

    Jackson, Andy Steven

    With an increasing focus on anthropogenic climate change, nuclear reactors present an attractive option for base-load power generation with regard to air pollution and carbon emissions, especially when compared with traditional fossil-fuel-based options. However, used nuclear fuel (UNF) is highly radiotoxic and contains minor actinides (americium and curium) which remain more radiotoxic than natural uranium ore for hundreds of thousands of years, presenting a challenge for long-term storage. Advanced nuclear fuel recycling can reduce the required storage time to thousands of years by removing the highly radiotoxic minor actinides. Many advanced separation schemes have been proposed to achieve this separation, but none have been implemented to date. A key feature of many proposed schemes is the use of more than one extraction reagent in a single extraction phase, which can lead to the phenomenon known as "synergism," in which the extraction efficiency of a combination of reagents is greater than that of the individual extractants alone. This feature is not well understood for many systems, and a comprehensive picture of the mechanism behind synergism does not exist. Several mechanisms have been proposed for synergism, though none have been used to model multiple extraction systems. This work examines several proposed advanced extractant combinations which exhibit synergism: 2-bromodecanoic acid (BDA) with 2,2':6',2"-terpyridine (TERPY), tri-n-butylphosphine oxide (TPBO) with 2-thenoyltrifluoroacetone (HTTA), and dinonylnaphthalene sulfonic acid (HDNNS) with 5,8-diethyl-7-hydroxy-dodecan-6-oxime (LIX). We examine two proposed synergistic mechanisms, a reverse-micellar catalyzed extraction model and a mixed complex formation model, and attempt to verify their ability to predict the extraction behavior of the chosen systems. Neither was able to effectively predict the synergistic behavior of the systems. We further examine these systems for the presence of large reverse-micellar aggregates and for thermodynamic signatures of aggregation. Behaviors differed widely from system to system, suggesting that more than one mechanism may be responsible for similar observed extraction trends.

  20. Cryptanalysis and Enhancement of Anonymity Preserving Remote User Mutual Authentication and Session Key Agreement Scheme for E-Health Care Systems.

    PubMed

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Li, Xiong

    2015-11-01

    E-health care systems employ IT infrastructure to maximize the utilization of health care resources while providing flexible opportunities to remote patients. Transmission of medical data over public networks is therefore necessary in a health care system, and patient authentication, including secure data transmission, is a critical issue. Although several user authentication schemes for accessing remote services are available, security analysis shows that none of them is free from relevant security attacks. We reviewed Das et al.'s scheme and demonstrated that it lacks proper protection against several security attacks, including loss of user anonymity, the off-line password guessing attack, the smart card theft attack, the user impersonation attack, the server impersonation attack, and the session key disclosure attack. In order to overcome these security pitfalls, this paper proposes an anonymity-preserving remote patient authentication scheme usable in E-health care systems. We validated the security of the proposed scheme using BAN logic, which ensures secure mutual authentication and session key agreement. We also present experimental results obtained with the AVISPA software, which show that our scheme is secure under the OFMC and CL-AtSe models. Moreover, resilience against relevant security attacks has been proved through both formal and informal security analysis. A performance analysis and a comparison with other schemes are also given; the proposed scheme overcomes the security drawbacks of Das et al.'s scheme and additionally achieves extra security requirements.
