A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System
ERIC Educational Resources Information Center
Chim, Hung; Deng, Xiaotie
2008-01-01
We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…
USDA-ARS?s Scientific Manuscript database
A distributed biosphere hydrological model, the so called water and energy budget-based distributed hydrological model (WEB-DHM), has been developed by fully coupling a biosphere scheme (SiB2) with a geomorphology-based hydrological model (GBHM). SiB2 describes the transfer of turbulent fluxes (ener...
Distributed Pheromone-Based Swarming Control of Unmanned Air and Ground Vehicles for RSTA
2008-03-20
Forthcoming in Proceedings of SPIE Defense & Security Conference, March 2008, Orlando, FL. Describes recent advances in a fully distributed digital pheromone algorithm that has demonstrated its effectiveness in managing the complexity of … onboard digital pheromone responding to the needs of the automatic target recognition algorithms. UAVs and UGVs are controlled by the same pheromone algorithm.
Methodology and application of combined watershed and ground-water models in Kansas
Sophocleous, M.; Perkins, S.P.
2000-01-01
Increased irrigation in Kansas and other regions during the last several decades has caused serious water depletion, making the development of comprehensive strategies and tools to resolve such problems increasingly important. This paper makes the case for an intermediate complexity, quasi-distributed, comprehensive, large-watershed model, which falls between the fully distributed, physically based hydrological modeling system of the type of the SHE model and the lumped, conceptual rainfall-runoff modeling system of the type of the Stanford watershed model. This is achieved by integrating the quasi-distributed watershed model SWAT with the fully-distributed ground-water model MODFLOW. The advantage of this approach is the appreciably smaller input data requirements and the use of readily available data (compared to the fully distributed, physically based models), the statistical handling of watershed heterogeneities by employing the hydrologic-response-unit concept, and the significantly increased flexibility in handling stream-aquifer interactions, distributed well withdrawals, and multiple land uses. The mechanics of integrating the component watershed and ground-water models are outlined, and three real-world management applications of the integrated model from Kansas are briefly presented. Three different aspects of the integrated model are emphasized: (1) management applications of a Decision Support System for the integrated model (Rattlesnake Creek subbasin); (2) alternative conceptual models of spatial heterogeneity related to the presence or absence of an underlying aquifer with shallow or deep water table (Lower Republican River basin); and (3) the general nature of the integrated model linkage by employing a watershed simulator other than SWAT (Wet Walnut Creek basin). 
These applications demonstrate the practicality and versatility of this relatively simple and conceptually clear approach, making public acceptance of the integrated watershed modeling system much easier. This approach also enhances model calibration and thus the reliability of model results. (C) 2000 Elsevier Science B.V.
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm is a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy through the inclusion of flexible privacy-preserving mechanisms for similarity computation. Experimental results and comparisons on a wide range of standard semi-supervised benchmarks validate our proposal.
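As a minimal sketch of the adjacency-matrix ingredient of manifold regularization (assuming a standard RBF similarity; the paper's distributed, privacy-preserving completion step is not reproduced here, and the kernel width is an illustrative choice):

```python
import numpy as np

def rbf_adjacency(X, sigma=1.0):
    """Dense RBF adjacency matrix W[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    # squared pairwise distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W

# Two nearby patterns and one far away: W[0, 1] should dominate W[0, 2]
X = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0]])
W = rbf_adjacency(X)
```

In a distributed setting each node would hold only some rows of X, which is what motivates the matrix-completion step described in the abstract.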
A new taxonomy for distributed computer systems based upon operating system structure
NASA Technical Reports Server (NTRS)
Foudriat, E. C.
1985-01-01
Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.
NASA Astrophysics Data System (ADS)
Beck, Jeffrey; Bos, Jeremy P.
2017-05-01
We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
Stream-wise distribution of skin-friction drag reduction on a flat plate with bubble injection
NASA Astrophysics Data System (ADS)
Qin, Shijie; Chu, Ning; Yao, Yan; Liu, Jingting; Huang, Bin; Wu, Dazhuan
2017-03-01
To investigate the stream-wise distribution of skin-friction drag reduction on a flat plate with bubble injection, both experiments and simulations of bubble drag reduction (BDR) have been conducted in this paper. Drag reductions at various flow speeds and air injection rates have been tested in cavitation tunnel experiments. Visualization of the bubble flow pattern is performed synchronously. The computational fluid dynamics (CFD) method, in the framework of Eulerian-Eulerian two-fluid modeling coupled with a population balance model (PBM), is used to simulate the bubbly flow along the flat plate. A wide range of bubble sizes, considering bubble breakup and coalescence, is modeled based on experimental bubble distribution images. Drag and lift forces are fully modeled based on applicable closure models. Both the predicted drag reductions and bubble distributions are in reasonable agreement with experimental results. The stream-wise distribution of BDR is revealed based on the CFD-PBM numerical results. In particular, four distinct regions with different BDR characteristics are identified and discussed for the first time in this study, and the thresholds between regions are extracted and discussed. A full understanding of the stream-wise distribution of BDR is necessary in order to establish a universal scaling law. Moreover, the mechanism of the stream-wise distribution of BDR is analysed based on the near-wall flow parameters. The local drag reduction is a direct result of the near-wall maximum void fraction, and the near-wall velocity gradient modified by the presence of bubbles is considered another important factor for bubble drag reduction.
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios, and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data: patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper enable traditional methods of model calibration to be replaced by innovative methods of model resolution alteration based on the spatial data variability and the scaling of flows in urban hydrology.
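The fractal tools mentioned above can be illustrated with a generic box-counting sketch for gridded data (a textbook method; the paper's specific fractal analysis of urban datasets may differ in detail):

```python
import numpy as np

def box_counting_dimension(grid, box_sizes):
    """Estimate the fractal dimension of a binary 2-D grid by box counting:
    count occupied boxes N(s) at each box size s, then fit
    log N(s) = -D log s + c by least squares."""
    counts = []
    for s in box_sizes:
        h, w = grid.shape
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if grid[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope = np.polyfit(np.log(box_sizes), np.log(counts), 1)[0]
    return -slope

# Sanity check: a fully occupied grid is space-filling, so D should be ~2
grid = np.ones((64, 64), dtype=bool)
D = box_counting_dimension(grid, [1, 2, 4, 8, 16])
```

A scale break in the log-log fit (a range of box sizes where the slope changes) is the kind of scale dependence the abstract refers to.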
Distributed computer taxonomy based on O/S structure
NASA Technical Reports Server (NTRS)
Foudriat, Edwin C.
1985-01-01
The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.
Towards the Development of a Unified Distributed Data System for L1 Spacecraft
NASA Technical Reports Server (NTRS)
Lazarus, Alan J.; Kasper, Justin C.
2005-01-01
The purpose of this grant, 'Towards the Development of a Unified Distributed Data System for L1 Spacecraft', is to take the initial steps towards the development of a data distribution mechanism for making in-situ measurements more easily accessible to the scientific community. Our obligations as subcontractors to this grant are to add our Faraday Cup plasma data to this initial study and to contribute to the design of a general data distribution system. The year 1 objectives of the overall project as stated in the GSFC proposal are: 1) Both the rsync and Perl based data exchange tools will be fully developed and tested in our mixed, Unix, VMS, Windows and Mac OS X data service environment. Based on the performance comparisons, one will be selected and fully deployed. Continuous data exchange between all L1 solar wind monitors initiated. 2) Data version metadata will be agreed upon, fully documented, and deployed on our data sites. 3) The first version of the data description rules, encoded in a XML Schema, will be finalized. 4) Preliminary set of library routines will be collected, documentation standards and formats agreed on, and desirable routines that have not been implemented identified and assigned. 5) ViSBARD test site implemented to independently validate data mirroring procedures. The specific MIT tasks over the duration of this project are the following: a) implement mirroring service for WIND plasma data b) participate in XML Schema development c) contribute toward routine library.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up, with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
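Treating the channel network as a directed graph toward the outlet, as the parallelization above does, can be sketched as a contributing-area accumulation over a toy network (node names and local areas are hypothetical, and the real partitioner balances these accumulated loads across processors):

```python
from collections import defaultdict

def upstream_areas(downstream, local_area):
    """Accumulate the contributing area for each channel node.
    `downstream[n]` is the node immediately downstream of n (None at the
    outlet); the channel network is treated as a directed tree."""
    children = defaultdict(list)
    for n, d in downstream.items():
        if d is not None:
            children[d].append(n)
    outlet = next(n for n, d in downstream.items() if d is None)
    # depth-first order from the outlet; reversing it visits leaves first
    stack, order = [outlet], []
    while stack:
        n = stack.pop()
        order.append(n)
        stack.extend(children[n])
    acc = dict(local_area)
    for n in reversed(order):
        for c in children[n]:
            acc[n] += acc[c]
    return acc

# Toy network: reaches a and b drain into c; c drains into outlet d
downstream = {"a": "c", "b": "c", "c": "d", "d": None}
areas = upstream_areas(downstream, {"a": 1.0, "b": 2.0, "c": 1.0, "d": 0.5})
```

Cutting such a tree at internal nodes yields sub-basins whose accumulated areas approximate per-processor workload.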
Validation of Distributed Soil Moisture: Airborne Polarimetric SAR vs. Ground-based Sensor Networks
NASA Astrophysics Data System (ADS)
Jagdhuber, T.; Kohling, M.; Hajnsek, I.; Montzka, C.; Papathanassiou, K. P.
2012-04-01
The knowledge of spatially distributed soil moisture is highly desirable for an enhanced hydrological modeling in terms of flood prevention and for yield optimization in combination with precision farming. Especially in mid-latitudes, the growing agricultural vegetation results in an increasing soil coverage along the crop cycle. For a remote sensing approach, this vegetation influence has to be separated from the soil contribution within the resolution cell to extract the actual soil moisture. Therefore a hybrid decomposition was developed for estimation of soil moisture under vegetation cover using fully polarimetric SAR data. The novel polarimetric decomposition combines a model-based decomposition, separating the volume component from the ground components, with an eigen-based decomposition of the two ground components into a surface and a dihedral scattering contribution. Hence, this hybrid decomposition, which is based on [1,2], establishes an innovative way to retrieve soil moisture under vegetation. The developed inversion algorithm for soil moisture under vegetation cover is applied on fully polarimetric data of the TERENO campaign, conducted in May and June 2011 for the Rur catchment within the Eifel/Lower Rhine Valley Observatory. The fully polarimetric SAR data were acquired in high spatial resolution (range: 1.92m, azimuth: 0.6m) by DLR's novel F-SAR sensor at L-band. The inverted soil moisture product from the airborne SAR data is validated with corresponding distributed ground measurements for a quality assessment of the developed algorithm. The in situ measurements were obtained on the one hand by mobile FDR probes from agricultural fields near the towns of Merzenhausen and Selhausen incorporating different crop types and on the other hand by distributed wireless sensor networks (SoilNet clusters) from a grassland test site (near the town of Rollesbroich) and from a forest stand (within the Wüstebach sub-catchment). 
Each SoilNet cluster incorporates around 150 wireless measuring devices on a grid of approximately 30 ha for distributed soil moisture sensing. Finally, the comparison of both distributed soil moisture products leads to a discussion of the potentials and limitations of obtaining soil moisture under vegetation cover with high-resolution, fully polarimetric SAR. [1] S.R. Cloude, Polarisation: applications in remote sensing. Oxford, Oxford University Press, 2010. [2] Jagdhuber, T., Hajnsek, I., Papathanassiou, K.P. and Bronstert, A.: A Hybrid Decomposition for Soil Moisture Estimation under Vegetation Cover Using Polarimetric SAR. Proc. of the 5th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA-ESRIN, Frascati, Italy, January 24-28, 2011, p. 1-6.
Design method of freeform light distribution lens for LED automotive headlamp based on DMD
NASA Astrophysics Data System (ADS)
Ma, Jianshe; Huang, Jianwei; Su, Ping; Cui, Yao
2018-01-01
We propose a new method to design a freeform light distribution lens for a light-emitting diode (LED) automotive headlamp based on a digital micromirror device (DMD). With a parallel optical path architecture, the exit pupil of the illuminating system is set at infinity, so the principal incident rays at the micromirrors of the DMD are parallel. The DMD is a high-speed digital optical reflection array; the function of the distribution lens is to distribute the emergent parallel rays from the DMD and produce a lighting pattern that fully complies with the national regulation GB 25991-2010. We use the DLP4500 to design the light distribution lens, mesh the target plane regulated by GB 25991-2010, and correlate the mesh grids with the active mirror array of the DLP4500. With these mapping relations and the refraction law, we can build the mathematical model and obtain the parameters of the freeform light distribution lens. We then import its parameters into the three-dimensional (3D) software CATIA to construct its 3D model. Ray tracing results using TracePro demonstrate that the illumination on the target plane is easily adjustable, and fully complies with the requirements of GB 25991-2010, by adjusting the exit brightness of the DMD. The theoretical optical efficiency of a light distribution lens designed using this method can be up to 92% without any auxiliary lens.
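The refraction law used in the mapping step can be sketched in its standard vector form (a textbook formula, not the authors' specific freeform construction; refractive indices here are illustrative):

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Vector form of Snell's law: refracted direction for an incident ray
    and surface normal (both normalized here), going from index n1 to n2."""
    i = incident / np.linalg.norm(incident)
    n = normal / np.linalg.norm(normal)
    mu = n1 / n2
    cos_i = -np.dot(n, i)          # angle between ray and inward normal
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * i + (mu * cos_i - cos_t) * n

# Normal incidence: the ray should pass straight through regardless of index
out = refract(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), 1.0, 1.5)
```

In a freeform design loop, this relation is inverted: given the desired outgoing direction toward a mesh cell on the target plane, one solves for the surface normal at each lens point.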
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
2015-06-22
We construct these schemes based on the Low-Diffusion-A and the Streamwise-Upwind-Petrov-Galerkin methodology, formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit…
Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems
ERIC Educational Resources Information Center
Ham, MyungJoo
2009-01-01
We propose market-based coordinated task allocation mechanisms, which allocate to robot agents complex tasks that require the synchronized and collaborative services of multiple robot agents, and an auditing mechanism, which ensures proper behavior of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-03-21
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated.
Micro-Power Sources Enabling Robotic Outpost Based Deep Space Exploration
NASA Technical Reports Server (NTRS)
West, W. C.; Whitacre, J. F.; Ratnakumar, B. V.; Brandon, E. J.; Studor, G. F.
2001-01-01
Robotic outpost based exploration represents a fundamental shift in mission design from conventional, single-spacecraft missions towards a distributed-risk approach with many miniaturized semi-autonomous robots and sensors. This approach can facilitate wide-area sampling and exploration, and may consist of a web of orbiters, landers, or penetrators. To meet the mass and volume constraints of deep space missions such as the Europa Ocean Science Station, the distributed units must be fully miniaturized to leverage the wide-area exploration approach. However, there is presently a dearth of available options for powering these miniaturized sensors and robots. This group is currently examining miniaturized, solid state batteries as candidates to meet the demand of applications requiring low power, mass, and volume micro-power sources. These applications may include powering microsensors, battery-backing rad-hard CMOS memory, and providing momentary chip back-up power. Additional information is contained in the original extended abstract.
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data separately, it not only exploits the advantages of frequency-domain analysis to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation on the grouped signals, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point can be effectively obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can realize accurate location of the climbing point while effectively filtering out outside interference noise from non-climbing behavior.
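The two signal-processing ingredients named above, the short-time zero-crossing rate and cross-correlation localization, can be sketched as follows (sampling rate, pulse shape, and effective light speed in the fiber are illustrative assumptions, not values from the paper):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    signs = np.signbit(frame)
    return np.mean(signs[1:] != signs[:-1])

def locate_by_xcorr(sig_a, sig_b, fs, v_fiber):
    """Estimate the disturbance position from the arrival-time difference
    between the two interferometer outputs via cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # delay of sig_a, in samples
    dt = lag / fs
    return 0.5 * dt * v_fiber                  # offset from the fiber midpoint

fs = 1e6                                       # 1 MHz sampling (assumed)
t = np.arange(1000) / fs
pulse = np.exp(-((t - 3e-4) / 2e-5) ** 2)      # disturbance at one output
delayed = np.roll(pulse, 40)                   # arrives 40 samples later
offset = locate_by_xcorr(delayed, pulse, fs, v_fiber=2e8)
```

Here a 40-sample delay at 1 MHz with v = 2e8 m/s maps to a 4 km offset; the zero-crossing rate would be computed per short frame to pre-select the most informative data group before this localization step.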
Distributed optical fiber vibration sensor based on spectrum analysis of Polarization-OTDR system.
Zhang, Ziyi; Bao, Xiaoyi
2008-07-07
A fully distributed optical fiber vibration sensor is demonstrated based on spectrum analysis of a Polarization-OTDR system. Without performing any data averaging, vibration disturbances up to 5 kHz are successfully demonstrated in a 1 km fiber link with 10 m spatial resolution. The FFT is performed at each spatial resolution cell; the relation of the disturbance at each frequency component versus location allows simultaneous detection of multiple events with the same or different frequency components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D; Schuemann, J; Dowdell, S
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) against a fully implemented proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical and geometrical complexity (air cavities and density heterogeneities) that makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both the gPMC and TOPAS methods were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between the TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a target gamma index passing rate of more than 99%, the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy MC code for a group of dosimetrically challenging patient cases.
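The gamma index comparison used above can be illustrated with a simplified 1-D version (the clinical analysis is 3-D and typically interpolates the evaluated dose; the profiles and the 2%/2 mm criteria here are illustrative):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dose_crit, dist_crit):
    """Simplified 1-D gamma index: for each reference point, the minimum
    over evaluated points of sqrt((dD/dose_crit)^2 + (dx/dist_crit)^2)."""
    gammas = np.empty(len(x))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / dose_crit   # dose difference term
        dx = (x - xi) / dist_crit           # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.linspace(0.0, 10.0, 101)             # positions in mm
ref = np.exp(-((x - 5.0) / 2.0) ** 2)       # reference dose profile
ev = np.exp(-((x - 5.1) / 2.0) ** 2)        # evaluated profile, shifted 0.1 mm
g = gamma_index_1d(ref, ev, x, dose_crit=0.02, dist_crit=2.0)
passing = np.mean(g <= 1.0)                 # fraction of points with gamma <= 1
```

A 0.1 mm shift is well inside the 2 mm distance criterion, so every point passes; a passing rate is reported exactly as in the abstract's "more than 99% of voxels" statements.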
Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale
NASA Astrophysics Data System (ADS)
Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-08-01
This study presents the simulation of hydrological processes and nutrient transport and turnover processes using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical and semi-distributed numerical model; and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based and fully distributed numerical model. This work shows that both models reproduce water and nitrate export at the watershed scale satisfactorily on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID model was the only one able to reproduce adequately the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model capable of reproducing the spatial variation of the nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This evidences the strength of fully distributed and physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.
Fully Packaged Carbon Nanotube Supercapacitors by Direct Ink Writing on Flexible Substrates.
Chen, Bolin; Jiang, Yizhou; Tang, Xiaohui; Pan, Yayue; Hu, Shan
2017-08-30
The ability to print fully packaged integrated energy storage components (e.g., supercapacitors) is of critical importance for practical applications of printed electronics. Due to the limited variety of printable materials, most studies on printed supercapacitors focus on printing the electrode materials but rarely the fully packaged cell. This work presents for the first time the printing of a fully packaged single-wall carbon nanotube-based supercapacitor with direct ink writing (DIW) technology. Enabled by the developed ink formula, DIW setup, and cell architecture, the whole printing process is mask free, transfer free, and alignment free, with precise and repeatable control of the spatial distribution of all constituent materials. Studies on cell design show that a wider electrode pattern and a narrower gap between electrodes lead to higher specific capacitance. The as-printed fully packaged supercapacitors have energy and power performances that are among the best of recently reported planar carbon-based supercapacitors that are only partially printed or nonprinted.
Imaging of zymogen granules in fully wet cells: evidence for restricted mechanism of granule growth.
Hammel, Ilan; Anaby, Debbie
2007-09-01
The introduction of wet SEM imaging technology permits electron microscopy of wet samples. Samples are placed in sealed specimen capsules and are insulated from the vacuum in the SEM chamber by an impermeable, electron-transparent membrane. The complete insulation of the sample from the vacuum allows direct imaging of fully hydrated, whole-mount tissue. In the current work, we demonstrate direct inspection of thick pancreatic tissue slices (above 400 μm). When scanning the pancreatic surface, the boundaries of intracellular features are seen directly. Thus no unfolding is required to ascertain the actual particle size distribution from the sizes of the sections. This method enabled us to investigate the true granule size distribution and confirm earlier studies showing improved conformity to a Poisson-like distribution, suggesting that homotypic granule growth results from a mechanism that favors the addition of a single unit granule to mature granules.
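The proposed unit-addition growth mechanism can be sketched as a toy simulation (granule and event counts are hypothetical): each fusion event adds one unit granule to a randomly chosen mature granule, so the number of added units per granule is approximately Poisson-distributed:

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_granules(n_granules, n_fusion_events):
    """Unit-addition growth: each event fuses one unit granule with a
    randomly chosen mature granule.  Sizes are in unit-granule equivalents,
    so added counts follow a binomial ~ Poisson(n_events / n_granules)."""
    sizes = np.ones(n_granules, dtype=int)   # every granule starts as one unit
    for _ in range(n_fusion_events):
        sizes[rng.integers(n_granules)] += 1
    return sizes

# 2000 granules, 6000 events: added units per granule ~ Poisson(3),
# so the mean size is exactly 1 + 3 = 4 and the variance is close to 3
sizes = grow_granules(n_granules=2000, n_fusion_events=6000)
```

A restricted mechanism like this gives the narrow, Poisson-like size spread the abstract contrasts with unrestricted (e.g., granule-granule) fusion, which would broaden the distribution.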
Structural optimization for joined-wing synthesis
NASA Technical Reports Server (NTRS)
Gallman, John W.; Kroo, Ilan M.
1992-01-01
The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
A fully traits-based approach to modeling global vegetation distribution.
van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M
2014-09-23
Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such an analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders-of-magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
Huang, Jie; Lan, Xinwei; Luo, Ming; Xiao, Hai
2014-07-28
This paper reports a spatially continuous distributed fiber optic sensing technique using optical carrier based microwave interferometry (OCMI), in which many optical interferometers with the same or different optical path differences are interrogated in the microwave domain and their locations can be unambiguously determined. The concept is demonstrated using cascaded weak optical reflectors along a single optical fiber, where any two arbitrary reflectors are paired to define a low-finesse Fabry-Perot interferometer. While spatially continuous (i.e., no dark zone), fully distributed strain measurement was used as an example to demonstrate the capability, the proposed concept may also be implemented on other types of waveguide or free-space interferometers and used for distributed measurement of various physical, chemical and biological quantities.
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-01-01
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated.
Knowledge-base browsing: an application of hybrid distributed/local connectionist networks
NASA Astrophysics Data System (ADS)
Samad, Tariq; Israel, Peggy
1990-08-01
We describe a knowledge base browser based on a connectionist (or neural network) architecture that employs both distributed and local representations. The distributed representations are used for input and output, thereby enabling associative, noise-tolerant interaction with the environment. Internally, all representations are fully local. This simplifies weight assignment and facilitates network configuration for specific applications. In our browser, concepts and relations in a knowledge base are represented using "microfeatures." The microfeatures can encode semantic attributes, structural features, contextual information, etc. Desired portions of the knowledge base can then be associatively retrieved based on a structured cue. An ordered list of partial matches is presented to the user for selection. Microfeatures can also be used as "bookmarks": they can be placed dynamically at appropriate points in the knowledge base and subsequently used as retrieval cues. A proof-of-concept system has been implemented for an internally developed Honeywell-proprietary knowledge acquisition tool.
Reply to ``Comment on `Quantum time-of-flight distribution for cold trapped atoms' ''
NASA Astrophysics Data System (ADS)
Ali, Md. Manirul; Home, Dipankar; Majumdar, A. S.; Pan, Alok K.
2008-02-01
In their comment, Gomes et al. [Phys. Rev. A 77, 026101 (2008)] have questioned the possibility of empirically testable differences existing between the semiclassical time-of-flight distribution for cold trapped atoms and a quantum distribution discussed by us recently [Ali et al., Phys. Rev. A 75, 042110 (2007)]. We argue that their criticism is based on a semiclassical treatment having restricted applicability for a particular trapping potential. Their claim does not preclude, in general, the possibility of differences between the semiclassical calculations and fully quantum results for the arrival time distribution of freely falling atoms.
Square and rectangular concrete columns confined by CFRP: Experimental and numerical investigation
NASA Astrophysics Data System (ADS)
Monti, G.; Nistico, N.
2008-05-01
The results of an experimental and theoretical investigation into the deformation behavior of CFRP-confined square and rectangular concrete columns under axial loads are presented. Three types of columns are considered: unwrapped; fully wrapped; and fully wrapped with L-shaped steel angles placed at the corners. A mechanical deformation model for them is proposed, based on a nonuniform distribution of the stresses caused by the confining device. The results given by the model are in good agreement with the experimental results obtained.
The topology of galaxy clustering.
NASA Astrophysics Data System (ADS)
Coles, P.; Plionis, M.
The authors discuss an objective method for quantifying the topology of the galaxy distribution using only projected galaxy counts. The method is a useful complement to fully three-dimensional studies of topology based on the genus, by virtue of the enormous projected data sets available. Applying the method to the Lick counts, they find no evidence for large-scale non-Gaussian behaviour, whereas the small-scale distribution is strongly non-Gaussian, with a shift in the meatball direction.
Churn-Resilient Replication Strategy for Peer-to-Peer Distributed Hash-Tables
NASA Astrophysics Data System (ADS)
Legtchenko, Sergey; Monnet, Sébastien; Sens, Pierre; Muller, Gilles
DHT-based P2P systems provide a fault-tolerant and scalable means to store data blocks in a fully distributed way. Unfortunately, recent studies have shown that if the connection/disconnection frequency is too high, data blocks may be lost. This is true for most current DHT-based system implementations. To avoid this problem, it is necessary to build highly efficient replication and maintenance mechanisms. In this paper, we study the effect of churn on an existing DHT-based P2P system such as DHash or PAST. We then propose solutions to enhance churn tolerance and evaluate them through discrete event simulations.
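A minimal sketch of the successor-list replication that DHash/PAST-style DHTs build on (illustrative only; the paper's churn-resilient strategy refines this baseline, and `ToyDHT` is a made-up name):

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Place a node or block identifier on the hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ToyDHT:
    """Minimal Chord-style ring: each block is replicated on the k
    successor nodes of its key hash (a sketch of the baseline scheme,
    not the paper's churn-aware variant)."""
    def __init__(self, nodes, k=3):
        self.k = k
        self.ring = sorted((h(n), n) for n in nodes)

    def replicas(self, block_id):
        """The k nodes clockwise from the block's position on the ring."""
        ids = [node_id for node_id, _ in self.ring]
        start = bisect_right(ids, h(block_id)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.k, len(self.ring)))]

    def fail(self, node):
        """Churn event: a node disconnects and leaves the ring."""
        self.ring = [(i, n) for i, n in self.ring if n != node]

dht = ToyDHT([f"node{i}" for i in range(8)], k=3)
before = dht.replicas("block42")
dht.fail(before[0])                 # primary replica holder departs
after = dht.replicas("block42")
print(before, after)
```

Note that when the primary replica holder departs, the two surviving replicas stay in place and maintenance only has to create one new copy; keeping that repair traffic low under high churn is exactly what replication strategies like the one studied here aim for.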
Construction of a Digital Learning Environment Based on Cloud Computing
ERIC Educational Resources Information Center
Ding, Jihong; Xiong, Caiping; Liu, Huazhong
2015-01-01
Constructing the digital learning environment for ubiquitous learning and asynchronous distributed learning has opened up immense amounts of concrete research. However, current digital learning environments do not fully fulfill the expectations on supporting interactive group learning, shared understanding and social construction of knowledge.…
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
NASA Technical Reports Server (NTRS)
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit from the model are discussed, and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
Two-dimensional distributed-phase-reference protocol for quantum key distribution
NASA Astrophysics Data System (ADS)
Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo
2016-12-01
Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol, in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.
A new type of simplified fuzzy rule-based system
NASA Astrophysics Data System (ADS)
Angelov, Plamen; Yager, Ronald
2012-02-01
Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems have dominated, namely the Mamdani and Takagi-Sugeno types. Both use scalar fuzzy sets defined per input variable in their antecedent part, which are aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative that defines the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB system goes further in conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers an alternative non-parametric form of the rule antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations or scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud from the density distribution of the data associated with that Cloud. Contrast this with clustering, which is a parametric decomposition/partitioning of the data space in which the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype, ignoring all the data that form that cluster or merely approximating their distribution. The proposed approach takes into account fully and exactly the spatial distribution and similarity of all the real data through an innovative and much simplified form of the antecedent part. In this paper, we provide several numerical examples to illustrate the concept.
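The density-derived membership idea can be sketched as follows (an assumed Cauchy-type local density over the raw cloud samples, for illustration only; the paper's exact formula may differ). Membership to a Cloud is computed from all the data forming it, with no prototype and no scalar membership function:

```python
import numpy as np

def cloud_membership(x, cloud):
    """Nonparametric membership of sample x to a data Cloud: a Cauchy-type
    local density over the actual cloud samples. The specific form
    1 / (1 + mean squared distance) is an assumption for illustration."""
    cloud = np.asarray(cloud, dtype=float)
    msd = np.mean(np.sum((cloud - x) ** 2, axis=1))  # mean sq. distance
    return 1.0 / (1.0 + msd)

# two toy clouds of 2-D samples; x sits inside cloud_a
cloud_a = [[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2]]
cloud_b = [[5.0, 5.0], [5.2, 4.9], [4.8, 5.1]]
x = np.array([0.1, 0.1])
print(cloud_membership(x, cloud_a), cloud_membership(x, cloud_b))
```

Because the density is computed over every sample of the Cloud, the membership reflects the real data distribution rather than the distance to a single cluster centre.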
NASA Technical Reports Server (NTRS)
Schwaller, Mathew R.; Schweiss, Robert J.
2007-01-01
The NPOESS Preparatory Project (NPP) Science Data Segment (SDS) provides a framework for the future of NASA's distributed Earth science data systems. The NPP SDS performs research and data product assessment while using a fully distributed architecture. The components of this architecture are organized around key environmental data disciplines: land, ocean, ozone, atmospheric sounding, and atmospheric composition. The SDS thus establishes a set of concepts and working prototypes. This paper describes the framework used by the NPP Project as it enabled Measurement-Based Earth Science Data Systems for the assessment of NPP products.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false How are funds distributed when a Self-Governance..., DEPARTMENT OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Retrocession § 137.250 How are funds distributed when a Self-Governance Tribe fully or partially retrocedes from its compact or funding agreement...
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2017-04-01
Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. An increasing effort is in parallel devoted to the development of fully distributed, grid-based models, following the increase in computational capability and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates both the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and of their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. An intensive multi-scale modeling exercise is then carried out to understand scaling effects on hydrological model performance. Investigations were conducted using Multi-Hydro, a fully distributed and physically based model developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. Results from this work demonstrate the challenges that scale effects pose for urban hydrological modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size.
The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
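A box-counting sketch of the kind of fractal analysis used in the first step (illustrative; the test pattern is a synthetic Sierpinski-like grid, not the catchment data). Scale dependency shows up as a non-integer slope of occupied-box counts versus box size on log-log axes:

```python
import numpy as np

def box_count_dimension(grid):
    """Box-counting dimension of a binary occupancy grid: count occupied
    boxes at dyadic box sizes and fit the log-log slope. This is the kind
    of fractal tool used to expose scale dependency in distributed
    catchment data (sewer networks, impervious areas)."""
    n = grid.shape[0]
    sizes, counts = [], []
    s = 1
    while s < n:
        # coarse-grain: a box of side s is occupied if any cell in it is
        coarse = grid.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(coarse.sum())
        s *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # dimension = -d log N / d log s

# Sierpinski-triangle-like test pattern on a 256x256 grid
k = 8
x, y = np.meshgrid(np.arange(2**k), np.arange(2**k))
grid = (x & y) == 0
print(round(box_count_dimension(grid), 3))  # ≈ log(3)/log(2) ≈ 1.585
```

For a homogeneous (space-filling) pattern the same procedure returns a dimension of 2, so departures from integer values quantify how strongly the observed patterns depend on pixel size.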
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
A …wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and…
Centralized vs decentralized lunar power system study
NASA Astrophysics Data System (ADS)
Metcalf, Kenneth; Harty, Richard B.; Perronne, Gerald E.
1991-09-01
Three power-system options are considered with respect to utilization on a lunar base: the fully centralized option, the fully decentralized option, and a hybrid combining features of the first two. Power source, power conditioning, and power transmission are considered separately, and each architecture option is examined with ac and dc distribution, high- and low-voltage transmission, and buried and suspended cables. Assessments are made on the basis of mass, technological complexity, cost, reliability, and installation complexity; however, a preferred power-system architecture is not proposed. Preferred options include ac transmission, transmission voltages of 2000-7000 V, buried high-voltage lines, and suspended low-voltage lines. Assessments of the total installation cost are required to determine the most suitable power system.
Multigrid Method for Modeling Multi-Dimensional Combustion with Detailed Chemistry
NASA Technical Reports Server (NTRS)
Zheng, Xiaoqing; Liu, Chaoqun; Liao, Changming; Liu, Zhining; McCormick, Steve
1996-01-01
A highly accurate and efficient numerical method is developed for modeling 3-D reacting flows with detailed chemistry. A contravariant velocity-based governing system is developed for general curvilinear coordinates to maintain simplicity of the continuity equation and compactness of the discretization stencil. A fully-implicit backward Euler technique and a third-order monotone upwind-biased scheme on a staggered grid are used for the respective temporal and spatial terms. An efficient semi-coarsening multigrid method based on line-distributive relaxation is used as the flow solver. The species equations are solved in a fully coupled way and the chemical reaction source terms are treated implicitly. Example results are shown for a 3-D gas turbine combustor with strong swirling inflows.
Generalised solutions for fully nonlinear PDE systems and existence-uniqueness theorems
NASA Astrophysics Data System (ADS)
Katzourakis, Nikos
2017-07-01
We introduce a new theory of generalised solutions which applies to fully nonlinear PDE systems of any order and allows for merely measurable maps as solutions. This approach bypasses the standard problems arising from the application of Distributions to PDEs and is based neither on integration by parts nor on the maximum principle. Instead, our starting point builds on the probabilistic representation of derivatives via limits of difference quotients in the Young measures over a toric compactification of the space of jets. After developing some basic theory, as a first application we consider the Dirichlet problem and we prove existence-uniqueness-partial regularity of solutions to fully nonlinear degenerate elliptic 2nd order systems, and also existence of solutions to the ∞-Laplace system of vectorial Calculus of Variations in L∞.
Cities, towns, and Tribes rely on clean air, water and other natural resources for economic sustainability and quality of life. Yet natural resources and their benefits are not always fully understood or considered in local decisions. EnviroAtlas is a web-based, easy-to-use map...
Background Cities, towns, and Tribes rely on clean air, water and other natural resources for public health and well-being. Yet natural infrastructure and its benefits are not always fully understood or considered in local decisions. EnviroAtlas is a web-based, easy-to-use mapp...
USDA-ARS?s Scientific Manuscript database
To improve the management strategy of riparian restoration, better understanding of the dynamic of eco-hydrological system and its feedback between hydrological and ecological components are needed. The fully distributed eco-hydrological model coupled with a hydrology component was developed based o...
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of the commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. By elaborately designing dropout training simultaneously in max-pooling and fully connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
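The multinomial view of max-pooling dropout, and the probabilistic weighted pooling used at test time, can be sketched as follows (a sketch based on the description above; function and variable names are assumptions). With retaining probability q, the i-th smallest of n activations is the pooled output exactly when it is retained and all larger activations are dropped:

```python
import numpy as np

def maxpool_dropout_probs(acts, q):
    """Max-pooling dropout as a multinomial draw: sort the pooling region
    ascending; the i-th activation is the pooled output iff it is retained
    (prob q) and all n-1-i larger activations are dropped. The leftover
    mass (1-q)**n is the probability of an all-dropped (zero) output."""
    a = np.sort(np.asarray(acts, dtype=float))
    n = len(a)
    probs = q * (1 - q) ** (n - 1 - np.arange(n))
    return a, probs, (1 - q) ** n

def probabilistic_weighted_pooling(acts, q):
    """Test-time model averaging: the expected value of the multinomial
    above, proposed in the paper as a replacement for plain max-pooling."""
    a, probs, _ = maxpool_dropout_probs(acts, q)
    return float(np.dot(a, probs))

region = [0.2, 1.0, 0.5, 0.7]            # one pooling region
a, probs, p_zero = maxpool_dropout_probs(region, q=0.5)
print(probs.sum() + p_zero)              # multinomial masses sum to 1
print(probabilistic_weighted_pooling(region, q=0.5))
```

With q = 1 (no dropout) the weighted pool collapses back to plain max-pooling, which is the sense in which it generalizes the deterministic operator.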
NASA Astrophysics Data System (ADS)
Shokri, Ali
2017-04-01
The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of the natural connections between surface water and groundwater, these processes have historically been studied separately. The current trend in distributed, physically based hydrological model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) method are still quite common because of their simplicity. In spite of its popularity, there have always been concerns about the ambiguity of the SCS-CN method in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalent of the SCS-CN solution in the DrainFlow model, a fully distributed, physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed, and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable runoff prediction from SCS-CN and DrainFlow, the variance between the runoff predictions of the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation of CN and Ia.
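For reference, the lumped SCS-CN relation whose parameters (CN, Ia) the study maps onto physically based ones can be computed as follows (standard formulation in mm; the 0.2 initial-abstraction ratio is the conventional default, not necessarily the value used in the study):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Standard SCS-CN direct-runoff relation (depths in mm):
        S  = 25400/CN - 254      (potential maximum retention)
        Ia = ia_ratio * S        (initial abstraction, 0.2*S by default)
        Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else Q = 0
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. a 50 mm storm on a CN = 80 catchment
print(round(scs_cn_runoff(50.0, 80.0), 2))  # ≈ 13.8 mm of direct runoff
```

Minimizing the variance between this Q and the DrainFlow-simulated runoff over (CN, Ia), as the study does, is what ties the two lumped parameters back to hydraulic conductivity, roughness, slope, and storage.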
Integrated Distribution Management System for Alabama Principal Investigator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schatz, Joe
2013-03-31
Southern Company Services, under contract with the Department of Energy, along with Alabama Power, Alstom Grid (formerly AREVA T&D), and others moved the work product developed in the first phase of the Integrated Distribution Management System (IDMS) from “Proof of Concept” to true deployment through the activity described in this Final Report. This Project – Integrated Distribution Management Systems in Alabama – advanced earlier developed proof-of-concept activities into actual implementation and furthermore completed additional requirements to fully realize the benefits of an IDMS. These tasks include the development and implementation of a Distribution System based Model that enables data access and enterprise application integration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mijnheer, B; Mans, A; Olaciregui-Ruiz, I
Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow, where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
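The RMS-based halt trigger can be sketched as follows (illustrative only; the array sizes, the threshold, and the 10% error magnitude are assumptions, not the clinical settings used in the work):

```python
import numpy as np

def rms_dose_difference(planned, reconstructed):
    """RMS of the voxel-wise difference between the cumulative planned
    and EPID-reconstructed 3D dose distributions (Gy)."""
    diff = np.asarray(reconstructed, float) - np.asarray(planned, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def should_halt(planned, reconstructed, threshold_gy=0.05):
    """Trigger a linac halt when the RMS deviation exceeds the threshold
    (threshold value here is an assumed placeholder)."""
    return rms_dose_difference(planned, reconstructed) > threshold_gy

planned = np.full((16, 16, 16), 2.0)           # uniform 2 Gy fraction
ok = planned + np.random.default_rng(0).normal(0, 0.01, planned.shape)
bad = planned * 0.9                            # 10% delivery error
print(should_halt(planned, ok), should_halt(planned, bad))  # False True
```

In the clinical pipeline this comparison runs per portal image against the cumulative reconstructed dose, so the check must complete within the ~200 ms per-image budget quoted above.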
Size distribution of extracellular vesicles by optical correlation techniques.
Montis, Costanza; Zendrini, Andrea; Valle, Francesco; Busatto, Sara; Paolini, Lucia; Radeghieri, Annalisa; Salvatore, Annalisa; Berti, Debora; Bergese, Paolo
2017-10-01
Understanding the colloidal properties of extracellular vesicles (EVs) is key to advance fundamental knowledge in this field and to develop effective EV-based diagnostics, therapeutics and devices. Determination of the size distribution and of the colloidal stability of purified EVs resuspended in buffered media is a complex and challenging issue - because of the wide range of EV diameters (from 30 to 2000 nm), concentrations of interest and membrane properties, and the possible presence of co-isolated contaminants with similar size and densities, such as protein aggregates and fat globules - which is still waiting to be fully addressed. We report here a fully detailed protocol for accurate and robust determination of the size distribution and stability of EV samples which leverages a dedicated combination of Fluorescence Correlation Spectroscopy (FCS) and Dynamic Light Scattering (DLS). The theoretical background, critical experimental steps and data analysis procedures are thoroughly presented and finally illustrated through the representative case study of EV formulations obtained from culture media of B16 melanoma cells, a murine tumor cell line used as a model for human skin cancers.
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. Simulation shows that the proposed algorithm achieves a 99.81% identification rate with 2-pixel standard deviations of positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and memory requirement are lower. Furthermore, a real-sky test shows that the proposed algorithm performs well on real star images.
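The core voting idea can be sketched in miniature: each observed inter-star distance votes for every catalog pair whose stored distance matches it within tolerance. The data layout here is a hypothetical simplification; the paper's stratified scheme and database optimization are not reproduced.

```python
from collections import Counter

def vote_candidates(observed_distances, catalog_pairs, tol):
    """Accumulate votes for catalog stars whose pair distances match
    observed inter-star distances; top vote-getters become candidates."""
    votes = Counter()
    for d in observed_distances:
        for star_a, star_b, cat_d in catalog_pairs:
            if abs(cat_d - d) <= tol:
                votes[star_a] += 1
                votes[star_b] += 1
    return votes.most_common()
```

A star that participates in several matched pairs accumulates more votes than one matched by chance, which is what makes the scheme robust to noise.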
A generalized weight-based particle-in-cell simulation scheme
NASA Astrophysics Data System (ADS)
Lee, W. W.; Jenkins, T. G.; Ethier, S.
2011-03-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage for such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
USSR Report, Physics and Mathematics.
1987-01-14
polarization distribution in these crystals at a temperature above the 70°C phase transition point corresponding to maximum dielectric permittivity ...are derived theoretically and matched with experimental data. The theory is based on the relation between complex dielectric permittivity and...Kramers-Heisenberg relation for polarizability. Both real and imaginary parts of dielectric permittivity are evaluated, assuming a valence band fully
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006); doi:10.1103/PhysRevLett.96.080502].
Superstatistics model for T₂ distribution in NMR experiments on porous media.
Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S
2014-07-01
We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on a superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for inverting decay data in NMR experiments. We exemplify the method with q-exponential functions and χ²-distributions to describe, respectively, the decay data and the T2 distribution in high-field experiments on fully water-saturated glass-microsphere bed packs and sedimentary outcrop rocks, and in a noisy low-field experiment on rocks. The method is general and can also be applied to biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
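The q-exponential decay mentioned above has the standard Tsallis form e_q(-t/T2) = [1 - (1-q) t/T2]^{1/(1-q)}, which recovers the ordinary exponential as q → 1. A sketch, with the parameterization assumed from that standard form rather than taken from the paper:

```python
import numpy as np

def q_exponential(t, amplitude, t2, q):
    """Tsallis q-exponential decay; reduces to amplitude*exp(-t/t2) as q -> 1."""
    t = np.asarray(t, dtype=float)
    if abs(q - 1.0) < 1e-9:
        return amplitude * np.exp(-t / t2)
    base = 1.0 - (1.0 - q) * t / t2
    # clip at zero so the decay vanishes beyond the support when q < 1
    return amplitude * np.maximum(base, 0.0) ** (1.0 / (1.0 - q))
```

For q = 2 this gives the Lorentzian-like decay amplitude/(1 + t/t2), illustrating how a single extra parameter interpolates between exponential and heavy-tailed relaxation.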
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt fault-diagnosis models to dynamic environments and to fully meet the needs of solving tasks in complex systems, this paper introduces multi-agent technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on a hierarchical structure of diagnostic decision-making in the modeling and a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge-representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analysed; the organization and evolution of agents in the system are proposed; and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault-diagnosis problem of a complex plant and has particular advantages in distributed domains.
Observer-based distributed adaptive iterative learning control for linear multi-agent systems
NASA Astrophysics Data System (ADS)
Li, Jinsha; Liu, Sanyang; Li, Junmin
2017-10-01
This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed in this paper. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for any undirected, connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
Where-Fi: a dynamic energy-efficient multimedia distribution framework for MANETs
NASA Astrophysics Data System (ADS)
Mohapatra, Shivajit; Carbunar, Bogdan; Pearce, Michael; Chaudhri, Rohit; Vasudevan, Venu
2008-01-01
Next generation mobile ad-hoc applications will revolve around users' need for sharing content/presence information with co-located devices. However, keeping such information fresh requires frequent meta-data exchanges, which could result in significant energy overheads. To address this issue, we propose distributed algorithms for energy efficient dissemination of presence and content usage information between nodes in mobile ad-hoc networks. First, we introduce a content dissemination protocol (called CPMP) for effectively distributing frequent small meta-data updates between co-located devices using multicast. We then develop two distributed algorithms that use the CPMP protocol to achieve "phase locked" wake up cycles for all the participating nodes in the network. The first algorithm is designed for fully-connected networks and then extended in the second to handle hidden terminals. The "phase locked" schedules are then exploited to adaptively transition the network interface to a deep sleep state for energy savings. We have implemented a prototype system (called "Where-Fi") on several Motorola Linux-based cell phone models. Our experimental results show that for all network topologies our algorithms were able to achieve "phase locking" between nodes even in the presence of hidden terminals. Moreover, we achieved battery lifetime extensions of as much as 28% for fully connected networks and about 20% for partially connected networks.
Research on mobile electronic commerce security technology based on WPKI
NASA Astrophysics Data System (ADS)
Zhang, Bo
2013-07-01
Through an in-depth study of existing mobile e-commerce and WAP protocols, this paper presents a security solution for e-commerce systems based on WPKI and describes its implementation process and specific implementation details. This solution uniformly distributes the keys used by the various participating entities to fully ensure the confidentiality, authentication, fairness and integrity of mobile e-commerce payments, and therefore has practical value for improving the security of e-commerce systems.
NASA Technical Reports Server (NTRS)
Harrington, Douglas E.; Burley, Richard R.; Corban, Robert R.
1986-01-01
Wall Mach number distributions were determined over a range of test-section free-stream Mach numbers from 0.2 to 0.92. The test section was slotted and had a nominal porosity of 11 percent. Reentry flaps located at the test-section exit were varied from 0 (fully closed) to 9 (fully open) degrees. Flow was bled through the test-section slots by means of a plenum evacuation system (PES) and varied from 0 to 3 percent of tunnel flow. Variations in reentry flap angle or PES flow rate had little or no effect on the Mach number distributions in the first 70 percent of the test section. However, in the aft region of the test section, flap angle and PES flow rate had a major impact on the Mach number distributions. Optimum PES flow rates were nominally 2 to 2.5 percent with the flaps fully closed and less than 1 percent when the flaps were fully open. The standard deviation of the test-section wall Mach numbers at the optimum PES flow rates was 0.003 or less.
A Fully Implemented 12 × 12 Data Vortex Optical Packet Switching Interconnection Network
NASA Astrophysics Data System (ADS)
Shacham, Assaf; Small, Benjamin A.; Liboiron-Ladouceur, Odile; Bergman, Keren
2005-10-01
A fully functional optical packet switching (OPS) interconnection network based on the data vortex architecture is presented. The photonic switching fabric uniquely capitalizes on the enormous bandwidth advantage of wavelength division multiplexing (WDM) wavelength parallelism while delivering minimal packet transit latency. Utilizing semiconductor optical amplifier (SOA)-based switching nodes and conventional fiber-optic technology, the 12-port system exhibits a capacity of nearly 1 Tb/s. Optical packets containing an eight-wavelength WDM payload with 10 Gb/s per wavelength are routed successfully to all 12 ports while maintaining a bit error rate (BER) of 10⁻¹² or better. Median port-to-port latencies of 110 ns are achieved with a distributed deflection routing network that resolves packet contention on-the-fly without the use of optical buffers and maintains the entire payload path in the optical domain.
Model-based approach for cyber-physical attack detection in water distribution systems.
Housh, Mashor; Ohar, Ziv
2018-08-01
Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at a greater risk of being subjected to cyberattacks. This paper offers a model-based methodology based on a detailed hydraulic understanding of WDSs combined with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net). Copyright © 2018. Published by Elsevier Ltd.
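A toy version of the model-based idea, comparing SCADA readings against hydraulic-model predictions and flagging large residuals, might look like the following robust z-score rule. This is a generic anomaly-detection sketch, not the BATADAL-winning detector itself.

```python
import numpy as np

def flag_anomalies(observed, predicted, k=3.0):
    """Flag readings whose residual from the model prediction exceeds k robust
    standard deviations, using the median absolute deviation (MAD) as scale."""
    r = np.asarray(observed, dtype=float) - np.asarray(predicted, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    scale = 1.4826 * mad if mad > 0 else 1e-12  # avoid zero scale on clean data
    return np.abs(r - med) > k * scale
```

The paper's contribution is precisely that such purely statistical rules are insufficient on their own: the residuals must come from a detailed hydraulic model of the WDS for complex attacks to become visible.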
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations using spatially based model evaluation metrics.
Resizing procedure for optimum design of structures under combined mechanical and thermal loading
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Narayanaswami, R.
1976-01-01
An algorithm is reported for resizing structures subjected to combined thermal and mechanical loading. The algorithm is applicable to uniaxial stress elements (rods) and membrane biaxial stress members. Thermal Fully Stressed Design (TFSD) is based on the basic difference between mechanical and thermal stresses in their response to resizing. The TFSD technique is found to converge in fewer iterations than ordinary fully stressed design for problems where thermal stresses are comparable to the mechanical stresses. The improved convergence is demonstrated by example with a study of a simplified wing structure, built-up with rods and membranes and subjected to a combination of mechanical loads and a three dimensional temperature distribution.
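The "basic difference" the abstract refers to can be made concrete for a uniaxial rod: mechanical stress scales inversely with cross-sectional area, while thermal stress is, to first order, independent of it. A one-step resize rule built on that observation might look like the following sketch; it is only an illustration of the idea, not the paper's full TFSD algorithm.

```python
def tfsd_resize(area, sigma_mech, sigma_thermal, sigma_allow):
    """One thermal-fully-stressed-design resize step for a rod: solve
    sigma_mech * (area / area_new) + sigma_thermal = sigma_allow
    for the new area, since only the mechanical part responds to resizing."""
    margin = sigma_allow - sigma_thermal
    if margin <= 0.0:
        raise ValueError("thermal stress alone exceeds the allowable")
    return area * sigma_mech / margin
```

Ordinary fully stressed design would scale the area by sigma_total/sigma_allow, which over- or under-corrects when sigma_thermal is significant; separating the two terms is what improves convergence.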
Laboratory Testing Protocols for Heparin-Induced Thrombocytopenia (HIT) Testing.
Lau, Kun Kan Edwin; Mohammed, Soma; Pasalic, Leonardo; Favaloro, Emmanuel J
2017-01-01
Heparin-induced thrombocytopenia (HIT) represents a significant, high-morbidity complication of heparin therapy. The clinicopathological diagnosis of HIT remains challenging for many reasons; thus, laboratory testing represents an important component of an accurate diagnosis. Although there are many assays available to assess HIT, these essentially fall into two categories: (a) immunological assays and (b) functional assays. The current chapter presents protocols for several HIT assays, being those that are most commonly performed in laboratory practice and have the widest geographic distribution. These comprise a manual lateral flow-based system (STiC), a fully automated latex immunoturbidimetric assay, a fully automated chemiluminescent assay (CLIA), light transmission aggregation (LTA), and whole blood aggregation (Multiplate).
a Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
NASA Astrophysics Data System (ADS)
Guo, L.; Huang, H.; Gaston, D.; Redden, G. D.; Fox, D. T.; Fujita, Y.
2010-12-01
Inducing mineral precipitation in the subsurface is one potential strategy for immobilizing trace metal and radionuclide contaminants. Generating mineral precipitates in situ can be achieved by manipulating chemical conditions, typically through injection or in situ generation of reactants. How these reactants transport, mix and react within the medium controls the spatial distribution and composition of the resulting mineral phases. Multiple processes, including fluid flow, dispersive/diffusive transport of reactants, biogeochemical reactions and changes in porosity-permeability, are tightly coupled over a number of scales. Numerical modeling can be used to investigate the nonlinear coupling effects of these processes which are quite challenging to explore experimentally. Many subsurface reactive transport simulators employ a de-coupled or operator-splitting approach where transport equations and batch chemistry reactions are solved sequentially. However, such an approach has limited applicability for biogeochemical systems with fast kinetics and strong coupling between chemical reactions and medium properties. A massively parallel, fully coupled, fully implicit Reactive Transport simulator (referred to as “RAT”) based on a parallel multi-physics object-oriented simulation framework (MOOSE) has been developed at the Idaho National Laboratory. Within this simulator, systems of transport and reaction equations can be solved simultaneously in a fully coupled, fully implicit manner using the Jacobian Free Newton-Krylov (JFNK) method with additional advanced computing capabilities such as (1) physics-based preconditioning for solution convergence acceleration, (2) massively parallel computing and scalability, and (3) adaptive mesh refinements for 2D and 3D structured and unstructured mesh. 
The simulator was first tested against analytical solutions, then applied to simulating induced calcium carbonate mineral precipitation in 1D columns and 2D flow cells as analogs to homogeneous and heterogeneous porous media, respectively. In 1D columns, calcium carbonate mineral precipitation was driven by urea hydrolysis catalyzed by urease enzyme, and in 2D flow cells, calcium carbonate mineral forming reactants were injected sequentially, forming migrating reaction fronts that are typically highly nonuniform. The RAT simulation results for the spatial and temporal distributions of precipitates, reaction rates and major species in the system, and also for changes in porosity and permeability, were compared to both laboratory experimental data and computational results obtained using other reactive transport simulators. The comparisons demonstrate the ability of RAT to simulate complex nonlinear systems and the advantages of fully coupled approaches, over de-coupled methods, for accurate simulation of complex, dynamic processes such as engineered mineral precipitation in subsurface environments.
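At the heart of the JFNK method used by RAT is the matrix-free approximation of Jacobian-vector products by finite differences, which is all a Krylov solver needs, so the Jacobian of the fully coupled transport-reaction system is never assembled. A minimal illustration (not RAT/MOOSE code):

```python
import numpy as np

def jfnk_jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) v ~ (F(u + eps*v) - F(u)) / eps.
    F is the nonlinear residual of the coupled system evaluated at state u."""
    return (F(u + eps * v) - F(u)) / eps
```

Physics-based preconditioning, mentioned in the abstract, then supplies an approximate inverse of J built from the individual physics operators to accelerate the Krylov iteration.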
Cooperative Management of a Lithium-Ion Battery Energy Storage Network: A Distributed MPC Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Huazhen; Wu, Di; Yang, Tao
2016-12-12
This paper presents a study of cooperative power supply and storage for a network of lithium-ion battery energy storage systems (LiBESSs). We propose a distributed model predictive control (MPC) approach for two reasons. First, because it can account for the practical constraints of a LiBESS, MPC enables constraint-aware operation. Second, distributed management can cope with a complex network that integrates a large number of LiBESSs over a complex communication topology. With this motivation, we build a fully distributed MPC algorithm from an optimization perspective, based on an extension of the alternating direction method of multipliers (ADMM). A simulation example is provided to demonstrate the effectiveness of the proposed algorithm.
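The ADMM machinery underlying such distributed coordination can be illustrated on its simplest instance: agents with local quadratic costs agreeing on a shared scalar. This is the generic textbook consensus-ADMM sketch, not the paper's extended algorithm.

```python
import numpy as np

def admm_consensus(targets, rho=1.0, iters=100):
    """Consensus ADMM for min sum_i 0.5*(x_i - targets[i])^2 s.t. x_i = z.
    Each agent updates x_i locally; only z (the average) is shared."""
    a = np.asarray(targets, dtype=float)
    x = a.copy()
    u = np.zeros_like(a)          # scaled dual variables
    z = float(np.mean(a))
    for _ in range(iters):
        z = float(np.mean(x + u))             # gather/average step
        x = (a + rho * (z - u)) / (1.0 + rho)  # local proximal updates
        u = u + x - z                          # dual ascent
    return z
```

The fixed point is the average of the local targets, which is exactly the global optimum of the coupled problem; the paper's extension handles LiBESS operating constraints and a general communication topology on top of this skeleton.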
NASA Astrophysics Data System (ADS)
Xiao, Liang; Mao, Zhi-qiang; Xie, Xiu-hong
2017-04-01
It is crucial to understand the behavior of the T2 distribution in the presence of hydrocarbon in order to properly interpret pore size distribution from NMR logging. The NMR T2 spectrum is associated with the pore-throat radius distribution when the rock is fully brine saturated. However, when part of the pore space is occupied by hydrocarbon, the shape of the NMR spectrum changes due to the bulk relaxation of the hydrocarbon. In this study, to understand the effect of hydrocarbon on NMR logging, kerosene and transformer oil are used to simulate borehole crude oils of different viscosities. Twenty core samples, drilled separately from conventional, medium-porosity-and-permeability, and tight sands, are prepared in four saturation conditions: irreducible water saturation, fully brine saturated, hydrocarbon-bearing, and residual oil saturation, and the corresponding NMR experiments are run to acquire NMR measurements. The residual-oil-saturation condition is used to simulate field NMR logging because of the shallow investigation depth of the NMR logging tool. Comparing the NMR spectra under these conditions shows that for core samples drilled from tight sandstone reservoirs, the shape of the NMR spectra changes considerably once the pore space is occupied by hydrocarbon. The T2 distributions become wide and bimodal due to the bulk relaxation of the hydrocarbon, even though the spectra are unimodal when fully brine saturated. The locations of the first peaks are similar to those of the irreducible water, and the second peaks are close to the bulk relaxation of the viscous oils. For core samples drilled from conventional formations, by contrast, the shape of the T2 spectra changes little: the T2 distributions under the fully brine-saturated, hydrocarbon-bearing, and residual-oil conditions overlap with each other. Hence, in tight sandstone reservoirs, the shape of the NMR logging spectrum should be corrected.
In this study, based on the lab experiments, seven T2 times of 1 ms, 3 ms, 10 ms, 33 ms, 100 ms, 300 ms and 1000 ms are first used to separate the T2 distribution at residual oil saturation into 8 parts, and the percentage composition of each of the 8 pore components is calculated. Second, an optimal T2 cutoff is determined to cut the fully brine-saturated T2 spectrum into two parts: the left part (short T2 times) represents the irreducible water and does not need to be corrected; only the shape of the right part of the T2 spectrum needs correction. Third, relationships are established between the amplitudes at T2 times larger than the optimal T2 cutoff and the 8 pore-component percentage compositions, and these are used to predict corrected T2 amplitudes from NMR logging at residual oil saturation. Finally, the amplitudes of the left part and the estimated amplitudes are spliced together as the corrected NMR amplitudes, yielding a corrected T2 spectrum. The reliability of this method is verified by comparing the corrected results with the experimental measurements. The method is extended to field application: fully water-saturated T2 distributions are extracted from field NMR logging and used to precisely evaluate the pore structure of hydrocarbon-bearing formations.
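The binning and splicing steps above can be sketched as follows; the array handling and names are illustrative, and the regression step that predicts the corrected right-hand amplitudes is omitted.

```python
import numpy as np

T2_EDGES_MS = (1.0, 3.0, 10.0, 33.0, 100.0, 300.0, 1000.0)  # seven cut times -> 8 bins

def pore_component_fractions(t2_ms, amps, edges=T2_EDGES_MS):
    """Fractional amplitude in each of the 8 T2 bins bounded by the seven edges."""
    amps = np.asarray(amps, dtype=float)
    bins = np.searchsorted(edges, np.asarray(t2_ms, dtype=float))
    return np.array([amps[bins == k].sum() for k in range(len(edges) + 1)]) / amps.sum()

def splice_spectrum(t2_ms, measured_amps, corrected_right_amps, t2_cutoff_ms):
    """Keep the measured short-T2 (irreducible-water) part and append the
    predicted fully-brine-saturated amplitudes for T2 above the cutoff."""
    left = np.asarray(measured_amps, dtype=float)[np.asarray(t2_ms) <= t2_cutoff_ms]
    return np.concatenate([left, np.asarray(corrected_right_amps, dtype=float)])
```

The fractions feed the regression that predicts the corrected long-T2 amplitudes, and the splice reassembles the full corrected spectrum.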
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment.
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing by how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets with respect to the likely costs that would be incurred in obtaining the data sets themselves.
Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; Sigloch, Karin
2016-11-01
Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. 
By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
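Under the log-normal noise model found for D, the empirical likelihood takes the standard log-normal form. A sketch follows; in practice, as the paper describes, mu and sigma would depend on the SNR of the CC measurements and on station back-azimuth.

```python
import math

def lognormal_loglik(decorrelations, mu, sigma):
    """Log-likelihood of observed decorrelation values D = 1 - CC under a
    log-normal noise model with parameters mu, sigma on log(D)."""
    total = 0.0
    for d in decorrelations:
        total += (-math.log(d * sigma * math.sqrt(2.0 * math.pi))
                  - (math.log(d) - mu) ** 2 / (2.0 * sigma ** 2))
    return total
```

A likelihood of this form is what a Bayesian sampler evaluates for each candidate source solution, turning waveform cross-correlation misfits into posterior probabilities.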
Coordination of heterogeneous nonlinear multi-agent systems with prescribed behaviours
NASA Astrophysics Data System (ADS)
Tang, Yutao
2017-10-01
In this paper, we consider a coordination problem for a class of heterogeneous nonlinear multi-agent systems with a prescribed input-output behaviour, represented by another input-driven system. In contrast to most existing multi-agent coordination results with an autonomous (virtual) leader, this formulation takes possible control inputs of the leader into consideration. First, coordination is achieved by utilising a group of distributed observers under conventional assumptions of the model matching problem. Then, a fully distributed adaptive extension is proposed that does not use the input of this prescribed behaviour. An example is given to verify the effectiveness of both designs.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. 
We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
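The sub-basin decomposition described above can be sketched in a few lines. The following is an illustrative toy, not the tRIBS/MPI implementation: a downstream map encodes the stream reach graph, sub-basins are ordered from headwaters to outlet, and processor ranks are assigned round-robin over that order (the watershed topology and processor count here are hypothetical).

```python
from collections import defaultdict, deque

def partition_subbasins(downstream, n_procs):
    """Assign each sub-basin to a processor by walking the stream reach
    graph from headwaters to outlet (topological order) and handing out
    ranks round-robin."""
    indeg = defaultdict(int)
    nodes = set(downstream) | {v for v in downstream.values() if v is not None}
    for u, v in downstream.items():
        if v is not None:
            indeg[v] += 1
    order = []
    queue = deque(n for n in nodes if indeg[n] == 0)  # headwater sub-basins
    while queue:
        u = queue.popleft()
        order.append(u)
        v = downstream.get(u)
        if v is not None:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return {node: i % n_procs for i, node in enumerate(order)}

# Toy watershed: sub-basins 'a'..'d' drain through 'c' and 'd' to outlet 'e'
downstream = {'a': 'c', 'b': 'c', 'c': 'e', 'd': 'e', 'e': None}
ranks = partition_subbasins(downstream, n_procs=2)
```

In the actual parallel model, routed streamflow would then be exchanged along the graph edges that cross processor boundaries; the sketch only shows the partitioning step.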
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Bonner, C.A.; Ostenak, C.A.
1989-01-01
ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototypical robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multi-drawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface and database system are provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices. 10 refs., 10 figs., 4 tabs.
Social judgment theory based model on opinion formation, polarization and evolution
NASA Astrophysics Data System (ADS)
Chau, H. F.; Wong, C. Y.; Chow, F. K.; Fung, Chi-Hang Fred
2014-12-01
The dynamical origin of opinion polarization in the real world is an interesting topic that physical scientists may help to understand. To properly model the dynamics, the theory must be fully compatible with findings by social psychologists on microscopic opinion change. Here we introduce a generic model of opinion formation with homogeneous agents based on the well-known social judgment theory in social psychology by extending a similar model proposed by Jager and Amblard. The agents’ opinions will eventually cluster around extreme and/or moderate opinions forming three phases in a two-dimensional parameter space that describes the microscopic opinion response of the agents. The dynamics of this model can be qualitatively understood by mean-field analysis. More importantly, first-order phase transition in opinion distribution is observed by evolving the system under a slow change in the system parameters, showing that punctuated equilibria in public opinion can occur even in a fully connected social network.
Characterization of Inclusion Populations in Mn-Si Deoxidized Steel
NASA Astrophysics Data System (ADS)
García-Carbajal, Alfonso; Herrera-Trejo, Martín; Castro-Cedeño, Edgar-Ivan; Castro-Román, Manuel; Martinez-Enriquez, Arturo-Isaias
2017-12-01
Four plant heats of Mn-Si deoxidized steel were conducted to follow the evolution of the inclusion population through ladle furnace (LF) treatment and subsequent vacuum treatment (VT). The liquid steel was sampled, and the chemical composition and size distribution of the inclusion populations were characterized. The Gumbel, generalized extreme-value (GEV), and generalized Pareto (GP) distributions were used for the statistical analysis of the inclusion size distributions. The inclusions found at the beginning of the LF treatment were mostly fully liquid SiO2-Al2O3-MnO inclusions, which then evolved into fully liquid SiO2-Al2O3-CaO-MgO and partly liquid SiO2-CaO-MgO-(Al2O3-MgO) inclusions detected at the end of the VT. The final fully liquid inclusions had a desirable chemical composition for plastic behavior in subsequent metallurgical operations. The GP distribution was found to be unsuitable for statistical analysis. The GEV distribution approach led to shape parameter values different from the zero value hypothesized by the Gumbel distribution. According to the GEV approach, some of the final inclusion size distributions had statistically significant differences, whereas the Gumbel approach predicted no statistically significant differences. The heats were ranked according to indicators of inclusion cleanliness, based on a statistical comparison of the size distributions.
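The Gumbel-versus-GEV question above hinges on whether the shape parameter is really zero. As a rough, self-contained sketch of the Gumbel special case (GEV with shape xi = 0), the following fits Gumbel parameters by the method of moments and recovers them from synthetic inverse-CDF samples; the sample size and true parameters are illustrative.

```python
import math, random

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(data):
    """Method-of-moments Gumbel fit: scale beta from the sample variance
    (Var = beta^2 * pi^2 / 6), location mu from the sample mean
    (E[X] = mu + gamma * beta)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_sample(mu, beta, rng):
    # inverse-CDF sampling: x = mu - beta * ln(-ln(u)), u ~ Uniform(0, 1)
    return mu - beta * math.log(-math.log(rng.random()))

rng = random.Random(42)
data = [gumbel_sample(10.0, 2.0, rng) for _ in range(20000)]
mu_hat, beta_hat = fit_gumbel(data)
```

A full GEV maximum-likelihood fit (as implied by the abstract) would additionally estimate the shape parameter and test it against zero.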
NASA Astrophysics Data System (ADS)
Candela, A.; Brigandì, G.; Aronica, G. T.
2014-07-01
In this paper a procedure to derive synthetic flood design hydrographs (SFDHs) is presented, coupling a bivariate representation of rainfall forcing (rainfall duration and intensity) via copulas, which describe and model the correlation between two variables independently of the marginal laws involved, with a distributed rainfall-runoff model. Rainfall-runoff (R-R) modelling for estimating the hydrological response at the outlet of a catchment was performed using a conceptual, fully distributed procedure based on the Soil Conservation Service Curve Number method as an excess rainfall model and on a distributed unit hydrograph with climatic dependencies for the flow routing. Travel time computation, based on the distributed unit hydrograph definition, was performed by implementing a procedure based on flow paths, determined from a digital elevation model (DEM) and roughness parameters obtained from distributed geographical information. In order to estimate the primary return period of the SFDHs, which gives the probability of occurrence of a flood hydrograph, the peaks and flow volumes obtained through R-R modelling were treated statistically using copulas. Finally, the shapes of the hydrographs were generated on the basis of historically significant flood events, via cluster analysis. An application of the procedure is presented for the case study of the Imera catchment in Sicily, Italy.
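The excess-rainfall step above uses the standard SCS Curve Number relation, which can be written down directly. This is the textbook metric form of the method, not the authors' calibrated implementation; the rainfall depth and curve number below are illustrative.

```python
def scs_runoff(P, CN, lam=0.2):
    """SCS Curve Number excess rainfall (depths in mm).
    S is the potential maximum retention; runoff is zero until rainfall
    exceeds the initial abstraction Ia = lam * S, then
    Q = (P - Ia)^2 / (P - Ia + S)."""
    S = 25400.0 / CN - 254.0      # metric form of S = 1000/CN - 10 (inches)
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# 80 mm storm on a catchment with CN = 75
q = scs_runoff(P=80.0, CN=75)
```

Higher curve numbers (less pervious catchments) yield more runoff for the same storm, which is the behavior a distributed implementation applies cell by cell.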
Distributed generation of shared RSA keys in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Liu, Yi-Liang; Huang, Qin; Shen, Ying
2005-12-01
Mobile ad hoc networks are a new paradigm in which mobile nodes communicate over wireless links without relying on any fixed physical or centralized administrative infrastructure. However, this very nature makes ad hoc networks highly vulnerable to security threats. Generating and distributing the shared keys of a CA (Certification Authority) is a key challenge for security solutions based on a distributed PKI (Public-Key Infrastructure)/CA. This paper discusses the solutions that have been proposed in the literature and some related issues, and proposes a scheme for the distributed generation of shared threshold RSA keys for the CA. During the creation of an RSA private-key share, each CA node holds only its own private share. Distributed arithmetic is used to create each CA private share locally, eliminating the need for a centralized management institution. By fully exploiting the self-organizing character of mobile ad hoc networks, the scheme avoids the security risk of any single node holding the complete CA private key, thereby enhancing the security and robustness of the system.
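For intuition about threshold key shares, here is classic dealer-based Shamir secret sharing over a prime field. Note this is deliberately simpler than the paper's scheme, which removes the trusted dealer so that no node ever sees the whole key; the prime, threshold, and secret below are toy values.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for toy secrets

def make_shares(secret, k, n, rng):
    """Split `secret` into n shares, any k of which reconstruct it:
    shares are points on a random degree-(k-1) polynomial over
    GF(PRIME) whose constant term is the secret."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

rng = random.Random(7)
shares = make_shares(123456789, k=3, n=5, rng=rng)
```

Any three of the five shares recover the secret; two or fewer reveal nothing, which is the property a threshold CA relies on.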
Fully automated screening of immunocytochemically stained specimens for early cancer detection
NASA Astrophysics Data System (ADS)
Bell, André A.; Schneider, Timna E.; Müller-Frank, Dirk A. C.; Meyer-Ebrecht, Dietrich; Böcking, Alfred; Aach, Til
2007-03-01
Cytopathological cancer diagnoses can be obtained less invasively than histopathological investigations. Cell-containing specimens can be obtained without pain or discomfort, bloody biopsies are avoided, and the diagnosis can, in some cases, even be made earlier. Since no tissue biopsies are necessary, these methods can also be used in screening applications, e.g., for cervical cancer. Among the cytopathological methods, a diagnosis based on the analysis of the amount of DNA in individual cells achieves high sensitivity and specificity. Yet this analysis is time consuming, which is prohibitive for a screening application. Hence, it is advantageous to retain, in a preceding selection step, only a subset of suspicious specimens. This can be achieved using highly sensitive immunocytochemical markers such as p16ink4a for the preselection of suspicious cells and specimens. We present a method to fully automatically acquire images at distinct positions on cytological specimens using a conventional computer-controlled microscope and an autofocus algorithm. From the images thus obtained, we automatically detect p16ink4a-positive objects. This detection is in turn based on an analysis of the color distribution of the p16ink4a marker in the Lab colorspace. A Gaussian mixture model is used to describe this distribution, and the method described in this paper so far achieves a sensitivity of up to 90%.
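A Gaussian mixture model of the kind used for the marker's color distribution is usually fitted by expectation-maximization. The following is a minimal 1D, two-component EM sketch on synthetic data, standing in for the multivariate Lab-colorspace model of the paper; the cluster locations and counts are illustrative.

```python
import math, random

def em_gmm_1d(data, iters=60):
    """EM for a two-component 1D Gaussian mixture. E-step: compute each
    component's responsibility for each point. M-step: re-estimate the
    weights, means, and variances from the soft assignments."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return w, mu, var

random.seed(3)
data = ([random.gauss(0.0, 0.5) for _ in range(300)]     # background-like
        + [random.gauss(5.0, 0.5) for _ in range(300)])  # marker-like
w, mu, var = em_gmm_1d(data)
```

In the screening setting, pixels whose responsibility under the "marker" component exceeds a threshold would be flagged as p16-positive.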
NASA Technical Reports Server (NTRS)
Trimble, Jay
2017-01-01
For NASA's Resource Prospector (RP) Lunar Rover Mission, we are moving away from a control center concept, to a fully distributed operation utilizing control nodes, with decision support from anywhere via mobile devices. This operations concept will utilize distributed information systems, notifications, mobile data access, and optimized mobile data display for off-console decision support. We see this concept of operations as a step in the evolution of mission operations from a central control center concept to a mission operations anywhere concept. The RP example is part of a trend, in which mission expertise for design, development and operations is distributed across countries and across the globe. Future spacecraft operations will be most cost efficient and flexible by following this distributed expertise, enabling operations from anywhere. For the RP mission we arrived at the decision to utilize a fully distributed operations team, where everyone operates from their home institution, based on evaluating the following factors: the requirement for physical proximity for near-real time command and control decisions; the cost of distributed control nodes vs. a centralized control center; the impact on training and mission preparation of flying the team to a central location. Physical proximity for operational decisions is seldom required, though certain categories of decisions, such as launch abort, or close coordination for mission or safety-critical near-real-time command and control decisions may benefit from co-location. The cost of facilities and operational infrastructure has not been found to be a driving factor for location in our studies. Mission training and preparation benefit from having all operators train and operate from home institutions.
The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.
Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun
2018-01-01
Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
Craig F. Barrett; John V. Freudenstein; D. Lee Taylor; Urmas Koljalg
2010-01-01
Fully mycoheterotrophic plants offer a fascinating system for studying phylogenetic associations and dynamics of symbiotic specificity between hosts and parasites. These plants frequently parasitize mutualistic mycorrhizal symbioses between fungi and trees. Corallorhiza striata is a fully mycoheterotrophic, North American orchid distributed from...
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N{sub par}. One of the main goals of the present paper is to determine how large N{sub par} can be, while still maintaining reasonable computational efficiency; we find that N{sub par} = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
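The "map cells in order of decreasing likelihood, stop at a threshold" idea can be sketched with a best-first search over an integer grid using a max-heap. This is a serial toy, not the parallelized Snake code; the 2D Gaussian log-likelihood and the threshold of 8 log-units below the peak are illustrative.

```python
import heapq

def snake_explore(loglike, start, threshold, neighbors):
    """Visit grid cells roughly in order of decreasing log-likelihood,
    expanding neighbors of visited cells, and stop once the best
    remaining frontier cell falls `threshold` below the peak seen."""
    best = loglike(start)
    visited = {}
    frontier = [(-best, start)]   # max-heap via negated values
    seen = {start}
    while frontier:
        neg, cell = heapq.heappop(frontier)
        ll = -neg
        best = max(best, ll)
        if ll < best - threshold:
            break                  # everything left is negligible
        visited[cell] = ll
        for nb in neighbors(cell):
            if nb not in seen:
                seen.add(nb)
                heapq.heappush(frontier, (-loglike(nb), nb))
    return visited

# hypothetical 2D Gaussian log-likelihood peaked at (3, -1)
def loglike(c):
    x, y = c
    return -0.5 * ((x - 3) ** 2 + (y + 1) ** 2)

def neighbors(c):
    x, y = c
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

cells = snake_explore(loglike, start=(0, 0), threshold=8.0, neighbors=neighbors)
```

Only the cells near the peak are ever evaluated, which is exactly how Snake sidesteps the exponential cost of a full grid scan.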
NNLO QCD predictions for fully-differential top-quark pair production at the Tevatron
NASA Astrophysics Data System (ADS)
Czakon, Michal; Fiedler, Paul; Heymes, David; Mitov, Alexander
2016-05-01
We present a comprehensive study of differential distributions for Tevatron top-pair events at the level of stable top quarks. All calculations are performed in NNLO QCD with the help of a fully differential partonic Monte Carlo and are exact at this order in perturbation theory. We present predictions for all kinematic distributions for which data exist. Particular attention is paid to the top-quark forward-backward asymmetry, which we study in detail. We compare the NNLO results with existing approximate NNLO predictions, as well as with differential distributions computed with different parton distribution sets. Theory errors are significantly smaller than current experimental ones, with overall agreement between theory and data.
Hyperchaotic Dynamics for Light Polarization in a Laser Diode
NASA Astrophysics Data System (ADS)
Bonatto, Cristian
2018-04-01
It is shown that a highly random-like behavior of light polarization states at the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process, characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with a semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow: a lognormal distribution for the light intensity; Gaussian distributions for the electric field components and electron densities; and Rice and Rayleigh distributions for the modulus, and Weibull and negative exponential distributions for the intensity, of the orthogonal linear components of the electric field. The presented results could be relevant for the generation of compact light-source devices for use in low-dimensional optical hyperchaos-based applications.
Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing
2018-06-01
The resource allocation problem is studied and reformulated by a distributed interior point method via a -logarithmic barrier. By the facilitation of the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid high singularity of the -logarithmic barrier at boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is, respectively, verified by a numerical example and the IEEE six-bus test system-based simulations.
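The dual decomposition mentioned above has a compact serial sketch for quadratic local costs: each agent answers a shared price with its local minimizer, and the price is adjusted by the constraint violation. This is the textbook dual-ascent idea, not the authors' barrier-based multiagent dynamics; the costs, resource total, and step size are illustrative.

```python
def dual_decomposition(a, R, alpha=0.1, iters=2000):
    """Dual decomposition for: min sum_i a_i * x_i^2  s.t.  sum_i x_i = R.
    Given price lam, agent i's local subproblem min a_i x_i^2 + lam x_i
    has closed-form solution x_i = -lam / (2 a_i); the price follows
    dual ascent on the coupling constraint."""
    lam = 0.0
    x = [0.0] * len(a)
    for _ in range(iters):
        x = [-lam / (2 * ai) for ai in a]   # local agent responses
        lam += alpha * (sum(x) - R)          # price update (dual ascent)
    return x, lam

a = [1.0, 2.0, 4.0]
x, lam = dual_decomposition(a, R=7.0)
```

At the optimum the marginal costs 2*a_i*x_i are equal across agents, so cheaper agents absorb proportionally more of the resource.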
Towards scalable Byzantine fault-tolerant replication
NASA Astrophysics Data System (ADS)
Zbierski, Maciej
2017-08-01
Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
NASA Astrophysics Data System (ADS)
Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens
2017-10-01
The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For an efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided. Programme Files doi:http://dx.doi.org/10.17632/7ztzj63r68.1 Licencing provisions: Apache-2.0 Programming language: GUI in MATLAB (The MathWorks) and the core sampling engine in C++ Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data. Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.
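The "no parameter specification by the user" property comes from adapting the proposal kernel on the fly. The following is a deliberately minimal 1D random-walk Metropolis sampler with a diminishing adaptation of the proposal width toward a target acceptance rate; it is a sketch of the adaptive-kernel idea only, without Fast-NPS's multivariate kernel or parallel tempering, and the target density and tuning constants are illustrative.

```python
import math, random

def adaptive_metropolis(logp, x0, n=30000, target=0.44):
    """1D random-walk Metropolis whose proposal scale is nudged toward
    a target acceptance rate, with adaptation that fades as 1/sqrt(i)
    so the chain's stationary distribution is preserved."""
    x, scale = x0, 1.0
    samples = []
    accepted = 0
    for i in range(1, n + 1):
        prop = x + random.gauss(0.0, scale)
        if math.log(random.random()) < logp(prop) - logp(x):
            x = prop
            accepted += 1
        samples.append(x)
        rate = accepted / i
        scale *= math.exp((rate - target) / math.sqrt(i))  # diminishing adaptation
    return samples, scale

random.seed(0)
samples, scale = adaptive_metropolis(lambda t: -0.5 * t * t, x0=5.0)
```

Parallel tempering would run several such chains at different temperatures and occasionally swap their states, which is what lets the sampler escape local modes of a multimodal posterior.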
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy.
The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
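The composition-ratio step above reduces, under the two-tissue assumption, to inverting a linear mixture of attenuation coefficients per voxel. The following sketch shows that inversion; the coefficient values are hypothetical, not taken from the paper.

```python
def glandular_fraction(mu_voxel, mu_adipose, mu_glandular):
    """Estimate the glandular fraction of a voxel from its reconstructed
    linear attenuation coefficient, assuming a two-tissue
    (adipose/glandular) linear mixture; the result is clamped to [0, 1]."""
    frac = (mu_voxel - mu_adipose) / (mu_glandular - mu_adipose)
    return max(0.0, min(1.0, frac))

# hypothetical coefficients (cm^-1) at some effective energy
f = glandular_fraction(0.60, mu_adipose=0.45, mu_glandular=0.80)
```

The estimated per-voxel fractions then parameterize the Monte Carlo material model, closing the loop between reconstruction and scatter estimation.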
You, Hongjian
2018-01-01
Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land masking strategy, an appropriate model for sea clutter, and a neural network as the discrimination scheme. Firstly, a fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of the Constant False Alarm Rate (CFAR) detector from the K distribution, Gamma distribution, and Rayleigh distribution based on a tradeoff between sea clutter modeling accuracy and computational complexity. Furthermore, in order to better implement CFAR detection, we also use a truncated statistic (TS) as a preprocessing scheme and an iterative censoring scheme (ICS) for boosting the performance of the detector. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experimental results on three GF-3 SAR images verify the effectiveness and efficiency of this approach. PMID:29364194
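The CFAR detector at the core of this pipeline can be illustrated with the simplest cell-averaging variant on a 1D range profile. This is a generic CA-CFAR sketch, not the truncated-statistic detector of the paper; the window sizes, scale factor, and signal are illustrative.

```python
def ca_cfar(signal, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: for each cell under test, estimate the local
    clutter level from `train` cells on each side (skipping `guard`
    cells next to the test cell) and declare a detection when the cell
    exceeds scale * estimate."""
    detections = []
    half = guard + train
    for i in range(half, len(signal) - half):
        left = signal[i - half:i - guard]
        right = signal[i + guard + 1:i + half + 1]
        noise = (sum(left) + sum(right)) / (2 * train)
        if signal[i] > scale * noise:
            detections.append(i)
    return detections

# flat clutter with one strong point target at index 20
sig = [1.0] * 40
sig[20] = 30.0
hits = ca_cfar(sig)
```

Guard cells keep the target's own energy out of the clutter estimate; the paper's truncated statistics and iterative censoring serve the same purpose when many bright ships contaminate the training window.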
NASA Astrophysics Data System (ADS)
Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.
2015-11-01
We describe the implementation and application of a time-dependent, fully nonlinear multi-species Fokker-Planck-Landau collision operator based on the single-species work of Yoon and Chang [Phys. Plasmas 21, 032503 (2014)] in the full-function gyrokinetic particle-in-cell codes XGC1 [Ku et al., Nucl. Fusion 49, 115021 (2009)] and XGCa. XGC simulations include the pedestal and scrape-off layer, where significant deviations of the particle distribution function from a Maxwellian can occur. Thus, in order to describe collisional effects on neoclassical and turbulence physics accurately, the use of a nonlinear collision operator is a necessity. Our collision operator is based on a finite volume method using the velocity-space distribution functions sampled from the marker particles. Since the same fine configuration-space mesh is used for collisions and the Poisson solver, the workload due to collisions can be comparable to or larger than the workload due to particle motion. We demonstrate that computing time spent on collisions can be kept affordable by applying advanced parallelization strategies while conserving mass, momentum, and energy to reasonable accuracy. We also show results of production-scale XGCa simulations in the H-mode pedestal and compare to conventional theory. Work supported by US DOE OFES and OASCR.
Cooperation based dynamic team formation in multi-agent auctions
NASA Astrophysics Data System (ADS)
Pippin, Charles E.; Christensen, Henrik
2012-06-01
Auction based methods are often used to perform distributed task allocation on multi-agent teams. Many existing approaches to auctions assume fully cooperative team members. On in-situ and dynamically formed teams, reciprocal collaboration may not always be a valid assumption. This paper presents an approach for dynamically selecting auction partners based on observed team member performance and shared reputation. In addition, we present the use of a shared reputation authority mechanism. Finally, experiments are performed in simulation on multiple UAV platforms to highlight situations in which it is better to enforce cooperation in auctions using this approach.
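One round of reputation-gated auction allocation can be sketched as follows. This is a minimal illustration of the idea of selecting auction partners by shared reputation, not the paper's protocol; the agents, bids, and trust threshold are hypothetical.

```python
def allocate_tasks(tasks, bids, reputation, min_rep=0.5):
    """Greedy auction round: each task is awarded to the lowest-cost
    bidder whose shared reputation clears a trust threshold; tasks with
    no trusted bidder stay unassigned."""
    assignment = {}
    for task in tasks:
        trusted = [(cost, agent) for agent, cost in bids[task].items()
                   if reputation[agent] >= min_rep]
        if trusted:
            assignment[task] = min(trusted)[1]
    return assignment

# hypothetical UAV team with shared reputation scores
reputation = {'uav1': 0.9, 'uav2': 0.3, 'uav3': 0.7}
bids = {'survey': {'uav1': 12.0, 'uav2': 5.0, 'uav3': 9.0},
        'relay':  {'uav2': 4.0}}
plan = allocate_tasks(['survey', 'relay'], bids, reputation)
```

Note that the cheapest bidder for 'survey' is excluded for low reputation, which is exactly the tradeoff the paper studies: enforcing cooperation can beat taking the best nominal bid.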
VizieR Online Data Catalog: Proper motions of PM2000 open clusters (Krone-Martins+, 2010)
NASA Astrophysics Data System (ADS)
Krone-Martins, A.; Soubiran, C.; Ducourant, C.; Teixeira, R.; Le Campion, J. F.
2010-04-01
We present lists of proper motions and kinematic membership probabilities in the region of 49 open clusters or possible open clusters. The stellar proper motions were taken from the Bordeaux PM2000 catalogue. The segregation between cluster and field stars and the assignment of membership probabilities were accomplished by applying a fully automated method based on parametrisations of the probability distribution functions and genetic-algorithm optimisation heuristics, associated with a derivative-based hill-climbing algorithm for the likelihood optimisation. (3 data files).
Echo-Enabled X-Ray Vortex Generation
NASA Astrophysics Data System (ADS)
Hemsing, E.; Marinelli, A.
2012-11-01
A technique to generate high-brightness electromagnetic vortices with tunable topological charge at extreme ultraviolet and x-ray wavelengths is described. Based on a modified version of echo-enabled harmonic generation for free-electron lasers, the technique uses two lasers and two chicanes to produce high-harmonic microbunching of a relativistic electron beam with a corkscrew distribution that matches the instantaneous helical phase structure of the x-ray vortex. The strongly correlated electron distribution emerges from an efficient three-dimensional recoherence effect in the echo-enabled harmonic generation transport line and can emit fully coherent vortices in a downstream radiator for access to new research in x-ray science.
A decentralized training algorithm for Echo State Networks in distributed big data applications.
Scardapane, Simone; Wang, Dianhui; Panella, Massimo
2016-06-01
The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.
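The coordinator-free, local-exchange flavor of such decentralized training can be illustrated with the simplest distributed primitive: gossip averaging over a network of agents. This stands in for the paper's ADMM scheme (which iterates a similar neighbor-only exchange on richer local states); the graph and values below are toy choices.

```python
def gossip_average(values, edges, rounds=200, step=0.3):
    """Synchronous gossip: each agent repeatedly moves toward the mean
    of its neighbours, using only local exchanges and no coordinating
    node. On a connected graph this converges to the global average."""
    x = list(values)
    neigh = {i: [] for i in range(len(x))}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    for _ in range(rounds):
        x = [xi + step * sum(x[j] - xi for j in neigh[i]) / max(1, len(neigh[i]))
             for i, xi in enumerate(x)]
    return x

# ring of 5 agents holding local values; global mean is 4.0
vals = [0.0, 2.0, 4.0, 6.0, 8.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x = gossip_average(vals, edges)
```

In the ADMM setting, the quantity being averaged is not a scalar but each agent's local model estimate, and the averaging is interleaved with local least-squares updates on private data.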
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between both force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
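The maximum-entropy re-weighting idea has a compact one-observable form: the re-weighted ensemble closest to uniform (in relative entropy) whose weighted mean matches a target takes exponential-family weights w_i proportional to exp(lam * f_i), with the multiplier lam fixed by the constraint. The sketch below solves for lam by bisection; the per-structure observable values and the target are illustrative stand-ins for RDC data, and the authors' code handles many coupled constraints at once.

```python
import math

def maxent_reweight(observables, target, tol=1e-10):
    """Find weights w_i proportional to exp(lam * f_i) whose weighted
    mean of the observable f equals `target`, solving for lam by
    bisection (the weighted mean is monotone increasing in lam)."""
    def weighted_mean(lam):
        ws = [math.exp(lam * f) for f in observables]
        z = sum(ws)
        return sum(w * f for w, f in zip(ws, observables)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if weighted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * f) for f in observables]
    z = sum(ws)
    return [w / z for w in ws], lam

# toy per-structure observable values and a hypothetical target mean
weights, lam = maxent_reweight([0.1, 0.4, 0.9, 1.5], target=1.0)
```

Because the correction is minimal in the entropy sense, structures are perturbed from the force-field ensemble no more than the data demand, which is the principle the paper applies to RDC restraints.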
A survey of southern hemisphere meteor showers
NASA Astrophysics Data System (ADS)
Jenniskens, Peter; Baggaley, Jack; Crumpton, Ian; Aldous, Peter; Pokorny, Petr; Janches, Diego; Gural, Peter S.; Samuels, Dave; Albers, Jim; Howell, Andreas; Johannink, Carl; Breukers, Martin; Odeh, Mohammad; Moskovitz, Nicholas; Collison, Jack; Ganju, Siddha
2018-05-01
Results are presented from a video-based meteoroid orbit survey conducted in New Zealand between Sept. 2014 and Dec. 2016, which netted 24,906 orbits from meteors between +5 and -5 magnitude. Forty-four new southern hemisphere meteor showers are identified after combining these data with those of other video-based networks. Results are compared to showers reported from recent radar-based surveys. We find that video cameras and radar often see different showers and sometimes measure different semi-major-axis distributions for the same meteoroid stream. For identifying showers in sparse daily orbit data, a shower look-up table of radiant position and speed as a function of time was created. This can replace the commonly used method of identifying showers from a set of mean orbital elements via a discriminant criterion, which does not fully describe the distribution of meteor shower radiants over time.
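A shower look-up table of this kind can be sketched as a per-shower record of radiant position, radiant drift, and speed, matched against each observed orbit by date. The single entry and the tolerance values below are illustrative assumptions, not the survey's actual table.

```python
import numpy as np

# Minimal shower look-up table: reference day-of-year, radiant (RA, Dec,
# degrees), per-day radiant drift, and speed (km/s). Values illustrative.
lookup = {
    "GEM": {"day": 348, "ra": 112.0, "dec": 32.0, "v": 35.0,
            "dra": 1.0, "ddec": -0.1},
}

def classify(day, ra, dec, v, max_ang=5.0, max_dv=5.0):
    """Return the first shower whose drifted radiant and speed match."""
    for name, s in lookup.items():
        dt = day - s["day"]
        ra0 = s["ra"] + s["dra"] * dt       # drift radiant to observation day
        dec0 = s["dec"] + s["ddec"] * dt
        ang = np.hypot((ra - ra0) * np.cos(np.radians(dec)), dec - dec0)
        if ang <= max_ang and abs(v - s["v"]) <= max_dv:
            return name
    return "sporadic"

print(classify(350, 114.5, 31.5, 34.0))     # matches the drifted radiant
```

Because the table encodes the radiant's time dependence explicitly, a meteor observed days away from the shower maximum can still be associated correctly, which a single set of mean orbital elements cannot do.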
Ogura-Tsujita, Yuki; Yukawa, Tomohisa
2008-01-01
Because mycoheterotrophic plants fully depend on their mycorrhizal partner for their carbon supply, the major limiting factor for the geographic distribution of these plants may be the presence of their mycorrhizal partner. Although this factor may seem to be a disadvantage for increasing geographic distribution, widespread mycoheterotrophic species nonetheless exist. The mechanism causing the wide distribution of some mycoheterotrophic species is, however, seldom discussed. We identified the mycorrhizal partner of a widespread mycoheterotrophic orchid, Eulophia zollingeri, using 12 individuals from seven populations in Japan, Myanmar, and Taiwan by DNA-based methods. All fungal ITS sequences from the roots were closely related to those of Psathyrella candolleana (Coprinaceae) obtained from GenBank accessions and herbarium specimens. These results indicate that E. zollingeri is exclusively associated with the P. candolleana species group. Further, the molecular data support the wide distribution and wide-ranging habitat of this fungal partner. Our data provide evidence that a mycoheterotrophic plant can achieve a wide distribution, even though it has a high mycorrhizal specificity, if its fungal partner is widely distributed.
Zhao, Kai; Musolesi, Mirco; Hui, Pan; Rao, Weixiong; Tarkoma, Sasu
2015-03-16
Human mobility has been empirically observed to exhibit Lévy flight characteristics and behaviour with power-law distributed jump sizes. The fundamental mechanisms behind this behaviour have not yet been fully explained. In this paper, we propose to explain the Lévy walk behaviour observed in human mobility patterns by decomposing them into different classes according to the different transportation modes, such as Walk/Run, Bike, Train/Subway or Car/Taxi/Bus. Our analysis is based on two real-life GPS datasets containing approximately 10 and 20 million GPS samples with transportation mode information. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an explanation to the emergence of Lévy Walk patterns that characterize human mobility patterns.
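The key claim, that a mixture of lognormal jump-size distributions with widely separated scales looks power-law-like over several decades, can be checked numerically. The mode parameters below are illustrative, not fitted to the GPS datasets.

```python
import numpy as np

rng = np.random.default_rng(2)

# Jump sizes per "transportation mode": each mode is lognormal with its
# own scale parameter (values illustrative, equal mixture weights).
mus, sigma, n = [0.0, 1.5, 3.0, 4.5, 6.0], 0.9, 20000
samples = np.concatenate([rng.lognormal(mu, sigma, n) for mu in mus])

# Empirical complementary CDF of the mixture.
x = np.sort(samples)
ccdf = 1.0 - np.arange(1, x.size + 1) / (x.size + 1)

# Fit a straight line to the log-log CCDF over an intermediate range;
# approximate linearity there is the power-law-like signature.
mask = (x > np.quantile(x, 0.5)) & (x < np.quantile(x, 0.995))
lx, ly = np.log(x[mask]), np.log(ccdf[mask])
slope, intercept = np.polyfit(lx, ly, 1)
r2 = 1.0 - np.var(ly - (slope * lx + intercept)) / np.var(ly)
print(slope, r2)   # negative slope, near-linear log-log CCDF
```

A single lognormal component, by contrast, shows clear downward curvature on the same plot; the superposition of well-separated scales is what stretches the apparently straight region.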
NASA Astrophysics Data System (ADS)
Bao, Yi; Valipour, Mahdi; Meng, Weina; Khayat, Kamal H.; Chen, Genda
2017-08-01
This study develops a delamination detection system for smart ultra-high-performance concrete (UHPC) overlays using a fully distributed fiber optic sensor. Three 450 mm (length) × 200 mm (width) × 25 mm (thickness) UHPC overlays were cast over an existing 200 mm thick concrete substrate. The initiation and propagation of delamination due to early-age shrinkage of the UHPC overlay were detected as sudden, spatially localized increases in the shrinkage-induced strain distribution measured by the sensor, which is based on pulse pre-pump Brillouin optical time-domain analysis. The distributed sensor is demonstrated to be effective in detecting delamination openings from microns to hundreds of microns. A three-dimensional finite element model with experimentally determined material properties is proposed to understand the complete delamination process measured by the distributed sensor, and is validated against the distributed sensor data. The finite element model, with cohesive elements at the overlay-substrate interface, can predict the complete delamination process.
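The detection principle, flagging sudden localized deviations in a distributed strain profile, can be sketched with a simple moving-baseline threshold rule. All values below (strain levels, noise, window, threshold) are illustrative assumptions, not measurements or the study's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic distributed-strain profile along the overlay: a delamination
# appears as a sharp local deviation on a smooth shrinkage background.
z = np.linspace(0.0, 0.45, 450)            # position along overlay, m
strain = -200.0 * np.ones(z.size)          # uniform shrinkage, microstrain
strain[200:215] += 150.0                   # sudden local increase (opening)
strain += rng.normal(0.0, 3.0, z.size)     # sensor noise

# Flag points deviating sharply from a moving-average baseline.
win = 25
padded = np.pad(strain, win // 2, mode="edge")     # avoid edge artifacts
baseline = np.convolve(padded, np.ones(win) / win, mode="valid")
residual = strain - baseline
sigma = np.std(residual[:150])             # noise level from a jump-free zone
flags = np.abs(residual) > 6.0 * sigma

print(z[flags].min(), z[flags].max())      # detected extent of the opening, m
```

The flagged interval brackets the true delamination location; tracking how this interval grows between successive measurements gives the propagation of the opening along the overlay.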
The fully differential top decay distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilar-Saavedra, J. A.; Boudreau, J.; Escobar, C.
We write down the four-dimensional fully differential decay distribution for the top quark decay t → Wb → ℓνb. We discuss how its eight physical parameters can be measured, either with a global fit or with the use of selected one-dimensional distributions and asymmetries. We give expressions for the top decay amplitudes for a general tbW interaction, and show how the untangled measurement of the two components of the fraction of longitudinal W bosons – those with b quark helicities of 1/2 and –1/2, respectively – could improve the precision of a global fit to the tbW vertex.
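For context, the most familiar of the one-dimensional distributions referred to above is the W helicity-angle distribution. Its standard form (quoted from the general top-decay literature, not transcribed from this paper) is

```latex
\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta^{*}}
  = \frac{3}{8}\left(1+\cos\theta^{*}\right)^{2} F_{+}
  + \frac{3}{8}\left(1-\cos\theta^{*}\right)^{2} F_{-}
  + \frac{3}{4}\sin^{2}\theta^{*}\, F_{0},
\qquad F_{+} + F_{-} + F_{0} = 1,
```

where θ* is the angle of the charged lepton in the W rest frame with respect to the W direction in the top rest frame, and F±, F₀ are the transverse and longitudinal W helicity fractions. The fully differential distribution generalizes this to four angles and eight physical parameters, including the split of F₀ by b-quark helicity discussed above.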
Ultramap v3 - a Revolution in Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.
2012-07-01
In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides its market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system that enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.
Stability of compressible Taylor-Couette flow
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Chow, Chuen-Yen
1991-01-01
Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between results obtained by assuming a small Mach number with a uniform temperature distribution and those based on a fully incompressible analysis.
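The accuracy of Chebyshev collocation comes from differentiating on the Chebyshev points with a dense differentiation matrix. A minimal sketch of the standard construction (the classic cheb-matrix recipe, not this paper's full stability solver) shows the spectral accuracy with few grid points:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and points on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via row sums
    return D, x

# Spectral accuracy check: differentiate u(x) = exp(x) with only 17 points.
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err)    # near machine precision despite the coarse grid
```

In a stability calculation, D and D @ D replace the derivative operators in the linearized equations, turning the eigenvalue problem into a small dense matrix problem, which is why far fewer grid points are needed than with finite differences.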
Curtis H. Flather; Carolyn Hull Sieg; Michael S. Knowles; Jason McNees
2003-01-01
This indicator measures the portion of a species' historical distribution that is currently occupied as a surrogate measure of genetic diversity. Based on data for 1,642 terrestrial animals associated with forests, most species (88 percent) were found to fully occupy their historic range - at least as measured by coarse state-level occurrence patterns. Of the 193...
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2015-01-01
Indoor location-based services (iLBS) are extremely dynamic and changeable, and include numerous resources and mobile devices. In particular, the network infrastructure requires support for high scalability in the indoor environment, and various resource lookups are requested concurrently and frequently from several locations based on the dynamic network environment. A traditional map-based centralized approach for iLBSs has several disadvantages: it requires global knowledge to maintain a complete geographic indoor map; the central server is a single point of failure; it can also cause low scalability and traffic congestion; and it is hard to adapt to a change of service area in real time. This paper proposes a self-organizing and fully distributed platform for iLBSs. The proposed self-organizing distributed platform provides a dynamic reconfiguration of locality accuracy and service coverage by expanding and contracting dynamically. In order to verify the suggested platform, scalability performance according to the number of inserted or deleted nodes composing the dynamic infrastructure was evaluated through a simulation similar to the real environment. PMID:26016908
Nuclear microprobe imaging of gallium nitrate in cancer cells
NASA Astrophysics Data System (ADS)
Ortega, Richard; Suda, Asami; Devès, Guillaume
2003-09-01
Gallium nitrate is used in clinical oncology as treatment for hypercalcemia and for cancer that has spread to the bone. Its mechanism of antitumor action has not been fully elucidated yet. The knowledge of the intracellular distribution of anticancer drugs is of particular interest in oncology to better understand their cellular pharmacology. In addition, most metal-based anticancer compounds interact with endogenous trace elements in cells, altering their metabolism. The purpose of this experiment was to examine, by use of nuclear microprobe analysis, the cellular distribution of gallium and endogenous trace elements within cancer cells exposed to gallium nitrate. In a majority of cellular analyses, gallium was found homogeneously distributed in cells following the distribution of carbon. In a smaller number of cells, however, gallium appeared concentrated together with P, Ca and Fe within round structures of about 2-5 μm diameter located in the perinuclear region. These intracellular structures are typical of lysosomial material.
NASA Astrophysics Data System (ADS)
Zhao, L.; Landi, E.; Lepri, S. T.; Kocher, M.; Zurbuchen, T. H.; Fisk, L. A.; Raines, J. M.
2017-01-01
In this paper, we study a subset of slow solar winds characterized by an anomalous charge state composition and ion temperatures compared to average solar wind distributions, and thus referred to as an “Outlier” wind. We find that although this wind is slower and denser than normal slow wind, it is accelerated from the same source regions (active regions and quiet-Sun regions) as the latter and its occurrence rate depends on the solar cycle. The defining property of the Outlier wind is that its charge state composition is the same as that of normal slow wind, with the only exception being a very large decrease in the abundance of fully charged species (He2+, C6+, N7+, O8+, Mg12+), resulting in a significant depletion of the He and C element abundances. Based on these observations, we suggest three possible scenarios for the origin of this wind: (1) local magnetic waves preferentially accelerating non-fully stripped ions over fully stripped ions from a loop opened by reconnection; (2) depleted fully stripped ions already contained in the corona magnetic loops before they are opened up by reconnection; or (3) fully stripped ions depleted by Coulomb collision after magnetic reconnection in the solar corona. If any one of these three scenarios is confirmed, the Outlier wind represents a direct signature of slow wind release through magnetic reconnection.
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.
2017-08-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organ-at-risk criteria levels evaluated, compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested, and automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction.
It is a promising approach for fully automated treatment planning and can be readily applied to different treatment sites and modalities.
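Voxel-based dose mimicking reduces, at its core, to fitting deliverable machine parameters so that the delivered dose matches the predicted per-voxel dose. A minimal sketch with a linear dose-influence matrix and projected gradient descent is shown below; the matrix, dose target, and optimizer are synthetic stand-ins, not the paper's collapsed cone convolution pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Given a predicted per-voxel dose d_pred and a linear dose-influence
# matrix A (voxels x beamlets), find nonnegative beamlet weights w whose
# delivered dose A @ w best matches the prediction. A and d_pred are
# synthetic stand-ins for this illustration.
voxels, beamlets = 300, 40
A = rng.random((voxels, beamlets))
w_true = rng.random(beamlets)
d_pred = A @ w_true                        # achievable target for the demo

w = np.zeros(beamlets)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size, 1 / L
for _ in range(20000):
    w -= step * A.T @ (A @ w - d_pred)     # gradient of 0.5*||A w - d||^2
    w = np.maximum(w, 0.0)                 # project onto w >= 0

print(np.max(np.abs(A @ w - d_pred)))      # per-voxel mimicking error
```

In a clinical system the objective is weighted per structure and the forward model is a full dose engine, but the same "match the predicted dose-per-voxel" formulation removes the need to hand-tune dose-volume objectives.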
Nonlinear interaction between underwater explosion bubble and structure based on fully coupled model
NASA Astrophysics Data System (ADS)
Zhang, A. M.; Wu, W. B.; Liu, Y. L.; Wang, Q. X.
2017-08-01
The interaction between an underwater explosion bubble and an elastic-plastic structure is a complex transient process, accompanied by violent bubble collapse, jet impact, penetration through the bubble, and large structural deformation. In the present study, the bubble dynamics are modeled using the boundary element method and the nonlinear transient structural response is modeled using the explicit finite element method. A new fully coupled 3D model is established by coupling the equations for the state variables of the fluid and structure and solving them as a single set of coupled linear algebraic equations. Based on the acceleration potential theory, the mutual dependence between the hydrodynamic load and the structural motion is decoupled. The pressure distribution in the flow field is calculated with the Bernoulli equation, where the partial time derivative of the velocity potential is calculated using the boundary integral method to avoid numerical instabilities. To validate the fully coupled model, experiments on a small-scale underwater explosion near a stiffened plate are carried out. High-speed imaging is used to capture the bubble behavior and strain gauges are used to measure the strain response. The numerical results correspond well with the experimental data in terms of bubble shapes and structural strain response. Using both the loosely coupled model and the fully coupled model, the interaction between a bubble and a hollow spherical shell is studied. The bubble patterns vary with different parameters. When the two models are advanced with the same time step, the error of the loosely coupled model grows as the coupling effect becomes stronger; the fully coupled model is more stable. The influences of the internal fluid on the dynamic response of the spherical shell are also studied. Finally, the interaction of the bubble with an air-backed stiffened plate is simulated, and the associated physical phenomena are presented and discussed.
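The pressure evaluation mentioned above rests on the unsteady Bernoulli equation for a velocity potential φ, which in its standard potential-flow form (as generally used in bubble-dynamics models, not transcribed from this paper) reads

```latex
\frac{p - p_{\infty}}{\rho}
  = -\frac{\partial \varphi}{\partial t}
  - \frac{1}{2}\left|\nabla \varphi\right|^{2}
  - g z ,
```

where p∞ is the ambient pressure, ρ the fluid density, and gz the hydrostatic contribution. The term ∂φ/∂t is the one evaluated via the boundary integral method above, since differencing it numerically in time is the usual source of instability.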
Fully probabilistic earthquake source inversion on teleseismic scales
NASA Astrophysics Data System (ADS)
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. 
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
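The decorrelation misfit D = 1 - CC and its log-normal likelihood can be sketched directly. The waveforms, noise level, and likelihood parameters below are illustrative assumptions, not values from the catalogue described above.

```python
import numpy as np

rng = np.random.default_rng(5)

def decorrelation(obs, syn):
    """D = 1 - CC: normalized zero-lag cross-correlation misfit."""
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

def log_likelihood(D, mu, sigma):
    """Log of a log-normal pdf for D (mu, sigma in log space)."""
    return (-np.log(D * sigma * np.sqrt(2.0 * np.pi))
            - (np.log(D) - mu) ** 2 / (2.0 * sigma ** 2))

# Toy "observed" and "modelled" waveforms: same signal plus modelling noise.
t = np.linspace(0.0, 1.0, 500)
obs = np.sin(2.0 * np.pi * 5.0 * t)
syn = np.sin(2.0 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)

D = decorrelation(obs, syn)
print(D, log_likelihood(D, np.log(0.02), 0.5))
```

In the Bayesian inversion, the per-station D values enter a multivariate log-normal likelihood whose moments depend on SNR and inter-station distance, as described above; the sketch shows only the univariate building block.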
NASA Astrophysics Data System (ADS)
Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo
2016-07-01
The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
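The Pythagorean means of a relaxation-time distribution, the quantities compared as permeability estimators above, can be computed directly from the amplitudes f_i at times T2_i. The spectrum and the SDR-style constants below are illustrative, not data from the studied reservoirs.

```python
import numpy as np

# Toy T2 relaxation-time distribution: amplitudes f on a log-spaced grid.
T2 = np.logspace(-3, 1, 64)                          # relaxation times, s
f = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.4) ** 2) # unimodal toy spectrum
f /= f.sum()                                         # normalize amplitudes

arith = np.sum(f * T2)                   # arithmetic mean
geom = np.exp(np.sum(f * np.log(T2)))    # geometric (logarithmic) mean
harm = 1.0 / np.sum(f / T2)              # harmonic mean

# SDR-style permeability estimator k = a * phi^4 * T2gm^2, with the
# geometric mean in ms; a and phi are illustrative constants.
a, phi = 4.0, 0.25
k_sdr = a * phi ** 4 * (1000.0 * geom) ** 2

print(harm, geom, arith, k_sdr)          # AM >= GM >= HM ordering holds
```

Cut-off-based estimators, by contrast, restrict the sums above to a selected portion of the distribution (e.g. amplitudes above a cumulative-saturation cut-off) before forming the mean, which is the alternative the study evaluates against these means.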
A fiber-based quasi-continuous-wave quantum key distribution system
Shen, Yong; Chen, Yan; Zou, Hongxin; Yuan, Jianmin
2014-01-01
We report a fiber-based quasi-continuous-wave (CW) quantum key distribution (QKD) system with continuous variables (CV). This system employs coherent light pulses and time multiplexing to maximally reduce cross talk in the fiber. No-switching detection scheme is adopted to optimize the repetition rate. Information is encoded on the sideband of the pulsed coherent light to fully exploit the continuous wave nature of laser field. With this configuration, high secret key rate can be achieved. For the 50 MHz detected bandwidth in our experiment, when the multidimensional reconciliation protocol is applied, a secret key rate of 187 kb/s can be achieved over 50 km of optical fiber against collective attacks, which have been shown to be asymptotically optimal. Moreover, recently studied loopholes have been fixed in our system. PMID:24691409
The Role of Graphlets in Viral Processes on Networks
NASA Astrophysics Data System (ADS)
Khorshidi, Samira; Al Hasan, Mohammad; Mohler, George; Short, Martin B.
2018-05-01
Predicting the evolution of viral processes on networks is an important problem with applications arising in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used for the prediction of viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks and recent attempts have been made to use assortativity to address this shortcoming. In this paper, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution in combination with assortativity to explain variations in the evolution of viral processes across networks with identical degree distribution. Using a data-driven approach by coupling predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results not only highlight the importance of graphlets but also identify a small collection of graphlets which may have the highest influence over the viral processes on a network.
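The regression step, explaining virality from graphlet frequency features, can be sketched with synthetic data; the feature matrix and outcome below are illustrative stand-ins for the paper's simulated viral processes on real networks.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: each "network" is described by its normalized
# graphlet frequency vector, and "virality" is a noisy linear function
# of those frequencies (hypothetical effects, for illustration only).
n_nets, n_graphlets = 60, 8
X = rng.random((n_nets, n_graphlets))
X /= X.sum(axis=1, keepdims=True)            # frequencies sum to one
beta = rng.standard_normal(n_graphlets)
y = X @ beta + 0.005 * rng.standard_normal(n_nets)

# Ordinary least squares with an intercept column. The design is rank
# deficient (frequencies sum to one), so lstsq's minimum-norm solution
# is used rather than a normal-equations solve.
A = np.hstack([np.ones((n_nets, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
print(r2)    # fraction of variance in "virality" explained
```

The paper's finding is the empirical analogue: across real networks sharing a degree distribution, such a regression on graphlet frequencies explains over 95% of the variance in virality.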
Free vibration of fully functionally graded carbon nanotube reinforced graphite/epoxy laminates
NASA Astrophysics Data System (ADS)
Kuo, Shih-Yao
2018-03-01
This study provides the first-known vibration analysis of fully functionally graded carbon nanotube reinforced hybrid composite (FFG-CNTRHC) laminates. CNTs are non-uniformly distributed to reinforce the graphite/epoxy laminates. Some CNT distribution functions in the plane and thickness directions are proposed to increase the stiffening effect more efficiently. The rule of mixtures is modified to account for the non-homogeneous material properties of FFG-CNTRHC laminates. The formulation of the location-dependent stiffness matrix and mass matrix is derived. The effects of CNT volume fraction and distribution on the natural frequencies of FFG-CNTRHC laminates are discussed. The results reveal that the FFG layout may significantly increase the natural frequencies of FFG-CNTRHC laminates.
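The location dependence introduced by functional grading can be sketched with a rule-of-mixtures modulus that varies through the thickness. The grading function, efficiency parameter, and moduli below are illustrative assumptions, not the study's values.

```python
import numpy as np

# Rule-of-mixtures sketch for a functionally graded CNT distribution
# through the laminate thickness: the CNT volume fraction varies with z,
# making the effective modulus location dependent.
E_cnt, E_m = 1000.0, 2.5           # GPa: CNT and matrix Young's moduli
eta, V_avg = 0.14, 0.11            # CNT efficiency parameter, mean fraction

z = np.linspace(-0.5, 0.5, 101)            # normalized thickness coordinate
V_fg = 2.0 * V_avg * (2.0 * np.abs(z))     # graded: CNT-rich outer surfaces
V_ud = V_avg * np.ones_like(z)             # uniform distribution, contrast

def E_eff(V):
    # Modified rule of mixtures for the location-dependent modulus.
    return eta * V * E_cnt + (1.0 - V) * E_m

# The graded layout places stiffness where bending strain is largest
# (the outer surfaces), raising the bending-stiffness-like integral E*z^2.
D_fg = np.sum(E_eff(V_fg) * z ** 2)
D_ud = np.sum(E_eff(V_ud) * z ** 2)
print(D_fg > D_ud)
```

Higher bending stiffness at comparable mass is exactly why the graded layouts raise the natural frequencies relative to a uniform CNT distribution.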
1992-12-01
[Extraction residue: table-of-contents and acronym-list fragments. Recoverable entries: Ground-Based Mission Planning Systems; 2.3 Networking Mission Planning Systems; 2.4 Fully Automated Mission Planning; 2.5 Unmanned Air Vehicles. Acronyms: MEZ, Missile Engagement Zone; RPV, Remotely Piloted Vehicle; MIDS, Multifunction Information Distribution System; OPORD, Operations Order; UAV, Unmanned Air Vehicle.]
1978-07-01
[Extraction residue: garbled table fragment from the AML Resource Management Report, Part C, Section 1, Base Operations Functions, followed by a truncated sentence addressing leaders in the materials handling field (editors, educators, and industry professionals).]
A Fully Distributed Approach to the Design of a KBIT/SEC VHF Packet Radio Network,
1984-02-01
topological change and consequent outmoded routing data. Algorithm development has been aided by computer simulation using a finite-state-machine technique to model a realistic network of up to fifty nodes. This work is motivated by the increasing use of computer-based equipment in weapons systems and their associated sensors and command and control elements, and by the trend from voice to data communications.
NASA Astrophysics Data System (ADS)
Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping
2017-02-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, measuring the total absolute blood flow velocity (BFV) of major cerebral arteries remains difficult, since it depends not only on the properties of the laser and the scattering particles, but also on the geometry between the laser beam and the flow direction. In this paper, focusing on the analysis of cerebral hemodynamics, we present a method to quantify the total absolute blood flow velocity in the middle cerebral artery (MCA) based on volumetric vessel reconstruction from pure DOCT images. A modified region-growing segmentation method is first used to localize the MCA on successive DOCT B-scan images. Vessel skeletonization, followed by an averaging gradient-angle calculation, is then carried out to obtain Doppler angles along the entire MCA. Once the Doppler angles are determined, the absolute blood flow velocity at each position on the MCA is easily found. In experiments conducted with a swept-source optical coherence tomography system, given a seed point on the MCA, our approach achieved automatic quantification of the fully distributed absolute BFV across different vessel branches in the rodent brain.
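The final step described above, converting the measured axial Doppler velocity to an absolute one once the Doppler angle is known, reduces to a cosine correction. A minimal sketch, with the function name being our own:

```python
import numpy as np

# Hypothetical helper (our naming): DOCT measures only the velocity component
# along the beam, so the absolute flow speed follows from the Doppler angle
# between the beam and the vessel axis.
def absolute_velocity(v_axial, doppler_angle_deg):
    theta = np.radians(doppler_angle_deg)
    return v_axial / np.cos(theta)
```

In practice the angle comes from the vessel skeleton's local gradient, and angles near 90 degrees (beam perpendicular to flow) make the correction ill-conditioned.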
Bosomprah, Samuel; Tatem, Andrew J; Dotse-Gborgbortsi, Winfred; Aboagye, Patrick; Matthews, Zoe
2016-01-01
To provide clear policy directions for gaps in the provision of signal function services and sub-regions requiring priority attention, using data from the 2010 Ghana Emergency Obstetric and Newborn Care (EmONC) survey. Using the 2010 survey data, the fraction of facilities with only one or two signal functions missing was calculated for each facility type and EmONC designation. Thematic maps were used to provide insight into inequities in service provision. Of 1159 maternity facilities, 89 provided all the necessary basic or comprehensive EmONC signal functions 3 months prior to the 2010 survey. Only 21% of facility-based births were in fully functioning EmONC facilities, but an additional 30% occurred in facilities missing one or two basic signal functions, most often assisted vaginal delivery and removal of retained products. Tackling these missing signal functions would extend births taking place in fully functioning facilities to over 50%. Subnational analyses based on estimated total pregnancies in each district revealed a pattern of inequity in service provision across the country. Upgrading facilities missing only one or two signal functions will allow Ghana to meet international standards for availability of EmONC services. Reducing maternal deaths will require high national priority given to addressing inequities in the distribution of EmONC services. Copyright © 2015 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
Mishima, T; Kao, K C
1982-03-15
A new laser interferometry technique has been developed, based on the principle that a two-dimensional fringe pattern can be produced by the interference of spatially coherent light beams. To avoid the effect of reflection from the back surface of the substrate, the Brewster angle of incidence is adopted; to suppress the effect of diffraction, a lens or a lens system is used. This laser interferometry is an efficient nondestructive technique for determining the thickness distribution or uniformity of low-absorbing films on transparent substrates over a large area without laborious computations. The limitations of spatial resolution, thickness resolution, and fringe visibility are fully analyzed.
Photonic integrated circuits unveil crisis-induced intermittency.
Karsaklian Dal Bosco, Andreas; Akizawa, Yasuhiro; Kanno, Kazutaka; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki
2016-09-19
We experimentally investigate an intermittent route to chaos in a photonic integrated circuit consisting of a semiconductor laser with time-delayed optical feedback from a short external cavity. The transition from period-doubling dynamics to fully developed chaos reveals a stage that intermittently exhibits these two dynamics. We unveil the bifurcation mechanism underlying this route to chaos by using the Lang-Kobayashi model and demonstrate that the process is based on a phenomenon of attractor expansion initiated by a particular distribution of the local Lyapunov exponents. We emphasize the crucial importance of the distribution of the steady-state solutions introduced by the time-delayed feedback for the existence of this intermittent dynamics.
Proving refinement transformations using extended denotational semantics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, V.L.; Boyle, J.M.
1996-04-01
TAMPR is a fully automatic transformation system based on syntactic rewrites. Our approach in a correctness proof is to map the transformation into an axiomatized mathematical domain where formal (and automated) reasoning can be performed. This mapping is accomplished via an extended denotational semantic paradigm. In this approach, the abstract notion of a program state is distributed between an environment function and a store function. Such a distribution introduces properties that go beyond the abstract state that is being modeled. The reasoning framework needs to be aware of these properties in order to successfully complete a correctness proof. This paper discusses some of our experiences in proving the correctness of TAMPR transformations.
NASA Astrophysics Data System (ADS)
Liu, Weibo; Jin, Yan; Price, Mark
2016-10-01
A new heuristic based on the Nawaz-Enscore-Ham (NEH) algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis of processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced for achieving effective job insertion, with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
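The NEH-style procedure the abstract builds on (rank jobs by a priority rule, then insert each job at the makespan-minimizing position) can be sketched as follows. The moment-based priority here is a made-up stand-in for the article's rule; its form and weight are assumptions, as is the small example instance.

```python
import numpy as np

def makespan(seq, p):
    """Completion time of the last job in a permutation flow shop.
    p is a (jobs x machines) matrix of processing times."""
    m = p.shape[1]
    C = np.zeros(m)                      # running completion time per machine
    for j in seq:
        C[0] += p[j, 0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j, k]
    return C[-1]

def neh(p, priority):
    """NEH: sort jobs by the priority rule (descending), then insert each
    job at the position in the partial sequence minimizing makespan."""
    order = sorted(range(p.shape[0]), key=priority, reverse=True)
    seq = [order[0]]
    for j in order[1:]:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

# Hypothetical priority mixing the mean and spread of a job's processing
# times, loosely inspired by the article's moment-based rule (the 0.5
# weight is invented for illustration).
def moment_priority(p):
    def rule(j):
        row = p[j]
        return row.mean() + 0.5 * np.abs(row - row.mean()).mean()
    return rule

p = np.array([[3, 4, 2], [5, 1, 3], [2, 6, 4], [4, 3, 5]])  # jobs x machines
seq = neh(p, moment_priority(p))
```

The article's contribution lies in the specific priority and tie-breaking rules; the insertion skeleton above is the classic NEH procedure.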
Bonabeau model on a fully connected graph
NASA Astrophysics Data System (ADS)
Malarz, K.; Stauffer, D.; Kułakowski, K.
2006-03-01
Numerical simulations are reported on the Bonabeau model on a fully connected graph, where spatial degrees of freedom are absent. The control parameter is the memory factor f. A phase transition is observed in the dispersion of the agents' power hi. The critical value fC shows hysteretic behavior with respect to the initial distribution of hi, and decreases with the system size; this decrease can be compensated by a greater number of fights between successive global reductions of the distribution width of hi. The latter step is equivalent to a partial forgetting.
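A minimal simulation of the Bonabeau model on a fully connected graph can be sketched as below: random pairwise fights with a Fermi win probability, followed by multiplicative forgetting controlled by the memory factor f. Parameter values and the exact update schedule are illustrative assumptions, not those of the paper.

```python
import numpy as np

def bonabeau_sweep(h, f, eta=1.0, rng=None):
    """One sweep of pairwise fights on a fully connected graph, followed by
    forgetting. The winner gains one unit of power h, the loser loses one;
    agent i beats j with probability 1 / (1 + exp(eta * (h[j] - h[i])))."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(h)
    for _ in range(n):
        i, j = rng.choice(n, size=2, replace=False)
        if rng.random() < 1.0 / (1.0 + np.exp(eta * (h[j] - h[i]))):
            h[i] += 1.0
            h[j] -= 1.0
        else:
            h[i] -= 1.0
            h[j] += 1.0
    h *= (1.0 - f)   # the memory factor f erodes accumulated power
    return h

rng = np.random.default_rng(0)
h = np.zeros(50)                 # 50 agents, all starting with zero power
for _ in range(200):
    h = bonabeau_sweep(h, f=0.1, rng=rng)
dispersion = h.std()             # order parameter: width of the power distribution
```

Locating the transition would mean scanning f and watching whether the dispersion stays finite (hierarchical phase) or collapses (egalitarian phase), with the hysteresis probed by varying the initial spread of h.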
ROBOCAL: An automated NDA (nondestructive analysis) calorimetry and gamma isotopic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Powell, W.D.; Ostenak, C.A.
1989-11-01
ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototype robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multidrawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface is provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices.
NASA Technical Reports Server (NTRS)
Collins, Oliver (Inventor); Dolinar, Jr., Samuel J. (Inventor); Hus, In-Shek (Inventor); Bozzola, Fabrizio P. (Inventor); Olson, Erlend M. (Inventor); Statman, Joseph I. (Inventor); Zimmerman, George A. (Inventor)
1991-01-01
A method of formulating and packaging decision-making elements into a long-constraint-length Viterbi decoder, which involves formulating the decision-making processors as individual Viterbi butterfly processors interconnected in a deBruijn graph configuration. A fully distributed architecture, which achieves high decoding speeds, is made feasible by novel wiring and partitioning of the state diagram. This partitioning defines universal modules that can be used to build any size of decoder, such that a large number of wires is contained inside each module and a small number of wires is needed to connect modules. The total system is modular and hierarchical; it implements a large proportion of the required wiring internally within modules and may include some external wiring to fully complete the deBruijn graph.
Buddhavarapu, Prasad; Smit, Andre F; Prozzi, Jorge A
2015-07-01
Permeable friction course (PFC), a porous hot-mix asphalt, is typically applied to improve wet weather safety on high-speed roadways in Texas. In order to warrant expensive PFC construction, a statistical evaluation of its safety benefits is essential. Generally, the literature on the effectiveness of porous mixes in reducing wet-weather crashes is limited and often inconclusive. In this study, the safety effectiveness of PFC was evaluated using a fully Bayesian before-after safety analysis. First, two groups of road segments overlaid with PFC and non-PFC material were identified across Texas; the non-PFC or reference road segments selected were similar to their PFC counterparts in terms of site specific features. Second, a negative binomial data generating process was assumed to model the underlying distribution of crash counts of PFC and reference road segments to perform Bayesian inference on the safety effectiveness. A data-augmentation based computationally efficient algorithm was employed for a fully Bayesian estimation. The statistical analysis shows that PFC is not effective in reducing wet weather crashes. It should be noted that the findings of this study are in agreement with the existing literature, although these studies were not based on a fully Bayesian statistical analysis. Our study suggests that the safety effectiveness of PFC road surfaces, or any other safety infrastructure, largely relies on its interrelationship with the road user. The results suggest that the safety infrastructure must be properly used to reap the benefits of the substantial investments. Copyright © 2015 Elsevier Ltd. All rights reserved.
Experimental and numerical studies of micro PEM fuel cell
NASA Astrophysics Data System (ADS)
Peng, Rong-Gui; Chung, Chen-Chung; Chen, Chiun-Hsun
2011-10-01
A single micro proton exchange membrane fuel cell (PEMFC) has been produced using micro-electromechanical systems (MEMS) technology, with an active area of 2.5 cm2 and a channel depth of about 500 µm. A theoretical analysis is performed in this study for a novel MEMS-based design of a micro PEMFC. The model consists of the conservation equations of mass, momentum, species and electric current in a fully integrated finite-volume solver using the CFD-ACE+ commercial code. The simulated polarization curves correlate well with experimental data. Three-dimensional simulations are carried out to predict and analyze the micro PEMFC temperature, current density and water distributions at two different fuel flow rates (15 cm3/min and 40 cm3/min). Simulation results show that the temperature distribution within the micro PEMFC is affected by the water distribution in the membrane, and indicate that a low and uniform temperature distribution in the membrane at low fuel flow rates increases membrane water content and yields a superior micro PEMFC current density distribution at a 0.4 V operating voltage.
NASA Astrophysics Data System (ADS)
Bastola, S.; Dialynas, Y. G.; Arnone, E.; Bras, R. L.
2014-12-01
The spatial variability of soil, vegetation, topography, and precipitation controls hydrological processes, resulting in high spatio-temporal variability of most hydrological variables, such as soil moisture. The limitations of existing measurement systems in characterizing this spatial variability, together with its importance in various applications, have created a need to reconcile spatially distributed soil-moisture models with corresponding measurements. Fully distributed ecohydrological models simulate soil moisture at high resolution, which is relevant for a range of environmental applications, e.g., flood forecasting. They can also be used to evaluate the value of space-borne soil moisture data by assimilating it into hydrological models. In this study, fine-resolution soil moisture simulated by a physically-based distributed hydrological model, tRIBS-VEGGIE, is compared with soil moisture data collected during a field campaign in the Turkey River basin, Iowa. The soil moisture series at the 2 and 4 inch depths exhibited a more rapid response to rainfall than those at the deeper 8 and 20 inch depths. The spatial variability across two distinct land surfaces of the Turkey River basin reflects the control of vegetation, topography and soil texture. The comparison of observed and simulated soil moisture at various depths showed that the model was able to capture the dynamics of soil moisture at a number of gauging stations. Discrepancies are large at some gauging stations, which are characterized by rugged terrain represented in the model through large computational units.
Effect of polarization on the evolution of electromagnetic hollow Gaussian Schell-model beam
NASA Astrophysics Data System (ADS)
Long, Xuewen; Lu, Keqing; Zhang, Yuhong; Guo, Jianbang; Li, Kehao
2011-02-01
Based on coherence theory, an analytical propagation formula for partially polarized and partially coherent hollow Gaussian Schell-model beams (HGSMBs) passing through a paraxial optical system is derived. Furthermore, we show that the degree of polarization of the source may affect the evolution of HGSMBs, and a tunable dark region may exist. For the two special cases of fully coherent and partially coherent beams with δxx = δyy, the normalized intensity distributions are independent of the source polarization.
Distributed Heterogeneous Simulation of a Hybrid-Electric Vehicle
2006-03-29
voltage dc bus via a fully controlled three-phase bridge converter. Also connected to the bus are the lithium-ion battery bank and the ultra-capacitor, with a DC-DC converter feeding a low-voltage bus. Energy storage devices in the propulsion system include the lithium-ion battery bank and the ultra-capacitor, based on the range of the vehicle in the stealth mode.
Fully programmable and scalable optical switching fabric for petabyte data center.
Zhu, Zhonghua; Zhong, Shan; Chen, Li; Chen, Kai
2015-02-09
We present a converged EPS and OCS switching fabric for data center networks (DCNs) based on a distributed optical switching architecture leveraging both WDM and SDM technologies. The architecture is topology adaptive and well suited to dynamic and diverse *-cast traffic patterns. Compared to a typical folded-Clos network, the new architecture is more readily scalable to future multi-petabyte data centers with 1000+ racks, while providing higher link bandwidth, reducing transceiver count by 50%, and improving cabling efficiency by more than 90%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motie, Iman; Bokaeeyan, Mahyar, E-mail: Mehyar9798@gmail.com
2015-02-15
A close analysis of the dust charging process in the presence of a radio frequency (RF) discharge in low-pressure, fully ionized plasma, for both weak and strong discharge electric fields, is presented. When electromagnetic waves pass through the fully ionized plasma, the collision frequency of the plasma is derived. Moreover, the disturbed distribution function of plasma particles in the presence of the RF discharge is obtained. In this article, using the Krook model, we separate the distribution function into two parts, the Maxwellian part and the perturbed part. The perturbed part of the distribution can produce an extra current, the so-called accretion rate of the electron (or ion) current, towards a dust particle as a function of the average electron-ion collision frequency. It is proven that when the potential of the dust grains increases, the accretion rate of the electron current decreases exponentially. Furthermore, the accretion rate of the electron current for a strong electric field is smaller than that for a weak electric field. The reasons are elaborated.
Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R.; ...
2015-06-18
Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections using a fully automated droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system for spatially resolved sampling, HPLC separation, and mass spectral detection. Excellent correlation was found between the protein distribution data obtained with this droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system and those data obtained with matrix-assisted laser desorption ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis) and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared to non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as anticipated. This work demonstrates that a fully automated droplet-based liquid microjunction surface sampling system coupled to HPLC-ESI-MS/MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically-relevant peptide and protein hormones, such as AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity and specificity of the current methodology support the potential of this basic technology, with further advancement, for assisting surgical decision-making.
Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R; Changelian, Armen; Laws, Edward R; Santagata, Sandro; Agar, Nathalie Y R; Van Berkel, Gary J
2015-08-01
Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections, using a fully automated droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS system for spatially resolved sampling, HPLC separation, and mass spectrometric detection. Excellent correlation was found between the protein distribution data obtained with this method and data obtained with matrix-assisted laser desorption/ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis), and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared with non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as expected. This work reveals that a fully automated droplet-based liquid-microjunction surface-sampling system coupled to HPLC-ESI-MS-MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically-relevant peptide and protein hormones, including AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity, and specificity of this method support the potential of this basic technology, with further advancement, for assisting surgical decision-making. Graphical Abstract Mass spectrometry based profiling of hormones in human pituitary gland and tumor thin tissue sections.
Reduction of chemical formulas from the isotopic peak distributions of high-resolution mass spectra.
Roussis, Stilianos G; Proulx, Richard
2003-03-15
A method has been developed for the reduction of the chemical formulas of compounds in complex mixtures from the isotopic peak distributions of high-resolution mass spectra. The method is based on the principle that the observed isotopic peak distribution of a mixture of compounds is a linear combination of the isotopic peak distributions of the individual compounds in the mixture. All possible chemical formulas that meet specific criteria (e.g., type and number of atoms in structure, limits of unsaturation, etc.) are enumerated, and theoretical isotopic peak distributions are generated for each formula. The relative amount of each formula is obtained from the accurately measured isotopic peak distribution and the calculated isotopic peak distributions of all candidate formulas. The formulas of compounds in simple spectra, where peak components are fully resolved, are rapidly determined by direct comparison of the calculated and experimental isotopic peak distributions. The singular value decomposition linear algebra method is used to determine the contributions of compounds in complex spectra containing unresolved peak components. The principles of the approach and typical application examples are presented. The method is most useful for the characterization of complex spectra containing partially resolved peaks and structures with multiisotopic elements.
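The core linear-algebra step described above, expressing an observed isotopic peak distribution as a linear combination of candidate theoretical patterns and solving by SVD-based least squares, can be sketched with synthetic data. The patterns and mixing amounts below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Invented theoretical isotopic patterns (normalized intensities on a shared
# m/z grid) for two hypothetical candidate formulas.
pattern_a = np.array([0.80, 0.17, 0.03, 0.00])
pattern_b = np.array([0.60, 0.28, 0.10, 0.02])

# Synthetic "measured" spectrum: a linear mixture of the two patterns.
observed = 3.0 * pattern_a + 1.5 * pattern_b

# Model the observed distribution as a linear combination of the candidate
# patterns; numpy's lstsq is SVD-based, matching the decomposition the
# method uses for unresolved peak components.
M = np.column_stack([pattern_a, pattern_b])
amounts, *_ = np.linalg.lstsq(M, observed, rcond=None)
# amounts holds the recovered contribution of each candidate formula
```

In the real method the candidate set is enumerated from elemental constraints (atom counts, unsaturation limits), and the conditioning of M degrades as candidate patterns become nearly collinear.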
Kokaly, R.F.; King, T.V.V.; Hoefen, T.M.
2011-01-01
Identifying materials by measuring and analyzing their reflectance spectra has been an important method in analytical chemistry for decades. Airborne and space-based imaging spectrometers allow scientists to detect materials and map their distributions across the landscape. With new satellite-borne hyperspectral sensors planned for the future, for example, HYSPIRI (HYPerspectral InfraRed Imager), robust methods are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and mapping materials using spectral-feature-based analysis of reflectance data in an expert-system framework called MICA (Material Identification and Characterization Algorithm) is described in this paper. The core concepts and calculations of MICA are presented. A MICA command file has been developed and applied to map minerals in the full-country coverage of the 2007 Afghanistan HyMap hyperspectral data. © 2011 IEEE.
NASA Astrophysics Data System (ADS)
Flynn, Brendan P.; D'Souza, Alisha V.; Kanick, Stephen C.; Maytin, Edward; Hasan, Tayyaba; Pogue, Brian W.
2013-03-01
Aminolevulinic acid (ALA)-induced protoporphyrin IX (PpIX)-based photodynamic therapy (PDT) is an effective treatment for skin cancers including basal cell carcinoma (BCC). Topically applied ALA promotes PpIX production preferentially in tumors, and many strategies have been developed to increase PpIX production, but the PpIX distribution and PDT treatment efficacy at depths > 1 mm are not fully understood. While surface imaging techniques provide useful diagnosis, dosimetry, and efficacy information for superficial tumors, these methods cannot interrogate deeper tumors to provide in situ insight into spatial PpIX distributions. We have developed an ultrasound-guided, white-light-informed, tomographic spectroscopy system for the spatial measurement of subsurface PpIX. Detailed imaging system specifications, methodology, and optical-phantom-based characterization are presented separately. Here we evaluate preliminary in vivo results using both full tomographic reconstruction and individual tomographic source-detector pair data plotted against US images.
Peled, Yair; Motil, Avi; Kressel, Iddo; Tur, Moshe
2013-05-06
We report a Brillouin-based, fully distributed and dynamic monitoring of the strain induced by a propagating mechanical wave along a 20 m long composite strip, to whose surface a single-mode optical fiber was glued. Employing a simplified version of the Slope-Assisted Brillouin Optical Time Domain Analysis (SA-BOTDA) technique, the whole length of the strip was interrogated every 10 ms (a strip sampling rate of 100 Hz) with a spatial resolution of the order of 1 m. A dynamic, spatially and temporally continuous map of the strain was obtained, whose temporal behavior at four discrete locations was verified against co-located fiber Bragg gratings. With a trade-off among sampling rate, range, and signal-to-noise ratio, kHz sampling rates and hundreds of meters of range can be obtained with resolution down to a few centimeters.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
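Composite scaled sensitivities, one of the statistics listed above, can be approximated for any forward model by finite differences. A sketch under the usual definition css_j = sqrt(mean_i((dy_i/db_j) * b_j * sqrt(w_i))^2); the toy model and weights below are our own illustrative stand-ins, not TOPKAPI.

```python
import numpy as np

def composite_scaled_sensitivity(model, params, weights, rel_step=0.01):
    """Composite scaled sensitivities via central finite differences.
    css[j] summarizes how much the weighted observations, as a set,
    inform parameter j of the forward model."""
    params = np.asarray(params, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    css = np.zeros(len(params))
    for j, b in enumerate(params):
        step = rel_step * (abs(b) if b != 0 else 1.0)
        up, dn = params.copy(), params.copy()
        up[j] += step
        dn[j] -= step
        dy_db = (model(up) - model(dn)) / (2.0 * step)   # sensitivities dy_i/db_j
        css[j] = np.sqrt(np.mean((dy_db * b * w) ** 2))
    return css

# Toy stand-in for a rainfall-runoff model: two parameters, three outputs.
def toy_model(b):
    return np.array([b[0] * 2.0, b[0] + b[1], b[1] ** 2])

css = composite_scaled_sensitivity(toy_model, [1.0, 2.0], [1.0, 1.0, 1.0])
```

With a real model the two extra runs per parameter dominate the cost, which is why the abstract's count of roughly 71 runs for 35 parameters is the expected order of magnitude.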
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, commonly known as the bias field. In this paper, we present a novel region-based approach, built on local entropy, for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain, so the bias field prior is fully exploited and our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded
NASA Technical Reports Server (NTRS)
Culley, Dennis
2010-01-01
Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine, a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control-law processor from the hostile engine environment, using a digital communications network and engine-mounted high-temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.
ERIC Educational Resources Information Center
Mistler, Stephen A.; Enders, Craig K.
2017-01-01
Multiple imputation methods can generally be divided into two broad frameworks: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution, whereas FCS imputes variables one at a time from a series of univariate conditional…
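The univariate conditional step at the heart of FCS can be illustrated with a toy sketch (hypothetical data and a single imputation pass; real FCS cycles over all incomplete variables and repeats the draws for multiple imputations):

```python
import random

random.seed(0)

# Toy data: y ≈ 2x + noise, with y missing (None) at some indices.
x = [i / 10 for i in range(50)]
y_true = [2 * xi + random.gauss(0, 0.1) for xi in x]
missing = set(range(0, 50, 7))
y_obs = [yi if i not in missing else None for i, yi in enumerate(y_true)]

def fit_line(xs, ys):
    """Least-squares slope and intercept from complete cases."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sxx, my - (sxy / sxx) * mx

# One FCS-style pass: fit the univariate conditional on observed cases,
# then draw each missing value from the fitted model plus noise.
cx = [xi for xi, yi in zip(x, y_obs) if yi is not None]
cy = [yi for yi in y_obs if yi is not None]
slope, intercept = fit_line(cx, cy)
imputed = [yi if yi is not None
           else slope * xi + intercept + random.gauss(0, 0.1)
           for xi, yi in zip(x, y_obs)]
```

In full FCS this draw would cycle over every incomplete variable in turn, whereas a JM approach would draw all incomplete variables at once from a multivariate distribution.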
Isotope Induced Proton Ordering in Partially Deuterated Aspirin
NASA Astrophysics Data System (ADS)
Schiebel, P.; Papoular, R. J.; Paulus, W.; Zimmermann, H.; Detken, A.; Haeberlen, U.; Prandl, W.
1999-08-01
We report the nuclear density distribution of partially deuterated aspirin, C8H5O4-CH2D, at 300 and 15 K, as determined by neutron diffraction coupled with maximum entropy method image reconstruction. While fully protonated and fully deuterated methyl groups in aspirin are delocalized at low temperatures due to quantum mechanical tunneling, we provide here direct evidence that in aspirin-CH2D at 15 K the methyl hydrogens are localized, while randomly distributed over three sites at 300 K. This is the first observation by diffraction methods of low-temperature isotopic ordering in condensed matter.
Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; González, Santiago; Robles, Víctor; DeFelipe, Javier; Larrañaga, Pedro; Bielza, Concha
2014-01-01
In the cerebral cortex, most synapses are found in the neuropil, but relatively little is known about their 3-dimensional organization. Using an automated dual-beam electron microscope that combines focused ion beam milling and scanning electron microscopy, we have been able to obtain 10 three-dimensional samples with an average volume of 180 µm³ from the neuropil of layer III of the young rat somatosensory cortex (hindlimb representation). We have used specific software tools to fully reconstruct 1695 synaptic junctions present in these samples and to accurately quantify the number of synapses per unit volume. These tools also allowed us to determine synapse position and to analyze their spatial distribution using spatial statistical methods. Our results indicate that the distribution of synaptic junctions in the neuropil is nearly random, only constrained by the fact that synapses cannot overlap in space. A theoretical model based on random sequential absorption, which closely reproduces the actual distribution of synapses, is also presented. PMID:23365213
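The random-sequential-absorption idea in the abstract above can be sketched as follows (hypothetical box size, sphere radius, and units; boundary effects ignored): candidate centres placed uniformly at random are accepted only if they do not overlap any previously placed sphere.

```python
import random

random.seed(1)

# Random sequential absorption sketch: spheres of fixed radius are placed
# uniformly at random in a box and kept only if they overlap no earlier
# sphere (illustrative parameters, not the paper's fitted model).
BOX, RADIUS, ATTEMPTS = 10.0, 0.5, 2000

def overlaps(p, placed, d=2 * RADIUS):
    """True if centre p lies within one diameter of any placed centre."""
    return any(sum((a - b) ** 2 for a, b in zip(p, q)) < d * d for q in placed)

placed = []
for _ in range(ATTEMPTS):
    cand = tuple(random.uniform(0, BOX) for _ in range(3))
    if not overlaps(cand, placed):
        placed.append(cand)
```

The accepted centres are random except for the non-overlap constraint, which is exactly the property the abstract reports for synaptic junctions.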
Intelligent distributed medical image management
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.
1995-05-01
The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute-based and content-based retrievals.
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
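The "random exchange" communication pattern can be illustrated with a toy sketch (a ring of nodes swapping particle sets with randomly chosen neighbours; the weighting and measurement-update equations of the actual ReDif-PF are omitted):

```python
import random

random.seed(2)

# Toy random-exchange step on a ring network: each node swaps its entire
# particle set with one randomly chosen neighbour (illustrative sketch).
N_NODES = 6
particles = {i: [random.gauss(0, 1) for _ in range(20)] for i in range(N_NODES)}

def redif_step(particles):
    order = list(particles)
    random.shuffle(order)
    for i in order:
        j = random.choice([(i - 1) % N_NODES, (i + 1) % N_NODES])
        particles[i], particles[j] = particles[j], particles[i]

total_before = sum(len(v) for v in particles.values())
for _ in range(10):
    redif_step(particles)
total_after = sum(len(v) for v in particles.values())
```

Because each step is a pairwise swap between neighbours, the communication cost per measurement interval stays fixed, which is the property the abstract credits for the two-orders-of-magnitude saving over gossip- and Markov-chain-based alternatives.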
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site Specific Weed Management (SSWM) is crucial to ensure crop yields. Within SSWM of a large-scale area, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. The Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently-solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10⁴ to 10⁵ individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
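A mixture density network outputs the parameters of a Gaussian mixture; the posterior pdf is then just a weighted sum of Gaussians, as in this sketch with illustrative mixture parameters (not values from the talk):

```python
import math

# Toy mixture pdf of the kind an MDN emits: weights, means, widths.
weights = [0.6, 0.4]
means = [1.0, 3.0]
sigmas = [0.5, 0.8]

def mixture_pdf(x):
    """Weighted sum of Gaussian components evaluated at x."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

# Sanity check: the pdf integrates to one (trapezoidal rule on [-5, 10]).
xs = [-5 + 0.01 * i for i in range(1501)]
area = sum(0.01 * 0.5 * (mixture_pdf(a) + mixture_pdf(b)) for a, b in zip(xs, xs[1:]))
```

Unlike a linearized inversion, such a mixture can represent the multimodality and asymmetric uncertainty the talk emphasizes.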
Ensemble models of proteins and protein domains based on distance distribution restraints.
Jeschke, Gunnar
2016-04-01
Conformational ensembles of intrinsically disordered peptide chains are not fully determined by experimental observations. Uncertainty due to lack of experimental restraints and due to intrinsic disorder can be distinguished if distance distribution restraints are available. Such restraints can be obtained from pulsed dipolar electron paramagnetic resonance (EPR) spectroscopy applied to pairs of spin labels. Here, we introduce a Monte Carlo approach for generating conformational ensembles that are consistent with a set of distance distribution restraints, backbone dihedral angle statistics in known protein structures, and, optionally, secondary structure propensities or membrane immersion depths. The approach is tested with simulated restraints for a terminal and an internal loop and for a protein with 69 residues by using sets of sparse restraints for underlying well-defined conformations and for published ensembles of a premolten globule-like and a coil-like intrinsically disordered protein. © 2016 Wiley Periodicals, Inc.
Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM
NASA Astrophysics Data System (ADS)
Liang, Zijun; Lin, Shunjiang; Liu, Mingbo
2017-05-01
Distributed optimal power flow (OPF) is of great importance and challenge for an AC/DC interconnected power grid with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of an AC/DC interconnected power grid, called synchronous ADMM, is proposed; it requires no form of central controller. The algorithm is based on the fundamental alternating direction method of multipliers (ADMM): the average value of the boundary variables of adjacent regions obtained from the current iteration is used as the reference value of both regions for the next iteration, which realizes parallel computation among the regions. The algorithm is tested on an IEEE 11-bus AC/DC interconnected power grid, and a comparison with a centralized algorithm shows nearly no difference in the results, validating its correctness and effectiveness.
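The boundary-averaging iteration can be illustrated on a toy two-region problem (quadratic costs and a penalty weight chosen purely for illustration, not an OPF model): each region minimises its own cost plus a penalty tying its boundary variable to the average computed from the previous iteration.

```python
# Toy synchronous-ADMM-style consensus: two regions with quadratic costs
# (x-1)^2 and (x-3)^2 share one boundary variable; each iteration uses the
# previous average as the reference value for both regions.
RHO = 1.0
xa, xb = 0.0, 0.0
for _ in range(50):
    ref = 0.5 * (xa + xb)                   # averaged boundary value
    xa = (1.0 + RHO * ref) / (1.0 + RHO)    # argmin (x-1)^2 + RHO*(x-ref)^2
    xb = (3.0 + RHO * ref) / (1.0 + RHO)    # argmin (x-3)^2 + RHO*(x-ref)^2
consensus = 0.5 * (xa + xb)
```

Both regions solve only their own subproblem and exchange only the boundary value, which is what removes the need for a central controller.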
Suboptimal distributed control and estimation: application to a four coupled tanks system
NASA Astrophysics Data System (ADS)
Orihuela, Luis; Millán, Pablo; Vivas, Carlos; Rubio, Francisco R.
2016-06-01
The paper proposes an innovative estimation and control scheme that enables the distributed monitoring and control of large-scale processes. The proposed approach considers a discrete linear time-invariant process controlled by a network of agents that may both collect information about the evolution of the plant and apply control actions to drive its behaviour. The problem makes full sense when local observability/controllability is not assumed and the communication between agents can be exploited to reach system-wide goals. Additionally, to reduce the agents' bandwidth requirements and power consumption, an event-based communication policy is studied. The design procedure guarantees system stability, allowing the designer to trade off performance, control effort and communication requirements. The obtained controllers and observers are implemented in a fully distributed fashion. To illustrate the performance of the proposed technique, experimental results on a quadruple-tank process are provided.
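The event-based communication policy mentioned above can be sketched with a hypothetical drifting signal: an agent transmits only when its state deviates from the last transmitted value by more than a threshold, trading estimation accuracy for bandwidth (this is not the paper's quadruple-tank model).

```python
# Event-triggered transmission sketch: broadcast the state only when it
# has drifted more than DELTA from the last transmitted value.
DELTA = 0.5
signal = [0.1 * k for k in range(100)]      # slowly drifting state
sent, last = [], None
for k, x in enumerate(signal):
    if last is None or abs(x - last) > DELTA:
        sent.append((k, x))
        last = x
transmissions = len(sent)
```

Between events the receiver simply holds the last transmitted value, so the reconstruction error stays bounded by roughly DELTA while far fewer messages are sent than samples taken.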
Space radiation absorbed dose distribution in a human phantom
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Atwell, W.; Badavi, F. F.; Yang, T. C.; Cleghorn, T. F.
2002-01-01
The radiation risk to astronauts has always been based on measurements using passive thermoluminescent dosimeters (TLDs). The skin dose is converted to dose equivalent using an average radiation quality factor based on model calculations. The radiological risk estimates, however, are based on organ and tissue doses. This paper describes results from the first space flight (STS-91, 51.65 degrees inclination and approximately 380 km altitude) of a fully instrumented Alderson Rando phantom torso (with head) to relate the skin dose to organ doses. Spatial distributions of absorbed dose in 34 1-inch-thick sections measured using TLDs are described. There is about a 30% change in dose as one moves from the front to the back of the phantom body. Small active dosimeters were developed specifically to provide time-resolved measurements of absorbed dose rates and quality factors at five organ locations (brain, thyroid, heart/lung, stomach and colon) inside the phantom. Using these dosimeters, it was possible to separate the trapped-proton and the galactic cosmic radiation components of the doses. A tissue-equivalent proportional counter (TEPC) and a charged-particle directional spectrometer (CPDS) were flown next to the phantom torso to provide data on the incident internal radiation environment. Accurate models of the shielding distributions at the site of the TEPC, the CPDS and a scalable Computerized Anatomical Male (CAM) model of the phantom torso were developed. These measurements provided a comprehensive data set to map the dose distribution inside a human phantom, and to assess the accuracy and validity of radiation transport models throughout the human body. The results show that for the conditions in the International Space Station (ISS) orbit during periods near the solar minimum, the ratio of the blood-forming organ dose rate to the skin absorbed dose rate is about 80%, and the ratio of the dose equivalents is almost one. 
The results show that the GCR model dose-rate predictions are 20% lower than the observations. Assuming that the trapped-belt models lead to a correct orbit-averaged energy spectrum, the measurements of dose rates inside the phantom cannot be fully understood. Passive measurements using 6Li- and 7Li-based detectors on the astronauts and inside the brain and thyroid of the phantom show the presence of a significant contribution due to thermal neutrons, an area requiring additional study.
Healthcare information system approaches based on middleware concepts.
Holena, M; Blobel, B
1997-01-01
To meet the challenges of efficiency and high quality, health care systems must implement the "Shared Care" paradigm of distributed co-operating systems. To this end, both newly developed and legacy applications must be fully integrated into the care process. These requirements can be fulfilled by information systems based on middleware concepts. In this paper, the middleware approaches HL7, DHE, and CORBA are described, and the relevance of these approaches to the healthcare domain is documented. The description presented here is complemented by two other papers in this volume, concentrating on the evaluation of the approaches and on their security threats and solutions.
Reaction-diffusion on the fully-connected lattice: A+A\\rightarrow A
NASA Astrophysics Data System (ADS)
Turban, Loïc; Fortin, Jean-Yves
2018-04-01
Diffusion-coagulation can be simply described by dynamics in which particles perform a random walk on a lattice and coalesce with probability unity when meeting on the same site. Such processes display non-equilibrium properties with strong fluctuations in low dimensions. In this work we study this problem on the fully-connected lattice, an infinite-dimensional system in the thermodynamic limit, for which mean-field behaviour is expected. Exact expressions for the particle density distribution at a given time and the survival time distribution for a given number of particles are obtained. In particular, we show that the time needed to reach a finite number of surviving particles (vanishing density in the scaling limit) displays strong fluctuations and extreme value statistics, characterized by a universal class of non-Gaussian distributions with singular behaviour.
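A minimal Monte Carlo sketch of A+A→A on the fully-connected lattice (illustrative sizes; the paper derives exact expressions rather than simulating): a randomly chosen particle hops to a uniformly random site and coalesces if that site is already occupied.

```python
import random

random.seed(3)

# A+A -> A on a fully-connected lattice: every site is a neighbour of
# every other, so a hop lands on a uniformly random site.
N_SITES, N0 = 200, 100
occupied = set(random.sample(range(N_SITES), N0))   # distinct start sites

steps = 0
while len(occupied) > 1 and steps < 100000:
    src = random.choice(list(occupied))
    dst = random.randrange(N_SITES)
    if dst != src:
        occupied.discard(src)
        occupied.add(dst)        # if dst was occupied, the two coalesce
    steps += 1
survivors = len(occupied)
```

Repeating this over many runs and recording `steps` for a given survivor count would sample the survival time distribution studied in the paper.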
van Alebeek, Gert-Jan W M; Christensen, Tove M I E; Schols, Henk A; Mikkelsen, Jørn D; Voragen, Alphons G J
2002-07-19
A thorough investigation of the mode of action of Aspergillus niger (4M-147) pectin lyase A (PLA) on differently C(6)-substituted oligogalacturonides is described. PLA appeared to be very specific for fully methyl-esterified oligogalacturonides: removal of the methyl-ester or changing the type of ester (ethyl esterification) or transamidation resulted in (almost) complete loss of conversion. The PLA activity increased with increasing length of the substrate up to a degree of polymerization (DP) of 8 indicating the presence of at least eight subsites on the enzyme. Product analysis demonstrated the formation of several Delta 4,5 unsaturated products and their saturated counterparts. The Delta 4,5 unsaturated trimer was the main product up to DP 8. For DP 9 and 10 Delta 4,5 unsaturated tetramer was the major product. Based upon the bond cleavage frequencies, a provisional subsite map was calculated, which supports the presence of eight subsites. By limited alkaline de-esterification of fully methyl-esterified pentamer and hexamer two sets of partially methyl-esterified pentamers (x and y methyl groups) and hexamers (a and b methyl groups) were prepared. Matrix-assisted laser desorption/ionization time of flight mass spectroscopy (MALDI-TOF MS) analysis demonstrated that the methyl-ester distribution was fully random. Using these partially methyl-esterified oligogalacturonides as substrates for PLA a 10-fold decrease in reaction rate was recorded compared with the fully methyl-esterified counterparts. Analysis of the methyl-ester distribution of the products showed that PLA tolerates carboxyl groups in the substrate binding cleft. At either subsite +2, +4, or -1 to -4 a free carboxyl group could be tolerated, whereas methyl-esters were obligatory at subsite +1 and +3. 
Thus, PLA is capable of cleaving the bond between a methyl-esterified and a non-esterified galacturonic acid residue, where the newly formed Delta 4,5 unsaturated non-reducing end residue always contains a methyl-ester.
Effect of authority figures for pedestrian evacuation at metro stations
NASA Astrophysics Data System (ADS)
Song, Xiao; Zhang, Zenghui; Peng, Gongzhuang; Shi, Guoqiang
2017-01-01
Most of the pedestrian evacuation literature concerns routing algorithms, human intelligence, behavior, etc. Few works have studied how to fully exploit the function of authority/security figures, who know more of the environment simply by being there every day. To evaluate the effect of authority figures (AFs) in complex buildings, this paper fully investigates the AF-related factors that may influence the evacuation of a crowd, such as the number and locations of AFs, their spread of direction, calming effect, and distribution strategies. Social-force-based modeling and simulation results show that these factors play important roles in evacuation efficiency, meaning that fewer AFs with the right guiding strategy can still achieve good evacuation performance. For our case study, the Zhichun Avenue station, the conclusion is that deployment of four AFs is a good choice to achieve relatively high evacuation performance while saving cost.
Liu, Yanhui; Zhang, Peihua
2016-09-01
This paper presents a finite element study of the compression behavior of fully covered biodegradable polydioxanone biliary stents (FCBPBSs) developed for the human body. To investigate the relationship between the compression force and the structure parameters (monofilament diameter and braid-pin number), nine numerical models based on an actual biliary stent were established. The simulated and experimental compression forces are in good agreement with each other, indicating that the simulation results provide a useful reference for the investigation of biliary stents. The stress distribution on FCBPBSs was studied to optimize their structure. In addition, the plastic dissipation and plastic strain of FCBPBSs were obtained via the compression simulation, revealing the effect of the structure parameters on tolerance. Copyright © 2016 Elsevier Ltd. All rights reserved.
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times in each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimizing the total waiting time of the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to efficiently determine effective green and red times for a signalized intersection.
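The stochastic arrival side of such a model can be sketched with a toy cycle-by-cycle queue simulation (illustrative rates and timings; the article solves an optimisation model rather than simulating): Poisson arrivals accumulate over the cycle and a fixed departure rate serves the queue during green.

```python
import math
import random

random.seed(4)

# Toy two-phase signal: Poisson arrivals per cycle, fixed service during
# green (hypothetical rates; deterministic departures as in the article).
ARRIVAL_RATE = 0.3      # vehicles per second
DEPART_RATE = 1.0       # vehicles per second of green
RED, GREEN = 30, 25     # seconds per cycle

def poisson_draw(lam):
    """Knuth's multiplication method for a Poisson variate."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

queue, residual_sum = 0, 0
for _ in range(200):                            # cycles
    queue += poisson_draw(ARRIVAL_RATE * RED)   # arrivals during red
    queue += poisson_draw(ARRIVAL_RATE * GREEN) # arrivals during green
    queue = max(0, queue - int(DEPART_RATE * GREEN))
    residual_sum += queue
mean_residual_queue = residual_sum / 200
```

With mean arrivals per cycle below the green-time capacity, the residual queue stays small on average, which is the regime in which effective green and red times can be tuned cycle by cycle.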
NASA Astrophysics Data System (ADS)
Keen, David A.; Keeble, Dean S.; Bennett, Thomas D.
2018-04-01
The structure of fully hydrated grossular, or katoite, contains an unusual arrangement of four O-H bonds within each O4 tetrahedron. Neutron and X-ray total scattering from a powdered deuterated sample have been measured to investigate the local arrangement of this O4D4 cluster. The O-D bond length determined directly from the pair distribution function is 0.954 Å, although the Rietveld-refined distance between the average O and D positions was slightly smaller. Reverse Monte Carlo refinement of supercell models against the total scattering data shows that, other than the consequences of this correctly determined O-D bond length, there is little to suggest that the O4D4 structure is locally significantly different from that expected based on the average structure determined solely from Bragg diffraction.
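Reading a bond length off a pair distribution function amounts to locating the first peak in the histogram of interatomic distances, as in this toy sketch with hypothetical coordinates (two O-D pairs separated so the O-D distance is the shortest):

```python
import math

# Hypothetical positions (Å): two O atoms, each with a D displaced by the
# 0.954 Å bond length reported in the abstract.
atoms = [(0.0, 0.0, 0.0), (0.954, 0.0, 0.0),
         (3.0, 0.0, 0.0), (3.0, 0.954, 0.0)]

def pair_distances(pts):
    """All unique interatomic distances, sorted ascending."""
    return sorted(math.dist(p, q)
                  for i, p in enumerate(pts) for q in pts[i + 1:])

dists = pair_distances(atoms)
shortest = dists[0]      # first peak of the toy pair distribution
```

In a real PDF analysis the histogram is built from the Fourier transform of the total scattering data rather than from known coordinates, but the first-peak position plays the same role.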
Study for prediction of rotor/wake/fuselage interference, part 1
NASA Technical Reports Server (NTRS)
Clark, D. R.; Maskew, B.
1985-01-01
A method was developed which allows the fully coupled calculation of fuselage and rotor airloads for typical helicopter configurations in forward flight. To do this, an iterative solution is carried out based on a conventional panel representation of the fuselage and a blade element representation of the rotor where fuselage and rotor singularity strengths are determined simultaneously at each step and the rotor wake is allowed to relax (deform) in response to changes in rotor wake loading and fuselage presence. On completion of the iteration, rotor loading and inflow, fuselage singularity strength (and, hence, pressure and velocity distributions) and rotor wake are all consistent. The results of a fully coupled calculation of the flow around representative helicopter configurations are presented. The effect of fuselage components on the rotor flow field and the overall wake structure is detailed and the aerodynamic interference between the different parts of the aircraft is discussed.
Modular decomposition of metabolic reaction networks based on flux analysis and pathway projection.
Yoon, Jeongah; Si, Yaguang; Nolan, Ryan; Lee, Kyongbum
2007-09-15
The rational decomposition of biochemical networks into sub-structures has emerged as a useful approach to study the design of these complex systems. A biochemical network is characterized by an inhomogeneous connectivity distribution, which gives rise to several organizational features, including modularity. To what extent the connectivity-based modules reflect the functional organization of the network remains to be further explored. In this work, we examine the influence of physiological perturbations on the modular organization of cellular metabolism. Modules were characterized for two model systems, liver and adipocyte primary metabolism, by applying an algorithm for top-down partition of directed graphs with non-uniform edge weights. The weights were set by the engagement of the corresponding reactions as expressed by the flux distribution. For the base case of the fasted rat liver, three modules were found, carrying out the following biochemical transformations: ketone body production, glucose synthesis and transamination. This basic organization was further modified when different flux distributions were applied that describe the liver's metabolic response to whole body inflammation. For the fully mature adipocyte, only a single module was observed, integrating all of the major pathways needed for lipid storage. Weaker levels of integration between the pathways were found for the early stages of adipocyte differentiation. Our results underscore the inhomogeneous distribution of both connectivity and connection strengths, and suggest that global activity data such as the flux distribution can be used to study the organizational flexibility of cellular metabolism. Supplementary data are available at Bioinformatics online.
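One crude way to extract modules from a flux-weighted network, in the spirit of (though much simpler than) the top-down partition algorithm above, is to drop edges whose flux-derived weight falls below a threshold and read modules off as the remaining connected components (toy network and threshold, not the liver or adipocyte models):

```python
# Toy flux-weighted reaction network: two strongly coupled pathways joined
# by one weak cross-link.
edges = {("A", "B"): 5.0, ("B", "C"): 4.0,   # pathway 1
         ("C", "D"): 0.2,                    # weak cross-link
         ("D", "E"): 6.0, ("E", "F"): 3.5}   # pathway 2
THRESHOLD = 1.0

# Adjacency over the strong edges only.
adj = {}
for (u, v), w in edges.items():
    if w >= THRESHOLD:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

# Connected components of the thresholded graph are the modules.
nodes = {n for e in edges for n in e}
seen, modules = set(), []
for n in sorted(nodes):
    if n in seen:
        continue
    stack, comp = [n], set()
    while stack:
        cur = stack.pop()
        if cur in comp:
            continue
        comp.add(cur)
        stack.extend(adj.get(cur, ()))
    seen |= comp
    modules.append(comp)
```

Changing the flux distribution changes the weights, and hence the modules, which mirrors how the physiological perturbations in the study reorganise the network's modular structure.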
Cook, J L; Rio, E; Purdam, C R; Docking, S I
2016-01-01
The pathogenesis of tendinopathy and the primary biological change in the tendon that precipitates pathology have generated several pathoaetiological models in the literature. The continuum model of tendon pathology, proposed in 2009, synthesised clinical and laboratory-based research to guide treatment choices for the clinical presentations of tendinopathy. While the continuum has been cited extensively in the literature, its clinical utility has yet to be fully elucidated. The continuum model proposed staging tendinopathy based on the changes in, and distribution of, disorganisation within the tendon. However, classifying tendinopathy based on structure in what is primarily a pain condition has been challenged. The interplay between structure, pain and function is not yet fully understood, which has partly contributed to the complex clinical picture of tendinopathy. Here we revisit and assess the merit of the continuum model in the context of new evidence. We (1) summarise new evidence in tendinopathy research in the context of the continuum, (2) discuss tendon pain and the relevance of a model based on structure and (3) describe relevant clinical elements (pain, function and structure) to begin to build a better understanding of the condition. Our goal is that the continuum model may help guide targeted treatments and improved patient outcomes. PMID:27127294
RTD-based Material Tracking in a Fully-Continuous Dry Granulation Tableting Line.
Martinetz, M C; Karttunen, A-P; Sacher, S; Wahl, P; Ketolainen, J; Khinast, J G; Korhonen, O
2018-06-06
Continuous manufacturing (CM) offers quality and cost-effectiveness benefits over the currently dominating batch processing. One challenge that needs to be addressed when implementing CM is the traceability of materials through the process, which is needed for the batch/lot definition and control strategy. In this work the residence time distributions (RTDs) of single unit operations (blender, roller compactor and tablet press) of a continuous dry granulation tableting line were captured with NIR-based methods at selected mass flow rates to create training data. RTD models for continuously operated unit operations and the entire line were developed based on transfer functions. For the semi-continuously operated bucket conveyor and pneumatic transport, an assumption based on the operation frequency was used. For validation of the parametrized process model, a pre-defined API step change and its propagation through the manufacturing line was computed and compared to multi-scale experimental runs conducted with the fully assembled continuously operated manufacturing line. This novel approach showed very good prediction power at the selected mass flow rates for a complete continuous dry granulation line. Furthermore, it demonstrates the capabilities of process simulation as a tool to support development and control of pharmaceutical manufacturing processes. Copyright © 2018. Published by Elsevier B.V.
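The idea of propagating a step change through chained RTD models can be sketched with a discrete tanks-in-series stand-in for the identified transfer functions (illustrative time constants and step time, not the fitted line models):

```python
# Discrete tanks-in-series RTD sketch: each tank is a first-order mixing
# stage; an inlet step change smears out as it passes through the chain.
N_TANKS, TAU, DT, STEPS = 3, 10.0, 1.0, 120
alpha = DT / (TAU / N_TANKS)          # per-step mixing fraction per tank

states = [0.0] * N_TANKS
outlet = []
for k in range(STEPS):
    inlet = 0.0 if k < 10 else 1.0    # API step change at t = 10
    x = inlet
    for i in range(N_TANKS):
        states[i] += alpha * (x - states[i])
        x = states[i]
    outlet.append(x)
final = outlet[-1]
```

Comparing such a computed outlet curve against a measured step response is, in miniature, the validation exercise the abstract describes for the full line.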
Distributed cooperative control of AC microgrids
NASA Astrophysics Data System (ADS)
Bidram, Ali
In this dissertation, the comprehensive secondary control of electric power microgrids is of concern. Microgrid technical challenges are mainly addressed through the hierarchical control structure, comprising primary, secondary, and tertiary control levels. The primary control level is locally implemented at each distributed generator (DG), while the secondary and tertiary control levels are conventionally implemented through a centralized control structure. The centralized structure requires a central controller, which increases reliability concerns by posing a single point of failure. In this dissertation, a distributed control structure using the distributed cooperative control of multi-agent systems is exploited to increase secondary control reliability. The secondary control objectives are the microgrid voltage and frequency and the DGs' active and reactive powers. Fully distributed control protocols are implemented through distributed communication networks. In the distributed control structure, each DG only requires its own information and the information of its neighbors on the communication network. The distributed structure obviates the requirements for a central controller and a complex communication network, which, in turn, improves system reliability. Since the DG dynamics are nonlinear and non-identical, input-output feedback linearization is used to transform the nonlinear dynamics of the DGs into linear dynamics. The proposed control frameworks cover microgrids containing inverter-based DGs. Typical microgrid test systems are used to verify the effectiveness of the proposed control protocols.
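The neighbour-only information exchange underlying distributed cooperative control can be sketched as consensus averaging on a toy communication graph (illustrative frequencies and gain; the dissertation's protocols additionally handle the nonlinear DG dynamics via feedback linearization):

```python
# Consensus sketch: each DG updates its frequency estimate using only its
# neighbours' values on a line-graph communication network.
freq = [59.7, 59.9, 60.1, 60.3]                 # initial DG frequencies (Hz)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
EPS = 0.2                                       # consensus gain

for _ in range(200):
    freq = [f + EPS * sum(freq[j] - f for j in neighbors[i])
            for i, f in enumerate(freq)]
consensus = freq[0]
```

No agent ever sees the full network state, yet all agents converge to a common value, which is the property that lets the secondary control level do without a central controller.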
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
A KDE-Based Random Walk Method for Modeling Reactive Transport With Complex Kinetics in Porous Media
NASA Astrophysics Data System (ADS)
Sole-Mari, Guillem; Fernàndez-Garcia, Daniel; Rodríguez-Escales, Paula; Sanchez-Vila, Xavier
2017-11-01
In recent years, a large body of the literature has been devoted to the study of reactive transport of solutes in porous media based on pure Lagrangian formulations. Such approaches have also been extended to accommodate second-order bimolecular reactions, in which the reaction rate is proportional to the concentrations of the reactants. In some cases, however, chemical reactions involving two reactants follow more complicated rate laws. Examples include (1) reaction rate laws written in terms of powers of concentrations, (2) redox reactions incorporating a limiting term (e.g., Michaelis-Menten), or (3) any reaction where the activity coefficients vary with the concentration of the reactants, to name just a few. We provide a methodology to account for complex kinetic bimolecular reactions in a fully Lagrangian framework where each particle represents a fraction of the total mass of a specific solute. The method, built as an extension to the second-order case, is based on the concept of the optimal Kernel Density Estimator, which allows the concentrations to be written in terms of particle locations, hence transferring the concept of reaction rate to that of particle location distribution. By doing so, we can update the probability of particles reacting without the need to fully reconstruct the concentration maps. The performance and convergence of the method are tested for several illustrative examples that simulate the Advection-Dispersion-Reaction Equation in a 1-D homogeneous column. Finally, a 2-D application example is presented evaluating the need to fully describe non-bilinear chemical kinetics in a randomly heterogeneous porous medium.
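The core step, writing concentrations in terms of particle locations via a kernel density estimator, can be sketched in one dimension. This is a minimal illustration with a fixed Gaussian bandwidth, not the optimal-bandwidth KDE of the paper; the plume parameters are assumed values.

```python
import numpy as np

def kde_concentration(positions, x_grid, bandwidth, particle_mass):
    """1-D concentration field reconstructed from particle positions with a
    fixed-bandwidth Gaussian kernel; each particle carries mass particle_mass."""
    u = (x_grid[:, None] - positions[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u ** 2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return particle_mass * kernel.sum(axis=1)

rng = np.random.default_rng(0)
n_particles = 10_000
positions = rng.normal(loc=5.0, scale=1.0, size=n_particles)  # a dispersed plume
x = np.linspace(0.0, 10.0, 201)
conc = kde_concentration(positions, x, bandwidth=0.3, particle_mass=1.0 / n_particles)
# conc approximates the plume concentration; a reaction step could evaluate a
# nonlinear rate law from it at each particle location without gridding mass.
```

Because the estimate is built directly from particle locations, reaction probabilities can be updated particle-by-particle, which is what lets the method avoid reconstructing full concentration maps at every step.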
NASA Astrophysics Data System (ADS)
Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.
2017-12-01
Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.
Fully decentralized estimation and control for a modular wheeled mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutambara, A.G.O.; Durrant-Whyte, H.F.
2000-06-01
In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model, using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control are thus obtained locally using reduced-order models. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design, as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
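The key property exploited by such decentralized estimation, namely that measurement updates in the information (inverse-covariance) form are additive across nodes, can be sketched for a single static fusion step. This is a simplified linear illustration, not the extended information filter of the paper; the sensor models and noise levels are assumed values.

```python
import numpy as np

def information_contribution(H, R, z):
    """Information-form measurement terms for one node: i = H^T R^-1 z, I = H^T R^-1 H."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ z, H.T @ Rinv @ H

# Two nodes observe the same 2-D robot position with different noise levels.
H = np.eye(2)
observations = [
    (np.diag([0.04, 0.04]), np.array([1.01, 1.98])),   # node A: accurate sensor
    (np.diag([0.09, 0.09]), np.array([0.97, 2.02])),   # node B: noisier sensor
]

# Fusion is a plain sum of the communicated (i, I) pairs -- the property that
# makes the decentralized estimate identical to the centralized one.
y = np.zeros(2)
Y = np.zeros((2, 2))
for R, z in observations:
    i_k, I_k = information_contribution(H, R, z)
    y += i_k
    Y += I_k

x_fused = np.linalg.solve(Y, y)   # noise-weighted fused position estimate
```

The fused estimate weights each node's observation by its information content, and because the contributions simply add, each node can maintain the same global estimate locally by exchanging only these small information terms.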
Guellouz, Asma; Valerio-Lepiniec, Marie; Urvoas, Agathe; Chevrel, Anne; Graille, Marc; Fourati-Kammoun, Zaineb; Desmadril, Michel; van Tilbeurgh, Herman; Minard, Philippe
2013-01-01
We previously designed a new family of artificial proteins named αRep based on a subgroup of thermostable helicoidal HEAT-like repeats. We have now assembled a large optimized αRep library. In this library, the side chains at each variable position are not fully randomized but instead encoded by a distribution of codons based on the natural frequency of side chains of the natural repeats family. The library construction is based on a polymerization of micro-genes and therefore results in a distribution of proteins with a variable number of repeats. We improved the library construction process using a "filtration" procedure to retain only fully coding modules that were recombined to recreate sequence diversity. The final library named Lib2.1 contains 1.7×10(9) independent clones. Here, we used phage display to select, from the previously described library or from the new library, new specific αRep proteins binding to four different non-related predefined protein targets. Specific binders were selected in each case. The results show that binders with various sizes are selected including relatively long sequences, with up to 7 repeats. ITC-measured affinities vary with Kd values ranging from micromolar to nanomolar ranges. The formation of complexes is associated with a significant thermal stabilization of the bound target protein. The crystal structures of two complexes between αRep and their cognate targets were solved and show that the new interfaces are established by the variable surfaces of the repeated modules, as well by the variable N-cap residues. These results suggest that αRep library is a new and versatile source of tight and specific binding proteins with favorable biophysical properties.
NASA Astrophysics Data System (ADS)
Tahani, Masoud; Askari, Amir R.
2014-09-01
Although pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented that can predict pull-in voltage based on a geometrically non-linear and distributed-parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, the geometric non-linearity of mid-plane stretching, the distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin-based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, the non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly with the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict the pull-in voltage, the maximum allowable length (the so-called detachment length), and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental findings and some previous geometrically linear, closed-form results published in the literature.
Brown, Jeffrey S; Holmes, John H; Shah, Kiran; Hall, Ken; Lazarus, Ross; Platt, Richard
2010-06-01
Comparative effectiveness research, medical product safety evaluation, and quality measurement will require the ability to use electronic health data held by multiple organizations. There is no consensus about whether to create regional or national combined (eg, "all payer") databases for these purposes, or distributed data networks that leave most Protected Health Information and proprietary data in the possession of the original data holders. Our objective was to demonstrate functions of a distributed research network that supports research needs and also addresses data holders' concerns about participation. Key design functions included strong local control of data uses and a centralized web-based querying interface. We implemented a pilot distributed research network and evaluated the design considerations, utility for research, and acceptability to data holders of methods for menu-driven querying. We developed and tested a central, web-based interface with supporting network software. Specific functions assessed included query formation and distribution, query execution and review, and aggregation of results. This pilot successfully evaluated temporal trends in medication use and diagnoses at 5 separate sites, demonstrating some of the possibilities of using a distributed research network. The pilot demonstrated the potential utility of the design, which addressed the major concerns of both users and data holders. No serious obstacles were identified that would prevent development of a fully functional, scalable network. Distributed networks are capable of addressing nearly all anticipated uses of routinely collected electronic healthcare data. Distributed networks would obviate the need for centralized databases, thus avoiding numerous obstacles.
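The querying pattern described, central query distribution with only aggregate results leaving each site, can be sketched in a few lines. This is a toy illustration of the design principle, not the pilot network's software; the site names and records are invented.

```python
def run_distributed_query(sites, predicate):
    """Distribute one query to every site; each site executes it locally and
    returns only an aggregate count -- no row-level records leave the site."""
    per_site = {name: sum(1 for record in data if predicate(record))
                for name, data in sites.items()}
    return per_site, sum(per_site.values())

# Invented example sites and records (hypothetical, for illustration only).
sites = {
    "site_a": [{"drug": "statin", "year": 2009}, {"drug": "nsaid", "year": 2008}],
    "site_b": [{"drug": "statin", "year": 2008}],
}
per_site, total = run_distributed_query(sites, lambda r: r["drug"] == "statin")
# per_site == {"site_a": 1, "site_b": 1}; total == 2
```

Keeping the predicate evaluation at each site is what lets the protected data stay with its holder while the central interface still aggregates results across the network.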
Distributed Propulsion Vehicles
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae
2010-01-01
Since the introduction of large jet-powered transport aircraft, the majority of these vehicles have been designed by placing thrust-generating engines either under the wings or on the fuselage to minimize aerodynamic interactions on the vehicle operation. However, advances in computational and experimental tools along with new technologies in materials, structures, and aircraft controls are enabling a high degree of integration of the airframe and propulsion system in aircraft design. The National Aeronautics and Space Administration (NASA) has been investigating a number of revolutionary distributed propulsion vehicle concepts to increase aircraft performance. The concept of distributed propulsion is to fully integrate a propulsion system within an airframe such that the aircraft takes full synergistic benefit of the coupling between airframe aerodynamics and the propulsion thrust stream by distributing thrust using many propulsors on the airframe. Some of the concepts are based on the use of distributed jet flaps, distributed small multiple engines, gas-driven multi-fans, mechanically driven multi-fans, cross-flow fans, and electric fans driven by turboelectric generators. This paper describes some early concepts of distributed propulsion vehicles and the current turboelectric distributed propulsion (TeDP) vehicle concepts being studied under NASA's Subsonic Fixed Wing (SFW) Project to drastically reduce aircraft-related fuel burn, emissions, and noise by the years 2030 to 2035.
Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, Ulrich; Fishman, Oleg
2015-03-20
The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high-voltage, distributed harvesting approach. This system promises to greatly increase the energy yield from large-scale PV systems by reducing losses and increasing yield from mismatched arrays, and to reduce overall system costs through very cost-effective conversion and the BOS cost reductions enabled by higher-voltage operation.
Continuous QKD and high speed data encryption
NASA Astrophysics Data System (ADS)
Zbinden, Hugo; Walenta, Nino; Guinnard, Olivier; Houlmann, Raphael; Wen, Charles Lim Ci; Korzh, Boris; Lunghi, Tommaso; Gisin, Nicolas; Burg, Andreas; Constantin, Jeremy; Legré, Matthieu; Trinkler, Patrick; Caselunghe, Dario; Kulesza, Natalia; Trolliet, Gregory; Vannel, Fabien; Junod, Pascal; Auberson, Olivier; Graf, Yoan; Curchod, Gilles; Habegger, Gilles; Messerli, Etienne; Portmann, Christopher; Henzen, Luca; Keller, Christoph; Pendl, Christian; Mühlberghuber, Michael; Roth, Christoph; Felber, Norbert; Gürkaynak, Frank; Schöni, Daniel; Muheim, Beat
2013-10-01
We present the results of a Swiss project dedicated to the development of high-speed quantum key distribution and data encryption. The QKD engine features fully automated key exchange, hardware key distillation based on finite-key security analysis, efficient authentication, wavelength division multiplexing of the quantum and classical channels, and one-time pad encryption. The encryption device allows authenticated symmetric-key encryption (e.g., AES) at rates of up to 100 Gb/s. A new quantum key can be uploaded up to 1000 times per second from the QKD engine.
AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.
Numerical Simulation of a Spatially Evolving Supersonic Turbulent Boundary Layer
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Erlebacher, G.
2002-01-01
The results from direct numerical simulations of a spatially evolving, supersonic, flat-plate turbulent boundary-layer flow, with a free-stream Mach number of 2.25, are presented. The simulated flow field extends from a transition region, initiated by wall suction and blowing near the inflow boundary, into the fully turbulent regime. Distributions of mean and turbulent flow quantities are obtained, and an analysis of these quantities is performed at a downstream station corresponding to Re_x = 5.548 × 10^6 based on distance from the leading edge.
NASA Astrophysics Data System (ADS)
Fiechter, J.; Rose, K.; Curchitser, E. N.; Huckstadt, L. A.; Costa, D. P.; Hedstrom, K.
2016-12-01
A fully coupled ecosystem model is used to describe the impact of regional and climate variability on changes in abundance and distribution of forage fish and apex predators in the California Current Large Marine Ecosystem. The ecosystem model consists of a biogeochemical submodel (NEMURO) embedded in a regional ocean circulation submodel (ROMS), and both coupled with a multi-species individual-based submodel for two forage fish species (sardine and anchovy) and one apex predator (California sea lion). Sardine and anchovy are specifically included in the model as they exhibit significant interannual and decadal variability in population abundances, and are commonly found in the diet of California sea lions. Output from the model demonstrates how regional-scale (i.e., upwelling intensity) and basin-scale (i.e., PDO and ENSO signals) physical processes control species distributions and predator-prey interactions on interannual time scales. The results also illustrate how variability in environmental conditions leads to the formation of seasonal hotspots where prey and predator spatially overlap. While specifically focused on sardine, anchovy and sea lions, the modeling framework presented here can provide new insights into the physical and biological mechanisms controlling trophic interactions in the California Current, or other regions where similar end-to-end ecosystem models may be implemented.
Fully correcting the meteor speed distribution for radar observing biases
NASA Astrophysics Data System (ADS)
Moorhead, Althea V.; Brown, Peter G.; Campbell-Brown, Margaret D.; Heynen, Denis; Cooke, William J.
2017-09-01
Meteor radars such as the Canadian Meteor Orbit Radar (CMOR) have the ability to detect millions of meteors, making it possible to study the meteoroid environment in great detail. However, meteor radars also suffer from a number of detection biases; these biases must be fully corrected for in order to derive an accurate description of the meteoroid population. We present a bias correction method for patrol radars that accounts for the full form of ionization efficiency and mass distribution. This is an improvement over previous methods such as that of Taylor (1995), which requires power-law distributions for ionization efficiency and a single mass index. We apply this method to the meteor speed distribution observed by CMOR and find a significant enhancement of slow meteors compared to earlier treatments. However, when the data set is severely restricted to include only meteors with very small uncertainties in speed, the fraction of slow meteors is substantially reduced, indicating that speed uncertainties must be carefully handled.
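The bias-correction idea can be sketched by weighting each detected meteor by the inverse of a speed-dependent detection probability before histogramming. The toy detection curve below is a hypothetical monotone function standing in for the full ionization-efficiency and mass-distribution treatment of the paper.

```python
import numpy as np

def debias_speed_distribution(speeds, detection_prob, bins):
    """Correct an observed meteor speed histogram for detection bias by weighting
    each meteor by 1 / P(detect | speed), then renormalizing."""
    speeds = np.asarray(speeds)
    weights = 1.0 / detection_prob(speeds)
    hist, edges = np.histogram(speeds, bins=bins, weights=weights)
    return hist / hist.sum(), edges

# Hypothetical detection curve: faster meteors ionize more and are easier to
# detect (a stand-in for the full ionization-efficiency treatment).
def det_prob(v):
    return np.clip((v / 70.0) ** 2, 0.05, 1.0)

rng = np.random.default_rng(1)
observed = rng.uniform(12.0, 70.0, size=5000)   # toy observed speeds in km/s
corrected, edges = debias_speed_distribution(observed, det_prob, bins=10)
# the corrected distribution boosts slow-meteor bins relative to the raw counts
```

Because slow meteors are the hardest to detect, their bins receive the largest weights, which is the qualitative enhancement of slow meteors the paper reports after full bias correction.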
Mobile Autonomous Sensing Unit (MASU): A Framework That Supports Distributed Pervasive Data Sensing
Medina, Esunly; Lopez, David; Meseguer, Roc; Ochoa, Sergio F.; Royo, Dolors; Santos, Rodrigo
2016-01-01
Pervasive data sensing is a major issue that transverses various research areas and application domains. It allows identifying people’s behaviour and patterns without overwhelming the monitored persons. Although there are many pervasive data sensing applications, they are typically focused on addressing specific problems in a single application domain, making them difficult to generalize or reuse. On the other hand, the platforms for supporting pervasive data sensing impose restrictions on the devices and operational environments that make them unsuitable for monitoring loosely-coupled or fully distributed work. To help address this challenge, this paper presents a framework that supports distributed pervasive data sensing in a generic way. Developers can use this framework to facilitate the implementation of their applications, thus reducing the complexity and effort of such an activity. The framework was evaluated using simulations and also through an empirical test, and the obtained results indicate that it is useful for supporting such sensing activity in loosely-coupled or fully distributed work scenarios. PMID:27409617
Instabilities and Turbulence Generation by Pick-Up Ion Distributions in the Outer Heliosheath
NASA Astrophysics Data System (ADS)
Weichman, K.; Roytershteyn, V.; Delzanno, G. L.; Pogorelov, N.
2017-12-01
Pick-up ions (PUIs) play a significant role in the dynamics of the heliosphere. One problem that has attracted significant attention is the stability of ring-like distributions of PUIs and the electromagnetic fluctuations that could be generated by PUI distributions. For example, PUI stability is relevant to theories attempting to identify the origins of the IBEX ribbon. PUIs have previously been investigated by linear stability analysis of model (e.g. Gaussian) rings and corresponding computer simulations. The majority of these simulations utilized particle-in-cell methods which suffer from accuracy limitations imposed by the statistical noise associated with representing the plasma by a relatively small number of computational particles. In this work, we utilize highly accurate spectral Vlasov simulations conducted using the fully kinetic implicit code SPS (Spectral Plasma Solver) to investigate the PUI distributions inferred from a global heliospheric model (Heerikhuisen et al., 2016). Results are compared with those obtained by hybrid and fully kinetic particle-in-cell methods.
Exploring the Full Range of Properties of Quasar Spectral Distributions
NASA Technical Reports Server (NTRS)
Wilkes, B.
1998-01-01
The aim of this work is to support our ISO far-infrared (IR) observing program of quasars and active galaxies. We have obtained, as far as possible, complete spectral energy distributions (radio to X-ray) of the ISO sample in order to fully delineate the continuum shapes and to allow detailed modelling of that continuum. This includes ground-based optical, near-IR and mm data, the spectral ranges closest to the ISO data, within 1-2 years of the ISO observations themselves. ISO was launched in November 1995 and is currently observing routinely. Its estimated lifetime is 2 years. All near-IR and optical imaging and spectroscopy are now in hand and in the process of being reduced; mm data collection and proposal writing continue.
Kertesz, Vilmos; Paranthaman, Nithya; Moench, Paul; ...
2014-10-01
The aim of this paper was to evaluate the analytical performance of a fully automated droplet-based surface-sampling system for determining the distribution of the drugs acetaminophen and terfenadine, and their metabolites, in rat thin tissue sections. The results were as follows: the rank order of acetaminophen concentration observed in tissues was stomach > small intestine > liver, while the concentrations of its glucuronide and sulfate metabolites were greatest in the liver and small intestine. Terfenadine was most concentrated in the liver and kidney, while its major metabolite, fexofenadine, was found in the liver and small intestine. In conclusion, the spatial distributions of both drugs and their respective metabolites observed in this work were consistent with previous studies using radiolabeled drugs.
MaxEnt-Based Ecological Theory: A Template for Integrated Catchment Theory
NASA Astrophysics Data System (ADS)
Harte, J.
2017-12-01
The maximum information entropy procedure (MaxEnt) is both a powerful tool for inferring least-biased probability distributions from limited data and a framework for the construction of complex systems theory. The maximum entropy theory of ecology (METE) describes remarkably well widely observed patterns in the distribution, abundance and energetics of individuals and taxa in relatively static ecosystems. An extension to ecosystems undergoing change in response to disturbance or natural succession (DynaMETE) is in progress. I describe the structure of both the static and the dynamic theory and show a range of comparisons with census data. I then propose a generalization of the MaxEnt approach that could provide a framework for a predictive theory of both static and dynamic, fully-coupled, eco-socio-hydrological catchment systems.
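The MaxEnt procedure itself is easy to illustrate: given a mean-abundance constraint, the least-biased distribution over discrete abundance states is exponential in form, with the Lagrange multiplier fixed by the constraint. The sketch below solves for it by bisection; the state range and mean are assumed example values, not METE predictions for any particular ecosystem.

```python
import numpy as np

def maxent_distribution(states, target_mean, lam_lo=0.0, lam_hi=5.0, iters=200):
    """Least-biased distribution over discrete states with a fixed mean:
    p_n proportional to exp(-lam * n), with lam found by bisection (valid when
    target_mean is below the unconstrained average of the states)."""
    states = np.asarray(states, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * states)
        return (states * w).sum() / w.sum()

    for _ in range(iters):           # mean_for is monotone decreasing in lam
        lam = 0.5 * (lam_lo + lam_hi)
        if mean_for(lam) > target_mean:
            lam_lo = lam
        else:
            lam_hi = lam
    w = np.exp(-lam * states)
    return w / w.sum()

# Example: abundances 1..100 with mean abundance 10 yield the truncated
# geometric-like form that MaxEnt assigns under a single mean constraint.
states = np.arange(1, 101)
p = maxent_distribution(states, target_mean=10.0)
```

The resulting distribution is monotonically decreasing in abundance, the least-biased shape consistent with the constraint, which is the same inferential logic METE applies with its full set of ecological state variables.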
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are presented to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both the second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, requiring 10-15 iterations to reduce the residuals by 10 orders of magnitude. We also demonstrate that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.
MPI-Defrost: Extension of Defrost to MPI-based Cluster Environment
NASA Astrophysics Data System (ADS)
Amin, Mustafa A.; Easther, Richard; Finkel, Hal
2011-06-01
MPI-Defrost extends Frolov’s Defrost to an MPI-based cluster environment. This version has been restricted to a single field. Restoring two-field support should be straightforward, but will require some code changes. Some output options may also not be fully supported under MPI. This code was produced to support our own work, and has been made available for the benefit of anyone interested in either oscillon simulations or an MPI-capable version of Defrost; it is provided on an "as-is" basis. Andrei Frolov is the primary developer of Defrost and we thank him for placing his work under the GPL (GNU General Public License), and thus allowing us to distribute this modified version.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision-history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
A large scale software system for simulation and design optimization of mechanical systems
NASA Technical Reports Server (NTRS)
Dopker, Bernhard; Haug, Edward J.
1989-01-01
The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.
Analytical Model for Mean Flow and Fluxes of Momentum and Energy in Very Large Wind Farms
NASA Astrophysics Data System (ADS)
Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando
2018-01-01
As wind-turbine arrays continue to be installed and array sizes continue to grow, there is an increasing need to represent very large wind-turbine arrays in numerical weather prediction models, for wind-farm optimization, and for environmental assessment. We propose a simple analytical model for boundary-layer flow in fully-developed wind-turbine arrays, based on the concept of sparsely-obstructed shear flows. In describing the vertical distribution of the mean wind speed and shear stress within wind farms, our model estimates the mean kinetic energy harvested from the atmospheric boundary layer, and determines the partitioning between the wind power captured by the wind turbines and that absorbed by the underlying land or water. A length scale based on the turbine geometry, spacing, and performance characteristics is used to estimate the asymptotic limit for the fully-developed flow through wind-turbine arrays, and thereby to determine whether the wind-farm flow is fully developed for very large turbine arrays. Our model is validated using data collected in controlled wind-tunnel experiments, and its usefulness for the prediction of wind-farm performance and the optimization of turbine-array spacing is described. Our model may also be useful for assessing the extent to which the extraction of wind power affects the land-atmosphere coupling or air-water exchange of momentum, with implications for the transport of heat, moisture, trace gases such as carbon dioxide, methane, and nitrous oxide, and ecologically important oxygen.
Sample distribution in peak mode isotachophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il
We present an analytical study of peak mode isotachophoresis (ITP), and provide closed-form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes, which better represent real buffer systems, and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model to studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed-form expression for an effective on-rate which depends on the reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that the maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed-form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.
Better Water Demand and Pipe Description Improve the Distribution Network Modeling Results
Distribution system modeling simplifies the pipe network through skeletonization and simulates flow and water quality using generalized water demand patterns. While widely used, the approach has not been fully examined for how it impacts modeling fidelity. This study intends to ...
APPLICATION OF A FULLY DISTRIBUTED WASHOFF AND TRANSPORT MODEL FOR A GULF COAST WATERSHED
Advances in hydrologic modeling have been shown to improve the accuracy of rainfall runoff simulation and prediction. Building on the capabilities of distributed hydrologic modeling, a water quality model was developed to simulate buildup, washoff, and advective transport of a co...
Network structure of subway passenger flows
NASA Astrophysics Data System (ADS)
Xu, Q.; Mao, B. H.; Bai, Y.
2016-03-01
The results of transportation infrastructure network analyses have been used to analyze complex networks in a topological context. However, most modeling approaches, including those based on complex network theory, do not fully account for real-life traffic patterns and may provide an incomplete view of network functions. This study utilizes trip data obtained from the Beijing Subway System to characterize individual passenger movement patterns. A directed weighted passenger flow network was constructed from the subway infrastructure network topology by incorporating trip data. The passenger flow networks exhibit several properties that can be characterized by power-law distributions based on flow size, and log-logistic distributions based on the fraction of boarding and departing passengers. The study also characterizes the temporal patterns of in-transit and waiting passengers and provides a hierarchical clustering structure for passenger flows. This hierarchical flow organization varies in the spatial domain. Ten cluster groups were identified, indicating a hierarchical urban polycentric structure composed of large concentrated flows at urban activity centers. These empirical findings provide insights regarding urban human mobility patterns within a large subway network.
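The power-law flow-size distributions reported above can be characterized with a standard maximum-likelihood estimator. The sketch below fits the continuous power-law exponent to synthetic flow sizes standing in for the subway trip data; the exponent, cutoff, and sample size are illustrative assumptions, not values from the study:

```python
import math
import random

def powerlaw_mle_alpha(xs, xmin):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

random.seed(0)
# Synthetic "flow sizes" drawn from a power law with exponent 2.5 and
# lower cutoff 1.0, via inverse-CDF sampling.
alpha_true, xmin = 2.5, 1.0
xs = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
      for _ in range(50000)]

alpha_hat = powerlaw_mle_alpha(xs, xmin)
```

With enough samples the estimator recovers the true exponent; on real flow data the cutoff `xmin` would itself need to be estimated, for example by a goodness-of-fit scan.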
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of data-parallel applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it invokes many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and addressing the need for concurrent, multi-user, high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
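The MapReduce pattern that the architecture relies on can be illustrated without Hadoop. This minimal pure-Python sketch maps records to key-value pairs, shuffles them by key, and reduces each group; the tile identifiers and block sizes are hypothetical stand-ins for the remote-sensing storage blocks:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every record, collecting (key, value) pairs."""
    out = []
    for rec in records:
        out.extend(mapper(rec))
    return out

def shuffle(pairs):
    """Group values by key, as the MapReduce shuffle stage does."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key group."""
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Hypothetical records: (tile_id, block_size_mb) for image storage blocks.
records = [("tile_0_0", 64), ("tile_0_1", 64), ("tile_0_0", 32), ("tile_1_0", 64)]

pairs = map_phase(records, lambda rec: [(rec[0], rec[1])])
totals = reduce_phase(shuffle(pairs), lambda k, vs: sum(vs))
```

In Hadoop the same three stages run across many nodes, with the shuffle performed over the network; the sequential version above only shows the data flow.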
2-D inversion of VES data in Saqqara archaeological area, Egypt
NASA Astrophysics Data System (ADS)
El-Qady, Gad; Sakamoto, Chika; Ushijima, Keisuke
1999-10-01
The interpretation of actual geophysical field data still poses the problem of obtaining a unique solution. In order to investigate the groundwater potential in the Saqqara archaeological area, vertical electrical soundings with the Schlumberger array have been carried out. In the interpretation of the VES data, 1D resistivity inversion was performed based on a horizontally layered earth model by El-Qady (1995). However, some results of the 1D inversion are not fully satisfactory for actual 3D structures such as archaeological tombs. Therefore, we have carried out 2D inversion based on the ABIC least-squares method for the Schlumberger VES data obtained in the Saqqara area. Although the resulting 2D cross sections correlate with the previous interpretation, the 2D inversion still shows a rough spatial resistivity distribution, with abrupt changes in resistivity between neighboring blocks of the computed region. It is concluded that 3D interpretation is recommended for visualizing the groundwater distribution with depth in the Saqqara area.
NASA Astrophysics Data System (ADS)
Chen, Zhanbin
2018-05-01
Plasma-screening effects on the 1s_{1/2} → 2l (l = s, p) and 1s_{1/2} → 3d_{3/2} electron-impact excitation of highly charged ions are investigated, together with their subsequent radiative decay. The analysis is performed based on the multi-configuration Dirac-Fock method and the fully relativistic distorted-wave method incorporating the Debye-Hückel potential. To explore the nature of the effects, calculations are carried out based on detailed analyses of the integrated total and magnetic sublevel cross sections, the alignment parameters, the linear polarizations, and the angular distribution of the X-ray photoemission, as well as on corresponding data calculated for various Debye lengths/environments, taking the 2p_{3/2} → 1s_{1/2} and 3d_{3/2} → 1s_{1/2} characteristic lines of the H-like Fe^{25+} ion as an example. The present results are compared with experimental data and other theoretical predictions where available.
Fully device-independent quantum key distribution.
Vazirani, Umesh; Vidick, Thomas
2014-10-03
Quantum cryptography promises levels of security that are impossible to replicate in a classical world. Can this security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This central question dates back to the early 1990s when the challenge of achieving device-independent quantum key distribution was first formulated. We answer this challenge by rigorously proving the device-independent security of a slight variant of Ekert's original entanglement-based protocol against the most general (coherent) attacks. The resulting protocol is robust: While assuming only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and from any adversary's laboratory, it achieves a linear key rate and tolerates a constant noise rate in the devices. In particular, the devices may have quantum memory and share arbitrary quantum correlations with the eavesdropper. The proof of security is based on a new quantitative understanding of the monogamous nature of quantum correlations in the context of a multiparty protocol.
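Ekert-style protocols such as the one analyzed above certify security device-independently by estimating a Bell (CHSH) score from the devices' input-output statistics alone. The sketch below only computes the ideal CHSH value from textbook singlet correlators under one common sign convention; it illustrates the certification condition, not the paper's protocol or security proof:

```python
import math

def correlator(a, b):
    # Correlation E(a, b) = cos(a - b) for ideal measurements at angles
    # a and b on a maximally entangled pair (one common sign convention).
    return math.cos(a - b)

def chsh_score(a0, a1, b0, b1):
    """CHSH combination S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1)."""
    return (correlator(a0, b0) + correlator(a0, b1)
            + correlator(a1, b0) - correlator(a1, b1))

# Measurement angles achieving Tsirelson's bound 2*sqrt(2).
s = chsh_score(0.0, math.pi / 2, math.pi / 4, -math.pi / 4)

# Any score above the classical limit of 2 witnesses nonlocal correlations,
# which is the raw material for device-independent key generation.
classical_limit = 2.0
violates_classical = s > classical_limit
```

In an actual run the score is estimated from finite statistics produced by untrusted devices, which is exactly where the paper's security analysis does its work.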
The effectiveness of surrogate taxa to conserve freshwater biodiversity
Stewart, David R.; Underwood, Zachary E.; Rahel, Frank J.; Walters, Annika W.
2018-01-01
Establishing protected areas has long been an effective conservation strategy, and is often based on more readily surveyed species. The potential of any freshwater taxa to be a surrogate of other aquatic groups has not been fully explored. We compiled occurrence data on 72 species of freshwater fish, amphibians, mussels, and aquatic reptiles for the Great Plains, Wyoming. We used hierarchical Bayesian multi-species mixture models and MaxEnt models to describe species distributions, and program Zonation to identify conservation priority areas for each aquatic group. The landscape-scale factors that best characterized aquatic species distributions differed among groups. There was low agreement and congruence among taxa-specific conservation priorities (<20%), meaning that no surrogate priority areas would include or protect the best habitats of other aquatic taxa. We found that common, wide-ranging aquatic species were included in taxa-specific priority areas, but rare freshwater species were not included. Thus, the development of conservation priorities based on a single freshwater aquatic group would not protect all species in the other aquatic groups.
Fully Device-Independent Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Vazirani, Umesh; Vidick, Thomas
2014-10-01
Quantum cryptography promises levels of security that are impossible to replicate in a classical world. Can this security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This central question dates back to the early 1990s when the challenge of achieving device-independent quantum key distribution was first formulated. We answer this challenge by rigorously proving the device-independent security of a slight variant of Ekert's original entanglement-based protocol against the most general (coherent) attacks. The resulting protocol is robust: While assuming only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and from any adversary's laboratory, it achieves a linear key rate and tolerates a constant noise rate in the devices. In particular, the devices may have quantum memory and share arbitrary quantum correlations with the eavesdropper. The proof of security is based on a new quantitative understanding of the monogamous nature of quantum correlations in the context of a multiparty protocol.
Mars Science Laboratory Heatshield Aerothermodynamics: Design and Reconstruction
NASA Technical Reports Server (NTRS)
Edquist, Karl T.; Hollis, Brian R.; Johnston, Christopher O.; Bose, Deepak; White, Todd R.; Mahzari, Milad
2013-01-01
The Mars Science Laboratory heatshield was designed to withstand a fully turbulent heat pulse based on test results and computational analysis on a pre-flight design trajectory. Instrumentation on the flight heatshield measured in-depth temperatures in the thermal protection system. The data indicate that boundary layer transition occurred at 5 of 7 thermocouple locations prior to peak heating. Data oscillations at 3 pressure measurement locations may also indicate transition. This paper presents the heatshield temperature and pressure data, possible explanations for the timing of boundary layer transition, and a qualitative comparison of reconstructed and computational heating on the as-flown trajectory. Boundary layer Reynolds numbers that are typically used to predict transition are compared to observed transition at various heatshield locations. A uniform smooth-wall transition Reynolds number does not explain the timing of boundary layer transition observed during flight. A roughness-based Reynolds number supports the possibility of transition due to discrete or distributed roughness elements on the heatshield. However, the distributed roughness height would have needed to be larger than the pre-flight assumption. The instrumentation confirmed the predicted location of maximum turbulent heat flux near the leeside shoulder. The reconstructed heat flux at that location is bounded by smooth-wall turbulent calculations on the reconstructed trajectory, indicating that augmentation due to surface roughness probably did not occur. Turbulent heating on the downstream side of the heatshield nose exceeded smooth-wall computations, indicating that roughness may have augmented heating. The stagnation region also experienced heating that exceeded computational levels, but shock layer radiation does not fully explain the differences.
Algorithm of probabilistic assessment of fully-mechanized longwall downtime
NASA Astrophysics Data System (ADS)
Domrachev, A. N.; Rib, S. V.; Govorukhin, Yu M.; Krivopalov, V. G.
2017-09-01
The problem of increasing the load on a long fully-mechanized longwall face has several aspects, one of which is improving the efficiency of the available stoping equipment by increasing the machine operating time coefficient of the shearer and the other mining machines that form an integral part of the longwall set of equipment. The task of predicting the reliability indicators of stoping equipment is solved by statistical estimation of the parameters of the exponential distributions of downtime and failure recovery. It is more difficult to account for downtime caused by accidents in the face workings, and, despite the statistical data on accidents in mine workings, no solution has been found to date. The authors propose a variant of probabilistic assessment of workings caving using the Poisson distribution, and of the duration of their restoration using the normal distribution. The above results confirm the feasibility of the approach proposed by the authors.
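The proposed combination of distributions can be sketched as a small Monte Carlo simulation: the number of caving events per period is drawn from a Poisson distribution and each restoration duration from a normal distribution. All rates and durations below are hypothetical, chosen only to illustrate the estimator, not taken from the paper:

```python
import math
import random

def simulate_month_downtime(rate_cavings, mu_restore, sigma_restore, rng):
    """One month: caving events ~ Poisson(rate), restorations ~ Normal."""
    # Poisson sampling via Knuth's multiplication method (fine for small rates).
    L, k, p = math.exp(-rate_cavings), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        k += 1
    # Restoration durations are truncated at zero to stay physical.
    return sum(max(0.0, rng.gauss(mu_restore, sigma_restore)) for _ in range(k))

rng = random.Random(42)
# Hypothetical parameters: 0.5 cavings/month, 48 +/- 12 hours to restore.
months = [simulate_month_downtime(0.5, 48.0, 12.0, rng) for _ in range(200000)]
mean_downtime = sum(months) / len(months)
# The expected downtime is rate * mean duration = 0.5 * 48 = 24 hours/month.
```

The simulated mean converges to the analytical product of event rate and mean restoration time, which is the quantity a longwall planner would feed into the machine operating time coefficient.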
Energy management and cooperation in microgrids
NASA Astrophysics Data System (ADS)
Rahbar, Katayoun
Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, random and intermittent characteristics of renewable energy generations may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve the reliable and cost-effective operation. This thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid under both the off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. Therefore, it is more practically applicable as compared to solutions based on conventional techniques such as dynamic programming and stochastic programming that require the prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them for both fully and partially cooperative scenarios, where microgrids are of common interests and self-interested, respectively. For the fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, where a distributed algorithm is proposed to minimize the total (sum) energy cost of microgrids. 
Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for the real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that energy costs of individual microgrids reduce simultaneously over the case without energy cooperation while limited information is shared among the microgrids and the central controller.
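A minimal illustration of real-time energy management is a greedy online rule: serve demand from renewables first, then from the energy storage system, and buy any remainder from the main grid. This sketch is not the thesis's algorithm (which is designed from the off-line optimum); it assumes a lossless storage unit and hypothetical hourly data:

```python
def online_ess_step(renewable, demand, soc, capacity):
    """Greedy online rule: renewables first, then storage, then grid.
    Returns (grid_purchase, new_soc). Lossless ESS assumed for simplicity."""
    net = renewable - demand
    if net >= 0:                      # surplus: charge storage, spill the rest
        charge = min(net, capacity - soc)
        return 0.0, soc + charge
    deficit = -net                    # shortfall: discharge storage first
    discharge = min(deficit, soc)
    return deficit - discharge, soc - discharge

soc = 2.0                             # initial state of charge [kWh]
grid_total = 0.0
# Hypothetical hourly (renewable, demand) trace in kWh.
for ren, dem in [(5, 3), (1, 4), (0, 2), (6, 2), (0, 5)]:
    bought, soc = online_ess_step(ren, dem, soc, capacity=4.0)
    grid_total += bought
```

The rule needs no forecast of future generation or demand, which is the practical property the thesis's online algorithms also aim for, while they additionally approach the off-line optimal cost.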
Distributed Planning in a Mixed-Initiative Environment
2008-06-01
Knowledge Sources Control Remote Blackboard Remote Knowledge Sources Remote Data Remote Data Java Distributed Blackboard Figure 3 - Distributed...an interface agent or planning agent and the second type is a critic agent. Agents in the DEEP architecture extend and use the Java Agent...chosen because it is fully implemented in Java, and supports these requirements. 2.3.3 Interface Agents Interface agents are the interfaces through
National Geomagnetism Program: Current Status & Five-Year Plan, 2006-2010
Love, Jeffrey J.
2006-01-01
Executive Summary: The U.S. Geological Survey's Geomagnetism Program serves the scientific community and the broader public by collecting and distributing magnetometer data from an array of ground-based observatories and by conducting scientific analysis on those data. Preliminary, variational time-series can be collected and distributed in near-real time, while fully calibrated, absolute time-series are distributed after processing. The data are used by the civilian and military parts of the Federal Government, by private industry, and by academia, for a wide variety of purposes of both immediately practical importance and long-term scientific interest, including space-weather diagnosis and related hazard mitigation, mapping of the magnetic field and measurement of its activity, and research on the nature of the Earth's interior and the near-Earth space environment. This document reviews the current status of the Program, in terms of its situation within the Government and within the scientific community; summarizes the Program's operations, its staffing situation, and its facilities; describes the diversity of uses of Program magnetometer data; and presents a plan for the next 5 years for enhancing the Program's data-based services, developing products, and conducting scientific research.
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Maglieri, Domenic J.; Driver, Cornelius; Bobbitt, Percy J.
2011-01-01
A detailed geometric description, in wave drag format, has been developed for the Convair B-58 and North American XB-70-1 delta wing airplanes. These descriptions have been placed on electronic files, the contents of which are described in this paper. They are intended for use in wave drag and sonic boom calculations. Included in the electronic files and in the present paper are photographs and 3-view drawings of the two airplanes, tabulated geometric descriptions of each vehicle and its components, and comparisons of the electronic file outputs with existing data. The comparisons include a pictorial of the two airplanes based on the present geometric descriptions, and cross-sectional area distributions for both the normal Mach cuts and oblique Mach cuts above and below the vehicles. Good correlation exists between the area distributions generated in the late 1950s and 1960s and the present files. The availability of these electronic files facilitates further validation of sonic boom prediction codes through the use of two existing data bases on these airplanes, which were acquired in the 1960s and have not been fully exploited.
Modeling Electronic Skin Response to Normal Distributed Force
Seminara, Lucia
2018-01-01
The reference electronic skin is a sensor array based on PVDF (Polyvinylidene fluoride) piezoelectric polymers, coupled to a rigid substrate and covered by an elastomer layer. It is first evaluated how a distributed normal force (Hertzian distribution) is transmitted to an extended PVDF sensor through the elastomer layer. A simplified approach based on Boussinesq’s half-space assumption is used to get a qualitative picture and extensive FEM simulations allow determination of the quantitative response for the actual finite elastomer layer. The ultimate use of the present model is to estimate the electrical sensor output from a measure of a basic mechanical action at the skin surface. However this requires that the PVDF piezoelectric coefficient be known a-priori. This was not the case in the present investigation. However, the numerical model has been used to fit experimental data from a real skin prototype and to estimate the sensor piezoelectric coefficient. It turned out that this value depends on the preload and decreases as a result of PVDF aging and fatigue. This framework contains all the fundamental ingredients of a fully predictive model, suggesting a number of future developments potentially useful for skin design and validation of the fabrication technology. PMID:29401692
Modeling Electronic Skin Response to Normal Distributed Force.
Seminara, Lucia
2018-02-03
The reference electronic skin is a sensor array based on PVDF (Polyvinylidene fluoride) piezoelectric polymers, coupled to a rigid substrate and covered by an elastomer layer. It is first evaluated how a distributed normal force (Hertzian distribution) is transmitted to an extended PVDF sensor through the elastomer layer. A simplified approach based on Boussinesq's half-space assumption is used to get a qualitative picture and extensive FEM simulations allow determination of the quantitative response for the actual finite elastomer layer. The ultimate use of the present model is to estimate the electrical sensor output from a measure of a basic mechanical action at the skin surface. However this requires that the PVDF piezoelectric coefficient be known a-priori. This was not the case in the present investigation. However, the numerical model has been used to fit experimental data from a real skin prototype and to estimate the sensor piezoelectric coefficient. It turned out that this value depends on the preload and decreases as a result of PVDF aging and fatigue. This framework contains all the fundamental ingredients of a fully predictive model, suggesting a number of future developments potentially useful for skin design and validation of the fabrication technology.
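The Hertzian load case used in the skin model has a convenient closed form: the normal pressure under a spherical indenter is p(r) = p0 * sqrt(1 - (r/a)^2) inside the contact radius a, which integrates to a total force of (2/3) * pi * p0 * a^2. The sketch below checks this numerically; the peak pressure and contact radius are illustrative values, not parameters from the prototype:

```python
import math

def hertz_pressure(r, p0, a):
    """Hertzian normal pressure distribution under a spherical indenter."""
    return p0 * math.sqrt(max(0.0, 1.0 - (r / a) ** 2))

def total_force(p0, a, n=100000):
    """Midpoint-rule integral of p(r) * 2*pi*r dr over the contact circle."""
    dr = a / n
    return sum(hertz_pressure((i + 0.5) * dr, p0, a) * 2.0 * math.pi
               * (i + 0.5) * dr * dr for i in range(n))

p0, a = 1.0e5, 2.0e-3                          # peak pressure [Pa], radius [m]
F = total_force(p0, a)
F_closed = 2.0 / 3.0 * math.pi * p0 * a ** 2   # closed-form Hertz result
```

In the paper's setting this pressure profile is the boundary load transmitted through the elastomer to the PVDF sensor; checking the integral against the closed form is a cheap sanity test before running the FEM model.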
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
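The classical 1-D machinery that the paper extends to multiple dimensions starts from the forward algorithm, which accumulates state probabilities across the observation sequence. A minimal version with a small hypothetical two-state model:

```python
def forward(obs, pi, A, B):
    """Classical 1-D HMM forward pass: returns P(observations | model).
    pi[i]: initial state probability, A[i][j]: transition probability,
    B[i][o]: emission probability of symbol o in state i."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Hypothetical two-state model with binary observations.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
likelihood = forward([0, 1, 0], pi, A, B)
```

In the multidimensional causal setting of the paper, the single predecessor state in the recursion is replaced by the set of causal neighbors, which is what the generalized forward-backward algorithm handles.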
Fully Burdened Cost of Energy Analysis: A Model for Marine Corps Systems
2013-03-01
and the lognormal parameters are not used in the creation of the output distribution since they are not required values for a triangular distribution...Army energy security implementation strategy. Washington, DC: Government Printing Office. Bell Helicopter. (n.d.). The Bell AH-1Z Zulu [Image
Fully Burdened Cost of Energy Analysis: A Model for Marine Corps Systems
2013-01-30
creation of the output distribution since they are not required values for a triangular distribution. The model has the capacity to analyze a wide...Partnerships. (2009). Army energy security implementation strategy. Washington, DC: Government Printing Office. Bell Helicopter. (n.d.). The Bell AH-1Z Zulu
2008-10-01
Agents in the DEEP architecture extend and use the Java Agent Development (JADE) framework. DEEP requires a distributed multi-agent system and a...framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java, and supports these requirements
Distributed intelligent monitoring and reporting facilities
NASA Astrophysics Data System (ADS)
Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.
1996-06-01
Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management, as they provide the capability to monitor quality of service and utilization parameters and to notify degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, facilitates the observation and summarization of information according to a number of criteria and, in its most advanced form, permits the specification of these criteria dynamically to suit the particular policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
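The idea of normalizing a graph statistic against a simulated reference distribution can be sketched with the simplest possible mixture component: a single Bernoulli (Erdős-Rényi) model. Here a hypothetical observed triangle count is converted to a z-score relative to random graphs of the same size and density; the paper's method instead uses a mixture of such models fit to the observed dependence structure:

```python
import random

def gnp_edges(n, p, rng):
    """Sample an Erdős-Rényi G(n, p) graph as a set of undirected edges."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def triangle_count(n, edges):
    """Count triangles by scanning ordered vertex triples."""
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in edges:
                for k in range(j + 1, n):
                    if (j, k) in edges and (i, k) in edges:
                        count += 1
    return count

rng = random.Random(1)
n, p = 30, 0.2
observed = 50                      # hypothetical observed triangle count
ref = [triangle_count(n, gnp_edges(n, p, rng)) for _ in range(300)]
mean = sum(ref) / len(ref)
sd = (sum((x - mean) ** 2 for x in ref) / (len(ref) - 1)) ** 0.5
z = (observed - mean) / sd         # adjusted (size-referenced) statistic
```

Because the reference graphs share the observed network's size and density, the z-score is comparable across networks of different sizes in a way the raw triangle count is not.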
Evolution of the ATLAS Nightly Build System
NASA Astrophysics Data System (ADS)
Undrus, A.
2012-12-01
The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.
Temporal dynamics of catchment transit times from stable isotope data
NASA Astrophysics Data System (ADS)
Klaus, Julian; Chun, Kwok P.; McGuire, Kevin J.; McDonnell, Jeffrey J.
2015-06-01
Time variant catchment transit time distributions are fundamental descriptors of catchment function, yet they are not fully understood, characterized, or modeled. Here we present a new approach, for use with standard runoff and tracer data sets, that is based on tracking tracer and age information under time variant catchment mixing. The new approach can handle nonstationarity of flow paths and catchment mixing as well as irregular shapes of the transit time distribution. It extracts information on catchment mixing from the stable isotope time series rather than relying on prior assumptions about mixing or the shape of the transit time distribution. We first demonstrate proof of concept of the approach with artificial data; the Nash-Sutcliffe efficiencies in tracer and instantaneous transit times were >0.9. The model provides very accurate estimates of time variant transit times when the boundary conditions and fluxes are fully known. We then tested the model with real rainfall-runoff flow and isotope tracer time series from the H.J. Andrews Watershed 10 (WS10) in Oregon. Model efficiency was 0.37 for the 18O modeling over a 2 year time series; it increased to 0.86 for the second year, underlining the need for long tracer time series with a long overlap of tracer input and output. The approach was able to determine the time variant transit times of WS10 from field data and showed how they follow the storage dynamics and related changes in flow paths, with wet periods of high flow resulting in clearly shorter transit times than dry low flow periods.
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution function (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
NASA Astrophysics Data System (ADS)
Zhou, Zhi; Zhang, Zhichun; Wang, Chuan; Ou, Jinping
2006-03-01
FRP (Fiber Reinforced Polymer) has become a popular material to replace steel in civil engineering under harsh corrosion environments. But due to its low shear strength, the anchor for FRP is most important for its practical application. However, the strain state at the interface between the FRP and the anchor is not fully understood, because no proper sensor exists to monitor the inner strain in the anchor by traditional methods. In this paper, a new smart FBG-based FRP anchor is put forward, and the inner strain distribution of the FRP anchor has been monitored using FRP-OFBG sensors, smart FBG-embedded FRP rebars, which are pre-embedded in the FRP rod and cast in the anchor. Based on the strain distribution information, the bonding shear stress on the surface of the FRP rod along the anchor can also be obtained. This method can supply important information for FRP anchor design and can also monitor the anchorage system, which is useful for the application of FRP in civil engineering. The experimental results also show that the smart FBG-based FRP anchor can give direct information on the load and damage of the FRP anchor.
NASA Astrophysics Data System (ADS)
Moeferdt, Matthias; Kiel, Thomas; Sproll, Tobias; Intravaia, Francesco; Busch, Kurt
2018-02-01
A combined analytical and numerical study of the modes in two distinct plasmonic nanowire systems is presented. The computations are based on a discontinuous Galerkin time-domain approach, and a fully nonlinear and nonlocal hydrodynamic Drude model for the metal is utilized. In the linear regime, these computations demonstrate the strong influence of nonlocality on the field distributions as well as on the scattering and absorption spectra. Based on these results, second-harmonic-generation efficiencies are computed over a frequency range that covers all relevant modes of the linear spectra. In order to interpret the physical mechanisms that lead to corresponding field distributions, the associated linear quasielectrostatic problem is solved analytically via conformal transformation techniques. This provides an intuitive classification of the linear excitations of the systems that is then applied to the full Maxwell case. Based on this classification, group theory facilitates the determination of the selection rules for the efficient excitation of modes in both the linear and nonlinear regimes. This leads to significantly enhanced second-harmonic generation via judiciously exploiting the system symmetries. These results regarding the mode structure and second-harmonic generation are of direct relevance to other nanoantenna systems.
Neutron measurements of stresses in a test artifact produced by laser-based additive manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnäupel-Herold, Thomas; Slotwinski, John; Moylan, Shawn
2014-02-18
A stainless steel test artifact produced by Direct Metal Laser Sintering and similar to a proposed standardized test artifact was examined using neutron diffraction. The artifact contained a number of structures with different aspect ratios pertaining to wall thickness, height above the base plate, and side length. With a spatial resolution of the order of one millimeter, the volumetric distribution of stresses in several of these structures was measured. It was found that the stresses peak in the tensile region around 500 MPa near the top surface, with balancing compressive stresses in the interior. The presence of a support structure (a one millimeter high, thin-walled, hence weaker, lattice structure deposited on the base plate, followed by a fully dense AM structure) has only minor effects on the stresses.
A nonparametric spatial scan statistic for continuous data.
Jung, Inkyung; Cho, Ho Jin
2015-10-20
Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
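A minimal version of a rank-sum-based scan can be written directly: grow circular windows around each location and keep the window with the most extreme standardized Wilcoxon statistic. This sketch uses the normal approximation without tie correction and omits the multiple-testing adjustment (typically done by Monte Carlo replication in scan statistics); all names are illustrative, not the paper's implementation.

```python
import math

def rank_sum_z(inside, outside):
    """Standardized Wilcoxon rank-sum statistic (normal approximation, no ties)."""
    n1, n2 = len(inside), len(outside)
    pooled = sorted([(v, True) for v in inside] + [(v, False) for v in outside])
    w = sum(r + 1 for r, (_, is_in) in enumerate(pooled) if is_in)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma

def scan(points, values, max_frac=0.5):
    """Grow circular windows around every point; return the most extreme one."""
    n = len(points)
    best = (0.0, None, ())
    for c in range(n):
        # candidate windows: the k nearest points to center c
        order = sorted(range(n), key=lambda j: (points[j][0] - points[c][0]) ** 2
                                             + (points[j][1] - points[c][1]) ** 2)
        for k in range(1, int(n * max_frac) + 1):
            inside = set(order[:k])
            z = rank_sum_z([values[i] for i in inside],
                           [values[i] for i in range(n) if i not in inside])
            if abs(z) > abs(best[0]):
                best = (z, c, tuple(sorted(inside)))
    return best
```

Because only ranks enter the statistic, the scan is insensitive to skewed or heavy-tailed value distributions, which is the motivation stated in the abstract.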
Chen, Gang; Song, Yongduan; Guan, Yanfeng
2018-03-01
This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.
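The fully distributed character of such schemes can be illustrated with a much simpler linear consensus iteration on a digraph, in which each node updates using only information from its in-neighbors. This is emphatically not the terminal sliding-mode law or the neural-network approximation of the brief; it is a plain consensus sketch, with hypothetical names, showing the information-flow pattern.

```python
def consensus_step(x, edges, gain=0.3):
    """One synchronous update: each agent moves toward the mean of its in-neighbors.

    x: dict node -> value; edges: list of directed pairs (j, i) meaning j -> i.
    """
    new = dict(x)
    for i in x:
        nbrs = [j for (j, k) in edges if k == i]   # in-neighbors of i
        if nbrs:
            new[i] = x[i] + gain * sum(x[j] - x[i] for j in nbrs) / len(nbrs)
    return new

def run_consensus(x, edges, steps=200, gain=0.3):
    """Iterate the distributed update; converges for strongly connected digraphs."""
    for _ in range(steps):
        x = consensus_step(x, edges, gain)
    return x
```

On a strongly connected digraph the disagreement contracts geometrically; the brief's contribution is achieving such convergence in finite time despite unknown, heterogeneous dynamics.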
NASA Astrophysics Data System (ADS)
Nesvold, E.; Mukerji, T.
2017-12-01
River deltas display complex channel networks that can be characterized through the framework of graph theory, as shown by Tejedor et al. (2015). Deltaic patterns may also be useful in a Bayesian approach to uncertainty quantification of the subsurface, but this requires a prior distribution of the networks of ancient deltas. By considering subaerial deltas, one can at least obtain a snapshot in time of the channel network spectrum across deltas. In this study, the directed graph structure is semi-automatically extracted from satellite imagery using techniques from statistical processing and machine learning. Once the network is labeled with vertices and edges, spatial trends and width and sinuosity distributions can also be found easily. Since imagery is inherently 2D, computational sediment transport models can serve as a link between 2D network structure and 3D depositional elements; the numerous empirical rules and parameters built into such models makes it necessary to validate the output with field data. For this purpose we have used a set of 110 modern deltas, with average water discharge ranging from 10 - 200,000 m3/s, as a benchmark for natural variability. Both graph theoretic and more general distributions are established. A key question is whether it is possible to reproduce this deltaic network spectrum with computational models. Delft3D was used to solve the shallow water equations coupled with sediment transport. The experimental setup was relatively simple; incoming channelized flow onto a tilted plane, with varying wave and tidal energy, sediment types and grain size distributions, river discharge and a few other input parameters. Each realization was run until a delta had fully developed: between 50 and 500 years (with a morphology acceleration factor). It is shown that input parameters should not be sampled independently from the natural ranges, since this may result in deltaic output that falls well outside the natural spectrum. 
Since we are interested in studying the patterns occurring in nature, ideas are proposed for how to sample computer realizations that match this distribution. By establishing a link between surface based patterns from the field with the associated subsurface structure from physics-based models, this is a step towards a fully Bayesian workflow in subsurface simulation.
Leveraging social networks for understanding the evolution of epidemics
2011-01-01
Background: To understand how infectious agents disseminate throughout a population, it is essential to capture the social model in a realistic manner. This paper presents a novel approach to modeling the propagation of the influenza virus throughout a realistic interconnection network based on actual individual interactions, which we extract from online social networks. The advantage is that these networks can be extracted from existing sources which faithfully record interactions between people in their natural environment. We additionally allow modeling the characteristics of each individual as well as customizing his daily interaction patterns by making them time-dependent. Our purpose is to understand how the infection spreads depending on the structure of the contact network and the individuals who introduce the infection into the population. This would help public health authorities respond more efficiently to epidemics. Results: We implement a scalable, fully distributed simulator and validate the epidemic model by comparing the simulation results against the data in the 2004-2005 New York State Department of Health Report (NYSDOH), with similar temporal distribution results for the number of infected individuals. We analyze the impact of different types of connection models on the virus propagation. Lastly, we analyze and compare the effects of adopting several different vaccination policies, some based on individual characteristics (such as age) while others target the super-connectors in the social model. Conclusions: This paper presents an approach to modeling the propagation of the influenza virus via a realistic social model based on actual individual interactions extracted from online social networks. We implemented a scalable, fully distributed simulator and analyzed both the dissemination of the infection and the effect of different vaccination policies on the progress of the epidemics.
The epidemic values predicted by our simulator match real data from NYSDOH. Our results show that our simulator can be a useful tool in understanding the differences in the evolution of an epidemic within populations with different characteristics and can provide guidance with regard to which, and how many, individuals should be vaccinated to slow down the virus propagation and reduce the number of infections. PMID:22784620
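The policy comparison described above can be sketched with a toy discrete-time SIR simulation on a contact network. The simulator in the paper is far richer (time-dependent interaction patterns, individual characteristics, full distribution); everything below, including the function names and parameters, is our own illustrative reduction.

```python
import random

def simulate_sir(adj, p_inf=0.3, p_rec=0.2, seed0=0, vaccinated=frozenset(), steps=50):
    """Discrete-time SIR on a contact network; vaccinated nodes start Removed.

    adj: dict node -> list of contacts. Returns total number ever infected.
    """
    rng = random.Random(7)
    state = {v: "S" for v in adj}
    for v in vaccinated:
        state[v] = "R"
    if state[seed0] == "R":
        return 0
    state[seed0] = "I"
    total = 1
    for _ in range(steps):
        new_inf, new_rec = [], []
        for v, s in state.items():
            if s != "I":
                continue
            for u in adj[v]:                       # infect susceptible contacts
                if state[u] == "S" and rng.random() < p_inf:
                    new_inf.append(u)
            if rng.random() < p_rec:               # recover with probability p_rec
                new_rec.append(v)
        for u in new_inf:
            if state[u] == "S":
                state[u] = "I"
                total += 1
        for v in new_rec:
            state[v] = "R"
        if "I" not in state.values():
            break
    return total

def top_degree_policy(adj, k):
    """Vaccinate the k best-connected nodes (the 'super-connectors')."""
    return frozenset(sorted(adj, key=lambda v: -len(adj[v]))[:k])
```

On a hub-and-spoke network, vaccinating the single super-connector confines the infection to the seed, which is the qualitative effect the paper measures on real social graphs.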
Load sharing in distributed real-time systems with state-change broadcasts
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Chang, Yi-Chieh
1989-01-01
A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
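The buddy-set mechanism lends itself to a compact sketch: a node broadcasts only its underloaded/fully-loaded transitions to its buddy set, and a fully loaded node ships incoming tasks to the first buddy its local view believes to be underloaded. The threshold and data structures below are illustrative assumptions, not taken from the paper.

```python
from collections import deque

THRESHOLD = 3  # queue length at which a node declares itself fully loaded

class Node:
    def __init__(self, nid, buddies):
        self.nid = nid
        self.buddies = buddies   # ids of the nodes in this node's buddy set
        self.queue = deque()
        self.view = {}           # last broadcast load state of each buddy

    def loaded(self):
        return len(self.queue) >= THRESHOLD

def broadcast_state(nodes, sender):
    """On an underloaded/fully-loaded transition, inform only the buddy set."""
    for b in sender.buddies:
        nodes[b].view[sender.nid] = sender.loaded()

def submit(nodes, nid, task):
    """Enqueue locally, or transfer to the first buddy believed underloaded."""
    node = nodes[nid]
    target = node
    if node.loaded():
        for b in node.buddies:
            if not node.view.get(b, False):
                target = nodes[b]
                break
    before = target.loaded()
    target.queue.append(task)
    if target.loaded() != before:    # state changed: broadcast to buddies only
        broadcast_state(nodes, target)
    return target.nid
```

Broadcasting only state *changes* to a small buddy set, rather than polling global state, is what keeps the scheme's overhead low; the paper's analysis then asks how large the buddy set must be (around 10-15 nodes) before the benefit saturates.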
Studies of transverse momentum dependent parton distributions and Bessel weighting
Aghasyan, M.; Avakian, H.; De Sanctis, E.; ...
2015-03-01
In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
The Vehicular Information Space Framework
NASA Astrophysics Data System (ADS)
Prinz, Vivian; Schlichter, Johann; Schweiger, Benno
Vehicular networks are distributed, self-organizing, and highly mobile ad hoc networks. They can provide drivers with up-to-the-minute information about their environment and are therefore expected to be a decisive future enabler for enhancing driving comfort and safety. This article introduces the Vehicular Information Space framework (VIS). Vehicles running the VIS form a kind of distributed database, enabling them to provide information such as existing hazards, parking spaces, or traffic densities in a location-aware and fully distributed manner. In addition, vehicles can retrieve, modify, and delete these information items. The underlying algorithm is based on features derived from existing structured Peer-to-Peer algorithms, extended to suit the specific characteristics of highly mobile ad hoc networks. We present, implement, and simulate the VIS using a motorway and an urban traffic environment. Simulation studies of VIS message occurrence show that the VIS implies reasonable traffic overhead. Moreover, overall VIS message traffic is independent of the number of information items provided.
Triggering the volume phase transition of core-shell Au nanorod-microgel nanocomposites with light
NASA Astrophysics Data System (ADS)
Rodríguez-Fernández, Jessica; Fedoruk, Michael; Hrelescu, Calin; Lutich, Andrey A.; Feldmann, Jochen
2011-06-01
We have coated gold nanorods (NRs) with thermoresponsive microgel shells based on poly(N-isopropylacrylamide) (pNIPAM). We demonstrate by simultaneous laser-heating and optical extinction measurements that the Au NR cores can be simultaneously used as fast optothermal manipulators (switchers) and sensitive optical reporters of the microgel state in a fully externally controlled and reversible manner. We support our results with optical modeling based on the boundary element method and 3D numerical analysis on the temperature distribution. Briefly, we show that due to the sharp increase in refractive index resulting from the optothermally triggered microgel collapse, the longitudinal plasmon band of the coated Au NRs is significantly red-shifted. The optothermal control over the pNIPAM shell, and thereby over the optical response of the nanocomposite, is fully reversible and can be simply controlled by switching on and off a NIR heating laser. In contrast to bulk solution heating, we demonstrate that light-triggering does not compromise colloidal stability, which is of primary importance for the ultimate utilization of these types of nanocomposites as remotely controlled optomechanical actuators, for applications spanning from drug delivery to photonic crystals and nanoscale motion.
Su, Wenjing; Cook, Benjamin S.; Fang, Yunnan; Tentzeris, Manos M.
2016-01-01
As the need for low-cost, rapidly produced microfluidics grows with the trend toward Lab-on-a-Chip and distributed healthcare, fully inkjet-printed microfluidics can be a solution, with numerous potential electrical and sensing applications. Inkjet printing is an additive manufacturing technique featuring no material waste and a low equipment cost. Moreover, like other additive manufacturing techniques, inkjet printing is easy to learn and has a high fabrication speed, while it generally offers a planar resolution down to below 20 µm and enables flexible designs thanks to its inherent thin-film deposition capabilities. Owing to this thin-film deposition, the printed objects also usually attain a high vertical resolution (such as 4.6 µm). This paper introduces a low-cost, rapid, three-dimensional fabrication process for microfluidics that relies entirely on a single inkjet-printer-based platform and can be implemented directly on top of virtually any substrate. PMID:27713545
Design of an Electric Propulsion System for SCEPTOR
NASA Technical Reports Server (NTRS)
Dubois, Arthur; van der Geest, Martin; Bevirt, JoeBen; Clarke, Sean; Christie, Robert J.; Borer, Nicholas K.
2016-01-01
The rise of electric propulsion systems has pushed aircraft designers towards new and potentially transformative concepts. As part of this effort, NASA is leading the SCEPTOR program which aims at designing a fully electric distributed propulsion general aviation aircraft. This article highlights critical aspects of the design of SCEPTOR's propulsion system conceived at Joby Aviation in partnership with NASA, including motor electromagnetic design and optimization as well as cooling system integration. The motor is designed with a finite element based multi-objective optimization approach. This provides insight into important design tradeoffs such as mass versus efficiency, and enables a detailed quantitative comparison between different motor topologies. Secondly, a complete design and Computational Fluid Dynamics analysis of the air breathing cooling system is presented. The cooling system is fully integrated into the nacelle, contains little to no moving parts and only incurs a small drag penalty. Several concepts are considered and compared over a range of operating conditions. The study presents trade-offs between various parameters such as cooling efficiency, drag, mechanical simplicity and robustness.
Liu, Yanhui; Zhu, Guoqing; Yang, Huazhe; Wang, Conger; Zhang, Peihua; Han, Guangting
2018-01-01
This paper presents a study of the bending flexibility of fully covered biodegradable polydioxanone biliary stents (FCBPBSs) developed for use in the human body. To investigate the relationship between the bending load and the structural parameters (monofilament diameter and braid-pin number), biodegradable polydioxanone biliary stents produced by braiding were covered with membranes prepared via electrospinning, and nine FCBPBSs were then obtained for bending tests to evaluate bending flexibility. In addition, nine numerical models based on the actual biliary stents were established and the bending load was calculated by the finite element method. Results demonstrate that the simulation and experimental results are in good agreement with each other, indicating that the simulations can provide a useful reference for the investigation of biliary stents. Furthermore, the stress distribution on the FCBPBSs was studied, and the plastic dissipation and plastic strain of the FCBPBSs were obtained via the bending simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, C.; et al.
We measure a large set of observables in inclusive charged current muon neutrino scattering on argon with the MicroBooNE liquid argon time projection chamber operating at Fermilab. We evaluate three neutrino interaction models based on the widely used GENIE event generator using these observables. The measurement uses a data set consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2016 with the Fermilab Booster Neutrino Beam, which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5e19 protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to separate neutrino interactions from cosmic ray background events. We find that GENIE models consistently describe the shapes of a large number of kinematic distributions for fixed observed multiplicity, but we show an indication that the observed multiplicity fractions deviate from GENIE expectations.
Aubert, B; Karyotakis, Y; Lees, J P; Poireau, V; Prencipe, E; Prudent, X; Tisserand, V; Garra Tico, J; Grauges, E; Martinelli, M; Palano, A; Pappagallo, M; Eigen, G; Stugu, B; Sun, L; Battaglia, M; Brown, D N; Kerth, L T; Kolomensky, Yu G; Lynch, G; Osipenkov, I L; Tackmann, K; Tanabe, T; Hawkes, C M; Soni, N; Watson, A T; Koch, H; Schroeder, T; Asgeirsson, D J; Fulsom, B G; Hearty, C; Mattison, T S; McKenna, J A; Barrett, M; Khan, A; Randle-Conde, A; Blinov, V E; Bukin, A D; Buzykaev, A R; Druzhinin, V P; Golubev, V B; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Todyshev, K Yu; Bondioli, M; Curry, S; Eschrich, I; Kirkby, D; Lankford, A J; Lund, P; Mandelkern, M; Martin, E C; Stoker, D P; Atmacan, H; Gary, J W; Liu, F; Long, O; Vitug, G M; Yasin, Z; Zhang, L; Sharma, V; Campagnari, C; Hong, T M; Kovalskyi, D; Mazur, M A; Richman, J D; Beck, T W; Eisner, A M; Heusch, C A; Kroseberg, J; Lockman, W S; Martinez, A J; Schalk, T; Schumm, B A; Seiden, A; Wang, L; Winstrom, L O; Cheng, C H; Doll, D A; Echenard, B; Fang, F; Hitlin, D G; Narsky, I; Piatenko, T; Porter, F C; Andreassen, R; Mancinelli, G; Meadows, B T; Mishra, K; Sokoloff, M D; Bloom, P C; Ford, W T; Gaz, A; Hirschauer, J F; Nagel, M; Nauenberg, U; Smith, J G; Wagner, S R; Ayad, R; Toki, W H; Wilson, R J; Feltresi, E; Hauke, A; Jasper, H; Karbach, T M; Merkel, J; Petzold, A; Spaan, B; Wacker, K; Kobel, M J; Nogowski, R; Schubert, K R; Schwierz, R; Volk, A; Bernard, D; Latour, E; Verderi, M; Clark, P J; Playfer, S; Watson, J E; Andreotti, M; Bettoni, D; Bozzi, C; Calabrese, R; Cecchi, A; Cibinetto, G; Fioravanti, E; Franchini, P; Luppi, E; Munerato, M; Negrini, M; Petrella, A; Piemontese, L; Santoro, V; Baldini-Ferroli, R; Calcaterra, A; de Sangro, R; Finocchiaro, G; Pacetti, S; Patteri, P; Peruzzi, I M; Piccolo, M; Rama, M; Zallo, A; Contri, R; Guido, E; Lo Vetere, M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Tosi, S; Chaisanguanthum, K S; Morii, M; Adametz, A; Marks, J; Schenk, S; 
Uwer, U; Bernlochner, F U; Klose, V; Lacker, H M; Bard, D J; Dauncey, P D; Tibbetts, M; Behera, P K; Charles, M J; Mallik, U; Cochran, J; Crawley, H B; Dong, L; Eyges, V; Meyer, W T; Prell, S; Rosenberg, E I; Rubin, A E; Gao, Y Y; Gritsan, A V; Guo, Z J; Arnaud, N; Béquilleux, J; D'Orazio, A; Davier, M; Derkach, D; da Costa, J Firmino; Grosdidier, G; Le Diberder, F; Lepeltier, V; Lutz, A M; Malaescu, B; Pruvot, S; Roudeau, P; Schune, M H; Serrano, J; Sordini, V; Stocchi, A; Wormser, G; Lange, D J; Wright, D M; Bingham, I; Burke, J P; Chavez, C A; Fry, J R; Gabathuler, E; Gamet, R; Hutchcroft, D E; Payne, D J; Touramanis, C; Bevan, A J; Clarke, C K; Di Lodovico, F; Sacco, R; Sigamani, M; Cowan, G; Paramesvaran, S; Wren, A C; Brown, D N; Davis, C L; Denig, A G; Fritsch, M; Gradl, W; Hafner, A; Alwyn, K E; Bailey, D; Barlow, R J; Jackson, G; Lafferty, G D; West, T J; Yi, J I; Anderson, J; Chen, C; Jawahery, A; Roberts, D A; Simi, G; Tuggle, J M; Dallapiccola, C; Salvati, E; Saremi, S; Cowan, R; Dujmic, D; Fisher, P H; Henderson, S W; Sciolla, G; Spitznagel, M; Yamamoto, R K; Zhao, M; Patel, P M; Robertson, S H; Schram, M; Lazzaro, A; Lombardo, V; Palombo, F; Stracka, S; Bauer, J M; Cremaldi, L; Godang, R; Kroeger, R; Sonnek, P; Summers, D J; Zhao, H W; Simard, M; Taras, P; Nicholson, H; De Nardo, G; Lista, L; Monorchio, D; Onorato, G; Sciacca, C; Raven, G; Snoek, H L; Jessop, C P; Knoepfel, K J; LoSecco, J M; Wang, W F; Corwin, L A; Honscheid, K; Kagan, H; Kass, R; Morris, J P; Rahimi, A M; Regensburger, J J; Sekula, S J; Wong, Q K; Blount, N L; Brau, J; Frey, R; Igonkina, O; Kolb, J A; Lu, M; Rahmat, R; Sinev, N B; Strom, D; Strube, J; Torrence, E; Castelli, G; Gagliardi, N; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simonetto, F; Stroili, R; Voci, C; Sanchez, P del Amo; Ben-Haim, E; Bonneaud, G R; Briand, H; Chauveau, J; Hamon, O; Leruste, Ph; Marchiori, G; Ocariz, J; Perez, A; Prendki, J; Sitt, S; Gladney, L; Biasini, M; Manoni, E; Angelini, C; Batignani, G; 
Bettarini, S; Calderini, G; Carpinelli, M; Cervelli, A; Forti, F; Giorgi, M A; Lusiani, A; Morganti, M; Neri, N; Paoloni, E; Rizzo, G; Walsh, J J; Pegna, D Lopes; Lu, C; Olsen, J; Smith, A J S; Telnov, A V; Anulli, F; Baracchini, E; Cavoto, G; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Jackson, P D; Gioi, L Li; Mazzoni, M A; Morganti, S; Piredda, G; Renga, F; Voena, C; Ebert, M; Hartmann, T; Schröder, H; Waldi, R; Adye, T; Franek, B; Olaiya, E O; Wilson, F F; Emery, S; Esteve, L; de Monchenault, G Hamel; Kozanecki, W; Vasseur, G; Yèche, Ch; Zito, M; Allen, M T; Aston, D; Bartoldus, R; Benitez, J F; Cenci, R; Coleman, J P; Convery, M R; Dingfelder, J C; Dorfan, J; Dubois-Felsmann, G P; Dunwoodie, W; Field, R C; Sevilla, M Franco; Gabareen, A M; Graham, M T; Grenier, P; Hast, C; Innes, W R; Kaminski, J; Kelsey, M H; Kim, H; Kim, P; Kocian, M L; Leith, D W G S; Li, S; Lindquist, B; Luitz, S; Luth, V; Lynch, H L; MacFarlane, D B; Marsiske, H; Messner, R; Muller, D R; Neal, H; Nelson, S; O'Grady, C P; Ofte, I; Perl, M; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Schwiening, J; Snyder, A; Su, D; Sullivan, M K; Suzuki, K; Swain, S K; Thompson, J M; Va'vra, J; Wagner, A P; Weaver, M; West, C A; Wisniewski, W J; Wittgen, M; Wright, D H; Wulsin, H W; Yarritu, A K; Young, C C; Ziegler, V; Chen, X R; Liu, H; Park, W; Purohit, M V; White, R M; Wilson, J R; Burchat, P R; Edwards, A J; Miyashita, T S; Ahmed, S; Alam, M S; Ernst, J A; Pan, B; Saeed, M A; Zain, S B; Soffer, A; Spanier, S M; Wogsland, B J; Eckmann, R; Ritchie, J L; Ruland, A M; Schilling, C J; Schwitters, R F; Wray, B C; Drummond, B W; Izen, J M; Lou, X C; Bianchi, F; Gamba, D; Pelliccioni, M; Bomben, M; Bosisio, L; Cartaro, C; Della Ricca, G; Lanceri, L; Vitale, L; Azzolini, V; Lopez-March, N; Martinez-Vidal, F; Milanes, D A; Oyanguren, A; Albert, J; Banerjee, Sw; Bhuyan, B; Choi, H H F; Hamano, K; King, G J; Kowalewski, R; Lewczuk, M J; Nugent, I M; Roney, J M; Sobie, R J; Gershon, T J; 
Harrison, P F; Ilic, J; Latham, T E; Mohanty, G B; Puccio, E M T; Band, H R; Chen, X; Dasu, S; Flood, K T; Pan, Y; Prepost, R; Vuosalo, C O; Wu, S L
2010-01-08
We present a measurement of the Cabibbo-Kobayashi-Maskawa matrix element |V_cb| and the form-factor slope ρ^2 in B → D ℓ⁻ ν_ℓ decays based on 460×10^6 BB events recorded at the Υ(4S) resonance with the BABAR detector. B → D ℓ⁻ ν_ℓ decays are selected in events in which a hadronic decay of the second B meson is fully reconstructed. We measure B(B⁻ → D⁰ ℓ⁻ ν_ℓ)/B(B⁻ → X ℓ⁻ ν_ℓ) = 0.255 ± 0.009 ± 0.009 and B(B⁰ → D⁺ ℓ⁻ ν_ℓ)/B(B⁰ → X ℓ⁻ ν_ℓ) = 0.230 ± 0.011 ± 0.011, along with the differential decay distribution in B → D ℓ⁻ ν_ℓ decays. We then determine G(1)|V_cb| = (42.3 ± 1.9 ± 1.4)×10^-3 and ρ^2 = 1.20 ± 0.09 ± 0.04, where G(1) is the hadronic form factor at the point of zero recoil.
Winds from Luminous Late-Type Stars: II. Broadband Frequency Distribution of Alfven Waves
NASA Technical Reports Server (NTRS)
Airapetian, V.; Carpenter, K. G.; Ofman, L.
2010-01-01
We present numerical simulations of winds from evolved giant stars using a fully non-linear, time-dependent 2.5-dimensional magnetohydrodynamic (MHD) code. This study extends our previous fully non-linear MHD wind simulations to include a broadband frequency spectrum of Alfven waves that drive winds from red giant stars. We calculated four Alfven wind models that cover the full range of the Alfven wave frequency spectrum to characterize the roles of freely propagating and reflected Alfven waves in the gravitationally stratified atmosphere of a late-type giant star. Our simulations demonstrate that, unlike linear Alfven wave-driven wind models, a stellar wind model based on plasma acceleration by broadband non-linear Alfven waves can consistently reproduce the wide range of observed radial velocity profiles of the winds, their terminal velocities, and the observed mass loss rates. Comparison of the calculated mass loss rates with the empirically determined mass loss rate for alpha Tau suggests an anisotropic and time-dependent nature of stellar winds from evolved giants.
Tests of neutrino interaction models with the MicroBooNE detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rafique, Aleena
2018-01-01
I measure a large set of observables in inclusive charged current muon neutrino scattering on argon with the MicroBooNE liquid argon time projection chamber operating at Fermilab. I evaluate three neutrino interaction models based on the widely used GENIE event generator using these observables. The measurement uses a data set consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2016 with the Fermilab Booster Neutrino Beam, which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5.0×10^19 protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to separate neutrino interactions from cosmic ray background events. I find that GENIE models consistently describe the shapes of a large number of kinematic distributions for fixed observed multiplicity, but I show an indication that the observed multiplicity fractions deviate from GENIE expectations.
Study for prediction of rotor/wake/fuselage interference. Part 2: Program users guide
NASA Technical Reports Server (NTRS)
Clark, D. R.; Maskew, B.
1985-01-01
A method was developed which permits the fully coupled calculation of fuselage and rotor airloads for typical helicopter configurations in forward flight. To do this, an iterative solution is carried out based on a conventional panel representation of the fuselage and a blade element representation of the rotor where fuselage and rotor singularity strengths are determined simultaneously at each step and the rotor wake is allowed to relax (deform) in response to changes in rotor wake loading and fuselage presence. On completion of the iteration, rotor loading and inflow, fuselage singularity strength (and, hence, pressure and velocity distributions) and rotor wake are all consistent. The results of a fully coupled calculation of the flow around representative helicopter configurations are presented. The effect of fuselage components on the rotor flow field and the overall wake structure is discussed as well as the aerodynamic interference between the different parts of the aircraft. Details of the computer program are given.
Data Reduction Procedures for Laser Velocimeter Measurements in Turbomachinery Rotors
NASA Technical Reports Server (NTRS)
Lepicovsky, Jan
1994-01-01
Blade-to-blade velocity distributions based on laser velocimeter data acquired in compressor or fan rotors are increasingly used as benchmark data for the verification and calibration of turbomachinery computational fluid dynamics (CFD) codes. Using laser Doppler velocimeter (LDV) data for this purpose, however, must be done cautiously. Aside from the still not fully resolved issue of the seed particle response in complex flowfields, there is an important inherent difference between CFD predictions and LDV blade-to-blade velocity distributions. CFD codes calculate velocity fields for an idealized rotor passage. LDV data, on the other hand, stem from the actual geometry of all blade channels in a rotor. The geometry often varies from channel to channel as a result of manufacturing tolerances, assembly tolerances, and operational damage or changes incurred by the rotor's individual blades.
Pore Pressure and Stress Distributions Around a Hydraulic Fracture in Heterogeneous Rock
NASA Astrophysics Data System (ADS)
Gao, Qian; Ghassemi, Ahmad
2017-12-01
One of the most significant characteristics of unconventional petroleum-bearing formations is their heterogeneity, which affects the stress distribution, hydraulic fracture propagation and also fluid flow. This study focuses on the stress and pore pressure redistributions during hydraulic stimulation in a heterogeneous poroelastic rock. Lognormal random distributions of Young's modulus and permeability are generated to simulate the heterogeneous distributions of material properties. A 3D fully coupled poroelastic model based on the finite element method is presented utilizing a displacement-pressure formulation. In order to verify the model, numerical results are compared with analytical solutions, showing excellent agreement. The effects of heterogeneities on stress and pore pressure distributions around a penny-shaped fracture in poroelastic rock are then analyzed. Results indicate that the stress and pore pressure distributions are more complex in a heterogeneous reservoir than in a homogeneous one. The spatial extent of stress reorientation during hydraulic stimulations is a function of time and is continuously changing due to the diffusion of pore pressure in the heterogeneous system. In contrast to the stress distributions in homogeneous media, irregular distributions of stresses and pore pressure are observed. Due to the change of material properties, shear stresses and nonuniform deformations are generated. The induced shear stresses in heterogeneous rock cause the initial horizontal principal stresses to rotate out of horizontal planes.
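As a rough illustration of the heterogeneity input described above, the sketch below (Python, with made-up mean values and coefficients of variation, not the paper's reservoir data) draws spatially uncorrelated lognormal fields for Young's modulus and permeability:

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_field(mean, cv, shape, rng):
    """Sample a spatially uncorrelated lognormal property field.

    mean : desired arithmetic mean of the property
    cv   : coefficient of variation (std / mean)
    """
    sigma2 = np.log(1.0 + cv**2)        # variance of log(X)
    mu = np.log(mean) - 0.5 * sigma2    # mean of log(X)
    return rng.lognormal(mu, np.sqrt(sigma2), shape)

# Hypothetical values: 20 GPa modulus, 1e-18 m^2 permeability, on a 50x50 grid
E = lognormal_field(20e9, 0.3, (50, 50), rng)
k = lognormal_field(1e-18, 0.5, (50, 50), rng)
print(E.mean(), k.mean())
```

A correlated random field (e.g. via spectral or Karhunen-Loeve methods) would be the next refinement; the uncorrelated version already reproduces the skewed, strictly positive property distributions the study relies on.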
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
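A minimal sketch of the non-intrusive approach described above, assuming a toy one-parameter "simulator" and a probabilists' Hermite basis (the paper's multi-parameter expansions work analogously): the chaos coefficients are fit by least squares, after which the mean and variance fall out of the coefficients directly:

```python
from math import factorial

import numpy as np
from numpy.polynomial import hermite_e as H

rng = np.random.default_rng(1)

def model(x):
    # Toy stand-in for an expensive hydrologic simulator
    return np.exp(0.2 * x)

# 1) Sample the standard-normal input and run the "simulator"
x = rng.standard_normal(400)
y = model(x)

# 2) Fit chaos coefficients by least squares on the He_0..He_deg basis
deg = 6
V = H.hermevander(x, deg)                  # Vandermonde-like design matrix
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# 3) Moments come directly from the coefficients (He_n orthogonal w.r.t.
#    the standard normal, with E[He_n^2] = n!):
#    E[y] = c0,  Var[y] = sum_{n>=1} c_n^2 * n!
mean_pce = c[0]
var_pce = sum(c[n] ** 2 * factorial(n) for n in range(1, deg + 1))
print(mean_pce, var_pce)
```

For this toy model the exact moments are known (E[y] = e^0.02, Var[y] = e^0.08 - e^0.04), so the surrogate can be checked directly; sensitivity indices follow from the same coefficients at no extra simulator cost.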
NASA Astrophysics Data System (ADS)
Rolland, R.
The use of the CNES/NASA/NOAA Argos environmental-research instrumentation on the Tiros-N series of satellites as a search and rescue location-finding system for trans-Atlantic yacht races during 1979-1982 is described. The transmission beacons, satellite equipment, data-processing center, and data distribution facilities of Argos are characterized and illustrated; the nine race rescue operations in which Argos was involved are listed and discussed; and the deficiencies of Argos are shown to be fully corrected in the 406-MHz location system developed for Sarsat.
Aggregate age-at-marriage patterns from individual mate-search heuristics.
Todd, Peter M; Billari, Francesco C; Simão, Jorge
2005-08-01
The distribution of age at first marriage shows well-known strong regularities across many countries and recent historical periods. We accounted for these patterns by developing agent-based models that simulate the aggregate behavior of individuals who are searching for marriage partners. Past models assumed fully rational agents with complete knowledge of the marriage market; our simulated agents used psychologically plausible simple heuristic mate search rules that adjust aspiration levels on the basis of a sequence of encounters with potential partners. Substantial individual variation must be included in the models to account for the demographically observed age-at-marriage patterns.
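A toy version of such an aspiration-adjustment heuristic can be sketched as follows; the agent count, learning window, and quality scale are invented for illustration and are not the authors' calibrated rules:

```python
import numpy as np

rng = np.random.default_rng(2)

def age_at_marriage(n_agents=5000, learn=12, horizon=300, start_age=15.0):
    """Simplified aspiration-adjustment search (a sketch, not the paper's
    exact model). Each agent meets one candidate per month. During the
    first `learn` encounters it only raises its aspiration to the best
    quality seen; afterwards it marries the first candidate whose quality
    exceeds the aspiration."""
    ages = []
    for _ in range(n_agents):
        aspiration = 0.0
        for t in range(horizon):
            q = rng.random()                       # candidate quality in [0, 1)
            if t < learn:
                aspiration = max(aspiration, q)    # adolescence: calibrate only
            elif q > aspiration:
                ages.append(start_age + t / 12.0)  # months -> years
                break
    return np.array(ages)

ages = age_at_marriage()
print(ages.mean(), np.median(ages))
```

Even this crude rule already produces the qualitative aggregate signature the abstract describes: a sharp rise in marriages after the learning phase followed by a long right tail, with some agents never marrying within the horizon.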
Electronic structures of graphane with vacancies and graphene adsorbed with fluorine atoms
NASA Astrophysics Data System (ADS)
Wu, Bi-Ru; Yang, Chih-Kai
2012-03-01
We investigate the electronic structure of graphane with hydrogen vacancies, which are supposed to occur in the process of hydrogenation of graphene. A variety of configurations is considered and defect states are derived by density functional calculation. We find that a continuous chain-like distribution of hydrogen vacancies will result in conduction of linear dispersion, much like the transport on a superhighway cutting through the jungle of hydrogen. The same conduction also occurs for chain-like vacancies in an otherwise fully fluorine-adsorbed graphene. These results should be very useful in the design of graphene-based electronic circuits.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Chen, Minghua; Parekh, Abhay; Ramchandran, Kannan
2011-09-01
We design a distributed multi-channel P2P Video-on-Demand (VoD) system using "plug-and-play" helpers. Helpers are heterogeneous "micro-servers" with limited storage, bandwidth and number of users they can serve simultaneously. Our proposed system has the following salient features: (1) it jointly optimizes over helper-user connection topology, video storage distribution and transmission bandwidth allocation; (2) it minimizes server load, and is adaptable to varying supply and demand patterns across multiple video channels irrespective of video popularity; and (3) it is fully distributed and requires little or no maintenance overhead. The combinatorial nature of the problem and the system demand for distributed algorithms make the problem uniquely challenging. By utilizing Lagrangian decomposition and Markov chain approximation based arguments, we address this challenge by designing two distributed algorithms running in tandem: a primal-dual storage and bandwidth allocation algorithm and a "soft-worst-neighbor-choking" topology-building algorithm. Our scheme provably converges to a near-optimal solution, and is easy to implement in practice. Packet-level simulation results show that the proposed scheme achieves minimum server load under highly heterogeneous combinations of supply and demand patterns, and is robust to system dynamics of user/helper churn, user/helper asynchrony, and random delays in the network.
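The flavor of the Lagrangian-decomposition step can be illustrated with a much-simplified single-helper allocation: minimizing a quadratic measure of unmet demand under a bandwidth budget reduces to water-filling on the budget constraint's multiplier. This is a sketch under invented numbers, not the paper's algorithm:

```python
import numpy as np

def allocate(demand, budget):
    """Water-filling via the Lagrange multiplier of the budget constraint.

    Minimizes sum_c (demand_c - s_c)^2 subject to sum_c s_c = budget and
    s_c >= 0. The optimum has the form s_c = max(demand_c - lam, 0), with
    lam chosen (here by bisection) so that the budget is exactly used.
    """
    assert budget <= demand.sum(), "budget beyond total demand"
    lo, hi = 0.0, float(demand.max())
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        s = np.maximum(demand - lam, 0.0)
        if s.sum() > budget:
            lo = lam          # price too low: allocation overshoots budget
        else:
            hi = lam          # price too high: budget underused
    return np.maximum(demand - 0.5 * (lo + hi), 0.0)

demand = np.array([5.0, 3.0, 1.0])   # per-channel unmet demand (made up)
s = allocate(demand, budget=6.0)
print(s)                             # channels with more demand get more
```

In the distributed setting each helper would run such an update against locally observed prices, which is what lets the full system avoid central coordination.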
ERIC Educational Resources Information Center
Beeman, Jennifer Leigh Sloan
2013-01-01
Research has found that students successfully complete an introductory course in statistics without fully comprehending the underlying theory or being able to exhibit statistical reasoning. This is particularly true for the understanding about the sampling distribution of the mean, a crucial concept for statistical inference. This study…
Distributed GPU Computing in GIScience
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.
2013-12-01
Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; and 3) GPUs, as specifically graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey.
IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2014-01-01
A micromechanical method is employed for the prediction of the behavior of unidirectional composites in which the fiber orientation can possess various statistical misalignment distributions. The method relies on the probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which the initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment, but cannot provide the local fields. Fully random fiber distribution is presented as a special case of the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.
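The probability-weighted averaging idea can be sketched for a single in-plane misalignment angle using the classical off-axis modulus formula (a scalar stand-in for the concentration-tensor averaging in the paper); the ply properties and standard deviations below are hypothetical:

```python
import numpy as np

# Hypothetical unidirectional ply properties in GPa (not from the paper)
E1, E2, G12, nu12 = 150.0, 10.0, 5.0, 0.3

def off_axis_modulus(theta):
    """Classical off-axis Young's modulus of a UD ply rotated by theta:
    1/E_x = cos^4/E1 + sin^4/E2 + (1/G12 - 2*nu12/E1) sin^2 cos^2."""
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    inv_E = c2**2 / E1 + s2**2 / E2 + (1.0 / G12 - 2.0 * nu12 / E1) * c2 * s2
    return 1.0 / inv_E

def misaligned_modulus(std_deg, n=2001):
    """Probability-weighted average over a normal misalignment distribution,
    truncated at +/- 4 standard deviations and discretized on a grid."""
    std = np.radians(std_deg)
    theta = np.linspace(-4 * std, 4 * std, n)
    w = np.exp(-0.5 * (theta / std) ** 2)
    w /= w.sum()                        # normalized discrete weights
    return float((w * off_axis_modulus(theta)).sum())

for sd in (1.0, 3.0, 5.0):
    print(sd, misaligned_modulus(sd))   # modulus degrades as scatter grows
```

The same weighting applied to the full concentration tensor, rather than to a single modulus, is what gives the paper's method access to the local fields needed for damage-initiation prediction.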
Reynolds shear stress and heat flux calculations in a fully developed turbulent duct flow
NASA Technical Reports Server (NTRS)
Antonia, R. A.; Kim, J.
1991-01-01
The use of a modified form of the Van Driest mixing length for a fully developed turbulent channel flow leads to mean velocity and Reynolds stress distributions that are in close agreement with data obtained either from experiments or direct numerical simulations. The calculations are then extended to a nonisothermal flow by assuming a constant turbulent Prandtl number, the value of which depends on the molecular Prandtl number. Calculated distributions of mean temperature and lateral heat flux are in reasonable agreement with the simulations. The extension of the calculations to higher Reynolds numbers provides some idea of the Reynolds number required for scaling on wall variables to apply in the inner region of the flow.
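A minimal sketch of this kind of calculation, using the unmodified textbook Van Driest damping (not the authors' modified mixing length): the total-stress balance for fully developed channel flow gives a quadratic for du+/dy+, which is integrated from the wall:

```python
import numpy as np

kappa, A_plus = 0.41, 26.0   # standard log-law constant and damping constant

def velocity_profile(re_tau=1000.0, n=20000):
    """Integrate du+/dy+ for fully developed channel flow with a
    Van Driest-damped mixing length l+ = kappa*y+*(1 - exp(-y+/A+)).
    Total stress balance:  du+/dy+ + l+^2 (du+/dy+)^2 = 1 - y+/Re_tau."""
    y = np.linspace(0.0, re_tau, n)          # y+ from wall to centerline
    dy = y[1] - y[0]
    l = kappa * y * (1.0 - np.exp(-y / A_plus))
    tau = 1.0 - y / re_tau                   # total shear stress (plus units)
    # Positive root of the quadratic, written to avoid cancellation:
    dudy = 2.0 * tau / (1.0 + np.sqrt(1.0 + 4.0 * l**2 * tau))
    return y, np.cumsum(dudy) * dy           # crude rectangle-rule integration

y, u = velocity_profile()
i = np.searchsorted(y, 100.0)                # a point in the log layer
print(u[i])                                  # roughly (1/0.41)*ln(100) + 5
```

With these constants the profile reproduces the viscous sublayer (u+ ≈ y+) and a log layer with intercept near 5, which is the behavior the abstract's comparison against DNS data rests on.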
Research and Development of Fully Automatic Alien Smoke Stack and Packaging System
NASA Astrophysics Data System (ADS)
Yang, Xudong; Ge, Qingkuan; Peng, Tao; Zuo, Ping; Dong, Weifu
2017-12-01
To address the low efficiency of manual sorting and packaging in current tobacco distribution centers, a safe, efficient, fully automatic alien (irregularly shaped) cigarette stacking and packaging system was developed. The fully automatic system adopts PLC control technology, servo control technology, robot technology, image recognition technology and human-computer interaction technology. The characteristics, principles, control process and key technologies of the system are discussed in detail. After installation and commissioning, the fully automatic stacking and packaging system has shown good performance and meets the requirements for handling shaped cigarettes.
Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.
Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H
2009-01-01
Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.
Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P
2004-01-01
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high performance computing and next-generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial intelligence rules-based, virtual simulations. The virtual reality patient is programmed to dynamically change over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multi-casting connectivity, through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and managing of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among VRE users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency.
This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment independent of distance for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer, improved future performance and should entail training participants to competence in using these tools.
Applying transport-distance specific SOC distribution to calibrate soil erosion model WaTEM
NASA Astrophysics Data System (ADS)
Hu, Yaxian; Heckrath, Goswin J.; Kuhn, Nikolaus J.
2016-04-01
Slope-scale soil erosion, transport and deposition fundamentally determine the spatial redistribution of eroded sediments in terrestrial and aquatic systems, which further affects the burial and decomposition of eroded SOC. However, comparisons of SOC contents between upper eroding slopes and lower depositional sites cannot fully reflect the movement of eroded SOC in transit along hillslopes. The actual transport distance of eroded SOC is determined by its settling velocity. So far, the settling velocity distribution of eroded SOC has mostly been calculated from mineral-particle-specific SOC distributions. Yet soil is mostly eroded in the form of aggregates, and the movement of aggregates differs significantly from that of individual mineral particles. This calls for an SOC erodibility parameter based on the actual transport distance distribution of eroded fractions to better calibrate soil erosion models. A previous field investigation on a freshly seeded cropland in Denmark showed immediate deposition of fast-settling soil fractions and the associated SOC at footslopes, followed by a fining trend at the slope tail. To further quantify the long-term effects of topography on the erosional redistribution of eroded SOC, the transport-distance specific SOC distribution observed in the field was applied to the soil erosion model WaTEM (based on the USLE). After integration with a local DEM, our calibrated model succeeded in locating the hotspots of enrichment/depletion of eroded SOC at different topographic positions, corresponding much better to real-world field observations. By extrapolating to repeated erosion events, our projected results on the spatial distribution of eroded SOC are also adequately consistent with the SOC properties in consecutive sample profiles along the slope.
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. We demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
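The sample-average idea can be caricatured in a few lines: estimate expected prices from sampled scenarios, then exploit charging flexibility by filling the cheapest expected slots. This is a greedy stand-in for the paper's two-stage stochastic program, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def schedule_charging(energy_needed, p_max, n_slots=12, n_scenarios=500):
    """Sample-average sketch: put flexible charging energy into the slots
    with the lowest *expected* price, respecting the charger limit p_max."""
    # Scenario set: lognormal price noise around a made-up daily shape
    base = 30.0 + 10.0 * np.sin(np.linspace(0, np.pi, n_slots))
    prices = base * rng.lognormal(0.0, 0.2, (n_scenarios, n_slots))
    expected = prices.mean(axis=0)        # SAA estimate of E[price_t]

    plan = np.zeros(n_slots)
    remaining = energy_needed
    for t in np.argsort(expected):        # cheapest expected slots first
        plan[t] = min(p_max, remaining)
        remaining -= plan[t]
        if remaining <= 0:
            break
    return plan, expected

plan, expected = schedule_charging(energy_needed=20.0, p_max=7.0)
print(plan.sum())
```

The real model additionally handles recourse (second-stage decisions per scenario), distributed energy resources, and network constraints, which is why it needs the L-shaped method rather than a greedy rule.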
ZERODUR - bending strength: review of achievements
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2017-08-01
Increased demand for using the glass ceramic ZERODUR® under high mechanical loads called for strength data based on larger statistical samples. Design calculations for a failure probability target below 1:100 000 cannot be made reliable with parameters derived from 20-specimen samples. The data now available for a variety of surface conditions, ground with different grain sizes and acid etched for full micro crack removal, allow stresses four to ten times higher than before. The large sample revealed that breakage stresses of ground surfaces follow the three-parameter Weibull distribution instead of the two-parameter version. This is more reasonable considering that the micro cracks of such surfaces have a maximum depth, which is reflected in the existence of a threshold breakage stress below which the breakage probability is zero. This minimum strength allows calculating minimum lifetimes. Fatigue under load can be taken into account by using the stress corrosion coefficient for the actual environmental humidity. For fully etched surfaces, Weibull statistics fails: the precondition of the Weibull distribution, the existence of one unique failure mechanism, is no longer given. ZERODUR® with fully etched surfaces free from damage introduced after etching easily endures 100 MPa tensile stress. The possibility of using ZERODUR® for combined high-precision and high-stress applications was confirmed by the successful launch and continuing operation of LISA Pathfinder, the precursor experiment for the gravitational wave antenna satellite array eLISA.
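The three-parameter Weibull form with its threshold stress can be written down directly; the parameter values below are illustrative, not ZERODUR® datasheet values:

```python
import numpy as np

def weibull3_failure_prob(stress, sigma0, sigma_theta, m):
    """Three-parameter Weibull failure probability:
    P_f(s) = 1 - exp(-((s - sigma0)/sigma_theta)^m) for s > sigma0,
    and exactly zero below the threshold stress sigma0 - the feature
    that makes minimum-strength (and hence minimum-lifetime) claims
    possible for ground surfaces."""
    s = np.asarray(stress, dtype=float)
    pf = np.zeros_like(s)
    above = s > sigma0
    pf[above] = 1.0 - np.exp(-(((s[above] - sigma0) / sigma_theta) ** m))
    return pf

# Hypothetical ground-surface parameters: threshold 30 MPa, scale 60 MPa,
# shape modulus 4 (chosen for illustration only)
stress = np.array([10.0, 30.0, 50.0, 120.0])
pf = weibull3_failure_prob(stress, 30.0, 60.0, 4.0)
print(pf)
```

A two-parameter fit (sigma0 = 0) forces a nonzero failure probability at every stress, which is exactly the behavior the larger sample ruled out for ground surfaces.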
IR wireless cluster synapses of HYDRA very large neural networks
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas
2008-04-01
RF/IR wireless (virtual) synapses are critical components of HYDRA (Hyper-Distributed Robotic Autonomy) neural networks, already discussed in two earlier papers. The HYDRA network has the potential to be very large, up to 10^11 neurons and 10^18 synapses, based on already established technologies (cellular RF telephony and IR-wireless LANs). It is organized into almost fully connected IR-wireless clusters. The HYDRA neurons and synapses are very flexible, simple, and low-cost. They can be modified into a broad variety of biologically-inspired brain-like computing capabilities. In this third paper, we focus on neural hardware in general, and on IR-wireless synapses in particular. Such synapses, based on LED/LD-connections, dominate the HYDRA neural cluster.
Phase gradient algorithm based on co-axis two-step phase-shifting interferometry and its application
NASA Astrophysics Data System (ADS)
Wang, Yawei; Zhu, Qiong; Xu, Yuanyuan; Xin, Zhiduo; Liu, Jingye
2017-12-01
A phase gradient method based on co-axis two-step phase-shifting interferometry is used to reveal the detailed information of a specimen. In this method, the phase gradient distribution can be obtained by calculating only the first-order derivative and the radial Hilbert transform of the intensity difference between two phase-shifted interferograms. The feasibility and accuracy of this method were fully verified by simulation results for a polystyrene sphere and a red blood cell. The results demonstrate that the phase gradient is sensitive to changes in refractive index and morphology. Because phase retrieval and tedious phase unwrapping are not required, the calculation is fast. In addition, co-axis interferometry has high spatial resolution.
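As a rough 1-D analogue of the method (the paper's radial Hilbert transform is two-dimensional), the two ingredients named above, the first-order derivative and a Hilbert transform of the intensity difference, can be computed as follows; the phase profile and fringe model are hypothetical:

```python
import numpy as np

def analytic_signal(x):
    """1-D analytic signal via FFT; its imaginary part is the Hilbert transform."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Two phase-shifted interferograms of a toy 1-D specimen phase phi(x):
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
phi = 1.5 * np.sin(x)                    # hypothetical specimen phase
I1 = 2 + np.cos(phi)                     # interferogram, shift 0
I2 = 2 + np.cos(phi + np.pi / 2)         # interferogram, pi/2 shift
diff = I1 - I2                           # intensity difference

# The two quantities the method combines (no phase unwrapping involved):
d_diff = np.gradient(diff, x)            # first-order derivative
h_diff = np.imag(analytic_signal(diff))  # Hilbert transform of the difference
```

How the paper combines these into the phase gradient is not reproduced here; the point is that both operations act directly on the intensity difference, which is why no phase retrieval or unwrapping step is needed.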
Ability to pay and equity in access to Italian and British National Health Services.
Domenighetti, Gianfranco; Vineis, Paolo; De Pietro, Carlo; Tomada, Angelo
2010-10-01
Equity in the delivery and distribution of health care is an important determinant of health and a cornerstone on the long road to social justice. We performed a comparative analysis of the prevalence of Italian and British residents who have fully paid out-of-pocket for health services which they could have obtained free of charge, or at a lower cost, from their respective National Health Services. Cross-sectional study based on a standardized questionnaire survey carried out in autumn 2006 among two representative samples (n = 1000) of the general population aged 20-74 years in each of the two countries. 78% (OR 19.9; 95% CI 15.5-25.6) of Italian residents have fully paid out-of-pocket for at least one access to health services in their lives, and 45% (OR 18.1; 95% CI 12.9-25.5) for more than five accesses. Considering only the last 2 years, 61% (OR 16.5; 95% CI 12.6-21.5) of Italians have fully paid out-of-pocket for at least one access. The corresponding figures for British residents are 20% and 4% for lifelong prevalence, and 10% for the last 2 years. Opening public health facilities to privileged private access to hospital physicians based on patients' ability to pay, as Italy does, could be a source of social inequality in access to care and probably represents a major obstacle to decreasing waiting times for patients using the standard 'free of charge' route of access.
NASA Astrophysics Data System (ADS)
Cox, M.; Shirono, K.
2017-10-01
A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM’s Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
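The idea of an informative prior bounded between a lower and an upper standard deviation can be sketched numerically. The paper's closed-form factor is not reproduced here; this is only a grid-based posterior under a uniform prior on sigma over the bounds and a flat prior on the mean, with hypothetical observations and bounds:

```python
import numpy as np

def type_a_uncertainty_bounded_prior(obs, sigma_lo, sigma_hi, ngrid=20000):
    """Standard uncertainty of the mean under a normal model with a uniform
    prior on sigma restricted to [sigma_lo, sigma_hi] and a flat prior on
    the mean (a numerical sketch, not the paper's closed-form expression).
    """
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    S = np.sum((obs - obs.mean()) ** 2)      # sum of squared residuals
    sigma = np.linspace(sigma_lo, sigma_hi, ngrid)
    # Marginal posterior of sigma after integrating out the mean:
    # p(sigma | data) proportional to sigma^-(n-1) * exp(-S / (2 sigma^2))
    log_post = -(n - 1) * np.log(sigma) - S / (2.0 * sigma ** 2)
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    # Posterior variance of the mean equals E[sigma^2 | data] / n
    var_mean = np.sum(w * sigma ** 2) / n
    return np.sqrt(var_mean)

obs = [10.1, 9.9, 10.2]                      # n = 3 observations (works for n >= 2)
u = type_a_uncertainty_bounded_prior(obs, sigma_lo=0.05, sigma_hi=0.5)
# Factor by which s / sqrt(n) is multiplied, in the spirit of the paper:
k = u / (np.std(obs, ddof=1) / np.sqrt(len(obs)))
```

When the bounds collapse onto a single value sigma0, the result reduces to sigma0/sqrt(n), as it should: the prior then fully determines the dispersion.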
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To that end, it is important to identify, before satellite-based calibration, the model parameters that control the simulated spatial patterns. Our study is based on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows the spatial distribution of key soil parameters to change through the calibration of pedo-transfer function (PTF) parameters, and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow does not reduce the spatial errors in AET but improves only the streamflow simulations. We will further examine the results of model calibration using only spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.
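A minimal stand-in for such a spatial objective function is the cell-wise correlation between observed and simulated AET maps, ignoring no-data cells; the metrics used in studies of this kind are typically richer (also comparing variance and histograms of the two maps):

```python
import numpy as np

def spatial_pattern_score(obs_map, sim_map):
    """Pearson correlation between two raster maps, ignoring NaN cells.

    A toy spatial objective: 1.0 means the simulated pattern matches the
    observed (e.g. MODIS-derived) pattern cell for cell, up to an affine
    rescaling; it deliberately ignores the absolute magnitudes.
    """
    obs = np.asarray(obs_map, dtype=float).ravel()
    sim = np.asarray(sim_map, dtype=float).ravel()
    ok = ~(np.isnan(obs) | np.isnan(sim))    # mask out no-data cells
    return np.corrcoef(obs[ok], sim[ok])[0, 1]

# Toy 2x2 AET maps with one no-data cell (values hypothetical, mm/day):
obs_demo = np.array([[1.0, 2.0], [3.0, np.nan]])
sim_demo = np.array([[1.1, 2.2], [3.3, 2.5]])
score = spatial_pattern_score(obs_demo, sim_demo)
```

Because the score is invariant to bias and scaling, calibrating against it alone cannot fix absolute water-balance errors, which is one reason such studies combine spatial and streamflow metrics.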
NASA Astrophysics Data System (ADS)
Klatt, Steffen; Haas, Edwin; Kraus, David; Kiese, Ralf; Butterbach-Bahl, Klaus; Kraft, Philipp; Plesca, Ina; Breuer, Lutz; Zhu, Bo; Zhou, Minghua; Zhang, Wei; Zheng, Xunhua; Wlotzka, Martin; Heuveline, Vincent
2014-05-01
The use of mineral nitrogen fertilizer sustains global food production and therefore the livelihood of humankind. The rise in world population will put pressure on the global agricultural system to increase its productivity, most likely leading to an intensification of mineral nitrogen fertilizer use. The fate of excess nitrogen and its distribution within landscapes is manifold. Process knowledge on the site scale has grown rapidly in recent years, and models have been developed to simulate carbon and nitrogen cycling in managed ecosystems on the site scale. Despite first regional studies, carbon and nitrogen cycling on the landscape or catchment scale is not fully understood. In this study we present a newly developed modelling approach coupling the fully distributed hydrology model CMF (catchment modelling framework) to the process-based regional ecosystem model LandscapeDNDC for the investigation of hydrological processes and carbon and nitrogen transport and cycling, with a focus on nutrient displacement and the resulting greenhouse gas emissions in a small catchment at the Yanting Agro-ecological Experimental Station of Purple Soil, Sichuan province, China. The catchment hosts cypress forests on the outer regions, arable fields on the sloping croplands cultivated with wheat-maize rotations, and paddy rice fields in the lowland. The catchment is discretized into 300 polygons vertically stratified into 10 soil layers. Ecosystem states (soil water content and nutrients) and fluxes (evapotranspiration) are exchanged between the models at high temporal resolution (hourly to daily), forming a 3-dimensional model application. Water flux and nutrient transport in the soil are modelled using a 3D Richards/Darcy approach for subsurface fluxes with a kinematic wave approach for surface runoff; evapotranspiration is based on Penman-Monteith.
Biogeochemical processes are modelled by LandscapeDNDC, including soil microclimate, plant growth and biomass allocation, organic matter mineralisation, nitrification, denitrification, chemodenitrification and methanogenesis, which produce and consume soil-based greenhouse gases. We will present first validation results of the coupled model simulating soil-based greenhouse gas emissions as well as nitrate discharge from the Yanting catchment, along with the effects of different management practices (fertilization rates and timings, tilling, residue management) on the redistribution of the N surplus within the catchment, which causes biomass productivity gradients and different levels of indirect N2O emissions along topographical gradients.
NASA Astrophysics Data System (ADS)
Haas, Edwin; Klatt, Steffen; Kiese, Ralf; Butterbach-Bahl, Klaus; Kraft, Philipp; Breuer, Lutz
2015-04-01
The use of mineral nitrogen fertilizer sustains global food production and therefore the livelihood of humankind. The rise in world population will put pressure on the global agricultural system to increase its productivity, most likely leading to an intensification of mineral nitrogen fertilizer use. The fate of excess nitrogen and its distribution within landscapes is manifold. Process knowledge on the site scale has grown rapidly in recent years, and models have been developed to simulate carbon and nitrogen cycling in managed ecosystems on the site scale. Despite first regional studies, carbon and nitrogen cycling on the landscape or catchment scale is not fully understood. In this study we present a newly developed modelling approach coupling the fully distributed hydrology model CMF (catchment modelling framework) to the process-based regional ecosystem model LandscapeDNDC for the investigation of hydrological processes and carbon and nitrogen transport and cycling, with a focus on nutrient displacement and the resulting greenhouse gas emissions in various virtual landscapes/catchments to demonstrate the capabilities of the modelling system. The modelling system was applied to simulate water and nutrient transport at the Yanting Agro-ecological Experimental Station of Purple Soil, Sichuan province, China. The catchment hosts cypress forests on the outer regions, arable fields on the sloping croplands cultivated with wheat-maize rotations, and paddy rice fields in the lowland. The catchment is discretized into 300 polygons vertically stratified into 10 soil layers. Ecosystem states (soil water content and nutrients) and fluxes (evapotranspiration) are exchanged between the models at high temporal resolution (hourly to daily), forming a 3-dimensional model application.
Water flux and nutrient transport in the soil are modelled using a 3D Richards/Darcy approach for subsurface fluxes with a kinematic wave approach for surface runoff; evapotranspiration is based on Penman-Monteith. Biogeochemical processes are modelled by LandscapeDNDC, including soil microclimate, plant growth and biomass allocation, organic matter mineralisation, nitrification, denitrification, chemodenitrification and methanogenesis, which produce and consume soil-based greenhouse gases. We will present first results of the coupled model simulating soil-based greenhouse gas emissions as well as nitrate discharge from the Yanting catchment, along with the effects of different management practices (fertilization rates and timings, tilling, residue management) on the redistribution of the N surplus within the catchment, which causes biomass productivity gradients and different levels of indirect N2O emissions along topographical gradients.
NASA Astrophysics Data System (ADS)
Carjan, Nicolae; Rizea, Margarit; Talou, Patrick
2017-09-01
Prompt fission neutron (PFN) angular and energy distributions for the reaction 235U(nth,f) are calculated as a function of the mass asymmetry of the fission fragments using two extreme assumptions: 1) PFN are released during the neck rupture due to the diabatic coupling between the neutron degree of freedom and the rapidly changing neutron-nucleus potential. These unbound neutrons are faster than the separation of the nascent fragments, and most of them leave the fissioning system in a few 10^-21 s, i.e., at the beginning of the acceleration phase. Surrounding the fissioning nucleus by a sphere, one can calculate the radial component of the neutron current density; its time integral gives the angular distribution with respect to the fission axis. The average energy of each emitted neutron is also calculated using the unbound part of each neutron wave packet. The distribution of these average energies gives the general trends of the PFN spectrum: the slope, the range and the average value. 2) PFN are evaporated from fully accelerated, fully equilibrated fission fragments. To follow the de-excitation of these fragments via sequential neutron and γ-ray emissions, Monte Carlo sampling of the initial conditions and a Hauser-Feshbach statistical approach are used. Recording at each step the emission probability, energy and angle of each evaporated neutron, one can construct the PFN energy and angular distributions in the laboratory system. The predictions of these two methods are finally compared with recent experimental results obtained for a given fragment mass ratio.
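The second assumption, evaporation from fully accelerated fragments, can be sketched with a toy Monte Carlo: sample center-of-mass energies from a Weisskopf-type spectrum and boost them kinematically by the fragment kinetic energy per nucleon. Temperature and boost values are hypothetical, and this deliberately omits the Hauser-Feshbach competition with γ-ray emission used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_evaporated_neutrons(n, temperature_mev, ef_per_nucleon_mev):
    """Lab-frame energies of neutrons evaporated isotropically from a fully
    accelerated fragment (toy Weisskopf sketch, not Hauser-Feshbach).
    """
    # Weisskopf evaporation spectrum f(e) ~ e * exp(-e / T) is a
    # Gamma(shape=2, scale=T) distribution in the fragment rest frame:
    e_cm = rng.gamma(shape=2.0, scale=temperature_mev, size=n)
    cos_cm = rng.uniform(-1.0, 1.0, size=n)    # isotropic emission angle
    # Non-relativistic boost by the fragment kinetic energy per nucleon E_f:
    ef = ef_per_nucleon_mev
    e_lab = e_cm + ef + 2.0 * np.sqrt(e_cm * ef) * cos_cm
    return e_lab

# Hypothetical fragment temperature 0.8 MeV and 0.75 MeV/nucleon boost:
e_lab = sample_evaporated_neutrons(100_000, temperature_mev=0.8,
                                   ef_per_nucleon_mev=0.75)
```

The boost term is what kinematically focuses evaporated neutrons along the fission axis in the lab frame, the signature this mechanism is tested against.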
NASA Astrophysics Data System (ADS)
Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Lee, Jong-Geon; Cho, Jeon-Wook; Ryoo, Hee-Suk; Lee, Bang-Wook
2013-11-01
High temperature superconducting (HTS) cable has attracted much attention due to its high efficiency and high current transport capability, and it is also regarded as an eco-friendly power cable for the next generation. DC HTS cable in particular offers more sustainable and stable operation than AC HTS cable owing to the absence of AC loss. Recently, DC HTS cable has been investigated competitively all over the world, and one of the key components to be developed is a cable joint box suited to the HVDC environment. To achieve an optimum insulation design of the joint box, analysis of its DC electric field distribution is a fundamental step in developing DC HTS cable. Generally, the AC electric field distribution depends on the relative permittivity of the dielectric materials, but under DC, the electrical conductivity of the dielectric material is the dominant factor determining the electric field distribution. In this study, in order to evaluate the DC electric field characteristics of the joint box for DC HTS cable, a polypropylene laminated paper (PPLP) specimen was prepared and its DC electric field distribution was analyzed based on measurements of the electrical conductivity of PPLP in liquid nitrogen (LN2). The electrical conductivity of PPLP in LN2 has not been reported before, but it is required for DC electric field analysis; the experimental work for measuring it is presented in this paper. Based on these measurements, the DC electric field distribution of the PPLP specimen was fully analyzed considering both the steady state and the transient state of DC. Consequently, it was possible to determine the electric field distribution characteristics for different DC stages, including DC switching on, DC switching off and polarity reversal.
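The permittivity-versus-conductivity point can be illustrated for two insulation layers in series: under AC the displacement field is continuous, so the field divides by permittivity; under steady DC the conduction current is continuous, so it divides by conductivity. The material values below are placeholders, not measured PPLP properties:

```python
def series_layer_fields(voltage, d1, d2, eps1, eps2, sigma1, sigma2):
    """Electric field in two series dielectric layers of thickness d1, d2.

    AC (capacitive) division:  eps1*E1 = eps2*E2
    DC (resistive)  division:  sigma1*E1 = sigma2*E2
    with E1*d1 + E2*d2 = V in both cases.
    """
    # AC: continuity of displacement field
    e1_ac = voltage / (d1 + d2 * eps1 / eps2)
    e2_ac = e1_ac * eps1 / eps2
    # DC steady state: continuity of conduction current density
    e1_dc = voltage / (d1 + d2 * sigma1 / sigma2)
    e2_dc = e1_dc * sigma1 / sigma2
    return (e1_ac, e2_ac), (e1_dc, e2_dc)

# Hypothetical values: layer 1 has lower conductivity, so under DC it
# carries almost all of the voltage even though the AC division is mild.
V = 100.0                                   # applied voltage (V)
(e1_ac, e2_ac), (e1_dc, e2_dc) = series_layer_fields(
    V, d1=1.0, d2=1.0, eps1=2.2, eps2=3.5, sigma1=1e-16, sigma2=1e-14)
```

This is why conductivity measurements, including their temperature and field dependence in LN2, are the prerequisite for DC insulation design, and why the transient after switching or polarity reversal (when the field migrates from the capacitive to the resistive distribution) matters.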
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
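The multiplier idea can be sketched with a toy model and a plain random-walk Metropolis sampler standing in for DREAM (which is a multi-chain adaptive algorithm); the model, noise level, and priors are hypothetical. Note how the input multiplier m trades off against the model parameter k, so only their product is well constrained unless additional data are included, which mirrors the abstract's finding that input uncertainty affects the parameter distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: simulated head = k * (m * reported recharge), where m
# is a multiplier on the uncertain recharge input. True m = 1.2, k = 2.0.
recharge_reported = np.array([1.0, 2.0, 3.0, 4.0])
heads_observed = 2.0 * (1.2 * recharge_reported) + rng.normal(0, 0.05, 4)

def log_posterior(theta):
    k, m = theta
    if not (0.0 < k < 10.0 and 0.5 < m < 2.0):   # bounded uniform priors
        return -np.inf
    resid = heads_observed - k * (m * recharge_reported)
    return -0.5 * np.sum((resid / 0.05) ** 2)    # Gaussian likelihood

theta = np.array([1.0, 1.0])
lp = log_posterior(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.02, 2)        # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])               # discard burn-in
```

The posterior product k*m concentrates near the true value 2.4 while k and m individually remain broad along the trade-off ridge: the multiplier representation keeps the dimensionality low but does not create information the data lack.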
Metropolitan Washington Area Water Supply Study. Appendix F. Structural Alternatives.
1983-09-01
[Fragment of the appendix table of contents and figure list: Geology; Description of Aquifers (Patuxent, Patapsco, Magothy, and Aquia Formations); transmissivity distribution maps for the Patuxent, Patapsco, and Magothy aquifers.] The wellfield scheme was planned to tap the region's deep aquifers, particularly the Magothy and Patapsco formations. To fully penetrate these aquifers…
1994-05-01
[Report fragment] PARALLEL DISTRIBUTED MEMORY ARCHITECTURE. T. M. Eidson (High Technology Corporation, Hampton, VA 23665) and G. Erlebacher (Institute for Computer Applications in Science and Engineering), Contract NAS1-19480, May 1994. Several strategies were developed and evaluated; simple model calculations as well as timing results are presented to evaluate the various strategies.
Work Distribution in a Fully Distributed Processing System.
1982-01-01
Georgia Institute of Technology, Atlanta, Georgia 30332. The views, opinions, and/or findings contained in this report are those of the author and should not be construed as an official Department of the Navy position. [Section 1] Distributed data processing systems are currently being studied by researchers and prospective users because of their potential for improvements…
Statistical distribution of building lot frontage: application for Tokyo downtown districts
NASA Astrophysics Data System (ADS)
Usui, Hiroyuki
2018-03-01
The frontage of a building lot is a determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable for identifying potential districts that comprise a high percentage of building lots with narrow frontage after subdivision, and for reconsidering appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and its relation to the density of buildings and roads has not been fully researched. In this paper, based on an empirical study of the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the coefficient of variation of building lot frontages and the ratio of the number of building lot frontages to the number of buildings are approximately equal to 0.60 and 1.19, respectively.
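Finding (1) can be reproduced in miniature by comparing log-likelihoods of log-normal and gamma fits on synthetic frontage data (drawn log-normally here; the fitting choices, maximum likelihood for the log-normal and moment matching for the gamma, are simplifications of a full model comparison):

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def loglik_lognormal(x):
    """Max log-likelihood of a log-normal fit (its MLE is a normal fit to log x)."""
    logs = np.log(x)
    mu, s = logs.mean(), logs.std()
    return np.sum(-np.log(x * s * math.sqrt(2 * math.pi))
                  - (logs - mu) ** 2 / (2 * s ** 2))

def loglik_gamma(x):
    """Log-likelihood of a gamma fit with moment-matched shape and scale."""
    m, v = x.mean(), x.var()
    shape, scale = m * m / v, v / m
    return np.sum((shape - 1) * np.log(x) - x / scale
                  - shape * math.log(scale) - math.lgamma(shape))

# Hypothetical frontages (metres), drawn log-normally as the Tokyo data suggest:
frontage = rng.lognormal(mean=1.5, sigma=0.8, size=5000)
better_fit = ("lognormal" if loglik_lognormal(frontage) > loglik_gamma(frontage)
              else "gamma")
```

On real data the same comparison would be run on observed frontages, ideally with a penalized criterion such as AIC, although here both candidate models have two parameters so the raw log-likelihoods are directly comparable.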
NASA Astrophysics Data System (ADS)
Essa, Mohammed Sh.; Chiad, Bahaa T.; Shafeeq, Omer Sh.
2017-09-01
Thin films of copper oxide (CuO) absorption layers have been deposited on glass substrates using a home-made Fully Computerized Spray Pyrolysis Deposition (FCSPD) system, at a nozzle-to-substrate distance of 20,35 cm and with computerized spray modes (continuous spray and macro-controlled spray). The substrate temperature was kept at 450 °C, with a user-selectable temperature tolerance of ±5 °C, at a fixed molar concentration of 0.1 M and a deposition platform (2D stage) speed of 4 mm/s. The control program comprises more than 1000 instructions and a purpose-designed graphical user interface (GUI) that fully controls the deposition process, with real-time monitoring and control of the deposition temperature every 200 ms. The temperature evolution was recorded during deposition, together with all deposition parameters. The films were characterized to evaluate the thermal distribution over the X-Y movable hot plate, the structure, and the optical energy gap. The thermal distribution was good and uniform over the 20 cm2 hot plate area; X-ray diffraction (XRD) measurements revealed that the films are polycrystalline in nature and can be assigned to the monoclinic CuO structure. The optical band gap varies from 1.5 to 1.66 eV depending on the deposition parameters.
Investigation of vertical graded channel doping in nanoscale fully-depleted SOI-MOSFET
NASA Astrophysics Data System (ADS)
Ramezani, Zeinab; Orouji, Ali A.
2016-10-01
To achieve a reliable transistor, we investigate an amended channel doping (ACD) engineering which improves the electrical and thermal performance of the fully-depleted silicon-on-insulator (SOI) MOSFET. We call the proposed structure with the amended channel doping engineering the ACD-SOI structure and compare it with a conventional fully-depleted SOI MOSFET (C-SOI) with uniform doping distribution using the 2-D ATLAS simulator. The amended channel doping is a vertical graded doping distributed from the surface of the structure, with high doping density, to the bottom of the channel near the buried oxide, with low doping density. Short channel effects (SCEs) and leakage current are suppressed due to the high barrier height near the source region and the modified electric field in the ACD-SOI in comparison with the C-SOI structure. Furthermore, owing to the lower electric field and electron temperature near the drain region, which is where hot carriers are generated, we expect improved reliability and reduced gate-induced drain leakage (GIDL) in the proposed structure. The undesirable self-heating effect (SHE), which has become a critical challenge for SOI MOSFETs, is alleviated in the ACD-SOI structure because of the low doping density near the buried oxide. Thus, according to the obtained results, the ACD-SOI structure with graded doping distribution in the vertical direction is a reliable device, especially for low-power and high-temperature applications.
Transmural variation in elastin fiber orientation distribution in the arterial wall.
Yu, Xunjie; Wang, Yunjie; Zhang, Yanhang
2018-01-01
The complex three-dimensional elastin network is a major load-bearing extracellular matrix (ECM) component of an artery. Despite the reported anisotropic behavior of arterial elastin network, it is usually treated as an isotropic material in constitutive models. Our recent multiphoton microscopy study reported a relatively uniform elastin fiber orientation distribution in porcine thoracic aorta when imaging from the intima side (Chow et al., 2014). However it is questionable whether the fiber orientation distribution obtained from a small depth is representative of the elastin network structure in the arterial wall, especially when developing structure-based constitutive models. To date, the structural basis for the anisotropic mechanical behavior of elastin is still not fully understood. In this study, we examined the transmural variation in elastin fiber orientation distribution in porcine thoracic aorta and its association with elastin anisotropy. Using multi-photon microscopy, we observed that the elastin fibers orientation changes from a relatively uniform distribution in regions close to the luminal surface to a more circumferential distribution in regions that dominate the media, then to a longitudinal distribution in regions close to the outer media. Planar biaxial tensile test was performed to characterize the anisotropic behavior of elastin network. A new structure-based constitutive model of elastin network was developed to incorporate the transmural variation in fiber orientation distribution. The new model well captures the anisotropic mechanical behavior of elastin network under both equi- and nonequi-biaxial loading and showed improvements in both fitting and predicting capabilities when compared to a model that only considers the fiber orientation distribution from the intima side. 
We submit that the transmural variation in fiber orientation distribution is important in characterizing the anisotropic mechanical behavior of elastin network and should be considered in constitutive modeling of an artery. Copyright © 2017 Elsevier Ltd. All rights reserved.
The MSG Central Facility - A Mission Control System for Windows NT
NASA Astrophysics Data System (ADS)
Thompson, R.
The MSG Central Facility, being developed by Science Systems for EUMETSAT, represents the first of a new generation of satellite mission control systems based on the Windows NT operating system. The system makes use of a range of new technologies to provide an integrated environment for the planning, scheduling, control and monitoring of the entire Meteosat Second Generation mission. It supports packetised TM/TC and uses Science Systems' Space UNiT product to provide automated operations support at both Schedule (Timeline) and Procedure levels. Flexible access to historical data is provided through an operations archive based on ORACLE Enterprise Server, hosted on a large RAID array and an off-line tape jukebox. Event-driven real-time data distribution is based on the CORBA standard. Operations preparation and configuration control tools form a fully integrated element of the system.
Meta-Analysis of Planetarium Efficacy Research
ERIC Educational Resources Information Center
Brazell, Bruce D.; Espinoza, Sue
2009-01-01
In this study, the instructional effectiveness of the planetarium in astronomy education was explored through a meta-analysis of 19 studies. This analysis resulted in a heterogeneous distribution of 24 effect sizes with a mean of +0.28, p less than 0.05. The variability in this distribution was not fully explained under a fixed effect model. As a…
Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping
2016-01-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity at a high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach could achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement. PMID:26977365
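The geometric correction at the heart of the approach, dividing the measured Doppler velocity by the cosine of the angle between the skeleton-derived vessel tangent and the OCT beam axis, can be sketched as follows (all numbers hypothetical):

```python
import numpy as np

def absolute_velocity(v_doppler, tangent, beam_axis=(0.0, 0.0, 1.0)):
    """Correct a Doppler velocity by the angle between the vessel tangent
    (from the skeletonized centerline) and the OCT beam axis.

    DOCT measures only the velocity component along the beam, so
    v_abs = v_doppler / cos(theta). Near theta = 90 degrees the correction
    blows up, which is why accurate angle estimation drives the accuracy.
    """
    t = np.asarray(tangent, dtype=float)
    b = np.asarray(beam_axis, dtype=float)
    cos_theta = abs(np.dot(t, b)) / (np.linalg.norm(t) * np.linalg.norm(b))
    return v_doppler / cos_theta

# Vessel segment running 60 degrees off the beam axis (hypothetical):
tangent = (np.sin(np.radians(60)), 0.0, np.cos(np.radians(60)))
v_abs = absolute_velocity(2.0, tangent)   # measured axial velocity, e.g. mm/s
```

In the full pipeline the tangent at each centerline point comes from the segmented and skeletonized vessel volume, so the correction can be applied continuously along the entire MCA.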
Liu, Yun-feng; Wang, Russell; Baur, Dale A.; Jiang, Xian-feng
2018-01-01
Objective: To investigate the stress distribution to the mandible, with and without impacted third molars (IM3s) at various orientations, resulting from a 2000-Newton impact force either from the anterior midline or from the body of the mandible. Materials and methods: A 3D mandibular virtual model from a healthy dentate patient was created, and the mechanical properties of the mandible were categorized into 9 levels based on the Hounsfield units measured from computed tomography (CT) images. Von Mises stress distributions to the mandibular angle and condylar areas from static impact forces (Load I: front blow; Load II: left blow) were evaluated using finite element analysis (FEA). Six groups with IM3 were included: full horizontal bony, full vertical bony, full 45° mesioangular bony, partial horizontal bony, partial vertical, and partial 45° mesioangular bony impaction, plus a baseline group with no third molars. Results: Von Mises stresses in the condyle and angle areas were higher for partially than for fully impacted third molars under both loading conditions, with partial horizontal IM3 showing the highest fracture risk. Stresses were higher on the contralateral than on the ipsilateral side. Under Load II, the angle area had the highest stress for various orientations of IM3s. The condylar region had the highest stress when IM3s were absent. Conclusions: High-impact forces are more likely to cause condylar rather than angular fracture when IM3s are missing. The risk of mandibular fracture is higher for partially than fully impacted third molars, with the angulation of impaction having little effect on fracture risk. PMID:29308606
Hydrological Modelling of The Guadiana Basin
NASA Astrophysics Data System (ADS)
Conan, C.; Bouraoui, F.; de Marsily, G.; Bidoglio, G.
Increased anthropogenic activities such as agriculture, irrigation, industry, mining, urban water supply and sewage treatment have created significant environmental problems. To ensure sustainable development of water resources, water managers need new strategies and suitable tools. In particular, it is often compulsory that surface water and groundwater be managed simultaneously, both in terms of quantity and quality, at catchment scales. To this purpose, a model coupling SWAT (Soil and Water Assessment Tool) and MODFLOW (Modular 3-D Flow model) was developed. SWAT is a quasi-distributed watershed model with a GIS interface that outlines the sub-basins and stream networks from a Digital Elevation Model (DEM) and calculates daily water balances from meteorological data, soil and land-use characteristics. The particular advantage of this model, compared to other fully distributed physically based models, is that it requires a small amount of readily available input data. MODFLOW is a fully distributed model that calculates groundwater flow from aquifer characteristics. We have adapted this new coupled model SWAT-MODFLOW to a Mediterranean catchment, the Guadiana basin, and present the first results of this work. Only water quantity results are available at this stage. The validation consisted of comparing measured and predicted daily flow at the catchment and sub-catchment outlets for the period 1970-1995. The model accurately reproduced the decrease of the piezometric level, due to increased water abstraction, and the exchanges between surface water and groundwater. The sensitivity of the model to irrigation practices was evaluated. The usefulness of this model as a management tool has been illustrated through the analysis of alternative scenarios of agricultural practices and climate change.
NASA Astrophysics Data System (ADS)
Jacobs, Colin; Ma, Kevin; Moin, Paymann; Liu, Brent
2010-03-01
Multiple Sclerosis (MS) is a common neurological disease affecting the central nervous system, characterized by pathologic changes including demyelination and axonal injury. MR imaging has become the most important tool to evaluate the disease progression of MS, which is characterized by the occurrence of white matter lesions. Currently, radiologists evaluate and assess multiple sclerosis lesions manually by estimating the lesion volume and number of lesions. This process is extremely time-consuming and sensitive to intra- and inter-observer variability. Therefore, there is a need for automatic segmentation of the MS lesions followed by lesion quantification. We have developed a fully automatic segmentation algorithm to identify the MS lesions. The segmentation algorithm is accelerated by parallel computing on Graphics Processing Units (GPUs) for practical deployment in a clinical environment. Subsequently, quantification of the lesions is performed. The quantification results, which include lesion volume and number of lesions, are stored in a structured report together with the lesion locations in the brain to establish a standardized representation of the patient's disease progression. The structured report, developed in collaboration with radiologists, aims to facilitate outcome analysis and treatment assessment of the disease and will be standardized based on DICOM-SR. The results can be distributed to other DICOM-compliant clinical systems that support DICOM-SR, such as PACS. The implementation of fully automatic segmentation and quantification, together with a method for storing, distributing, and visualizing key imaging and informatics data in DICOM-SR, improves the clinical workflow of radiologists, improves visualization of the lesion segmentations, and provides 3-D insight into the distribution of lesions in the brain.
Competition or cooperation in transboundary fish stocks management: Insight from a dynamical model.
Nguyen, Trong Hieu; Brochier, Timothée; Auger, Pierre; Trinh, Viet Duoc; Brehmer, Patrice
2018-06-14
An idealized system of a shared fish stock associated with different exclusive economic zones (EEZ) is modelled. Parameters were estimated for the case of the small pelagic fisheries shared between Southern Morocco, Mauritania and the Senegambia. Two models of fishing effort distribution were explored. The first considers independent national fisheries in each EEZ, with a cost per unit of fishing effort that depends on local fishery policy. The second considers the case of a fully cooperative fishery performed by an international fleet freely moving across the borders. Both models are based on a set of six ordinary differential equations describing the time evolution of the fish biomass and the fishing effort. We take advantage of the two time scales to obtain a reduced model governing the total fish biomass of the system and the fishing efforts in each zone. At the fast equilibrium, the fish distribution follows the ideal free distribution according to the carrying capacity in each area. Different equilibria can be reached according to management choices. When fishing fleets are independent and national fishery policies are not harmonized, in the general case, competition leads after a few decades to a scenario where only one fishery remains sustainable. In the case of a sub-regional agreement acting on the adjustment of the cost per unit of fishing effort in each EEZ, we found that a large number of equilibria exists. In this last case, the initial distribution of fishing effort strongly impacts the optimal equilibrium that can be reached. Lastly, the country with the highest carrying-capacity density may obtain lower landings when collaborating with other countries than if it minimises its fishing costs. The second, fully cooperative model shows that a single international fishing fleet moving freely in the fishing areas leads to a sustainable equilibrium.
Such findings should encourage regional fisheries organizations to explore new approaches to the management of fish stocks shared between neighbouring countries. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
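The competitive scenario described in this abstract can be illustrated with a minimal two-zone bioeconomic sketch: total biomass follows logistic growth minus harvest, fish are split between zones in proportion to carrying capacity (the ideal free distribution at the fast equilibrium), and each national fleet's effort rises or falls with its profit. Every parameter value below is invented for illustration and does not come from the paper.

```python
import numpy as np

# Toy reduced model: biomass B, fleet efforts E = (E1, E2).
r, K = 1.0, 100.0            # intrinsic growth rate, total carrying capacity
q = np.array([0.01, 0.01])   # catchability per fleet
p = np.array([1.0, 1.0])     # landing price per fleet
c = np.array([0.3, 0.5])     # cost per unit effort (zone 2 is costlier)
k = np.array([0.6, 0.4])     # carrying-capacity share of each zone

def step(B, E, dt=0.05):
    catch = q * E * (k * B)                    # harvest taken in each zone
    dB = r * B * (1.0 - B / K) - catch.sum()   # logistic growth minus harvest
    dE = (p * q * (k * B) - c) * E             # effort follows profit
    return B + dt * dB, np.maximum(E + dt * dE, 0.0)

B, E = 50.0, np.array([1.0, 1.0])
for _ in range(50000):                         # integrate toward equilibrium
    B, E = step(B, E)
```

With these invented costs the second fleet is never profitable (its break-even biomass exceeds the carrying capacity), so its effort decays to zero while the first fleet settles at a sustainable equilibrium: a cartoon of the "only one fishery remains sustainable" outcome.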
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This paper describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocation of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
Lee, Stephen; Aranyosi, A J; Wong, Michelle D; Hong, Ji Hyung; Lowe, Jared; Chan, Carol; Garlock, David; Shaw, Scott; Beattie, Patrick D; Kratochvil, Zachary; Kubasti, Nick; Seagers, Kirsten; Ghaffari, Roozbeh; Swanson, Christina D
2016-04-15
In developing countries, the deployment of medical diagnostic technologies remains a challenge because of infrastructural limitations (e.g. refrigeration, electricity), and paucity of health professionals, distribution centers and transportation systems. Here we demonstrate the technical development and clinical testing of a novel electronics enabled microfluidic paper-based analytical device (EE-μPAD) for quantitative measurement of micronutrient concentrations in decentralized, resource-limited settings. The system performs immunodetection using paper-based microfluidics, instrumented with flexible electronics and optoelectronic sensors in a mechanically robust, ultrathin format comparable in size to a credit card. Autonomous self-calibration, plasma separation, flow monitoring, timing and data storage enable multiple devices to be run simultaneously. Measurements are wirelessly transferred to a mobile phone application that geo-tags the data and transmits it to a remote server for real-time tracking of micronutrient deficiencies. Clinical tests of micronutrient levels from whole blood samples (n=95) show comparable sensitivity and specificity to ELISA-based tests. These results demonstrate instantaneous acquisition and global aggregation of diagnostics data using a fully integrated point-of-care system that will enable rapid and distributed surveillance of disease prevalence and geographical progression. Copyright © 2015 Elsevier B.V. All rights reserved.
Fast magnetic resonance fingerprinting for dynamic contrast-enhanced studies in mice.
Gu, Yuning; Wang, Charlie Y; Anderson, Christian E; Liu, Yuchi; Hu, He; Johansen, Mette L; Ma, Dan; Jiang, Yun; Ramos-Estebanez, Ciro; Brady-Kalnay, Susann; Griswold, Mark A; Flask, Chris A; Yu, Xin
2018-05-09
The goal of this study was to develop a fast MR fingerprinting (MRF) method for simultaneous T1 and T2 mapping in DCE-MRI studies in mice. MRF sequences based on balanced SSFP and fast imaging with steady-state precession were implemented and evaluated on a 7T preclinical scanner. The readout used a zeroth-moment-compensated variable-density spiral trajectory that fully sampled the entire k-space and the inner 10 × 10 k-space with 48 and 4 interleaves, respectively. In vitro and in vivo studies of mouse brain were performed to evaluate the accuracy of MRF measurements with both fully sampled and undersampled data. The application of MRF to dynamic T1 and T2 mapping in DCE-MRI studies was demonstrated in a mouse model of heterotopic glioblastoma using gadolinium-based and dysprosium-based contrast agents. The T1 and T2 measurements in phantom showed strong agreement between the MRF and the conventional methods. MRF with spiral encoding allowed up to 8-fold undersampling without loss of measurement accuracy. This enabled simultaneous T1 and T2 mapping with 2-minute temporal resolution in DCE-MRI studies. Magnetic resonance fingerprinting provides the opportunity for dynamic quantification of contrast agent distribution in preclinical tumor models on high-field MRI scanners. © 2018 International Society for Magnetic Resonance in Medicine.
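The core of any MRF measurement is dictionary matching: signal evolutions are precomputed over a (T1, T2) grid and each voxel's fingerprint is assigned the parameters of the atom with the largest normalized inner product. The sketch below uses a toy relaxation curve in place of a genuine bSSFP/FISP Bloch simulation, so it illustrates only the matching step, not the sequence used in the paper.

```python
import numpy as np

def toy_fingerprint(t1, t2, tr=0.01, n=200):
    """Illustrative signal evolution; a real MRF dictionary would come from a
    Bloch simulation of the acquisition with its flip-angle/TR train."""
    t = np.arange(n) * tr
    return (1.0 - np.exp(-t / t1)) * np.exp(-t / t2)

# Dictionary over a (T1, T2) grid, each atom normalized for matching.
t1_grid = np.linspace(0.5, 3.0, 26)        # seconds
t2_grid = np.linspace(0.02, 0.2, 19)       # seconds
pairs = [(t1, t2) for t1 in t1_grid for t2 in t2_grid]
dictionary = np.array([toy_fingerprint(t1, t2) for t1, t2 in pairs])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(signal):
    """(T1, T2) of the atom with the largest normalized inner product."""
    signal = signal / np.linalg.norm(signal)
    return pairs[int(np.argmax(dictionary @ signal))]

t1_hat, t2_hat = match(toy_fingerprint(1.5, 0.08))   # recovers the grid point
```

The same matching applies unchanged to undersampled acquisitions: aliasing artifacts behave like noise that the inner-product search averages down, which is what permits the 8-fold undersampling reported in the abstract.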
Libberton, Ben; Coates, Rosanna E.
2014-01-01
Nasal carriage of Staphylococcus aureus is a risk factor for infection, yet the bacterial determinants required for carriage are poorly defined. Interactions between S. aureus and other members of the bacterial flora may determine colonization and have been inferred in previous studies by using correlated species distributions. However, traits mediating species interactions are often polymorphic, suggesting that understanding how interactions structure communities requires a trait-based approach. We characterized S. aureus growth inhibition by the culturable aerobic bacterial consortia of 60 nasal microbiomes, revealing intraspecific variation in growth inhibition and showing that inhibitory isolates clustered within communities that were culture-negative for S. aureus. Across microbiomes, the cumulative community-level growth inhibition was negatively associated with S. aureus incidence. To fully understand the ecological processes structuring microbiomes, it will be crucial to account for intraspecific variation in the traits that mediate species interactions. PMID:24980973
Quantum key distribution session with 16-dimensional photonic states.
Etcheverry, S; Cañas, G; Gómez, E S; Nogueira, W A T; Saavedra, C; Xavier, G B; Lima, G
2013-01-01
The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations have been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD.
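The sifting logic of a D-dimensional BB84 session can be sketched classically: with two mutually unbiased bases, rounds where Alice and Bob chose different bases are discarded, and each kept symbol contributes log2(D) raw key bits. This toy model ignores eavesdropping, channel noise, and the spatial-light-modulator optics entirely.

```python
import numpy as np

# Toy D-dimensional BB84 sifting: measurement in the wrong basis yields a
# uniformly random outcome, measurement in the right basis reproduces the
# prepared "dit".
rng = np.random.default_rng(1)
D, n_rounds = 16, 2000

alice_symbols = rng.integers(0, D, n_rounds)   # prepared dits
alice_bases = rng.integers(0, 2, n_rounds)     # one of two mutually unbiased bases
bob_bases = rng.integers(0, 2, n_rounds)

same_basis = alice_bases == bob_bases
bob_results = np.where(same_basis, alice_symbols, rng.integers(0, D, n_rounds))

sifted_alice = alice_symbols[same_basis]       # kept rounds form the raw key
sifted_bob = bob_results[same_basis]
raw_key_bits = sifted_alice.size * int(np.log2(D))   # 4 bits per sifted symbol
```

About half the rounds survive sifting, and each surviving 16-dimensional symbol is worth four key bits, which is the dimensional advantage the abstract refers to.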
Robert, Donatien; Douillard, Thierry; Boulineau, Adrien; Brunetti, Guillaume; Nowakowski, Pawel; Venet, Denis; Bayle-Guillemaud, Pascale; Cayron, Cyril
2013-12-23
LiFePO4 and FePO4 phase distributions of entire cross-sectioned electrodes with various Li content are investigated from nanoscale to mesoscale, by transmission electron microscopy and by the new electron forward scattering diffraction technique. The distributions of the fully delithiated (FePO4) or lithiated particles (LiFePO4) are mapped on large fields of view (>100 × 100 μm²). Heterogeneities in thin and thick electrodes are highlighted at different scales. At the nanoscale, the statistical analysis of 64 000 particles unambiguously shows that the small particles delithiate first. At the mesoscale, the phase maps reveal a core-shell mechanism at the scale of the agglomerates with a preferential pathway along the electrode porosities. At larger scale, lithiation occurs in thick electrodes "stratum by stratum" from the surface in contact with electrolyte toward the current collector.
NASA Astrophysics Data System (ADS)
Ni, Yong; He, Linghui; Khachaturyan, Armen G.
2010-07-01
A phase field method is proposed to determine the equilibrium fields of a magnetoelectroelastic multiferroic with arbitrarily distributed constitutive constants under applied loadings. This method is based on a developed generalized Eshelby's equivalency principle, in which the elastic strain, electrostatic, and magnetostatic fields at the equilibrium in the original heterogeneous system are exactly the same as those in an equivalent homogeneous magnetoelectroelastic coupled or uncoupled system with properly chosen distributed effective eigenstrain, polarization, and magnetization fields. Finding these effective fields fully solves the equilibrium elasticity, electrostatics, and magnetostatics in the original heterogeneous multiferroic. The paper formulates a variational principle proving that the effective fields are minimizers of an appropriate closed-form energy functional. The proposed phase field approach produces the energy-minimizing effective fields (and thus solves the general multiferroic problem) as the result of an artificial relaxation process described by the Ginzburg-Landau-Khalatnikov kinetic equations.
Channel MAC Protocol for Opportunistic Communication in Ad Hoc Wireless Networks
NASA Astrophysics Data System (ADS)
Ashraf, Manzur; Jayasuriya, Aruna; Perreau, Sylvie
2008-12-01
Despite significant research effort, the performance of distributed medium access control methods has failed to meet theoretical expectations. This paper proposes a protocol named "Channel MAC" that performs fully distributed medium access control based on opportunistic communication principles. In this protocol, nodes access the channel when the channel quality increases beyond a threshold, while neighbouring nodes are deemed to be silent. Once a node starts transmitting, it will keep transmitting until the channel becomes "bad." We derive an analytical throughput limit for Channel MAC in a shared multiple access environment. Furthermore, three performance metrics of Channel MAC—throughput, fairness, and delay—are analysed in single hop and multihop scenarios using NS2 simulations. The simulation results show a throughput improvement of up to 130% with Channel MAC over IEEE 802.11. We also show that the severe resource starvation problem (unfairness) of IEEE 802.11 in some network scenarios is reduced by the Channel MAC mechanism.
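The access rule described in this abstract — transmit while your own channel stays above a quality threshold, defer while a neighbour holds the medium — can be mimicked in a toy slot-based simulation. Gains are drawn independently per slot here (a real fading channel is time-correlated), so this only illustrates the protocol logic, not its quantitative throughput.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_slots, threshold = 4, 50000, 1.5

# Rayleigh fading: exponentially distributed power gains, fresh each slot.
gains = rng.exponential(1.0, size=(n_slots, n_nodes))
holder = -1                    # index of the transmitting node, -1 when idle
busy_slots = 0
for t in range(n_slots):
    if holder >= 0 and gains[t, holder] < threshold:
        holder = -1            # channel turned "bad": release the medium
    if holder < 0:
        contenders = np.flatnonzero(gains[t] > threshold)
        if contenders.size:
            holder = int(rng.choice(contenders))   # a good-channel node wins
    if holder >= 0:
        busy_slots += 1
utilisation = busy_slots / n_slots   # fraction of slots carrying a transmission
```

Because every transmission happens on an above-threshold channel, the medium is used only when it is good, which is the opportunistic gain the protocol exploits relative to quality-blind contention.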
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-11-01
This paper investigates the fault-tolerant time-varying formation control problems for high-order linear multi-agent systems in the presence of actuator failures. Firstly, a fully distributed formation control protocol is presented to compensate for the influences of both bias faults and loss-of-effectiveness faults. Using adaptive online updating strategies, no global knowledge of the communication topology is required, and the bounds of the actuator failures can be unknown. Then an algorithm is proposed to determine the control parameters of the fault-tolerant formation protocol, where the feasibility conditions for the time-varying formation and an approach to expand the feasible formation set are given. Furthermore, the stability of the proposed algorithm is proven based on Lyapunov-like theory. Finally, two simulation examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta-distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
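The Markov special case named in this abstract — a binary signal with exponentially decaying autocorrelation — can be generated directly from a two-state chain, whose lag-k autocorrelation is (1 − p01 − p10)^k. This is far simpler than the paper's general spectrum-based algorithm, but it demonstrates the target property:

```python
import numpy as np

rng = np.random.default_rng(42)
p01, p10 = 0.1, 0.1          # transition probabilities 0->1 and 1->0
rho = 1.0 - p01 - p10        # lag-1 autocorrelation of the chain (= 0.8 here)
n = 200000

x = np.empty(n, dtype=np.int8)
x[0] = 1
for t in range(1, n):
    p_one = p01 if x[t - 1] == 0 else 1.0 - p10
    x[t] = rng.random() < p_one   # True/False stored as 1/0

def autocorr(series, lag):
    """Empirical lag-`lag` autocorrelation of a 1-D sequence."""
    d = series - series.mean()
    return np.dot(d[:-lag], d[lag:]) / np.dot(d, d)
```

The empirical lag-k autocorrelation of `x` tracks rho**k; processes with power-law (Hurst-Kolmogorov) memory are exactly what this simple chain cannot produce and what the paper's general algorithm is for.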
Quantum key distribution session with 16-dimensional photonic states
NASA Astrophysics Data System (ADS)
Etcheverry, S.; Cañas, G.; Gómez, E. S.; Nogueira, W. A. T.; Saavedra, C.; Xavier, G. B.; Lima, G.
2013-07-01
The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations have been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Marrewijk, N.; Mirzaei, B.; Hayton, D.
2015-10-07
In this study, we have performed frequency locking of a dual, forward-reverse emitting third-order distributed feedback quantum cascade laser (QCL) at 3.5 THz. By using both directions of THz emission in combination with two gas cells and two power detectors, we can, for the first time, perform frequency stabilization while independently monitoring the frequency-locking quality. We also characterize how the use of a less sensitive pyroelectric detector influences the quality of frequency locking, illustrating experimentally that the sensitivity of the detectors is crucial. Using both directions of terahertz (THz) radiation has a particular advantage for the application of a QCL as a local oscillator, where radiation from one side can be used for frequency/phase stabilization, leaving the other side to be fully utilized as a local oscillator to pump a mixer.
Tight coupling between coral reef morphology and mapped resilience in the Red Sea.
Rowlands, Gwilym; Purkis, Sam; Bruckner, Andrew
2016-04-30
Lack of knowledge on the conservation value of different reef types can stymie decision making and result in suboptimal management solutions. To address this information gap for coral reef resilience, we produce a map-based Remote Sensed Resilience Index (RSRI) from data describing the spatial distribution of stressors and the properties of reef habitats on the Farasan Banks, Saudi Arabia. We contrast the distribution of this index among fourteen reef types, categorized on a scale of maturity that includes juvenile (poorly aggraded), mature (partially aggraded), and senile (fully aggraded) reefs. Sites with high reef resilience can be found in most detached reef types; however, they are most common in mature reefs. We aim to stimulate debate on the coupling that exists between geomorphology and conservation biology, and consider how such information can be used to inform management decisions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Distributed FBG sensors apply in spacecraft health monitoring
NASA Astrophysics Data System (ADS)
Huang, Xiujun; Zhang, Cuicui; Shi, Dele; Shen, Jingshi
2017-10-01
At present, spacecraft face high risk because of their complicated structures, the harsh space environment, and the impossibility of maintenance on orbit. When something goes wrong with a spacecraft, monitoring its health state and supplying health data in real time make it possible to locate the fault quickly and leave more time for rescue. FBG sensors can measure several parameters, such as temperature, strain and vibration, in a distributed manner, and can easily be built into sensor networks. They also offer radiation resistance, immunity to electromagnetic interference, rodent resistance and long lifetimes, which makes them well suited to space applications. In this paper, a spacecraft health monitoring system based on FBG sensors is presented. First, spacecraft health monitoring systems and their development are introduced. Then a four-channel FBG demodulator is designed. Finally, temperature and strain detection experiments are carried out. The results show that the demodulator fully satisfies the needs of the spacecraft health monitoring system.
Quantum key distribution session with 16-dimensional photonic states
Etcheverry, S.; Cañas, G.; Gómez, E. S.; Nogueira, W. A. T.; Saavedra, C.; Xavier, G. B.; Lima, G.
2013-01-01
The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations have been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD. PMID:23897033
Methods to examine reproductive biology in free-ranging, fully-marine mammals.
Lanyon, Janet M; Burgess, Elizabeth A
2014-01-01
Historical overexploitation of marine mammals, combined with present-day pressures, has resulted in severely depleted populations, with many species listed as threatened or endangered. Understanding breeding patterns of threatened marine mammals is crucial to assessing population viability, potential recovery and conservation actions. However, determining reproductive parameters of wild fully-marine mammals (cetaceans and sirenians) is challenging due to their wide distributions, high mobility, inaccessible habitats, cryptic lifestyles and in many cases, large body size and intractability. Consequently, reproductive biologists employ an innovative suite of methods to collect useful information from these species. This chapter reviews historic, recent and state-of-the-art methods to examine diverse aspects of reproduction in fully-aquatic mammals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higa, Kenneth; Zhao, Hui; Parkinson, Dilworth Y.
The internal structure of a porous electrode strongly influences battery performance. Understanding the dynamics of electrode slurry drying could aid in engineering electrodes with desired properties. For instance, one might monitor the dynamic, spatially-varying thickness near the edge of a slurry coating, as it should lead to non-uniform thickness of the dried film. This work examines the dynamic behavior of drying slurry drops consisting of SiOx and carbon black particles in a solution of carboxymethylcellulose and deionized water, as an experimental model of drying behavior near the edge of a slurry coating. An X-ray radiography-based procedure is developed to calculate the evolving spatial distribution of active material particles from images of the drying slurry drops. To the authors' knowledge, this study is the first to use radiography to investigate battery slurry drying, as well as the first to determine particle distributions from radiography images of drying suspensions. The dynamic results are consistent with tomography reconstructions of the static, fully-dried films. It is found that active material particles can rapidly become non-uniformly distributed within the drops. Heating can promote distribution uniformity, but seemingly must be applied very soon after slurry deposition. Higher slurry viscosity is found to strongly restrain particle redistribution.
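The radiography-to-particle-distribution step rests on the Beer-Lambert law: the log transmission ratio along each beam path is proportional to the areal density of attenuating material. A minimal round-trip sketch, with the attenuation coefficient and intensities invented for illustration:

```python
import numpy as np

# Beer-Lambert: radiograph intensity I = I0 * exp(-mu * a), where a is the
# areal density of attenuating particles along each beam path, so a is
# recovered from the log transmission ratio against a flat-field image.
mu = 2.0                                   # cm^2/g, illustrative
rng = np.random.default_rng(0)

areal_density = rng.uniform(0.05, 0.4, size=(64, 64))  # g/cm^2 "ground truth"
i0 = 1000.0                                            # flat-field intensity
radiograph = i0 * np.exp(-mu * areal_density)          # simulated drop image

recovered = -np.log(radiograph / i0) / mu              # invert Beer-Lambert
```

Applied frame by frame to a drying drop, the same inversion yields the evolving spatial distribution of active material that the study tracks.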
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-04-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
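The weighting itself is straightforward to apply to Monte Carlo events: each event enters the asymmetry with weight J0(b_T p_T), giving the Fourier-Bessel transform of the transverse-momentum distribution at scale b_T. The sketch below uses a toy Gaussian event sample with random spin signs, not the generalized-parton-model generator of the paper; J0 is evaluated from its integral representation so nothing beyond NumPy is needed.

```python
import numpy as np

def bessel_j0(x):
    """J0(x) = (1/pi) * integral_0^pi cos(x sin(theta)) dtheta, midpoint rule."""
    theta = (np.arange(400) + 0.5) * np.pi / 400
    return np.cos(np.sin(theta) * np.asarray(x)[..., None]).mean(axis=-1)

rng = np.random.default_rng(3)
n = 20000
# Toy event sample: 2D Gaussian transverse momentum, random spin signs.
pt = np.hypot(rng.normal(0.0, 0.4, n), rng.normal(0.0, 0.4, n))  # GeV
spin = rng.choice([-1.0, 1.0], n)

def bessel_weighted_asymmetry(bt):
    w = bessel_j0(bt * pt)        # J0(bT * pT) weight per event
    return w @ spin / w.sum()

# For this unpolarized toy sample the weighted asymmetry vanishes at any bT,
# while the mean weight equals the Fourier-Bessel transform of the Gaussian
# pT distribution, exp(-bT**2 * sigma**2 / 2).
a = bessel_weighted_asymmetry(1.0)
```

Scanning `bt` maps out the asymmetry in Fourier space directly, which is the model-independent deconvolution property the abstract highlights.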
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-10-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
A ferrofluid-based neural network: design of an analogue associative memory
NASA Astrophysics Data System (ADS)
Palm, R.; Korenivski, V.
2009-02-01
We analyse an associative memory based on a ferrofluid, consisting of a system of magnetic nanoparticles suspended in a carrier fluid of variable viscosity and subject to patterns of magnetic fields from an array of input and output magnetic pads. The association relies on forming patterns in the ferrofluid during a training phase, in which the magnetic dipoles are free to move and rotate to minimize the total energy of the system. Once equilibrated in energy for a given input-output magnetic field pattern pair, the particles are fully or partially immobilized by cooling the carrier liquid. The particle distributions thus produced control the memory states, which are read out magnetically using spin-valve sensors incorporated into the output pads. The actual memory consists of spin distributions that are dynamic in nature, realized only in response to the input patterns that the system has been trained for. Two training algorithms for storing multiple patterns are investigated. Using Monte Carlo simulations of the physical system, we demonstrate that the device is capable of storing and recalling two sets of images, each with an accuracy approaching 100%.
The effectiveness of surrogate taxa to conserve freshwater biodiversity.
Stewart, David R; Underwood, Zachary E; Rahel, Frank J; Walters, Annika W
2018-02-01
Establishing protected areas has long been an effective conservation strategy and is often based on readily surveyed species. The potential of any freshwater taxa to be a surrogate for other aquatic groups has not been explored fully. We compiled occurrence data on 72 species of freshwater fishes, amphibians, mussels, and aquatic reptiles for the Great Plains, Wyoming (U.S.A.). We used hierarchical Bayesian multispecies mixture models and MaxEnt models to describe species' distributions and the program Zonation to identify areas of conservation priority for each aquatic group. The landscape-scale factors that best characterized aquatic species' distributions differed among groups. There was low agreement and congruence among taxa-specific conservation priorities (<20%), meaning no surrogate priority areas would include or protect the best habitats of other aquatic taxa. Common, wide-ranging aquatic species were included in taxa-specific priority areas, but rare freshwater species were not included. Thus, the development of conservation priorities based on a single freshwater aquatic group would not protect all species in the other aquatic groups. © 2017 Society for Conservation Biology.
A new method for calculating differential distributions directly in Mellin space
NASA Astrophysics Data System (ADS)
Mitov, Alexander
2006-12-01
We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.
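For orientation, the Mellin moments in question are related to the usual momentum-fraction (z-) space distribution by the standard transform (a textbook definition, stated here for context, not taken from this abstract):

```latex
F(N) \;=\; \int_0^1 \mathrm{d}z\, z^{N-1} f(z)
```

Keeping the Mellin variable N symbolic while solving the IBP identities yields F(N) directly, without first computing f(z) and transforming.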
Reconstruction of initial pressure from limited view photoacoustic images using deep learning
NASA Astrophysics Data System (ADS)
Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena
2018-02-01
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2005-01-01
Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks, assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; that such large-scale networks must be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate based on local information, making local decisions that are aggregated across the network to achieve globally meaningful effects.
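As an illustration of how purely local decisions can aggregate into a global result, here is a minimal sketch of pairwise gossip averaging, a standard distributed-consensus primitive. This is not the authors' algorithm; all names and parameters are hypothetical.

```python
import random

def gossip_average(values, edges, rounds=500, seed=0):
    """Pairwise gossip: at each step one random link fires and its two
    endpoints replace their values with their average. Purely local
    exchanges, yet on a connected network every node converges to the
    global mean of the initial readings."""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (vals[i] + vals[j]) / 2.0  # only nodes i and j communicate
        vals[i] = vals[j] = avg
    return vals

# Six motes on a ring, each knowing only its own reading and its neighbors
edges = [(k, (k + 1) % 6) for k in range(6)]
final = gossip_average([0.0, 2.0, 4.0, 6.0, 8.0, 10.0], edges)
```

Each exchange preserves the network-wide sum, so the common limit is the global mean even though no node ever sees the whole network.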
Node-controlled allocation of mineral elements in Poaceae.
Yamaji, Naoki; Ma, Jian Feng
2017-10-01
Mineral elements taken up by the roots will be delivered to different organs and tissues depending on their requirements. In Poaceae, this selective distribution is mainly mediated in the nodes, which have highly developed and fully organized vascular systems. Inter-vascular transfer of mineral elements from enlarged vascular bundles to diffuse vascular bundles is required for their preferential distribution to developing tissues and reproductive organs. A number of transporters involved in this inter-vascular transfer process have been identified, mainly in rice. They are localized at different cell layers and form an efficient machinery within the node. Furthermore, some of these transporters show rapid responses to environmental changes of mineral elements at the protein level. In addition to the node-based transporters, distinct nodal structures including an enlarged xylem area, the folded plasma membrane of xylem transfer cells, and the presence of an apoplastic barrier are also required for efficient inter-vascular transfer. Manipulation of node-based transporters will provide a novel breeding target to improve nutrient use efficiency, productivity, nutritional value and safety in cereal crops. Copyright © 2017 Elsevier Ltd. All rights reserved.
Microdevelopment during an activity-based science lesson
NASA Astrophysics Data System (ADS)
Parziale, Jim
1997-11-01
The purpose of this study was to describe the microdevelopment of task-related skills during a classroom science activity. Pairs of fifth and pairs of seventh grade students were videotaped as they constructed marshmallow and toothpick bridges. A skill-theory-based system of analysis was developed and used to detect the construction of new understandings. Patterns of change observed in these understandings were used to infer three means of self-construction: shifts of focus, bridging mechanisms and distributed cognition. Shift of focus is a mechanism used by students to efficiently explore a web of possibilities, collect ideas and make observations for later coordination as new understandings. Bridging mechanisms are partially built conversational structures that scaffold the construction of higher level thinking structures. Students used the distributed cognition mechanism to test the adaptiveness of their design ideas without the need to fully coordinate an understanding of these designs. An integrated model of these three mechanisms is proposed specific to this task. This model describes how these mechanisms spontaneously emerged and interacted to support the construction of mental representations.
Malhotra, Sony; Sowdhamini, Ramanathan
2013-08-01
The interaction of proteins with their respective DNA targets is known to control many high-fidelity cellular processes. Performing a comprehensive survey of the sequenced genomes for DNA-binding proteins (DBPs) will help in understanding their distribution and the associated functions in a particular genome. Availability of the fully sequenced genome of Arabidopsis thaliana enables the review of the distribution of DBPs in this model plant genome. We used profiles of both structure- and sequence-based DNA-binding families, derived from the PDB and PFam databases, to perform the survey. This resulted in 4471 proteins identified as DNA-binding in the Arabidopsis genome, which are distributed across 300 different PFam families. Apart from several plant-specific DNA-binding families, certain RING fingers and leucine zippers also had high representation. Our search protocol helped to assign DNA-binding property to several proteins that were previously marked as unknown, putative or hypothetical in function. The distribution of Arabidopsis genes having a role in plant DNA repair was particularly studied and noted for functional mapping. The functions observed to be overrepresented in the plant genome include DNA-3-methyladenine glycosylase activity, alkylbase DNA N-glycosylase activity and DNA-(apurinic or apyrimidinic site) lyase activity, suggesting their role in specialized functions such as gene regulation and DNA repair.
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation is specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
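The full-likelihood treatment of right-censoring described above can be sketched numerically (flexsurv itself is an R package; this is an illustrative numpy-only analogue with hypothetical names, not the package's implementation). Observed events contribute log f(t) and censored observations log S(t); for the Weibull, the scale has a closed form given the shape, so a one-dimensional profile search suffices.

```python
import numpy as np

def fit_weibull_censored(times, event, shapes=np.linspace(0.5, 4.0, 701)):
    """Maximize the full Weibull log-likelihood with right-censoring.
    For a fixed shape k the scale MLE is closed-form,
    lambda^k = sum(t_i^k) / d, with d the number of observed events."""
    t = np.asarray(times, dtype=float)
    ev = np.asarray(event, dtype=bool)
    d = int(ev.sum())
    sum_log_events = np.log(t[ev]).sum()
    best = (-np.inf, None, None)
    for k in shapes:
        lam = (np.power(t, k).sum() / d) ** (1.0 / k)
        # profile log-likelihood; the survival terms reduce to the constant -d
        loglik = d * np.log(k) - d * k * np.log(lam) + (k - 1) * sum_log_events - d
        if loglik > best[0]:
            best = (loglik, k, lam)
    return best[1], best[2]

# Synthetic right-censored data (illustrative, not from the paper)
rng = np.random.default_rng(7)
n = 500
t = 2.0 * (-np.log(rng.uniform(size=n))) ** (1 / 1.5)  # Weibull(shape=1.5, scale=2)
c = rng.uniform(0, 4, size=n)                          # independent censoring times
obs, ev = np.minimum(t, c), t <= c
shape_hat, scale_hat = fit_weibull_censored(obs, ev)
```

With 500 observations the profile MLE lands close to the generating parameters (shape 1.5, scale 2).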
Dams and Intergovernmental Transfers
NASA Astrophysics Data System (ADS)
Bao, X.
2012-12-01
Gainers and losers are always associated with large-scale hydrological infrastructure construction, such as dams, canals and water treatment facilities. Since most of these projects are public services and public goods, some of these uneven impacts cannot be fully resolved by markets. This paper explores whether governments make any effort to balance the uneven distributional impacts caused by dam construction. It shows that dam construction brought an average 2% decrease in per capita tax revenue in upstream counties, a 30% increase in dam-location counties and an insignificant increase in downstream counties. Similar distributional impacts were observed for other outcome variables, like rural income and agricultural crop yields, though the impacts differ across crops. The paper also found some balancing effort in intergovernmental transfers to reduce the unevenly distributed impacts of dam construction. However, overall the intergovernmental fiscal transfers were not large enough to fully correct those uneven distributions, as reflected in a 2% decrease of per capita GDP in upstream counties and increases of per capita GDP in dam-location and downstream counties. This paper may shed some light on governmental considerations in the decision-making process for large hydrological infrastructure.
Åhlfeldt, Rose-Mharie; Persson, Anne; Rexhepi, Hanife; Wåhlander, Kalle
2016-12-01
This article presents and illustrates the main features of a proposed process-oriented approach for patient information distribution in future health care information systems, by using a prototype of a process support system. The development of the prototype was based on the Visuera method, which includes five defined steps. The results indicate that a visualized prototype is a suitable tool for illustrating both the opportunities and constraints of future ideas and solutions in e-Health. The main challenges for developing and implementing a fully functional process support system concern both technical and organizational/management aspects. © The Author(s) 2015.
Asymptotic/numerical analysis of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Myers, M. K.; Wydeven, R.
1989-01-01
An asymptotic analysis based on the Mach surface structure of the field of a supersonic helical source distribution is applied to predict thickness and loading noise radiated by high speed propeller blades. The theory utilizes an integral representation of the Ffowcs Williams-Hawkings equation in a fully linearized form. The asymptotic results are used for chordwise strips of the blade, while required spanwise integrations are performed numerically. The form of the analysis enables predicted waveforms to be interpreted in terms of Mach surface propagation. A computer code developed to implement the theory is described and found to yield results in close agreement with more exact computations.
Applying machine learning to pattern analysis for automated in-design layout optimization
NASA Astrophysics Data System (ADS)
Cain, Jason P.; Fakhry, Moutaz; Pathak, Piyush; Sweis, Jason; Gennari, Frank; Lai, Ya-Chieh
2018-04-01
Building on previous work for cataloging unique topological patterns in an integrated circuit physical design, a new process is defined in which a risk scoring methodology is used to rank patterns based on manufacturing risk. Patterns with high risk are then mapped to functionally equivalent patterns with lower risk. The higher risk patterns are then replaced in the design with their lower risk equivalents. The pattern selection and replacement is fully automated and suitable for use for full-chip designs. Results from 14nm product designs show that the approach can identify and replace risk patterns with quantifiable positive impact on the risk score distribution after replacement.
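The score-and-replace loop the abstract describes can be sketched as follows. All pattern IDs, scores, and data structures here are hypothetical; the actual risk-scoring model and pattern matching are not specified in the abstract.

```python
def replace_risky_patterns(design, risk_score, equivalents, threshold):
    """Rank every pattern instance by its manufacturing-risk score and
    swap each high-risk pattern for its lowest-risk functional
    equivalent, when one with a strictly lower score exists."""
    out = []
    for pattern in design:
        if risk_score[pattern] > threshold and equivalents.get(pattern):
            best = min(equivalents[pattern], key=risk_score.__getitem__)
            if risk_score[best] < risk_score[pattern]:
                pattern = best
        out.append(pattern)
    return out

# Hypothetical pattern IDs and scores
scores = {"P1": 0.9, "P2": 0.2, "P1a": 0.3, "P1b": 0.5}
fixed = replace_risky_patterns(["P1", "P2", "P1"], scores,
                               {"P1": ["P1a", "P1b"]}, threshold=0.6)
```

The effect on the design is a shift of the risk-score distribution toward lower values, which is how the abstract quantifies the improvement.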
Real-time segmentation in 4D ultrasound with continuous max-flow
NASA Astrophysics Data System (ADS)
Rajchl, M.; Yuan, J.; Peters, T. M.
2012-02-01
We present a novel continuous Max-Flow based method to segment the inner left ventricular wall from 3D trans-esophageal echocardiography image sequences, which minimizes an energy functional encoding two Fisher-Tippett distributions and a geometrical constraint in the form of a Euclidean distance map in a numerically efficient and accurate way. After initialization the method is fully automatic and is able to perform at up to 10 Hz, making it available for image-guided interventions. Results are shown on 4D TEE data sets from 18 patients with pathological cardiac conditions and the speed of the algorithm is assessed under a variety of conditions.
Parallel performance investigations of an unstructured mesh Navier-Stokes solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
2000-01-01
A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.
Reflective ghost imaging through turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, Nicholas D.; Shapiro, Jeffrey H.
2011-12-15
Recent work has indicated that ghost imaging may have applications in standoff sensing. However, most theoretical work has addressed transmission-based ghost imaging. To be a viable remote-sensing system, the ghost imager needs to image rough-surfaced targets in reflection through long, turbulent optical paths. We develop, within a Gaussian-state framework, expressions for the spatial resolution, image contrast, and signal-to-noise ratio of such a system. We consider rough-surfaced targets that create fully developed speckle in their returns and Kolmogorov-spectrum turbulence that is uniformly distributed along all propagation paths. We address both classical and nonclassical optical sources, as well as a computational ghost imager.
Numerical simulation of MPD thruster flows with anomalous transport
NASA Technical Reports Server (NTRS)
Caldo, Giuliano; Choueiri, Edgar Y.; Kelly, Arnold J.; Jahn, Robert G.
1992-01-01
Anomalous transport effects in an Ar self-field coaxial MPD thruster are presently studied by means of a fully 2D two-fluid numerical code; its calculations are extended to a range of typical operating conditions. An effort is made to compare the spatial distribution of the steady state flow and field properties and thruster power-dissipation values for simulation runs with and without anomalous transport. A conductivity law based on the nonlinear saturation of lower hybrid current-driven instability is used for the calculations. Anomalous-transport simulation runs have indicated that the resistivity in specific areas of the discharge is significantly higher than that calculated in classical runs.
Study and Application of Remote Data Moving Transmission under the Network Convergence
NASA Astrophysics Data System (ADS)
Zhiguo, Meng; Du, Zhou
Data transmission is an important problem in remote applications. Advances in network convergence help in selecting and using a data transmission model. An embedded system and a data management platform are key to the design. A communication module, interface technology and a transceiver with independent intellectual property rights connect the broadband network and the mobile network seamlessly. Using the distribution system of mobile base stations to realize wireless transmission, and public networks to implement data transmission, the distant information system breaks through area restrictions and realizes transmission of moving data. This approach has been fully recognized in long-distance medical care applications.
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
Chen, Pan; Terenzi, Camilla; Furó, István; Berglund, Lars A; Wohlert, Jakob
2018-05-15
Macromolecular dynamics in biological systems, which play a crucial role for biomolecular function and activity at ambient temperature, depend strongly on moisture content. Yet, a generally accepted quantitative model of hydration-dependent phenomena based on local relaxation and diffusive dynamics of both polymer and its adsorbed water is still missing. In this work, atomistic-scale spatial distributions of motional modes are calculated using molecular dynamics simulations of hydrated xyloglucan (XG). These are shown to reproduce experimental hydration-dependent 13C NMR longitudinal relaxation times (T1) at room temperature, and relevant features of their broad distributions, which are indicative of locally heterogeneous polymer reorientational dynamics. At low hydration, the self-diffusion behavior of water shows that water molecules are confined to particular locations in the randomly aggregated XG network while the average polymer segmental mobility remains low. Upon increasing water content, the hydration network becomes mobile and fully accessible for individual water molecules, and the motion of hydrated XG segments becomes faster. Yet, the polymer network retains a heterogeneous gel-like structure even at the highest level of hydration. We show that the observed distribution of relaxation times arises from the spatial heterogeneity of chain mobility, which in turn is a result of the heterogeneous distribution of water-chain and chain-chain interactions. Our findings contribute to the picture of hydration-dependent dynamics in other macromolecules such as proteins, DNA, and synthetic polymers, and hold important implications for the mechanical properties of polysaccharide matrices in plants and plant-based materials.
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
On service differentiation in mobile Ad Hoc networks.
Zhang, Shun-liang; Ye, Cheng-qing
2004-09-01
A network model is proposed to support service differentiation for mobile Ad Hoc networks by combining a fully distributed admission control approach and the DIFS-based differentiation mechanism of IEEE 802.11. It can provide different kinds of QoS (Quality of Service) for various applications. Admission controllers determine a committed bandwidth based on the reserved bandwidth of flows and the resource utilization of the network. Packets are marked when entering the network by markers according to the committed rate. Based on the mark in the packet header, intermediate nodes handle the received packets in different manners to provide applications with the QoS corresponding to the pre-negotiated profile. Extensive simulation experiments showed that the proposed mechanism can provide QoS guarantees to assured service traffic and increase the channel utilization of the network.
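The committed-rate marking step at the network edge can be sketched as a token-bucket marker. This is an assumption about the mechanism for illustration; the abstract does not give the marker's details, and all names are hypothetical.

```python
class CommittedRateMarker:
    """Token-bucket sketch of an edge marker: packets within the
    committed rate are stamped "IN" (assured service); excess traffic
    is stamped "OUT" and handled best-effort by intermediate nodes."""

    def __init__(self, committed_rate, bucket_size):
        self.rate = committed_rate   # tokens (bytes) replenished per second
        self.cap = bucket_size       # burst allowance in bytes
        self.tokens = bucket_size
        self.last = 0.0

    def mark(self, now, pkt_bytes):
        # replenish tokens for the elapsed time, capped at the bucket size
        self.tokens = min(self.cap, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "IN"
        return "OUT"

marker = CommittedRateMarker(committed_rate=1000.0, bucket_size=1500.0)
decisions = [marker.mark(0.0, 1000), marker.mark(0.0, 1000), marker.mark(2.0, 1000)]
```

A burst that exhausts the bucket gets marked "OUT" until the committed rate replenishes the tokens, which is how downstream nodes can differentiate assured from best-effort traffic.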
A coupled/uncoupled deformation and fatigue damage algorithm utilizing the finite element method
NASA Technical Reports Server (NTRS)
Wilt, Thomas E.; Arnold, Steven M.
1994-01-01
A fatigue damage computational algorithm utilizing a multiaxial, isothermal, continuum based fatigue damage model for unidirectional metal matrix composites has been implemented into the commercial finite element code MARC using MARC user subroutines. Damage is introduced into the finite element solution through the concept of effective stress which fully couples the fatigue damage calculations with the finite element deformation solution. An axisymmetric stress analysis was performed on a circumferentially reinforced ring, wherein both the matrix cladding and the composite core were assumed to behave elastic-perfectly plastic. The composite core behavior was represented using Hill's anisotropic continuum based plasticity model, and similarly, the matrix cladding was represented by an isotropic plasticity model. Results are presented in the form of S-N curves and damage distribution plots.
A comparative study of the influence of buoyancy driven fluid flow on GaAs crystal growth
NASA Technical Reports Server (NTRS)
Kafalas, J. A.; Bellows, A. H.
1988-01-01
A systematic investigation of the effect of gravity driven fluid flow on GaAs crystal growth was performed. It includes GaAs crystal growth in the microgravity environment aboard the Space Shuttle. The program involves a controlled comparative study of crystal growth under a variety of earth based conditions with variable orientation and applied magnetic field in addition to the microgravity growth. Earth based growth will be performed under stabilizing as well as destabilizing temperature gradients. The boules grown in space and on earth will be fully characterized to correlate the degree of convection with the distribution of impurities. Both macro- and micro-segregation will be determined. The space growth experiment will be flown in a self-contained payload container through NASA's Get Away Special program.
NASA Technical Reports Server (NTRS)
Mcsween, H. Y., Jr.; Harvey, R. P.
1993-01-01
Constraints on the volatile inventory and outgassing history of Mars are critical to understanding the origin of ancient valley systems and paleoclimates. Planetary accretion models for Mars allow either a volatile-rich or volatile-poor mantle, depending on whether the accreted materials were fully oxidized or whether accretion was homogeneous so that water was lost through reaction with metallic iron. The amount of water that has been outgassed from the interior is likewise a contentious subject, and estimates of globally distributed water based on various geochemical and geological measurements vary from a few meters to more than a thousand meters. New data on SNC meteorites, which are thought to be Martian igneous rocks, provide constraints on both mantle and outgassed water.
NASA Astrophysics Data System (ADS)
Pianezze, J.; Barthe, C.; Bielli, S.; Tulet, P.; Jullien, S.; Cambon, G.; Bousquet, O.; Claeys, M.; Cordier, E.
2018-03-01
Ocean-Waves-Atmosphere (OWA) exchanges are not well represented in current Numerical Weather Prediction (NWP) systems, which can lead to large uncertainties in tropical cyclone track and intensity forecasts. In order to explore and better understand the impact of OWA interactions on tropical cyclone modeling, a fully coupled OWA system based on the atmospheric model Meso-NH, the oceanic model CROCO, and the wave model WW3, called MSWC, was designed and applied to the case of tropical cyclone Bejisa (2013-2014). The fully coupled OWA simulation shows good agreement with the literature and available observations. In particular, simulated significant wave height is within 30 cm of measurements made with buoys and altimeters. Short-term (< 2 days) sensitivity experiments used to highlight the effect of ocean-wave coupling show limited impact on the track, the intensity evolution, and the turbulent surface fluxes of the tropical cyclone. However, it is also shown that using a fully coupled OWA system is essential to obtain consistent sea salt emissions. Spatial and temporal coherence of the sea state with the 10 m wind speed are necessary to produce sea salt aerosol emissions in the right place (in the eyewall of the tropical cyclone) and with the right size distribution, which is critical for cloud microphysics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmagarmid, A.K.
The availability of distributed databases is directly affected by the timely detection and resolution of deadlocks. Consequently, mechanisms are needed to make deadlock detection algorithms resilient to failures. Presented first is a centralized algorithm that allows transactions to have multiple requests outstanding. Next, a new distributed deadlock detection algorithm (DDDA) is presented, using a global detector (GD) to detect global deadlocks and local detectors (LDs) to detect local deadlocks. This algorithm essentially identifies transaction-resource interactions that may cause global (multisite) deadlocks. Third, a deadlock detection algorithm utilizing a transaction-wait-for (TWF) graph is presented. It is a fully disjoint algorithm that allows multiple outstanding requests. The proposed algorithm can achieve improved overall performance by using multiple disjoint controllers coupled with the two-phase property while maintaining the simplicity of centralized schemes. Fourth, an algorithm that combines deadlock detection and avoidance is given. This algorithm uses concurrent transaction controllers and resource coordinators to achieve maximum distribution. The language of CSP is used to describe this algorithm. Finally, two efficient deadlock resolution protocols are given along with some guidelines to be used in choosing a transaction for abortion.
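Deadlock detection on a transaction-wait-for graph reduces to cycle detection. Below is a centralized sketch for illustration (the abstract's algorithms are distributed and fault-resilient; names here are hypothetical):

```python
def find_deadlock(wait_for):
    """Cycle detection in a transaction-wait-for (TWF) graph.
    wait_for maps each transaction to the set of transactions whose
    resources it is waiting on; a cycle means deadlock. Returns one
    cycle (a list of transactions, candidate victims) or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}
    parent = {}

    def dfs(start):
        stack = [(start, iter(wait_for.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, neighbors = stack[-1]
            advanced = False
            for nxt in neighbors:
                if color.get(nxt, WHITE) == GRAY:   # back edge: deadlock
                    cycle, cur = [nxt], node
                    while cur != nxt:
                        cycle.append(cur)
                        cur = parent[cur]
                    return cycle
                if color.get(nxt, WHITE) == WHITE:
                    parent[nxt] = node
                    color[nxt] = GRAY
                    stack.append((nxt, iter(wait_for.get(nxt, ()))))
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK  # fully explored, cannot be in a cycle
                stack.pop()
        return None

    for t in wait_for:
        if color[t] == WHITE:
            cycle = dfs(t)
            if cycle is not None:
                return cycle
    return None

# T1 -> T2 -> T3 -> T1 is a deadlock; T4 merely waits on T1
twf = {"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}, "T4": {"T1"}}
cycle = find_deadlock(twf)
```

The returned cycle is the natural input to a resolution protocol: choosing one of its members for abortion breaks the deadlock.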
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
Darabi-Darestani, Kaveh; Sari, Alireza; Sarafrazi, Alimorad; Utevsky, Serge
2018-04-01
Phylogenetic relationships between species of the genus Hirudo plus genetic variation in the entire distribution range of Hirudo orientalis were investigated based on mitochondrial (COI and 12S rDNA) and nuclear (ITS1+5.8S+ITS2) genome regions. The sister relationship of Hirudo orientalis and H. medicinalis was revealed with a high posterior probability. A broad and patchy distribution with minor genetic differences was observed in populations of H. orientalis along the central and Middle Eastern parts of Asia. The known distribution range occurred in topographically heterogeneous landscapes around the Caspian Sea. The demographic analysis suggests the selection of the COI locus under unfavourable respiratory conditions, but population size expansion cannot be fully rejected. The genetic variation trend indicated northward dispersal. Higher haplotype diversity in the South Caspian region potentially suggests the area as a historical refugium for the species. The vast dispersal is assumed to occur after the Pleistocene glaciations via vertebrate hosts. Copyright © 2017 Elsevier Inc. All rights reserved.
Understanding spatial connectivity of individuals with non-uniform population density.
Wang, Pu; González, Marta C
2009-08-28
We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed according to a country's measured population density. We observe that while large clusters (groups of connected individuals) emerge within some regions, they are trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed, in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations, such as those for WiFi and Bluetooth connections, fall in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in the large-scale spreading of contact-based diseases and wireless viruses.
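The comparison described above can be sketched with a union-find over a random geometric graph. This is an illustrative toy, not the authors' implementation: the point counts, contact distance, and the 0.3 x 0.3 "urban" patch standing in for a dense region are invented parameters.

```python
import random

def largest_cluster_fraction(points, d):
    """Union-find over a geometric graph: individuals closer than the
    contact distance d are connected; return the largest cluster's share."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    d2 = d * d
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):           # O(n^2); a grid index scales better
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= d2:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

rng = random.Random(1)
uniform = [(rng.random(), rng.random()) for _ in range(400)]
clustered = [(rng.random() * 0.3, rng.random() * 0.3) for _ in range(400)]
# The same contact distance connects far more of the dense "urban" population.
f_uniform = largest_cluster_fraction(uniform, 0.05)
f_clustered = largest_cluster_fraction(clustered, 0.05)
```

The dense patch percolates into one near-complete cluster at a contact distance where the uniformly spread population remains fragmented, mirroring the trapped-cluster effect the abstract describes.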
Gleim, A V; Egorov, V I; Nazarov, Yu V; Smirnov, S V; Chistyakov, V V; Bannik, O I; Anisimov, A A; Kynev, S M; Ivanova, A E; Collins, R J; Kozlov, S A; Buller, G S
2016-02-08
A quantum key distribution system based on the subcarrier wave modulation method has been demonstrated which employs the BB84 protocol with a strong reference to generate secure bits at a rate of 16.5 kbit/s with an error of 0.5% over an optical channel of 10 dB loss, and 18 bits/s with an error of 0.75% over 25 dB of channel loss. To the best of our knowledge, these results represent the highest channel loss reported for secure quantum key distribution using the subcarrier wave approach. A passive unidirectional scheme has been used to compensate for the polarization dependence of the phase modulators in the receiver module, which resulted in a high visibility of 98.8%. The system is thus fully insensitive to polarization fluctuations and robust to environmental changes, making the approach promising for use in optical telecommunication networks. Further improvements in secure key rate and transmission distance can be achieved by implementing the decoy states protocol or by optimizing the mean photon number used in line with experimental parameters.
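As context for the BB84 protocol the system implements, a noiseless sifting round can be sketched as follows. This is a textbook toy with no channel loss, subcarrier modulation, or eavesdropper; the round count is arbitrary.

```python
import random

def bb84_sift(n, rng):
    """Simulate ideal BB84 sifting: keep only rounds where Alice's and
    Bob's randomly chosen bases coincide."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # With matching bases Bob decodes perfectly; otherwise his result is random.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sifted

rng = random.Random(42)
sifted = bb84_sift(2000, rng)
errors = sum(1 for a, b in sifted if a != b)
# About half the rounds survive sifting; in this noiseless sketch QBER = 0.
```

In the real system the nonzero quantum bit error rate (0.5-0.75% here) comes from detector noise and channel imperfections, and further classical post-processing (error correction, privacy amplification) distils the sifted key into secure bits.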
1983-11-01
transmission, FM(R) will only have to hold one message. 3. Program Control Block (PCB). The PCB [Deitel 82] will be maintained by the Executive in... [list-of-figures fragments: Use of Kernel to Process Interrupts; Layered Operating System Design; Program Control Block Table; Ready List Data Structure] ...examples of fully distributed systems in operation. An objective of the NPS research program for SPLICE is to advance our knowledge of distributed
Deviations from Rayleigh statistics in ultrasonic speckle.
Tuthill, T A; Sperry, R H; Parker, K J
1988-04-01
The statistics of speckle patterns in ultrasound images have potential for tissue characterization. In "fully developed speckle" from many random scatterers, the amplitude is widely recognized as possessing a Rayleigh distribution. This study examines how scattering populations and signal processing can produce non-Rayleigh distributions. The first-order speckle statistics are shown to depend on the random-scatterer density and on the amplitude and spacing of added periodic scatterers. Envelope detection, amplifier compression, and signal bandwidth are also shown to cause distinct changes in the signal distribution.
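The emergence of Rayleigh statistics from many random scatterers, and the departure caused by a coherent (periodic-scatterer-like) component, can be illustrated with a random phasor sum. This is a generic sketch, not the authors' simulation; scatterer counts, sample counts, and the coherent amplitude are arbitrary.

```python
import math, random

def speckle_amplitude(n_scatterers, rng, coherent=0.0):
    """Envelope of a sum of unit phasors with random phases; an added
    coherent component pushes the statistics away from Rayleigh."""
    re = coherent + sum(math.cos(2 * math.pi * rng.random())
                        for _ in range(n_scatterers))
    im = sum(math.sin(2 * math.pi * rng.random())
             for _ in range(n_scatterers))
    return math.hypot(re, im)

rng = random.Random(7)
# Fully developed speckle: mean/std of the envelope ~ 1.91 (Rayleigh).
amps = [speckle_amplitude(100, rng) for _ in range(4000)]
mean = sum(amps) / len(amps)
std = (sum((a - mean) ** 2 for a in amps) / len(amps)) ** 0.5
snr = mean / std
# Adding a strong coherent component raises the point SNR well above 1.91,
# a classic first-order signature of non-Rayleigh speckle.
amps_c = [speckle_amplitude(100, rng, coherent=30.0) for _ in range(4000)]
mean_c = sum(amps_c) / len(amps_c)
std_c = (sum((a - mean_c) ** 2 for a in amps_c) / len(amps_c)) ** 0.5
snr_c = mean_c / std_c
```

The ratio mean/std of the envelope is a standard first-order test statistic: approximately 1.91 for Rayleigh speckle, and larger when a deterministic component is present.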
DAVE: A plug and play model for distributed multimedia application development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mines, R.F.; Friesen, J.A.; Yang, C.L.
1994-07-01
This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.
Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers
NASA Astrophysics Data System (ADS)
Lin, Chao-Chih; Chang, Ya-Chi; Yeh, Hund-Der
2018-04-01
Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model for describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrock and two nearly fully penetrating streams. A similar scenario for numerical studies was reported in Kihm et al. (2007). The water level of the streams is assumed to vary linearly with distance. The aquifer is divided into two subregions, and continuity conditions on the hydraulic head and flux are imposed at the interface of the subregions. The steady-state solution describing the head distribution for the model without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived using the steady-state solution as the initial condition, together with the methods of finite Fourier transform and Laplace transform. Moreover, the solution for the stream depletion rate (SDR) from each of the two streams is also developed based on the head solution and Darcy's law. Both head and SDR solutions in the time domain are obtained by a numerical inversion scheme, the Stehfest algorithm. The software MODFLOW is used for comparison with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
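The Stehfest algorithm used for the numerical inversion can be stated compactly. The sketch below is the standard Gaver-Stehfest formula, checked against a transform pair with a known inverse rather than the paper's head solution; the term count N = 12 is a typical choice.

```python
import math

def stehfest_coeffs(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)."""
    a = math.log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check against a pair with a known inverse: 1/(s+1) <-> exp(-t).
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The scheme only needs evaluations of F(s) at real s, which is why it pairs naturally with analytical Laplace-domain head and SDR solutions; it works best for smooth, non-oscillatory f(t).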
Electro-mechanical response of a 3D nerve bundle model to mechanical loads leading to axonal injury.
Cinelli, I; Destrade, M; Duffy, M; McHugh, P
2018-03-01
Traumatic brain injuries and damage are major causes of death and disability. We propose a 3D fully coupled electro-mechanical model of a nerve bundle to investigate the electrophysiological impairments due to trauma at the cellular level. The coupling is based on a thermal analogy of the neural electrical activity by using the finite element software Abaqus CAE 6.13-3. The model includes a real-time coupling, modulated threshold for spiking activation, and independent alteration of the electrical properties for each 3-layer fibre within a nerve bundle as a function of strain. Results of the coupled electro-mechanical model are validated with previously published experimental results of damaged axons. Here, the cases of compression and tension are simulated to induce (mild, moderate, and severe) damage at the nerve membrane of a nerve bundle, made of 4 fibres. Changes in strain, stress distribution, and neural activity are investigated for myelinated and unmyelinated nerve fibres, by considering the cases of an intact and of a traumatised nerve membrane. A fully coupled electro-mechanical modelling approach is established to provide insights into crucial aspects of neural activity at the cellular level due to traumatic brain injury. One of the key findings is the 3D distribution of residual stresses and strains at the membrane of each fibre due to mechanically induced electrophysiological impairments, and its impact on signal transmission. Copyright © 2017 John Wiley & Sons, Ltd.
Nano-colloid electrophoretic transport: Fully explicit modelling via dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Hassanzadeh Afrouzi, Hamid; Farhadi, Mousa; Sedighi, Kurosh; Moshfegh, Abouzar
2018-02-01
In the present study, a novel fully explicit approach using the dissipative particle dynamics (DPD) method is introduced for modelling electrophoretic transport of nano-colloids in an electrolyte solution. A Slater-type charge smearing function included in the 3D Ewald summation method is employed to treat electrostatic interactions. Moreover, the capabilities of different thermostats to control the system temperature are compared, and the dynamic response of the colloidal electrophoretic mobility is studied under practical ranges of external electric field for nanoscale applications (0.072 < E < 0.361 V/nm), covering the non-linear response regime, and of ionic salt concentration (0.049 < SC < 0.69 M), covering weak to strong Debye screening of the colloid. The effects of different colloidal repulsions are then studied on temperature, reduced mobility, and zeta potential, the last computed from the charge distribution within the spherical colloidal EDL. System temperature and electrophoretic mobility both increase with electric field and decrease with colloidal repulsion. The mobility decline with colloidal repulsion reaches a plateau, remaining relatively constant at each electrolyte salinity for Aii > 600 in DPD units, regardless of electric field intensity. The Nosé-Hoover-Lowe-Andersen and Lowe-Andersen thermostats are found to function more effectively under high electric fields (E > 0.145 V/nm) while thermal equilibrium is maintained. Reasonable agreement is achieved by benchmarking the radial distribution function against available electrolyte structure modelling, as well as by comparing reduced mobility against the conventional Smoluchowski and Hückel theories and a numerical solution of the Poisson-Boltzmann equation.
NASA Astrophysics Data System (ADS)
Holt, Robert W.; Zhang, Rongxiao; Esipova, Tatiana V.; Vinogradov, Sergei A.; Glaser, Adam K.; Gladstone, David J.; Pogue, Brian W.
2014-09-01
Megavoltage radiation beams used in External Beam Radiotherapy (EBRT) generate Cherenkov light emission in tissues and equivalent phantoms. This optical emission was utilized to excite an oxygen-sensitive phosphorescent probe, PtG4, which has been developed specifically for NIR lifetime-based sensing of the partial pressure of oxygen (pO2). Phosphorescence emission, at different time points with respect to the excitation pulse, was acquired by an intensifier-gated CCD camera synchronized with radiation pulses delivered by a medical linear accelerator. The pO2 distribution was tomographically recovered in a tissue-equivalent phantom during EBRT with multiple beams targeted from different angles at a tumor-like anomaly. The reconstructions were tested in two different phantoms that have fully oxygenated background, to compare a fully oxygenated and a fully deoxygenated inclusion. To simulate a realistic situation of EBRT, where the size and location of the tumor is well known, spatial information of a prescribed region was utilized in the recovery estimation. The phantom results show that region-averaged pO2 values were recovered successfully, differentiating aerated and deoxygenated inclusions. Finally, a simulation study was performed showing that pO2 in human brain tumors can be measured to within 15 mmHg for edge depths less than 10-20 mm using the Cherenkov Excited Phosphorescence Oxygen imaging (CEPhOx) method and PtG4 as a probe. This technique could allow non-invasive monitoring of pO2 in tumors during the normal process of EBRT, where beams are generally delivered from multiple angles or arcs during each treatment fraction.
Holt, Robert W; Zhang, Rongxiao; Esipova, Tatiana V; Vinogradov, Sergei A; Glaser, Adam K; Gladstone, David J; Pogue, Brian W
2014-09-21
Megavoltage radiation beams used in External Beam Radiotherapy (EBRT) generate Cherenkov light emission in tissues and equivalent phantoms. This optical emission was utilized to excite an oxygen-sensitive phosphorescent probe, PtG4, which has been developed specifically for NIR lifetime-based sensing of the partial pressure of oxygen (pO2). Phosphorescence emission, at different time points with respect to the excitation pulse, was acquired by an intensifier-gated CCD camera synchronized with radiation pulses delivered by a medical linear accelerator. The pO2 distribution was tomographically recovered in a tissue-equivalent phantom during EBRT with multiple beams targeted from different angles at a tumor-like anomaly. The reconstructions were tested in two different phantoms that have fully oxygenated background, to compare a fully oxygenated and a fully deoxygenated inclusion. To simulate a realistic situation of EBRT, where the size and location of the tumor is well known, spatial information of a prescribed region was utilized in the recovery estimation. The phantom results show that region-averaged pO2 values were recovered successfully, differentiating aerated and deoxygenated inclusions. Finally, a simulation study was performed showing that pO2 in human brain tumors can be measured to within 15 mmHg for edge depths less than 10-20 mm using the Cherenkov Excited Phosphorescence Oxygen imaging (CEPhOx) method and PtG4 as a probe. This technique could allow non-invasive monitoring of pO2 in tumors during the normal process of EBRT, where beams are generally delivered from multiple angles or arcs during each treatment fraction.
NASA Astrophysics Data System (ADS)
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present the development of a soil evolution framework and multiscale modelling of the surfaces of Mars, the Moon and Itokawa, thus providing an atlas of extra-terrestrial Particle Size Distributions (PSDs). These PSDs are based on a tailoring method which interconnects several datasets from different sites captured by the various missions. The final integrated product is then fully justified through a soil evolution analysis model mathematically constructed from fundamental physical principles (Charalambous, 2013). The construction of the PSD takes into account the macroscale fresh primary impacts and their products, the mesoscale distributions obtained from the in-situ data of surface missions (Golombek et al., 1997, 2012), and finally the microscopic-scale distributions provided by Curiosity and the Phoenix Lander (Pike et al., 2011). The distribution naturally extends to scales at which no data currently exist, owing to the lack of scientific instruments capturing the particle populations there. The extension is based on the model distribution (Charalambous, 2013), which takes as parameters known values of material-specific fragmentation probabilities and grinding limits. Additionally, the establishment of a closed-form statistical distribution provides a quantitative description of the soil's structure. Consequently, reverse engineering of the model distribution allows the synthesis of soil that faithfully represents the particle population at the studied sites (Charalambous et al., 2011). Such a representation essentially delivers a virtual soil environment to work with for numerous applications. A specific application demonstrated here is the information that can be directly extracted on the probability of successful drilling as a function of distance, in an effort to aid the HP3 instrument of the 2016 InSight mission to Mars.
References: Pike, W. T., et al. (2011). "Quantification of the dry history of the Martian soil inferred from in situ microscopy." Geophysical Research Letters, 38(24). Charalambous, C., & Pike, W. T. (2013). "Evolution of particle size distributions in fragmentation over time." Abstract submitted to the AGU 46th Fall Meeting. Charalambous, C., Pike, W. T., Goetz, W., Hecht, M. H., & Staufer, U. (2011, December). "A digital Martian soil based on in-situ data." AGU Fall Meeting Abstracts (Vol. 1, p. 1669). Golombek, M., & Rapp, D. (1997). "Size-frequency distributions of rocks on Mars and Earth analog sites: Implications for future landed missions." Journal of Geophysical Research, 102(E2), 4117-4129. Golombek, M., Huertas, A., Kipp, D., & Calef, F. (2012). "Detection and characterization of rocks and rock size-frequency distributions at the final four Mars Science Laboratory landing sites." Mars, 7, 1-22.
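The closed form of the model distribution is not given here, so as an illustrative stand-in the sketch below synthesizes a particle population from a Rosin-Rammler (Weibull) law, a distribution commonly used for fragmentation products, truncated below at a grinding limit. All parameter values are invented, and this is not the Charalambous (2013) model itself.

```python
import math
import random

def synthesize_psd(n_particles, x_char, shape, grind_limit, rng):
    """Draw particle diameters from a Rosin-Rammler (Weibull) law by
    inverse-CDF sampling, rejecting sizes below the grinding limit."""
    sizes = []
    while len(sizes) < n_particles:
        u = 1.0 - rng.random()                      # u in (0, 1]
        d = x_char * (-math.log(u)) ** (1.0 / shape)
        if d >= grind_limit:                        # comminution stops here
            sizes.append(d)
    return sizes

rng = random.Random(3)
sizes = synthesize_psd(5000, x_char=100e-6, shape=0.9,
                       grind_limit=1e-6, rng=rng)
# For a Weibull law, cumulative passing at d = x_char is ~1 - 1/e (~63%).
frac_passing = sum(1 for d in sizes if d <= 100e-6) / len(sizes)
```

Reverse-engineering a fitted closed-form distribution in this way is what turns a PSD atlas into a synthetic "virtual soil" that can feed applications such as drilling-success estimates.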
Yaryhin, Oleksandr; Werneburg, Ingmar
2018-06-08
The sand lizard, Lacerta agilis, is a classical model species in herpetology. Its adult skull anatomy and its embryonic development are well known. The description of its fully formed primordial skull by Ernst Gaupp, in 1900, was a key publication in vertebrate morphology and influenced many comparative embryologists. Based on recent methodological considerations, we restudied the early cranial development of this species, starting as early as the formation of mesenchymal condensations and continuing up to the fully formed chondrocranium. We traced the formation of the complex chondrocranial architecture in detail, clarified specific homologies for the first time, and uncovered major differences from old textbook descriptions. Comparison with other lacertid lizards revealed a very similar genesis of the primordial skull. However, we detected shifts in the developmental timing of particular cartilaginous elements, mainly in the nasal region, which may correlate with specific ecological adaptations in the adults. Late timing of nasal elements might be an important innovation underlying the successful wide-range distribution of the well-known sand lizard. © 2018 Wiley Periodicals, Inc.
Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao
2016-01-01
One of the modern crop breeding techniques uses doubled haploid plants, which contain an identical pair of chromosomes, to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selection of doubled haploids. The conventional methods, based on the color of the endosperm and embryo of seeds, are slow, manual, and prone to error. On the other hand, there is a significant difference in oil content between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully automated high-throughput NMR screening system for maize haploid kernel identification. The system comprises a sampler unit that selects a single kernel and feeds it for NMR and weight measurement, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed. PMID:27454427
NASA Astrophysics Data System (ADS)
Jensen, Robert K.; Fletcher, P.; Abraham, C.
1991-04-01
The segment mass proportions and moments of inertia of a sample of twelve females and seven males with mean ages of 67.4 and 69.5 years were estimated using textbook proportions based on cadaver studies. These were then compared with the parameters calculated using a mathematical model, the zone method. The methodology of the model was fully evaluated for accuracy and precision and judged to be adequate. The results of the comparisons show that for some segments female parameters are quite different from male parameters and are inadequately predicted by the cadaver proportions. The largest discrepancies were for the thigh and the trunk. The cadaver predictions were generally less than satisfactory, although the common variance for some segments was moderately high. The use of non-linear regression and segment anthropometry was illustrated for the thigh moments of inertia and appears to be appropriate. However, the predictions from cadaver data need to be examined fully. These results are dependent on the changes in mass and density distribution which occur with aging and the changes which occur in cadaver samples prior to and following death.
Local impact of humidification on degradation in polymer electrolyte fuel cells
NASA Astrophysics Data System (ADS)
Sanchez, Daniel G.; Ruiu, Tiziana; Biswas, Indro; Schulze, Mathias; Helmly, Stefan; Friedrich, K. Andreas
2017-06-01
The water level in a polymer electrolyte membrane fuel cell (PEMFC) affects its durability, as seen from the degradation processes during operation of a PEMFC with fully humidified and non-humidified gas streams. These were analyzed using an in-situ segmented cell for local current density measurements during a 300 h test under constant conditions, together with ex-situ SEM/EDX and XPS post-test analyses of specific regions. The impact of relative humidity (RH) on the spatial distribution of the degradation processes results from different water distributions, which create different local chemical environments. With non-humidified gas streams, the cathode inlet region exhibits increased degradation, whereas with fully humidified gases the bottom of the cell shows the higher performance losses. The degradation, and its degree of reversibility, produced by Pt dissolution, PTFE defluorination, and contaminants such as silicon (Si) and nickel (Ni) were evaluated locally.
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung
2016-01-01
Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
A charge-based model of Junction Barrier Schottky rectifiers
NASA Astrophysics Data System (ADS)
Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali
2018-06-01
A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on a description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem with a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The model is validated by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Even though fully 2-D models already available provide higher accuracy, they lack physical insight, making the proposed model a useful tool for device design.
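For context on the kind of 1-D field profile such a model reduces to, the standard depletion approximation for an abrupt one-sided junction can be sketched as below. This is textbook device physics, not the paper's charge-sharing model; the bias and doping values are arbitrary examples.

```python
Q = 1.602e-19               # elementary charge [C]
EPS_SI = 11.7 * 8.854e-12   # silicon permittivity [F/m]

def depletion_field(v_r, n_d):
    """Abrupt one-sided junction in the depletion approximation:
    return (depletion width [m], peak field [V/m]) at reverse bias v_r
    for drift-region doping n_d [m^-3]. The field profile is triangular,
    E(x) = E_max * (1 - x / w), so E_max * w / 2 = v_r."""
    w = (2.0 * EPS_SI * v_r / (Q * n_d)) ** 0.5
    e_max = Q * n_d * w / EPS_SI
    return w, e_max

# Example: 600 V reverse bias on a 1e21 m^-3 (1e15 cm^-3) drift region.
w, e_max = depletion_field(600.0, 1e21)
```

The JBS charge-sharing effect amounts to diverting part of this depletion charge to the lateral pn-junctions, which lowers the peak field at the Schottky contact relative to the plain triangular profile above.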
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung
2016-10-25
Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.
Experimental Study of Combined Forced and Free Laminar Convection in a Vertical Tube
NASA Technical Reports Server (NTRS)
Hallman, Theodore M.
1961-01-01
An apparatus was built to verify an analysis of combined forced and free convection in a vertical tube with uniform wall heat flux and to determine the limits of the analysis. The test section was electrically heated by resistance heating of the tube wall and was instrumented with thermocouples in such a way that detailed thermal-entrance heat-transfer coefficients could be obtained for both upflow and downflow and any asymmetry in wall temperature could be detected. The experiments showed that fully developed heat-transfer results, predicted by a previous analysis, were confirmed over the range of Rayleigh numbers investigated. The concept of "locally fully developed" heat transfer was established. This concept involves the assumption that the fully developed heat-transfer analysis can be applied locally even though the Rayleigh number varies along the tube because of physical-property variations with temperature. Thermal entrance region data were obtained for pure forced convection and for combined forced and free convection. The analysis of laminar pure forced convection in the thermal entrance region conducted by Siegel, Sparrow, and Hallman was experimentally confirmed. A transition to an eddy motion, indicated by fluctuations in wall temperature, was found in many of the upflow runs, and a stability correlation was found. The fully developed Nusselt numbers in downflow were below those for pure forced convection but fell about 10 percent above the analytical curve. Quite large circumferential variations in wall temperature were observed in downflow as compared with those encountered in upflow, and the fully developed Nusselt numbers reported are based on average wall temperatures determined by averaging the readings of two diametrically opposite wall thermocouples at each axial position. With larger heating rates in downflow the wall temperature distributions strongly suggested a cell flow near the bottom.
At still larger heating rates the wall temperatures varied in a periodic way.
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or when data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic-time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
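As a baseline for the counting problems above, the easy contemporaneous case has a simple product form: going backward in time, each of the n - 1 coalescence events merges one of C(k, 2) available pairs. The serially sampled and constrained cases treated in the paper require the recursive algorithms described, not this formula.

```python
from math import comb, factorial

def ranked_tree_count(n):
    """Number of fully ranked (labelled, binary) trees on n
    contemporaneous tips: the product of C(k, 2) for k = 2..n,
    which equals n! (n-1)! / 2^(n-1)."""
    count = 1
    for k in range(2, n + 1):
        count *= comb(k, 2)
    return count

counts = [ranked_tree_count(n) for n in range(1, 6)]
# -> [1, 1, 3, 18, 180]; the closed form gives the same values.
closed_form = factorial(5) * factorial(4) // 2 ** 4
```

The super-exponential growth of this count is one reason explicit tree-space sizes matter for specifying priors in Bayesian MCMC over trees.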
Chen, Jinxiang; Tuo, Wanyong; Zhang, Xiaoming; He, Chenglin; Xie, Juan; Liu, Chang
2016-12-01
To develop lightweight biomimetic composite structures, the compressive failure and mechanical properties of fully integrated honeycomb plates were investigated experimentally and through the finite element method. The results indicated that fracturing of the fully integrated honeycomb plates primarily occurred in the core layer, including the sealing edge structure. The morphological failures can be classified into two types, namely dislocations and compactions, and were caused primarily by the stress concentrations at the interfaces between the core layer and the upper and lower laminations, and secondarily by the disordered short-fiber distribution in the material. Although the fully integrated honeycomb plates manufactured in this experiment were imperfect, their mass-specific compressive strength was superior to that of similar biomimetic samples. Therefore, the proposed bio-inspired structure possesses good overall mechanical properties, and a range of parameters, such as the diameter of the transition arc, was defined for enhancing the design of fully integrated honeycomb plates and improving their compressive mechanical properties. Copyright © 2016 Elsevier B.V. All rights reserved.
Comparisons of dense-plasma-focus kinetic simulations with experimental measurements.
Schmidt, A; Link, A; Welch, D; Ellsworth, J; Falabella, S; Tang, V
2014-06-01
Dense-plasma-focus (DPF) Z-pinch devices are sources of copious high-energy electrons and ions, x rays, and neutrons. The mechanisms through which these physically simple devices generate such high-energy beams in a relatively short distance are not fully understood and past optimization efforts of these devices have been largely empirical. Previously we reported on fully kinetic simulations of a DPF and compared them with hybrid and fluid simulations of the same device. Here we present detailed comparisons between fully kinetic simulations and experimental data on a 1.2 kJ DPF with two electrode geometries, including neutron yield and ion beam energy distributions. A more intensive third calculation is presented which examines the effects of a fully detailed pulsed power driver model. We also compare simulated electromagnetic fluctuations with direct measurement of radiofrequency electromagnetic fluctuations in a DPF plasma. These comparisons indicate that the fully kinetic model captures the essential physics of these plasmas with high fidelity, and provide further evidence that anomalous resistivity in the plasma arises due to a kinetic instability near the lower hybrid frequency.
Technologies for distributed defense
NASA Astrophysics Data System (ADS)
Seiders, Barbara; Rybka, Anthony
2002-07-01
For Americans, the nature of warfare changed on September 11, 2001. Our national security henceforth will require distributed defense. One extreme of distributed defense is represented by fully deployed military troops responding to a threat from a hostile nation state. At the other extreme is a country of 'citizen soldiers', with families and communities securing their common defense through heightened awareness, engagement as good neighbors, and local support of and cooperation with local law enforcement, emergency and health care providers. Technologies - for information exploitation, biological agent detection, health care surveillance, and security - will be critical to ensuring success in distributed defense.
Code of Federal Regulations, 2011 CFR
2011-04-01
... on World War II active military or naval service. 404.111 Section 404.111 Employees' Benefits SOCIAL... Quarters of Coverage Fully Insured Status § 404.111 When we consider a person fully insured based on World... States during World War II; (b) The person died within three years after separation from service and...
NASA Astrophysics Data System (ADS)
Tian, Y.; Zheng, Y.; Zheng, C.; Han, F., Sr.
2017-12-01
Physically based and fully-distributed integrated hydrological models (IHMs) can quantitatively depict hydrological processes, both surface and subsurface, with sufficient spatial and temporal detail. However, the complexity involved in pre-processing data and setting up models has seriously hindered the wider application of IHMs in scientific research and management practice. This study introduces our design and development of Visual HEIFLOW, hereafter referred to as VHF, a comprehensive graphical data processing and modeling system for integrated hydrological simulation. The current version of VHF has been structured to accommodate an IHM named HEIFLOW (Hydrological-Ecological Integrated watershed-scale FLOW model). HEIFLOW, a model under development by the authors, has all the typical elements of physically based and fully-distributed IHMs. It is based on GSFLOW, a representative integrated surface water-groundwater model developed by the USGS. HEIFLOW provides several ecological modules that enable simulation of the growth cycles of general vegetation and of specific plants (maize and Populus euphratica). VHF incorporates and streamlines all key steps of integrated modeling, and accommodates all types of GIS data necessary for hydrological simulation. It provides a GIS-based data processing framework to prepare an IHM for simulations, and has functionalities to flexibly display and modify model features (e.g., model grids, streams, boundary conditions, observational sites) and their associated data. It enables visualization and various spatio-temporal analyses of all model inputs and outputs at different scales (i.e., computing unit, sub-basin, basin, or user-defined spatial extent). These system features, among many others, can significantly reduce the difficulty and time cost of building and using a complex IHM. A case study in the Heihe River Basin demonstrated the applicability of VHF for large-scale integrated SW-GW modeling. 
Visualization and spatial-temporal analysis of the modeling results by HEIFLOW greatly facilitates our understanding on the complicated hydrologic cycle and relationship among the hydrological and ecological variables in the study area, and provides insights into the regional water resources management.
Using Satellite and Airborne LiDAR to Model Woodpecker Habitat Occupancy at the Landscape Scale
Vierling, Lee A.; Vierling, Kerri T.; Adam, Patrick; Hudak, Andrew T.
2013-01-01
Incorporating vertical vegetation structure into models of animal distributions can improve understanding of the patterns and processes governing habitat selection. LiDAR can provide such structural information, but these data are typically collected via aircraft and thus are limited in spatial extent. Our objective was to explore the utility of satellite-based LiDAR data from the Geoscience Laser Altimeter System (GLAS), relative to airborne LiDAR, to model the north Idaho breeding distribution of a forest-dependent ecosystem engineer, the Red-naped Sapsucker (Sphyrapicus nuchalis). GLAS data occurred within ca. 64 m diameter ellipses spaced a minimum of 172 m apart, and all occupancy analyses were confined to this grain scale. Using a hierarchical approach, we modeled Red-naped Sapsucker occupancy as a function of LiDAR metrics derived from both platforms. Occupancy models based on satellite data were weak, possibly because the data within the GLAS ellipse did not fully represent habitat characteristics important for this species. The most important structural variables influencing Red-naped Sapsucker breeding site selection based on airborne LiDAR data included foliage height diversity, the distance between major strata in the canopy vertical profile, and the vegetation density near the ground. These characteristics are consistent with the diversity of foraging activities exhibited by this species. To our knowledge, this study represents the first to examine the utility of satellite-based LiDAR to model animal distributions. The large area of each GLAS ellipse and the non-contiguous nature of GLAS data may pose significant challenges for wildlife distribution modeling; nevertheless, these data can provide useful information on ecosystem vertical structure, particularly in areas of gentle terrain. Additional work is thus warranted to utilize LiDAR datasets collected from both airborne and past and future satellite platforms (e.g., GLAS and the planned ICESat-2 mission), with the goal of improving wildlife modeling for more locations across the globe. PMID:24324655
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore, soil type needs to be weighted in a systematic manner when formulating equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of the hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. 
To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to obtain total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. Transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moisture from the equivalent cross-section approach is compared with in-situ soil moisture observations in the Wagga Wagga experimental catchment in NSW, and the results are consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach shows great potential for implementation of distributed hydrological models at regional scales.
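The area-weighted aggregation described in this abstract can be sketched numerically. The toy flux function and all cross-section properties below are hypothetical stand-ins for the Richards'-equation simulator and real sub-basin data; the sketch only illustrates how an equivalent cross-section with averaged properties approximates the reference area-weighted total.

```python
def storm_flux(slope, soil_depth):
    """Toy nonlinear per-unit-area flux, standing in for the Richards'-equation model."""
    return 0.8 * slope / (1.0 + soil_depth)

# Hypothetical hillslope cross-sections: (slope [-], soil depth [m], area [m^2]).
sections = [(0.10, 1.0, 200.0), (0.12, 1.2, 250.0), (0.15, 1.5, 300.0),
            (0.12, 1.2, 250.0), (0.10, 1.0, 200.0)]

# Reference: simulate every cross-section, weight each flux by its contributing area.
reference = sum(storm_flux(s, d) * a for s, d, a in sections)

# Equivalent: one cross-section with area-weighted average properties, simulated once.
total_area = sum(a for _, _, a in sections)
slope_eq = sum(s * a for s, _, a in sections) / total_area
depth_eq = sum(d * a for _, d, a in sections) / total_area
equivalent = storm_flux(slope_eq, depth_eq) * total_area

print(round(reference, 2), round(equivalent, 2))
```

Because the toy flux model is only mildly nonlinear, the single equivalent section reproduces the reference total to within a percent while requiring one simulation instead of five, which is the computational saving the abstract reports.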
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false When we consider a person fully insured based on World War II active military or naval service. 404.111 Section 404.111 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Insured Status and Quarters of Coverage Fully Insured Status §...
2013-01-01
Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of the complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure, due in part to the low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm that utilizes IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing natural cubic spline interpolation to account for the nonlinearity of both the vascular structure geometry and the acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies demonstrated that the shape-based nonlinear interpolation algorithm was more robust than the conventional pixel-based interpolation method in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation yielded improved volumetric 3D visualization of the in vivo arterial structure and a more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. 
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
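The core of the method above is natural cubic spline interpolation of shape parameters across slices, rather than blending pixel intensities. A minimal sketch under strong simplifications: we interpolate a single hypothetical lumen-radius profile along the slice index, whereas the real algorithm operates on full 2D contours and backscatter fields.

```python
def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the knot second derivatives M_i, with M_0 = M_n = 0.
    a = [0.0] * (n + 1); b = [1.0] * (n + 1); c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def evaluate(x):
        i = n - 1
        for j in range(n):
            if x <= xs[j + 1]:
                i = j
                break
        t = x - xs[i]
        b_i = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0
        return (ys[i] + t * b_i + t * t * m[i] / 2.0
                + t ** 3 * (m[i + 1] - m[i]) / (6.0 * h[i]))
    return evaluate

# Hypothetical lumen radii (mm) on five original slices; estimate a slice at 2.5.
slice_index = [0.0, 1.0, 2.0, 3.0, 4.0]
radius = [2.0, 2.1, 2.6, 2.4, 2.2]
r = natural_cubic_spline(slice_index, radius)
print(round(r(2.5), 3))
```

The spline reproduces every original slice exactly and gives a smooth, shape-aware estimate in between, which is why it avoids the stair-step artifacts of per-pixel blending.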
Flow distribution in parallel microfluidic networks and its effect on concentration gradient
Guermonprez, Cyprien; Michelin, Sébastien; Baroud, Charles N.
2015-01-01
The architecture of microfluidic networks can significantly impact the flow distribution within its different branches and thereby influence tracer transport within the network. In this paper, we study the flow rate distribution within a network of parallel microfluidic channels with a single input and single output, using a combination of theoretical modeling and microfluidic experiments. Within the ladder network, the flow rate distribution follows a U-shaped profile, with the highest flow rate occurring in the initial and final branches. The contrast with the central branches is controlled by a single dimensionless parameter, namely, the ratio of hydrodynamic resistance between the distribution channel and the side branches. This contrast in flow rates decreases when the resistance of the side branches increases relative to the resistance of the distribution channel. When the inlet flow is composed of two parallel streams, one of which transporting a diffusing species, a concentration variation is produced within the side branches of the network. The shape of this concentration gradient is fully determined by two dimensionless parameters: the ratio of resistances, which determines the flow rate distribution, and the Péclet number, which characterizes the relative speed of diffusion and advection. Depending on the values of these two control parameters, different distribution profiles can be obtained ranging from a flat profile to a step distribution of solute, with well-distributed gradients between these two limits. Our experimental results are in agreement with our numerical model predictions, based on a simplified 2D advection-diffusion problem. 
Finally, two possible applications of this work are presented: the first one combines the present design with self-digitization principle to encapsulate the controlled concentration in nanoliter chambers, while the second one extends the present design to create a continuous concentration gradient within an open flow chamber. PMID:26487905
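The U-shaped flow distribution described above can be reproduced by solving Kirchhoff's laws on a hydraulic resistance network. The sketch below assumes a Z-configuration (inlet at one end of the distribution channel, outlet at the far end of the collection channel) with illustrative resistance values; it is a lumped-resistance analogue, not the paper's full advection-diffusion model.

```python
# N parallel side branches (resistance r_b) between a distribution channel and a
# collection channel (resistance r_d per segment). Node pressures follow from
# flow conservation at every node (Kirchhoff), solved as a dense linear system.

def solve(a, b):
    """Gaussian elimination with partial pivoting (fine for small dense systems)."""
    n = len(b)
    a = [row[:] for row in a]; b = b[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[piv] = a[piv], a[k]; b[k], b[piv] = b[piv], b[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def branch_flows(n, r_d, r_b, q_in=1.0):
    size = 2 * n                  # nodes 0..n-1: distribution rail, n..2n-1: collection rail
    g = [[0.0] * size for _ in range(size)]
    rhs = [0.0] * size

    def link(i, j, resistance):   # add a channel of given resistance between nodes i, j
        c = 1.0 / resistance
        g[i][i] += c; g[j][j] += c
        g[i][j] -= c; g[j][i] -= c

    for i in range(n - 1):
        link(i, i + 1, r_d)               # distribution channel segments
        link(n + i, n + i + 1, r_d)       # collection channel segments
    for i in range(n):
        link(i, n + i, r_b)               # side branches
    rhs[0] = q_in                         # imposed inlet flow at first top node
    out = 2 * n - 1                       # outlet: last bottom node, pressure pinned to 0
    g[out] = [0.0] * size; g[out][out] = 1.0; rhs[out] = 0.0

    p = solve(g, rhs)
    return [(p[i] - p[n + i]) / r_b for i in range(n)]

flows = branch_flows(n=7, r_d=1.0, r_b=5.0)
print([round(f, 3) for f in flows])   # U-shaped: end branches highest, middle lowest
```

Raising `r_b` relative to `r_d` flattens the profile, which is the single dimensionless control parameter (resistance ratio) identified in the abstract.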
Parton Distributions based on a Maximally Consistent Dataset
NASA Astrophysics Data System (ADS)
Rojo, Juan
2016-04-01
The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this is the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of their implications for LHC phenomenology, finding good consistency with the global fit result as well. These results provide a non-trivial validation test of the new NNPDF3.0 fitting methodology, and indicate that possible inconsistencies in the fitted dataset do not substantially affect the global-fit PDFs.
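The reweighting mechanics can be sketched numerically. Note the hedges: the weight formula below is the simplified w_k ∝ exp(-χ²_k/2) rather than the full Giele-Keller expression used in the NNPDF procedure, and the χ² values are random placeholders, not real replica fits.

```python
# Bayesian reweighting sketch: each PDF replica k receives a weight from its
# chi^2 against a new dataset; the Shannon-entropy "effective number of
# replicas" N_eff measures how much of the ensemble survives the reweighting.
import math, random

random.seed(1)
n_rep = 100
# Hypothetical total chi^2 per replica for 10 new data points.
chi2 = [random.gauss(1.0, 0.3) ** 2 * 10 for _ in range(n_rep)]

w = [math.exp(-c / 2.0) for c in chi2]          # simplified weight (illustrative)
norm = sum(w)
w = [n_rep * x / norm for x in w]               # normalize so sum(w) = n_rep

# N_eff = exp( (1/N) * sum_k w_k * ln(N / w_k) )
n_eff = math.exp(sum(x / n_rep * math.log(n_rep / x) for x in w if x > 0))
print(round(n_eff, 1))
```

A small N_eff signals that the new data are poorly compatible with the prior replica set, which is exactly the diagnostic used to flag inconsistent datasets when building a conservative set.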
Self-referenced continuous-variable quantum key distribution protocol
Soh, Daniel Beom Soo; Sarovar, Mohan; Brif, Constantin; ...
2015-10-21
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice’s and Bob’s measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. Furthermore, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
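The phase estimation and compensation step can be illustrated with a toy classical simulation: Bob estimates the unknown phase drift from a reference pulse of known modulation, then rotates the signal quadratures back. Amplitudes, noise levels, and the drift model are placeholders, not the experimental parameters of the protocol.

```python
# Toy sketch of reference-pulse phase recovery: the channel applies an unknown
# quadrature rotation theta plus noise; a strong reference pulse with known
# quadratures (A, 0) reveals theta, which is then undone on the signal pulse.
import math, random

random.seed(7)
theta = 1.1                    # unknown phase drift between Alice and Bob
noise = 0.05                   # per-quadrature measurement noise (std)

def measure(x, p):             # quadratures as seen by Bob: rotation + noise
    xm = x * math.cos(theta) - p * math.sin(theta) + random.gauss(0, noise)
    pm = x * math.sin(theta) + p * math.cos(theta) + random.gauss(0, noise)
    return xm, pm

A = 20.0                       # reference pulse amplitude (strong, known state)
xr, pr = measure(A, 0.0)
theta_est = math.atan2(pr, xr)              # phase estimate from the reference pulse

x_sig, p_sig = measure(0.7, -0.3)           # Alice's secret signal quadratures
x_comp = x_sig * math.cos(-theta_est) - p_sig * math.sin(-theta_est)
p_comp = x_sig * math.sin(-theta_est) + p_sig * math.cos(-theta_est)
print(round(theta_est, 3), round(x_comp, 2), round(p_comp, 2))
```

The residual error scales with noise/A, which is the classical shadow of the key point in the abstract: the quantum uncertainty of a finite-amplitude reference pulse bounds how well the phase can be known.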
Self-Referenced Continuous-Variable Quantum Key Distribution Protocol
NASA Astrophysics Data System (ADS)
Soh, Daniel B. S.; Brif, Constantin; Coles, Patrick J.; Lütkenhaus, Norbert; Camacho, Ryan M.; Urayama, Junji; Sarovar, Mohan
2015-10-01
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice's and Bob's measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. As such, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
Zhang, Kejiang; Achari, Gopal; Pei, Yuansheng
2010-10-01
Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs, decision makers cannot fully take advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods, for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. Uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be properly represented as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables according to their nature. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain. The propagation of hybrid uncertainties is then carried out in the same domain. This methodology can use the original site information as directly as possible. The case study shows that this systematic methodology provides more reasonable results. © 2010 SETAC.
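The PROMETHEE ranking step itself can be sketched once all hybrid inputs have been reduced to crisp criterion scores. The sites, criteria, weights, and the "usual" (linear difference) preference function below are illustrative assumptions, not data from the case study; the paper's contribution lies in the 2-tuple uncertainty handling that would precede this step.

```python
# PROMETHEE II sketch: pairwise preference degrees per criterion, weighted and
# averaged into positive/negative outranking flows; sites ranked by net flow.

sites = ["site_A", "site_B", "site_C"]
scores = {                      # hypothetical [human health risk, ecological risk, cost of inaction]
    "site_A": [0.8, 0.4, 0.6],  # higher score = worse risk = higher cleanup priority
    "site_B": [0.5, 0.7, 0.5],
    "site_C": [0.3, 0.2, 0.4],
}
weights = [0.5, 0.3, 0.2]

def preference(a, b):
    """Usual criterion: P_j(a, b) = max(0, f_j(a) - f_j(b)), combined by weights."""
    return sum(w * max(0.0, fa - fb)
               for w, fa, fb in zip(weights, scores[a], scores[b]))

def net_flow(a):
    others = [b for b in sites if b != a]
    phi_plus = sum(preference(a, b) for b in others) / (len(sites) - 1)
    phi_minus = sum(preference(b, a) for b in others) / (len(sites) - 1)
    return phi_plus - phi_minus

ranking = sorted(sites, key=net_flow, reverse=True)
print(ranking)
```

Net flows sum to zero across alternatives, a standard PROMETHEE II property that makes a handy sanity check on any implementation.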
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.
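A minimal version of such a null-hypothesis character model can be sketched with bigram statistics: learn transition probabilities from whitelisted names and score a candidate by its average log-likelihood per transition. The tiny whitelist and test names below are toy stand-ins for the large whitelisted corpus the method assumes, and the scoring is far simpler than the paper's full generative model over words and lengths.

```python
# Character-bigram null model: benign names share letter transitions with the
# whitelist; algorithmically generated names hit many unseen transitions and
# therefore score low under add-one (Laplace) smoothing.
import math
from collections import defaultdict

whitelist = ["google", "facebook", "amazon", "wikipedia", "youtube",
             "twitter", "instagram", "netflix", "microsoft", "linkedin"]

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
counts = defaultdict(lambda: defaultdict(int))
for name in whitelist:
    for a, b in zip(name, name[1:]):
        counts[a][b] += 1

def log_likelihood(name):
    """Average log P(next char | char) under the smoothed bigram model."""
    total = 0.0
    for a, b in zip(name, name[1:]):
        row = counts[a]
        total += math.log((row[b] + 1) / (sum(row.values()) + len(ALPHABET)))
    return total / max(1, len(name) - 1)

benign = log_likelihood("booking")      # shares bigrams with whitelisted names
dga = log_likelihood("qxvzkwpj")        # random-looking generated name
print(round(benign, 2), round(dga, 2))
```

In an anomaly detection setting, one would threshold this score (calibrated on held-out whitelisted names) and flag low scorers as putative algorithmically generated domains, with no need for latency-producing auxiliary lookups.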
NASA Astrophysics Data System (ADS)
Egawa, K.; Furukawa, T.; Saeki, T.; Suzuki, K.; Narita, H.
2011-12-01
Natural gas hydrate-related sequences commonly yield unclear seismic images due to the bottom simulating reflector (BSR), a seismic indicator of the theoretical base of the gas hydrate stability zone, which usually hinders full analysis of the detailed sedimentary structures and seismic facies. Here we propose an alternative technique to predict the distributional pattern of gas hydrate-related deep-sea turbidites, with special reference to a Pleistocene forearc minibasin in the northeastern Nankai Trough area, off central Japan, from integrated 3D structural and sedimentologic modeling. Structural unfolding and stratigraphic backstripping successively modeled a simple horseshoe-shaped paleobathymetry of the targeted turbidite sequence. Based on best-fit matching of the net-to-gross ratio (or sand fraction) between the model and wells, subsequent turbidity current modeling on the restored paleobathymetric surface during a single flow event demonstrated excellent prediction results, showing the morphologically controlled turbidity current evolution and selective turbidite sand distribution within the modeled minibasin. In addition, multiple turbidity current modeling indicated stacking sheet turbidites with regression and proximal/distal onlaps in the minibasin due to reflections off an opposing slope, whose sedimentary features are consistent with the seismic interpretation. Such modeling can help us better understand the depositional pattern of gas hydrate-related, unconsolidated turbidites and can also improve gas hydrate reservoir characterization. This study was financially supported by the MH21 Research Consortium.
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
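A small worked example of the prior-to-posterior update described above, using grid-based numerical integration as a simple stand-in for the Monte Carlo methods mentioned: a Beta(2, 2) prior on a success probability combined with 7 successes in 10 trials, for which the exact posterior is Beta(9, 5) with mean 9/14.

```python
# Grid approximation of a Beta-Binomial posterior: evaluate the unnormalized
# posterior (prior kernel times binomial likelihood kernel) on a fine grid of
# theta values, then normalize to compute the posterior mean numerically.

def posterior_mean(a, b, successes, failures, n_grid=10_000):
    """Approximate E[theta | data] under a Beta(a, b) prior via midpoint rule."""
    grid = [(i + 0.5) / n_grid for i in range(n_grid)]
    post = [t ** (a - 1 + successes) * (1 - t) ** (b - 1 + failures) for t in grid]
    z = sum(post)                         # normalizing constant (up to grid spacing)
    return sum(t * p for t, p in zip(grid, post)) / z

mean = posterior_mean(a=2, b=2, successes=7, failures=3)
print(round(mean, 4))        # analytic answer for comparison: 9/14 ≈ 0.6429
```

The same recipe, with sampling in place of the grid, is the essence of the Monte Carlo posterior computation that made complex hierarchical models practical.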
Estimating the Fully Burdened Cost of Fuel Using an Input-Output Model - A Micro-Level Analysis
2011-09-01
The report applies an input-output (IO) model to evaluate an international supply chain, specifically for a multilocation production system, following the multilocation distribution model of Lu and Rencheng (2007). Figure 2 illustrates such a system, linking material vendors, production plants, and target markets.
Implementing a distributed intranet-based information system.
O'Kane, K C; McColligan, E E; Davis, G A
1996-11-01
The article discusses Internet and intranet technologies and describes how to install an intranet-based information system using the Merle language facility and other readily available components. Merle is a script language designed to support decentralized medical record information retrieval applications on the World Wide Web. The goal of this work is to provide a script language tool to facilitate construction of efficient, fully functional, multipoint medical record information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve, and analyze patient information. The language allows legacy MUMPS applications to function in a Web environment and to make use of the Web graphical, sound, and video presentation services. It also permits downloading of script applets for execution on client browsers, and it can be used in standalone mode with the Unix, Windows 95, Windows NT, and OS/2 operating systems.
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-06-01
A new anisotropic hr-adaptive mesh technique, based on a discontinuous Galerkin/control-volume discretization on unstructured meshes, has been applied to the modelling of multiscale transport phenomena. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the advantage of adapting the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes.
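The refine-where-the-solution-varies principle behind such adaptivity can be illustrated in one dimension. This toy performs isotropic h-refinement on a synthetic plume profile only; it is far simpler than the anisotropic, unstructured hr-adaptive scheme described above, and the profile, threshold, and sweep count are arbitrary choices.

```python
# 1D h-adaptivity sketch: split any cell whose solution jump across the cell
# exceeds a threshold, so small cells cluster around the sharp plume front.
import math

def concentration(x):          # synthetic plume profile with a sharp front at x = 0.5
    return 1.0 / (1.0 + math.exp(-80.0 * (x - 0.5)))

def refine(cells, threshold):
    """Split each cell whose endpoint concentration jump exceeds the threshold."""
    out = []
    for lo, hi in cells:
        if abs(concentration(hi) - concentration(lo)) > threshold:
            mid = 0.5 * (lo + hi)
            out += [(lo, mid), (mid, hi)]
        else:
            out.append((lo, hi))
    return out

cells = [(i / 10, (i + 1) / 10) for i in range(10)]   # uniform coarse mesh
for _ in range(3):                                    # three adaptation sweeps
    cells = refine(cells, threshold=0.05)

widths = [hi - lo for lo, hi in cells]
print(len(cells), min(widths))   # more cells overall; the smallest sit near x = 0.5
```

After three sweeps, the flat regions keep the coarse spacing while the front is resolved at an eighth of the original cell width, which is the resolution-follows-the-plume behavior the abstract claims for the full model.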
Health economics, equity, and efficiency: are we almost there?
Ferraz, Marcos Bosi
2015-01-01
Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health.
Kierkels, Roel G J; Wopken, Kim; Visser, Ruurd; Korevaar, Erik W; van der Schaaf, Arjen; Bijl, Hendrik P; Langendijk, Johannes A
2016-12-01
Radiotherapy of the head and neck is challenged by the relatively large number of organs at risk close to the tumor. Biologically oriented objective functions (OFs) could optimally distribute the dose among the organs at risk. We aimed to explore OFs based on multivariable normal tissue complication probability (NTCP) models for grade 2-4 dysphagia (DYS) and tube feeding dependence (TFD). One hundred head and neck cancer patients were studied. In addition to the clinical plan, two more plans (an OF-DYS plan and an OF-TFD plan) were optimized per patient. The NTCP models included up to four dose-volume parameters and other non-dosimetric factors. A fully automatic plan optimization framework was used to optimize the NTCP-based OF plans, all of which were reviewed and classified as clinically acceptable. On average, the differences in dose and NTCP were small when comparing the OF-DYS plan, the OF-TFD plan, and the clinical plan. For 5% of patients, NTCP(TFD) was reduced by more than 5% using OF-TFD-based planning compared with the OF-DYS plans. Plan optimization using NTCP(DYS)- and NTCP(TFD)-based objective functions resulted in clinically acceptable plans. For patients with considerable risk factors for TFD, the OF-TFD steered the optimizer towards dose distributions that led directly to slightly lower predicted NTCP(TFD) values compared with the other studied plans. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Jung, S. Y.; Sanandres, Luis A.; Vance, J. M.
1991-01-01
Measurements of pressure distributions and force coefficients were carried out in two types of squeeze film dampers executing a circular centered orbit, an open-ended configuration and a partially sealed one, in order to investigate the effects of fluid inertia and cavitation on pressure distributions and force coefficients. Dynamic pressure measurements were made for two orbit radii, ε = 0.5 and 0.8. It was found that the partially sealed configuration was less influenced by fluid inertia than the open-ended configuration.
An analytically soluble problem in fully nonlinear statistical gravitational lensing
NASA Technical Reports Server (NTRS)
Schneider, P.
1987-01-01
The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I^(-3) behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I^(-3) tail can be derived exactly for an arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.
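The asymptotic tail described above can be written compactly; the sketch below is a hedged restatement of the scaling only, where the amplitude a (which the paper derives exactly) depends on the stellar surface mass density and the large-scale shear:

```latex
p(I)\,dI \;\sim\; \frac{a}{I^{3}}\,dI , \qquad I \to \infty
```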
Lu, Dongmei; Wu, Chao; Li, Pengfei
2014-02-03
Boryl radicals have the potential for the development of new molecular entities and for application in new radical reactions. However, the effects of the substituents and coordinating Lewis bases on the reactivity of boryl radicals are not fully understood. By using first-principles methods, we investigated the spin-density distribution and reactivity of a series of boryl radicals with various substituents and Lewis bases. The substituents, along with the Lewis bases, only affect the radical reactivity when an unpaired electron is in the boron pz orbital, that is, for three-coordinate radicals. We found evidence of synergistic effects between the substituents and the Lewis bases that can substantially broaden the tunability of the reactivity of the boryl radicals. Among Lewis bases, pyridine and imidazol-2-ylidene show a similar capacity for stabilization by delocalizing the spin density. Electron-donating substituents, such as nitrogen, more efficiently stabilize boryl radicals than oxygen and carbon atoms. The reactivity of a boryl radical is always boron based, irrespective of the spin density on boron. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SenSyF Experience on Integration of EO Services in a Generic, Cloud-Based EO Exploitation Platform
NASA Astrophysics Data System (ADS)
Almeida, Nuno; Catarino, Nuno; Gutierrez, Antonio; Grosso, Nuno; Andrade, Joao; Caumont, Herve; Goncalves, Pedro; Villa, Guillermo; Mangin, Antoine; Serra, Romain; Johnsen, Harald; Grydeland, Tom; Emsley, Stephen; Jauch, Eduardo; Moreno, Jose; Ruiz, Antonio
2016-08-01
SenSyF is a cloud-based data processing framework for EO-based services. It has been a pioneer in addressing Big Data issues from the Earth Observation point of view, and is a precursor of several of the technologies and methodologies that will be deployed in ESA's Thematic Exploitation Platforms and other related systems. The SenSyF system focuses on fully automated data management, together with access to a processing and exploitation framework that includes Earth Observation specific tools. SenSyF is both a development and a validation platform for data-intensive applications using Earth Observation data. With SenSyF, scientific, institutional or commercial institutions developing EO-based applications and services can take advantage of distributed computational and storage resources tailored for applications that depend on big Earth Observation data, without resorting to deep infrastructure and technological investments. This paper describes the integration process and the experience gathered from different EO service providers during the project.
On developing the local research environment of the 1990s - The Space Station era
NASA Technical Reports Server (NTRS)
Chase, Robert; Ziel, Fred
1989-01-01
A requirements analysis for the Space Station's polar platform data system has been performed. Based upon this analysis, cluster, layered-cluster, and layered-modular implementations of one specific module within the Eos Data and Information System (EosDIS), an active database for satellite remote sensing research, have been developed. It is found that a distributed system based on a layered-modular architecture and employing current-generation workstation technologies has the requisite attributes ascribed by the remote sensing research community. However, based on benchmark testing, probabilistic analysis, failure analysis and user-survey technique analysis, it is found that this architecture presents some operational shortcomings that will not be alleviated by new hardware or software developments. Consequently, the potential of a fully modular layered architectural design for meeting the needs of Eos researchers has also been evaluated, concluding that it would be well suited to the evolving requirements of this multidisciplinary research community.
Near-field interaction between domain walls in adjacent permalloy nanowires
NASA Astrophysics Data System (ADS)
O'Brien, Liam
2010-03-01
A domain wall (DW) moving in a ferromagnetic nanowire may interact with the stray field from another DW travelling in an adjacent wire. This could greatly impact the operation of proposed DW-based data storage schemes which rely on the controlled propagation of DWs in densely packed nanowires [1, 2]. Here we experimentally study the interaction between two DWs travelling in adjacent Permalloy nanowires [3]. We find that the interaction causes significant pinning, with measured pinning fields of up to 93 Oe (~5 times the intrinsic pinning field of an isolated wire) for the smallest separations. We present an analysis of the observed pinning field dependence on wire separation in terms of the full magnetostatic charge distribution within a DW. By considering an isolated DW, and accounting for finite temperature, it is possible to fully reproduce the experimentally observed dependence. This suggests that the DW internal structure is not appreciably perturbed by the interaction and so remains rigid, consistent with a finite-sized quasi-particle description [4]. The full charge distribution must be considered in understanding these near-field interactions, as other models based on simpler descriptions of the charge distribution within the DW, including a point-like distribution, cannot reproduce the observed dependence. Finally, we develop the idea of using localized stray fields to pin a DW and show how specific potential landscapes can be created by tailoring a pinning charge distribution, with the added advantage that neither DW internal structure nor nanowire geometry is appreciably perturbed. [1] Allwood, Cowburn et al. Science 309, 1688 (2005). [2] S. S. Parkin, Science 320, 190 (2008). [3] O'Brien, Cowburn et al. Phys. Rev. Lett. 103, 7, 077206 (2009). [4] Saitoh, Miyajima et al. Nature 432, 203 (2004).
Subcellular controls of mercury trophic transfer to a marine fish.
Dang, Fei; Wang, Wen-Xiong
2010-09-15
Different behaviors of inorganic mercury [Hg(II)] and methylmercury (MeHg) during trophic transfer along the marine food chain have been widely reported, but the mechanisms are not fully understood. The bioavailability of ingested mercury, quantified by assimilation efficiency (AE), was investigated in a marine fish, the grunt Terapon jarbua, based on mercury subcellular partitioning in prey and purified subcellular fractions of prey tissues. The subcellular distribution of Hg(II) differed substantially among prey types, with cellular debris being a major (49-57% in bivalves) or secondary (14-19% in other prey) binding pool. However, MeHg distribution varied little among prey types, with most MeHg (43-79%) in the heat-stable protein (HSP) fraction. The greater AEs measured for MeHg (90-94%) than for Hg(II) (23-43%) confirmed the findings of previous studies. The bioavailability of each purified subcellular fraction, rather than the proposed trophically available metal (TAM) fraction, could better elucidate differences in mercury assimilation. Hg(II) associated with the insoluble fraction (e.g. cellular debris) was less bioavailable than that in the soluble fraction (e.g. HSP). However, subcellular distribution was shown to be less important for MeHg, with each fraction having comparable MeHg bioavailability. Subcellular distribution in prey should be an important consideration in mercury trophic transfer studies. 2010 Elsevier B.V. All rights reserved.
Yoshitomi, Munetake; Ohta, Keisuke; Kanazawa, Tomonoshin; Togo, Akinobu; Hirashima, Shingo; Uemura, Kei-Ichiro; Okayama, Satoko; Morioka, Motohiro; Nakamura, Kei-Ichiro
2016-10-31
Endocrine and endothelial cells of the anterior pituitary gland frequently make close appositions or contacts, and the secretory granules of each endocrine cell tend to accumulate at the perivascular regions, which is generally considered to facilitate the secretory functions of these cells. However, the three-dimensional relationships between the localization pattern of secretory granules and blood vessels are not fully understood. To define and characterize these spatial relationships, we used a three-dimensional reconstruction method based on focused ion beam slicing and scanning electron microscopy (FIB/SEM). Full three-dimensional cellular architectures of the anterior pituitary tissue at ultrastructural resolution revealed that about 70% of endocrine cells were in apposition to endothelial cells, while almost 30% were entirely isolated from the perivascular space in the tissue. Our three-dimensional analyses also visualized the distribution pattern of secretory granules in individual endocrine cells, showing in many cases an accumulation of secretory granules in regions in close apposition to blood vessels. However, secretory granules in cells isolated from the perivascular region tended to be distributed uniformly in the cytoplasm. These data suggest that cellular interactions between endocrine and endothelial cells promote an uneven cytoplasmic distribution of secretory granules.
Electrode Slurry Particle Density Mapping Using X-ray Radiography
Higa, Kenneth; Zhao, Hui; Parkinson, Dilworth Y.; ...
2017-01-05
The internal structure of a porous electrode strongly influences battery performance. Understanding the dynamics of electrode slurry drying could aid in engineering electrodes with desired properties. For instance, one might monitor the dynamic, spatially-varying thickness near the edge of a slurry coating, as it should lead to non-uniform thickness of the dried film. This work examines the dynamic behavior of drying slurry drops consisting of SiOx and carbon black particles in a solution of carboxymethylcellulose and deionized water, as an experimental model of drying behavior near the edge of a slurry coating. An X-ray radiography-based procedure is developed to calculate the evolving spatial distribution of active material particles from images of the drying slurry drops. To the authors' knowledge, this study is the first to use radiography to investigate battery slurry drying, as well as the first to determine particle distributions from radiography images of drying suspensions. The dynamic results are consistent with tomography reconstructions of the static, fully-dried films. It is found that active material particles can rapidly become non-uniformly distributed within the drops. Heating can promote distribution uniformity, but seemingly must be applied very soon after slurry deposition. Higher slurry viscosity is found to strongly restrain particle redistribution.
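The core inversion in radiography-based density mapping converts transmitted intensity into projected particle mass. A minimal sketch of that step via the Beer-Lambert law follows; the attenuation coefficient and intensity values are illustrative assumptions, not the study's calibration:

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-mu * m), where m is the projected mass
# thickness (g/cm^2) of the attenuating particles along the beam path.
MU = 4.0  # cm^2/g: hypothetical mass attenuation coefficient

def projected_mass_thickness(I, I0, mu=MU):
    """Invert Beer-Lambert to map radiograph intensities to mass thickness."""
    return -np.log(np.asarray(I, dtype=float) / I0) / mu

# Toy 2x2 radiograph with a flat-field intensity of 1000 counts
radiograph = np.array([[1000.0, 800.0],
                       [600.0, 1000.0]])
rho_t = projected_mass_thickness(radiograph, I0=1000.0)
```

Darker (more attenuated) pixels map to larger projected mass, so time series of such maps can track particle redistribution during drying.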
Differentiated protection method in passive optical networks based on OPEX
NASA Astrophysics Data System (ADS)
Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2011-12-01
Reliable service delivery is becoming more significant due to society's increased dependency on electronic services. As the capability of PONs increases, both residential and business customers may be served by a single PON. Meanwhile, operational expenditure (OPEX) has been shown to be a very important component of the total cost for a telecommunication operator. Thus, in this paper, we present a partial-protection PON architecture and compare the OPEX of fully duplicated and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.
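The OPEX trade-off between full and partial duplication can be illustrated with a toy cost model; every rate and price below is a hypothetical placeholder, not a figure from the paper:

```python
# Toy annual OPEX model (illustrative assumptions only):
# OPEX = fiber maintenance cost + expected penalty for downtime.
# An unprotected span's expected downtime ~ failure rate * length * MTTR.

def annual_opex(fiber_km, protected, *, fail_per_km_yr=0.005,
                mttr_h=8.0, maint_per_km=50.0, penalty_per_h=200.0):
    # Full duplication doubles the fiber plant to maintain...
    maintenance = maint_per_km * fiber_km * (2.0 if protected else 1.0)
    # ...but eliminates single-failure downtime (idealized assumption).
    downtime_h = 0.0 if protected else fail_per_km_yr * fiber_km * mttr_h
    return maintenance + penalty_per_h * downtime_h

full = annual_opex(20.0, protected=True)
partial = annual_opex(20.0, protected=False)
```

With these invented numbers the unprotected option is cheaper; raising the penalty cost per hour or the fiber length flips the comparison, which is the kind of differentiation the paper's method exploits.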
3D elemental sensitive imaging using transmission X-ray microscopy.
Liu, Yijin; Meirer, Florian; Wang, Junyue; Requena, Guillermo; Williams, Phillip; Nelson, Johanna; Mehta, Apurva; Andrews, Joy C; Pianetta, Piero
2012-09-01
Determination of the heterogeneous distribution of metals in alloy, battery, catalyst and biological materials is critical to fully characterize and/or evaluate the functionality of the materials. Using synchrotron-based transmission X-ray microscopy (TXM), it is now feasible to perform nanoscale-resolution imaging over a wide X-ray energy range covering the absorption edges of many elements, combining element-sensitive imaging with determination of sample morphology. We present an efficient and reliable methodology to perform 3D element-sensitive imaging with excellent sample penetration (tens of microns) using hard X-ray TXM. A sample of an Al-Si piston alloy is used to demonstrate the capability of the proposed method.
NASA Astrophysics Data System (ADS)
Silaev, M. A.
2018-06-01
We develop a theory based on the formalism of quasiclassical Green's functions to study the spin dynamics in superfluid ^3He. First, we derive kinetic equations for the spin-dependent distribution function in the bulk superfluid reproducing the results obtained earlier without quasiclassical approximation. Then, we consider spin dynamics near the surface of fully gapped ^3He-B-phase taking into account spin relaxation due to the transitions in the spectrum of localized fermionic states. The lifetimes of longitudinal and transverse spin waves are calculated taking into account the Fermi-liquid corrections which lead to a crucial modification of fermionic spectrum and spin responses.
United States Transuranium and Uranium Registries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kathren, R.L.; Filipy, R.E.; Dietert, S.E.
1991-06-01
This report summarizes the primary scientific activities of the United States Transuranium and Uranium Registries for the period October 1, 1989 through September 30, 1990. The Registries are parallel human tissue research programs devoted to the study of the actinide elements in humans. To date there have been 261 autopsy or surgical specimen donations, which include 11 whole bodies. The emphasis of the Registry was directed towards quality improvement and the development of a fully computerized data base that would incorporate not only the results of postmortem radiochemical analysis, but also medical and monitoring information obtained during life. Human subjects reviews were also completed. A three-compartment biokinetic model for plutonium distribution is proposed. 2 tabs.
Zou, Weiwen; Jiang, Wenning; Chen, Jianping
2013-03-11
This paper demonstrates stimulated Brillouin scattering (SBS) characterization in silica optical fiber tapers drawn from commercial single-mode optical fibers by hydrogen flame. The tapers have different waist diameters, ranging from 5 μm to 42 μm. Fully distributed SBS measurement along the fiber tapers is implemented by the Brillouin optical correlation domain analysis technique with millimeter spatial resolution. It is found that the Brillouin frequency shift (BFS) in the waist of all fiber tapers is approximately the same (~11.17 GHz at 1550 nm). However, the BFS is gradually reduced and the Brillouin gain decreases from the waist to the untapered zone in each fiber taper.
Space power distribution system technology. Volume 1: Reference EPS design
NASA Technical Reports Server (NTRS)
Decker, D. K.; Cannady, M. D.; Cassinelli, J. E.; Farber, B. F.; Lurie, C.; Fleck, G. W.; Lepisto, J. W.; Massner, A.; Ritterman, P. F.
1983-01-01
The multihundred-kilowatt electrical power aspects of a mannable space platform in low Earth orbit are analyzed from a cost and technology viewpoint. At the projected orbital altitudes, Shuttle launch and servicing are technically and economically viable. Power generation is specified as photovoltaic, consistent with projected planning. The cost models and trades are based upon a zero interest rate (the government taxes concurrently as required), constant dollars (1980), and costs derived in the first half of 1980. Space platform utilization of up to 30 years is evaluated to fully understand the impact of resupply and replacement as satellite missions are extended. Such lifetimes are potentially realizable with Shuttle servicing capability and are economically desirable.
Multi-frequency metasurface carpet cloaks.
Wang, Chan; Yang, Yihao; Liu, Qianghu; Liang, Dachuan; Zheng, Bin; Chen, Hongsheng; Xu, Zhiwei; Wang, Huaping
2018-05-28
Metasurfaces provide an alternative way to design three-dimensional arbitrary-shaped carpet cloaks with ultrathin thicknesses. Nevertheless, previous metasurface carpet cloaks work only at a single frequency. To overcome this challenge, we propose a macroscopic metasurface carpet cloak. The cloak is designed with a metasurface of a few layers that exhibit a special spatial distribution of conductance and inductance in the unit cell; therefore, it can fully control the reflection phases at several independent frequencies simultaneously. Because of this, the present metasurface cloak can work at dual frequencies based on a multi-resonance principle. The proposed design methodology will be very useful for future broadband macroscopic cloak designs with low profiles, light weights, and easy access.
GridWise Standards Mapping Overview
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosquet, Mia L.
"GridWise" is a concept of how advanced communications, information and control technology can transform the nation's energy system, across the spectrum from large-scale central generation to common consumer appliances and equipment, into a collaborative network rich in the exchange of decision-making information and an abundance of market-based opportunities (Widergren and Bosquet 2003), bringing the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts related to GridWise: those which could ultimately contribute significantly to advancements toward the GridWise vision, or those which represent today's technological basis upon which this vision must build.
NASA Astrophysics Data System (ADS)
Shiokawa, Hotaka; Gammie, C. F.; Dolence, J.; Noble, S. C.
2013-01-01
We perform global general relativistic magnetohydrodynamics (GRMHD) simulations of non-radiative, magnetized disks that are initially tilted with respect to the black hole's spin axis. We run the simulations for different torus sizes and tilt angles at two different resolutions. We also perform radiative transfer using a Monte Carlo based code that includes synchrotron emission, absorption and Compton scattering to obtain spectral energy distributions and light curves. Similar work was done by Fragile et al. (2007) and Dexter & Fragile (2012) to model the supermassive black hole Sgr A* with tilted accretion disks. We compare the results and spectra (including X-rays) of our fully conservative hydrodynamic code with theirs.
A "total parameter estimation" method in the verification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, hence model parameters are commonly estimated based on discharge measurements at the catchment outlet. With advancements in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrological science. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, it gives a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August each year. During other months, many of the perennial rivers within the basin dry up. Thus, runoff simulation alone does not fully utilize a distributed hydrological model in arid and semi-arid regions.
This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in hydrological science.
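A multi-variable objective of the kind proposed above can be sketched as a weighted score over several water cycle variables. The sketch below uses the Nash-Sutcliffe efficiency as the per-variable metric; this is a minimal illustration under that assumption, not the authors' actual formulation:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <0 is worse than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def total_objective(simulated, observed, weights=None):
    """Weighted multi-variable score over all observed water cycle variables."""
    keys = sorted(observed)
    if weights is None:                      # default: equal weights
        weights = {k: 1.0 / len(keys) for k in keys}
    return sum(weights[k] * nse(simulated[k], observed[k]) for k in keys)

# Toy observations for four water cycle variables (hypothetical values)
obs = {"runoff": [1, 3, 2, 4], "et": [2, 2, 3, 3],
       "groundwater": [10, 9, 9, 8], "soil_water": [0.2, 0.25, 0.22, 0.3]}
sim_perfect = {k: list(v) for k, v in obs.items()}
score = total_objective(sim_perfect, obs)
```

A calibration that scores well only on runoff but poorly on evapotranspiration or soil water is penalized by the combined score, which is the essence of the "total" verification idea.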
Coupled nonlinear aeroelasticity and flight dynamics of fully flexible aircraft
NASA Astrophysics Data System (ADS)
Su, Weihua
This dissertation introduces an approach to effectively model and analyze the coupled nonlinear aeroelasticity and flight dynamics of highly flexible aircraft. A reduced-order, nonlinear, strain-based finite element framework is used, which is capable of assessing the fundamental impact of structural nonlinear effects in preliminary vehicle design and control synthesis. The cross-sectional stiffness and inertia properties of the wings are calculated along the wing span, and then incorporated into the one-dimensional nonlinear beam formulation. Finite-state unsteady subsonic aerodynamics is used to compute airloads along lifting surfaces. Flight dynamic equations are then introduced to complete the aeroelastic/flight dynamic system equations of motion. Instead of merely considering the flexibility of the wings, the current work allows all members of the vehicle to be flexible. Due to their characteristics of being slender structures, the wings, tail, and fuselage of highly flexible aircraft can be modeled as beams undergoing three dimensional displacements and rotations. New kinematic relationships are developed to handle the split beam systems, such that fully flexible vehicles can be effectively modeled within the existing framework. Different aircraft configurations are modeled and studied, including Single-Wing, Joined-Wing, Blended-Wing-Body, and Flying-Wing configurations. The Lagrange Multiplier Method is applied to model the nodal displacement constraints at the joint locations. Based on the proposed models, roll response and stability studies are conducted on fully flexible and rigidized models. The impacts of the flexibility of different vehicle members on flutter with rigid body motion constraints, flutter in free flight condition, and roll maneuver performance are presented. Also, the static stability of the compressive member of the Joined-Wing configuration is studied. 
A spatially-distributed discrete gust model is incorporated into the time simulation of the framework. Gust responses of the Flying-Wing configuration subject to stall effects are investigated. A bilinear torsional stiffness model is introduced to study the skin wrinkling due to large bending curvature of the Flying-Wing. The numerical studies illustrate the improvements of the existing reduced-order formulation with new capabilities of both structural modeling and coupled aeroelastic and flight dynamic analysis of fully flexible aircraft.
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
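The Bayesian fitting idea behind 2DBAT can be shown in miniature with a random-walk Metropolis sampler applied to a one-dimensional arctan rotation-curve model. This is a hedged stand-in for the full 2D tilted-ring machinery, with all parameter values invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D rotation-curve model standing in for a 2D tilted-ring
# model: v(r) = v_max * (2/pi) * arctan(r / r_t).
def v_model(r, v_max, r_t):
    return v_max * (2.0 / np.pi) * np.arctan(r / r_t)

r = np.linspace(0.5, 10.0, 25)
v_obs = v_model(r, 150.0, 2.0) + rng.normal(0.0, 5.0, r.size)  # noisy "data"

def log_post(theta):
    """Log-posterior: Gaussian likelihood with broad uniform priors."""
    v_max, r_t = theta
    if not (0.0 < v_max < 500.0 and 0.0 < r_t < 20.0):
        return -np.inf
    resid = v_obs - v_model(r, v_max, r_t)
    return -0.5 * np.sum(resid ** 2) / 5.0 ** 2

# Random-walk Metropolis: propose, then accept with prob min(1, exp(dlogp))
theta = np.array([100.0, 5.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [3.0, 0.2])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])          # discard burn-in
v_max_est, r_t_est = post.mean(axis=0)
```

Because only broad prior ranges are needed, the whole fit runs unattended, which mirrors why the MCMC approach lends itself to the fully automated pipelines mentioned above.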
Young, Meggie N; Bleiholder, Christian
2017-04-01
Structure elucidation by ion mobility spectrometry-mass spectrometry (IMS-MS) methods is based on the comparison of an experimentally measured momentum transfer cross-section to cross-sections calculated for model structures. Thus, it is imperative that the calculated cross-section be accurate. However, it is not fully understood how important it is to accurately model the charge distribution of an analyte ion when calculating momentum transfer cross-sections. Here, we calculate and compare momentum transfer cross-sections for carbon clusters that differ in mass, charge state, and mode of charge distribution, and vary the temperature and polarizability of the buffer gas. Our data indicate that the detailed distribution of the ion charge density is intimately linked to the contribution of glancing collisions to the momentum transfer cross-section. The data suggest that analyte ions with molecular mass ~3 kDa or momentum transfer cross-section 400-500 Å² would be significantly influenced by the charge distribution in nitrogen buffer gas. Our data further suggest that accurate structure elucidation on the basis of IMS-MS data measured in nitrogen buffer gas must account for the molecular charge distribution even for systems as large as C960 (~12 kDa) when localized charges are present and/or measurements are conducted under cryogenic temperatures. Finally, our data underscore that accurate structure elucidation is unlikely if ion mobility data recorded in one buffer gas are converted into other buffer gases when the electronic properties of the buffer gases differ.
Fine tuning of transmission features in nanoporous anodic alumina distributed Bragg reflectors
NASA Astrophysics Data System (ADS)
Lim, Siew Yee; Law, Cheryl Suwen; Santos, Abel
2018-01-01
This study introduces an innovative apodisation strategy to tune the filtering features of distributed Bragg reflectors based on nanoporous anodic alumina (NAA-DBRs). The effective medium of NAA-DBRs, which is modulated in a stepwise fashion by a pulse-like anodisation approach, is apodised following a logarithmic negative function to engineer the transmission features of NAA-DBRs. We investigate the effect of various apodisation parameters such as apodisation amplitude difference, anodisation period, current density offset and pore widening time, to tune and optimise the optical properties of NAA-DBRs in terms of central wavelength position, full width at half maximum and quality of photonic stop band. The transmission features of NAA-DBRs are shown to be fully controllable with precision across the spectral regions by means of the apodisation parameters. Our study demonstrates that an apodisation strategy can significantly narrow the width and enhance the quality of the characteristic photonic stop band of NAA-DBRs. This rationally designed anodisation approach based on the combination of apodisation and stepwise pulse anodisation enables the development of optical filters with tuneable filtering features to be integrated into optical technologies acting as essential photonic elements in devices such as optical sensors and biosensors.
Investigating accident causation through information network modelling.
Griffin, T G C; Young, M S; Stanton, N A
2010-02-01
Management of risk in complex domains such as aviation relies heavily on post-event investigations, requiring complex approaches to fully understand the integration of multi-causal, multi-agent and multi-linear accident sequences. The Event Analysis of Systemic Teamwork methodology (EAST; Stanton et al. 2008) offers such an approach based on network models. In this paper, we apply EAST to a well-known aviation accident case study, highlighting communication between agents as a central theme and investigating the potential for finding agents who were key to the accident. Ultimately, this work aims to develop a new model based on distributed situation awareness (DSA) to demonstrate that the risk inherent in a complex system is dependent on the information flowing within it. By identifying key agents and information elements, we can propose proactive design strategies to optimize the flow of information and help work towards avoiding aviation accidents. Statement of Relevance: This paper introduces a novel application of a holistic methodology for understanding aviation accidents. Furthermore, it introduces an ongoing project developing a nonlinear and prospective method that centralises distributed situation awareness and communication as themes. The relevance of the findings is discussed in the context of current ergonomic and aviation issues of design, training and human-system interaction.
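The step of identifying key agents from a communication network, central to the EAST analysis described above, can be sketched with a simple degree-centrality count; the agents and messages below are purely illustrative, not from the case study:

```python
from collections import Counter

# Hypothetical communication log between agents during an incident;
# each tuple is one directed message (sender, receiver).
messages = [
    ("captain", "first_officer"), ("first_officer", "captain"),
    ("captain", "atc"), ("atc", "captain"),
    ("captain", "ground"), ("atc", "ground"),
    ("first_officer", "atc"),
]

# Degree centrality: messages sent plus messages received per agent
degree = Counter()
for sender, receiver in messages:
    degree[sender] += 1
    degree[receiver] += 1

key_agent, _ = degree.most_common(1)[0]   # most connected agent
```

Richer network metrics (betweenness, information-element overlap) would refine this, but even the degree count flags the agents through whom most information flows.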
Ultraviolet refractometry using field-based light scattering spectroscopy
Fu, Dan; Choi, Wonshik; Sung, Yongjin; Oh, Seungeun; Yaqoob, Zahid; Park, YongKeun; Dasari, Ramachandra R.; Feld, Michael S.
2010-01-01
Accurate refractive index measurement in the deep ultraviolet (UV) range is important for the separate quantification of biomolecules such as proteins and DNA in biology. This task is demanding and has not been fully explored to date. Here we report a new method of measuring refractive index using field-based light scattering spectroscopy, which is applicable to any wavelength range and suitable for both solutions and homogeneous objects with well-defined shapes such as microspheres. The angular scattering distribution of single microspheres immersed in homogeneous media is measured over the wavelength range 260 to 315 nm using quantitative phase microscopy. By least-squares fitting the observed scattering distribution to Mie scattering theory, the refractive index of either the sphere or the immersion medium can be determined provided that the other is known a priori. Using this method, we have measured the refractive index dispersion of SiO2 spheres and bovine serum albumin (BSA) solutions in the deep UV region. Specific refractive index increments of BSA are also extracted. The typical accuracy of the present refractive index technique is ≤0.003. The precision of refractive index measurements is ≤0.002 and that of specific refractive index increment determination is ≤0.01 mL/g. PMID:20372622
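The least-squares recovery of a refractive index from an angular scattering distribution can be sketched as below. The scattering function here is a toy placeholder, since a real implementation would evaluate the full Mie series:

```python
def toy_scattering(n_rel, angles):
    """Stand-in for a Mie solver: angular intensity for a sphere with
    relative refractive index n_rel. A real implementation would
    evaluate the Mie series; this analytic form is a placeholder."""
    return [1.0 / (1.0 + n_rel * a * a) for a in angles]

def fit_index(measured, angles, lo=1.0, hi=2.0, steps=2000):
    """Least-squares grid search for the index that best reproduces a
    measured angular scattering distribution."""
    best_n, best_sse = lo, float("inf")
    for i in range(steps + 1):
        n = lo + (hi - lo) * i / steps
        model = toy_scattering(n, angles)
        sse = sum((m - y) ** 2 for m, y in zip(measured, model))
        if sse < best_sse:
            best_n, best_sse = n, sse
    return best_n

angles = [0.1 * k for k in range(1, 30)]
measured = toy_scattering(1.46, angles)  # synthetic "data" at n = 1.46
```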
NASA Astrophysics Data System (ADS)
Ipsen, Andreas; Ebbels, Timothy M. D.
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
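A minimal sketch of the idea behind TDC saturation correction, using the textbook single-ion dead-time model; the article's probabilistic treatment is considerably more detailed:

```python
import math

def tdc_correct(detected, pushes):
    """First-order TDC saturation correction: if each extraction
    ('push') can register at most one ion, the detected fraction p
    relates to the true mean arrival rate lam by p = 1 - exp(-lam),
    assuming Poisson ion arrivals. This is the textbook correction,
    not the article's full model."""
    p = detected / pushes
    if p >= 1.0:
        raise ValueError("detector fully saturated; correction undefined")
    lam = -math.log(1.0 - p)  # corrected mean ions per push
    return lam * pushes       # corrected total count
```

At low count rates the correction is negligible, but near saturation the corrected count diverges from the detected one, which is exactly where the effective dynamic range is lost.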
Transformation of HDF-EOS metadata from the ECS model to ISO 19115-based XML
NASA Astrophysics Data System (ADS)
Wei, Yaxing; Di, Liping; Zhao, Baohua; Liao, Guangxuan; Chen, Aijun
2007-02-01
Nowadays, geographic data, such as NASA's Earth Observation System (EOS) data, are playing an increasing role in many areas, including academic research, government decisions and even people's everyday lives. As the quantity of geographic data becomes increasingly large, a major problem is how to make full use of such data in a distributed, heterogeneous network environment. In order for a user to effectively discover and retrieve the specific information that is useful, the geographic metadata should be described and managed properly. Fortunately, the emergence of XML and Web Services technologies greatly promotes information distribution across the Internet. The research effort discussed in this paper presents a method and its implementation for transforming Hierarchical Data Format (HDF)-EOS metadata from the NASA ECS model to ISO 19115-based XML, which will be managed by the Open Geospatial Consortium (OGC) Catalogue Services—Web Profile (CSW). Using XML and international standards rather than domain-specific models to describe the metadata of those HDF-EOS data, and further using CSW to manage the metadata, allows metadata information to be searched and interchanged more widely and easily, thus promoting the sharing of HDF-EOS data.
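The metadata transformation can be illustrated with a toy crosswalk. The element names below are simplified stand-ins for the full ISO 19115 (gmd/gco) schema, and the field values are hypothetical:

```python
import xml.etree.ElementTree as ET

def ecs_to_iso(ecs):
    """Toy crosswalk from an ECS-style metadata dictionary to a
    simplified ISO 19115-like XML tree. Real transformations map many
    more fields and use the full gmd/gco namespaces."""
    root = ET.Element("MD_Metadata")
    ident = ET.SubElement(root, "identificationInfo")
    ET.SubElement(ident, "title").text = ecs.get("ShortName", "")
    ET.SubElement(ident, "abstract").text = ecs.get("Description", "")
    ET.SubElement(root, "dateStamp").text = ecs.get("LastUpdate", "")
    return ET.tostring(root, encoding="unicode")

# hypothetical ECS record
xml_out = ecs_to_iso({"ShortName": "MOD021KM",
                      "Description": "MODIS L1B calibrated radiances",
                      "LastUpdate": "2007-02-01"})
```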
Analytical Study of the Mechanical Behavior of Fully Grouted Bolts in Bedding Rock Slopes
NASA Astrophysics Data System (ADS)
Liu, C. H.; Li, Y. Z.
2017-09-01
Bolting is widely used as a reinforcement means for rock slopes. The support force of a fully grouted bolt is often provided by the combination of the axial and shear forces acting at the cross section of the bolt, especially for bedding rock slopes. In this paper, the load distribution and deformation behavior of the deflecting section of a fully grouted bolt were analyzed, and a structural mechanical model was established. Based on force-method equations and deformation compatibility relationships, an analytical approach was developed that describes the contribution of the axial and shear forces acting at the intersection between the bolt and the joint plane to the stability of a rock slope. The influence of the inclination of the bolt to the joint plane was discussed. Laboratory tests were conducted with different inclinations of the bolt to the joint plane. Comparisons were made between the proposed approach, the experimental data and a design-code method. The calculation results are in good agreement with the test data. It is shown that transverse shear resistance plays a significant role in the bolting contribution and that the larger the dip of the bolt to the joint plane, the more significant the dowel effect. It is also shown that the design method suggested in the code overestimates the resistance of the bolt. The proposed model, which considers the dowel effect, provides a more precise description of the bolting properties of bedding rock slopes than the code method and will be helpful in improving bolting design methods.
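The combined axial and frictional contribution of an inclined bolt can be sketched with the standard textbook decomposition; this is not the paper's full force-method solution, and the symbols are illustrative:

```python
import math

def bolt_contribution(T, theta_deg, phi_deg):
    """Textbook decomposition of the stabilising contribution of a bolt
    tensioned to T and inclined at theta to the joint plane, with joint
    friction angle phi: the component along the plane resists sliding
    directly, and the normal component adds frictional resistance.
    (Not the paper's full force-method solution.)"""
    th = math.radians(theta_deg)
    ph = math.radians(phi_deg)
    along_plane = T * math.cos(th)               # direct resistance
    friction = T * math.sin(th) * math.tan(ph)   # frictional clamp term
    return along_plane + friction
```

Even this simple decomposition shows why inclination matters: for a nonzero friction angle, an inclined bolt can resist more than its bare tension.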
Examining the Role of Environment in a Comprehensive Sample of Compact Groups
NASA Astrophysics Data System (ADS)
Walker, Lisa May; Johnson, Kelsey E.; Gallagher, Sarah C.; Charlton, Jane C.; Hornschemeier, Ann E.; Hibbard, John E.
2012-03-01
Compact groups, with their high number densities, small velocity dispersions, and an interstellar medium that has not been fully processed, provide a local analog to conditions of galaxy interactions in the earlier universe. The frequent and prolonged gravitational encounters that occur in compact groups affect the evolution of the constituent galaxies in a myriad of ways, for example, gas processing and star formation. Recently, a statistically significant "gap" has been discovered in the mid-infrared (MIR: 3.6-8 μm) IRAC color space of compact group galaxies. This gap is not seen in field samples and is a new example of how the compact group environment may affect the evolution of member galaxies. In order to investigate the origin and nature of this gap, we have compiled a larger sample of 37 compact groups in addition to the original 12 groups studied by Johnson et al. (yielding 174 individual galaxies with reliable MIR photometry). We find that a statistically significant deficit of galaxies in this gap region of IRAC color space is persistent in the full sample, lending support to the hypothesis that the compact group environment inhibits moderate specific star formation rates. Using this expanded sample, we have more fully characterized the distribution of galaxies in this color space and quantified the low-density region more fully with respect to MIR bluer and MIR redder colors. We note a curvature in the color-space distribution, which is fully consistent with increasing dust temperature as the activity in a galaxy increases. This full sample of 49 compact groups allows us to subdivide the data according to physical properties of the groups. An analysis of these subsamples indicates that neither projected physical diameter nor density shows a trend in color space within the values represented by this sample. 
We hypothesize that the apparent lack of a trend is due to the relatively small range of properties in this sample, whose groups have already been pre-selected to be compact and dense. Thus, the relative influence of stochastic effects (such as the particular distribution and amount of star formation in individual galaxies) becomes dominant. We analyze spectral energy distributions of member galaxies as a function of their location in color space and find that galaxies in different regions of MIR color space contain dust with varying temperatures and/or polycyclic aromatic hydrocarbon emission.
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
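The first-order exchange (surface-conductance) coupling can be sketched for a single cell as follows; `k_exchange` lumps interface properties that, as the abstract notes, cannot be measured directly:

```python
def exchange_flux(h_surface, h_subsurface, k_exchange):
    """First-order (surface-conductance) coupling: flux across the
    surface-subsurface interface is proportional to the head
    difference; k_exchange lumps interface properties that cannot be
    measured directly."""
    return k_exchange * (h_surface - h_subsurface)

def step(h_s, h_g, k, dt, storage_s=1.0, storage_g=1.0):
    """One explicit coupling step for a single cell (illustrative;
    a real model solves surface and subsurface PDEs each step)."""
    q = exchange_flux(h_s, h_g, k)  # positive q = infiltration
    return h_s - q * dt / storage_s, h_g + q * dt / storage_g
```

A very large `k` enforces near-continuity of heads (approaching the continuity-of-pressure approach), while `k = 0` intentionally decouples the two domains, mirroring the design space the abstract describes.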
The Java Image Science Toolkit (JIST) for rapid prototyping and publishing of neuroimaging software.
Lucas, Blake C; Bogovic, John A; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L; Pham, Dzung L; Landman, Bennett A
2010-03-01
Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC).
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods have yet to be fully applied in a groundwater modeling context.
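A data-driven surrogate in its simplest form: sample the expensive model once, then interpolate. The sketch below uses linear interpolation on one input, whereas the approaches reviewed use far richer emulators over many dimensions:

```python
import bisect

def build_surrogate(model, grid):
    """Minimal data-driven surrogate: sample the expensive model once
    on a sorted grid, then answer queries by linear interpolation.
    Reviewed approaches use far richer emulators (polynomials,
    Gaussian processes, neural networks, etc.)."""
    ys = [model(x) for x in grid]  # the only expensive evaluations
    def surrogate(x):
        i = bisect.bisect_left(grid, x)
        if i <= 0:
            return ys[0]
        if i >= len(grid):
            return ys[-1]
        x0, x1 = grid[i - 1], grid[i]
        t = (x - x0) / (x1 - x0)
        return ys[i - 1] * (1 - t) + ys[i] * t
    return surrogate

# stand-in for a long-running groundwater model
expensive_model = lambda x: x ** 3 - 2 * x + 1
grid = [i / 100 for i in range(201)]          # 201 samples on [0, 2]
surrogate = build_surrogate(expensive_model, grid)
```

Once built, every surrogate query costs a table lookup rather than a full model run, which is the entire point for calibration and uncertainty analysis loops.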
Nonlinear Evolution of Azimuthally Compact Crossflow-Vortex Packet over a Yawed Cone
NASA Astrophysics Data System (ADS)
Choudhari, Meelan; Li, Fei; Paredes, Pedro; Duan, Lian; NASA Langley Research Center Team; Missouri University of Science and Technology Team
2017-11-01
Hypersonic boundary-layer flows over a circular cone at moderate incidence angle can support strong crossflow instability and, therefore, a likely scenario for laminar-turbulent transition in such flows corresponds to rapid amplification of high-frequency secondary instabilities sustained by finite amplitude stationary crossflow vortices. Direct numerical simulations (DNS) are used to investigate the nonlinear evolution of azimuthally compact crossflow vortex packets over a 7-degree half-angle, yawed circular cone in a Mach 6 free stream. Simulation results indicate that the azimuthal distribution of forcing has a strong influence on the stationary crossflow amplitudes; however, the vortex trajectories are nearly the same for both periodic and localized roughness height distributions. The frequency range, mode shapes, and amplification characteristics of strongly amplified secondary instabilities in the DNS are found to overlap with the predictions of secondary instability theory. The DNS computations also provide valuable insights toward the application of planar, partial-differential-equation based eigenvalue analysis to spanwise inhomogeneous, fully three-dimensional, crossflow-dominated flow configurations.
Borrego-Jaraba, Francisco; Garrido, Pilar Castro; García, Gonzalo Cerruela; Ruiz, Irene Luque; Gómez-Nieto, Miguel Ángel
2013-01-01
Because of the global economic turmoil, many companies are nowadays adopting a "deal of the day" business model, some of them with great success. Generally, they try to attract and retain customers through discount coupons and gift cards distributed through traditional media. This paper describes a framework that integrates intelligent environments by using NFC and is oriented to the full management of this kind of business. The system is responsible for the diffusion, distribution, sourcing, validation, redemption and managing of vouchers, loyalty cards and all kinds of mobile coupons using NFC, as well as QR codes. WingBonus can be fully adapted to the requirements of marketing campaigns, voucher providers, shop or retailer infrastructures, mobile devices and purchasing habits. Voucher security is guaranteed by the system through synchronization procedures using secure encryption algorithms. The WingBonus website and mobile applications can be adapted to any requirement of the system actors. PMID:23673675
McDonald, Steve
2015-01-01
This study makes three critical contributions to the "Do Contacts Matter?" debate. First, the widely reported null relationship between informal job searching and wages is shown to be mostly the artifact of a coding error and sample selection restrictions. Second, previous analyses examined only active informal job searching without fully considering the benefits derived from unsolicited network assistance (the "invisible hand of social capital") - thereby underestimating the network effect. Third, wage returns to networks are examined across the earnings distribution. Longitudinal data from the NLSY reveal significant wage returns for network-based job finding over formal job searching, especially for individuals who were informally recruited into their jobs (non-searchers). Fixed effects quantile regression analyses show that contacts generate wage premiums among middle and high wage jobs, but not low wage jobs. These findings challenge conventional wisdom on contact effects and advance understanding of how social networks affect wage attainment and inequality. Copyright © 2014 Elsevier Inc. All rights reserved.
Photon-number statistics in resonance fluorescence
NASA Astrophysics Data System (ADS)
Lenstra, D.
1982-12-01
The theory of photon-number statistics in resonance fluorescence is treated, starting with the general formula for the emission probability of n photons during a given time interval T. The results fully confirm earlier results obtained by Cook that were based on the theory of atomic motion in a traveling wave. General expressions for the factorial moments are derived and explicit results for the mean and the variance are given. It is explicitly shown that the distribution function tends to a Gaussian when T becomes much larger than the natural lifetime of the excited atom. The speed of convergence towards the Gaussian is found to be typically slow; that is, the third normalized central moment (the skewness) is proportional to T^(-1/2). However, numerical results illustrate that the overall features of the distribution function are already well represented by a Gaussian when T is larger than only a few natural lifetimes, at least if the intensity of the exciting field is not too small and its detuning is not too large.
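The slow decay of the skewness with counting time can be illustrated with a Poisson analogy, where the count over a window T has mean r*T; resonance fluorescence is sub-Poissonian, so this is only a rate analogy:

```python
import math

def poisson_skewness(lam):
    """Skewness of a Poisson(lam) photon count. For a fixed mean rate r
    and counting window T, lam = r * T, so the skewness decays as
    T**-0.5, the convergence rate discussed in the abstract.
    (Resonance fluorescence is sub-Poissonian; this is an analogy.)"""
    return 1.0 / math.sqrt(lam)
```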
Energy use in the New Zealand food system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patterson, M.G.; Earle, M.D.
1985-03-01
The study covered the total energy requirements of the production, processing, wholesale distribution, retailing, shopping and household sectors of the food system in New Zealand. This included the direct energy requirements, and the indirect energy requirements in supplying materials, buildings and equipment. Data were collected from a wide range of literature sources, and converted into forms required for this research project. Also, data were collected in supplementary sample surveys at the wholesale distribution, retailing and shopping sectors. The details of these supplementary surveys are outlined in detailed survey reports fully referenced in the text. From these base data, the total energy requirements per unit product (MJ/kg) were estimated for a wide range of food chain steps. Some clear alternatives in terms of energy efficiency emerged from a comparison of these estimates. For example, it was found that it was most energy efficient to use dehydrated vegetables, followed by fresh vegetables, freeze dried vegetables, canned vegetables and then finally frozen vegetables.
Characteristics of the Hadronic Production of the $$D^{*\\pm}$$ Meson (in Portuguese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Miranda, Jussara Marques
The Fermilab experiment E769, a 250 GeV/c tagged hadron beam incident on thin target foils of Be, Al, Cu, and W, measured the $$X_F$$ and $$p^2_t$$ distributions of $$D^{*\pm}$$ through the decay mode $$D^{*\pm} \to D^0 \pi^+, D^0 \to K^- \pi^+$$. Fitting the distributions to the forms $$A(1 - X_F)^n$$ and $$B \exp(-bp^2_t)$$, we determined $n$ = 3.84 ± 0.20 ± 0.06 and $b$ = 0.748 ± 0.034 ± 0.009, respectively. We observe no significant leading particle effect suggested by earlier experiments. The dependence of the total cross section on the atomic mass number was determined to be $$A^{0.98 \pm 0.05 \pm 0.04}$$. The measurements were based on 351 ± 16 fully reconstructed $$D^{*\pm}$$ mesons induced by $$\pi^{\pm}$$ and $$K^{\pm}$$ beams. This is the largest available sample of hadroproduced $$D^{*\pm}$$.
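The power-law fit to the $X_F$ distribution can be sketched by linear regression in log space. This is a simplified estimator on noise-free synthetic data; the experiment fitted binned data with full error treatment:

```python
import math

def fit_power(xf, counts):
    """Estimate the exponent n in dN/dx_F ~ A(1 - x_F)^n by ordinary
    least squares on log(counts) vs log(1 - x_F). A simplified
    estimator, not the experiment's likelihood fit."""
    u = [math.log(1.0 - x) for x in xf]
    v = [math.log(c) for c in counts]
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den  # the fitted exponent n
```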
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. Joint models are developed that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
How the Central American Seaway and an ancient northern passage affected flatfish diversification.
Byrne, Lisa; Chapleau, François; Aris-Brosou, Stéphane
2018-05-21
While the natural history of flatfish has been debated for decades, the mode of diversification of this biologically and economically important group has never been elucidated. To address this question, we assembled the largest molecular data set to date, covering >300 species (out of ca. 800 extant) from 13 of the 14 known families across nine genes, and employed relaxed molecular clocks to uncover their patterns of diversification. As the fossil record of flatfish is contentious, we used sister species distributed on both sides of the American continent to calibrate clock models based on the closure of the Central American Seaway (CAS) and on their current species ranges. We show that flatfish diversified in two bouts: species that are today distributed around the Equator diverged during the closure of the CAS, while those with a northern range diverged after this, thereby suggesting the existence of a post-CAS closure dispersal for these northern species, most likely along a trans-Arctic northern route, a hypothesis fully compatible with paleogeographic reconstructions.
Lazar, Alexandru C; Kloczewiak, Marek A; Mazsaroff, Istvan
2004-01-01
Recombinant monoclonal antibodies produced using mammalian cell lines contain multiple chemical modifications. One specific modification resides on the C-terminus of the heavy chain. Enzymes inside the cell can cleave the C-terminal lysine from the heavy-chain molecules, and variants with and without C-terminal lysine can be produced. In order to fully characterize the protein, there is a need for analytical methods that are able to account for the different product variants. Conventional analytical methods used for the measurement of the distribution of the two different variants are based on chemical or enzymatic degradation of the protein followed by chromatographic separation of the degradation products. Chromatographic separations with gradient elution have long run times, and analyses of multiple samples are time-consuming. This paper reports development of a novel method for the determination of the relative amounts of the two C-terminal heavy-chain variants based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) measurements of the cyanogen bromide degraded recombinant monoclonal antibody products. The distribution of the variants is determined from the MALDI-TOF mass spectra by measuring the peak areas of the two C-terminal peptides. The assay was used for the assessment of the C-terminal lysine distribution in different development lots. The method was able to differentiate between the products obtained using the same cell line as well as between products obtained from different cell lines. Copyright 2004 John Wiley & Sons, Ltd.
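The variant distribution from the two peak areas reduces to a simple ratio; the equal-ionisation-efficiency assumption is made explicit in the sketch:

```python
def lysine_fraction(area_with_k, area_without_k):
    """Relative abundance of the C-terminal-lysine variant from the two
    MALDI-TOF peptide peak areas. Assumes equal ionisation efficiency
    for both peptides, which must be verified experimentally."""
    total = area_with_k + area_without_k
    if total == 0:
        raise ValueError("no signal in either peak")
    return area_with_k / total
```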
Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench
NASA Astrophysics Data System (ADS)
Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan
2016-04-01
Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with the help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space by smoothing seismicity during the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well, but is only mediocre at forecasting the spatial distribution. On the other hand, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.
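The seismogenic-index pillar of the SiS model can be sketched with the standard Shapiro relation; the parameter values one would plug in here are illustrative only:

```python
import math

def expected_events(sigma, injected_volume, b, m_min):
    """Seismogenic-index forecast of the expected number of induced
    events with magnitude >= m_min after injecting cumulative volume V:
    log10 N = sigma + log10 V - b * m_min (the standard Shapiro
    formulation; SiS adds an exponential post-shut-in decay and
    spatial kernels on top of this)."""
    return 10.0 ** (sigma + math.log10(injected_volume) - b * m_min)
```

Doubling the injected volume doubles the expected event count at any magnitude threshold, which is the linearity that makes the index useful for traffic-light forecasting.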
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available on the Internet nowadays represents a big challenge for biologists, as far as their management and visualization are concerned, and for bioinformaticians, who need to rapidly create and execute in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming to integrate such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with this objective in mind. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, are evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic and distributed environment for correct experiment execution.
Prochor, Piotr; Piszczatowski, Szczepan; Sajewicz, Eugeniusz
2016-01-01
The study was aimed at the biomechanical evaluation of a novel Limb Prosthesis Osseointegrated Fixation System (LPOFS) designed to combine the advantages of interference-fit and threaded solutions. Three cases were studied: the LPOFS (designed), the OPRA (threaded) and the ITAP (interference-fit) implants. Von Mises stress patterns in bone and the maximal values generated during axial loading of an implant placed in bone, as well as the reaction forces in the contact elements during implant extraction, were analysed. Primary and fully osseointegrated connections were considered. The results obtained for the primary connection indicate more effective anchoring of the OPRA; however, the LPOFS provides a more appropriate stress distribution (lower stress shielding, no overloading) in bone. In the case of the fully osseointegrated connection, the LPOFS kept the most favourable stress distribution in cortical bone, which is the most important long-term feature of implant usage and bone remodelling. Moreover, in the fully bound connection its anchoring elements resist extraction attempts more than the ITAP and the OPRA. The results obtained allow us to conclude that, with regard to the features under study, the LPOFS is a more functional solution for direct skeletal attachment of a limb prosthesis than the referential implants during both short- and long-term use.
The application of muscle wrapping to voxel-based finite element models of skeletal structures.
Liu, Jia; Shi, Junfen; Fitton, Laura C; Phillips, Roger; O'Higgins, Paul; Fagan, Michael J
2012-01-01
Finite element analysis (FEA) is now used routinely to interpret skeletal form in terms of function in both medical and biological applications. To produce accurate predictions from FEA models, it is essential that the loading due to muscle action is applied in a physiologically reasonable manner. However, it is common for muscle forces to be represented as simple force vectors applied at a few nodes on the model's surface. It is certainly rare for any wrapping of the muscles to be considered, and yet wrapping not only alters the directions of muscle forces but also applies an additional compressive load from the muscle belly directly to the underlying bone surface. This paper presents a method of applying muscle wrapping to high-resolution voxel-based finite element (FE) models. Such voxel-based models have a number of advantages over standard (geometry-based) FE models, but the increased resolution with which the load can be distributed over a model's surface is particularly advantageous, reflecting more closely how muscle fibre attachments are distributed. In this paper, the development, application and validation of a muscle wrapping method is illustrated using a simple cylinder. The algorithm: (1) calculates the shortest path over the surface of a bone given the points of origin and ultimate attachment of the muscle fibres; (2) fits a Non-Uniform Rational B-Spline (NURBS) curve to the shortest path and calculates its tangent and normal vectors and curvatures, so that normal and tangential components of the muscle force can be calculated and applied along the fibre; and (3) automatically distributes the loads between adjacent fibres to cover the bone surface with a fully distributed muscle force, as is observed in vivo. Finally, we present a practical application of this approach to the wrapping of the temporalis muscle around the cranium of a macaque skull.
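Step (2) of the algorithm, splitting the fibre tension into tangential and normal components, can be sketched with finite differences along a discretized path. A plain polyline stands in for the NURBS curve here, and the half-cylinder geometry is only an illustration: at each interior node the tension turns by an angle dθ, producing a compressive load of magnitude T·dθ on the bone (since κ·ds = dθ).

```python
import numpy as np

def fibre_normal_loads(path_pts, tension):
    """Per-node compressive loads a taut fibre applies to the underlying bone:
    the change of the unit tangent at each interior node, scaled by the
    tension, gives the normal load vector directed into the surface."""
    p = np.asarray(path_pts, float)
    seg = np.diff(p, axis=0)
    t = seg / np.linalg.norm(seg, axis=1)[:, None]  # unit tangent per segment
    dt = np.diff(t, axis=0)                         # turning at interior nodes
    return tension * dt                             # normal load vectors

# Usage: fibre wrapped over half of a unit cylinder, unit tension
theta = np.linspace(0.0, np.pi, 101)
path = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
normals = fibre_normal_loads(path, tension=1.0)
total = np.linalg.norm(normals, axis=1).sum()       # ~ pi for a half wrap
```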
Influence of defect distribution on the thermoelectric properties of FeNbSb based materials.
Guo, Shuping; Yang, Kaishuai; Zeng, Zhi; Zhang, Yongsheng
2018-05-21
Doping and alloying are important methodologies to improve the thermoelectric performance of FeNbSb based materials. To fully understand the influence of point defects on the thermoelectric properties, we have used density functional calculations in combination with the cluster expansion and Monte Carlo methods to examine the defect distribution behaviors in the mesoscopic FeNb1-xVxSb and FeNb1-xTixSb systems. We find that V and Ti exhibit different distribution behaviors in FeNbSb at low temperature: forming the FeNbSb-FeVSb phase separations in the FeNb1-xVxSb system but two thermodynamically stable phases in FeNb1-xTixSb. Based on the calculated effective mass and band degeneracy, it seems the doping concentration of V or Ti in FeNbSb has little effect on the electrical properties, except for one of the theoretically predicted stable Ti phases (Fe6Nb5Ti1Sb6). Thus, an essential methodology to improve the thermoelectric performance of FeNbSb should rely on phonon scattering to decrease the thermal conductivity. According to the theoretically determined phase diagrams of Fe(Nb,V)Sb and Fe(Nb,Ti)Sb, we propose the (composition, temperature) conditions for the experimental synthesis to improve the thermoelectric performance of FeNbSb based materials: lowering the experimental preparation temperature to around the phase boundary to form a mixture of the solid solution and phase separation. The point defects in the solid solution effectively scatter the short-wavelength phonons and the (coherent or incoherent) interfaces introduced by the phase separation can additionally scatter the middle-wavelength phonons to further decrease the thermal conductivity. Moreover, the induced interfaces could enhance the Seebeck coefficient as well, through the energy filtering effect. 
Our results give insight into the understanding of the impact of the defect distribution on the thermoelectric performance of materials and strengthen the connection between theoretical predictions and experimental measurements.
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in the characteristics of their spectral domain and in the shape domain of the panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply the HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
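The inter-band prediction idea can be sketched with a plain least-squares predictor standing in for the paper's Gaussian mixture modelling; the residual that remains is what the video coder then compresses. The bands below are synthetic and purely illustrative:

```python
import numpy as np

def fit_predictor(prev_bands, target):
    """Least-squares weights mapping a stack of previous spectral bands to the
    current band (a stand-in for the paper's Gaussian mixture-based model)."""
    X = np.stack([b.ravel() for b in prev_bands], axis=1)
    w, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    return w

def band_residual(prev_bands, target, w):
    """Residual between the true band and its prediction; this residual, not
    the raw intensities, is what gets handed to the video coder."""
    pred = sum(wi * b for wi, b in zip(w, prev_bands))
    return target - pred

# Synthetic example: band 3 is an exact mixture of bands 1 and 2
rng = np.random.default_rng(0)
b1, b2 = rng.random((8, 8)), rng.random((8, 8))
b3 = 0.7 * b1 + 0.3 * b2
w = fit_predictor([b1, b2], b3)
res = band_residual([b1, b2], b3, w)   # essentially zero everywhere
```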
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
Internetting tactical security sensor systems
NASA Astrophysics Data System (ADS)
Gage, Douglas W.; Bryan, W. D.; Nguyen, Hoa G.
1998-08-01
The Multipurpose Surveillance and Security Mission Platform (MSSMP) is a distributed network of remote sensing packages and control stations, designed to provide a rapidly deployable, extended-range surveillance capability for a wide variety of military security operations and other tactical missions. The baseline MSSMP sensor suite consists of a pan/tilt unit with video and FLIR cameras and a laser rangefinder. With an additional radio transceiver, MSSMP can also function as a gateway between existing security/surveillance sensor systems such as TASS, TRSS, and IREMBASS, and IP-based networks, to support the timely distribution of both threat detection and threat assessment information. The MSSMP system makes maximum use of Commercial Off The Shelf (COTS) components for sensing, processing, and communications, and of both established and emerging standard communications networking protocols and system integration techniques. Its use of IP-based protocols allows it to interoperate freely with the Internet -- providing geographic transparency, facilitating development, and allowing fully distributed demonstration capability -- and prepares it for integration with the IP-based tactical radio networks that will evolve in the next decade. Unfortunately, the Internet's standard Transport layer protocol, TCP, is poorly matched to the requirements of security sensors and other quasi-autonomous systems, being oriented to conveying a continuous data stream rather than discrete messages. Also, its canonical 'socket' interface conceals short losses of communications connectivity, but on longer losses it simply gives up and forces the Application layer software to deal with the problem. For MSSMP, a software applique is being developed that will run on top of the User Datagram Protocol (UDP) to provide a reliable message-based Transport service.
In addition, a Session layer protocol is being developed to support the effective transfer of control of multiple platforms among multiple control stations.
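The reliable, message-oriented Transport service described above can be sketched as stop-and-wait retransmission over a lossy datagram channel. This is a toy illustration of the idea only; the simulated channel and parameters are not MSSMP's actual applique:

```python
import random

class LossyChannel:
    """Drops each datagram with probability p_loss (stands in for UDP)."""
    def __init__(self, p_loss, seed=0):
        self.p_loss, self.rng = p_loss, random.Random(seed)
        self.received = []
        self.expected_seq = 0
    def send(self, seq, payload):
        if self.rng.random() < self.p_loss:
            return None                      # datagram lost, no ack comes back
        if seq == self.expected_seq:         # deliver in order, drop duplicates
            self.received.append(payload)
            self.expected_seq += 1
        return seq                           # ack (assumed not lost, for brevity)

def send_reliable(messages, chan, max_retries=20):
    """Stop-and-wait: retransmit each numbered message until it is acked."""
    for seq, msg in enumerate(messages):
        for _ in range(max_retries):
            if chan.send(seq, msg) == seq:
                break
        else:
            raise TimeoutError(f"message {seq} never acknowledged")

chan = LossyChannel(p_loss=0.3)
send_reliable(["detect", "assess", "track"], chan)
print(chan.received)   # all three messages arrive, in order
```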
Offshore wellbore stability analysis based on fully coupled poro-thermo-elastic theory
NASA Astrophysics Data System (ADS)
Cao, Wenke; Deng, Jingen; Yu, Baohua; Liu, Wei; Tan, Qiang
2017-03-01
Drilling-induced tensile fractures are usually caused when the weight of mud is too high, and the effective tangential stress becomes tensile. It is thus hard to explain why tensile fractures are distributed along the lower part of a hole in an offshore exploration well when the mud weight is low. According to analysis, the reason could be the thermal effect, which cannot be ignored because of the drilling fluid and the cooling action of sea water during circulation. A heat transfer model is set up to obtain the temperature distribution of the wellbore and its formation by the finite difference method. Then, fully coupled poro-thermo-elastic theory is used to study the pore pressure and effective stress around the wellbore. By comparing it with both poroelastic and elastic models, it is indicated that the poroelastic effect is dominant at the beginning of circulation and inhibits tensile fractures from forming; then, the thermal effect becomes more important and decreases the effective tangential stress with the passing of time, so the drilling fluid and the cooling effect of sea water can cause tensile fractures to happen. Meanwhile, tensile fractures are shallow and not likely to lead to mud leakage with lower mud weight, which agrees with the actual drilling process. On the other hand, the fluid cooling effect could increase the strength of the rock and reduce the likelihood of shear failure, which would be beneficial for wellbore stability. So, the thermal effect cannot be neglected in offshore wellbore stability analysis, and mud weight and borehole exposure time should be controlled in the case of mud loss.
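The wellbore temperature field described above can be sketched with an explicit finite-difference scheme for radial heat conduction. The boundary temperatures, diffusivity and grid values below are illustrative, not the paper's field data:

```python
import numpy as np

def wellbore_temperature(t_wall, t_form, r_w=0.1, r_max=2.0, n=100,
                         alpha=1e-6, dt=50.0, steps=2000):
    """Explicit finite differences for dT/dt = alpha*(T'' + T'/r): the wellbore
    wall is held at the circulating-fluid temperature and the far field at the
    formation temperature (stability requires alpha*dt/dr**2 <= 0.5)."""
    r = np.linspace(r_w, r_max, n)
    dr = r[1] - r[0]
    T = np.full(n, float(t_form))
    T[0] = t_wall
    for _ in range(steps):
        d2 = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
        d1 = (T[2:] - T[:-2]) / (2.0 * dr) / r[1:-1]
        T[1:-1] += alpha * dt * (d2 + d1)
        T[0], T[-1] = t_wall, t_form     # fixed boundary temperatures
    return r, T

# Cooling case: 20 degC mud circulating against an 80 degC formation
r, T = wellbore_temperature(t_wall=20.0, t_form=80.0)
```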
Marine protected areas and the value of spatially optimized fishery management
Rassweiler, Andrew; Costello, Christopher; Siegel, David A.
2012-01-01
There is a growing focus around the world on marine spatial planning, including spatial fisheries management. Some spatial management approaches are quite blunt, as when marine protected areas (MPAs) are established to restrict fishing in specific locations. Other management tools, such as zoning or spatial user rights, will affect the distribution of fishing effort in a more nuanced manner. Considerable research has focused on the ability of MPAs to increase fishery returns, but the potential for the broader class of spatial management approaches to outperform MPAs has received far less attention. We use bioeconomic models of seven nearshore fisheries in Southern California to explore the value of optimized spatial management in which the distribution of fishing is chosen to maximize profits. We show that fully optimized spatial management can substantially increase fishery profits relative to optimal nonspatial management but that the magnitude of this increase depends on characteristics of the fishing fleet and target species. Strategically placed MPAs can also increase profits substantially compared with nonspatial management, particularly if fishing costs are low, although profit increases available through optimal MPA-based management are roughly half those from fully optimized spatial management. However, if the same total area is protected by randomly placing MPAs, starkly contrasting results emerge: most random MPA designs reduce expected profits. The high value of spatial management estimated here supports continued interest in spatially explicit fisheries regulations but emphasizes that predicted increases in profits can only be achieved if the fishery is well understood and the regulations are strategically designed. PMID:22753469
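The contrast between spatially optimized effort and uniformly spread effort can be sketched with a toy two-patch profit model. Quadratic effort costs give an interior optimum; the prices, catchability and biomasses are made up for illustration, not the paper's calibrated fisheries:

```python
def profit(efforts, biomass, price=10.0, q=0.1, cost=1.0):
    """Total profit when patch i receives effort E_i: revenue p*q*E_i*B_i minus
    a quadratic effort cost c*E_i**2 (a toy stand-in for the bioeconomic model)."""
    return sum(price * q * e * b - cost * e**2 for e, b in zip(efforts, biomass))

def optimal_effort(biomass, price=10.0, q=0.1, cost=1.0):
    """Setting dProfit/dE_i = 0 gives E_i* = p*q*B_i / (2c), patch by patch."""
    return [price * q * b / (2.0 * cost) for b in biomass]

B = [100.0, 50.0]
E_opt = optimal_effort(B)              # [50.0, 25.0]
# Spatially optimized effort beats spreading the same total effort uniformly
E_uniform = [sum(E_opt) / 2.0] * 2     # [37.5, 37.5]
print(profit(E_opt, B), profit(E_uniform, B))  # 3125.0 2812.5
```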
Assessing the Performance of LED-Based Flashlights Available in the Kenyan Off-Grid Lighting Market
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tracy, Jennifer; Jacobson, Arne; Mills, Evan
Low cost rechargeable flashlights that use LED technology are increasingly available in African markets. While LED technology holds promise to provide affordable, high quality lighting services, the widespread dissemination of low quality products may make it difficult to realize this potential. This study includes performance results for three brands of commonly available LED flashlights that were purchased in Kenya in 2009. The performance of the flashlights was evaluated by testing five units for each of the three brands. The tests included measurements of battery capacity, time required to charge the battery, maximum illuminance at one meter, operation time and lux-hours from a fully charged battery, light distribution, and color rendering. All flashlights tested performed well below the manufacturers' rated specifications; the measured battery capacity was 30-50% lower than the rated capacity and the time required to fully charge the battery was 6-25% greater than the rated time requirement. Our analysis further shows that within each brand there is considerable variability in each performance indicator. The five samples within a single brand varied from each other by as much as 22% for battery capacity measurements, 3.6% for the number of hours required for a full charge, 23% for maximum initial lux, 38% for run time, 11% for light distribution and by as much as 200% for color rendering. Results obtained are useful for creating a framework for quality assurance of off-grid LED products and will be valuable for informing consumers, distributors and product manufacturers about product performance.
NASA Astrophysics Data System (ADS)
Santos-Carballal, David; Ngoepe, Phuti E.; de Leeuw, Nora H.
2018-02-01
The spinel-structured lithium manganese oxide (LiMn2O4 ) is a material currently used as cathode for secondary lithium-ion batteries, but whose properties are not yet fully understood. Here, we report a computational investigation of the inversion thermodynamics and electronic behavior of LiMn2O4 derived from spin-polarized density functional theory calculations with a Hubbard Hamiltonian and long-range dispersion corrections (DFT+U-D3). Based on the analysis of the configurational free energy, we have elucidated a partially inverse equilibrium cation distribution for the LiMn2O4 spinel. This equilibrium degree of inversion is rationalized in terms of the crystal field stabilization effects and the difference between the size of the cations. We compare the atomic charges with the oxidation numbers for each degree of inversion. We found segregation of the Mn charge once these ions occupy the tetrahedral and octahedral sites of the spinel. We have obtained the atomic projections of the electronic band structure and density of states, showing that the normal LiMn2O4 has half-metallic properties, while the fully inverse spinel is an insulator. This material is in the ferrimagnetic state for the inverse and partially inverse cation arrangement. The optimized lattice and oxygen parameters, as well as the equilibrium degree of inversion, are in agreement with the available experimental data. The partial equilibrium degree of inversion is important in the interpretation of the lithium ion migration and surface properties of the LiMn2O4 spinel.
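The idea of an equilibrium degree of inversion minimizing a configurational free energy can be sketched with the standard ideal-mixing model for a 2-3 spinel. This is an O'Neill-Navrotsky-style textbook sketch, not the paper's DFT+U-D3 analysis, and the energy values below are illustrative:

```python
import math

def free_energy(x, dE, kT):
    """F(x) = dE*x - T*S_config for inversion degree x in (0, 1): dE is an
    assumed inversion enthalpy, and the entropy term is ideal mixing over the
    tetrahedral and octahedral sites of an AB2O4 spinel."""
    s = (x * math.log(x) + (1.0 - x) * math.log(1.0 - x)
         + x * math.log(x / 2.0) + (2.0 - x) * math.log((2.0 - x) / 2.0))
    return dE * x + kT * s

def equilibrium_inversion(dE, kT, n=9999):
    """Brute-force grid minimization of F over x in (0, 1)."""
    xs = [i / (n + 1) for i in range(1, n + 1)]
    return min(xs, key=lambda x: free_energy(x, dE, kT))
```

With a positive inversion enthalpy the normal arrangement wins at low temperature; with a negative one the inverse arrangement wins, and intermediate temperatures give the partial inversion discussed above.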
NASA Astrophysics Data System (ADS)
Reza, Arash; Shishesaz, Mohammad
2017-09-01
The aim of this research is to study the effect of a break in the laminated composite adherends on the stress distribution in an adhesively bonded single-lap joint with a viscoelastic adhesive and matrix. The proposed model involves two adherends with E-glass fibers and a poly-methyl-methacrylate matrix that have been bonded to each other with phenolic-epoxy resin. The equilibrium equations, based on shear-lag theory, have been derived in the Laplace domain, and the governing differential equations of the model have been solved analytically in the Laplace domain. A numerical inverse Laplace transform, the Gaver-Stehfest method, has been used to extract the desired results in the time domain. The results obtained at the initial time completely match the results of the elastic solution. Also, a comparison between the results obtained from the analytical and finite element models shows a relatively good match. The results show that viscoelastic behavior decreases the stress peak near the break. Finally, the effect of the size and location of the break, as well as the volume fraction of fibers, on the stress distribution in the adhesive layer is fully investigated.
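The Gaver-Stehfest inversion mentioned above is a short, self-contained algorithm; a sketch follows. N = 12 is a typical choice of the number of terms, and the sanity check inverts F(s) = 1/(s+1) back to exp(-t):

```python
from math import exp, factorial, log

def stehfest_weights(N):
    """Stehfest coefficients V_k (N must be even)."""
    h = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            s += (j**h * factorial(2 * j)
                  / (factorial(h - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (h + k) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) sampled on the real
    axis: f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    a = log(2.0) / t
    return a * sum(Vk * F(k * a) for k, Vk in enumerate(stehfest_weights(N), 1))

# Sanity check: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```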
Luria, Oded; Bar, Jacob; Kovo, Michal; Malinger, Gustavo; Golan, Abraham; Barnea, Ofer
2012-04-01
Fetal growth restriction (FGR) elicits hemodynamic compensatory mechanisms in the fetal circulation. These mechanisms are complex and their effect on the cerebral oxygen availability is not fully understood. To quantify the contribution of each compensatory mechanism to the fetal cerebral oxygen availability, a mathematical model of the fetal circulation was developed. The model was based on cardiac-output distribution in the fetal circulation. The compensatory mechanisms of FGR were simulated and their effects on cerebral oxygen availability were analyzed. The mathematical analysis included the effects of cerebral vasodilation, placental resistance to blood flow, degree of blood shunting by the ductus venosus and the effect of maternal-originated placental insufficiency. The model indicated a unimodal dependency between placental blood flow and cerebral oxygen availability. Optimal cerebral oxygen availability was achieved when the placental blood flow was mildly reduced compared to the normal flow. This optimal ratio was found to increase as the hypoxic state of FGR worsens. The model indicated that cerebral oxygen availability is increasingly dependent on the cardiac output distribution as the fetus gains weight. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
Growth disorders among 6-year-old Iranian children.
Kelishadi, Roya; Amiri, Masoud; Motlagh, Mohammad Esmaeil; Taslimi, Mahnaz; Ardalan, Gelayol; Rouzbahani, Reza; Poursafa, Parinaz
2014-06-01
Sociodemographic factors are important determinants of weight disorders. Nationally representative studies provide a view of this health problem at national and regional levels. This study aimed to assess the distribution of growth disorders, in terms of body mass index (BMI) and height, in 6-year-old Iranian children using a geographical information system (GIS). In this cross-sectional nationwide survey, all Iranian children entering public and private elementary schools were examined in a mandatory national screening program in 2009. Descriptive analysis was used to calculate the prevalence of underweight, overweight, obesity, and short stature. Then, ArcGIS software was used to draw the figures. The study population consisted of 955388 children (48.5% girls and 76.5% urban). Overall, 20% of the children were underweight, and 14.3% had high BMI, comprising 10.9% overweight and 3.4% obese. The corresponding figure for short stature was 6.6%; however, these growth disorders were not equally distributed across the various provinces. Our results confirmed the unequal distribution of BMI and height of 6-year-old children in Iran generally and in most of its provinces particularly. The differences among provinces cannot be fully explained by the socioeconomic pattern. These findings necessitate a comprehensive national policy with provincial evidence-based programs.
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. ?? 2008, The International Biometric Society.
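The Dirichlet process prior at the heart of this approach is most easily pictured through its stick-breaking construction. Below is a minimal sketch of the weights only; the concentration value and seed are arbitrary, and the article's full model additionally attaches latent abundances to each weight:

```python
import random

def stick_breaking(alpha, n_sticks, rng):
    """First n_sticks weights of a Dirichlet process via stick breaking:
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha)."""
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

rng = random.Random(42)
w = stick_breaking(alpha=2.0, n_sticks=200, rng=rng)
# the weights sum to just under 1; a larger alpha spreads mass over more sticks
```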
Aerial cooperative transporting and assembling control using multiple quadrotor-manipulator systems
NASA Astrophysics Data System (ADS)
Qi, Yuhua; Wang, Jianan; Shan, Jiayuan
2018-02-01
In this paper, a fully distributed control scheme for aerial cooperative transporting and assembling is proposed using multiple quadrotor-manipulator systems with each quadrotor equipped with a robotic manipulator. First, the kinematic and dynamic models of a quadrotor with multi-Degree of Freedom (DOF) robotic manipulator are established together using Euler-Lagrange equations. Based on the aggregated dynamic model, the control scheme consisting of position controller, attitude controller and manipulator controller is presented. Regarding cooperative transporting and assembling, multiple quadrotor-manipulator systems should be able to form a desired formation without collision among quadrotors from any initial position. The desired formation is achieved by the distributed position controller and attitude controller, while the collision avoidance is guaranteed by an artificial potential function method. Then, the transporting and assembling tasks request the manipulators to reach the desired angles cooperatively, which is achieved by the distributed manipulator controller. The overall stability of the closed-loop system is proven by a Lyapunov method and Matrosov's theorem. In the end, the proposed control scheme is simplified for the real application and then validated by two formation flying missions of four quadrotors with 2-DOF manipulators.
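The artificial potential function guaranteeing collision avoidance can be sketched with the classic repulsive-gradient form: zero outside a safety radius, growing without bound as two quadrotors approach. This is a generic textbook potential, not the paper's exact function or gains:

```python
import numpy as np

def repulsive_force(p_i, p_j, d_safe=1.0, gain=1.0):
    """Gradient of the potential 0.5*k*(1/d - 1/d_safe)**2: the force on agent
    i points away from agent j, vanishes beyond d_safe, and blows up as the
    separation d goes to zero."""
    diff = np.asarray(p_i, float) - np.asarray(p_j, float)
    d = float(np.linalg.norm(diff))
    if d >= d_safe or d == 0.0:
        return np.zeros_like(diff)
    return gain * (1.0 / d - 1.0 / d_safe) / d**2 * (diff / d)

# Usage: two quadrotors 0.5 m apart repel; beyond 1 m there is no interaction
print(repulsive_force([0.5, 0.0], [0.0, 0.0]))  # pushes agent i in +x
```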
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provides an adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
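The PPCC statistic used here is simple to compute; a sketch for the uniform model, with plotting positions i/(n+1) (the acceptance threshold would come from the hypothesis test, which is omitted):

```python
import numpy as np

def ppcc_uniform(sightings):
    """Probability plot correlation coefficient for a uniform model: the
    correlation between the ordered sighting times and uniform plotting
    positions i/(n+1); values near 1 indicate a good fit."""
    x = np.sort(np.asarray(sightings, float))
    n = len(x)
    q = np.arange(1, n + 1) / (n + 1)   # uniform-model quantiles
    return float(np.corrcoef(x, q)[0, 1])

# Evenly spread sighting years fit the uniform model essentially perfectly
print(ppcc_uniform(np.linspace(1850.0, 1950.0, 50)))
```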
To Each According to its Degree: The Meritocracy and Topocracy of Embedded Markets
NASA Astrophysics Data System (ADS)
Borondo, J.; Borondo, F.; Rodriguez-Sickert, C.; Hidalgo, C. A.
2014-01-01
A system is said to be meritocratic if the compensation and power available to individuals are determined by their abilities and merits. A system is topocratic if the compensation and power available to an individual are determined primarily by her position in a network. Here we introduce a model that is perfectly meritocratic for fully connected networks but that becomes topocratic for sparse networks, like the ones in society. In the model, individuals produce and sell content, but also distribute the content produced by others when they belong to the shortest path connecting a buyer and a seller. The production and distribution of content defines two channels of compensation: a meritocratic channel, where individuals are compensated for the content they produce, and a topocratic channel, where individual compensation is based on the number of shortest paths that go through them in the network. We solve the model analytically and show that the distribution of payoffs is meritocratic only if the average degree of the nodes is larger than a root of the total number of nodes. We conclude that, in the light of this model, the sparsity and structure of networks represents a fundamental constraint to the meritocracy of societies.
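The two compensation channels can be made concrete on a toy graph: content sales pay every producer equally, while distribution fees accrue only to nodes sitting on shortest paths between trading pairs. A small illustrative sketch; the price and fee values are arbitrary:

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, s, t):
    """BFS shortest path in an unweighted graph given as an adjacency dict."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], t
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def payoffs(adj, price=1.0, fee=0.5):
    """Meritocratic income: every node sells one unit of content to every
    other node. Topocratic income: the intermediaries on the shortest
    path between a trading pair split a distribution fee."""
    merit = {v: 0.0 for v in adj}
    topo = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        merit[s] += price          # s sells to t
        merit[t] += price          # t sells to s
        middle = shortest_path(adj, s, t)[1:-1]
        for m in middle:
            topo[m] += 2 * fee / len(middle)
    return merit, topo

# Line network 0-1-2-3: production pays everyone equally,
# but only the central nodes earn through position
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
merit, topo = payoffs(line)
```

On the line graph every node earns the same meritocratic income, yet the interior nodes capture all the topocratic income — the sparse-network effect the paper formalizes.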
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Ji, Haoran; Wang, Chengshan
Distributed generators (DGs) including photovoltaic panels (PVs) have been integrated dramatically in active distribution networks (ADNs). Due to its strong volatility and uncertainty, the high penetration of PV generation greatly exacerbates voltage violations in ADNs. However, the emerging flexible interconnection technology based on soft open points (SOPs) provides increased controllability and flexibility for system operation. To fully exploit the regulation ability of SOPs to address the problems caused by PV, this paper proposes a robust optimization method to achieve the robust optimal operation of SOPs in ADNs. A two-stage adjustable robust optimization model is built to tackle the uncertainties of PV outputs, in which robust operation strategies of SOPs are generated to eliminate the voltage violations and reduce the power losses of ADNs. A column-and-constraint generation (C&CG) algorithm is developed to solve the proposed robust optimization model, which is formulated as a second-order cone program (SOCP) to improve accuracy and computational efficiency. Case studies on the modified IEEE 33-node system and comparisons with a deterministic optimization approach verify the effectiveness and robustness of the proposed method.
Numerical modeling of laser assisted tape winding process
NASA Astrophysics Data System (ADS)
Zaami, Amin; Baran, Ismet; Akkerman, Remko
2017-10-01
Laser assisted tape winding (LATW) has become an increasingly popular way of producing new thermoplastic products such as ultra-deep-sea water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been of great interest since the temperature at the nip point plays a key role in mechanical interface performance. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, the optical behavior of laser radiation on circular surfaces is modeled numerically based on ray tracing and a non-specular reflection model. The non-specular reflection is implemented considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different ranges of the circular surface, as well as the effect of process parameters on the temperature distribution, are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which have not previously been considered in the literature.
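The "fully implicit method" for the heat-transfer part can be illustrated with a backward-Euler step of the 1D heat equation, which is unconditionally stable regardless of the time step. A toy sketch only: the material properties, grid, and boundary temperatures are invented, and the actual model is three-dimensional with a moving laser source:

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt, T_left, T_right):
    """One backward-Euler (fully implicit) step of the 1D heat equation
    with fixed (Dirichlet) boundary temperatures."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.eye(n) * (1 + 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r   # tridiagonal coupling
    b = T.copy()
    b[0] += r * T_left                   # boundary contributions
    b[-1] += r * T_right
    return np.linalg.solve(A, b)

# Tape initially at 20 C; laser-heated side held at 400 C
T = np.full(50, 20.0)
for _ in range(200):
    T = implicit_heat_step(T, alpha=1e-6, dx=1e-4, dt=0.01,
                           T_left=400.0, T_right=20.0)
```

Here r = 1, a value that would make an explicit scheme unstable; the implicit step remains well behaved, which is the usual motivation for the choice.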
Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann
2009-01-01
Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly’s halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support. PMID:20396612
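The contrast drawn above between a covariance rule and a rule with fixed LTP/LTD thresholds can be caricatured in a few lines: with fixed thresholds, unit pairs coactive in a pattern are potentiated while pre-active/post-silent pairs are depressed, which keeps the resulting assemblies disjoint. A toy sketch; the rates, thresholds, and patterns are invented and bear no resemblance to the full cortical model:

```python
import numpy as np

def hebb_fixed_threshold(patterns, n, epochs=50, ltp=0.05, ltd=0.02, theta=0.5):
    """Toy Hebbian learning with fixed LTP/LTD thresholds: coactive unit
    pairs are potentiated; pre-active/post-inactive pairs are depressed."""
    W = np.zeros((n, n))
    for _ in range(epochs):
        for x in patterns:
            pre = np.outer(x, np.ones(n))    # pre[i, j] = x[i]
            post = np.outer(np.ones(n), x)   # post[i, j] = x[j]
            W += ltp * ((pre > theta) & (post > theta))   # LTP
            W -= ltd * ((pre > theta) & (post <= theta))  # LTD
    np.fill_diagonal(W, 0.0)
    return np.clip(W, 0.0, None)

# Two non-overlapping "word" patterns over 6 units
p1 = np.array([1, 1, 1, 0, 0, 0], float)
p2 = np.array([0, 0, 0, 1, 1, 1], float)
W = hebb_fixed_threshold([p1, p2], 6)
```

Within-assembly weights strengthen while between-assembly weights are driven to zero — the "minimal overlap" outcome the simulations report for the fixed-threshold rule.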
NASA Astrophysics Data System (ADS)
Cheng, Yanyan; Ogden, Fred L.; Zhu, Jianting
2017-07-01
Preferential flow paths (PFPs) affect the hydrological response of humid tropical catchments but have not received sufficient attention. We consider PFPs created by tree roots and earthworms in a near-surface soil layer in steep, humid, tropical lowland catchments and hypothesize that observed hydrological behaviors can be better captured by reasonably considering PFPs in this layer. We test this hypothesis by evaluating the performance of four different physically based distributed model structures without and with PFPs in different configurations. Model structures are tested both quantitatively and qualitatively using hydrological, geophysical, and geochemical data both from the Smithsonian Tropical Research Institute Agua Salud Project experimental catchment(s) in Central Panama and other sources in the literature. The performance of different model structures is evaluated using runoff Volume Error and three Nash-Sutcliffe efficiency measures against observed total runoff, stormflows, and base flows along with visual comparison of simulated and observed hydrographs. Two of the four proposed model structures which include both lateral and vertical PFPs are plausible, but the one with explicit simulation of PFPs performs the best. A small number of vertical PFPs that fully extend below the root zone allow the model to reasonably simulate deep groundwater recharge, which plays a crucial role in base flow generation. Results also show that the shallow lateral PFPs are the main contributor to the observed high flow characteristics. Their number and size distribution are found to be more important than the depth distribution. Our model results are corroborated by geochemical and geophysical observations.
NASA Technical Reports Server (NTRS)
Yatheendradas, Soni; Peters-Lidard, Christa D.; Koren, Victor; Cosgrove, Brian A.; DeGoncalves, Luis G. D.; Smith, Michael; Geiger, James; Cui, Zhengtao; Borak, Jordan; Kumar, Sujay V.;
2012-01-01
Snow cover area affects snowmelt, soil moisture, evapotranspiration, and ultimately streamflow. For the Distributed Model Intercomparison Project - Phase 2 Western basins, we assimilate satellite-based fractional snow cover area (fSCA) from the Moderate Resolution Imaging Spectroradiometer, or MODIS, into the National Weather Service (NWS) SNOW-17 model. This model is coupled with the NWS Sacramento Heat Transfer (SAC-HT) model inside the National Aeronautics and Space Administration's (NASA) Land Information System. SNOW-17 computes fSCA from snow water equivalent (SWE) values using an areal depletion curve. Using a direct insertion, we assimilate fSCAs in two fully distributed ways: 1) we update the curve by attempting SWE preservation, and 2) we reconstruct SWEs using the curve. The preceding are refinements of an existing simple, conceptually-guided NWS algorithm. Satellite fSCA over dense forests inadequately accounts for below-canopy snow, degrading simulated streamflow upon assimilation during snowmelt. Accordingly, we implement a below-canopy allowance during assimilation. This simplistic allowance and direct insertion are found to be inadequate for improving calibrated results, still degrading them as mentioned above. However, for streamflow volume for the uncalibrated runs, we obtain: (1) substantial to major improvements (64-81 %) as a percentage of the control run residuals (or distance from observations), and (2) minor improvements (16-22 %) as a percentage of observed values. We highlight the need for detailed representations of canopy-snow optical radiative transfer processes in mountainous, dense forest regions if assimilation-based improvements are to be seen in calibrated runs over these areas.
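The areal depletion curve logic — mapping SWE to fSCA, and during assimilation inverting the curve to reconstruct SWE from satellite fSCA — can be sketched with a monotone lookup table. The knot values below are illustrative only, not SNOW-17's actual curve:

```python
import numpy as np

# Hypothetical areal depletion curve: fractional snow cover (fSCA)
# as a function of the ratio SWE / SWE_max (knot values are made up)
ratio_pts = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
fsca_pts = np.array([0.0, 0.35, 0.7, 0.95, 1.0])

def swe_to_fsca(swe, swe_max):
    """Forward model: snow water equivalent -> fractional snow cover."""
    return np.interp(swe / swe_max, ratio_pts, fsca_pts)

def fsca_to_swe(fsca, swe_max):
    """Invert the depletion curve to reconstruct SWE from an assimilated
    satellite fSCA (the curve is monotone, so interpolation inverts it)."""
    return np.interp(fsca, fsca_pts, ratio_pts) * swe_max

swe_max = 120.0  # mm
reconstructed = fsca_to_swe(swe_to_fsca(60.0, swe_max), swe_max)
```

The round trip recovers the original SWE, which is the consistency the "SWE preservation" update relies on.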
U.S. Geographic Analysis of the Cost of Hydrogen from Electrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saur, G.; Ainscough, C.
2011-12-01
This report summarizes a U.S. geographic analysis of the cost of hydrogen from electrolysis. Wind-based water electrolysis represents a viable path to renewably produced hydrogen. It might be used for hydrogen-based transportation fuels, energy storage to augment electricity grid services, or as a supplement for other industrial hydrogen uses. This analysis focuses on the levelized production costs of green hydrogen rather than market prices, which would require more extensive knowledge of an hourly or daily hydrogen market. However, the costs of hydrogen presented here do include a small profit from an internal rate of return on the system. The cost of renewable wind-based hydrogen production is very sensitive to the cost of the wind electricity. Using differently priced grid electricity to supplement the system had only a small effect on the cost of hydrogen, because wind electricity was always used, either directly or indirectly, to fully generate the hydrogen. Wind classes 3-6 across the U.S. were examined, and the costs of hydrogen ranged from $3.74/kg to $5.86/kg. These costs do not quite meet the 2015 DOE targets for central or distributed hydrogen production ($3.10/kg and $3.70/kg, respectively), so more work is needed on reducing the cost of wind electricity and the electrolyzers. If the PTC and ITC are claimed, however, many of the sites will meet both targets. For a subset of distributed refueling stations with inexpensive, open space nearby, this could be an alternative to central hydrogen production and distribution.
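The levelized-cost framing can be sketched with a textbook capital-recovery calculation: annualized capital plus operating and electricity costs, divided by annual production. All figures below are illustrative placeholders, not the report's model or inputs:

```python
def levelized_cost_h2(capex, annual_opex, elec_price_kwh, kwh_per_kg,
                      kg_per_year, rate=0.10, years=20):
    """Simple levelized cost of hydrogen ($/kg): capital annualized with
    a capital recovery factor, plus O&M and electricity, divided by
    annual production. Figures are illustrative only."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = (capex * crf + annual_opex
                   + elec_price_kwh * kwh_per_kg * kg_per_year)
    return annual_cost / kg_per_year

cost = levelized_cost_h2(capex=2_000_000, annual_opex=60_000,
                         elec_price_kwh=0.04, kwh_per_kg=55,
                         kg_per_year=150_000)
```

With these invented inputs the result lands inside the $3.74-$5.86/kg range reported above, and the electricity term dominates — consistent with the report's finding that hydrogen cost is very sensitive to the wind electricity price.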
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; O'Neill, P. M.
2001-01-01
There is considerable interest in developing silicon-based telescopes because of their compactness and low power requirements. Three such telescopes have been flown on board the Space Shuttle to measure the linear energy transfer spectra of trapped, galactic cosmic ray, and solar energetic particles. Dosimeters based on single silicon detectors have also been flown on the Mir orbital station. A comparison of the absorbed dose and radiation quality factors calculated from these telescopes with those estimated from measurements made with a tissue equivalent proportional counter shows differences which need to be fully understood if these telescopes are to be used for astronaut radiation risk assessments. Instrument performance is complicated by a variety of factors. A Monte Carlo-based technique was developed to model the behavior of both single element detectors in a proton beam, and the performance of a two-element, wide-angle telescope, in the trapped belt proton field inside the Space Shuttle. The technique is based on: (1) a radiation transport intranuclear-evaporation model that takes into account the charge and angular distribution of target fragments, (2) the Landau-Vavilov distribution of energy deposition allowing for electron escape, (3) the true detector geometry of the telescope, (4) coincidence and discriminator settings, (5) the spacecraft shielding geometry, and (6) the external space radiation environment, including albedo protons. The value of such detailed modeling and its implications for astronaut risk assessment are addressed. © 2001 Elsevier Science B.V. All rights reserved.
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance.
An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
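The classical survivor-curve estimate that such Bayesian methods build on is a linear fit of log10 survivors versus exposure time: the D value is the negative reciprocal slope, and F0 divided by D gives the delivered log reductions. A minimal frequentist sketch; the paper's contribution is the Bayesian treatment of these quantities, which is not shown here:

```python
import numpy as np

def d_value(times, log10_survivors):
    """D value from a survivor curve: negative reciprocal of the
    least-squares slope of log10(N) versus exposure time."""
    slope, _ = np.polyfit(times, log10_survivors, 1)
    return -1.0 / slope

def log_reductions(f0, d121):
    """Log reductions delivered by an F0 (equivalent minutes at 121 C)
    against an organism with the given D value at 121 C."""
    return f0 / d121

# Ideal first-order kill: the population drops one log every 1.5 min
times = np.array([0.0, 1.5, 3.0, 4.5, 6.0])
logs = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
D = d_value(times, logs)
```

Replacing the point estimates of the slope (and hence D, z, and F0) with posterior distributions is exactly the step the paper advocates.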
Systems Suitable for Information Professionals.
ERIC Educational Resources Information Center
Blair, John C., Jr.
1983-01-01
Describes computer operating systems applicable to microcomputers, noting hardware components, advantages and disadvantages of each system, local area networks, distributed processing, and a fully configured system. Lists of hardware components (disk drives, solid state disk emulators, input/output and memory components, and processors) and…
Simulation of speckle patterns with pre-defined correlation distributions.
Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S
2016-03-01
We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
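The coherent-imaging principle behind the method can be sketched in its simplest form: uniform random phases passed through a finite aperture and Fourier-transformed yield fully developed speckle with unit contrast. A minimal sketch; the aperture shape and sizes are arbitrary, and the paper's machinery for imposing a pre-defined correlation distribution is omitted:

```python
import numpy as np

def speckle(n=128, seed=0):
    """Fully developed speckle via a coherent-imaging (FFT) model:
    uniform random phases, a limited aperture, intensity = |field|^2."""
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
    pupil = np.zeros((n, n))
    pupil[: n // 4, : n // 4] = 1.0   # finite aperture -> finite speckle grain
    field = np.fft.fft2(pupil * phase)
    return np.abs(field) ** 2

I = speckle()
contrast = I.std() / I.mean()   # ~1 for fully developed speckle
```

Correlated sequences of patterns are obtained in the paper by correlating the input phase screens; the square relationship between input and output correlation coefficients follows from the intensity being the squared modulus of the field.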
NASA Astrophysics Data System (ADS)
Chaney, N.; Wood, E. F.
2014-12-01
The increasing accessibility of high-resolution land data (< 100 m) and high performance computing allows improved parameterizations of subgrid hydrologic processes in macroscale land surface models. Continental scale fully distributed modeling at these spatial scales is possible; however, its practicality for operational use is still unknown due to uncertainties in input data, model parameters, and storage requirements. To address these concerns, we propose a modeling framework that provides the spatial detail of a fully distributed model yet maintains the benefits of a semi-distributed model. In this presentation we will introduce DTOPLATS-MP, a coupling between the NOAH-MP land surface model and the Dynamic TOPMODEL hydrologic model. This new model captures a catchment's spatial heterogeneity by clustering high-resolution land datasets (soil, topography, and land cover) into hundreds of hydrologically similar units (HSUs). A prior DEM analysis defines the connections between each HSU. At each time step, the 1D land surface model updates each HSU; the HSUs then interact laterally via the subsurface and surface. When compared to the fully distributed form of the model, this framework allows a significant decrease in computation and storage while providing most of the same information and enabling parameter transferability. As a proof of concept, we will show how this new modeling framework can be run over CONUS at a 30-meter spatial resolution. For each catchment in the WBD HUC-12 dataset, the model is run between 2002 and 2012 using available high-resolution continental scale land and meteorological datasets over CONUS (dSSURGO, NLCD, NED, and NCEP Stage IV). For each catchment, the model is run with 1000 model parameter sets obtained from a Latin hypercube sample. This exercise will illustrate the feasibility of running the model operationally at continental scales while accounting for model parameter uncertainty.
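The clustering of high-resolution land data into HSUs can be illustrated with plain k-means on a toy attribute table. The abstract does not state which clustering algorithm is used; k-means and the two-feature data below are stand-ins for illustration only:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: lump rows (grid cells) with similar attributes
    into k clusters, analogous to forming HSUs from land data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each cell to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "catchment": cells described by (slope, topographic wetness index),
# drawn from two well-separated regimes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.1, 8.0], 0.02, (100, 2)),
               rng.normal([0.4, 3.0], 0.02, (100, 2))])
labels, centers = kmeans(X, 2)
```

Each resulting cluster would then be simulated once per time step by the 1D land surface model, rather than once per 30-m cell.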
Accelerated radial Fourier-velocity encoding using compressed sensing.
Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert
2014-09-01
Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels in the order of the voxel size. 
Thus, compared to normal Phase Contrast measurements delivering only mean velocities, no additional scan time is necessary to retrieve meaningful velocity spectra in small vessels. Copyright © 2013. Published by Elsevier GmbH.
Expression for travel time based on diffusive wave theory: applicability and considerations
NASA Astrophysics Data System (ADS)
Aguilera, J. C.; Escauriaza, C. R.; Passalacqua, P.; Gironas, J. A.
2017-12-01
Prediction of hydrological response is of utmost importance when dealing with urban planning, risk assessment, or water resources management issues. With the advent of climate change, special care must be taken with respect to variations in rainfall and runoff due to rising temperature averages. Nowadays, while typical workstations have adequate power to run distributed routing hydrological models, it is still not enough for modeling on-the-fly, a crucial ability in a natural disaster context, where rapid decisions must be made. Semi-distributed travel time models, which compute a watershed's hydrograph without explicitly solving the full shallow water equations, appear as an attractive approach to rainfall-runoff modeling since, like fully distributed models, they also superimpose a grid on the watershed and compute runoff based on cell parameter values. These models are heavily dependent on the travel time expression for an individual cell. Many models make use of expressions based on kinematic wave theory, which is not applicable in cases where watershed storage is important, such as on mild slopes. This work presents a new expression for concentration times in overland flow, based on diffusive wave theory, which considers not only the effects of storage but also the effects of upstream contribution. Setting the upstream contribution equal to zero gives an expression consistent with previous work on diffusive wave theory; on the other hand, neglecting storage effects (i.e., diffusion) is shown to be equivalent to kinematic wave theory, currently used in many spatially distributed travel time models. The newly found expression is shown to depend on plane discretization, particularly when dealing with very non-kinematic cases. This results from the upstream contribution, which grows downstream, relative to the plane length.
This result also sheds some light on the limits of applicability of the expression: when a certain kinematic threshold is reached, the expression is no longer valid, and one must fall back on kinematic wave theory for lack of a better option. This expression could be used to improve currently published spatially distributed travel time models, since they would become applicable in many new cases.
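The kinematic-wave limit that the new diffusive-wave expression reduces to (when storage is neglected) is the familiar overland-flow travel time derived from Manning's equation under steady rainfall excess. A sketch in SI units; the parameter values are invented and the paper's diffusive-wave expression itself is not reproduced:

```python
def kinematic_travel_time(length, slope, n_manning, rain_intensity):
    """Kinematic-wave travel time for overland flow on a plane.

    Classical formula t = (n*L / sqrt(S))**0.6 * i**(-0.4), SI units:
    length in m, rain intensity in m/s, result in seconds.
    """
    return ((n_manning * length) / slope ** 0.5) ** 0.6 * rain_intensity ** -0.4

# 50 m plane, 1% slope, n = 0.1, 25 mm/h rainfall excess
i = 25e-3 / 3600
t_mild = kinematic_travel_time(50.0, 0.01, 0.1, i)
t_steep = kinematic_travel_time(50.0, 0.10, 0.1, i)
```

On steeper planes the travel time shortens and the kinematic assumption holds well; on mild slopes, where storage matters, this is exactly where the diffusive-wave correction proposed above becomes important.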
A comparison of decentralized, distributed, and centralized vibro-acoustic control.
Frampton, Kenneth D; Baumann, Oliver N; Gardonio, Paolo
2010-11-01
Direct velocity feedback control of structures is well known to increase structural damping and thus reduce vibration. In multi-channel systems the way in which the velocity signals are used to inform the actuators ranges from decentralized control, through distributed or clustered control to fully centralized control. The objective of distributed controllers is to exploit the anticipated performance advantage of the centralized control while maintaining the scalability, ease of implementation, and robustness of decentralized control. However, and in seeming contradiction, some investigations have concluded that decentralized control performs as well as distributed and centralized control, while other results have indicated that distributed control has significant performance advantages over decentralized control. The purpose of this work is to explain this seeming contradiction in results, to explore the effectiveness of decentralized, distributed, and centralized vibro-acoustic control, and to expand the concept of distributed control to include the distribution of the optimization process and the cost function employed.
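The damping effect of direct velocity feedback can be seen on a single-mode structure: feeding back a force proportional to the negative velocity adds viscous damping, so stored vibrational energy decays faster. A toy single-channel (decentralized) sketch with arbitrary parameters:

```python
def simulate(gain, steps=20000, dt=1e-4):
    """Single-mode structure (mass-spring-damper) under decentralized
    direct velocity feedback u = -gain * v; returns the mechanical
    energy remaining after an initial displacement."""
    m, k, c = 1.0, 1000.0, 0.5       # mass, stiffness, inherent damping
    x, v = 1e-3, 0.0                 # initial displacement, at rest
    for _ in range(steps):
        a = (-k * x - c * v - gain * v) / m   # feedback adds to damping
        v += a * dt                  # semi-implicit Euler integration
        x += v * dt
    return 0.5 * m * v ** 2 + 0.5 * k * x ** 2

open_loop = simulate(0.0)
with_dvf = simulate(5.0)
```

In the multi-channel setting of the paper, the question is how the velocity signals are shared among actuators (decentralized, clustered, or centralized), but each channel's effect is ultimately this added-damping mechanism.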
O’Callaghan, John; Mohan, Helen M; Sharrock, Anna; Gokani, Vimal; Fitzgerald, J Edward; Williams, Adam P; Harries, Rhiannon L
2017-01-01
Objectives: Applications for surgical training have declined over the last decade, and anecdotally the costs of training at the expense of the surgical trainee are rising. We aimed to quantify the costs surgical trainees are expected to cover for postgraduate training. Design: Prospective, cross-sectional, questionnaire-based study. Setting/Participants: A non-mandatory online questionnaire for UK-based trainees was distributed nationally. A similar national questionnaire was distributed for Ireland, taking into account differences between the healthcare systems. Only fully completed responses were included. Results: There were 848 and 58 fully completed responses from doctors based in the UK and Ireland, respectively. Medical students in the UK reported a significant increase in debt on graduation by 55% from £17 892 (2000-2004) to £27 655 (2010-2014) (p<0.01). 41% of specialty trainees in the UK indicated that some or all of their study budget was used to fund mandatory regional teaching. By the end of training, a surgical trainee in the UK spends on average £9105 on courses, £5411 on conferences and £4185 on exams, not covered by the training budget. Irish trainees report similarly high costs. Most trainees undertake a higher degree during their postgraduate training. The cost of achieving the mandatory requirements for completion of training ranges between £20 000 and £26 000 (dependent on specialty), except oral and maxillofacial surgery, which is considerably higher (£71 431). Conclusions: Medical students are graduating with significantly larger debt than before. Surgical trainees achieve their educational requirements at substantial personal expenditure. To encourage graduates to pursue and remain in surgical training, urgent action is required to fund the mandatory requirements and annual training costs for completion of training and provide greater transparency to inform doctors of what their postgraduate training costs will be.
This is necessary to increase diversity in surgery, reduce debt load and ensure surgery remains a popular career choice. PMID:29146646
NASA Astrophysics Data System (ADS)
Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus
2017-04-01
Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional/national scales and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practices data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability using sampled sets of input datasets and parameter distributions. Statistical analysis of the simulation results was used to quantify the overall full uncertainty of the modelling approach. With this study we can relate the variation in model results to the different sources of uncertainty for each ecosystem.
Furthermore, we were able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the interpretability of modelling results. We applied the methodology to a regional inventory to assess the overall modelling uncertainty of a regional N2O and NO emission inventory for the state of Saxony, Germany.
Validating modelled variable surface saturation in the riparian zone with thermal infrared images
NASA Astrophysics Data System (ADS)
Glaser, Barbara; Klaus, Julian; Frei, Sven; Frentress, Jay; Pfister, Laurent; Hopp, Luisa
2015-04-01
Variable contributing areas and hydrological connectivity have become prominent new concepts for hydrologic process understanding in recent years. The dynamic connectivity within the hillslope-riparian-stream (HRS) system is known to have a first-order control on discharge generation, and the riparian zone in particular functions as a runoff-buffering or runoff-producing zone. However, despite their importance, the highly dynamic processes of contraction and extension of saturation within the riparian zone and their impact on runoff generation are still not fully understood. In this study, we analysed the potential of a distributed, fully coupled and physically based model (HydroGeoSphere) to represent the spatial and temporal water flux dynamics of a forested headwater HRS system (6 ha) in western Luxembourg. The model was set up and parameterised under consideration of experimentally derived knowledge of catchment structure and was run for a period of four years (October 2010 to August 2014). For model evaluation, we focused especially on the temporally varying spatial patterns of surface saturation. We used ground-based thermal infrared (TIR) imagery to map surface saturation with a high spatial and temporal resolution and collected 20 panoramic snapshots of the riparian zone (ca. 10 by 20 m) under different hydrologic conditions. These TIR panoramas were used in addition to several classical discharge and soil moisture time series for a spatially distributed model validation. In a manual calibration process we optimised model parameters (e.g. porosity, saturated hydraulic conductivity, evaporation depth) to achieve a better agreement between observed and modelled discharge and soil moisture. The subsequent validation of surface saturation patterns by a visual comparison of processed TIR panoramas and corresponding model output panoramas revealed an overall good accordance for all but one region, which was always too dry in the model.
However, quantitative comparisons of modelled and observed saturated pixel percentages and of their modelled and measured relationships to concurrent discharges revealed remarkable similarities. During the calibration process we observed that surface saturation patterns were mostly affected by changing the soil properties of the topsoil in the riparian zone, but that the discharge behaviour did not change substantially at the same time. This effect of various spatial patterns occurring concomitant to a nearly unchanged integrated response demonstrates the importance of spatially distributed validation data. Our study clearly benefited from using different kinds of data - spatially integrated and distributed, temporally continuous and discrete - for the model evaluation procedure.
Non-uniform Solar Temperature Field on Large Aperture, Fully-Steerable Telescope Structure
NASA Astrophysics Data System (ADS)
Liu, Yan
2016-09-01
In this study, a 110-m fully steerable radio telescope was used as an analysis platform and the integral parametric finite element model of the antenna structure was built in the ANSYS thermal analysis module. The boundary conditions of periodic air temperature, solar radiation, long-wave radiation, shadows of the surrounding environment, etc. were computed at 30 min intervals under a cloudless sky on a summer day, i.e., worst-case climate conditions. The transient structural temperatures were then analyzed over a period of several days of sunshine with a rational initial structural temperature distribution until the whole set of structural temperatures converged to the results obtained the day before. The non-uniform temperature field distribution of the entire structure and the main reflector surface RMS were acquired according to changes in pitch and azimuth angle over the observation period. Variations in the solar cooker effect over time and spatial distributions in the secondary reflector were observed to elucidate the mechanism of the effect. The results presented here not only provide valuable real-time data for the design, construction, sensor arrangement and thermal deformation control of actuators but also provide a troubleshooting reference for existing actuators.
Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1999-01-01
A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
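As a minimal illustration of the centralized LQG building block that the decentralized framework adapts, the sketch below computes a discrete-time LQR feedback gain for a toy double-integrator plant; the system matrices are illustrative assumptions, not the orbital dynamics used in the study.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double-integrator plant (illustrative; not the equatorial-orbit model)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting in the LQG cost
R = np.array([[1.0]])    # control weighting

# Solve the discrete algebraic Riccati equation and form the optimal gain
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The closed loop A - B K is stable: all eigenvalues inside the unit circle
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

In the decentralized formulation, each node would carry out an equivalent computation locally while exchanging estimates over the fully connected network.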
Effect of rib angle on local heat/mass transfer distribution in a two-pass rib-roughened channel
NASA Technical Reports Server (NTRS)
Chandra, P. R.; Han, J. C.; Lau, S. C.
1987-01-01
The naphthalene sublimation technique is used to investigate the heat transfer characteristics of turbulent air flow in a two-pass channel. A test section that resembles the internal cooling passages of gas turbine airfoils is employed. The local Sherwood numbers on the ribbed walls were found to be 1.5-6.5 times those for a fully developed flow in a smooth square duct. Depending on the rib angle-of-attack and the Reynolds number, the average ribbed-wall Sherwood numbers were 2.5-3.5 times higher than the fully developed values.
NASA Astrophysics Data System (ADS)
Georgiou, Mike F.; Sfakianakis, George N.; Johnson, Gary; Douligeris, Christos; Scandar, Silvia; Eisler, E.; Binkley, B.
1994-05-01
In an effort to improve patient care while considering cost-effectiveness, we developed a Picture Archiving and Communication System (PACS), which combines imaging cameras, computers and other peripheral equipment from multiple nuclear medicine vendors. The PACS provides fully digital clinical operation, which includes acquisition and automatic organization of patient data, distribution of the data to all networked units inside the department and other remote locations, digital analysis and quantitation of images, digital diagnostic reading of image studies, and permanent data archival with the ability for fast retrieval. The PACS enabled us to significantly reduce the amount of film used, and we are currently proceeding with implementing a film-less laboratory. Hard copies are produced on paper or transparent sheets for non-digitally connected parts of the hospital. The PACS provides fully digital operation which is faster, more reliable, better organized and managed, and overall more efficient than a conventional film-based operation. In this paper, the integration of the various PACS components from multiple vendors is reviewed, and the impact of PACS on our clinical operation, with its advantages and limitations, is analyzed.
Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.
Kim, Do-Hyun; Park, Jinha; Kahng, Byungnam
2017-01-01
The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in mean field limit revealed that memories can be retrieved without any error up to a finite storage capacity of O(N), where N is the system size. Beyond the threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been further developed toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with recent experimental discovery that the number of connections of each neuron seems to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a different pattern of associative memory retrieval from that obtained on the fully connected network: the storage capacity becomes tremendously enhanced but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates are also obtained on several real neural networks and are indeed similar to that on scale-free model networks.
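A minimal sketch of the classical fully connected Hopfield setup described above, with Hebbian storage and sign-threshold retrieval; the network size, pattern count and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                            # neurons, stored patterns (well below ~0.14 N capacity)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian learning on a fully connected network, zero self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Synchronous sign-threshold dynamics."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1            # break ties deterministically
    return state

# Flip 10% of one pattern's bits and check associative retrieval
noisy = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1
overlap = (recall(noisy) @ patterns[0]) / N   # 1.0 means perfect retrieval
```

On a scale-free topology, W would instead be masked by a heterogeneous adjacency matrix, which is what produces the enhanced-capacity-with-errors regime the paper analyses.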
Li, Yijun; Wang, Cheng; Zhu, Yibo; Zhou, Xiaohong; Xiang, Yu; He, Miao; Zeng, Siyu
2017-03-15
This work presents a fully integrated graphene field-effect transistor (GFET) biosensor for the label-free detection of lead ions (Pb2+) in aqueous media, which for the first time implements the G-quadruplex structure-switching biosensing principle in graphene nanoelectronics. We experimentally illustrate the biomolecular interplay in which G-rich DNA single strands with one end confined on the graphene surface can specifically interact with Pb2+ ions and switch into G-quadruplex structures. Since the structure-switching of electrically charged DNA strands can disrupt the charge distribution in the vicinity of the graphene surface, the carrier equilibrium in the graphene sheet may be altered, manifested by the conductivity variation of the GFET. The experimental data and theoretical analysis show that our devices are capable of label-free and specific quantification of Pb2+ with a detection limit down to 163.7 ng/L. These results provide the first verification of the signaling competency of G-quadruplex structure-switching in graphene electronic biosensors. Combined with the advantages of a compact device structure and a convenient electrical signal, a label-free GFET biosensor for Pb2+ monitoring is enabled with promising application potential. Copyright © 2016 Elsevier B.V. All rights reserved.
Thermal equilibrium and statistical thermometers in special relativity.
Cubero, David; Casado-Pascual, Jesús; Dunkel, Jörn; Talkner, Peter; Hänggi, Peter
2007-10-26
There is an intense debate in the recent literature about the correct generalization of Maxwell's velocity distribution in special relativity. The most frequently discussed candidate distributions include the Jüttner function as well as modifications thereof. Here we report results from fully relativistic one-dimensional molecular dynamics simulations that resolve the ambiguity. The numerical evidence unequivocally favors the Jüttner distribution. Moreover, our simulations illustrate that the concept of "thermal equilibrium" extends naturally to special relativity only if a many-particle system is spatially confined. They make evident that "temperature" can be statistically defined and measured in an observer frame independent way.
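For reference, the one-dimensional Jüttner function discussed above can be written down and its normalization checked numerically (units with c = k_B = 1; the mass and temperature values are arbitrary illustrations):

```python
import numpy as np
from scipy.special import k1       # modified Bessel function K_1
from scipy.integrate import quad

# One-dimensional Juttner distribution:
#   f(p) = exp(-E(p)/T) / (2 m K_1(m/T)),   E(p) = sqrt(p^2 + m^2)
m, T = 1.0, 0.5                    # illustrative mass and temperature

def juttner(p):
    return np.exp(-np.sqrt(p**2 + m**2) / T) / (2 * m * k1(m / T))

# The normalization constant 2 m K_1(m/T) follows from p = m sinh(theta)
norm, _ = quad(juttner, -50, 50)   # should integrate to 1
```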
Ahn, Hyo-Sung; Kim, Byeong-Yeon; Lim, Young-Hun; Lee, Byung-Hun; Oh, Kwang-Kyo
2018-03-01
This paper proposes three coordination laws for optimal energy generation and distribution in an energy network composed of a physical flow layer and a cyber communication layer. Energy flows through the physical layer, but its generation and flow are coordinated by distributed algorithms on the basis of communication information. First, distributed energy generation and energy distribution laws are proposed in a decoupled manner, without considering the interactive characteristics between energy generation and energy distribution. Second, a joint coordination law is designed that treats energy generation and energy distribution in a coupled manner, taking the interactive characteristics into account. Third, to handle over- or under-generation, an energy distribution law for networks with batteries is designed. The coordination laws proposed in this paper are fully distributed in the sense that they make optimal decisions using only relative information among neighboring nodes. Numerical simulations illustrate the validity of the proposed distributed coordination laws.
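A minimal sketch of the "relative information among neighboring nodes" idea, using a standard average-consensus update on a hypothetical ring network; this is a generic illustration of fully distributed coordination, not the paper's specific laws.

```python
import numpy as np

# Ring network of 5 nodes; each node updates using only the differences
# between its own state and its neighbours' states (relative information).
n = 5
x = np.array([10.0, 2.0, 7.0, 1.0, 5.0])        # hypothetical local generation states
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
eps = 0.2                                        # step size below 1/max_degree

target = x.mean()                                # symmetric updates preserve the average
for _ in range(200):
    x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                            for i in range(n)])
# All states converge to the network average without any central coordinator.
```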
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
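As background for why the precision parameter matters: the prior expected number of clusters under a Dirichlet process with precision α and n observations is E[K] = Σ_{i=1}^{n} α/(α+i−1), which the short sketch below evaluates (the α values are illustrative).

```python
def expected_clusters(alpha, n):
    """Prior mean of the number of distinct clusters in a Dirichlet
    process mixture with precision alpha and n observations:
    E[K] = sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i) for i in range(n))

# A larger precision implies more clusters expected a priori,
# which is why the prior on alpha drives inferences about clustering.
e_small = expected_clusters(0.5, 100)
e_large = expected_clusters(5.0, 100)
```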
NASA Astrophysics Data System (ADS)
Zhu, Dazhao; Chen, Youhua; Fang, Yue; Hussain, Anwar; Kuang, Cuifang; Zhou, Xiaoxu; Xu, Yingke; Liu, Xu
2017-12-01
A compact microscope system for three-dimensional (3-D) super-resolution imaging is presented. The super-resolution capability of the system is based on a size-reduced effective 3-D point spread function generated through the fluorescence emission difference (FED) method. The appropriate polarization direction distribution and manipulation allows the panel active area of the spatial light modulator to be fully utilized. This allows simultaneous modulation of the incident light by two kinds of phase masks to be performed with a single spatial light modulator in order to generate a 3-D negative spot. The system is more compact than standard 3-D FED systems while maintaining all the advantages of 3-D FED microscopy. The experimental results demonstrated the improvement in 3-D resolution by nearly 1.7 times and 1.6 times compared to the classic confocal resolution in the lateral and axial directions, respectively.
An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry
NASA Astrophysics Data System (ADS)
Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro
This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive by utilizing a particle filter on a 2D grid map generated by projecting 3D points on to the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
Benefits of cloud computing for PACS and archiving.
Koch, Patrick
2012-01-01
The goal of cloud-based services is to provide easy, scalable access to computing resources and IT services. The healthcare industry requires a private cloud that adheres to government mandates designed to ensure the privacy and security of patient data while enabling access by authorized users. Cloud-based computing in the imaging market has evolved from a service that provided cost-effective disaster recovery for archived data to fully featured PACS and vendor-neutral archiving services that can address the needs of healthcare providers of all sizes. Healthcare providers worldwide are now using the cloud to distribute images to remote radiologists while supporting advanced reading tools, to deliver radiology reports and imaging studies to referring physicians, and to provide redundant data storage. Vendor-managed cloud services eliminate large capital investments in equipment and maintenance, as well as staffing for the data center, reducing the total cost of ownership for the healthcare provider.
Optimal deployment of resources for maximizing impact in spreading processes
2017-01-01
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. We show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples. PMID:28900013
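A minimal discrete-time SIR simulation on a random contact network, illustrating the kind of spreading dynamics the framework optimizes over; it is not the dynamic message-passing method itself, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_edge, p_inf = 50, 0.1, 0.3                 # nodes, edge density, infection probability

# Symmetric random contact network without self-loops
upper = np.triu(rng.random((n, n)) < p_edge, 1).astype(int)
adj = upper + upper.T

infected = np.zeros(n, dtype=bool); infected[0] = True   # single seed node
recovered = np.zeros(n, dtype=bool)
for _ in range(20):
    exposure = adj @ infected.astype(int)       # infectious neighbours per node
    p_new = 1 - (1 - p_inf) ** exposure         # independent transmission per contact
    new_inf = ~infected & ~recovered & (rng.random(n) < p_new)
    recovered |= infected                       # infectious nodes recover after one step
    infected = new_inf

total_reached = int((infected | recovered).sum())
```

Choosing which nodes to seed or immunize so as to maximize or minimize `total_reached` under a budget is exactly the type of resource-deployment problem the message-passing framework addresses.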
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
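A generic Pearson chi-square goodness-of-fit step of the kind such component statistics build on, applied to hypothetical capture counts; this is not the paper's exact residual-based partitioning.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical capture counts over four occasions; under H0 the capture
# probability is equal on every occasion.
observed = np.array([30, 25, 20, 25])
expected = np.full(4, observed.sum() / 4)

# Pearson chi-square statistic and its p-value (df = categories - 1)
stat = ((observed - expected) ** 2 / expected).sum()
p_value = chi2.sf(stat, df=3)
```

The paper's test assembles several such chi-square components, each informative about a specific departure (behavioral variation, heterogeneity) from model M(t).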
Bayesian median regression for temporal gene expression data
NASA Astrophysics Data System (ADS)
Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.
2007-09-01
Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T2-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.
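The frequentist core of median regression, minimizing absolute rather than squared residuals, can be sketched on synthetic temporal data (the polynomial is reduced to a line and all data are simulated; this is not the paper's full Bayesian MCMC model).

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic temporal "expression" profile with heavy-tailed noise,
# where median regression is more robust than a least-squares fit.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 60)
y = 1.0 + 2.0 * t + rng.laplace(0.0, 0.1, t.size)

# Median (0.5-quantile) regression = minimize the sum of absolute residuals
X = np.column_stack([np.ones_like(t), t])
loss = lambda b: np.abs(y - X @ b).sum()
beta = minimize(loss, x0=np.zeros(2), method="Nelder-Mead",
                options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-6}).x
# beta should recover roughly (1.0, 2.0)
```

The Bayesian version places a prior on the coefficients and returns their full posterior, from which the paper's Bayes factor test for condition effects is derived.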
C. S., Lim; M. S., Shaharuddin; W. Y., Sam
2013-01-01
Introduction: A cross sectional study was conducted to estimate the risk of exposure to lead via the tap water ingestion pathway for the population of Seri Kembangan (SK). Methodology: Using a purposive sampling method, 100 respondents who fulfilled the inclusion criteria were selected from different housing areas of SK based on geographical population distribution. Residents with filtration systems installed were excluded from the study. Questionnaires were administered to determine water consumption-related information and demographics. Two water samples (first-flushed and fully-flushed) were collected from the kitchen tap of each household using HDPE bottles. A total of 200 water samples were collected, and lead concentrations were determined using a Graphite Furnace Atomic Absorption Spectrophotometer (GFAAS). Results: The mean lead concentration was 3.041 ± SD 6.967 µg/L in first-flushed samples and 1.064 ± SD 1.103 µg/L in fully-flushed samples. Of the first-flushed samples, four (4) exceeded the National Drinking Water Quality Standard (NDWQS) lead limit of 10 µg/L, while none of the fully-flushed samples had a lead concentration exceeding the limit. There was a significant difference between first-flushed and fully-flushed samples: flushing elicited a significant change in the lead concentration of the water (Z = -5.880, p<0.05). Lead concentrations in both first-flushed and fully-flushed samples were not significantly different across the nine (9) areas of Seri Kembangan (p>0.05). Serdang Jaya had the highest lead concentration in first-flushed water (mean = 10.44 ± SD 17.83 µg/L), while Taman Universiti Indah had the highest in fully-flushed water (mean = 1.45 ± SD 1.83 µg/L). Exposure assessment found that the mean chronic daily intake (CDI) was 0.028 ± SD 0.034 µg day-1 kg-1. None of the hazard quotient (HQ) values was greater than 1.
Conclusion: The overall quality of the water supply in SK was satisfactory, because most of the parameters tested in this study were within the permissible limits and only a few samples exceeded the standard values for lead and pH. The non-carcinogenic risk attributed to ingestion of lead in SK tap water was found to be negligible. PMID:23445691
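A screening-level sketch of the chronic daily intake and hazard quotient calculation used in such exposure assessments; the intake rate, body weight and reference dose below are illustrative assumptions, not the study's values.

```python
# Screening-level exposure calculation: CDI = C * IR / BW, HQ = CDI / RfD.
# The concentration is the study's mean fully-flushed value; the other
# inputs are hypothetical assumptions for illustration only.
conc_ug_per_L = 1.064        # mean fully-flushed lead concentration (from the study)
intake_L_per_day = 2.0       # assumed adult drinking-water intake
body_weight_kg = 60.0        # assumed body weight
rfd_ug_per_kg_day = 3.5      # assumed oral reference dose for lead

cdi = conc_ug_per_L * intake_L_per_day / body_weight_kg   # ug/kg/day
hq = cdi / rfd_ug_per_kg_day

# A hazard quotient below 1 is interpreted as negligible
# non-carcinogenic risk, consistent with the study's conclusion.
```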
A novel approach to EPID-based 3D volumetric dosimetry for IMRT and VMAT QA
NASA Astrophysics Data System (ADS)
Alhazmi, Abdulaziz; Gianoli, Chiara; Neppl, Sebastian; Martins, Juliana; Veloza, Stella; Podesta, Mark; Verhaegen, Frank; Reiner, Michael; Belka, Claus; Parodi, Katia
2018-06-01
Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are relatively complex treatment delivery techniques and require quality assurance (QA) procedures. Pre-treatment dosimetric verification represents a fundamental QA procedure in the daily clinical routine in radiation therapy. The purpose of this study is to develop an EPID-based approach to reconstruct the 3D dose distribution imparted to a virtual cylindrical water phantom, to be used for plan-specific pre-treatment dosimetric verification of IMRT and VMAT plans. For each depth, the planar 2D dose distributions acquired in air were back-projected and convolved with depth-specific scatter and attenuation kernels. The kernels were obtained by using scatter and attenuation models to iteratively estimate the parameters from a set of reference measurements. The derived parameters served as a look-up table for the reconstruction of arbitrary measurements. The summation of the reconstructed 3D dose distributions resulted in the integrated 3D dose distribution of the treatment delivery. The accuracy of the proposed approach was validated on clinical IMRT and VMAT plans by means of gamma evaluation, comparing the reconstructed 3D dose distributions with Octavius measurements. The comparison was carried out using (3%, 3 mm) criteria, scoring 99% and 96% passing rates for IMRT and VMAT, respectively. An accuracy comparable to that of the commercial device for 3D volumetric dosimetry was demonstrated. In addition, five IMRT and five VMAT plans were validated against the 3D dose calculation performed by the TPS in a water phantom using the same passing-rate criteria. The median passing rate within the ten treatment plans was 97.3%, and the lowest was 95%. Moreover, the reconstructed 3D distribution is obtained without predictions relying on forward dose calculation and without an external phantom or dosimetric devices.
Thus, the approach provides a fully automated, fast and easy QA procedure for plan-specific pre-treatment dosimetric verification.
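A toy version of the back-projection step described above: a 2D in-air map is blurred with a depth-specific scatter kernel and scaled by attenuation at each depth, then summed over depths; the kernel widths and attenuation coefficient are invented for illustration and are not the fitted parameters of the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical open 2D field measured in air (arbitrary units)
fluence = np.zeros((64, 64))
fluence[24:40, 24:40] = 1.0

mu = 0.005                    # assumed attenuation coefficient per mm
depths_mm = [50, 100, 150]    # illustrative reconstruction depths

# Depth-specific kernel: broader scatter blur and stronger attenuation
# with depth, mimicking the look-up-table role of the fitted kernels.
dose_3d = np.stack([
    np.exp(-mu * d) * gaussian_filter(fluence, sigma=1.0 + 0.02 * d)
    for d in depths_mm
])

integrated = dose_3d.sum(axis=0)   # summed (integrated) dose distribution
```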
Effect of particle size distribution on the separation efficiency in liquid chromatography.
Horváth, Krisztián; Lukács, Diána; Sepsey, Annamária; Felinger, Attila
2014-09-26
In this work, the influence of the width of the particle size distribution (PSD) on chromatographic efficiency is studied. The PSD is described by a lognormal distribution. A theoretical framework is developed to calculate the height equivalent to a theoretical plate (HETP) for different PSDs. Our calculations demonstrate and verify that wide particle size distributions have a significant effect on the separation efficiency of molecules. The differences between fully porous and core-shell phases regarding the influence of PSD width are presented and discussed. The efficiencies of bimodal phases were also calculated; the results showed that these packings do not have any advantage over unimodal phases. Copyright © 2014 Elsevier B.V. All rights reserved.
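One way to see why PSD width matters: with the mean particle size held fixed, a wider lognormal PSD raises the mean squared diameter, with which the C-term of the plate height scales. The sketch below is an illustrative model, not the paper's full framework.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_dp = 3.0e-6   # nominal 3 um particle diameter (illustrative)

def mean_dp2(rel_sigma, n=200_000):
    """Mean squared diameter for a lognormal PSD of given relative width,
    with the mean diameter held fixed at mean_dp. Since the C-term of the
    plate height scales with dp^2, a wider PSD at the same mean raises
    the effective C-term contribution."""
    sigma = np.sqrt(np.log(1 + rel_sigma**2))
    mu = np.log(mean_dp) - sigma**2 / 2      # keeps E[dp] = mean_dp
    dp = rng.lognormal(mu, sigma, n)
    return (dp**2).mean()

narrow = mean_dp2(0.05)   # tight PSD, e.g. core-shell-like
wide = mean_dp2(0.40)     # broad PSD
# Analytically E[dp^2] = mean_dp^2 * (1 + rel_sigma^2), so wide/narrow ~ 1.16
```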
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav
2015-11-01
Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, creating a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.
2015-05-07
the proper depth-dependent pressure distribution before intruder motion begins. We model the intruder as a rigid surface within the granular body by...assigning corresponding planar nodes to move as a rigid body at a constant rate. This resembles a fully rough surface due to the no-slip condition, no...Stokesian fluids. Despite its remarkable capability to predict experimental locomotion and force distributions on mobile bodies in granular media, there is
NASA Astrophysics Data System (ADS)
Weinheimer, Oliver; Wielpütz, Mark O.; Konietzke, Philip; Heussel, Claus P.; Kauczor, Hans-Ulrich; Brochhausen, Christoph; Hollemann, David; Savage, Dasha; Galbán, Craig J.; Robinson, Terry E.
2017-02-01
Cystic fibrosis (CF) results in severe bronchiectasis in nearly all cases. Bronchiectasis is a disease in which parts of the airways are permanently dilated. The development and progression of bronchiectasis are not evenly distributed over the lungs; rather, individual functional units are affected differently. We developed a fully automated method for the precise calculation of lobe-based airway taper indices. Calculating taper indices requires several preparatory algorithms: the airway tree is segmented, skeletonized, and transformed into a rooted acyclic graph, and this graph is used to label the airways. A modified version of the previously validated integral-based method (IBM) for airway geometry determination is then applied. The rooted graph together with the airway lumen and wall information is used to calculate the airway taper indices. Using a computer-generated phantom simulating 10 cross sections of airways, we present results showing the high accuracy of the modified IBM. The new taper index calculation method was applied to 144 volumetric inspiratory low-dose MDCT scans, acquired from 36 children with mild CF at four time points (baseline, 3 months, 1 year, 2 years). We found a moderate correlation with the visual lobar Brody bronchiectasis scores of three raters (r² = 0.36, p < .0001). The taper index has the potential to be a precise imaging biomarker, but further improvements are needed. In combination with other imaging biomarkers, taper index calculation can be an important tool for monitoring disease progression and the individual treatment of patients with bronchiectasis.
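As a rough illustration of what a taper index captures (a schematic stand-in, not the authors' exact IBM-based method), one can fit the rate at which lumen diameter shrinks with distance along a labeled airway path. The sample measurements below are invented.

```python
# Schematic taper calculation: least-squares slope of lumen diameter
# vs. centreline distance. Sample values are hypothetical.
def taper_index(distances_mm, diameters_mm):
    """A more negative slope means the airway narrows faster (normal
    tapering); a slope near zero suggests the cylindrical, dilated
    airways seen in bronchiectasis."""
    n = len(distances_mm)
    mx = sum(distances_mm) / n
    my = sum(diameters_mm) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(distances_mm, diameters_mm))
    sxx = sum((x - mx) ** 2 for x in distances_mm)
    return sxy / sxx

tapering = taper_index([0, 10, 20, 30], [8.0, 6.5, 5.2, 4.1])  # narrows
dilated = taper_index([0, 10, 20, 30], [8.0, 7.9, 8.1, 8.0])   # barely narrows
print(tapering < dilated)  # True: the first airway tapers, the second does not
```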
Advanced signal processing based on support vector regression for lidar applications
NASA Astrophysics Data System (ADS)
Gelfusa, M.; Murari, A.; Malizia, A.; Lungaroni, M.; Peluso, E.; Parracino, S.; Talebzadeh, S.; Vega, J.; Gaudio, P.
2015-10-01
The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues in the deployment of LIDAR-based systems is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal-to-noise ratio is typically achieved by averaging a large number (on the order of hundreds) of successive laser pulses. This approach can be effective but presents significant limitations. First, it places great stress on the laser source, particularly in systems for the automatic monitoring of large areas over long periods. Second, it can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performance has been tested for various statistical distributions of the noise and also for other disturbances of the acquired signal, such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
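The paper's filter is based on support vector regression; as a fully self-contained stand-in, the sketch below denoises a synthetic, exponentially decaying single-pulse return with a simple Gaussian-kernel smoother (a much simpler regressor than SVR, shown only to illustrate the idea of software denoising of a single shot instead of averaging hundreds of pulses). The signal and noise parameters are invented.

```python
import math
import random

def kernel_smooth(t, y, bandwidth):
    """Nadaraya-Watson Gaussian-kernel regression -- a lightweight
    stand-in for the SVR-based filter described in the abstract."""
    smoothed = []
    for ti in t:
        w = [math.exp(-0.5 * ((ti - tj) / bandwidth) ** 2) for tj in t]
        s = sum(w)
        smoothed.append(sum(wj * yj for wj, yj in zip(w, y)) / s)
    return smoothed

# Synthetic single-pulse return: exponential decay plus heavy noise.
rng = random.Random(1)
t = [i * 0.01 for i in range(400)]
clean = [math.exp(-ti / 1.5) for ti in t]
noisy = [c + rng.gauss(0.0, 0.2) for c in clean]
denoised = kernel_smooth(t, noisy, bandwidth=0.08)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

print(rmse(denoised, clean) < rmse(noisy, clean))  # smoothing cuts the error
```

An SVR filter plays the same role as `kernel_smooth` here but, thanks to its epsilon-insensitive loss, is far less sensitive to outliers and to the noise statistics, which is the robustness property the abstract emphasises.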
Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors
NASA Technical Reports Server (NTRS)
Flatley, Thomas P.
2015-01-01
SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground-based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.
Fang, L; Jia, Y; Mishra, V; Chaparro, C; Vlasko-Vlasov, V K; Koshelev, A E; Welp, U; Crabtree, G W; Zhu, S; Zhigadlo, N D; Katrych, S; Karpinski, J; Kwok, W K
2013-01-01
Iron-based superconductors could be useful for electricity distribution and superconducting magnet applications because of their relatively high critical current densities and upper critical fields. SmFeAsO₀.₈F₀.₁₅ is of particular interest as it has the highest transition temperature among these materials. Here we show that by introducing a low density of correlated nano-scale defects into this material by heavy-ion irradiation, we can increase its critical current density to as much as 2 × 10⁷ A cm⁻² at 5 K, the highest ever reported for an iron-based superconductor, without reducing its critical temperature of 50 K. We also observe a notable reduction in the thermodynamic superconducting anisotropy, from 8 to 4, upon irradiation. We develop a model based on anisotropic electron scattering that predicts that the superconducting anisotropy can be tailored via correlated defects in semimetallic, fully gapped type II superconductors.
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, the two temporal images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is used to measure the distance between different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances with adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and an initial change map is generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with several state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness for OBCD.
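The G-statistic distance between two object histograms used in the second step can be sketched as a G-test of homogeneity on raw bin counts. This is a minimal sketch; the example counts are invented.

```python
import math

def g_statistic(hist_a, hist_b):
    """G-test of homogeneity between two histograms of raw counts.

    G = 2 * sum(O * ln(O / E)), where the expected counts E come from the
    pooled distribution. Larger G means the two histograms differ more;
    identical histograms give G = 0.
    """
    total_a, total_b = sum(hist_a), sum(hist_b)
    grand = total_a + total_b
    g = 0.0
    for count_a, count_b in zip(hist_a, hist_b):
        pooled = count_a + count_b
        for observed, total in ((count_a, total_a), (count_b, total_b)):
            expected = pooled * total / grand
            if observed > 0:  # 0 * ln(0) contributes nothing
                g += observed * math.log(observed / expected)
    return 2.0 * g

same = g_statistic([10, 20, 30], [10, 20, 30])
diff = g_statistic([10, 20, 30], [30, 20, 10])
print(same < diff)  # identical histograms give G = 0; dissimilar ones do not
```

In the OBCD pipeline such a distance would be computed separately for the spectral and textural histograms of each object pair and then combined with adaptive weights into the object heterogeneity measure.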