Numerical Investigation of a Model Scramjet Combustor Using DDES
NASA Astrophysics Data System (ADS)
Shin, Junsu; Sung, Hong-Gye
2017-04-01
Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), a hybrid scheme combining a Reynolds-averaged Navier-Stokes (RANS) approach with large eddy simulation (LES). The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed for this purpose. The code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a 4th-order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower-wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. The mixing efficiency and total pressure recovery factor were also provided in order to assess the performance of the combustor.
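The MUSCL reconstruction named above raises spatial accuracy by replacing piecewise-constant cell values with limited linear slopes before the flux is evaluated. As an illustrative sketch only (the paper pairs MUSCL with the AUSMPW+ flux; the minmod limiter below is a common but assumed choice):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """MUSCL reconstruction of left/right states at cell faces.

    u: 1D array of cell-averaged values. Returns (uL, uR), where uL[i]
    is the left state at the face to the right of interior cell i+1 and
    uR[i] is the right state at the face to its left.
    """
    du = np.diff(u)                  # u[i+1] - u[i]
    slope = minmod(du[:-1], du[1:])  # limited slope in each interior cell
    uL = u[1:-1] + 0.5 * slope       # extrapolate to right face of cell
    uR = u[1:-1] - 0.5 * slope       # extrapolate to left face of cell
    return uL, uR
```

For smooth monotone data the limiter returns the exact linear slope; at local extrema it returns zero, which is what prevents spurious oscillations.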
NASA Astrophysics Data System (ADS)
Jin, G.
2012-12-01
Multiphase flow modeling is an important numerical tool for better understanding transport processes in fields including, but not limited to, petroleum reservoir engineering, remediation of groundwater contamination, and risk evaluation of greenhouse gases such as CO2 injected into deep saline reservoirs. However, accurate numerical modeling of multiphase flow still presents many challenges, which arise from the inherent tight coupling and strongly nonlinear nature of the governing equations and from highly heterogeneous media. Counter-current flow, caused by adverse relative mobility contrast combined with gravitational and capillary forces, introduces additional numerical instability. Recently, multipoint flux approximation (MPFA) has become a subject of extensive research and has demonstrated great success in reducing grid orientation effects compared to the conventional single-point upstream (SPU) weighting scheme, especially in higher dimensions. However, the presently available MPFA schemes are mathematically targeted to certain types of grids in two dimensions; a more general form of MPFA scheme is needed for both 2-D and 3-D problems. In this work, a new upstream weighting scheme based on multipoint directional incoming fluxes is proposed, which incorporates the full permeability tensor to account for the heterogeneity of the porous media. First, the multiphase governing equations are decoupled into an elliptic pressure equation and a hyperbolic or parabolic saturation equation, depending on whether gravitational and capillary forces are present. Next, a dual secondary grid (called the finite volume grid) is formulated from a primary grid (called the finite element grid) to create interaction regions for each grid cell over the entire simulation domain.
Such a discretization must ensure the conservation of mass and maintain the continuity of the Darcy velocity across the boundaries between neighboring interaction regions. The pressure field is then implicitly calculated from the pressure equation, which in turn yields the velocity field for directional flux calculation at each grid node. The directional flux at the center of each interaction surface is also calculated by interpolation from the element nodal fluxes using shape functions. The MPFA scheme is performed by a specific linear combination of all incoming fluxes into the upstream cell, represented by either nodal fluxes or interpolated surface boundary fluxes, to produce an upwind directional-flux-weighted relative mobility at the center of the interaction region boundary. This upwind-weighted relative mobility is then used to calculate the saturation of each fluid phase explicitly. The proposed upwind weighting scheme has been implemented in a mixed finite element-finite volume (FE-FV) method, which allows for handling complex reservoir geometry with second-order accuracy in approximating the primary variables. The numerical solver has been tested on several benchmark problems. The application of the proposed scheme to migration path analysis of CO2 injected into deep saline reservoirs in 3-D has demonstrated its ability and robustness in handling multiphase flow with adverse mobility contrast in highly heterogeneous porous media.
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have also reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered scheme for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique, and the iterative implicit pressure, explicit saturation (IMPES) technique, which produced acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the permeability fields of the proposed method, TBM, and USRM revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. The impact of autocorrelation lengths on immiscible fluid flow transport was also analyzed and quantified.
The finite number of lines used in the TBM resulted in a visual banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in designing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
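The SUPERBEE limiter mentioned above is a standard flux limiter. A minimal sketch of its limiter function phi(r), where r is the ratio of consecutive solution gradients (this is the textbook definition, not code from the paper):

```python
import numpy as np

def superbee(r):
    """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2)).

    r is the ratio of consecutive gradients. phi = 0 suppresses the
    high-order correction at extrema (r < 0), while values up to 2
    sharpen fronts, which is why SUPERBEE resolves shocks crisply.
    """
    r = np.asarray(r, dtype=float)
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))
```

A limited scheme uses phi to blend between first-order upwind (phi = 0) and a second-order flux, keeping the update total-variation diminishing.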
NASA Astrophysics Data System (ADS)
Kim, Bong Kyu; Chung, Hwan Seok; Chang, Sun Hyok; Park, Sangjo
We propose and demonstrate a scheme for enhancing the performance of optical access networks with Manchester-coded downstream and re-modulated NRZ-coded upstream signals. It is achieved by threshold level control of a limiting amplifier at the receiver, and the minimum sensitivity of the upstream signal is significantly improved for the re-modulation scheme at 5 Gb/s Manchester-coded downstream and 2.488 Gb/s NRZ upstream data rates.
NASA Astrophysics Data System (ADS)
Nadarajah, Nishaanthan; Attygalle, Manik; Wong, Elaine; Nirmalathas, Ampalavanapillai
2005-10-01
This paper proposes two novel optical layer schemes for intercommunication between customers in a passive optical network (PON). The proposed schemes use radio frequency (RF) subcarrier multiplexed transmission for intercommunication between customers in conjunction with upstream access to the central office (CO) at baseband. One scheme employs a narrowband fiber Bragg grating (FBG) placed close to the star coupler in the feeder fiber of the PON, while the other uses an additional short-length distribution fiber from the star coupler to each customer unit for the redirection of customer traffic. In both schemes, only one optical transmitter is required at each optical network unit (ONU) for the transmission of customer traffic and upstream access traffic. Moreover, unlike in previously reported techniques, downstream bandwidth is not consumed by customer traffic. The authors experimentally verify the feasibility of both schemes with 1.25 Gb/s upstream baseband transmission to the CO and 155 Mb/s customer data transmission on the RF carrier. The experimental results obtained from both schemes are compared, and the power budgets are calculated to analyze the scalability of each scheme. Further, the proposed schemes are discussed in terms of upgradability of the transmission bit rates for the upstream access traffic, bandwidth requirements at the customer premises, dispersion tolerance, and stability issues for practical implementations of the network.
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Xu, Anni; Ma, Jiamin
2018-01-01
A DBA scheme based on a load balance algorithm (LBA) and a wavelength recycle mechanism (WRM) for multi-wavelength upstream transmission is proposed in this paper. According to their 1 Gbps and 10 Gbps line rates, ONUs are grouped into different VPONs. To facilitate wavelength management, a resource pool is proposed to record wavelength state. To enable quantitative analysis, a mathematical model describing the metro-access network (MAN) environment is presented. For the 10G-EPON upstream, the load balance algorithm is designed to ensure fair load distribution among 10G-OLTs. For the 1G-EPON upstream, the wavelength recycle mechanism is designed to share the remaining wavelengths. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. 
Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
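The viscous term of the Implicit Hybrid Upwinding flux described above can be sketched in a few lines. This is an illustrative simplification only: quadratic relative permeabilities and unit viscosities are assumed here, not taken from the paper.

```python
def ihu_viscous_flux(uT, sL, sR):
    """Viscous part of an Implicit Hybrid Upwinding flux (sketch).

    Both phase mobilities are evaluated at the cell upstream of the
    TOTAL velocity uT, unlike phase-based upstream weighting where
    each phase selects its own upstream cell. Quadratic relative
    permeabilities and unit viscosities are assumed for illustration.
    """
    s = sL if uT >= 0.0 else sR      # saturation upstream of total velocity
    lam_w = s ** 2                   # wetting-phase mobility
    lam_n = (1.0 - s) ** 2           # non-wetting-phase mobility
    fw = lam_w / (lam_w + lam_n)     # fractional flow of wetting phase
    return fw * uT                   # wetting-phase viscous flux
```

Because the upwind direction follows the sign of uT alone, this term stays differentiable in saturation along a fixed flow direction, which is what helps Newton convergence.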
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
Evaluation of Euler fluxes by a high-order CFD scheme: shock instability
NASA Astrophysics Data System (ADS)
Tu, Guohua; Zhao, Xiaohui; Mao, Meiliang; Chen, Jianqiang; Deng, Xiaogang; Liu, Huayong
2014-05-01
The construction of Euler fluxes is an important step in shock-capturing/upwind schemes. It is well known that unsuitable fluxes are responsible for many shock anomalies, such as the carbuncle phenomenon. Three kinds of flux vector splittings (FVSs) as well as three kinds of flux difference splittings (FDSs) are evaluated for shock instability using a fifth-order weighted compact nonlinear scheme. The three FVSs are the Steger-Warming splitting, the van Leer splitting, and the kinetic flux vector splitting (KFVS). The three FDSs are Roe's splitting, advection upstream splitting method (AUSM) type splitting, and Harten-Lax-van Leer (HLL) type splitting. Numerical results indicate that FVSs and highly dissipative FDSs carry a relatively lower risk of shock instability than low-dissipation FDSs. However, none of the fluxes evaluated in the present study can entirely avoid the shock instability. Generally, the shock instability may be caused by any of the following factors: low dissipation, high Mach number, unsuitable grid distribution, large grid aspect ratio, and the relative shock-internal flow state (or position) between the upstream and downstream shock waves. It turns out that the most important factor is the relative shock-internal state. If the shock-internal state is closer to the downstream state, the computation is more susceptible to shock instability. Wall-normal grid distribution has a greater influence on the shock instability than wall-azimuthal grid distribution, because wall-normal grids directly affect the shock-internal position. High shock intensity poses a high risk of shock instability, but its influence is not as great as that of the shock-internal state. Large grid aspect ratio is also a source of shock instability. Some results for a second-order scheme and a first-order scheme are also given.
The comparison between the high-order scheme and the two low-order schemes indicates that high-order schemes are at a higher risk of shock instability. Adding an entropy fix is very helpful in suppressing the shock instability for the two low-order schemes. When the high-order scheme is used, the entropy fix still works well for Roe's flux, but its effect on the Steger-Warming flux is marginal and less clear.
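The entropy fix referred to above is commonly Harten's smoothing of near-zero wave speeds in the Roe dissipation term; a minimal sketch of that standard form (the exact variant used in the paper is not specified here):

```python
def harten_entropy_fix(lam, eps):
    """Harten's entropy fix (sketch): smooths |lambda| near zero.

    Replaces |lambda| by (lambda^2 + eps^2) / (2*eps) when |lambda| < eps,
    so the dissipation coefficient never vanishes at sonic points,
    which suppresses expansion shocks and some carbuncle-type anomalies.
    """
    a = abs(lam)
    if a >= eps:
        return a          # far from a sonic point: no modification
    return (lam * lam + eps * eps) / (2.0 * eps)  # parabolic smoothing
```

The fixed value equals |lambda| at |lambda| = eps, so the modification is continuous, and it bottoms out at eps/2 instead of zero.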
NASA Astrophysics Data System (ADS)
Khan, Yousaf; Afridi, Muhammad Idrees; Khan, Ahmed Mudassir; Rehman, Waheed Ur; Khan, Jahanzeb
2014-09-01
Hybrid wavelength-division multiplexed/time-division multiplexed passive optical access networks (WDM/TDM-PONs) combine the advanced features of both WDM and TDM PONs to provide a cost-effective access network solution. We demonstrate and analyze the transmission performance and power budget of a colorless hybrid WDM/TDM-PON scheme. A 10-Gb/s downstream differential phase-shift keying (DPSK) signal and remodulated upstream on/off keying (OOK) data signals are transmitted over 25 km of standard single-mode fiber. Simulation results show error-free transmission with adequate power margins in both downstream and upstream directions, which proves the applicability of the proposed scheme to future passive optical access networks. The power budget constrains both the PON splitting ratio and the distance between the optical line terminal (OLT) and optical network unit (ONU).
An upstream burst-mode equalization scheme for 40 Gb/s TWDM PON based on optimized SOA cascade
NASA Astrophysics Data System (ADS)
Sun, Xiao; Chang, Qingjiang; Gao, Zhensen; Ye, Chenhui; Xiao, Simiao; Huang, Xiaoan; Hu, Xiaofeng; Zhang, Kaibin
2016-02-01
We present a novel upstream burst-mode equalization scheme based on an optimized SOA cascade for 40 Gb/s TWDM-PON. The power equalizer is placed at the OLT and consists of two SOAs, two circulators, an optical NOT gate, and a variable optical attenuator. The first SOA operates in the linear region and acts as a pre-amplifier that drives the second SOA into the saturation region. The upstream burst signals are equalized through the second SOA via nonlinear amplification. Theoretical analysis shows that this scheme provides dynamic range suppression of up to 16.7 dB without any dynamic control or signal degradation. In addition, a total power budget extension of 9.3 dB for loud packets and 26 dB for soft packets has been achieved, allowing longer transmission distance and an increased splitting ratio.
NASA Astrophysics Data System (ADS)
El-Nahal, Fady I.
2017-01-01
We investigate a wavelength-division-multiplexing passive optical network (WDM-PON) with a centralized lightwave source and direct detection. The system is demonstrated for symmetric 10 Gbit/s differential phase-shift keying (DPSK) downstream signals and on-off keying (OOK) upstream signals. A wavelength-reuse scheme is employed to carry the upstream data, using a reflective semiconductor optical amplifier (RSOA) as an intensity modulator at the optical network unit (ONU). The constant-intensity property of the DPSK modulation format maintains a high extinction ratio (ER) of the downstream signal and reduces crosstalk to the upstream signal. The bit error rate (BER) performance of our scheme shows that the proposed 10 Gbit/s symmetric WDM-PON can achieve error-free transmission over 25 km of fiber with low power penalty.
NASA Technical Reports Server (NTRS)
Schlesinger, R. E.
1985-01-01
The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.
Evolution of Advection Upstream Splitting Method Schemes
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
2010-01-01
This paper focuses on the evolution of advection upstream splitting method (AUSM) schemes. The main ingredients that have led to the development of modern computational fluid dynamics (CFD) methods, and thus the ideas behind AUSM, are reviewed. First and foremost is the concept of upwinding. Second is the use of the Riemann problem in constructing the numerical flux in the finite-volume setting. Third is the necessity of including all physical processes, as characterised by the linear (convection) and nonlinear (acoustic) fields. Fourth is the realisation that the flux can be separated into convection and pressure fluxes. The rest of this review briefly outlines the technical evolution of AUSM; more details can be found in the cited references. Keywords: computational fluid dynamics methods, hyperbolic systems, advection upstream splitting method, conservation laws, upwinding, CFD
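The convection/pressure separation at the heart of AUSM rests on polynomial splittings of the interface Mach number and pressure. A sketch of the classic first-order splittings follows; AUSM variants (AUSM+, AUSMPW+, etc.) refine these polynomials, so treat this as illustrative:

```python
def ausm_split_mach(M):
    """Van Leer-type split Mach numbers M+ and M- with M+ + M- = M.

    Subsonic: quadratic polynomials; supersonic: pure one-sided upwinding.
    """
    if abs(M) <= 1.0:
        Mp = 0.25 * (M + 1.0) ** 2
        Mm = -0.25 * (M - 1.0) ** 2
    else:
        Mp = 0.5 * (M + abs(M))
        Mm = 0.5 * (M - abs(M))
    return Mp, Mm

def ausm_split_pressure(M):
    """AUSM pressure-splitting weights P+ and P-, with P+ + P- = 1.

    The interface pressure is p_half = P+(ML) * pL + P-(MR) * pR.
    """
    if abs(M) <= 1.0:
        Pp = 0.25 * (M + 1.0) ** 2 * (2.0 - M)
        Pm = 0.25 * (M - 1.0) ** 2 * (2.0 + M)
    else:
        Pp = 0.5 * (M + abs(M)) / M
        Pm = 0.5 * (M - abs(M)) / M
    return Pp, Pm
```

The interface Mach number M+(ML) + M-(MR) then advects the convective flux from whichever side is upstream, while the pressure term is assembled separately from the split weights.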
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
Investigation of flow turning phenomenon - Effect of upstream and downstream propagation
NASA Astrophysics Data System (ADS)
Baum, Joseph D.
1988-01-01
Upstream acoustic-wave propagation in flow injected laterally through the boundary layer of a tube (simulating the flow in a solid-rocket motor) is investigated analytically. A noniterative linearized-block implicit scheme is used to solve the time-dependent compressible Navier-Stokes equations, and the results are presented in extensive graphs and characterized. Acoustic streaming interaction is shown to be significantly greater for upstream than for downstream propagation.
NASA Astrophysics Data System (ADS)
Wong, Elaine; Zhao, Xiaoxue; Chang-Hasnain, Connie J.
2008-04-01
As wavelength division multiplexed passive optical networks (WDM-PONs) are expected to be first deployed to transport high-capacity services to business customers, real-time knowledge of fiber/device faults and of the location of such faults will be a necessity to guarantee reliability. Nonetheless, adding fault monitoring capability should incur only minimal cost associated with upgrades to the network. In this work, we propose and experimentally demonstrate a fault monitoring and localization scheme based on a highly sensitive and potentially low-cost monitor in conjunction with vertical cavity surface-emitting lasers (VCSELs). The VCSELs are used as upstream transmitters in the WDM-PON. The proposed scheme benefits from the high reflectivity of the top distributed Bragg reflector (DBR) mirror of optically injection-locked (OIL) VCSELs to reflect monitoring channels back to the central office for monitoring. Characterization of the fault monitor demonstrates high sensitivity, low bandwidth requirements, and potentially low output power. The proposed fault monitoring scheme incurs only a 0.5 dB penalty on the upstream transmissions of the existing infrastructure.
Nonlinearity Analysis for Efficient Modelling of Long-Term CO2 Storage
NASA Astrophysics Data System (ADS)
Li, Boxiao; Benson, Sally; Tchelepi, Hamdi
2014-05-01
Numerical simulation is widely used to predict the long-term fate of the injected CO2 in a storage formation. Performing large-scale simulations is often limited by the computational speed, where convergence failure of Newton iterations is one of the main bottlenecks. In order to design better numerical schemes and faster nonlinear solvers for modelling long-term CO2 storage, the nonlinearity in the simulations has to be analysed thoroughly, and the cause of convergence failures has to be identified clearly. We focus on the transport of CO2 and water in the presence of viscous, gravity, and heterogeneous capillary forces. We investigate the nonlinearity of the discrete transport equation obtained from finite-volume discretization with single-point phase-based upstream weighting, which is the industry standard. In particular, we study the discretized flux expressed as a function of saturations at the upstream and downstream (with respect to the total velocity) of each gridblock interface. We analyse the locations and complexity of the unit-flux, zero-flux, and inflection lines on the numerical flux. The unit- and zero-flux lines, referred to as kinks, correspond to a change of the flow direction, which often occurs when strong buoyancy and capillarity are present. We observe that these kinks and inflection lines are major sources of nonlinear convergence difficulties. We find that kinks create more challenges than inflection lines, especially when their locations depend on both the upstream and downstream saturations of the total velocity. When the flow is driven by viscous and gravity forces (e.g., during CO2 injection), one kink will occur in the numerical flux and its location depends only on the upstream saturation. 
However, when capillarity is dominant (e.g., during the post-injection period), two kinks will occur and both are functions of the upstream and downstream saturations, causing severe convergence difficulties particularly when heterogeneity is present. Our analysis of the numerical flux theoretically describes the cause of the convergence failures for simulating long-term CO2 storage. This understanding provides useful guidance in designing numerical schemes and nonlinear solvers that overcome the convergence bottlenecks. For example, to reduce the nonlinearity introduced by the two kinks in the presence of capillarity, we modify the method of Cances (2009) to discretize the capillary flux. Consequently, only one kink will occur even for coupled viscous, buoyancy, and heterogeneous capillary forces, and the kink depends only on the upstream saturation of the total velocity. An efficient nonlinear solver that is a significant refinement of the works of Jenny et al. (2009) and Wang and Tchelepi (2013) has also been proposed and demonstrated. References [1] C. Cances. Finite volume scheme for two-phase flows in heterogeneous porous media involving capillary pressure discontinuities. ESAIM:M2AN., 43, 973-1001, (2009). [2] P. Jenny, H.A. Tchelepi, and S.H. Lee. Unconditionally convergent nonlinear solver for hyperbolic conservation laws with S-shaped flux functions. J. Comput. Phys., 228, 7497-7512, (2009). [3] X. Wang and H.A. Tchelepi. Trust-region based solver for nonlinear transport in heterogeneous porous media. J. Comput. Phys., 253, 114-137, (2013).
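The co-current/counter-current kink discussed above can be seen directly in the homogeneous wetting-phase flux. The sketch below assumes quadratic relative permeabilities and unit viscosities for illustration; the actual flux functions of the study are not reproduced here:

```python
def wetting_phase_flux(s, uT, dg):
    """Wetting-phase flux with viscous and buoyancy contributions (sketch).

    s:  wetting-phase saturation
    uT: total velocity
    dg: buoyancy factor (density difference times gravity/transmissibility)
    Quadratic relative permeabilities and unit viscosities are assumed.
    When uT and dg have opposite signs, the factor (uT + lam_n * dg)
    changes sign at some saturation: the flow direction flips there,
    which is the kink that challenges Newton's method.
    """
    lam_w = s ** 2
    lam_n = (1.0 - s) ** 2
    fw = lam_w / (lam_w + lam_n)     # fractional flow of wetting phase
    return fw * (uT + lam_n * dg)
```

For uT = 1 and dg = -4, the bracket vanishes at s = 0.5: below it the wetting phase flows against the total velocity, above it with it, and the upstream cell chosen by phase-based upwinding switches accordingly.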
Effects of Nose Bluntness on Hypersonic Boundary-Layer Receptivity and Stability Over Cones
NASA Technical Reports Server (NTRS)
Kara, Kursat; Balakumar, Ponnampalam; Kandil, Osama A.
2011-01-01
The receptivity to freestream acoustic disturbances and the stability properties of hypersonic boundary layers are numerically investigated for boundary-layer flows over a 5-degree straight cone at a freestream Mach number of 6.0. To compute the shock and the interaction of the shock with the instability waves, the Navier-Stokes equations in axisymmetric coordinates are solved. In the governing equations, the inviscid and viscous flux vectors are discretized using a fifth-order accurate weighted essentially non-oscillatory (WENO) scheme. A third-order accurate total-variation-diminishing (TVD) Runge-Kutta scheme is employed for time integration. After the mean flow field is computed, disturbances are introduced at the upstream end of the computational domain. The appearance of instability waves near the nose region and the receptivity of the boundary layer to slow-mode acoustic waves are investigated. Computations confirm the stabilizing effect of nose bluntness and the role of the entropy layer in the delay of boundary-layer transition. The current solutions, compared with experimental observations and other computational results, exhibit good agreement.
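The third-order TVD Runge-Kutta integrator mentioned above is conventionally the Shu-Osher scheme, built from convex combinations of forward-Euler stages; a sketch (assuming this standard form, which the abstract does not spell out):

```python
def tvd_rk3_step(u, L, dt):
    """One step of the third-order TVD (SSP) Runge-Kutta scheme of
    Shu and Osher. Each stage is a forward-Euler step, and the convex
    combination weights preserve the total-variation-diminishing
    property of a single Euler step.

    u:  current state (scalar or array)
    L:  callable giving the semi-discrete spatial operator, du/dt = L(u)
    dt: time step
    """
    u1 = u + dt * L(u)                              # first Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))        # second stage
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))  # third stage
```

For a linear operator L(u) = -u this reproduces the third-order Taylor expansion of exp(-dt), confirming the formal order of accuracy.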
Prediction of Geomagnetic Activity and Key Parameters in High-latitude Ionosphere
NASA Technical Reports Server (NTRS)
Khazanov, George V.; Lyatsky, Wladislaw; Tan, Arjun; Ridley, Aaron
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the US Space Weather Program. Prediction reliability depends on the prediction method and the elements included in the prediction scheme. Two of the main elements of such a scheme are an appropriate geomagnetic activity index and an appropriate coupling function (the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity). We have developed a new index of geomagnetic activity, the Polar Magnetic (PM) index, and an improved version of the solar wind coupling function. The PM index is similar to the existing polar cap PC index, but it shows much better correlation with upstream solar wind/IMF data and other events in the magnetosphere and ionosphere. We investigate the correlation of the PM index with upstream solar wind/IMF data for 10 years (1995-2004), covering both low and high solar activity. We have also introduced a new prediction function for predicting the cross-polar-cap voltage and Joule heating based on both the PM index and upstream solar wind/IMF data. As we show, such a prediction function significantly increases the reliability of prediction of these important parameters. The correlation coefficients between the actual and predicted values of these parameters are approximately 0.9 and higher.
An explicit mixed numerical method for mesoscale model
NASA Technical Reports Server (NTRS)
Hsu, H.-M.
1981-01-01
A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either the shallow-water equations in one dimension or the primitive equations in three dimensions. Since the technique is explicit and two-time-level, it conserves computer and programming resources.
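A one-dimensional sketch of this mixed method for a linear advection-diffusion equation follows; a positive advection speed is assumed, which fixes the upstream direction, and boundary values are simply held fixed:

```python
import numpy as np

def mixed_step(u, c, nu, dx, dt):
    """One time step of the mixed explicit scheme (sketch):
    forward difference in time, first-order upstream differencing for
    the advective term (c > 0 assumed), and central differencing for
    the diffusive term. Boundary values are held fixed for simplicity.
    """
    un = u.copy()
    un[1:-1] = (u[1:-1]
                - c * dt / dx * (u[1:-1] - u[:-2])       # upstream advection
                + nu * dt / dx ** 2
                * (u[2:] - 2.0 * u[1:-1] + u[:-2]))      # central diffusion
    return un
```

Being explicit and two-time-level, the update needs only the current field, matching the abstract's point about economy of computer and programming resources; stability requires the usual CFL and diffusion-number restrictions.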
NASA Technical Reports Server (NTRS)
Corda, Stephen (Inventor); Smith, Mark Stephen (Inventor); Myre, David Daniel (Inventor)
2008-01-01
The present invention blocks and/or attenuates the upstream travel of acoustic disturbances or sound waves from a flight vehicle or components of a flight vehicle traveling at subsonic speed using a local injection of a high molecular weight gas. Additional benefit may also be obtained by lowering the temperature of the gas. Preferably, the invention has a means of distributing the high molecular weight gas from the nose, wing, component, or other structure of the flight vehicle into the upstream or surrounding air flow. Two techniques for distribution are direct gas injection and sublimation of the high molecular weight solid material from the vehicle surface. The high molecular weight and low temperature of the gas significantly decreases the local speed of sound such that a localized region of supersonic flow and possibly shock waves are formed, preventing the upstream travel of sound waves from the flight vehicle.
NASA Astrophysics Data System (ADS)
Wong, Elaine; Zhao, Xiaoxue; Chang-Hasnain, Connie J.; Hofmann, Werner; Amann, Marcus C.
2007-11-01
In this paper, we will discuss the utilization of optically injection-locked (OIL) 1.55 μm vertical-cavity surface-emitting lasers (VCSELs) for operation as low-cost, stable, directly modulated, and potentially uncooled transmitters, whereby the injection-locking master source is furnished by modulated downstream signals. Such a transmitter will find useful application in wavelength division multiplexed passive optical networks (WDM-PONs), which are actively being developed to meet the ever-increasing bandwidth demands of end users. Our scheme eliminates the need for external injection-locking optical sources, external modulators, and wavelength stabilization circuitry. We show through experiments that the injection-locked VCSEL favors low injection powers and responds strongly only to the carrier, not the modulated data, of the downstream signal. Further, we will discuss results from experimental studies performed on the dependence of OIL-VCSELs in bidirectional networks on the degree of Rayleigh backscattered signal and extinction ratio. We show that error-free upstream performance can be achieved when the upstream signal to Rayleigh backscattering ratio is greater than 13.4 dB, with minimal dependence on the downstream extinction ratio. We will also review a fault monitoring and localization scheme based on a highly sensitive yet low-cost monitor comprising a low output power broadband source and low bandwidth detectors. The proposed scheme benefits from the high-reflectivity top distributed Bragg reflector mirror of the OIL-VCSEL, incurring only a minimal penalty on the upstream transmissions of the existing infrastructure. Such a scheme provides fault monitoring without further investment in upgrading the customer premises.
NASA Astrophysics Data System (ADS)
Wong, Elaine; Nadarajah, Nishaanthan; Chae, Chang-Joon; Nirmalathas, Ampalavanapillai; Attygalle, Sanjeewa M.
2006-01-01
We describe two optical layer schemes which simultaneously facilitate local area network emulation and automatic protection switching against distribution fiber breaks in passive optical networks. One scheme employs a narrowband fiber Bragg grating placed close to the star coupler in the feeder fiber of the passive optical network, while the other uses an additional short length of distribution fiber from the star coupler to each customer for the redirection of the customer traffic. Both schemes use RF subcarrier multiplexed transmission for intercommunication between customers, in conjunction with upstream access to the central office at baseband. Failure detection and automatic protection switching are performed independently, in a distributed manner, by each optical network unit located at the customer premises. The restoration of traffic transported between the central office and an optical network unit in the event of a distribution fiber break is performed by interconnecting adjacent optical network units and carrying out signal transmissions via an independent but interconnected optical network unit. Such a protection mechanism enables multiple adjacent optical network units to be simultaneously protected by a single optical network unit utilizing its maximum available bandwidth. We experimentally verify the feasibility of both schemes with 1.25 Gb/s upstream baseband transmission to the central office and 155 Mb/s local area network data transmission on an RF subcarrier frequency. The experimental results obtained from both schemes are compared, and the power budgets are calculated to analyze the scalability of each scheme.
Transition in a Supersonic Boundary-Layer Due to Roughness and Acoustic Disturbances
NASA Technical Reports Server (NTRS)
Balakumar, P.
2003-01-01
The transition process induced by the interaction of an isolated roughness element with acoustic disturbances in the free stream is numerically investigated for a boundary layer over a flat plate with a blunted leading edge at a free-stream Mach number of 3.5. The roughness is assumed to be of Gaussian shape, and the acoustic disturbances are introduced as a boundary condition at the outer field. The governing equations are solved using the 5th-order accurate weighted essentially non-oscillatory (WENO) scheme for space discretization and the third-order total-variation-diminishing (TVD) Runge-Kutta scheme for time integration. The steady field induced by the two- and three-dimensional roughness is also computed. The flow field induced by two-dimensional roughness exhibits different characteristics depending on the roughness height. At small roughness heights the flow passes smoothly over the roughness, at moderate heights the flow separates downstream of the roughness, and at larger roughness heights the flow separates both upstream and downstream of the roughness. Computations also show that the disturbances inside the boundary layer are due to the direct interaction of the acoustic waves with the boundary layer, and that the isolated roughness plays a minor role in generating instability waves.
Turbomachine rotor with improved cooling
Hultgren, Kent Goran; McLaurin, Leroy Dixon; Bertsch, Oran Leroy; Lowe, Perry Eugene
1998-01-01
A gas turbine rotor has an essentially closed loop cooling air scheme in which cooling air drawn from the compressor discharge air that is supplied to the combustion chamber is further compressed, cooled, and then directed to the aft end of the turbine rotor. Downstream seal rings attached to the downstream face of each rotor disc direct the cooling air over the downstream disc face, thereby cooling it, and then to cooling air passages formed in the rotating blades. Upstream seal rings attached to the upstream face of each disc direct the heated cooling air away from the blade root while keeping the disc thermally isolated from the heated cooling air. From each upstream seal ring, the heated cooling air flows through passages in the upstream discs and is then combined and returned to the combustion chamber from which it was drawn.
Self-healing ring-based WDM-PON
NASA Astrophysics Data System (ADS)
Zhou, Yang; Gan, Chaoqin; Zhu, Long
2010-05-01
In this paper, a survivable ring-based wavelength-division-multiplexing (WDM) passive optical network (PON) for fiber protection is proposed. Protections for the feeder fiber and distribution fiber are independent in the scheme. The optical line terminal (OLT) and optical network units (ONUs) can automatically switch to the protection link when a fiber failure occurs. No protection distribution fiber is required in the scheme. Cost-effective components are used in the ONUs to minimize the cost of the network. A simulation study is performed to demonstrate the scheme. Its results show good performance for the upstream and downstream signals.
NASA Astrophysics Data System (ADS)
Huyakorn, P. S.; Panday, S.; Wu, Y. S.
1994-06-01
A three-dimensional, three-phase numerical model is presented for simulating the movement of non-aqueous-phase liquids (NAPLs) through porous and fractured media. The model is designed for practical application to a wide variety of contamination and remediation scenarios involving light or dense NAPLs in heterogeneous subsurface systems. The model formulation is first derived for three-phase flow of water, NAPL, and air (or vapor) in porous media. The formulation is then extended to handle fractured systems using the dual-porosity and discrete-fracture modeling approaches. The model accommodates a wide variety of boundary conditions, including withdrawal and injection well conditions, which are treated rigorously using fully implicit schemes. The three-phase formulation collapses to its simpler forms when air-phase dynamics are neglected, capillary effects are neglected, or two-phase air-liquid or liquid-liquid systems with one or two active phases are considered. A Galerkin procedure with upstream weighting of fluid mobilities, storage matrix lumping, and fully implicit treatment of nonlinear coefficients and well conditions is used. A variety of nodal connectivity schemes leading to finite-difference, finite-element, and hybrid spatial approximations in three dimensions are incorporated in the formulation. Selection of primary variables and evaluation of the terms of the Jacobian matrix for the Newton-Raphson linearized equations are discussed. The various nodal lattice options, and their significance for computational time and memory requirements with regard to the block-Orthomin solution scheme, are noted. Aggressive time-stepping schemes and under-relaxation formulas implemented in the code further alleviate the computational burden.
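Upstream weighting of fluid mobilities, as used in the formulation above, can be sketched for a single connection between two nodes. The function names and the two-point flux form below are illustrative assumptions, not the paper's implementation:

```python
def upstream_mobility(p_i, p_j, rho_g_dz, mob_i, mob_j):
    """Single-point upstream weighting: the phase mobility at the
    interface between nodes i and j is taken from the node with the
    higher flow potential (pressure plus gravity head), i.e. the
    upstream node with respect to the phase flow direction."""
    dphi = (p_i - p_j) - rho_g_dz   # potential drop driving flow i -> j
    return mob_i if dphi >= 0.0 else mob_j

def interface_flux(T_ij, p_i, p_j, rho_g_dz, mob_i, mob_j):
    """Two-point flux with upstream-weighted mobility; T_ij is the
    geometric transmissibility of the i-j connection."""
    mob = upstream_mobility(p_i, p_j, rho_g_dz, mob_i, mob_j)
    return T_ij * mob * ((p_i - p_j) - rho_g_dz)
```

Taking the mobility from the upstream node rather than averaging it is what suppresses the nonphysical oscillations that plague central weighting in hyperbolic saturation regimes.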
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Baeder, James D.
2014-01-21
A new class of compact-reconstruction weighted essentially non-oscillatory (CRWENO) schemes was introduced (Ghosh and Baeder in SIAM J Sci Comput 34(3): A1678–A1706, 2012) with high spectral resolution and essentially non-oscillatory behavior across discontinuities. The CRWENO schemes use solution-dependent weights to combine lower-order compact interpolation schemes, yielding a high-order compact scheme for smooth solutions and a non-oscillatory compact scheme near discontinuities. The new schemes result in lower absolute errors and improved resolution of discontinuities and smaller length scales, compared to the weighted essentially non-oscillatory (WENO) scheme of the same order of convergence. Several improvements to the smoothness-dependent weights, proposed in the literature in the context of the WENO schemes, address the drawbacks of the original formulation. This paper explores these improvements in the context of the CRWENO schemes and compares the different formulations of the non-linear weights for flow problems with small length scales as well as discontinuities. Simplified one- and two-dimensional inviscid flow problems are solved to demonstrate the numerical properties of the CRWENO schemes and their different formulations. Canonical turbulent flow problems, the decay of isotropic turbulence and the shock-turbulence interaction, are solved to assess the performance of the schemes for the direct numerical simulation of compressible, turbulent flows.
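For reference, the original smoothness-dependent weights of Jiang and Shu, which both the WENO and CRWENO formulations build on, can be sketched as follows. This is a minimal illustration using the standard stencil layout and epsilon value, not code from this paper:

```python
import numpy as np

def weno5_weights(f, eps=1e-6):
    """Jiang-Shu nonlinear weights for a fifth-order WENO reconstruction
    at the right interface, from a five-point stencil f[0..4] centered
    on f[2]. Returns the weights of the three candidate substencils."""
    # smoothness indicators of the three substencils
    b0 = 13/12 * (f[0] - 2*f[1] + f[2])**2 + 1/4 * (f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12 * (f[1] - 2*f[2] + f[3])**2 + 1/4 * (f[1] - f[3])**2
    b2 = 13/12 * (f[2] - 2*f[3] + f[4])**2 + 1/4 * (3*f[2] - 4*f[3] + f[4])**2
    gamma = np.array([0.1, 0.6, 0.3])          # optimal linear weights
    alpha = gamma / (eps + np.array([b0, b1, b2]))**2
    return alpha / alpha.sum()

# smooth (linear) data: the nonlinear weights recover the linear ones
w_smooth = weno5_weights(np.linspace(1.0, 2.0, 5))
# data with a jump: the stencils crossing the discontinuity lose weight
w_jump = weno5_weights(np.array([1.0, 1.0, 1.0, 10.0, 10.0]))
```

The improved weight formulations surveyed by the paper (mapped weights, WENO-Z-type weights, and similar) modify how `alpha` is computed from the smoothness indicators, but keep this overall structure.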
A Continuing Search for a Near-Perfect Numerical Flux Scheme. Part 1: AUSM+
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1994-01-01
While enjoying demonstrated improvement in accuracy, efficiency, and robustness over existing schemes, the Advection Upstream Splitting Method (AUSM) was found to have some deficiencies in extreme cases. This recent progress towards improving the AUSM while retaining its advantageous features is described. The new scheme, termed AUSM+, features: unification of velocity and Mach number splitting; exact capture of a single stationary shock; and improvement in accuracy. A general construction of the AUSM+ scheme is laid out, and then the focus is on the analysis of the scheme and its mathematical properties, heretofore unreported. Monotonicity and positivity are proved, and a CFL-like condition is given for first- and second-order schemes and for generalized curvilinear coordinates. Finally, results of numerical tests on many problems are given to confirm the capability and improvements on a variety of problems, including those failed by prominent schemes.
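The unified splittings can be illustrated with the polynomial forms commonly quoted for AUSM+. Treat this as a sketch rather than the paper's exact formulation; the coefficients beta = 1/8 and alpha = 3/16 are the usual published choices:

```python
def mach_split(M, beta=1/8):
    """AUSM+ Mach number splitting: simple upwinding for |M| >= 1,
    a fourth-degree polynomial blend for |M| < 1.
    The identity M+ + M- = M holds for all M."""
    if abs(M) >= 1.0:
        Mp = 0.5 * (M + abs(M))
        Mm = 0.5 * (M - abs(M))
    else:
        Mp = 0.25 * (M + 1.0)**2 + beta * (M**2 - 1.0)**2
        Mm = -0.25 * (M - 1.0)**2 - beta * (M**2 - 1.0)**2
    return Mp, Mm

def pressure_split(M, alpha=3/16):
    """AUSM+ pressure splitting: a fifth-degree polynomial blend for
    |M| < 1. The identity P+ + P- = 1 holds for all M."""
    if abs(M) >= 1.0:
        Pp = 1.0 if M > 0.0 else 0.0
        Pm = 1.0 - Pp
    else:
        Pp = 0.25 * (M + 1.0)**2 * (2.0 - M) + alpha * M * (M**2 - 1.0)**2
        Pm = 0.25 * (M - 1.0)**2 * (2.0 + M) - alpha * M * (M**2 - 1.0)**2
    return Pp, Pm
```

The consistency identities (the split Mach numbers sum to M, the split pressures sum to one) are exactly the properties that make the interface flux reduce to the exact flux for uniform flow.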
A Gibbs sampler for motif detection in phylogenetically close sequences
NASA Astrophysics Data System (ADS)
Siddharthan, Rahul; van Nimwegen, Erik; Siggia, Eric
2004-03-01
Genes are regulated by transcription factors that bind to DNA upstream of genes and recognize short conserved ``motifs'' in a random intergenic ``background''. Motif-finders such as the Gibbs sampler compare the probability of these short sequences being represented by ``weight matrices'' to the probability of their arising from the background ``null model'', and explore this space (analogous to a free-energy landscape). But closely related species may show conservation not because of functional sites but simply because they have not had sufficient time to diverge, so conventional methods will fail. We introduce a new Gibbs sampler algorithm that accounts for common ancestry when searching for motifs, while requiring minimal ``prior'' assumptions on the number and types of motifs, assessing the significance of detected motifs by ``tracking'' clusters that stay together. We apply this scheme to motif detection in sporulation-cycle genes in the yeast S. cerevisiae, using recent sequences of other closely-related Saccharomyces species.
NASA Astrophysics Data System (ADS)
Zuo, Zhifeng; Maekawa, Hiroshi
2014-02-01
The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.
NASA Astrophysics Data System (ADS)
Parkash, Sooraj; Sharma, Anurag; Singh, Harsukhpreet
2016-09-01
This paper successfully demonstrates a bidirectional wavelength division multiplexing passive optical network (WDM-PON) system for 32 channels with 0.8 nm (100 GHz) channel spacing and 3.5 GHz filter bandwidth. The system delivers data rates of 160 Gb/s downstream and 80 Gb/s upstream. The optical sources for the downstream and upstream data are a mode-locked laser at the central office and a reflective semiconductor optical amplifier (RSOA) at the optical network unit, respectively. The maximum reach of the designed system is 50 km without using any dispersion compensation scheme. The paper comprises a comparison of a series of modulation formats in downstream and upstream, such as SOLITON, NRZ, RZ, MANCHESTER, CSRZ, and CRZ-DPSK, and an optimization of the performance of the designed system. It has been observed that CRZ-DPSK/NRZ gives the best performance in downstream and upstream transmission for the designed system. The simulations report a minimum BER of 10^-13 for CRZ-DPSK in downstream and 10^-16 for NRZ in upstream transmission for the 32-channel bidirectional WDM-PON.
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1993-01-01
A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals and in some cases surpasses that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the name of the scheme is coined as Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
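The switching criterion itself can be sketched in a few lines. The threshold values and the hysteresis rule below are illustrative assumptions, not the paper's exact settings:

```python
def switch_variables(s, prev, tol_f=0.99, tol_b=0.89):
    """For each node, pick the primary variable for the next Newton
    iteration: pressure head ("h") where the node is close to
    saturation, water saturation ("s") where it is unsaturated.
    Two distinct thresholds provide hysteresis, so a node near the
    boundary does not oscillate between variables every iteration."""
    out = []
    for si, pi in zip(s, prev):
        if pi == "h":
            # stay on pressure head until saturation drops below tol_b
            out.append("h" if si > tol_b else "s")
        else:
            # switch to pressure head only when nearly saturated
            out.append("h" if si > tol_f else "s")
    return out
```

Using saturation as the primary variable in dry regions is what restores Newton convergence for dry initial conditions, where the pressure-based Richards equation has a nearly singular Jacobian.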
Effects of Wall Cooling on Hypersonic Boundary Layer Receptivity Over a Cone
NASA Technical Reports Server (NTRS)
Kara, K.; Balakumar, P.; Kandil, O. A.
2008-01-01
Effects of wall cooling on the receptivity process induced by the interaction of slow acoustic disturbances in the free stream are numerically investigated for a boundary layer flow over a 5-degree straight cone. The free-stream Mach number is 6.0 and the Reynolds number is 7.8x10(exp 6)/ft. Both the steady and unsteady solutions are obtained by solving the full Navier-Stokes equations using the 5th-order accurate weighted essentially non-oscillatory (WENO) scheme for space discretization and the 3rd-order total variation diminishing (TVD) Runge-Kutta scheme for time integration. Computations are performed for a cone with a nose radius of 0.001 inch at the adiabatic wall temperature (T(sub aw)) and at wall temperatures of 0.75*T(sub aw), 0.5*T(sub aw), 0.40*T(sub aw), 0.30*T(sub aw), and 0.20*T(sub aw). Once the mean flow field is computed, disturbances are introduced at the upstream end of the computational domain. Generation of instability waves from the leading edge region and receptivity of the boundary layer to slow acoustic waves are investigated. Computations showed that wall cooling has a strong stabilization effect on the first-mode disturbances, as was observed in the experiments. The transition location moved upstream when wall cooling was applied. It is also found that the boundary layer is much more receptive to the fast acoustic wave (by almost a factor of 50). When simulations are performed using the same forcing frequency, growth of the second-mode disturbances is delayed with wall cooling, and they attain values two times higher than those of the adiabatic case. In the 0.20*T(sub aw) case the transition Reynolds number is doubled compared to adiabatic conditions. The receptivity coefficient for the adiabatic wall case (804 R) is 1.5225, and for the highly cooled cones (241 and 161 R) it is on the order of 10(exp -3).
Cost-effective WDM-PON Delivering Up/Down-stream Data on a Single Wavelength Using Soliton Pulse
NASA Astrophysics Data System (ADS)
Tawade, Laxman
2013-06-01
This paper presents a wavelength division multiplexing passive optical network (WDM-PON) system delivering 2.5 Gbit/s downstream data and 1 Gbit/s upstream data on a single wavelength. The pulse source is a mode-locked laser generating a single pulse of sech shape with specified power and width, i.e., a soliton pulse. The optical sources for the downstream and upstream data are the sech pulse generator at the central office and a reflective semiconductor optical amplifier (RSOA) at each optical network unit, respectively. We also analyze the backscattered optical signal for upstream and downstream data simultaneously. The bit error rate and Q-factor were measured to demonstrate the proposed scheme. Long-reach aspects of the access network are also investigated using a single-channel scenario.
NASA Astrophysics Data System (ADS)
Fang, Wei Jin; Huang, Xu Guang; Yang, Kai; Zhang, Xiao Min
2012-09-01
We propose and demonstrate a full-duplex dense-wavelength-division-multiplexing radio-over-fiber (DWDM-ROF) system for transmitting 75-GHz W-band multiple-input multiple-output orthogonal-frequency-division-multiplexing (MIMO-OFDM) signals with 12 Gbps downstream and 6 Gbps upstream. The downstream transmitting terminal is based on a three-channel frequency-sextupling scheme using external modulation of a distributed feedback laser diode (DFB-LD) and a dual-drive Mach-Zehnder modulator (DD-MZM) for carrying the downstream signals. MIMO-OFDM algorithms effectively compensate for impairments in the wireless link. Without using costly W-band components in the transmitter, a 12 Gbps downstream transmission system operating at 75 GHz is experimentally validated. For the downstream transmission, a power penalty of less than 3 dB was observed after a 50 km single-mode fiber (SMF) and 4 m wireless transmission at a bit error rate (BER) of 3.8×10-3. For the upstream transmission, we use a commercially available 1.5 GHz bandwidth reflective semiconductor optical amplifier (RSOA) to achieve 6 Gbps upstream traffic for 16-QAM-OFDM signals. A power penalty of 3 dB was observed after a 50 km SMF transmission at a BER of 3.8×10-3. The frequency of the local oscillator is reduced due to the frequency-sextupling scheme, and the cost of the proposed system is largely reduced.
Zadoff-Chu sequence-based hitless ranging scheme for OFDMA-PON configured 5G fronthaul uplinks
NASA Astrophysics Data System (ADS)
Reza, Ahmed Galib; Rhee, June-Koo Kevin
2017-05-01
A Zadoff-Chu (ZC) sequence-based low-complexity hitless upstream time synchronization scheme is proposed for an orthogonal frequency division multiple access passive optical network configured cloud radio access network fronthaul. The algorithm is based on gradual loading of the ZC sequences, where the phase discontinuity due to the cyclic prefix is alleviated by a frequency domain phase precoder, eliminating the requirements of guard bands to mitigate intersymbol interference and inter-carrier interference. Simulation results for uncontrolled-wavelength asynchronous transmissions from four concurrent transmitting optical network units are presented to demonstrate the effectiveness of the proposed scheme.
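A root-u Zadoff-Chu sequence is straightforward to generate, and the two properties such ranging schemes exploit, constant amplitude and ideal cyclic autocorrelation, are easy to verify numerically. The root and length below are arbitrary coprime choices, not parameters from the paper:

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N (gcd(u, N) = 1):
    every sample has unit magnitude, and the cyclic autocorrelation
    is zero at every nonzero lag (the CAZAC property)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

zc = zadoff_chu(25, 63)
amps = np.abs(zc)                        # constant amplitude
r1 = np.vdot(zc, np.roll(zc, 1))         # autocorrelation at lag 1
```

The zero autocorrelation at nonzero lags is what lets a receiver resolve the timing offsets of several simultaneously transmitting optical network units from a single correlation pass.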
A novel survivable WDM passive optical network
NASA Astrophysics Data System (ADS)
Cheng, Xiaofei; Fang, Qin; Zhang, Yong; Chen, Bin; Lu, Fucai
2008-11-01
We propose a novel survivable wavelength-division-multiplexed passive optical network (WDM-PON) based on an N × N cyclic arrayed waveguide grating (AWG) and reflective semiconductor optical amplifiers (RSOAs). ONUs are grouped and connected with extra connection fibres (CFs), and protection resources are provided mutually within ONU pairs. The characteristics of the proposed survivable WDM-PON and its wavelength routing scheme are analyzed. Experiments with 10-Gb/s downstream and 1.25-Gb/s upstream transmission are demonstrated to verify the proposed scheme.
Reducing numerical diffusion for incompressible flow calculations
NASA Technical Reports Server (NTRS)
Claus, R. W.; Neely, G. M.; Syed, S. A.
1984-01-01
A number of approaches for improving the accuracy of incompressible, steady-state flow calculations are examined. Two improved differencing schemes, Quadratic Upstream Interpolation for Convective Kinematics (QUICK) and Skew-Upwind Differencing (SUD), are applied to the convective terms in the Navier-Stokes equations and compared with results obtained using hybrid differencing. In a number of test calculations, it is illustrated that no single scheme exhibits superior performance for all flow situations. However, both SUD and QUICK are shown to be generally more accurate than hybrid differencing.
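For reference, the QUICK face interpolation can be written in a few lines. This is the standard textbook form for a uniform grid with flow from the upstream (U) side toward the downstream (D) side, not code from this study:

```python
def quick_face(phi_U, phi_C, phi_D):
    """QUICK interpolation of the convected variable at a cell face:
    a quadratic fit through the upstream (U), central (C), and
    downstream (D) node values, evaluated at the face between C and D.
    Equivalent to (6*phi_C + 3*phi_D - phi_U) / 8."""
    return 0.5 * (phi_C + phi_D) - 0.125 * (phi_U - 2.0 * phi_C + phi_D)
```

Because the fit is quadratic, the interpolation is exact for fields up to second order, which is the source of QUICK's reduced numerical diffusion relative to first-order upwind or hybrid differencing.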
NASA Astrophysics Data System (ADS)
Cho, Seung-Hyun; Lee, Sang-Soo; Shin, Dong-Wook
2010-06-01
We have experimentally demonstrated that the use of an optical receiver with decision threshold level adjustment (DTLA) improved the performance of an upstream transmission in reflective semiconductor optical amplifier (RSOA)-based loopback wavelength division multiplexing-passive optical network (WDM-PON). Even though the extinction ratio (ER) of the downstream signal was as much as 9 dB and the injection power into the RSOA at the optical network unit was about -24 dBm, we successfully obtained error-free transmission results for the upstream signal through careful control of the decision threshold value in the optical receiver located at optical line terminal (OLT). Using an optical receiver with DTLA for upstream signal detection overcame significant obstacles related to the injection power into the RSOA and the ER of the downstream signal, which were previously considered limitations of the wavelength remodulation scheme. This technique is expected to provide flexibility for the optical link design in the practical deployment of a WDM-PON.
Hunink, J E; Droogers, P; Kauffman, S; Mwaniki, B M; Bouma, J
2012-11-30
Upstream soil and water conservation measures in catchments can have a positive impact both upstream, in terms of less erosion and higher crop yields, and downstream, through less sediment flow into reservoirs and increased groundwater recharge. Green Water Credits (GWC) schemes are being developed to encourage upstream farmers to invest in soil and water conservation practices that will positively affect upstream and downstream water availability. Quantitative information on water and sediment fluxes is crucial as a basis for such financial schemes. A pilot design project in the large and strategically important Upper Tana Basin in Kenya has the objective of developing a methodological framework for this purpose. The essence of the methodology is the integration and use of a collection of public-domain tools and datasets: the so-called Green water and Blue water Assessment Toolkit (GBAT). This toolkit was applied to study different options for implementing GWC in rainfed agricultural land in the pilot study. The impacts of vegetative contour strips, mulching, and tied ridges were determined for (i) three upstream key indicators: soil loss, crop transpiration, and soil evaporation, and (ii) two downstream indicators: sediment inflow into reservoirs and groundwater recharge. All effects were compared with a baseline scenario of average conditions. Thus, not only actual land management was considered but also the potential benefits of changed land-use practices. Results of the simulations indicate that applying contour strips or tied ridges in particular significantly reduces soil losses and increases groundwater recharge in the catchment. The model was used to build spatial expressions of the proposed management practices in order to assess their effectiveness. The developed procedure allows exploring the effects of soil conservation measures in a catchment to support the implementation of GWC. Copyright © 2012 Elsevier Ltd. All rights reserved.
Implicit approximate-factorization schemes for the low-frequency transonic equation
NASA Technical Reports Server (NTRS)
Ballhaus, W. F.; Steger, J. L.
1975-01-01
Two- and three-level implicit finite-difference algorithms for the low-frequency transonic small-disturbance equation are constructed using approximate factorization techniques. The schemes are unconditionally stable for the model linear problem. For nonlinear mixed flows, the schemes maintain stability by the use of conservatively switched difference operators, for which stability is maintained only if shock propagation is restricted to less than one spatial grid point per time step. The shock-capturing properties of the schemes were studied for various shock motions that might be encountered in problems of engineering interest. Computed results for a model airfoil problem that produces a flow field similar to that about a helicopter rotor in forward flight show the development of a shock wave and its subsequent propagation upstream off the front of the airfoil.
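The idea behind approximate factorization can be demonstrated directly: replacing the multidimensional implicit operator by a product of one-dimensional factors introduces only a second-order splitting error, which is why the scheme's formal time accuracy is preserved while each factor reduces to an easy (in practice banded) solve. The random matrices below are generic stand-ins for the one-dimensional difference operators, not the operators of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 8, 0.01
Ax = rng.standard_normal((n, n))    # stand-in for the x-direction operator
Ay = rng.standard_normal((n, n))    # stand-in for the y-direction operator
I = np.eye(n)

# unfactored implicit operator vs. its approximate factorization
unfactored = I - dt * (Ax + Ay)
factored = (I - dt * Ax) @ (I - dt * Ay)

# the difference is exactly the cross term dt^2 * Ax @ Ay, i.e. an
# O(dt^2) splitting error consistent with second-order time accuracy
err = factored - unfactored
```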
A Technique of Treating Negative Weights in WENO Schemes
NASA Technical Reports Server (NTRS)
Shi, Jing; Hu, Changqing; Shu, Chi-Wang
2000-01-01
High-order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower-order fluxes or reconstructions to obtain a high-order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without a need to get rid of them.
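A widely cited form of this splitting technique decomposes the linear weights into two positive groups, applies the standard WENO weighting within each group, and recombines with a sign. The sketch below uses the classic negative-weight example of a central fourth-order flux, and the theta = 3 choice is the value commonly quoted for this technique; treat the details as a summary rather than the paper's exact text:

```python
import numpy as np

def split_weights(gamma, theta=3.0):
    """Split linear weights (some possibly negative) into two groups of
    nonnegative weights so that gamma_i = sigma_p * wp_i - sigma_m * wm_i,
    with each group normalized to sum to one. Standard WENO nonlinear
    weighting can then be applied to each group separately."""
    gamma = np.asarray(gamma, dtype=float)
    gp = 0.5 * (gamma + theta * np.abs(gamma))   # positive part (scaled)
    gm = gp - gamma                               # negative part (scaled)
    sigma_p, sigma_m = gp.sum(), gm.sum()
    return gp / sigma_p, gm / sigma_m, sigma_p, sigma_m

# weights of a central 4th-order flux: two of them are negative
wp, wm, sp, sm = split_weights([-1/12, 7/12, 7/12, -1/12])
recon = sp * wp - sm * wm   # recombination recovers the original weights
```

Since both groups consist of nonnegative weights summing to one, each can be made smoothness-dependent in the usual stable way, and the final reconstruction is the signed combination of the two group reconstructions.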
Energy-saving scheme based on downstream packet scheduling in ethernet passive optical networks
NASA Astrophysics Data System (ADS)
Zhang, Lincong; Liu, Yejun; Guo, Lei; Gong, Xiaoxue
2013-03-01
With increasing network sizes, the energy consumption of Passive Optical Networks (PONs) has grown significantly. Therefore, it is important to design effective energy-saving schemes for PONs. Generally, energy-saving schemes have focused on putting low-loaded Optical Network Units (ONUs) to sleep, which tends to introduce large packet delays. Further, the traditional ONU sleep modes cannot put the transmitter and receiver to sleep independently, even when they are not required to transmit or receive packets. Clearly, this wastes energy. Thus, in this paper, we propose an Energy-Saving scheme based on downstream Packet Scheduling (ESPS) in Ethernet PON (EPON). First, we design both an algorithm and a rule for downstream packet scheduling at the inter- and intra-ONU levels, respectively, to reduce the downstream packet delay. After that, we propose a hybrid sleep mode that contains not only an ONU deep sleep mode but also independent sleep modes for the transmitter and the receiver. This ensures that the energy consumed by the ONUs is minimal. To realize the hybrid sleep mode, a modified GATE control message is designed that carries 10 time points for the sleep processes. In ESPS, the 10 time points are calculated according to the allocated bandwidths in both the upstream and the downstream. The simulation results show that ESPS outperforms the traditional Upstream Centric Scheduling (UCS) scheme in terms of energy consumption and the average delay for both real-time and non-real-time packets downstream. The simulation results also show that the average energy consumption of each ONU in larger-sized networks is less than that in smaller-sized networks; hence, our ESPS is better suited to larger-sized networks.
This dataset represents climate observations throughout the years 2008-09 within individual local NHDPlusV2 catchments and upstream, contributing watersheds based on the Composite Topographic Index (See Supplementary Info for Glossary of Terms). PRISM is a set of monthly, yearly, and single-event gridded data products of mean temperature and precipitation, max/min temperatures, and dewpoints, primarily for the United States. In-situ point measurements are ingested into the PRISM (Parameter elevation Regression on Independent Slopes Model) statistical mapping system. The PRISM products use a weighted regression scheme to account for complex climate regimes associated with orography, rain shadows, temperature inversions, slope aspect, coastal proximity, and other factors. (see Data Sources for links to NHDPlusV2 data and USGS Data) These data were summarized to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply it to model a realistic combustor configuration (Energy Efficient Engine (E3)). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up), achieving good agreement with the analytical/experimental data of the validation tests. In the steady-state E3 cold flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference in the flow fields calculated by the two numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and predicts a much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could yield different flame-holding predictions and overall flame structures.
Suppression of pattern dependence in 10 Gbps upstream transmission of WDM-PON with RSOA-based ONUs
NASA Astrophysics Data System (ADS)
Zhang, Min; Wang, Danshi; Cao, Zhihui; Chen, Xue; Huang, Shanguo
2013-11-01
The finite gain recovery time of the reflective semiconductor optical amplifier (RSOA) causes distortion and pattern dependence at high bit rates in colorless optical network units (ONUs) of a WDM passive optical network (WDM-PON). We propose and demonstrate a scheme for upstream transmission of 10 Gbps NRZ signals directly modulated via an RSOA over a 25 km single fiber, in which a fiber Bragg grating (FBG) is used as an offset filter to suppress the pattern dependence and improve the RSOA modulation bandwidth. Both experimental and simulation results are provided, which are useful for designing cost-effective colorless transceivers.
Liu, Xiang; Effenberger, Frank; Chand, Naresh
2015-03-09
We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.
Effects of Nose Bluntness on Stability of Hypersonic Boundary Layers over Blunt Cone
NASA Technical Reports Server (NTRS)
Kara, K.; Balakumar, P.; Kandil, O. A.
2007-01-01
Receptivity and stability of hypersonic boundary layers are numerically investigated for boundary layer flows over a 5-degree straight cone at a free-stream Mach number of 6.0. To compute the shock and the interaction of the shock with the instability waves, we solve the Navier-Stokes equations in axisymmetric coordinates. The governing equations are solved using a 5th-order accurate weighted essentially non-oscillatory (WENO) scheme for spatial discretization and a third-order total-variation-diminishing (TVD) Runge-Kutta scheme for time integration. After the mean flow field is computed, disturbances are introduced at the upstream end of the computational domain. The generation of instability waves from the leading-edge region and the receptivity of the boundary layer to slow acoustic waves are investigated. Computations are performed for cones with nose radii of 0.001, 0.05 and 0.10 inches, which give Reynolds numbers based on the nose radius ranging from 650 to 130,000. The linear stability results show that bluntness has a strong stabilizing effect on axisymmetric boundary layers. The transition Reynolds number for a cone with a nose Reynolds number of 65,000 is increased by a factor of 1.82 compared to that for a sharp cone. The receptivity coefficient for a sharp cone is about 4.23, and it is very small, approximately 10^-3, for large bluntness.
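The third-order TVD Runge-Kutta integrator referred to above has a standard Shu-Osher form, in which each stage is a convex combination of forward-Euler steps. A minimal sketch for a generic semi-discrete operator L (not the authors' WENO solver):

```python
import math

def tvd_rk3_step(u, dt, L):
    """One step of the third-order TVD (SSP) Runge-Kutta scheme of Shu & Osher
    for du/dt = L(u). Each stage is a convex combination of Euler steps, so the
    TVD property of the underlying spatial scheme is preserved."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Example: exponential decay du/dt = -u, exact solution exp(-t)
u, dt = 1.0, 0.01
for _ in range(100):
    u = tvd_rk3_step(u, dt, lambda v: -v)
assert abs(u - math.exp(-1.0)) < 1e-6
```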
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors that lead to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties over a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than to partial atomic charge parameters in these systems, although the electrostatic interactions are still important for energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any type of force field parameter. Copyright © 2013 Wiley Periodicals, Inc.
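For context, the general Boltzmann weighting that the authors build on assigns each configuration an exponential weight in its energy, normalized by a partition-function-like sum. The sketch below shows only that baseline; the kT value and names are illustrative, and the authors' partition function-based refinement is not reproduced here:

```python
import math

def boltzmann_weights(energies, kT=0.596):
    """Boltzmann-style fitting weights for target energies. kT ~ 0.596 kcal/mol
    corresponds to roughly 300 K; both the value and this plain form are
    illustrative, not the authors' exact scheme."""
    e0 = min(energies)                       # shift so exponentials are well-scaled
    w = [math.exp(-(e - e0) / kT) for e in energies]
    z = sum(w)                               # partition-function-like normalizer
    return [wi / z for wi in w]

w = boltzmann_weights([0.0, 0.5, 2.0])       # hypothetical PES energies (kcal/mol)
assert abs(sum(w) - 1.0) < 1e-12
assert w[0] > w[1] > w[2]                    # low-energy configurations dominate
```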
Extended bounds limiter for high-order finite-volume schemes on unstructured meshes
NASA Astrophysics Data System (ADS)
Tsoutsanis, Panagiotis
2018-06-01
This paper explores the impact of the definition of the bounds of the limiter proposed by Michalak and Ollivier-Gooch in [56] (2009) for higher-order Monotone Upstream-centered Scheme for Conservation Laws (MUSCL) numerical schemes on unstructured meshes in the finite-volume (FV) framework. A new modification of the limiter is proposed in which the bounds are redefined by utilising all the spatial information provided by all the elements in the reconstruction stencil. Numerical results obtained on smooth and discontinuous test problems of the Euler equations on unstructured meshes highlight that the newly proposed extended bounds limiter exhibits superior performance in terms of accuracy and mesh sensitivity compared to the cell-based or vertex-based bounds implementations.
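As background, a MUSCL scheme limits reconstructed interface values so that no new extrema are created. A minimal 1-D sketch with the classical minmod limiter (the paper's extended bounds limiter for unstructured meshes is considerably more elaborate; this is only the textbook baseline):

```python
import numpy as np

def minmod(a, b):
    """Classical minmod: zero at extrema, smallest-magnitude slope otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_left_states(u):
    """Limited left states at interfaces i+1/2 for interior cells, from a
    piecewise-linear reconstruction of cell averages u."""
    du_minus = u[1:-1] - u[:-2]      # backward differences
    du_plus = u[2:] - u[1:-1]        # forward differences
    slope = minmod(du_minus, du_plus)
    return u[1:-1] + 0.5 * slope     # extrapolate to the right face

u = np.array([0.0, 0.0, 1.0, 1.0])               # discrete step profile
uL = muscl_left_states(u)
assert uL.min() >= u.min() and uL.max() <= u.max()  # no new extrema created
```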
Paul C. Van Deusen; Linda S. Heath
2010-01-01
Weighted estimation methods for analysis of mapped plot forest inventory data are discussed. The appropriate weighting scheme can vary depending on the type of analysis and graphical display. Both statistical issues and user expectations need to be considered in these methods. A weighting scheme is proposed that balances statistical considerations and the logical...
NASA Astrophysics Data System (ADS)
Jung, Sang-Min; Won, Yong-Yuk; Han, Sang-Kook
2013-12-01
A novel technique for reducing OBI noise in the optical OFDMA-PON uplink is presented. OFDMA is a multiple-access/multiplexing scheme that can multiplex user data streams onto the downlink sub-channels and provide uplink multiple access by dividing OFDM subcarriers into sub-channels. The main issue in high-speed, single-wavelength upstream OFDMA-PON arises from optical beating interference (OBI) noise. Because the sub-channels are allocated dynamically to multiple access users over the same nominal wavelength, optical beating interference is generated among the upstream signals. In this paper, we propose a novel scheme using self-homodyne balanced detection in the optical line terminal (OLT) to reduce the OBI noise generated in the uplink transmission of an OFDMA-PON system. When multiple OFDMA sub-channels over the same nominal wavelength are received at the same time in the proposed architecture, OBI noise can be removed using balanced detection. Using discrete multitone (DMT) modulation to generate real-valued OFDM signals, the proposed technique is verified through experimental demonstration.
NASA Astrophysics Data System (ADS)
Nakamura, Hirotaka; Suzuki, Hiro; Kani, Jun-Ichi; Iwatsuki, Katsumi
2006-05-01
This paper proposes and demonstrates a reliable wide-area wavelength-division-multiplexing passive optical network (WDM-PON) with a wavelength-shifted protection scheme. This protection scheme utilizes the cyclic property of a 2 × N athermal arrayed-waveguide grating and two kinds of wavelength allocations, assigned to working and protection, respectively. Compared with conventional protection schemes, this scheme does not need a 3-dB optical coupler, thus ensuring the large loss budget suited to wide-area WDM-PONs. It also features a passive access node and does not require a protection function in the optical network unit (ONU). The feasibility of the proposed scheme is experimentally confirmed with a carrier-distributed WDM-PON with gigabit Ethernet interface (GbE-IF) and 10-GbE-IF, in which the ONU does not employ a light source, and all wavelengths for upstream signals are centralized and distributed from the central office.
OSLG: A new granting scheme in WDM Ethernet passive optical networks
NASA Astrophysics Data System (ADS)
Razmkhah, Ali; Rahbar, Akbar Ghaffarpour
2011-12-01
Several granting schemes have been proposed for granting transmission windows and dynamic bandwidth allocation (DBA) in passive optical networks (PONs). Generally, granting schemes suffer from bandwidth wastage within granted windows. Here, we propose a new granting scheme for WDM Ethernet PONs, called ONU Side Limited Granting (OSLG), that conserves upstream bandwidth, thus decreasing queuing delay and packet drop ratio. In OSLG, instead of the optical line terminal (OLT), each optical network unit (ONU) determines its own transmission window. Two OSLG algorithms are proposed in this paper: the OSLG_GA algorithm, in which an ONU sizes its transmission window so that the bandwidth wastage problem is relieved, and the OSLG_SC algorithm, which saves unused bandwidth for later use to improve bandwidth utilization. OSLG can be used as the granting scheme of any DBA to provide better performance in terms of packet drop ratio and queuing delay. Our performance evaluations show the effectiveness of OSLG in reducing packet drop ratio and queuing delay under different DBA techniques.
A queuing model for road traffic simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
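The deterministic Godunov scheme for road traffic that the model draws on can be written in demand/supply form: the interface flux is the minimum of what the upstream cell can send and what the downstream cell can absorb. A sketch under a Greenshields fundamental diagram (the diagram choice and parameter values are illustrative assumptions, not taken from the article):

```python
def godunov_flux(rho_up, rho_dn, v_free=1.0, rho_max=1.0):
    """Godunov interface flux for the LWR traffic model in demand/supply form,
    using the Greenshields fundamental diagram q(rho) = v_free*rho*(1 - rho/rho_max).
    flux = min(upstream demand, downstream supply)."""
    q = lambda rho: v_free * rho * (1.0 - rho / rho_max)
    rho_c = rho_max / 2.0                  # critical density at maximum flow
    demand = q(min(rho_up, rho_c))         # what upstream can send
    supply = q(max(rho_dn, rho_c))         # what downstream can absorb
    return min(demand, supply)

# Free flow: light traffic everywhere, flux equals the upstream flow q(0.2)
assert abs(godunov_flux(0.2, 0.2) - 0.16) < 1e-12
# Congestion: a full jam downstream throttles the interface flux to zero
assert godunov_flux(0.2, 1.0) == 0.0
```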
Advection of Microphysical Scalars in Terminal Area Simulation System (TASS)
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2011-01-01
The Terminal Area Simulation System (TASS) is a large-eddy-scale atmospheric flow model with extensive turbulence and microphysics packages. It has been applied successfully in the past to a diverse set of problems, ranging from the prediction of severe convective events (Proctor et al. 2002) and the tracking of storms to the simulation of weapons effects such as the dispersion and fallout of fission debris (Bacon and Sarma 1991). More recently, TASS has been used for predicting the transport and decay of wake vortices behind aircraft (Proctor 2009). An essential part of the TASS model is its comprehensive microphysics package, which relies on the accurate computation of microphysical scalar transport. This paper describes an evaluation of the Leonard scheme implemented in the TASS model for transporting microphysical scalars. The scheme is validated against benchmark cases with exact solutions and compared with two other schemes: a Monotone Upstream-centered Scheme for Conservation Laws (MUSCL)-type scheme after van Leer, and LeVeque's high-resolution wave propagation method. Finally, a comparison between the schemes is made against an incident of severe tornadic supercell convection near Del City, Oklahoma.
Liu, Ying; Navathe, Shamkant B; Pivoshenko, Alex; Dasigi, Venu G; Dingledine, Ray; Ciliax, Brian J
2006-01-01
One of the key challenges of microarray studies is to derive biological insights from the gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the functional links among genes. However, the quality of the keyword lists significantly affects the clustering results. We compared two keyword weighting schemes: normalised z-score and term frequency-inverse document frequency (TFIDF). Two gene sets were tested to evaluate the effectiveness of the weighting schemes for keyword extraction for gene clustering. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords outperformed those produced from normalised z-score weighted keywords. The optimised algorithms should be useful for partitioning genes from microarray lists into functionally discrete clusters.
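TF-IDF weighting down-weights keywords that occur in many gene documents and up-weights rare, discriminative ones. A textbook sketch (the study's exact normalisation may differ, and the keyword lists below are hypothetical):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF keyword weights: term frequency times the log of
    inverse document frequency. This is the textbook formulation, not
    necessarily the exact variant used in the study."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

# Hypothetical keyword lists for three genes
docs = [["gene", "kinase", "receptor"],
        ["gene", "kinase", "kinase"],
        ["gene", "membrane", "receptor"]]
w = tfidf(docs)
assert w[0]["gene"] == 0.0                  # ubiquitous terms carry no signal
assert w[2]["membrane"] > w[2]["receptor"]  # rarer terms weigh more
```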
Using EMAP data from the NE Wadeable Stream Survey and state datasets (CT, ME), assessment tools were developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Classification schemes were...
MPDATA: A positive definite solver for geophysical flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smolarkiewicz, P.K.; Margolin, L.G.
1997-12-31
This paper is a review of MPDATA, a class of methods for the numerical simulation of advection based on the sign-preserving properties of upstream differencing. MPDATA was designed originally as an inexpensive alternative to flux-limited schemes for evaluating the transport of nonnegative thermodynamic variables (such as liquid water or water vapor) in atmospheric models. During the last decade, MPDATA has evolved from a simple advection scheme to a general approach for integrating the conservation laws of geophysical fluids on micro-to-planetary scales. The purpose of this paper is to summarize the basic concepts leading to a family of MPDATA schemes, review the existing MPDATA options, and demonstrate the efficacy of the approach using diverse examples of complex geophysical flows.
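The basic MPDATA idea is a sign-preserving upstream (donor-cell) pass followed by a corrective pass that advects the intermediate field with an antidiffusive pseudo-velocity. A minimal 1-D constant-velocity sketch with periodic boundaries (this shows only the basic two-pass scheme and omits the many MPDATA options the review covers):

```python
import numpy as np

def mpdata_step(psi, u, dx, dt, eps=1e-15):
    """One basic MPDATA step for 1-D constant-velocity advection on a periodic
    grid: a donor-cell (upstream) pass, then one corrective donor-cell pass
    using the antidiffusive pseudo-velocity. A minimal sketch only."""
    def donor_cell(psi, u_face):
        # u_face[i] is the velocity at face i+1/2; periodic indexing via roll
        psi_r = np.roll(psi, -1)
        flux = np.maximum(u_face, 0.0) * psi + np.minimum(u_face, 0.0) * psi_r
        return psi - (dt / dx) * (flux - np.roll(flux, 1))

    u_face = np.full_like(psi, u)
    psi1 = donor_cell(psi, u_face)              # first (upwind) pass
    psi_r = np.roll(psi1, -1)
    # Antidiffusive velocity that counteracts the upwind scheme's leading error
    v = (np.abs(u_face) * dx - dt * u_face**2) * (psi_r - psi1) \
        / (dx * (psi_r + psi1 + eps))
    return donor_cell(psi1, v)                  # corrective pass

# Sign preservation and conservation on a nonnegative spiky field
psi = np.where(np.arange(64) % 8 == 0, 1.0, 0.0)
out = mpdata_step(psi, u=0.5, dx=1.0, dt=0.5)
assert (out >= -1e-12).all()                 # stays (numerically) nonnegative
assert abs(out.sum() - psi.sum()) < 1e-9     # flux form conserves the total
```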
Performance analysis of cross-seeding WDM-PON system using transfer matrix method
NASA Astrophysics Data System (ADS)
Simatupang, Joni Welman; Pukhrambam, Puspa Devi; Huang, Yen-Ru
2016-12-01
In this paper, a model based on the transfer matrix method is adopted to analyze the effects of Rayleigh backscattering and Fresnel multiple reflections on a cross-seeding WDM-PON system. As an analytical approximation, this time-independent model is quite simple yet very efficient when applied to various WDM-PON transmission systems, including the cross-seeding scheme. The cross-seeding scheme is most beneficial for systems with low loop-back ONU gain or low reflection loss at the drop fiber for upstream data in bidirectional transmission. For downstream data transmission, however, the power from multiple reflections can destroy the usefulness of the cross-seeding scheme when the reflectivity is high enough and the RN is positioned near the OLT or close to the ONU.
Computational flow field in energy efficient engine (EEE)
NASA Astrophysics Data System (ADS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-11-01
In this paper, preliminary results for the recently updated Open National Combustion Code (OpenNCC) as applied to the EEE are presented. A comparison between two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the advection upstream splitting method (AUSM), is performed for the cold flow and reacting flow calculations using RANS. In the cold flow calculation, the AUSM scheme predicts a much stronger reverse flow in the central recirculation zone. In the reacting flow calculation, we test two cases: gaseous fuel injection and liquid spray injection. In the gaseous fuel injection case, the overall flame structures of the two schemes are similar to one another, in the sense that the flame is attached to the main nozzle but detached from the pilot nozzle. However, in the exit temperature profile, the AUSM scheme shows a more uniform profile than the JST scheme, which is closer to the experimental data. In the liquid spray injection case, we expect different flame structures. We will give a brief discussion on how the two numerical schemes predict the flame structures inside the EEE using different ways to introduce the fuel injection. Supported by NASA's Transformational Tools and Technologies project.
Computational Flow Field in Energy Efficient Engine (EEE)
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, preliminary results for the recently-updated Open National Combustion Code (Open NCC) as applied to the EEE are presented. The comparison between two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the advection upstream splitting method (AUSM), is performed for the cold flow and the reacting flow calculations using the RANS. In the cold flow calculation, the AUSM scheme predicts a much stronger reverse flow in the central recirculation zone. In the reacting flow calculation, we test two cases: gaseous fuel injection and liquid spray injection. In the gaseous fuel injection case, the overall flame structures of the two schemes are similar to one another, in the sense that the flame is attached to the main nozzle, but is detached from the pilot nozzle. However, in the exit temperature profile, the AUSM scheme shows a more uniform profile than that of the JST scheme, which is close to the experimental data. In the liquid spray injection case, we expect different flame structures in this scenario. We will give a brief discussion on how two numerical schemes predict the flame structures inside the EEE using different ways to introduce the fuel injection.
New Term Weighting Formulas for the Vector Space Method in Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chisholm, E.; Kolda, T.G.
The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
NASA Astrophysics Data System (ADS)
Wei, Pei; Gu, Rentao; Ji, Yuefeng
2014-06-01
As an innovative and promising technology, network coding has been introduced to passive optical networks (PONs) in recent years to support inter-ONU (optical network unit) communication, yet the signaling process and dynamic bandwidth allocation (DBA) in PONs with network coding (NC-PON) still need further study. Thus, we propose a joint signaling and DBA scheme for efficiently supporting differentiated services of inter-ONU communication in NC-PON. In the proposed joint scheme, the signaling process lays the foundation for fulfilling network coding in the PON, and it can not only avoid the potential threat to downstream security present in previous schemes but is also suitable for the proposed hybrid dynamic bandwidth allocation (HDBA) scheme. In HDBA, a DBA cycle is divided into two sub-cycles in order to apply different coding, scheduling and bandwidth allocation strategies to differentiated classes of services. Besides, as the network traffic load varies, the entire upstream transmission window for all REPORT messages slides accordingly, leaving the transmission time of one or two sub-cycles to overlap with the bandwidth allocation calculation time at the optical line terminal (OLT), so that upstream idle time can be efficiently eliminated. Performance evaluation results validate that, compared with the two existing DBA algorithms deployed in NC-PON, HDBA demonstrates the best quality of service (QoS) support in terms of delay for all classes of services, and in particular guarantees the end-to-end delay bound of high-class services. Specifically, HDBA can eliminate the queuing delay and scheduling delay of high-class services, reduce those of lower-class services by at least 20%, and reduce the average end-to-end delay of all services by over 50%. Moreover, HDBA also achieves maximum delay fairness between coded and uncoded lower-class services, and medium delay fairness for high-class services.
Life history of the sea lamprey of Cayuga Lake, New York
Wigley, Roland L.
1959-01-01
A life history study of the sea lamprey, Petromyzon marinus Linnaeus, in Cayuga Lake, N.Y., was conducted during 1950, 1951, and 1952. One of the major objectives was to obtain biological data concerning this endemic stock of sea lampreys for comparison with the newly established stocks in the Great Lakes. Sexually mature sea lampreys captured on their spawning migration in Cayuga Inlet were the basis of much of this study. Such items as meristic counts, body proportions, body color, sex ratios, lengths and weights, fecundity, rate of upstream travel, the effect of dams in retarding upstream movement, nesting habits, parasites, predators, estimates of abundance, and morphological changes were based on mature upstream migrants. Sea lampreys were procured by weir and trap operations and captured by hand. Tagging and marking programs each spring made it possible to determine movements and morphological changes of individual lampreys, in addition to estimating the number of upstream migrants. Growth of parasitic-phase sea lampreys was estimated from measurements of specimens captured in Cayuga Inlet and Cayuga Lake proper. The incubation period of lamprey eggs and the habits of ammocoetes and transforming lampreys were ascertained from specimens kept in hatchery troughs and raceways. Length-frequency and weight-frequency distributions, together with the length-weight regression, of ammocoetes from Cayuga Inlet were utilized for estimating the duration of their larval life. Lake trout, Salvelinus n. namaycush (Walbaum), from Cayuga Lake and Seneca Lake were the subject of an inquiry into the effects of sea lamprey attacks. The incidence of sea lamprey attacks on the white sucker, Catostomus c. commersoni (Lacépède), was investigated. Three methods are suggested for reducing the number of sea lampreys in Cayuga Lake.
This dataset represents climate observations within individual, local NHDPlusV2 catchments and upstream, contributing watersheds. Attributes of the landscape layer were calculated for every local NHDPlusV2 catchment and accumulated to provide watershed-level metrics. (See Supplementary Info for Glossary of Terms) PRISM is a set of monthly, yearly, and single-event gridded data products of mean temperature and precipitation, max/min temperatures, and dewpoints, primarily for the United States. In-situ point measurements are ingested into the PRISM (Parameter elevation Regression on Independent Slopes Model) statistical mapping system. The PRISM products use a weighted regression scheme to account for complex climate regimes associated with orography, rain shadows, temperature inversions, slope aspect, coastal proximity, and other factors. (see Data Sources for links to NHDPlusV2 data and USGS Data) These data are summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).
Development and application of a hillslope hydrologic model
Blain, C.A.; Milly, P.C.D.
1991-01-01
A vertically integrated two-dimensional lateral flow model of soil moisture has been developed. Derivation of the governing equation is based on a physical interpretation of hillslope processes. The lateral subsurface-flow model permits variability of precipitation and evapotranspiration, and allows arbitrary specification of soil-moisture retention properties. Variable slope, soil thickness, and saturation are all accommodated. The numerical solution method, a Crank-Nicolson, finite-difference, upstream-weighted scheme, is simple and robust. A small catchment in northeastern Kansas is the subject of an application of the lateral subsurface-flow model. Calibration of the model using observed discharge provides estimates of the active porosity (0.1 cm3/cm3) and of the saturated horizontal hydraulic conductivity (40 cm/hr). The latter figure is at least an order of magnitude greater than the vertical hydraulic conductivity associated with the silty clay loam soil matrix. The large value of hydraulic conductivity derived from the calibration is suggestive of macropore-dominated hillslope drainage. The corresponding value of active porosity agrees well with a published average value of the difference between total porosity and field capacity for a silty clay loam. © 1991.
Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.
Wang, Andong; Zhu, Long; Liu, Jun; Du, Cheng; Mo, Qi; Wang, Jian
2015-11-16
Mode-division multiplexing passive optical network (MDM-PON) is a promising scheme for next-generation access networks to further increase fiber transmission capacity. In this paper, we demonstrate a proof-of-concept experiment of a hybrid mode-division multiplexing (MDM) and time-division multiplexing (TDM) PON architecture exploiting orbital angular momentum (OAM) modes. Bidirectional transmissions with 2.5-Gbaud 4-level pulse amplitude modulation (PAM-4) downstream and 2-Gbaud on-off keying (OOK) upstream are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties for downstream and upstream transmissions at a bit-error rate (BER) of 2 × 10^-3 are less than 2.0 dB and 3.0 dB, respectively.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes (two node-averaging schemes, with and without clipping, and four schemes that employ different stencils for LSQ gradient reconstruction). The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best combination of low complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids, and for the NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
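The LSQ gradient reconstruction underlying several of the schemes above can be sketched compactly. This is a generic illustration (the specific stencil choices CC-NN, CC-SA, etc. are not reproduced); the optional weights mimic the weighted variant discussed in the text:

```python
import numpy as np

def lsq_gradient(xc, uc, xn, un, weights=None):
    """Least-squares gradient reconstruction at a cell center xc with
    value uc, from m neighbor centers xn (m x 2 array) and values un.
    weights=None gives the unweighted LSQ variant; supplying, e.g.,
    inverse-distance weights gives a weighted variant."""
    dx = xn - xc                      # m x 2 displacement vectors
    du = un - uc
    if weights is not None:
        dx = dx * weights[:, None]    # scale each row of the system
        du = du * weights
    g, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return g                          # approximate (du/dx, du/dy)

# Any LSQ variant reconstructs a linear field u = 2x + 3y exactly:
xc = np.array([0.0, 0.0]); uc = 0.0
xn = np.array([[1.0, 0.1], [-0.3, 1.0], [0.5, -0.8], [-1.0, -0.2]])
un = 2.0 * xn[:, 0] + 3.0 * xn[:, 1]
g = lsq_gradient(xc, uc, xn, un)
```

Exactness on linear fields is what makes these reconstructions nominally second order; the accuracy differences reported above come from how the stencil (the rows of the system) is chosen on irregular and anisotropic grids.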
NASA Technical Reports Server (NTRS)
Ramamurti, R.; Ghia, U.; Ghia, K. N.
1988-01-01
A semi-elliptic formulation, termed the interacting parabolized Navier-Stokes (IPNS) formulation, is developed for the analysis of a class of subsonic viscous flows for which streamwise diffusion is negligible but which are significantly influenced by upstream interactions. The IPNS equations are obtained from the Navier-Stokes equations by dropping the streamwise viscous-diffusion terms but retaining upstream influence via the streamwise pressure gradient. A two-step alternating-direction-explicit numerical scheme is developed to solve these equations. The quasi-linearization and discretization of the equations are carefully examined so that no artificial viscosity is added externally to the scheme. Also, solutions to incompressible as well as nearly incompressible flows are obtained without any modification either in the analysis or in the solution process. The procedure is applied to constricted channels and cascade passages formed by airfoils of various shapes. These geometries are represented using numerically generated curvilinear boundary-oriented coordinates forming an H-grid. A hybrid C-H grid, more appropriate for cascades of airfoils with rounded leading edges, was also developed. Satisfactory results are obtained for flows through cascades of Joukowski airfoils.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
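The equivalence stated above (inverse-variance weighted least squares equals the MLE under independent Gaussian noise) is easy to sketch for the slope method, where the logarithm of the range-corrected signal is linear in range. The closed-form weighted fit below is the standard one; the variable names are illustrative:

```python
import numpy as np

def slope_method_mle(r, s, var):
    """Inverse-variance weighted linear fit s = a + b*r, equivalent to
    the MLE when each sample s[i] carries independent Gaussian noise of
    variance var[i].  For the lidar slope method, s is ln(P * r**2),
    the intercept a is the zero-range intercept, and the extinction
    coefficient is -b / 2."""
    w = 1.0 / var
    W = np.sum(w)
    rbar = np.sum(w * r) / W          # weighted means
    sbar = np.sum(w * s) / W
    b = np.sum(w * (r - rbar) * (s - sbar)) / np.sum(w * (r - rbar) ** 2)
    a = sbar - b * rbar
    return a, b

# Noise-free check: extinction sigma = 0.01 m^-1 gives slope b = -2*sigma.
r = np.linspace(100.0, 1000.0, 50)
sigma = 0.01
s = 5.0 - 2.0 * sigma * r
a, b = slope_method_mle(r, s, var=np.ones_like(r))
extinction = -b / 2.0
```

The two traditional schemes the abstract finds less accurate correspond to other weight choices in the same formula; only weights proportional to the inverse noise variance reproduce the MLE.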
Time domain numerical calculations of unsteady vortical flows about a flat plate airfoil
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Yu, Ping; Scott, J. R.
1989-01-01
A time domain numerical scheme is developed to solve for the unsteady flow about a flat plate airfoil due to imposed upstream, small amplitude, transverse velocity perturbations. The governing equation for the resulting unsteady potential is a homogeneous, constant coefficient, convective wave equation. Accurate solution of the problem requires the development of approximate boundary conditions which correctly model the physics of the unsteady flow in the far field. A uniformly valid far field boundary condition is developed, and numerical results are presented using this condition. The stability of the scheme is discussed, and the stability restriction for the scheme is established as a function of the Mach number. Finally, comparisons are made with the frequency domain calculation by Scott and Atassi, and the relative strengths and weaknesses of each approach are assessed.
High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
NASA Astrophysics Data System (ADS)
Lakshminarayana, B.; Ho, Y.; Basson, A.
1993-07-01
The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects, in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The value of this artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses, and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions, and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner-loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat-plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, with the upstream rotor wake specified from experimental data. The results show that the stator potential effects have an appreciable influence on the upstream rotor wake.
The predicted unsteady wake profiles are compared with the available experimental data and the agreement is good. The numerical results are interpreted to draw conclusions on the unsteady wake transport mechanism in the blade passage.
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Deqing; Wu, Wenjun; Hu, Hongping
2012-11-01
In today's business environment, enterprises are increasingly under pressure to process the vast amounts of data produced every day. One approach is to focus on business intelligence (BI) applications and increase commercial added value through such business analytics activities. Term weighting, which is used to represent documents as vectors in the term space, is a vital task in enterprise Information Retrieval (IR), text categorisation, text analytics, etc. When determining the weight of a term in a document, the traditional TF-IDF scheme considers only the term's occurrence frequency within the document and in the entire document set, so some meaningful terms cannot receive appropriate weights. In this article, we propose a new term weighting scheme called Term Frequency - Function of Document Frequency (TF-FDF) to address this issue. Instead of using a monotonically decreasing function such as Inverse Document Frequency, FDF uses a convex function that dynamically adjusts weights according to the significance of the words in a document set. This function can be manually tuned based on the distribution of the most meaningful words that semantically represent the document set. Our experiments show that TF-FDF achieves higher Normalised Discounted Cumulative Gain in IR than TF-IDF and its variants, and improves the accuracy of relevance ranking of the IR results.
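The paper's exact FDF function is tuned to the word distribution of the corpus and is not reproduced here; the sketch below only contrasts the structure of TF-IDF with a TF-FDF-style weighting in which the monotone IDF factor is replaced by a hypothetical convex function of the document-frequency ratio:

```python
import math

def tf_idf(term, doc, docs):
    """Classic TF-IDF: term frequency times inverse document frequency."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

def tf_fdf(term, doc, docs, f):
    """TF-FDF-style weight: the monotone IDF is replaced by a supplied
    function f of the document-frequency ratio.  The paper uses a
    convex, manually tunable f; the form used below is an assumption."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    return tf * f(df / len(docs))

docs = [["data", "mining", "data"], ["text", "mining"], ["data", "text"]]
# Hypothetical convex weighting: large at both rare and common extremes.
convex = lambda x: (2.0 * x - 1.0) ** 2 + 0.1
w = tf_fdf("data", docs[0], docs, convex)
```

Under plain IDF, a term appearing in most documents is driven toward zero weight; a convex FDF can keep such a term significant when it is semantically representative of the collection.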
Simulation study on combination of GRACE monthly gravity field solutions
NASA Astrophysics Data System (ADS)
Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian
2016-04-01
The GRACE monthly gravity fields from different processing centers are combined in the frame of the EGSIEM project. This combination is done on the solution level first, to define weights which will then be used for a combination on the normal-equation level. The applied weights are based on the deviation of the individual gravity fields from the arithmetic mean of all involved gravity fields. This kind of weighting scheme relies on the assumption that the true gravity field is close to the arithmetic mean of the involved individual gravity fields. However, the arithmetic mean can be affected by systematic errors in individual gravity fields, which consequently results in inappropriate weights. For the future operational scientific combination service of GRACE monthly gravity fields, it is necessary to examine the validity of the weighting scheme in possible extreme cases as well. To investigate this, we perform a simulation study on the combination of gravity fields. First, we show how a deviating gravity field can affect the combined solution in terms of signal and noise in the spatial domain. We also show the impact of systematic errors in individual gravity fields on the resulting combined solution. Then, we investigate whether the weighting scheme still works in the presence of outliers. The results of this simulation study will be useful for understanding and validating the weighting scheme applied to the combination of the monthly gravity fields.
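A minimal sketch of deviation-based weighting of this kind (an assumption: the EGSIEM combination uses a more elaborate noise assessment than the plain RMS deviation used here) illustrates both the idea and why the arithmetic mean is its weak point:

```python
import numpy as np

def deviation_weights(solutions):
    """Combination weights for a set of monthly gravity-field solutions,
    based on each solution's deviation from the arithmetic mean of all
    of them.  solutions: n_centers x n_coeffs array of coefficients."""
    mean = solutions.mean(axis=0)
    # Empirical noise estimate: RMS deviation from the mean, per center.
    rms = np.sqrt(((solutions - mean) ** 2).mean(axis=1))
    w = 1.0 / rms ** 2                # inverse-variance style weighting
    return w / w.sum()

# Three mock "centers": two agree closely, one is a noisy outlier.
rng = np.random.default_rng(0)
truth = rng.normal(size=100)
centers = np.stack([truth + 0.01 * rng.normal(size=100),
                    truth + 0.01 * rng.normal(size=100),
                    truth + 0.5 * rng.normal(size=100)])
w = deviation_weights(centers)       # the outlier gets the smallest weight
combined = w @ centers
```

Note that the outlier also contaminates the mean itself, so the deviations attributed to the good centers are inflated, which is exactly the failure mode the simulation study above probes.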
Computation of Feedback Aeroacoustic System by the CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Wang, Xiao Y.; Chang, Sin-Chung; Jorgenson, Philip C. E.
2000-01-01
It is well known that due to vortex shedding in high speed flow over cutouts, cavities, and gaps, intense noise may be generated. Strong tonal oscillations occur in a feedback cycle in which the vortices shed from the upstream edge of the cavity convect downstream and impinge on the cavity lip, generating acoustic waves that propagate upstream to excite new vortices. Numerical simulation of such a complicated process requires a scheme that can: (1) resolve acoustic waves with low dispersion and numerical dissipation, (2) handle nonlinear and discontinuous waves (e.g. shocks), and (3) have an effective (near field) nonreflecting boundary condition (NRBC). The new space time conservation element and solution element method, or CE/SE for short, is a numerical method that meets the above requirements.
NASA Astrophysics Data System (ADS)
Cai, Zun; Liu, Xiao; Gong, Cheng; Sun, Mingbo; Wang, Zhenguo; Bai, Xue-Song
2016-09-01
Large Eddy Simulation (LES) was employed to investigate the fuel/oxidizer mixing process in an ethylene-fueled scramjet combustor with a rearwall-expansion cavity. The numerical solver was first validated against an experimental flow, the DLR strut-based scramjet combustor case. Shock wave structures and wall-pressure distributions from the numerical simulations were compared with experimental data and found to be in good agreement. Effects of the injection location on the flow and mixing process were then studied. It was found that with a long injection distance upstream of the cavity, the fuel is transported much further into the main flow and a smaller subsonic zone is formed inside the cavity. Conversely, with a short injection distance, more fuel is entrained into the cavity and a larger subsonic zone is formed inside it, which is favorable for ignition in the cavity. For the rearwall-expansion cavity, it is suggested that the optimal ignition location with a long upstream injection distance is on the bottom wall in the middle part of the cavity, while the optimal ignition location with a short upstream injection distance is on the bottom wall at the front of the cavity. Direct injection on the cavity rear wall increases both the fuel mass fraction inside the cavity and the local turbulence intensity, enhancing the mixing process and thereby the mixing efficiency. For the rearwall-expansion cavity, the combined injection scheme is therefore expected to be the optimal injection scheme.
NASA Astrophysics Data System (ADS)
Zhang, Jing; Yang, Heming; Zhao, Difu; Qiu, Kun
2016-07-01
We introduce digital coherent superposition (DCS) into the optical access network and propose a DCS-OFDM-PON upstream transmission scheme using an intensity modulator and collective self-coherent detection. The generated OFDM signal is real-valued based on Hermitian symmetry, which allows the common phase error (CPE) to be estimated from complex-conjugate subcarrier pairs without any pilots. In simulation, we transmit an aggregated 40 Gb/s optical OFDM signal from two ONUs. The transmission performance with DCS is slightly better after 25 km of transmission when there is no relative transmission time delay. In general, however, the fiber distances from different ONUs to the RN are not the same, and the resulting relative transmission time delay between ONUs increases the inter-carrier interference (ICI) power and degrades the transmission performance. DCS can mitigate the ICI power, and the DCS-OFDM-PON upstream transmission outperforms the conventional OFDM-PON. The CPE estimation uses two pairs of complex-conjugate subcarriers without redundancy. The power variation can be 9 dB in DCS-OFDM-PON, which is enough to tolerate several kilometers of fiber-length difference between the ONUs.
NASA Astrophysics Data System (ADS)
Ilik, Semih C.; Arsoy, Aysen B.
2017-07-01
Integration of distributed generation (DG) such as renewable energy sources into the electrical network has become more prevalent in recent years. Grid connection of DG affects load flow directions, voltage profile, short-circuit power, and especially protection selectivity. Applying a traditional overcurrent protection scheme is inadequate when system reliability and sustainability are considered. If a fault occurs in a DG-connected network, the short-circuit contribution of the DG creates an additional branch element feeding the fault current, which compels the use of a directional overcurrent (OC) protection scheme. Protection coordination may be lost under changing operating conditions when DG sources are connected. Directional overcurrent relay parameters are determined for downstream and upstream relays for different combinations of DG connected singly or in groups on a radial test system. With the help of the proposed flow chart, relay parameters are updated and coordination between relays is sustained for different operating conditions in the DigSILENT PowerFactory program.
Filter Bank Multicarrier (FBMC) for long-reach intensity modulated optical access networks
NASA Astrophysics Data System (ADS)
Saljoghei, Arsalan; Gutiérrez, Fernando A.; Perry, Philip; Barry, Liam P.
2017-04-01
Filter Bank Multi Carrier (FBMC) is a modulation scheme which has recently attracted significant interest in both wireless and optical communications. The interest in optical communications arises from FBMC's capability to operate without a Cyclic Prefix (CP) and its high resilience to synchronisation errors. However, the operation of FBMC in optical access networks has not been extensively studied, either downstream or upstream. In this work we experimentally investigate the operation of FBMC in intensity-modulated Passive Optical Networks (PONs) employing direct detection in conjunction with both direct and external modulation schemes. The data rates and propagation lengths employed here range from 8.4 to 14.8 Gb/s and from 0 to 75 km, respectively. The results suggest that by using FBMC it is possible to accomplish CP-less transmission over up to 75 km of SSMF in passive links using cost-effective intensity modulation and detection schemes.
Relativistic and Slowing Down: The Flow in the Hotspots of Powerful Radio Galaxies and Quasars
NASA Technical Reports Server (NTRS)
Kazanas, D.
2003-01-01
The 'hotspots' of powerful radio galaxies (the compact, high-brightness regions where the jet flow collides with the intergalactic medium (IGM)) have been imaged at radio, optical, and recently X-ray frequencies. We propose a scheme that unifies their, at first sight, disparate broadband (radio to X-ray) spectral properties. This scheme involves a relativistic flow upstream of the hotspot that decelerates to the sub-relativistic speed of its inferred advance through the IGM and is viewed at different angles to its direction of motion, as suggested by two independent orientation estimators (the presence or absence of broad emission lines in the optical spectra and the core-to-extended radio luminosity). Besides providing an account of the hotspot spectral properties with jet orientation, this scheme also suggests that the large-scale jets remain relativistic all the way to the hotspots.
A Systematic Methodology for Constructing High-Order Energy-Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2008-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter (AIAA 2008-2876, 2008) was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
A Systematic Methodology for Constructing High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter [1] was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Qiu, Jianxian
2017-11-01
In this paper a third-order finite volume weighted essentially non-oscillatory (WENO) scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of the new scheme are its simplicity and compactness: only six unequal-size spatial stencils are used for reconstructing polynomials of unequal degree in the WENO-type spatial procedures, and the positive linear weights can be chosen easily without considering the topology of the meshes. The key innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil to obtain a third-order numerical approximation at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils to sustain sharp shock transitions while keeping the essentially non-oscillatory property. By performing these new spatial reconstruction procedures and adopting a third-order TVD Runge-Kutta time discretization method for solving the resulting ordinary differential equations (ODEs), the new scheme's memory occupancy is decreased and its computational efficiency is increased, making it suitable for large-scale engineering applications on tetrahedral meshes. Numerical results are provided to illustrate the good performance of the scheme.
Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang
2016-02-19
Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain.
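An even distribution of energy load across rounds is the hallmark of LEACH-style rotating cluster-head election; the sketch below is a generic energy-weighted election, an illustrative assumption rather than the paper's exact protocol:

```python
import random

def elect_cluster_heads(nodes, p=0.2, rng=random):
    """Energy-weighted cluster-head election sketch (LEACH-style; the
    paper's protocol is not reproduced).  Nodes with more residual
    energy are proportionally more likely to serve as head in a round,
    which spreads the energy load evenly over time."""
    total = sum(n["energy"] for n in nodes)
    heads = []
    for n in nodes:
        # Target an average of p * len(nodes) heads per round, with each
        # node's election probability scaled by its share of the energy.
        prob = p * len(nodes) * n["energy"] / total
        if rng.random() < prob:
            heads.append(n["id"])
    return heads

nodes = [{"id": i, "energy": e}
         for i, e in enumerate([1.0, 0.8, 0.2, 0.9, 0.5])]
heads = elect_cluster_heads(nodes, p=0.4, rng=random.Random(1))
```

Because low-energy nodes are rarely elected, no single node's battery is drained by the head role, matching the "even distribution of energy load among the mobile sensor nodes in each round" claimed above.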
Corrections to the General (2,4) and (4,4) FDTD Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meierbachtol, Collin S.; Smith, William S.; Shao, Xuan-Min
The sampling weights associated with two general higher-order FDTD schemes were derived by Smith et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher-order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher-order schemes provided in the Smith et al. 2012 article instead use the sampling weights and stability conditions listed in this document.
Method for operating a spark-ignition, direct-injection internal combustion engine
Narayanaswamy, Kushal; Koch, Calvin K.; Najt, Paul M.; Szekely, Jr., Gerald A.; Toner, Joel G.
2015-06-02
A spark-ignition, direct-injection internal combustion engine is coupled to an exhaust aftertreatment system including a three-way catalytic converter upstream of an NH3-SCR catalyst. A method for operating the engine includes operating the engine in a fuel cutoff mode and coincidentally executing a second fuel injection control scheme upon detecting an engine load that permits operation in the fuel cutoff mode.
Pal, Madhubonti; Chakrabortty, Sankha; Pal, Parimal; Linnanen, Lassi
2015-08-01
For purifying fluoride-contaminated water, a new forward osmosis scheme in a horizontal flat-sheet cross-flow module was designed and investigated. Effects of pressure, cross-flow rate, draw solution, and alignment of the membrane module on separation and flux were studied. Concentration polarization and reverse salt diffusion were significantly reduced in the new hydrodynamic regime. This resulted in less membrane fouling, better solute separation, and higher pure-water flux than in a conventional module. The entire scheme was completed in two stages: an upstream forward osmosis for separating pure water from contaminated water, and a downstream nanofiltration operation for continuous recovery and recycling of the draw solute. Synchronization of these two stages resulted in a continuous, steady-state process. From a set of commercial membranes, two polyamide composite membranes were screened out for the upstream and downstream filtrations. A 0.3 M NaCl solution was found to be the best forward osmosis draw solution. Potable water with less than 1% residual fluoride could be produced at a high flux of 60-62 L m⁻² h⁻¹, while more than 99% of the draw solute could be recovered and recycled in the downstream nanofiltration stage, where the flux was 62-65 L m⁻² h⁻¹.
Bento and Buffet: Two Approaches to Flexible Summative Assessment
ERIC Educational Resources Information Center
Didicher, Nicky
2016-01-01
This practice-sharing piece outlines two main approaches to flexible summative assessment schemes, including for each approach one example from my practice and another from a published study. The bento approach offers the same assessments to all students but a variety of grade weighting schemes, allowing students to change weighting during the…
A Systematic Scheme for Multiple Access in Ethernet Passive Optical Access Networks
NASA Astrophysics Data System (ADS)
Ma, Maode; Zhu, Yongqing; Hiang Cheng, Tee
2005-11-01
While backbone networks have experienced substantial changes in the last decade, access networks have not changed much. Recently, passive optical networks (PONs) seem ready for commercial deployment as access networks, due to the maturity of a number of enabling technologies. Among the PON technologies, Ethernet PON (EPON), standardized by the IEEE 802.3ah Ethernet in the First Mile (EFM) Task Force, is the most attractive because of its high speed, low cost, familiarity, interoperability, and low overhead. In this paper, we consider the issue of upstream channel sharing in EPONs. We propose a novel multiple-access control scheme to provide bandwidth-guaranteed service for high-demand customers, while providing best-effort service to low-demand customers according to the service level agreement (SLA). The analytical and simulation results show that the proposed scheme performs best at what it is designed to do, compared with another well-known scheme that does not provide differentiated services. With business customers preferring premium services with guaranteed bandwidth and residential users preferring low-cost best-effort services, our scheme could benefit both groups of subscribers, as well as the operators.
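A grant cycle of the kind described (guaranteed bandwidth first, best effort from the remainder) can be sketched as follows. The function and its two-phase split are illustrative assumptions, not the paper's actual multiple-access algorithm:

```python
def grant_bandwidth(requests, sla_min, capacity):
    """SLA-aware upstream grant sketch: premium ONUs first receive their
    guaranteed minimum, then the leftover capacity is shared among the
    remaining (best-effort) demand in proportion to each unmet request.
    requests / sla_min: dicts of ONU id -> bandwidth; capacity: total."""
    grants = {}
    remaining = capacity
    # Phase 1: guaranteed bandwidth for customers with an SLA minimum.
    for onu, req in requests.items():
        g = min(req, sla_min.get(onu, 0), remaining)
        grants[onu] = g
        remaining -= g
    # Phase 2: distribute what is left in proportion to unmet demand.
    unmet = {o: requests[o] - grants[o]
             for o in requests if requests[o] > grants[o]}
    total = sum(unmet.values())
    for onu, u in unmet.items():
        grants[onu] += min(u, remaining * u / total) if total else 0
    return grants

requests = {"A": 30, "B": 50, "C": 40}   # bandwidth asked per ONU
sla_min = {"A": 20}                      # "A" is the premium customer
grants = grant_bandwidth(requests, sla_min, capacity=100)
```

Premium ONU "A" is guaranteed its 20 units regardless of load, while "B" and "C" share only what remains, which is the differentiated-service behavior the abstract contrasts against a scheme without SLAs.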
Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes
NASA Astrophysics Data System (ADS)
Don, Wai-Sun; Borges, Rafael
2013-10-01
In the reconstruction step of (2r-1) order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights αk and ωk, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower order local smoothness indicators βk in each substencil. These nonlinear weight formulations share two important free parameters: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to βk to avoid division by zero in the denominator of αk. However, ε also affects the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx²) (Ω(Δxᵐ) means that ε⩾CΔxᵐ for some C independent of Δx, as Δx→0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of βk. Moreover, it was unknown whether the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of βk, we prove that the optimal order of the WENO-Z scheme can in fact be guaranteed with a much weaker condition ε=Ω(Δxᵐ), where m(r,p)⩾2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with an arbitrary order of critical points. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, since a smaller ε allows better essentially non-oscillatory shock capturing, as it does not dominate over the size of βk.
We also show that numerical oscillations can be further attenuated by increasing the power parameter 2⩽p⩽r-1, at the cost of increased numerical dissipation. Compact formulas of βk for WENO schemes are also presented.
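The quantities discussed above (βk, ε, p, and the resulting nonlinear weights) can be computed directly for the fifth-order case (r = 3). The smoothness indicators and optimal linear weights below are the standard WENO-JS/WENO-Z formulas for the stencil (f_{i-2}, ..., f_{i+2}):

```python
import numpy as np

def weno5_weights(f, eps=1e-6, p=2, variant="JS"):
    """Nonlinear weights omega_k for fifth-order WENO-JS and WENO-Z,
    from a 5-point stencil f = (f_{i-2}, ..., f_{i+2}).  eps plays the
    sensitivity role discussed in the text: large enough to avoid
    division by zero, small enough not to dominate the beta_k."""
    b = np.array([   # smoothness indicators beta_k of the 3 substencils
        13/12*(f[0] - 2*f[1] + f[2])**2 + 1/4*(f[0] - 4*f[1] + 3*f[2])**2,
        13/12*(f[1] - 2*f[2] + f[3])**2 + 1/4*(f[1] - f[3])**2,
        13/12*(f[2] - 2*f[3] + f[4])**2 + 1/4*(3*f[2] - 4*f[3] + f[4])**2,
    ])
    d = np.array([0.1, 0.6, 0.3])        # optimal linear weights
    if variant == "JS":
        a = d / (b + eps) ** p           # alpha_k, WENO-JS
    else:                                # WENO-Z
        tau5 = abs(b[0] - b[2])          # global smoothness indicator
        a = d * (1.0 + (tau5 / (b + eps)) ** p)
    return a / a.sum()                   # omega_k

# On smooth data the nonlinear weights recover the optimal d_k.
x = np.linspace(0.0, 0.4, 5)
w = weno5_weights(np.sin(x))
```

On a discontinuous stencil the weights instead collapse onto the smooth substencil, which is the essentially non-oscillatory mechanism; how closely ωk tracks dk near critical points is exactly what the ε conditions above govern.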
A joint precoding scheme for indoor downlink multi-user MIMO VLC systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Kang, Bochao
2017-11-01
In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By combining the power-method algorithm with the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multiple-input multiple-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search for the precoding weight of each user under the optimality criterion of signal-to-interference-plus-noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme consistently achieves better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
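The BD step above can be sketched with a standard null-space construction: each user's precoder is restricted to the null space of the other users' stacked channels, so that co-channel interference vanishes. This is a minimal generic sketch of block diagonalization, not the paper's exact algorithm, and the helper name is hypothetical:

```python
import numpy as np

def bd_precoders(H_list):
    """Block-diagonalization precoders for a MU-MIMO downlink (sketch).

    H_list[k] is user k's channel matrix (Nr x Nt).  Precoder W_k is a
    basis of the null space of the other users' stacked channels, so
    H_j @ W_k is (numerically) zero for every j != k, eliminating CCI.
    """
    precoders = []
    for k, _ in enumerate(H_list):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        # right singular vectors with zero singular value span the null space
        _, s, Vh = np.linalg.svd(H_others)
        rank = int(np.sum(s > 1e-10))
        Wk = Vh.conj().T[:, rank:]       # Nt x (Nt - rank) null-space basis
        precoders.append(Wk)
    return precoders
```

A power-method search for the SINR-maximizing weight, and the optical power constraint, would then operate inside each user's null space; those steps are omitted here.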
Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang
2016-01-01
Collaborative strategies for mobile sensor nodes ensure the efficiency and robustness of data processing while limiting the required communication bandwidth. To address the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can distribute the energy load evenly among the mobile sensor nodes in each round and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy-efficient collaborative strategy in which the sensor nodes can communicate with a fusion center and produce high power gain. PMID:26907285
NASA Technical Reports Server (NTRS)
Murthy, V. S.; Rose, W. C.
1977-01-01
Detailed measurements of wall shear stress (skin friction) were made with specially developed buried wire gages in the interaction regions of a Mach 2.9 turbulent boundary layer with externally generated shocks. Separation and reattachment points inferred by these measurements support the findings of earlier experiments which used a surface oil flow technique and pitot profile measurements. The measurements further indicate that the boundary layer tends to attain significantly higher skin-friction values downstream of the interaction region as compared to upstream. Comparisons between measured wall shear stress and published results of some theoretical calculation schemes show that the general, but not detailed, behavior is predicted well by such schemes.
Eliason, Michele J; Fogel, Sarah C
2015-01-01
In recent years, many studies have focused on the body of sexual minority women, particularly emphasizing their larger size. These studies rarely offer theoretically based explanations for the increased weight, nor study the potential consequences (or lack thereof) of being heavier. This article provides a brief overview of the multitude of factors that might cause or contribute to larger size of sexual minority women, using an ecological framework that elucidates upstream social determinants of health as well as individual risk factors. This model is infused with a minority stress model, which hypothesizes excess strain resulting from the stigma associated with oppressed minority identities such as woman, lesbian, bisexual, woman of color, and others. We argue that lack of attention to the upstream social determinants of health may result in individual-level victim blaming and interventions that do not address the root causes of minority stress or increased weight.
Parks, Sean A; McKelvey, Kevin S; Schwartz, Michael K
2013-02-01
The importance of movement corridors for maintaining connectivity within metapopulations of wild animals is a cornerstone of conservation. One common approach for determining corridor locations is least-cost corridor (LCC) modeling, which uses algorithms within a geographic information system to search for routes with the lowest cumulative resistance between target locations on a landscape. However, the presentation of multiple LCCs that connect multiple locations generally assumes all corridors contribute equally to connectivity, regardless of the likelihood that animals will use them. Thus, LCCs may overemphasize seldom-used longer routes and underemphasize more frequently used shorter routes. We hypothesize that, depending on conservation objectives and available biological information, weighting individual corridors on the basis of species-specific movement, dispersal, or gene flow data may better identify effective corridors. We tested whether locations of key connectivity areas, defined as the highest 75th and 90th percentile cumulative weighted value of approximately 155,000 corridors, shift under different weighting scenarios. In addition, we quantified the amount and location of private land that intersect key connectivity areas under each weighting scheme. Some areas that appeared well connected when analyzed with unweighted corridors exhibited much less connectivity compared with weighting schemes that discount corridors with large effective distances. Furthermore, the amount and location of key connectivity areas that intersected private land varied among weighting schemes. We believe biological assumptions and conservation objectives should be explicitly incorporated to weight corridors when assessing landscape connectivity. 
These results are highly relevant to conservation planning because on the basis of recent interest by government agencies and nongovernmental organizations in maintaining and enhancing wildlife corridors, connectivity will likely be an important criterion for prioritization of land purchases and swaps. ©2012 Society for Conservation Biology.
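To make the corridor-weighting idea concrete, here is a minimal sketch, not the authors' GIS workflow, in which each least-cost corridor is discounted by a negative-exponential function of its effective distance before the corridors are accumulated into a connectivity surface. The kernel and the `alpha` parameter are hypothetical illustrations of a dispersal-based weighting:

```python
import math

def corridor_weights(effective_distances, alpha=0.01):
    """Discount corridors by effective (cumulative-resistance) distance.

    A negative-exponential kernel exp(-alpha * d) down-weights long,
    seldom-used routes relative to short, frequently used ones, in the
    spirit of the weighting schemes discussed above.
    """
    return [math.exp(-alpha * d) for d in effective_distances]

def weighted_connectivity(cells_per_corridor, weights):
    """Sum each corridor's weight onto the map cells it crosses."""
    surface = {}
    for cells, w in zip(cells_per_corridor, weights):
        for c in cells:
            surface[c] = surface.get(c, 0.0) + w
    return surface
```

Key connectivity areas would then be the cells above a chosen percentile of this weighted surface.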
Colangelo, David J; Jones, Bradley L
2005-03-01
Phase I of the Kissimmee River restoration project included backfilling of 12 km of canal and restoring flow through 24 km of continuous river channel. We quantified the effects of construction activities on four water quality parameters (turbidity, total phosphorus flow-weighted concentration, total phosphorus load and dissolved oxygen concentration). Data were collected at stations upstream and downstream of the construction and at four stations within the construction zone to determine if canal backfilling and construction of 2.4 km of new river channel would negatively impact local and downstream water quality. Turbidity levels at the downstream station were elevated for approximately 2 weeks during the one and a half year construction period, but never exceeded the Florida Department of Environmental Protection construction permit criteria. Turbidity levels at stations within the construction zone were high at certain times. Flow-weighted concentration of total phosphorus at the downstream station was slightly higher than the upstream station during construction, but low discharge limited downstream transport of phosphorus. Total phosphorus loads at the upstream and downstream stations were similar and loading to Lake Okeechobee was not significantly affected by construction. Mean water column dissolved oxygen concentrations at all sampling stations were similar during construction.
Enhancing Community Detection By Affinity-based Edge Weighting Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Andy; Sanders, Geoffrey; Henson, Van
Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs, which offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength and is therefore ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can significantly improve the performance of community detection algorithms.
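The idea of edge weighting by endpoint proximity can be sketched as follows. As a simple stand-in for the paper's vertex affinity, the Jaccard overlap of the two endpoints' neighborhoods is used here: intra-community edges, whose endpoints share many neighbors, receive larger weights than bridge edges.

```python
def weight_edges_by_affinity(adj):
    """Weight each undirected edge by a neighborhood-overlap score.

    adj maps vertex -> set of neighbors.  The Jaccard overlap of the two
    endpoints' neighborhoods is a proxy for their clustering strength;
    a weighted community detection algorithm can then exploit it.
    """
    weights = {}
    for u in adj:
        for v in adj[u]:
            if u < v:                       # visit each edge once
                shared = adj[u] & adj[v]
                union = adj[u] | adj[v]
                weights[(u, v)] = len(shared) / len(union)
    return weights
```

On two triangles joined by a single bridge edge, the bridge gets weight zero while the triangle edges get positive weight, which is the separation a community detector benefits from.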
Read, Emily K; Patil, Vijay P; Oliver, Samantha K; Hetherington, Amy L; Brentrup, Jennifer A; Zwart, Jacob A; Winters, Kirsten M; Corman, Jessica R; Nodine, Emily R; Woolway, R Iestyn; Dugan, Hilary A; Jaimes, Aline; Santoso, Arianto B; Hong, Grace S; Winslow, Luke A; Hanson, Paul C; Weathers, Kathleen C
2015-06-01
Lake water quality is affected by local and regional drivers, including lake physical characteristics, hydrology, landscape position, land cover, land use, geology, and climate. Here, we demonstrate the utility of hypothesis testing within the landscape limnology framework using a random forest algorithm on a national-scale, spatially explicit data set, the United States Environmental Protection Agency's 2007 National Lakes Assessment. For 1026 lakes, we tested the relative importance of water quality drivers across spatial scales, the importance of hydrologic connectivity in mediating water quality drivers, and how the importance of both spatial scale and connectivity differ across response variables for five important in-lake water quality metrics (total phosphorus, total nitrogen, dissolved organic carbon, turbidity, and conductivity). By modeling the effect of water quality predictors at different spatial scales, we found that lake-specific characteristics (e.g., depth, sediment area-to-volume ratio) were important for explaining water quality (54-60% variance explained), and that regionalization schemes were much less effective than lake specific metrics (28-39% variance explained). Basin-scale land use and land cover explained between 45-62% of variance, and forest cover and agricultural land uses were among the most important basin-scale predictors. Water quality drivers did not operate independently; in some cases, hydrologic connectivity (the presence of upstream surface water features) mediated the effect of regional-scale drivers. For example, for water quality in lakes with upstream lakes, regional classification schemes were much less effective predictors than lake-specific variables, in contrast to lakes with no upstream lakes or with no surface inflows. At the scale of the continental United States, conductivity was explained by drivers operating at larger spatial scales than for other water quality responses. 
The current regulatory practice of using regionalization schemes to guide water quality criteria could be improved by consideration of lake-specific characteristics, which were the most important predictors of water quality at the scale of the continental United States. The spatial extent and high quality of contextual data available for this analysis makes this work an unprecedented application of landscape limnology theory to water quality data. Further, the demonstrated importance of lake morphology over other controls on water quality is relevant to both aquatic scientists and managers.
Consensus-based distributed cooperative learning from closed-loop neural control systems.
Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang
2015-02-01
In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems whose system structures are identical but whose reference signals differ. This paper focuses on the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme, the distributed cooperative learning (DCL) control scheme, which establishes a communication topology among the adaptive laws of the NN weights so that they can share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated NN weights converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary, it is shown that the conclusion on deterministic learning still holds in the decentralized adaptive neural control scheme, where, however, the estimated NN weights converge to small neighborhoods of the optimal values only along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the proposed control schemes.
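The coupling among the adaptive laws can be illustrated with the consensus term alone. This sketch shows only the hypothetical neighbor-coupling update; the local learning term each agent would add from its own tracking error is omitted, so this is not the paper's full adaptive law:

```python
import numpy as np

def consensus_step(W, neighbors, gamma=0.2):
    """One consensus update of the agents' NN weight estimates (sketch).

    W is an (n_agents, n_weights) array and neighbors[i] lists agent i's
    neighbors in the undirected communication topology.  Each agent nudges
    its estimate toward its neighbors' estimates, the coupling that lets
    the group share learned knowledge online.
    """
    W_new = W.copy()
    for i, nbrs in enumerate(neighbors):
        W_new[i] = W[i] + gamma * sum(W[j] - W[i] for j in nbrs)
    return W_new
```

On a connected undirected topology the repeated update drives all estimates to a common value (here the average), mirroring the claim that all agents' weights converge over the union of state orbits.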
NASA Astrophysics Data System (ADS)
Fu, Meixia; Zhang, Min; Wang, Danshi; Cui, Yue; Han, Huanhuan
2016-10-01
We propose an optical duobinary-modulated upstream transmission scheme for reflective semiconductor optical amplifier-based colorless optical network units in a 10-Gbps wavelength-division multiplexed passive optical network (WDM-PON), where a fiber Bragg grating (FBG) is adopted as an optical equalizer for better performance. The demodulation module is extremely simple, requiring only a binary intensity-modulation direct-detection receiver. A receiver sensitivity of -16.98 dBm at a bit error rate (BER) of 1.0×10^-4 can be achieved at 120 km without the FBG, while with the FBG a BER of 2.1×10^-5 at a sensitivity of -18.49 dBm can be reached at a transmission distance of 160 km, which demonstrates the feasibility of the proposed scheme. Moreover, it could be a highly cost-effective scheme for future WDM-PONs.
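The reason the receiver can be a simple binary direct-detection one is a standard property of duobinary signaling: with differential precoding, the transmitted data are recovered from the three-level symbol by a modulo-2 decision. A minimal sketch of textbook (1+D) duobinary encoding, not the paper's optical implementation:

```python
def duobinary_encode(bits):
    """(1+D) duobinary encoding with differential precoding (sketch).

    Precode b_k = d_k XOR b_{k-1}, then form three-level symbols
    c_k = b_k + b_{k-1} in {0, 1, 2}.  Direct detection recovers
    d_k = c_k mod 2, so no differential decoder is needed at the receiver.
    """
    precoded, prev = [], 0
    for d in bits:
        prev ^= d                      # differential precoder
        precoded.append(prev)
    return [precoded[0]] + [precoded[k] + precoded[k - 1]
                            for k in range(1, len(precoded))]

def duobinary_decode(symbols):
    """Binary decision equivalent to intensity direct detection."""
    return [s % 2 for s in symbols]
```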
Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin
2014-10-01
Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs) including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose an intra-ONU scheduling and an inter-ONU scheduling scheme, which takes NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1997-01-01
In these lecture notes we describe the construction, analysis, and application of ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes for hyperbolic conservation laws and related Hamilton-Jacobi equations. ENO and WENO schemes are high order accurate finite difference schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the approximation level, where a nonlinear adaptive procedure is used to automatically choose the locally smoothest stencil, hence avoiding crossing discontinuities in the interpolation procedure as much as possible. ENO and WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures, such as compressible turbulence simulations and aeroacoustics. These lecture notes are basically self-contained. It is our hope that with these notes and with the help of the quoted references, the reader can understand the algorithms and code them up for applications.
A generalized weight-based particle-in-cell simulation scheme
NASA Astrophysics Data System (ADS)
Lee, W. W.; Jenkins, T. G.; Ethier, S.
2011-03-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage for such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
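The noise-reduction argument above can be seen in a toy Monte Carlo model. This is only an illustration of the δf idea under stated assumptions (a uniform background and a small cosine perturbation), not the paper's scheme:

```python
import numpy as np

def density_estimates(n_markers=100000, eps=0.01, seed=1):
    """Full-F versus delta-f estimate of a perturbed moment (sketch).

    For f(x) = f0*(1 + eps*cos x) with uniform background f0 on [0, 2*pi),
    the moment <cos x> equals eps/2.  Delta-f markers are loaded from f0
    and carry weights w = delta_f/f0 = eps*cos x, so their statistical
    noise scales with eps; full-F markers sampled from f itself carry
    O(1) sampling noise.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 2*np.pi, n_markers)     # markers loaded from f0
    w = eps * np.cos(x)                          # delta-f marker weights
    delta_f_est = np.mean(w * np.cos(x))         # error ~ eps / sqrt(N)
    # full-F: rejection-sample markers from f itself
    xf = rng.uniform(0.0, 2*np.pi, 4 * n_markers)
    keep = rng.uniform(0.0, 1.0 + eps, xf.size) < 1.0 + eps*np.cos(xf)
    full_f_est = np.mean(np.cos(xf[keep][:n_markers]))  # error ~ 1/sqrt(N)
    return full_f_est, delta_f_est
```

With the same marker count, the δf estimate of the small perturbed moment is orders of magnitude less noisy, which is exactly why δf is preferred in the linear stage.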
A CLS-based survivable and energy-saving WDM-PON architecture
NASA Astrophysics Data System (ADS)
Zhu, Min; Zhong, Wen-De; Zhang, Zhenrong; Luan, Feng
2013-11-01
We propose and demonstrate an improved survivable and energy-saving WDM-PON with colorless ONUs. It incorporates both energy-saving and self-healing operations. A simple, effective energy-saving scheme is proposed by including an energy-saving control unit in the OLT and a control unit at each ONU. The energy-saving scheme realizes both dozing and sleep (offline) modes, which greatly improves the energy-saving efficiency for WDM-PONs. An intelligent protection switching scheme is designed in the OLT, which can distinguish whether an ONU is in a dozing/sleep (offline) state or a fiber is faulty. Moreover, by monitoring the optical power of each channel on both working and protection paths, the OLT can know the connection status of every fiber path, thus facilitating effective protection switching and faster failure recovery. The improved WDM-PON architecture not only significantly reduces energy consumption, but also performs self-healing operation in practical operation scenarios. The feasibility of the scheme is experimentally verified with 10 Gbit/s downstream and 1.25 Gbit/s upstream transmissions. We also examine the energy-saving efficiency of our proposed scheme by simulation, which reveals that the energy saving mainly arises from the dozing mode, not from the sleep mode, when the ONU is in the online state.
The fundamentals of adaptive grid movement
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.
1990-01-01
Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is discussed.
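The one-dimensional equidistribution principle mentioned above can be sketched directly: place the grid points so that each cell carries an equal share of the integral of the weight function. A minimal sketch, with the weight function and point count as free inputs:

```python
import numpy as np

def equidistribute(w, x, n_points):
    """Equidistributing grid in one dimension (sketch).

    Given weight samples w >= 0 on a fine reference grid x, place n_points
    grid points so that each cell carries an equal share of the integral
    of w; large weight (e.g. a solution-gradient measure) attracts points.
    """
    # cumulative integral of w by the trapezoidal rule
    W = np.concatenate([[0.0], np.cumsum(0.5*(w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n_points)
    return np.interp(targets, W, x)    # invert the cumulative weight
```

With a weight function peaked at the middle of the domain, the resulting grid clusters points near the peak while keeping the endpoints fixed.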
A long-reach WDM passive optical network enabling broadcasting service with centralized light source
NASA Astrophysics Data System (ADS)
Liu, D.; Tang, M.; Fu, S.; Liu, D.; Shum, P.
2012-02-01
We propose a long-reach wavelength-division-multiplexed (WDM) passive optical network (PON) to provide conventional point-to-point (P2P) data and downstream broadcasting service simultaneously by superimposing, for each WDM channel, the differential-phase-shift-keying (DPSK) broadcasting signal with the subcarrier multiplexing (SCM) modulated downstream P2P signal, at the optical line terminal (OLT). In the optical network units (ONUs), by re-modulating part of the downstream signal with a reflective semiconductor optical amplifier (RSOA), we realize color-less ONUs for upstream data transmission. The proposed scheme is numerically verified with a 5 Gb/s downstream P2P signal and broadcasting services, as well as 2.5 Gb/s upstream data through a 60 km bidirectional fiber link. In particular, the influence of the downstream lightwave's optical carrier-subcarrier ratio (OCSR) on the system performance is also investigated.
NASA Astrophysics Data System (ADS)
Hayashi, K.; Matsui, H.; Kawano, H.; Yamamoto, T.; Kokubun, S.
1994-12-01
This study focuses on whistler mode waves observed in the upstream region very close to the bow shock, drawn from an initial survey of magnetic field data in the 1-50 Hz frequency range recorded by the search coil magnetometer on board the Geotail satellite. Based on the three-component waveform data, the polarization and wave-normal characteristics of foreshock waves are first shown as dynamic spectra for all Fourier components of the 50 Hz bandwidth. The observed polarization of intense whistler mode waves generated in the foot region of the bow shock is found to be strongly controlled by the angle between the wave propagation direction and the solar wind flow, but not strongly dependent on frequency. We also introduce a simple scheme for deriving the wave characteristics that is effective for surveying the continuously growing volume of data.
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths from both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that the conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. In conclusion, the numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
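The weight-transfer idea, moving part of each marker's weight onto the coarse phase-space grid to limit weight growth, can be sketched in one dimension. This is an illustrative deposition under stated assumptions (nearest-cell deposition, a fixed transfer fraction), not XGC1's actual operator:

```python
import numpy as np

def transfer_weights_to_grid(xv, w, grid_f, edges, frac=0.1):
    """Transfer a fraction of marker weights to a phase-space grid (sketch).

    A fraction `frac` of each marker's delta-f weight w is deposited on
    the grid cell containing the marker (positions xv, cell boundaries
    `edges`), so the grid accumulates the slowly varying part while the
    marker weights stay bounded.  Total (grid + marker) weight is
    conserved exactly.
    """
    idx = np.clip(np.searchsorted(edges, xv, side='right') - 1,
                  0, len(grid_f) - 1)
    moved = frac * w
    grid_f = grid_f.copy()
    np.add.at(grid_f, idx, moved)      # deposit onto the grid
    return w - moved, grid_f
```

Collisions, sources, and sinks would then act on `grid_f` directly, as the abstract describes for the non-Lagrangian part of the kinetic equation.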
Scheduling with hop-by-hop priority increasing in meshed optical burst-switched network
NASA Astrophysics Data System (ADS)
Chang, Hao; Luo, Jiangtao; Zhang, Zhizhong; Xia, Da; Gong, Jue
2006-09-01
In OBS, Just-Enough-Time (JET) is the classical wavelength reservation scheme. However, burst priority decreases hop by hop in multi-hop networks, which wastes the bandwidth already consumed upstream. Building on the Hop-by-hop Priority Increasing (HPI) scheme proposed in our earlier research, this paper presents an unprecedented simulation on a 4×4 meshed topology, which is closer to a real network environment, using a self-built NS2-based OBSN simulation platform. By comparing the drop probability and throughput on one of the longest end-to-end paths in the network, we show that the HPI scheme improves bandwidth utilization.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2017-01-01
In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection; the first three are piecewise linear, and the last two piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells and can be considered a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the ongoing "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number of degrees of freedom n per cell, in spite of their different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2n-1, which is higher than the expected order of n.
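A MUSCL-type update for linear advection can be sketched in a few lines: piecewise-linear reconstruction with a minmod-limited slope and an upwind flux. This is a scheme-I-style advection step in the spirit of Van Leer's family, not his exact formulation:

```python
import numpy as np

def muscl_advect(u, c):
    """One step of MUSCL-type linear advection (sketch), CFL number c in (0, 1].

    Each cell gets a minmod-limited linear reconstruction; the upwind
    (left) face value, advanced a half step across the cell, gives a
    second-order conservative update with periodic boundaries.
    """
    dl = u - np.roll(u, 1)               # backward difference
    dr = np.roll(u, -1) - u              # forward difference
    # minmod limiter: zero slope at extrema, smaller one-sided slope otherwise
    slope = np.where(dl*dr > 0, np.sign(dl)*np.minimum(abs(dl), abs(dr)), 0.0)
    face = u + 0.5*(1.0 - c)*slope       # value at each cell's right face
    return u - c*(face - np.roll(face, 1))
```

At c = 1 the update reduces to an exact one-cell shift, and for any c the face-flux form conserves the total of u.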
Liu, Ying; Ciliax, Brian J; Borges, Karin; Dasigi, Venu; Ram, Ashwin; Navathe, Shamkant B; Dingledine, Ray
2004-01-01
One of the key challenges of microarray studies is to derive biological insights from the unprecedented quantities of data on gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the nature of the functional links among genes within the derived clusters. However, the quality of the keyword lists extracted from biomedical literature for each gene significantly affects the clustering results. We extracted keywords from MEDLINE that describe the most prominent functions of the genes, and used the resulting weights of the keywords as feature vectors for gene clustering. By analyzing the resulting cluster quality, we compared two keyword weighting schemes: normalized z-score and term frequency-inverse document frequency (TFIDF). The best combination of background comparison set, stop list, and stemming algorithm was selected based on precision and recall metrics. In a test set of four known gene groups, a hierarchical algorithm correctly assigned 25 of 26 genes to the appropriate clusters based on keywords extracted by the TFIDF weighting scheme, but only 23 of 26 with the z-score method. To evaluate the effectiveness of the weighting schemes for keyword extraction for gene clusters from microarray profiles, 44 yeast genes that are differentially expressed during the cell cycle were used as a second test set. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords had higher purity, lower entropy, and higher mutual information than those produced from normalized z-score weighted keywords. The optimized algorithms should be useful for sorting genes from microarray lists into functionally discrete clusters.
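The TFIDF weighting compared above follows the standard definition: a keyword's weight for a gene is its term frequency scaled by the log inverse document frequency, so terms shared by every gene are down-weighted. A minimal sketch with hypothetical gene/keyword names:

```python
import math

def tfidf(docs):
    """TF-IDF keyword weights per document (sketch).

    docs maps a gene to its list of extracted keywords; the weight of
    keyword t for gene g is tf(t, g) * log(N / df(t)), where N is the
    number of genes and df(t) the number of genes whose list contains t.
    """
    n = len(docs)
    df = {}
    for words in docs.values():
        for t in set(words):
            df[t] = df.get(t, 0) + 1
    vectors = {}
    for g, words in docs.items():
        tf = {}
        for t in words:
            tf[t] = tf.get(t, 0) + 1
        vectors[g] = {t: c * math.log(n / df[t]) for t, c in tf.items()}
    return vectors
```

These per-gene vectors are exactly the kind of feature vectors a hierarchical clustering algorithm would consume.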
Implementation of the high-order schemes QUICK and LECUSSO in the COMMIX-1C Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, K.; Sun, J.G.; Sha, W.T.
Multidimensional analysis computer programs based on the finite volume method, such as COMMIX-1C, have been commonly used to simulate thermal-hydraulic phenomena in engineering systems such as nuclear reactors. In COMMIX-1C, first-order schemes with respect to both space and time are used. In many situations, however, such as flow recirculations and stratifications with steep gradients of the velocity and temperature fields, high-order difference schemes are necessary for an accurate prediction of the fields. For these reasons, two second-order finite difference numerical schemes, QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and LECUSSO (Local Exact Consistent Upwind Scheme of Second Order), have been implemented in the COMMIX-1C computer code. The formulations were derived for general three-dimensional flows with nonuniform grid sizes. Numerical oscillation analyses for QUICK and LECUSSO were performed. To damp the unphysical oscillations which occur in calculations with high-order schemes at high mesh Reynolds numbers, a new FRAM (Filtering Remedy and Methodology) scheme was developed and implemented. To be consistent with the high-order schemes, the pressure equation and the boundary conditions for all the conservation equations were also modified to be of second order. The new capabilities in the code are listed. Test calculations were performed to validate the implementation of the high-order schemes. They include the one-dimensional nonlinear Burgers equation, two-dimensional scalar transport in two impinging streams, von Kármán vortex shedding, shear-driven cavity flow, Couette flow, and circular pipe flow. The calculated results were compared with available data; the agreement is good.
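The QUICK interpolation named above fits a quadratic through two upstream nodes and one downstream node; on a uniform grid the face value reduces to fixed weights. A minimal sketch (the function name is ours, not part of COMMIX-1C, and the uniform-grid coefficients are the textbook ones):

```python
def quick_face_value(phi_uu, phi_u, phi_d):
    """QUICK interpolation of the convected variable at a cell face
    on a uniform grid: a quadratic fit through the far-upstream node
    (phi_uu), the upstream node (phi_u), and the downstream node (phi_d).
    """
    return (6.0 * phi_u + 3.0 * phi_d - phi_uu) / 8.0
```

Because the stencil is quadratic, the formula is exact for linear and parabolic profiles: with nodes at x = -1, 0, 1 and the face at x = 0.5, phi = x gives 0.5 and phi = x squared gives 0.25.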
Raabe, Joshua K.; Hightower, Joseph E.
2014-01-01
Despite extensive management and research, populations of American Shad Alosa sapidissima have experienced prolonged declines, and uncertainty about the underlying mechanisms causing these declines remains. In the springs of 2007 through 2010, we used a resistance board weir and PIT technology to capture, tag, and track American Shad in the Little River, North Carolina, a tributary to the Neuse River with complete and partial removals of low-head dams. Our objectives were to examine migratory behaviors and estimate weight loss, survival, and abundance during each spawning season. Males typically immigrated earlier than females and also used upstream habitat at a higher percentage, but otherwise exhibited relatively similar migratory patterns. Proportional weight loss displayed a strong positive relationship with both cumulative water temperature during residence time and number of days spent upstream, and to a lesser extent, minimum distance the fish traveled in the river. Surviving emigrating males lost up to 30% of their initial weight and females lost up to 50% of their initial weight, indicating there are potential survival thresholds. Survival for the spawning season was low and estimates ranged from 0.07 to 0.17; no distinct factors (e.g., sex, size, migration distance) that could contribute to survival were detected. Sampled and estimated American Shad abundance increased from 2007 through 2009, but was lower in 2010. Our study provides substantial new information about American Shad spawning that may aid restoration efforts.
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
Barjhoux, Iris; Fechner, Lise C; Lebrun, Jérémie D; Anzil, Adriana; Ayrault, Sophie; Budzinski, Hélène; Cachot, Jérôme; Charron, Laetitia; Chaumot, Arnaud; Clérandeau, Christelle; Dedourge-Geffard, Odile; Faburé, Juliette; François, Adeline; Geffard, Olivier; George, Isabelle; Labadie, Pierre; Lévi, Yves; Munoz, Gabriel; Noury, Patrice; Oziol, Lucie; Quéau, Hervé; Servais, Pierre; Uher, Emmanuelle; Urien, Nastassia; Geffard, Alain
2016-06-08
Quality assessment of environments under high anthropogenic pressures such as the Seine Basin, subjected to complex and chronic inputs, can only be based on combined chemical and biological analyses. The present study integrates and summarizes a multidisciplinary dataset acquired throughout a 1-year monitoring survey conducted at three workshop sites along the Seine River (PIREN-Seine program), upstream and downstream of the Paris conurbation, during four seasonal campaigns using a weight-of-evidence approach. Sediment and water column chemical analyses, bioaccumulation levels and biomarker responses in caged gammarids, and laboratory (eco)toxicity bioassays were integrated into four lines of evidence (LOEs). Results from each LOE clearly reflected an anthropogenic gradient, with contamination levels and biological effects increasing from upstream to downstream of Paris, in good agreement with the variations in the structure and composition of bacterial communities from the water column. Based on annual average data, the global hazard was summarized as "moderate" at the upstream station and as "major" at the two downstream ones. Seasonal variability was also highlighted; the winter campaign was least impacted. The model was notably improved using previously established reference and threshold values from national-scale studies. It undoubtedly represents a powerful practical tool to facilitate the decision-making processes of environment managers within the framework of an environmental risk assessment strategy.
A fuzzy call admission control scheme in wireless networks
NASA Astrophysics Data System (ADS)
Ma, Yufeng; Gong, Shenguang; Hu, Xiulin; Zhang, Yunyu
2007-11-01
Scarcity of the spectrum resource and the mobility of users make quality of service (QoS) provision a critical issue in wireless networks. This paper presents a fuzzy call admission control scheme to meet QoS requirements. A performance measure is formed as a weighted linear function of the new-call and handoff-call blocking probabilities. Simulations compare the proposed fuzzy scheme with an adaptive channel reservation scheme and show that the fuzzy scheme is more robust in terms of the average blocking criterion.
Receiver-Coupling Schemes Based On Optimal-Estimation Theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of the receiver outputs. In both schemes, optimal mutual-coupling weights are computed according to Kalman-filter theory, but the schemes differ in the manner in which the receiver outputs are transmitted and combined.
NASA Astrophysics Data System (ADS)
Zhang, Jing; Chen, Xuemei; Deng, Mingliang; Zeng, Dengke; Yang, Heming; Qiu, Kun
2015-08-01
We propose a novel ICI cancellation scheme using opposite weighting on symmetric subcarrier pairs to combat the linear phase noise of the laser source and the nonlinear phase noise resulting from fiber nonlinearity. We compare the proposed ICI cancellation scheme with conventional OFDM and with ICI self-cancellation at the same raw bit rate of 35.6 Gb/s. In simulations, the proposed scheme shows better phase noise tolerance than conventional OFDM and similar phase noise tolerance to ICI self-cancellation. The tolerable laser linewidth is about 13 MHz at a BER of 2 × 10-3 with the proposed scheme, versus 5 MHz for conventional OFDM. We also study nonlinearity tolerance and find that the proposed scheme outperforms the other two owing to its first-order nonlinearity mitigation. The launch power is 7 dBm for the proposed scheme, and its SNR improves by 4 dB or 3 dB compared with ICI self-cancellation or conventional OFDM, respectively, at a BER of 1.1 × 10-3.
Fair ranking of researchers and research teams.
Vavryčuk, Václav
2018-01-01
The main drawback of ranking researchers by the number of papers, citations, or the Hirsch index is that it ignores the problem of distributing authorship credit among the authors of multi-author publications. So far, single-author and multi-author publications have contributed equally to the publication record of a researcher. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which distribute the authorship credit more properly and suppress the tendency toward unjustified inflation of co-author counts. The urgent need to adopt a fair ranking scheme widely in practice is exemplified by analysing the citation profiles of several highly cited astronomers and astrophysicists. While the full counting scheme often leads to incorrect and misleading rankings, the fractional and authorship-weighted schemes are more accurate and applicable to ranking researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier).
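The difference between full and fractional counting is easy to make concrete. In the sketch below (function name and paper list are illustrative assumptions, not from the study), each paper contributes either one full credit or 1/N for N co-authors:

```python
def author_credit(papers, scheme="full"):
    """Total publication credit for one researcher.

    papers: list of co-author counts, one entry per paper
            (1 means a single-author paper).
    'full' gives 1 credit per paper regardless of co-authors;
    'fractional' gives 1/N credit for a paper with N authors.
    """
    if scheme == "full":
        return float(len(papers))
    if scheme == "fractional":
        return sum(1.0 / n for n in papers)
    raise ValueError("unknown scheme: %s" % scheme)
```

A researcher with one solo paper, one two-author paper, and one four-author paper scores 3.0 under full counting but only 1.75 under fractional counting, which is exactly the disproportion the abstract targets.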
Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows
NASA Astrophysics Data System (ADS)
Xiao, Xudong
Three LES/RANS hybrid schemes have been proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure was designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for the IBM-SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent RANS regions from appearing inside the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improved agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation and recovery regions. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
NASA Astrophysics Data System (ADS)
Choudhury, Pallab K.
2018-05-01
Spectrally shaped orthogonal frequency division multiplexing (OFDM) signal for symmetric 10 Gb/s cross-wavelength reuse reflective semiconductor optical amplifier (RSOA) based colorless wavelength division multiplexed passive optical network (WDM-PON) is proposed and further analyzed to support broadband services of next generation high speed optical access networks. The generated OFDM signal has subcarriers in separate frequency ranges for downstream and upstream, such that the re-modulation noise can be effectively minimized in upstream data receiver. Moreover, the cross wavelength reuse approach improves the tolerance against Rayleigh backscattering noise due to the propagation of different wavelengths in the same feeder fiber. The proposed WDM-PON is successfully demonstrated for 25 km fiber with 16-QAM (quadrature amplitude modulation) OFDM signal having bandwidth of 2.5 GHz for 10 Gb/s operation and subcarrier frequencies in 3-5.5 GHz and DC-2.5 GHz for downstream (DS) and upstream (US) transmission respectively. The result shows that the proposed scheme maintains a good bit error rate (BER) performance below the forward error correction (FEC) limit of 3.8 × 10-3 at acceptable receiver sensitivity and provides a high resilience against re-modulation and Rayleigh backscattering noises as well as chromatic dispersion.
Survivable architectures for time and wavelength division multiplexed passive optical networks
NASA Astrophysics Data System (ADS)
Wong, Elaine
2014-08-01
The increased network reach and customer base of next-generation time and wavelength division multiplexed PON (TWDM-PONs) have necessitated rapid fault detection and subsequent restoration of services to its users. However, direct application of existing solutions for conventional PONs to TWDM-PONs is unsuitable as these schemes rely on the loss of signal (LOS) of upstream transmissions to trigger protection switching. As TWDM-PONs are required to potentially use sleep/doze mode optical network units (ONU), the loss of upstream transmission from a sleeping or dozing ONU could erroneously trigger protection switching. Further, TWDM-PONs require its monitoring modules for fiber/device fault detection to be more sensitive than those typically deployed in conventional PONs. To address the above issues, three survivable architectures that are compliant with TWDM-PON specifications are presented in this work. These architectures combine rapid detection and protection switching against multipoint failure, and most importantly do not rely on upstream transmissions for LOS activation. Survivability analyses as well as evaluations of the additional costs incurred to achieve survivability are performed and compared to the unprotected TWDM-PON. Network parameters that impact the maximum achievable network reach, maximum split ratio, connection availability, fault impact, and the incremental reliability costs for each proposed survivable architecture are highlighted.
Experimental and Computational Study of Trapped Vortex Combustor Sector Rig With Tri-Pass Diffuser
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Shouse, D. T.; Roquemore, W. M.; Burrus, D. L.; Duncan, B. S.; Ryder, R. C.; Brankovic, A.; Liu, N.-S.; Gallagher, J. R.; Hendricks, J. A.
2004-01-01
The Trapped Vortex Combustor (TVC) potentially offers numerous operational advantages over current production gas turbine engine combustors. These include lower weight, lower pollutant emissions, effective flame stabilization, high combustion efficiency, excellent high altitude relight capability, and operation in the lean burn or RQL modes of combustion. The present work describes the operational principles of the TVC, and extends diffuser velocities toward choked flow and provides system performance data. Performance data include EINOx results for various fuel-air ratios and combustor residence times, combustion efficiency as a function of combustor residence time, and combustor lean blow-out (LBO) performance. Computational fluid dynamics (CFD) simulations using liquid spray droplet evaporation and combustion modeling are performed and related to flow structures observed in photographs of the combustor. The CFD results are used to understand the aerodynamics and combustion features under different fueling conditions. Performance data acquired to date are favorable compared to conventional gas turbine combustors. Further testing over a wider range of fuel-air ratios, fuel flow splits, and pressure ratios is in progress to explore the TVC performance. In addition, alternate configurations for the upstream pressure feed, including bi-pass diffusion schemes, as well as variations on the fuel injection patterns, are currently in test and evaluation phases.
Lappala, E.G.; Healy, R.W.; Weeks, E.P.
1987-01-01
This report documents FORTRAN computer code for solving problems involving variably saturated single-phase flow in porous media. The flow equation is written with total hydraulic potential as the dependent variable, which allows straightforward treatment of both saturated and unsaturated conditions. The spatial derivatives in the flow equation are approximated by central differences, and time derivatives are approximated either by a fully implicit backward or by a centered-difference scheme. Nonlinear conductance and storage terms may be linearized using either an explicit method or an implicit Newton-Raphson method. Relative hydraulic conductivity is evaluated at cell boundaries by using either full upstream weighting, the arithmetic mean, or the geometric mean of values from adjacent cells. Nonlinear boundary conditions treated by the code include infiltration, evaporation, and seepage faces. Extraction by plant roots that is caused by atmospheric demand is included as a nonlinear sink term. These nonlinear boundary and sink terms are linearized implicitly. The code has been verified for several one-dimensional linear problems for which analytical solutions exist and against two nonlinear problems that have been simulated with other numerical models. A complete listing of data-entry requirements and data entry and results for three example problems are provided. (USGS)
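The three interblock weighting options listed for the relative hydraulic conductivity can be sketched compactly. This is a minimal illustration with a hypothetical function name, not the report's FORTRAN:

```python
import math

def interblock_k(k_up, k_down, method="upstream"):
    """Relative hydraulic conductivity at the boundary between two cells.

    k_up is the value in the upstream (higher total potential) cell and
    k_down the value in the downstream cell; the three methods mirror
    the options described in the report.
    """
    if method == "upstream":       # full upstream weighting
        return k_up
    if method == "arithmetic":     # arithmetic mean of adjacent cells
        return 0.5 * (k_up + k_down)
    if method == "geometric":      # geometric mean of adjacent cells
        return math.sqrt(k_up * k_down)
    raise ValueError("unknown method: %s" % method)
```

Full upstream weighting is commonly the most robust choice for sharp infiltration fronts, while the arithmetic and geometric means are less diffusive but can underestimate flux into a very dry downstream cell.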
Topological Principles of Control in Dynamical Networks
NASA Astrophysics Data System (ADS)
Kim, Jason; Pasqualetti, Fabio; Bassett, Danielle
Networked biological systems, such as the brain, feature complex patterns of interactions. To predict and correct the dynamic behavior of such systems, it is imperative to understand how the underlying topological structure affects and limits the function of the system. Here, we use network control theory to extract topological features that favor or prevent network controllability, and to understand the network-wide effect of external stimuli on large-scale brain systems. Specifically, we treat each brain region as a dynamic entity with real-valued state, and model the time evolution of all interconnected regions using linear, time-invariant dynamics. We propose a simplified feed-forward scheme where the effect of upstream regions (drivers) on the connected downstream regions (non-drivers) is characterized in closed form. Leveraging this characterization of the simplified model, we derive topological features that predict the controllability properties of non-simplified networks. We show analytically and numerically that these predictors are accurate across a large range of parameters. Among other contributions, our analysis shows that heterogeneity in the network weights facilitates controllability and allows us to implement targeted interventions that profoundly improve controllability. By assuming an underlying dynamical mechanism, we are able to understand the complex topology of networked biological systems in a functionally meaningful way.
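For linear time-invariant dynamics of the kind described, controllability from a set of driver nodes can be checked with the classical Kalman rank condition. The sketch below uses a hypothetical 3-node chain driven at one end; the matrices are illustrative only, not taken from the study:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B] for the
    linear time-invariant dynamics x(t+1) = A x(t) + B u(t)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical 3-node chain network, driven at the first node only.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
rank = np.linalg.matrix_rank(controllability_matrix(A, B))
```

A rank equal to the number of nodes means the single driver can steer the whole chain; repeating the check with perturbed (heterogeneous) edge weights is one simple way to probe the weight-heterogeneity effect the authors describe.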
Feng, Zhenhua; Xu, Liang; Wu, Qiong; Tang, Ming; Fu, Songnian; Tong, Weijun; Shum, Perry Ping; Liu, Deming
2017-03-20
Toward beyond-100G large-capacity optical access networks, wavelength division multiplexing (WDM) techniques incorporating space division multiplexing (SDM) and affordable, spectrally efficient advanced modulation formats are indispensable. In this paper, we propose and experimentally demonstrate a cost-efficient multicore fiber (MCF) based hybrid WDM-SDM optical access network with self-homodyne coherent detection (SHCD) based downstream (DS) and direct-detection optical filter bank multicarrier (DDO-FBMC) based upstream (US). In the DS experiments, the inner core of the 7-core fiber is used as a dedicated channel to deliver the local oscillator (LO) light, while the other 6 outer cores are used to transmit 4 channels of wavelength-multiplexed 200-Gb/s PDM-16QAM-OFDM signals. For US transmission, 4 wavelengths with a channel spacing of 100 GHz are intensity modulated with 30-Gb/s 32-QAM-FBMC and directly detected by a ~7 GHz bandwidth receiver after transmission along one of the outer cores. The results show that 4 × 6 × 200-Gb/s DS transmission can be realized over 37 km of 7-core fiber without carrier frequency offset (CFO) and phase noise (PN) compensation, even using 10-MHz-linewidth DFB lasers. MCF-based SHCD provides a cost-efficient compromise between conventional intradyne coherent detection and intensity-modulation/direct-detection (IM/DD) schemes. Both US and DS have acceptable BER performance and high spectral efficiency.
Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.
2012-01-01
Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32 cubed to 192 cubed, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.
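The nonlinear WENO weights discussed above adapt the reconstruction stencil to the local smoothness of the data. Below is a minimal sketch of the classical fifth-order (Jiang-Shu) left-biased reconstruction at a cell face; this is the textbook formulation, not necessarily the exact variant used in the study:

```python
def weno5_face(v):
    """Fifth-order WENO reconstruction of the left-biased value at the
    face i+1/2 from five cell averages v = (v[i-2], ..., v[i+2]),
    using the classical Jiang-Shu smoothness indicators."""
    eps = 1e-6
    vm2, vm1, v0, vp1, vp2 = v
    # smoothness indicators for the three candidate stencils
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 1/4 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 1/4 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 1/4 * (3*v0 - 4*vp1 + vp2)**2
    # linear (optimal) weights and their nonlinear counterparts
    d = (0.1, 0.6, 0.3)
    a = [dk / (eps + bk)**2 for dk, bk in zip(d, (b0, b1, b2))]
    w = [ak / sum(a) for ak in a]
    # third-order candidate reconstructions on each stencil
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6
    p2 = (2*v0 + 5*vp1 - vp2) / 6
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data the three smoothness indicators agree and the nonlinear weights collapse to the linear ones (0.1, 0.6, 0.3), recovering fifth-order accuracy; near a discontinuity the weight shifts almost entirely onto the smoothest candidate stencil, suppressing oscillations.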
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guozhu, E-mail: gzhang6@ncsu.edu
Zebrafish have become a key alternative model for studying health effects of environmental stressors, partly due to their genetic similarity to humans, fast generation time, and the efficiency of generating high-dimensional systematic data. Studies aiming to characterize adverse health effects in zebrafish typically include several phenotypic measurements (endpoints). While there is a solid biomedical basis for capturing a comprehensive set of endpoints, making summary judgments regarding health effects requires thoughtful integration across endpoints. Here, we introduce a Bayesian method to quantify the informativeness of 17 distinct zebrafish endpoints as a data-driven weighting scheme for a multi-endpoint summary measure, called weighted Aggregate Entropy (wAggE). We implement wAggE using high-throughput screening (HTS) data from zebrafish exposed to five concentrations of all 1060 ToxCast chemicals. Our results show that our empirical weighting scheme provides better performance in terms of the Receiver Operating Characteristic (ROC) curve for identifying significant morphological effects and improves robustness over traditional curve-fitting approaches. From a biological perspective, our results suggest that developmental cascade effects triggered by chemical exposure can be recapitulated by analyzing the relationships among endpoints. Thus, wAggE offers a powerful approach for analysis of multivariate phenotypes that can reveal underlying etiological processes. - Highlights: • Introduced a data-driven weighting scheme for multiple phenotypic endpoints. • Weighted Aggregate Entropy (wAggE) implies differential importance of endpoints. • Endpoint relationships reveal developmental cascade effects triggered by exposure. • wAggE is generalizable to multi-endpoint data of different shapes and scales.
Reiter, Harold I; Lockyer, Jocelyn; Ziola, Barry; Courneya, Carol-Ann; Eva, Kevin
2012-04-01
Traditional medical school admissions assessment tools may be limiting diversity. This study investigates whether the Multiple Mini-Interview (MMI) is diversity-neutral and, if so, whether applying it with greater weight would dilute the anticipated negative impact of diversity-limiting admissions measures. Interviewed applicants to six medical schools in 2008 and 2009 underwent MMI. Predictor variables of MMI scores, grade point average (GPA), and Medical College Admission Test (MCAT) scores were correlated with diversity measures of age, gender, size of community of origin, income level, and self-declared aboriginal status. A subset of the data was then combined with variable weight assigned to predictor variables to determine whether weighting during the applicant selection process would affect diversity among chosen applicants. MMI scores were unrelated to gender, size of community of origin, and income level. They correlated positively with age and negatively with aboriginal status. GPA and MCAT correlated negatively with age and aboriginal status, GPA correlated positively with income level, and MCAT correlated positively with size of community of origin. Even extreme combinations of MMI and GPA weightings failed to increase diversity among applicants who would be selected on the basis of weighted criteria. MMI could not neutralize the diversity-limiting properties of academic scores as selection criteria to interview. Using academic scores in this way causes range restriction, counteracting attempts to enhance diversity using downstream admissions selection measures such as MMI. Diversity efforts should instead be focused upstream. These results lend further support for the development of pipeline programs.
A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing
NASA Astrophysics Data System (ADS)
Chernogorova, Tatiana; Vulkov, Lubin
2009-11-01
This paper describes a comparison of some numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions, which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved using difference schemes with weights, including standard schemes such as the monotone Samarskii scheme and the FTCS and Crank-Nicolson methods. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles, as demanded for the financial and diffusive solution. Numerical results are compared with analytical solutions.
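The "schemes with weights" compared here form a one-parameter family: a weight theta blends the explicit (FTCS, theta = 0) and fully implicit (theta = 1) updates, with Crank-Nicolson at theta = 0.5. A minimal sketch for the pure-diffusion limit with homogeneous Dirichlet boundaries follows; the function name and discretization details are our assumptions, not the paper's:

```python
import numpy as np

def theta_step(u, r, theta):
    """One time step of the weighted ('theta') scheme for the 1-D heat
    equation u_t = u_xx with homogeneous Dirichlet boundaries.
    r = dt/dx^2; theta = 0 is explicit FTCS, theta = 1 fully implicit,
    theta = 0.5 is Crank-Nicolson."""
    n = len(u) - 2                              # number of interior points
    main = np.full(n, 1.0 + 2.0 * r * theta)    # implicit tridiagonal system
    off = np.full(n - 1, -r * theta)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # explicit part of the update (boundary values are zero)
    rhs = u[1:-1] + r * (1 - theta) * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, rhs)
    return u_new
```

The explicit member is stable only for r <= 1/2, while theta >= 1/2 is unconditionally stable; oscillation-free (monotone) behavior generally requires further restrictions on r, which is the point of Samarskii-type schemes.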
Energy-saving framework for passive optical networks with ONU sleep/doze mode.
Van, Dung Pham; Valcarenghi, Luca; Dias, Maluge Pubuduni Imali; Kondepu, Koteswararao; Castoldi, Piero; Wong, Elaine
2015-02-09
This paper proposes an energy-saving passive optical network framework (ESPON) that aims to incorporate optical network unit (ONU) sleep/doze mode into dynamic bandwidth allocation (DBA) algorithms to reduce ONU energy consumption. In the ESPON, the optical line terminal (OLT) schedules both downstream (DS) and upstream (US) transmissions in the same slot in an online and dynamic fashion whereas the ONU enters sleep mode outside the slot. The ONU sleep time is maximized based on both DS and US traffic. Moreover, during the slot, the ONU might enter doze mode when only its transmitter is idle to further improve energy efficiency. The scheduling order of data transmission, control message exchange, sleep period, and doze period defines an energy-efficient scheme under the ESPON. Three schemes are designed and evaluated in an extensive FPGA-based evaluation. Results show that whilst all the schemes significantly save ONU energy for different evaluation scenarios, the scheduling order has great impact on their performance. In addition, the ESPON allows for a scheduling order that saves ONU energy independently of the network reach.
Generation of a composite grid for turbine flows and consideration of a numerical scheme
NASA Technical Reports Server (NTRS)
Choo, Y.; Yoon, S.; Reno, C.
1986-01-01
A composite grid was generated for flows in turbines. It consisted of the C-grid (or O-grid) in the immediate vicinity of the blade and the H-grid in the middle of the blade passage between the C-grids and in the upstream region. This new composite grid provides better smoothness, resolution, and orthogonality than any single grid for a typical turbine blade with a large camber and rounded leading and trailing edges. The C-H (or O-H) composite grid has an unusual grid point that is connected to more than four neighboring nodes in two dimensions (more than six neighboring nodes in three dimensions). A finite-volume lower-upper (LU) implicit scheme to be used on this grid poses no problem and requires no special treatment because each interior cell of this composite grid has only four neighboring cells in two dimensions (six cells in three dimensions). The LU implicit scheme was demonstrated to be efficient and robust for external flows in a broad flow regime and can be easily applied to internal flows and extended from two to three dimensions.
Simulation of a Wall-Bounded Flow using a Hybrid LES/RAS Approach with Turbulence Recycling
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Mcdaniel, James; Baurle, Robert A.
2012-01-01
Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/ Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters the three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case, and these comparisons indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. The effect of turbulence recycling on the solution is illustrated by performing coarse grid simulations with and without inflow turbulence recycling. Two shock sensors, one of Ducros and one of Larsson, are assessed for use with the hybridized inviscid flux reconstruction scheme.
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.; Yamaleev, Nail K.; Frankel, Steven H.
2009-01-01
A general strategy exists for constructing Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference schemes up to eighth order on periodic domains. These ESWENO schemes satisfy an energy norm stability proof for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, boundary closures are developed for the fourth-order ESWENO scheme that maintain, wherever possible, the WENO stencil biasing properties, while satisfying the summation-by-parts (SBP) operator convention, thereby ensuring stability in an L2 norm. Second- and third-order boundary closures are developed that achieve stability in diagonal and block norms, respectively. The global accuracy is three for the second-order closures and four for the third-order closures. A novel set of non-uniform flux interpolation points is necessary near the boundaries to simultaneously achieve 1) accuracy, 2) the SBP convention, and 3) WENO stencil biasing mechanics.
NASA Astrophysics Data System (ADS)
Mo, Jingyue; Huang, Tao; Zhang, Xiaodong; Zhao, Yuan; Liu, Xiao; Li, Jixiang; Gao, Hong; Ma, Jianmin
2017-12-01
As a renewable and clean energy source, wind power has become the most rapidly growing energy resource worldwide in the past decades. Wind power has generally been thought not to exert any negative impacts on the environment. However, since a wind farm can alter local meteorological conditions and increase surface roughness lengths, it may affect air pollutants that pass through and over the wind farm after being released from their sources. In the present study, we simulated the nitrogen dioxide (NO2) air concentration within and around the world's largest wind farm (Jiuquan wind farm in Gansu Province, China) using the coupled meteorology and atmospheric chemistry model WRF-Chem. The results revealed an edge effect, which featured higher NO2 levels in the immediate upwind and border regions of the wind farm and lower NO2 concentrations within the wind farm and in the immediate downwind transition area. A surface roughness length scheme and a wind turbine drag force scheme were employed to parameterize the wind farm in this model investigation. Modeling results show that both parameterization schemes yield higher concentrations immediately upstream of the wind farm and lower concentrations within the wind farm compared to the case without the wind farm. We attribute this edge effect and the spatial distribution of air pollutants to the internal boundary layer induced by the changes in wind speed and turbulence intensity driven by the rotation of the wind turbine rotor blades and the enhancement of surface roughness length over the wind farm. The step change in roughness length from smooth to rough surfaces (overshooting) upstream of the wind farm decelerates the atmospheric transport of air pollutants, leading to their accumulation. The step change from rough to smooth surfaces (undershooting) downstream of the wind farm accelerates the atmospheric transport of air pollutants, resulting in lower concentration levels.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. 
Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
Fair ranking of researchers and research teams
2018-01-01
The main drawback of ranking researchers by the number of papers, citations, or by the Hirsch index is that it ignores the problem of distributing authorship among authors in multi-author publications. So far, single-author and multi-author publications contribute to the publication record of a researcher equally. This full counting scheme is apparently unfair and causes unjust disproportions, in particular, if ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress the tendency toward unjustified inflation of the number of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing citation profiles of several highly cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional or authorship-weighted schemes are more accurate and applicable to ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier). PMID:29621316
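The contrast between full and fractional counting can be made concrete in a few lines. The sketch below uses invented authors and papers purely for illustration:

```python
# Full vs. fractional counting of publication credit: under full
# counting every co-author of a paper receives credit 1; under
# fractional counting each receives 1/n for an n-author paper.
from collections import defaultdict

def count_credit(papers, scheme="full"):
    """papers: list of author lists, one per publication."""
    credit = defaultdict(float)
    for authors in papers:
        share = 1.0 if scheme == "full" else 1.0 / len(authors)
        for a in authors:
            credit[a] += share
    return dict(credit)

# Toy example: A has one solo paper, B only multi-author papers.
papers = [["A"], ["A", "B", "C", "D"], ["B", "C"]]
full = count_credit(papers, "full")        # A: 2.0, B: 2.0 -> tied
frac = count_credit(papers, "fractional")  # A: 1.25, B: 0.75 -> A ahead
```

Under full counting the two researchers are indistinguishable; fractional counting separates them because the solo paper carries undivided credit.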
Feng, Hanlin; Ge, Jia; Xiao, Shilin; Fok, Mable P
2014-05-19
In this paper, we present a novel Rayleigh backscattering (RB) noise mitigation scheme based on central carrier suppression for a 10 Gb/s loop-back wavelength division multiplexing passive optical network (WDM-PON). A microwave-modulated multi-subcarrier optical signal is used as the downstream seeding light, while cascaded semiconductor optical amplifiers (SOAs) are used in the optical network unit (ONU) to suppress the central carrier of the multi-subcarrier upstream signal. With central carrier suppression, interference generated by carrier RB noise in the low-frequency region is eliminated successfully. Transmission performance over 45 km of single mode fiber (SMF) is studied experimentally, and the optical-signal-to-Rayleigh-noise-ratio (OSRNR) can be reduced to 15 dB with a central carrier suppression ratio (CCSR) of 21 dB. Receiver sensitivity is further improved by 6 dB with the use of a microwave photonic filter (MPF) to suppress the residual upstream microwave signal and residual carrier RB in the high-frequency region.
Investigation of Convection and Pressure Treatment with Splitting Techniques
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei; Liou, Meng-Sing
1995-01-01
Treatment of convective and pressure fluxes in the Euler and Navier-Stokes equations using splitting formulas for convective velocity and pressure is investigated. Two schemes - controlled variation scheme (CVS) and advection upstream splitting method (AUSM) - are explored for their accuracy in resolving sharp gradients in flows involving moving or reflecting shock waves as well as a one-dimensional combusting flow with a strong heat release source term. For two-dimensional compressible flow computations, these two schemes are implemented in one of the pressure-based algorithms, whose very basis is the separate treatment of convective and pressure fluxes. For the convective fluxes in the momentum equations as well as the estimation of mass fluxes in the pressure correction equation (which is derived from the momentum and continuity equations) of the present algorithm, both first- and second-order (with minmod limiter) flux estimations are employed. Some issues resulting from the conventional use in pressure-based methods of a staggered grid, for the location of velocity components and pressure, are also addressed. Using the second-order fluxes, both CVS and AUSM type schemes exhibit sharp resolution. Overall, the combination of upwinding and splitting for the convective and pressure fluxes separately exhibits robust performance for a variety of flows and is particularly amenable for adoption in pressure-based methods.
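The separate splitting of convective and pressure fluxes that AUSM-type schemes perform can be sketched for the 1D Euler equations. This is a minimal illustration of the generic AUSM idea, not the exact CVS/AUSM variants or pressure-based implementation of the paper; the test state is invented:

```python
# Sketch of an AUSM-type interface flux for the 1D Euler equations:
# the convective part is upwinded on a blended interface Mach number
# M_1/2 = M+(M_L) + M-(M_R), while the pressure part is built from
# split pressure polynomials. (Generic textbook form, for illustration.)
import math

GAMMA = 1.4  # ratio of specific heats

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
    aL = math.sqrt(GAMMA * pL / rhoL)   # left/right speeds of sound
    aR = math.sqrt(GAMMA * pR / rhoR)
    ML, MR = uL / aL, uR / aR

    # Split Mach numbers: M+ from the left state, M- from the right
    Mp = 0.25 * (ML + 1.0) ** 2 if abs(ML) <= 1.0 else 0.5 * (ML + abs(ML))
    Mm = -0.25 * (MR - 1.0) ** 2 if abs(MR) <= 1.0 else 0.5 * (MR - abs(MR))
    M_half = Mp + Mm

    # Split interface pressure (subsonic polynomials, supersonic limits)
    pp = (0.25 * pL * (ML + 1.0) ** 2 * (2.0 - ML) if abs(ML) <= 1.0
          else (pL if ML > 0.0 else 0.0))
    pm = (0.25 * pR * (MR - 1.0) ** 2 * (2.0 + MR) if abs(MR) <= 1.0
          else (pR if MR < 0.0 else 0.0))

    # Convected quantities Phi = a * (rho, rho*u, rho*H), upwinded on M_1/2
    HL = aL * aL / (GAMMA - 1.0) + 0.5 * uL * uL   # total enthalpies
    HR = aR * aR / (GAMMA - 1.0) + 0.5 * uR * uR
    Phi = ((rhoL * aL, rhoL * uL * aL, rhoL * HL * aL) if M_half >= 0.0
           else (rhoR * aR, rhoR * uR * aR, rhoR * HR * aR))

    return (M_half * Phi[0],
            M_half * Phi[1] + pp + pm,
            M_half * Phi[2])

# Uniform state at rest: the flux must reduce to (0, p, 0)
F = ausm_flux(1.0, 0.0, 1.0, 1.0, 0.0, 1.0)
```

Note how the momentum flux is the only component carrying the pressure contribution, which is exactly the separation that pressure-based algorithms exploit.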
Additive schemes for certain operator-differential equations
NASA Astrophysics Data System (ADS)
Vabishchevich, P. N.
2010-12-01
Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
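In a generic notation (illustrative; not necessarily the paper's exact formulation), the two-level weighted scheme and its additive variant for a split operator A = A_1 + A_2 read:

```latex
% Two-level weighted (\sigma-)scheme for B\,dy/dt + Ay = f(t):
\[
  B\,\frac{y^{n+1}-y^{n}}{\tau}
  + A\left(\sigma y^{n+1} + (1-\sigma)y^{n}\right) = \varphi^{n},
\]
% Additive (splitting) variant for A = A_1 + A_2: each fractional step
% involves only one of the simpler operators,
\[
  B\,\frac{y^{n+1/2}-y^{n}}{\tau}
  + A_{1}\left(\sigma y^{n+1/2} + (1-\sigma)y^{n}\right) = \varphi_{1}^{n},
\]
\[
  B\,\frac{y^{n+1}-y^{n+1/2}}{\tau}
  + A_{2}\left(\sigma y^{n+1} + (1-\sigma)y^{n+1/2}\right) = \varphi_{2}^{n}.
\]
```

For self-adjoint operators, unconditional stability of such schemes is typically obtained for weights σ ≥ 1/2, which is the kind of condition derived in the corresponding Hilbert spaces.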
Efficient High-Order Accurate Methods using Unstructured Grids for Hydrodynamics and Acoustics
2007-08-31
Leer. On upstream differencing and Godunov-type schemes for hyperbolic conservation laws. SIAM Review, 25(1):35-61, 1983. [46] Eleuterio F. Toro ...early stage [4-6]. The basic idea can be surmised from simple approximation theory. If a continuous function f is to be approximated over a set of... f(x + εh) = f(x) + εh ∂f/∂x + (ε²h²/2!) ∂²f/∂x² + ... + (ε⁴h⁴/4!) ∂⁴f/∂x⁴ + ... (1), where 0 < ε < 1 for approximations inside the interval of width h. For a second-order approximation
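The approximation-theory argument in this excerpt (a Taylor expansion in the step size h) implies, for instance, that a centred difference is second-order accurate. This is quick to check numerically; the function and step sizes below are illustrative:

```python
# The centred difference (f(x+h) - f(x-h)) / (2h) cancels the even
# Taylor terms, so its error shrinks like h^2: halving h should divide
# the error by about 4.
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.5
err1 = abs(central_diff(math.sin, x, 1e-2) - math.cos(x))
err2 = abs(central_diff(math.sin, x, 5e-3) - math.cos(x))
ratio = err1 / err2   # close to 4, confirming second order
```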
A novel WDM passive optical network architecture supporting two independent multicast data streams
NASA Astrophysics Data System (ADS)
Qiu, Yang; Chan, Chun-Kit
2012-01-01
We propose a novel scheme to perform optical multicast overlay of two independent multicast data streams on a wavelength-division-multiplexed (WDM) passive optical network. By controlling a sinusoidal clock signal and shifting the wavelength at the optical line terminal (OLT), the delivery of the two multicast data, being carried by the generated optical tones, can be independently and flexibly controlled. Simultaneous transmission of 10-Gb/s unicast downstream and upstream data as well as two independent 10-Gb/s multicast data was successfully demonstrated.
NASA Astrophysics Data System (ADS)
Liu, Xiao; Cai, Zun; Tong, Yiheng; Zheng, Hongtao
2017-08-01
Large Eddy Simulation (LES) and experiments were employed to investigate the transient ignition and flame propagation process in a rear-wall-expansion cavity scramjet combustor using combined fuel injection schemes. The compressible supersonic solver and three ethylene combustion mechanisms were first validated against experimental data, and the results showed reasonably good agreement. A fuel injection scheme combining transverse injectors with direct injectors in the cavity provides a beneficial mixture distribution and achieves successful ignition. Four stages are illustrated in detail from both the experiment and the LES. After forced ignition in the cavity, the initial flame kernel propagates upstream towards the cavity front edge and ignites the mixture there, which acts as a continuous pilot flame; the flame then propagates rapidly downstream along the cavity shear layer to the combustor exit. A cavity shear-layer flame stabilization mode can be concluded from the heat release rate and the local high-temperature distribution during the combustion process.
Combination of GRACE monthly gravity field solutions from different processing strategies
NASA Astrophysics Data System (ADS)
Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian
2018-02-01
We combine the publicly available GRACE monthly gravity field time series to produce gravity fields with reduced systematic errors. We first compare the monthly gravity fields in the spatial domain in terms of signal and noise. Then, we combine the individual gravity fields with comparable signal content, but diverse noise characteristics. We test five different weighting schemes: equal weights, non-iterative coefficient-wise, order-wise, or field-wise weights, and iterative field-wise weights applying variance component estimation (VCE). The combined solutions are evaluated in terms of signal and noise in the spectral and spatial domains. Compared to the individual contributions, they in general show lower noise. In case the noise characteristics of the individual solutions differ significantly, the weighted means are less noisy, compared to the arithmetic mean: The non-seasonal variability over the oceans is reduced by up to 7.7% and the root mean square (RMS) of the residuals of mass change estimates within Antarctic drainage basins is reduced by 18.1% on average. The field-wise weighting schemes in general show better performance, compared to the order- or coefficient-wise weighting schemes. The combination of the full set of considered time series results in lower noise levels, compared to the combination of a subset consisting of the official GRACE Science Data System gravity fields only: The RMS of coefficient-wise anomalies is smaller by up to 22.4% and the non-seasonal variability over the oceans by 25.4%. This study was performed in the frame of the European Gravity Service for Improved Emergency Management (EGSIEM; http://www.egsiem.eu) project. The gravity fields provided by the EGSIEM scientific combination service (ftp://ftp.aiub.unibe.ch/EGSIEM/) are combined, based on the weights derived by VCE as described in this article.
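The iterative field-wise weighting by variance component estimation can be illustrated with a toy version. The coefficient vectors below are invented; real VCE on gravity fields operates on spherical harmonic coefficient sets, but the feedback loop (weights from residual variances, combination from weights) is the same idea:

```python
# Sketch of iterative field-wise weighting in the spirit of variance
# component estimation (VCE): each input solution is weighted inversely
# to the variance of its residuals about the current combined solution,
# and weights and combination are iterated until they stabilize.

def combine_vce(solutions, n_iter=20):
    """solutions: list of equally sized coefficient vectors."""
    n, m = len(solutions), len(solutions[0])
    w = [1.0 / n] * n                      # start from equal weights
    for _ in range(n_iter):
        mean = [sum(w[i] * solutions[i][k] for i in range(n))
                for k in range(m)]
        # residual variance of each solution about the combination
        var = [sum((solutions[i][k] - mean[k]) ** 2 for k in range(m)) / m
               + 1e-12
               for i in range(n)]
        inv = [1.0 / v for v in var]
        w = [x / sum(inv) for x in inv]
    mean = [sum(w[i] * solutions[i][k] for i in range(n)) for k in range(m)]
    return mean, w

# Two consistent solutions and one noisy outlier: VCE-style weighting
# should strongly down-weight the outlier.
mean, w = combine_vce([[1.0, 1.0, 1.0], [1.01, 0.99, 1.0], [2.0, 0.0, 3.0]])
```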
NASA Astrophysics Data System (ADS)
Skare, Stefan; Hedehus, Maj; Moseley, Michael E.; Li, Tie-Qiang
2000-12-01
Diffusion tensor mapping with MRI can noninvasively track neural connectivity and has great potential for neural scientific research and clinical applications. For each diffusion tensor imaging (DTI) data acquisition scheme, the diffusion tensor is related to the measured apparent diffusion coefficients (ADC) by a transformation matrix. With theoretical analysis we demonstrate that the noise performance of a DTI scheme is dependent on the condition number of the transformation matrix. To test the theoretical framework, we compared the noise performances of different DTI schemes using Monte-Carlo computer simulations and experimental DTI measurements. Both the simulation and the experimental results confirmed that the noise performances of different DTI schemes are significantly correlated with the condition number of the associated transformation matrices. We therefore applied numerical algorithms to optimize a DTI scheme by minimizing the condition number, hence improving the robustness to experimental noise. In the determination of anisotropic diffusion tensors with different orientations, MRI data acquisitions using a single optimum b value based on the mean diffusivity can produce ADC maps with regional differences in noise level. This will give rise to rotational variances of eigenvalues and anisotropy when diffusion tensor mapping is performed using a DTI scheme with a limited number of diffusion-weighting gradient directions. To reduce this type of artifact, a DTI scheme with not only a small condition number but also a large number of evenly distributed diffusion-weighting gradients in 3D is preferable.
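The dependence of noise performance on the condition number of the transformation matrix can be illustrated directly. Each unit gradient direction g contributes one row mapping the six unique tensor elements to a measured ADC; the two direction sets below are invented examples (one squeezed near a plane, one well spread), not the schemes tested in the paper:

```python
# Build the ADC transformation (design) matrix for a set of
# diffusion-gradient directions: each unit gradient g = (gx, gy, gz)
# contributes the row (gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz).
# A well-spread direction set yields a much smaller condition number.
import numpy as np

def design_matrix(directions):
    rows = []
    for g in directions:
        g = np.asarray(g, float)
        g = g / np.linalg.norm(g)
        gx, gy, gz = g
        rows.append([gx * gx, gy * gy, gz * gz,
                     2 * gx * gy, 2 * gx * gz, 2 * gy * gz])
    return np.array(rows)

# Six directions squeezed close to the xy-plane: ill-conditioned.
bad = [(1, 0, 0.05), (0, 1, 0.05), (1, 1, 0.05),
       (1, -1, 0.05), (1, 0, 0.1), (0, 1, 0.1)]
# Six well-spread directions (dual-gradient style): well-conditioned.
good = [(1, 0, 1), (-1, 0, 1), (0, 1, 1),
        (0, 1, -1), (1, 1, 0), (-1, 1, 0)]

cond_bad = np.linalg.cond(design_matrix(bad))
cond_good = np.linalg.cond(design_matrix(good))
```

Minimizing this condition number over the gradient directions is the optimization the abstract describes.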
NASA Technical Reports Server (NTRS)
Hanson, Donald B.
2001-01-01
This report examines the effects on broadband noise generation of unsteady coupling between a rotor and stator in the fan stage of a turbofan engine. Whereas previous acoustic analyses treated the blade rows as isolated cascades, the present work accounts for reflection and transmission effects at both blade rows by tracking the mode and frequency scattering of pressure and vortical waves. The fan stage is modeled in rectilinear geometry to take advantage of a previously existing unsteady cascade theory for 3D perturbation waves and thereby use a realistic 3D turbulence spectrum. In the analysis, it was found that the set of participating modes divides itself naturally into "independent mode subsets" that couple only among themselves and not to the other such subsets. This principle is the basis for the analysis and considerably reduces computational effort. It also provides a simple, accurate scheme for modal averaging for further efficiency. Computed results for a coupled fan stage are compared with calculations for isolated blade rows. It is found that coupling increases downstream noise by 2 to 4 dB. Upstream noise is lower for isolated cascades and is further reduced by including coupling effects. In comparison with test data, the increase in the upstream/downstream differential indicates that broadband noise from turbulent inflow at the stator dominates downstream noise but is not a significant contributor to upstream noise.
Hysteresis of mode transition in a dual-struts based scramjet
NASA Astrophysics Data System (ADS)
Yan, Zhang; Shaohua, Zhu; Bing, Chen; Xu, Xu
2016-11-01
Tests and numerical simulations were performed to investigate the combustion performance of a dual-staged scramjet combustor. High enthalpy vitiated inflow at a total temperature of 1231 K was supplied using a hydrogen-combustion heater. The inlet Mach number was 2.0. Liquid kerosene was injected into the combustor using the dual crossed struts. The three-dimensional Reynolds-averaged reacting flow was solved using a two-equation k-ω SST turbulence model to calculate the effect of turbulent stresses and a partially premixed flamelet model to account for the effects of turbulence-chemistry interactions. The discrete phase model was utilized to simulate fuel atomization and vaporization. For simplicity, n-decane was chosen as the surrogate fuel, with a reaction mechanism of 40 species and 141 steps. The predicted wall pressure profiles for three fuel injection schemes basically captured the axial trend of the experimental data. With the downstream equivalence ratio held constant, the upstream equivalence ratio was numerically increased from 0.1 to 0.4 until steady combustion was obtained. Subsequently, the upstream equivalence ratio was decreased from 0.4 back to 0.1. Two ramjet modes with different wall pressure profiles and corresponding flow structures were captured at the identical upstream equivalence ratio of 0.1, illustrating an obvious hysteresis phenomenon. The mechanism of this hysteresis was explained by the transition hysteresis of the pre-combustion shock train in the isolator.
NASA Astrophysics Data System (ADS)
Patel, Dhananjay; Dalal, U. D.
2017-05-01
A novel m-QAM Orthogonal Frequency Division Multiplexing (OFDM) Single Sideband (SSB) architecture is proposed for centralized light source (CLS) bidirectional Radio over Fiber (RoF) - Wavelength Division Multiplexing (WDM) - Passive Optical Network (PON). In bidirectional transmission with carrier reuse over a single fiber, Rayleigh Backscattering (RB) noise and reflection (RE) interferences from optical components can seriously deteriorate the transmission performance of fiber optic systems. These interferometric noises can be mitigated by utilizing optical modulation schemes at the Optical Line Terminal (OLT) and Optical Network Unit (ONU) such that the spectral overlap between the optical data spectrum and the RB and RE noise is minimized. A mathematical model is developed for the proposed architecture to accurately measure the performance of the transmission system and also to analyze the effect of interferometric noise caused by the RB and RE. The model takes into account the different modulation schemes employed at the OLT and the ONU using a Mach Zehnder Modulator (MZM), the optical launch power and the bit-rates of the downstream and upstream signals, the gain of the amplifiers at the OLT and the ONU, the RB-RE noise, chromatic dispersion of the single mode fiber, and optical filter responses. In addition, the model analyzes all the components of the RB-RE noise, namely carrier RB, signal RB, carrier RE and signal RE, thus providing a complete representation of all the physical phenomena involved. An optical m-QAM OFDM SSB signal acts as a test signal to validate the model, which shows excellent agreement with simulation results. The SSB modulation techniques using the MZM at the OLT and the ONU differ in that data transmission takes place through the first-order higher and lower optical sidebands, respectively.
This spectral gap between the downstream and upstream signals reduces the effect of Rayleigh backscattering and discrete reflections.
2010-01-01
Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user, including the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m^2/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. 
Average errors of 140% and 116% were found between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high Peclet number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order Upwind or QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to utilize computationally inexpensive discretisation schemes at the cost of accuracy in the resultant species concentration. PMID:20642816
WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions
NASA Astrophysics Data System (ADS)
Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.
2011-02-01
The paper extends weighted essentially non-oscillatory (WENO) methods to three dimensional mixed-element unstructured meshes, comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous solutions test cases and the compressible Euler equations on various types of grids. Schemes of up to fifth order of spatial accuracy are considered.
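The nonlinear weighting at the core of WENO methods is easiest to see in the classical fifth-order 1D reconstruction on a uniform grid; the paper's contribution is extending this idea to mixed-element unstructured meshes in 3D. A minimal sketch of the 1D building block:

```python
# Classical 5th-order WENO reconstruction at the right cell face from
# five cell averages: three 3rd-order candidate stencils are blended
# with nonlinear weights that are biased away from non-smooth stencils
# (Jiang-Shu smoothness indicators, optimal linear weights 0.1/0.6/0.3).

def weno5_reconstruct(v):
    """v = (v[i-2], v[i-1], v[i], v[i+1], v[i+2]) cell averages;
    returns the reconstructed value at face i+1/2."""
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate third-order reconstructions on the three sub-stencils
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13 / 12 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
    b1 = 13 / 12 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13 / 12 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
    eps = 1e-6
    # Nonlinear weights: near a discontinuity the oscillatory stencil's
    # large indicator drives its weight toward zero.
    a0 = 0.1 / (eps + b0) ** 2
    a1 = 0.6 / (eps + b1) ** 2
    a2 = 0.3 / (eps + b2) ** 2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)
```

On smooth data the nonlinear weights approach the optimal linear ones and full fifth-order accuracy is recovered; on unstructured meshes the stencils and linear weights change, but the weighting mechanism is the same.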
A climate model projection weighting scheme accounting for performance and interdependence
NASA Astrophysics Data System (ADS)
Knutti, Reto; Sedláček, Jan; Sanderson, Benjamin M.; Lorenz, Ruth; Fischer, Erich M.; Eyring, Veronika
2017-02-01
Uncertainties of climate projections are routinely assessed by considering simulations from different models. Observations are used to evaluate models, yet there is a debate about whether and how to explicitly weight model projections by agreement with observations. Here we present a straightforward weighting scheme that accounts both for the large differences in model performance and for model interdependencies, and we test reliability in a perfect model setup. We provide weighted multimodel projections of Arctic sea ice and temperature as a case study to demonstrate that, for some questions at least, it is meaningless to treat all models equally. The constrained ensemble shows reduced spread and a more rapid sea ice decline than the unweighted ensemble. We argue that the growing number of models with different characteristics and considerable interdependence finally justifies abandoning strict model democracy, and we provide guidance on when and how this can be achieved robustly.
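A weighting of this shape, rewarding observational skill while penalizing interdependence, can be sketched as follows. The functional form follows the commonly used exponential skill/independence weighting; the distances, shape parameters, and three-model example are invented for illustration:

```python
# Sketch of a performance-and-interdependence model weighting:
# w_i ∝ exp(-(D_i/sigma_d)^2) / (1 + sum_j exp(-(S_ij/sigma_s)^2)),
# where D_i is model i's distance to observations and S_ij the
# distance between models i and j. Near-duplicate models share the
# weight a single independent model of the same skill would receive.
import math

def model_weights(D, S, sigma_d, sigma_s):
    n = len(D)
    w = []
    for i in range(n):
        skill = math.exp(-(D[i] / sigma_d) ** 2)
        dependence = 1.0 + sum(math.exp(-(S[i][j] / sigma_s) ** 2)
                               for j in range(n) if j != i)
        w.append(skill / dependence)
    total = sum(w)
    return [x / total for x in w]

# Models 0 and 1 are near-duplicates; model 2 is independent but
# slightly less skilful (larger distance to observations).
D = [0.1, 0.1, 0.3]
S = [[0.0, 0.05, 1.0],
     [0.05, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
w = model_weights(D, S, sigma_d=0.5, sigma_s=0.5)
```

With these toy numbers the independent model ends up with the largest weight despite its lower skill, because the duplicate pair splits its credit: the opposite of strict model democracy.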
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
Sari, Youssef; Chiba, Tomohiro; Yamada, Marina; Rebec, George V.; Aiso, Sadakazu
2009-01-01
Fetal alcohol exposure is known to induce cell death through apoptosis. We found that colivelin (CLN), a novel peptide with the sequence SALLRSIPAPAGASRLLLLTGEIDLP, prevents this apoptosis. Our initial experiment revealed that CLN enhanced the viability of primary cortical neurons exposed to alcohol. We then used a mouse model of fetal alcohol exposure to identify the intracellular mechanisms underlying these neuroprotective effects. On embryonic day 7 (E7), weight-matched pregnant females were assigned to the following groups: (1) ethanol liquid diet (ALC) with 25% (4.49%, v/v) ethanol-derived calories; (2) pair-fed control; (3) normal chow; (4) ALC combined with administration (i.p.) of CLN (20 μg/20 g body weight); and (5) pair-fed combined with administration (i.p.) of CLN (20 μg/20 g body weight). On E13, fetal brains were collected and assayed for TUNEL staining, caspase-3 colorimetric assay, ELISA, and MSD electrochemiluminescence. CLN blocked the alcohol-induced decline in brain weight and prevented alcohol-induced apoptosis, activation of caspase-3, increases in cytosolic cytochrome c, and decreases in mitochondrial cytochrome c. Analysis of proteins in the upstream signaling pathway revealed that CLN down-regulated the phosphorylation of c-Jun N-terminal kinase. Moreover, CLN prevented the alcohol-induced reduction in phosphorylation of BAD protein. Thus, CLN appears to act directly on upstream signaling proteins to prevent alcohol-induced apoptosis. Further assessment of these proteins and their signaling mechanisms is likely to enhance development of neuroprotective therapies. PMID:19782727
Stubbs, R James; Pallister, Carolyn; Whybrow, Stephen; Avery, Amanda; Lavin, Jacquie
2011-01-01
This project audited rate and extent of weight loss in a primary care/commercial weight management organisation partnership scheme. 34,271 patients were referred to Slimming World for 12 weekly sessions. Data were analysed using individual weekly weight records. Average (SD) BMI change was -1.5 kg/m² (1.3), weight change -4.0 kg (3.7), percent weight change -4.0% (3.6), rate of weight change -0.3 kg/week, and number of sessions attended 8.9 (3.6) of 12. For patients attending at least 10 of 12 sessions (n = 19,907 or 58.1%), average (SD) BMI change was -2.0 kg/m² (1.3), weight change -5.5 kg (3.8), percent weight change -5.5% (3.5), rate of weight change -0.4 kg/week, and average number of sessions attended was 11.5 (0.7) (p < 0.001, compared to all patients). Weight loss was greater in men (n = 3,651) than in women (n = 30,620) (p < 0.001). 35.8% of all patients enrolled and 54.7% in patients attending 10 or more sessions achieved at least 5% weight loss. Weight gain was prevented in 92.1% of all patients referred. Attendance explained 29.6% and percent weight lost in week 1 explained 18.4% of the variance in weight loss. Referral to a commercial organisation is a practical option for National Health Service (NHS) weight management strategies, which achieves clinically safe and effective weight loss. Copyright © 2011 S. Karger AG, Basel.
Minică, Camelia C.; Genovese, Giulio; Hultman, Christina M.; Pool, René; Vink, Jacqueline M.; Neale, Michael C.; Dolan, Conor V.; Neale, Benjamin M.
2017-01-01
Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to capture best the allele frequency-functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds in sequence data association analyses. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. We found that the two tests have equal power if the set of weights resembled the correct ones. However, if the weights are badly specified, the LRT shows superior power (due to its robustness to misspecification). With this data-driven weighting procedure the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia – the PRRC2A (P=1.020E-06) and the VARS2 (P=2.383E-06) – in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power. Indeed, assuming correct specification, in some circumstances the score test is the most powerful. However, LRT has the advantageous properties of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches. PMID:28238293
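The data-driven choice among defensible weighting schemes can be sketched as follows. A toy burden-style statistic stands in for the actual SKAT/LRT machinery, and the MAFs, per-variant scores, and candidate Beta parameters are invented for illustration:

```python
# Sketch of the alternative-weights idea: score the variant set under
# several Beta(MAF; a, b) weighting schemes and keep the scheme giving
# the largest test statistic, on the assumption that it best captures
# the allele frequency-functional effect relationship.
import math

def beta_pdf(x, a, b):
    """Beta density via log-gamma, valid for 0 < x < 1."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

def best_scheme(mafs, variant_scores,
                schemes=((1, 25), (1, 1), (0.5, 0.5))):
    """Return the (a, b) scheme maximizing a toy weighted burden
    statistic, together with the statistic itself."""
    best = None
    for a, b in schemes:
        w = [beta_pdf(m, a, b) for m in mafs]
        stat = sum(wi * si for wi, si in zip(w, variant_scores)) ** 2
        if best is None or stat > best[1]:
            best = ((a, b), stat)
    return best

# Rare variants carry the signal here, so the rare-variant-favouring
# Beta(1, 25) weights should win.
scheme, stat = best_scheme([0.001, 0.005, 0.2], [1.0, 1.2, 0.1])
```

Note that the rare-variant weighting arises naturally from the data rather than from an imposed frequency threshold, which is the point the abstract makes.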
Experimental and Computational Study of Trapped Vortex Combustor Sector Rig with Tri-pass Diffuser
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Shouse, D. T.; Roquemore, W. M.; Burrus, D. L.; Duncan, B. S.; Ryder, R. C.; Brankovic, A.; Liu, N.-S.; Gallagher, J. R.; Hendricks, J. A.
2001-01-01
The Trapped Vortex Combustor (TVC) potentially offers numerous operational advantages over current production gas turbine engine combustors. These include lower weight, lower pollutant emissions, effective flame stabilization, high combustion efficiency, excellent high altitude relight capability, and operation in the lean burn or RQL (Rich burn/Quick mix/Lean burn) modes of combustion. The present work describes the operational principles of the TVC, and provides detailed performance data on a configuration featuring a tri-pass diffusion system. Performance data include EINOx (NO(sub x) emission index) results for various fuel-air ratios and combustor residence times, combustion efficiency as a function of combustor residence time, and combustor lean blow-out (LBO) performance. Computational fluid dynamics (CFD) simulations using liquid spray droplet evaporation and combustion modeling are performed and related to flow structures observed in photographs of the combustor. The CFD results are used to understand the aerodynamics and combustion features under different fueling conditions. Performance data acquired to date are favorable in comparison to conventional gas turbine combustors. Further testing over a wider range of fuel-air ratios, fuel flow splits, and pressure ratios is in progress to explore the TVC performance. In addition, alternate configurations for the upstream pressure feed, including bi-pass diffusion schemes, as well as variations on the fuel injection patterns, are currently in test and evaluation phases.
Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-11-01
In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as a function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, become also obvious.
NASA Technical Reports Server (NTRS)
1975-01-01
A shuttle EVLSS Thermal Control System (TCS) is defined. Thirteen heat rejection subsystems, thirteen water management subsystems, nine humidity control subsystems, three pressure control schemes and five temperature control schemes are evaluated. Sixteen integrated TCS systems are studied, and an optimum system is selected based on quantitative weighting of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, a bubble expansion tank for water management, and a slurper and rotary separator for humidity control. Design of the selected subsystem prototype hardware is presented.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior, similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes
2017-01-01
Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision to drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the yet unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature based, top-down procedure. This approach should finally test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor analytic approach (confirmatory and subsequent exploratory factor analysis) was pursued. A weighting scheme, which was based on factor loadings obtained in the analyses, was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance. These factors were "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values that had been previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another hint of its validity and therefore its usefulness. 
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
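A factor-loading-based composite such as the workflow composite score described above can, in its simplest form, be a loading-weighted mean of the indicator values. The indicator names and loadings below are purely illustrative, not values from the study:

```python
def composite_score(items, loadings):
    # items:    {indicator_name: observed value (e.g. degree of IT support)}
    # loadings: {indicator_name: factor loading used as its weight}
    # weighted mean: sum(w_i * x_i) / sum(w_i)
    num = sum(loadings[k] * items[k] for k in items)
    den = sum(loadings[k] for k in items)
    return num / den

# hypothetical example: two indicators with loadings 2.0 and 1.0
score = composite_score({"data_flow": 1.0, "mobility": 0.0},
                        {"data_flow": 2.0, "mobility": 1.0})
```

The study's finding that factor-loading weights and literature-based weights give nearly the same scores is a robustness check on exactly this kind of aggregation.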
One-dimensional high-order compact method for solving Euler's equations
NASA Astrophysics Data System (ADS)
Mohamad, M. A. H.; Basri, S.; Basuno, B.
2012-06-01
In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most established and relevant are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed in computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the receding-flow and shock-wave shock tube problems, were not investigated in that work. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. Discretization of the convective flux terms uses a hybrid flux-vector splitting known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting with the robustness of flux-vector splitting. The AUSM scheme, built on the third-order compact approximation of the finite difference equations, was then analyzed in detail. For the first-order schemes in the one-dimensional problem, an explicit time integration method is adopted. In addition, the developed and modified source code for one-dimensional flow is validated against four test cases: the unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results also serve to verify that the Riemann problem is correctly defined.
Further analysis compared the characteristics of the AUSM scheme against experimental results from previous works, as well as against computational results generated by the van Leer, KFVS and AUSMPW schemes. Furthermore, extending the AUSM scheme from first-order to third-order accuracy yields a remarkable improvement in resolving shocks, contact discontinuities and rarefaction waves.
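As a rough sketch of the AUSM idea discussed above: the interface Mach number is assembled from separate M+ and M- splittings of the left and right states, the pressure is split independently, and the convective flux is upwinded by the sign of the interface Mach number. This is a minimal first-order 1D illustration using the commonly cited Liou-Steffen split polynomials, not the authors' third-order compact implementation:

```python
import numpy as np

def m_plus(M):
    return 0.25 * (M + 1)**2 if abs(M) <= 1 else 0.5 * (M + abs(M))

def m_minus(M):
    return -0.25 * (M - 1)**2 if abs(M) <= 1 else 0.5 * (M - abs(M))

def p_plus(M, p):
    return 0.25 * p * (M + 1)**2 * (2 - M) if abs(M) <= 1 else p * (M + abs(M)) / (2 * M)

def p_minus(M, p):
    return 0.25 * p * (M - 1)**2 * (2 + M) if abs(M) <= 1 else p * (M - abs(M)) / (2 * M)

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R, gamma=1.4):
    # first-order AUSM interface flux for the 1D Euler equations
    a_L = np.sqrt(gamma * p_L / rho_L)
    a_R = np.sqrt(gamma * p_R / rho_R)
    M_half = m_plus(u_L / a_L) + m_minus(u_R / a_R)   # interface Mach number
    # total enthalpy H = a^2/(gamma-1) + u^2/2
    H_L = a_L**2 / (gamma - 1) + 0.5 * u_L**2
    H_R = a_R**2 / (gamma - 1) + 0.5 * u_R**2
    phi_L = np.array([rho_L * a_L, rho_L * a_L * u_L, rho_L * a_L * H_L])
    phi_R = np.array([rho_R * a_R, rho_R * a_R * u_R, rho_R * a_R * H_R])
    # convective part upwinded by the sign of M_half, plus the split pressure
    F = M_half * (phi_L if M_half >= 0 else phi_R)
    F[1] += p_plus(u_L / a_L, p_L) + p_minus(u_R / a_R, p_R)
    return F
```

For a uniform supersonic state the split flux collapses to the exact upwind Euler flux, which is a convenient sanity check on the polynomials.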
NASA Astrophysics Data System (ADS)
Goren, Liran; Petit, Carole
2017-04-01
Fluvial channels respond to changing tectonic and climatic conditions by adjusting their patterns of erosion and relief. It is therefore expected that by examining these patterns, we can infer the tectonic and climatic conditions that shaped the channels. However, the potential interference between climatic and tectonic signals complicates this inference. Within the framework of the stream power model that describes incision rate of mountainous bedrock rivers, climate variability has two effects: it influences the erosive power of the river, causing local slope change, and it changes the fluvial response time that controls the rate at which tectonically and climatically induced slope breaks are communicated upstream. Because of this dual role, the fluvial response time during continuous climate change has so far been elusive, which hinders our understanding of environmental signal propagation and preservation in the fluvial topography. An analytic solution of the stream power model during general tectonic and climatic histories gives rise to a new definition of the fluvial response time. The analytic solution offers accurate predictions for landscape evolution that are hard to achieve with classical numerical schemes and thus can be used to validate and evaluate the accuracy of numerical landscape evolution models. The analytic solution together with the new definition of the fluvial response time allow inferring either the tectonic history or the climatic history from river long profiles by using simple linear inversion schemes. Analytic study of landscape evolution during periodic climate change reveals that high frequency (10-100 kyr) climatic oscillations with respect to the response time, such as Milankovitch cycles, are not expected to leave significant fingerprints in the upstream reaches of fluvial channels. 
Linear inversion schemes are applied to the Tinée River tributaries in the southern French Alps, where tributary long profiles are used to recover the incision rate history of the Tinée main trunk. Inversion results show periodic, high incision rate pulses, which are correlated with interglacial episodes. Similar incision rate histories are recovered for the past 100 kyr when assuming either constant climatic conditions or periodic climatic oscillations, in agreement with theoretical predictions.
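Under the stream power model discussed above, with incision rate K A^m S^n and n = 1, slope breaks migrate upstream at celerity C(x) = K A(x)^m, so a channel's response time is the travel-time integral of 1/C along its length. A minimal numerical sketch, where the values of K and m are arbitrary placeholders rather than calibrated parameters:

```python
import numpy as np

def response_time(x, A, K=1e-5, m=0.45):
    # x: along-channel distance samples from outlet to head (same units as 1/C)
    # A: drainage area at each sample
    # tau = integral_0^L dx / (K * A(x)**m), via the trapezoid rule
    f = 1.0 / (K * A**m)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
```

Comparing this response time with the period of a climatic forcing (e.g. a 100 kyr Milankovitch cycle) indicates whether the oscillation can imprint on the upstream reaches, which is the paper's criterion for signal preservation.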
A simple algorithm to improve the performance of the WENO scheme on non-uniform grids
NASA Astrophysics Data System (ADS)
Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong
2018-02-01
This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS scheme (the WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1996), designed on uniform grids, in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. At the same time, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
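For reference, the uniform-grid fifth-order WENO-JS reconstruction that the proposed scheme reformulates for non-uniform grids can be sketched as follows. This is the textbook Jiang-Shu form (three third-order candidate stencils blended by smoothness-weighted convex weights), not the authors' modified version:

```python
def weno5_reconstruct(v, eps=1e-6):
    # v: five cell averages v[i-2], v[i-1], v[i], v[i+1], v[i+2]
    # returns the left-biased reconstructed value at interface i+1/2
    v0, v1, v2, v3, v4 = v
    # three third-order candidate reconstructions
    q0 = (2*v0 - 7*v1 + 11*v2) / 6.0
    q1 = (-v1 + 5*v2 + 2*v3) / 6.0
    q2 = (2*v2 + 5*v3 - v4) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
    # nonlinear weights from the ideal linear weights (1/10, 6/10, 3/10)
    a0 = 0.1 / (eps + b0)**2
    a1 = 0.6 / (eps + b1)**2
    a2 = 0.3 / (eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s
```

On smooth data the three smoothness indicators agree, the nonlinear weights collapse to the ideal ones, and the scheme recovers the fifth-order linear reconstruction; the paper's contribution is retaining that behavior when the cell widths vary.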
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the κ-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow).
The emphasis of the test cases was validation of code, and assessment of performance, as well as demonstration of flexibility.
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable it to provide significant navigation augmentation in terms of spectral efficiency, tracking accuracy, multipath mitigation and anti-jamming capability compared with legacy navigation signals. Meanwhile, its characteristic scheme structure can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing a weighted least squares (WLS) algorithm. In this method, the WLS algorithm jointly estimates the frequency shift of each sub-carrier by exploiting the linear relationship between the known sub-carrier frequencies and the Doppler shift. In addition, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform the single-carrier algorithm at lower SNR.
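The core WLS step can be sketched generically: with a design matrix relating the per-sub-carrier frequency shifts to a common Doppler parameter, and a power-based weight per sub-carrier, the estimate has the usual closed form. The matrices below are illustrative placeholders, not the NSCC signal model itself:

```python
import numpy as np

def wls(A, b, w):
    # weighted least squares: x = (A^T W A)^{-1} A^T W b, with W = diag(w)
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# toy example: each sub-carrier's shift is (f_k / f0) * doppler, so the design
# matrix is a single column of known frequency ratios; weights mimic power
A = np.array([[1.0], [2.0], [3.0]])       # hypothetical frequency ratios
b = A[:, 0] * 2.5                         # noiseless measured shifts, doppler = 2.5
w = np.array([1.0, 4.0, 2.0])             # per-sub-carrier power weights
doppler_hat = wls(A, b, w)
```

Weighting by sub-carrier power is the standard way to make the estimator approach the maximum-likelihood solution when the measurement noise variance differs across sub-carriers.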
Gait Characteristic Analysis and Identification Based on the iPhone's Accelerometer and Gyrometer
Sun, Bing; Wang, Yang; Banda, Jacob
2014-01-01
Gait identification is a valuable approach to identify humans at a distance. In this paper, gait characteristics are analyzed based on an iPhone's accelerometer and gyrometer, and a new approach is proposed for gait identification. Specifically, gait datasets are collected by the triaxial accelerometer and gyrometer embedded in an iPhone. Then, the datasets are processed to extract gait characteristic parameters which include gait frequency, symmetry coefficient, dynamic range and similarity coefficient of characteristic curves. Finally, a weighted voting scheme dependent upon the gait characteristic parameters is proposed for gait identification. Four experiments are implemented to validate the proposed scheme. The attitude and acceleration solutions are verified by simulation. Then the gait characteristics are analyzed by comparing two sets of actual data, and the performance of the weighted voting identification scheme is verified by 40 datasets of 10 subjects. PMID:25222034
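A weighted voting rule of the kind described can be sketched as below; the parameter names and weights are hypothetical stand-ins, not values derived in the paper:

```python
def weighted_vote(parameter_votes, weights):
    # parameter_votes: {gait_parameter: subject identity that parameter votes for}
    # weights:         {gait_parameter: reliability weight of that parameter}
    tally = {}
    for param, candidate in parameter_votes.items():
        tally[candidate] = tally.get(candidate, 0.0) + weights[param]
    # identity with the largest accumulated weight wins
    return max(tally, key=tally.get)

votes = {"gait_frequency": "subject_A", "symmetry": "subject_B",
         "dynamic_range": "subject_A", "similarity": "subject_B"}
weights = {"gait_frequency": 0.2, "symmetry": 0.35,
           "dynamic_range": 0.15, "similarity": 0.3}
winner = weighted_vote(votes, weights)
```

The weights let more discriminative characteristics (e.g. the similarity coefficient of characteristic curves) dominate less reliable ones, which is the point of weighting the vote rather than counting parameters equally.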
Income-based equity weights in healthcare planning and policy.
Herlitz, Anders
2017-08-01
Recent research indicates that there is a gap in life expectancy between the rich and the poor. This raises the question: should we on egalitarian grounds use income-based equity weights when we assess benefits of alternative benevolent interventions, so that health benefits to the poor count for more? This article provides three egalitarian arguments for using income-based equity weights under certain circumstances. If income inequality correlates with inequality in health, we have reason to use income-based equity weights on the ground that health inequality is bad. If income inequality correlates with inequality in opportunity for health, we have reason to use such weights on the ground that inequality in opportunity for health is bad. If income inequality correlates with inequality in well-being, income-based equity weights should be used to mitigate inequality in well-being. Three different ways in which to construe income-based equity weights are introduced and discussed. They can be based on relative income inequality, on income rankings and on capped absolute income. The article does not defend any of these types of weighting schemes, but argues that in order to settle which of these types of weighting scheme to choose, more empirical research is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
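The three weight constructions discussed (relative income, income rank, and capped absolute income) might be sketched as follows; the exact functional forms here are illustrative choices, since the article deliberately leaves them open for further empirical work:

```python
def relative_income_weights(incomes):
    # weight inversely proportional to income relative to the group mean
    mean = sum(incomes) / len(incomes)
    return [mean / i for i in incomes]

def rank_weights(incomes):
    # the poorest rank gets the largest weight; weights decline linearly with rank
    order = sorted(range(len(incomes)), key=lambda k: incomes[k])
    n = len(incomes)
    w = [0.0] * n
    for rank, k in enumerate(order):
        w[k] = (n - rank) / n
    return w

def capped_absolute_weights(incomes, cap):
    # extra weight only below an absolute income cap, none above it
    return [max(1.0, cap / i) for i in incomes]
```

Which of these behaves best depends on whether health inequality tracks relative income, income position, or absolute deprivation, which is exactly the empirical question the article says remains open.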
NASA Astrophysics Data System (ADS)
Hishida, Manabu; Hayashi, A. Koichi
1992-12-01
Pulsed Jet Combustion (PJC) is numerically simulated using time-dependent, axisymmetric, full Navier-Stokes equations with the mass, momentum, energy, and species conservation equations for a hydrogen-air mixture. A hydrogen-air reaction mechanism is modeled by nine species and nineteen elementary forward and backward reactions to evaluate the effect of the chemical reactions accurately. A point implicit method with Harten and Yee's non-MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) modified-flux type TVD (Total Variation Diminishing) scheme is applied to deal with the stiff partial differential equations. Furthermore, a zonal method making use of the Fortified Solution Algorithm (FSA) is applied to simulate the phenomena in the complicated shape of the sub-chamber. The numerical result shows that flames propagating in the sub-chamber interact with pressure waves and are deformed into a wrinkled, 'tulip'-like flame, and that a jet passing through the orifice changes its mass flux quasi-periodically.
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Kelkar, S. S.; Lee, F. C.
1983-01-01
A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.
Censored quantile regression with recursive partitioning-based weights
Wey, Andrew; Wang, Lan; Rudser, Kyle
2014-01-01
Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 104, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
Orbit Estimation of Non-Cooperative Maneuvering Spacecraft
2015-06-01
… can only take on values that generate real sigma points; therefore, λ > −n. The additional weighting scheme is outlined in the following equations: κ = α² … orbit shapes resulted in a similar model weighting. Additional cases of this orbit type also resulted in heavily weighting smaller η-value models. … determined using both the symmetric and additional-parameter UTs. The best values for the weighting parameters are then compared for each test case.
Long-range effect of cyanide on mercury methylation in a gold mining area in southern Ecuador.
Guimaraes, Jean Remy Davée; Betancourt, Oscar; Miranda, Marcio Rodrigues; Barriga, Ramiro; Cueva, Edwin; Betancourt, Sebastián
2011-11-01
Small-scale gold mining in Portovelo-Zaruma, southern Ecuador, performed by mercury amalgamation and cyanidation, yields 9-10 t of gold per annum, resulting in annual releases of around 0.65 t of inorganic mercury and 6000 t of sodium cyanide into the local river system. The release of sediments, cyanide, mercury, and other metals present in the ore, such as lead, manganese and arsenic, significantly reduces biodiversity downstream of the processing plants and enriches metals in bottom sediments and biota. However, methylmercury concentrations in sediments downstream of the mining area were recently found to be one order of magnitude lower than upstream or in small tributaries. In this study we investigated cyanide, bacterial activity in water and sediment, and mercury methylation potentials in sediments along the Puyango river watershed, measured respectively by in-situ spectrophotometry and incubation with ³H-leucine and ²⁰³Hg²⁺. Free cyanide was undetectable (<1 μg·L⁻¹) upstream of the mining activities, reached 280 μg·L⁻¹ a few km downstream of the processing plant area, and was still detectable about 100 km downstream. At stations with detectable free cyanide in unfiltered water, 50% of it was dissolved and 50% associated with suspended particles. Bacterial activity and mercury methylation in sediment showed a similar spatial pattern, inverse to the one found for free cyanide in water, i.e. with significant values at pristine upstream sampling points (respectively 6.4 to 22 μg C·mg wet weight⁻¹·h⁻¹ and 1.2 to 19% of total ²⁰³Hg·g dry weight⁻¹·day⁻¹) and undetectable values downstream of the processing plants, returning to upstream values only at the most distant downstream stations. The data suggest that free cyanide oxidation was slower than would be expected from the high water turbulence, resulting in a long-range inhibition of bacterial activity and hence of mercury methylation.
The important mercury fluxes resultant from mining activities raise concerns about its biomethylation in coastal areas where many mangrove areas have been converted to shrimp farming. Copyright © 2011. Published by Elsevier B.V.
Saiki, Michael K.; Schmitt, Christopher J.
1986-01-01
Samples of bluegills (Lepomis macrochirus) and common carp (Cyprinus carpio) collected from the San Joaquin River and two tributaries (Merced River and Salt Slough) in California were analyzed for 21 organochlorine chemical residues by gas chromatography to determine whether pesticide contamination was confined to downstream sites exposed to irrigated agriculture, or whether nonirrigated upstream sites were also contaminated. Residues of p,p′-DDE were detected in all samples of both species. Six other contaminants were also present in both species at one or more of the collection sites: chlordane (cis-chlordane + trans-nonachlor); p,p′-DDD; o,p′-DDT; p,p′-DDT; DCPA (dimethyl tetrachloroterephthalate); and dieldrin. Concentrations of most of these residues were generally higher in carp than in bluegills; residues of other compounds were found only in carp: α-BHC (α-benzenehexachloride), Aroclor® 1260, and toxaphene. Concentrations of most organochlorines in fish increased from upstream to downstream. Water quality variables that are influenced by irrigation return flows (e.g., conductivity, turbidity, and total alkalinity) also increased from upstream to downstream and were significantly correlated (P < 0.05) with organochlorine residue levels in the fish. In carp, concentrations of two residues, ΣDDT (p,p′-DDD + p,p′-DDE + p,p′-DDT; 1.43 to 2.21 mg/kg wet weight) and toxaphene (3.12 mg/kg wet weight), approached the highest levels reported by the National Pesticide Monitoring Program for fish from other intensively farmed watersheds of the United States in 1980 to 1981, and surpassed criteria for whole-body residue concentrations recommended by the National Academy of Sciences and National Academy of Engineering for the protection of piscivorous wildlife.
A new approach to the convective parameterization of the regional atmospheric model BRAMS
NASA Astrophysics Data System (ADS)
Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.
2013-05-01
A simulation of the summer conditions of January 2010 was performed using the atmospheric model Brazilian developments on the Regional Atmospheric Modeling System (BRAMS). The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the closure hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as the observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted to generate a new set of mass fluxes. The results indicated a better skill of the model with the new methodology compared with the old ensemble-mean calculation.
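The weight-estimation step can be illustrated with a much simpler optimizer than the firefly algorithm used in the study: a projected gradient descent that fits nonnegative member weights so the weighted combination of simulated precipitation fields matches an observed field. This is a stand-in sketch, not the authors' method:

```python
import numpy as np

def ensemble_weights(members, obs, iters=5000, lr=0.05):
    # members: (K, N) array of flattened member precipitation fields
    # obs:     (N,) observed (e.g. TRMM-derived) precipitation field
    # minimizes ||w @ members - obs||^2 subject to w >= 0 by projected gradient;
    # lr must be small relative to the largest eigenvalue of members @ members.T
    K = members.shape[0]
    w = np.full(K, 1.0 / K)                    # start from the plain ensemble mean
    for _ in range(iters):
        grad = members @ (w @ members - obs)   # gradient of the quadratic misfit
        w = np.maximum(w - lr * grad, 0.0)     # keep mass-flux weights nonnegative
    return w
```

The plain ensemble mean is the special case w_k = 1/K, which is exactly the "old ensemble mean calculation" the new weighting is compared against.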
Thurman, E.M.; Malcolm, R.L.
1979-01-01
A scheme is presented which uses adsorption chromatography with pH-gradient elution and size-exclusion chromatography to concentrate and separate hydrophobic organic acids from water. A review of the chromatographic processes involved in the flow scheme is also presented. Organic analytes which appear in each aqueous fraction are quantified by dissolved organic carbon analysis. Hydrophobic organic acids in a water sample are concentrated on a porous acrylic resin. These acids usually constitute approximately 30-50 percent of the dissolved organic carbon in an unpolluted water sample and are eluted with an aqueous eluent (dilute base). The concentrate is then passed through a column of polyacryloylmorpholine gel, which separates the acids into high- and low-molecular-weight fractions. The high- and low-molecular-weight eluates are reconcentrated by adsorption chromatography, then eluted with a pH gradient into strong acids (predominantly carboxylic acids) and weak acids (predominantly phenolic compounds). For standard compounds and samples of unpolluted waters, the scheme fractionates humic substances into strong- and weak-acid fractions that are separated from the low-molecular-weight acids. A new method utilizing conductivity is also presented to estimate the acidic components in the methanol fraction.
Local classifier weighting by quadratic programming.
Cevikalp, Hakan; Polikar, Robi
2008-10-01
It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
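A minimal sketch of the weighting step: given local accuracy estimates for each classifier near the query (e.g., computed over its nearest neighbors), solve a small convex QP for nonnegative, sum-to-one weights. The Euclidean objective below is an assumption chosen for illustration; the brief's exact objective function may differ:

```python
import numpy as np
from scipy.optimize import minimize

def local_weights(local_acc):
    """Solve  min_w ||w - a||^2  s.t.  w >= 0, sum(w) = 1,
    where a holds local accuracy estimates near the query sample."""
    a = np.asarray(local_acc, dtype=float)
    n = len(a)
    res = minimize(
        lambda w: np.sum((w - a) ** 2),          # convex quadratic objective
        x0=np.full(n, 1.0 / n),                  # start from uniform weights
        bounds=[(0.0, None)] * n,                # nonnegativity
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return res.x

# Three classifiers; the first is locally most accurate near the query.
w = local_weights([0.9, 0.6, 0.5])
```

As intended by the scheme, the locally most accurate classifier receives the largest weight when labeling the query.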
Quality of Recovery Evaluation of the Protection Schemes for Fiber-Wireless Access Networks
NASA Astrophysics Data System (ADS)
Fu, Minglei; Chai, Zhicheng; Le, Zichun
2016-03-01
With the rapid development of fiber-wireless (FiWi) access networks, protection schemes have received increasing attention due to the risk of huge data loss when failures occur. However, there are few studies that evaluate the performance of FiWi protection schemes under a unified evaluation criterion. In this paper, the quality of recovery (QoR) method was adopted to evaluate the performance of three typical protection schemes (the MPMC, OBOF and RPMF schemes) against segment-level failures in FiWi access networks. The QoR models of the three schemes were derived in terms of availability, quality of the backup path, recovery time, and redundancy. To compare the performance of the three protection schemes comprehensively, five different classes of network services, such as emergency service, prioritized elastic service, and conversational service, were considered by assigning different QoR weights. Simulation results showed that, for most service cases, the RPMF scheme was the best solution for enhancing survivability when planning a FiWi access network.
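The per-service-class comparison reduces to a weighted sum of normalized QoR components. A toy sketch follows; the metric names, values, and weights are hypothetical and not taken from the paper:

```python
def qor_score(metrics, weights):
    """Weighted QoR score of one protection scheme for one service class.
    Metrics are assumed normalized to [0, 1]; weights sum to 1."""
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical normalized QoR components for one scheme.
metrics = {"availability": 0.95, "backup_quality": 0.80,
           "recovery_time": 0.70, "redundancy": 0.60}
# Hypothetical weights for, say, an emergency service class.
weights = {"availability": 0.4, "backup_quality": 0.2,
           "recovery_time": 0.3, "redundancy": 0.1}
score = qor_score(metrics, weights)
```

Repeating this for each scheme under each service class's weight vector yields the kind of cross-scheme comparison the paper reports.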
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Levin, Deborah A.
2011-03-01
In the present work, we have simulated the homogeneous condensation of carbon dioxide and ethanol using a Bhatnagar-Gross-Krook based approach. In an earlier work, Gallagher-Rogers et al. [J. Thermophys. Heat Transfer 22, 695 (2008)] found that it was not possible to simulate the condensation experiments of Wegener et al. [Phys. Fluids 15, 1869 (1972)] using the direct simulation Monte Carlo method. Therefore, in this work, we have used the statistical Bhatnagar-Gross-Krook approach, which was found to be numerically more efficient than the direct simulation Monte Carlo method in our previous studies [Kumar et al., AIAA J. 48, 1531 (2010)], to model the homogeneous condensation of two small polyatomic systems, carbon dioxide and ethanol. A new weighting scheme is developed in the Bhatnagar-Gross-Krook framework to reduce the computational load associated with the study of homogeneous condensation flows. The solutions obtained with the new scheme are compared with those obtained with the baseline Bhatnagar-Gross-Krook condensation model (without the species weighting scheme) for the condensing flow of carbon dioxide in the stagnation pressure range of 1-5 bars. Use of the new weighting scheme makes the simulation of homogeneous condensation of ethanol possible. We obtain good agreement between our simulated predictions for homogeneous condensation of ethanol and experiments in terms of the point of condensation onset and the distribution of the mass fraction of ethanol condensed along the nozzle centerline.
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Chuanghong
2018-02-01
As a sustainable form of construction, green building is increasingly advocated and has attracted widespread attention in society. Carrying out the evaluation and selection of green building design schemes in the survey and design phase of a project, in accordance with a scientific and reasonable evaluation index system, can largely and effectively improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed, taking into account the evaluation contents related to the design scheme. Experts experienced in construction scheme optimization were organized to score each index, and the weight of each evaluation index was determined through the AHP method. The correlation degree between each evaluation scheme and the ideal scheme was calculated using a multilevel grey relational analysis model, and the optimal scheme was then determined. The feasibility and practicability of the evaluation method are verified through examples.
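The grey relational step can be sketched as follows, using the standard relational-coefficient formula with resolution coefficient ρ = 0.5 and, for brevity, equal index weights in place of the paper's AHP-derived weights; the candidate scores below are hypothetical:

```python
import numpy as np

def grey_relational_grades(schemes, ideal, rho=0.5):
    """Grey relational grade of each candidate scheme vs. the ideal scheme.

    schemes: (n_schemes, n_indices) normalized index scores
    ideal:   (n_indices,) ideal reference sequence
    """
    delta = np.abs(schemes - ideal)                     # absolute differences
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)  # relational coefficients
    return coeff.mean(axis=1)                           # equal-weight grade

# Two hypothetical design schemes scored on three normalized indices.
schemes = np.array([[0.9, 0.8, 0.7],
                    [0.6, 0.9, 0.5]])
ideal = np.ones(3)                                      # ideal scheme
grades = grey_relational_grades(schemes, ideal)
best = int(np.argmax(grades))                           # closest to the ideal
```

With AHP weights w_k, the final line would instead be the weighted mean `coeff @ w`.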
Direct numerical simulation of shockwave and turbulent boundary layer interactions
NASA Astrophysics Data System (ADS)
Wu, Minwei
Direct numerical simulations (DNS) of a shockwave/turbulent boundary layer interaction (STBLI) at Mach number 3 and a Reynolds number of 2300 based on the momentum thickness are performed. A 4th-order accurate, bandwidth-optimized weighted-essentially-non-oscillatory (WENO) scheme is used, and the method is found to be too dissipative for the STBLI simulation due to the over-adaptation properties of the original WENO scheme. A relative limiter is therefore introduced to mitigate the problem. Tests on the Shu-Osher problem show that the modified WENO scheme decreases the numerical dissipation significantly. By utilizing a combination of the relative limiter and the absolute limiter described by Jiang & Shu [32], the DNS results are improved further. The DNS data agree well with the reference experiments of Bookey et al. [10] in the size of the separation bubble, the separation and reattachment points, the mean wall-pressure distribution, and the velocity profiles both upstream and downstream of the interaction region. The DNS data show that velocity profiles change dramatically along the streamwise direction. Downstream of the interaction, the velocity profiles show a characteristic "dip" in the logarithmic region, as shown by the experiments of Smits & Muck [66] at much higher Reynolds number. In the separation region, the velocity profiles are found to resemble those of a laminar flow, yet the flow does not fully relaminarize. The mass-flux turbulence intensity is amplified by a factor of about 5 throughout the interaction, which is consistent with the higher-Reynolds-number experiments of Selig et al. [52]. All Reynolds stress components are greatly amplified by the interaction. Assuming that the flow is still two-dimensional downstream of the interaction, the components ρu'u', ρv'v', ρw'w', and ρu'w' are amplified by factors of 6, 6, 12, and 24, respectively, where u is the streamwise and w is the wall-normal velocity.
However, analyses of the turbulence structure show that the flow is not uniform in the spanwise direction downstream of the interaction. A pair of counter-rotating vortices is observed in streamwise-wall-normal planes in the mean flow downstream of the ramp corner. Taking the three-dimensionality into account, the amplification factors of the Reynolds stresses are greatly decreased. The component ρu'w' is amplified by a factor of about 10, which is comparable to that found in the experiments of Smits & Muck [66]. Strong Reynolds analogy (SRA) relations are also studied using the DNS data. The SRA is found to hold in the incoming boundary layer of the DNS. However, inside and downstream of the interaction region, the SRA relations are not satisfied. From the DNS analyses, the shock motion is characterized by a low-frequency component (of order 0.01 U∞/δ). In addition, the motion of the shock is found to have two aspects: a spanwise wrinkling motion and a streamwise oscillatory motion. The spanwise wrinkling is observed to be a local feature with high frequencies (of order U∞/δ). Two-point correlations reveal that the spanwise wrinkling is closely related to the low-momentum motions in the incoming boundary layer as they convect through the shock. The low-frequency shock motion is found to be a streamwise oscillation. Conditional statistics show that there is no significant difference in the mean properties of the incoming boundary layer when the shock is at an upstream or downstream location. However, analyses of the unsteadiness of the separation bubble reveal that the low-frequency shock motion is driven by the downstream flow.
Duplication of an upstream silencer of FZP increases grain yield in rice.
Bai, Xufeng; Huang, Yong; Hu, Yong; Liu, Haiyang; Zhang, Bo; Smaczniak, Cezary; Hu, Gang; Han, Zhongmin; Xing, Yongzhong
2017-11-01
Transcriptional silencer and copy number variants (CNVs) are associated with gene expression. However, their roles in generating phenotypes have not been well studied. Here we identified a rice quantitative trait locus, SGDP7 (Small Grain and Dense Panicle 7). SGDP7 is identical to FZP (FRIZZY PANICLE), which represses the formation of axillary meristems. The causal mutation of SGDP7 is an 18-bp fragment, named CNV-18bp, which was inserted ~5.3 kb upstream of FZP and resulted in a tandem duplication in the cultivar Chuan 7. The CNV-18bp duplication repressed FZP expression, prolonged the panicle branching period and increased grain yield by more than 15% through substantially increasing the number of spikelets per panicle (SPP) and slightly decreasing the 1,000-grain weight (TGW). The transcription repressor OsBZR1 binds the CGTG motifs in CNV-18bp and thereby represses FZP expression, indicating that CNV-18bp is the upstream silencer of FZP. These findings showed that the silencer CNVs coordinate a trade-off between SPP and TGW by fine-tuning FZP expression, and balancing the trade-off could enhance yield potential.
Wadud, Zahid; Hussain, Sajjad; Javaid, Nadeem; Bouk, Safdar Hussain; Alrajeh, Nabil; Alabed, Mohamad Souheil; Guizani, Nadra
2017-09-30
Industrial Underwater Acoustic Sensor Networks (IUASNs) come with intrinsic challenges like long propagation delay, small bandwidth, large energy consumption, three-dimensional deployment, and high deployment and battery replacement cost. Any routing strategy proposed for an IUASN must take these constraints into account. The vector-based forwarding schemes in the literature forward data packets to the sink using holding time and the location information of the sender, forwarder, and sink nodes. Holding time suppresses duplicate data broadcasts; however, it fails to maintain energy and delay fairness in the network. To achieve this, we propose an Energy Scaled and Expanded Vector-Based Forwarding (ESEVBF) scheme. ESEVBF uses the residual energy of the node to scale, and the vector pipeline distance ratio to expand, the holding time. The resulting scaled and expanded holding times of all forwarding nodes differ significantly, which avoids multiple forwardings and thus reduces energy consumption and improves energy balancing in the network. If a node has the minimum holding time among its neighbors, it shrinks the holding time and quickly forwards the data packets upstream. The performance of ESEVBF is analyzed in network scenarios with and without node mobility to ensure its effectiveness. Simulation results show that ESEVBF has low energy consumption, forwards fewer duplicate data copies, and achieves lower end-to-end delay.
Modeling the Effect of Wetlands, Flooding, and Irrigation on River Flow: Application to the Aral Sea
NASA Technical Reports Server (NTRS)
Ferrari, Michael R.; Miller, James R.; Russell, Gary L.
1999-01-01
As the world's population continues to increase, additional stress is placed on water resources. This stress, coupled with future uncertainties regarding climate change, makes arid and semi-arid regions particularly vulnerable. One example is the Aral Sea where the freshwater inflow, which is dominated by snowmelt runoff, has decreased significantly since the expansion of intensive irrigation in the 1960s. The purpose of this paper is to use a river routing scheme from a global climate model to examine the flow of the Amu Dar'ya River into the Aral Sea. The river routing scheme is modified to include groundwater flow, flooding, and evaporative losses in the river's wetlands and floodplain, and anthropogenic withdrawals for irrigation. A set of scenarios is designed to test the sensitivity of river flow to the inclusion of these modifications into the river routing scheme. When riverine wetlands and floodplains are present, the river flow is reduced significantly and is similar to the observed flow. In addition the model results show that it is essential to incorporate human diversions to accurately represent the inflow to the Aral Sea, and they also indicate potential management strategies that might be appropriate to maintain a balance between inflow to the Sea and upstream diversions for irrigation.
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1994-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.
Transverse Focusing of Intense Charged Particle Beams with Chromatic Effects for Heavy Ion Fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
James M. Mitrani, Igor D. Kaganovich, Ronald C. Davidson
A final focusing scheme designed to minimize chromatic effects is discussed. The Neutralized Drift Compression Experiment-II (NDCX-II) will apply a velocity tilt for longitudinal bunch compression and a final focusing solenoid (FFS) for transverse bunch compression. In the beam frame, neutralized drift compression causes a sufficiently large spread in axial momentum, pz, resulting in chromatic effects at the final focal spot during transverse bunch compression. Placing a weaker solenoid upstream of the stronger FFS mitigates chromatic effects and improves transverse focusing by a factor of approximately 2-4 for appropriate NDCX-II parameters.
Goodin, Douglas S.; Jones, Jason; Li, David; Traboulsee, Anthony; Reder, Anthony T.; Beckmann, Karola; Konieczny, Andreas; Knappertz, Volker
2011-01-01
Context Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and, thus, are often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome-measures in randomized trials, its impact on long-term function remains uncertain. Objective Data from the 16-year Long-Term Follow-up study of interferon-beta-1b are used to assess the relationship between drug-exposure and long-term disability in MS patients. Design/Setting To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug-exposure was measured as the medication-possession-ratio, adjusted up or down according to multiple different weighting-schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (using any weighting scheme) affected long-term outcome. The optimal cut-point used to define "high" or "low" exposure-groups was chosen by the algorithm. Subsequent to verification of an exposure-impact that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis in order to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity-analyses were undertaken using different definitions of long-term outcome and different assumptions about the data. Main Outcome Measure Long-Term Disability. Results In these analyses, the same weighting-scheme was consistently selected by the recursive-partitioning algorithm. This scheme reduced (down-weighted) the effectiveness of drug exposure as either disease duration or disability at treatment-onset increased.
Applying this scheme and using propensity-stratification to further mitigate bias, high-exposure had a consistently better clinical outcome compared to low-exposure (Cox proportional hazard ratio = 0.30–0.42; p<0.0001). Conclusions Early initiation and sustained use of interferon-beta-1b has a beneficial impact on long-term outcome in MS. Our analysis strategy provides a methodological framework for bias-mitigation in the analysis of non-randomized clinical data. Trial Registration Clinicaltrials.gov NCT00206635 PMID:22140424
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision-fusion-rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: its structure reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive a closed-form expression for the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in a wide-area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
TUW at the First Total Recall Track
2015-11-20
Appendix A were ignored. 2.1. Term weighting. The bmi run used the basic tf.idf weighting scheme, as given by: (1) weight_T(t, d) = (1 + log(tf_{t,d})) * log(N/df_t); where t is a term, d a document, tf_{t,d} the term frequency, df_t the document frequency, and N is the number of documents in the collection. For… save us some training effort. The weight used, marked by "B" in the run names, is given by: (2) weight_B(t, d) = tf_{t,d} (1 mavgtf avgtf_d mavgtf + (1 1
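Equation (1), the only formula that survives extraction intact, is straightforward to implement. The logarithm base is not stated in the fragment, so natural log is assumed here:

```python
import math

def weight_T(tf_td, df_t, N):
    """Basic tf.idf term weight from Eq. (1):
    (1 + log(tf_{t,d})) * log(N / df_t), and 0 when the term is absent."""
    if tf_td == 0:
        return 0.0
    return (1.0 + math.log(tf_td)) * math.log(N / df_t)

# A term occurring 4 times in a document and appearing in 10 of 1000 documents.
w = weight_T(tf_td=4, df_t=10, N=1000)
```

The weight grows with within-document frequency but shrinks as the term becomes common across the collection, which is the usual tf.idf trade-off.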
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2000-01-01
This project is about the investigation of the development of the discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. On the analysis side, we have studied the efficient and stable discontinuous Galerkin framework for small second derivative terms, for example in Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulations, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One- and two-dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two-dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc.
Numerical examples are shown to demonstrate the accuracy and robustness of the methods for shock calculations. Jointly with P. Montarnal, we have used a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gases. The main idea is an energy decomposition of the form ε = ε₁ + ε₂, where ε₁ is associated with a simpler pressure law (a γ-law in this paper) and the nonlinear deviation ε₂ is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the ε₁ γ-law. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one- and two-dimensional numerical examples are shown to illustrate the effectiveness of this approach.
On Asymptotically Good Ramp Secret Sharing Schemes
NASA Astrophysics Data System (ADS)
Geil, Olav; Martin, Stefano; Martínez-Peñas, Umberto; Matsumoto, Ryutaroh; Ruano, Diego
Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes. In those works the focus is on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage and possibly also partial reconstruction giving a more complete picture of the access structure for sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the considered codes.
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates an iterative process on membership vectors with a weighting scheme, i.e. a weighting W and a tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, we utilize a new stability measure, defined as the Markov random walk auto-covariance. The algorithm does not require the number of communities to be specified in advance, and it naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
ERIC Educational Resources Information Center
Soh, Kaycheng
2015-01-01
In the various world university ranking schemes, the "Overall" is a sum of the weighted indicator scores. As the indicators are of a different nature from each other, "Overall" conceals important differences. Factor analysis of the data from three prominent ranking schemes reveals that there are two factors in each of the…
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
Statistical process control based chart for information systems security
NASA Astrophysics Data System (ADS)
Khan, Mansoor S.; Cui, Lirong
2015-07-01
Intrusion detection systems have a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions into information systems. We apply the concept of statistical process control (SPC) to intrusions in computer networks and information systems. In this article we propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners. We compare the proposed scheme with existing EWMA schemes and the p chart, and finally provide some recommendations for future work.
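A textbook one-sided EWMA chart of the kind the abstract describes, with a single smoothing parameter and time-varying control limits, can be sketched as below. This assumes a Gaussian in-control model and illustrative parameter values; the article's specific monitoring statistic may differ.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """One-sided EWMA monitoring statistic with time-varying control
    limits (textbook form; lam is the single smoothing parameter)."""
    z = np.empty(len(x))
    ucl = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev       # exponentially weighted average
        z[t] = prev
        # variance of z_t under in-control i.i.d. observations
        var = sigma**2 * lam / (2 - lam) * (1 - (1 - lam)**(2 * (t + 1)))
        ucl[t] = mu0 + L * np.sqrt(var)
    signals = z > ucl                             # upper out-of-control signal
    return z, ucl, signals

# In-control traffic metric, then a sustained upward shift (an intrusion burst)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(2.5, 1.0, 20)])
z, ucl, signals = ewma_chart(x)
```

Because the EWMA accumulates evidence across observations, the sustained 2.5-sigma shift in the second half of the series triggers a signal even though individual points might not.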
NASA Astrophysics Data System (ADS)
Tessler, Zachary D.; Vörösmarty, Charles J.; Overeem, Irina; Syvitski, James P. M.
2018-03-01
Modern deltas are dependent on human-mediated freshwater and sediment fluxes. Changes to these fluxes impact delta biogeophysical functioning and affect the long-term sustainability of these landscapes for human and for natural systems. Here we present contemporary estimates of long-term mean sediment balance and relative sea level rise across 46 global deltas. We model scenarios of contemporary and future water resource management schemes and hydropower infrastructure in upstream river basins to explore how changing sediment fluxes impact relative sea level rise in delta systems. Model results show that contemporary sediment fluxes, anthropogenic drivers of land subsidence, and sea level rise result in delta relative sea level rise rates that average 6.8 mm/y. Assessment of impacts of planned and under-construction dams on relative sea level rise rates suggests increases on the order of 1 mm/y in deltas with new upstream construction. Sediment fluxes are estimated to decrease by up to 60% in the Danube and 21% in the Ganges-Brahmaputra-Meghna if all currently planned dams are constructed. Reduced sediment retention on deltas caused by increased river channelization and management has a larger impact, increasing relative sea level rise on average by nearly 2 mm/y. Long-term delta sustainability requires a more complete understanding of how geophysical and anthropogenic change impact delta geomorphology. Local and regional strategies for sustainable delta management that focus on local and regional drivers of change, especially groundwater and hydrocarbon extraction and upstream dam construction, can be highly impactful even in the context of global climate-induced sea level rise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of these transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
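The idea behind low-weight merging in a constant-number scheme can be sketched in a few lines: when a nucleated particle needs a slot, the two least important (lowest-weight) simulation particles are merged into one, conserving total statistical weight and weighted mass. This is a serial, simplified sketch; the paper's scheme is parallel and GPU-based, and the function name is illustrative.

```python
import numpy as np

def low_weight_merge(sizes, weights):
    """Merge the two lowest-weight simulation particles into one,
    conserving total weight and the weighted sum of sizes, freeing
    one slot for a newly nucleated particle (illustrative sketch)."""
    i, j = np.argsort(weights)[:2]                          # two least important
    w = weights[i] + weights[j]                             # conserve weight
    s = (weights[i] * sizes[i] + weights[j] * sizes[j]) / w # weight-averaged size
    keep = [k for k in range(len(sizes)) if k not in (i, j)]
    return np.append(sizes[keep], s), np.append(weights[keep], w)

sizes = np.array([1.0, 2.0, 5.0, 10.0])
weights = np.array([100.0, 1.0, 3.0, 50.0])
sizes2, weights2 = low_weight_merge(sizes, weights)
```

Merging low-weight particles perturbs the represented population far less than removing a random particle outright, which is consistent with the reduced statistical noise the study reports for this technique versus 'random removal'.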
SSME Turbopump Turbine Computations
NASA Technical Reports Server (NTRS)
Jorgenson, P. G. E.
1985-01-01
A two-dimensional viscous code was developed to be used in the prediction of the flow in the SSME high-pressure turbopump blade passages. The rotor viscous code (RVC) employs a four-step Runge-Kutta scheme to solve the two-dimensional, thin-layer Navier-Stokes equations. The Baldwin-Lomax eddy-viscosity model is used for these turbulent flow calculations. A viable method was developed to use the relative exit conditions from an upstream blade row as the inlet conditions to the next blade row. The blade loading diagrams are compared with the meridional values obtained from an in-house quasi-three-dimensional inviscid code. Periodic boundary conditions are imposed on a body-fitted C-grid computed by using the GRids about Airfoils using Poisson's Equation (GRAPE) code. Total pressure, total temperature, and flow angle are specified at the inlet. The upstream-running Riemann invariant is extrapolated from the interior. Static pressure is specified at the exit such that mass flow is conserved from blade row to blade row, and the conservative variables are extrapolated from the interior. For viscous flows, the no-slip condition is imposed at the wall. The normal momentum equation gives the pressure at the wall. The density at the wall is obtained from the wall total temperature.
Loosely bound oxytetracycline in riverine sediments from two tributaries of the Chesapeake Bay
Simon, N.S.
2005-01-01
The fate of antibiotics that bind to riverine sediment is not well understood. A solution used in geochemical extraction schemes to determine loosely bound species in sediments, 1 M MgCl2 (pH 8), was chosen to determine loosely bound, and potentially bioavailable, tetracycline antibiotics (TCs), including oxytetracycline (5-OH tetracycline) (OTC) in sediment samples from two rivers on the eastern shore of the Chesapeake Bay. Bottom sediments were collected at sites upstream from, at, and downstream from municipal sewage-treatment plants (STPs) situated on two natural waterways, Yellow Bank Stream, MD, and the Pocomoke River, MD. Concentrations of easily desorbed OTC ranged from 0.6 to approximately 1.2 μg g-1 dry wt sediment in Yellow Bank Stream and from 0.7 to approximately 3.3 μg g-1 dry wt sediment in the Pocomoke River. Concentrations of easily desorbable OTC were generally smaller in sediment upstream than in sediment downstream from the STP in the Pocomoke River. STPs and poultry manure are both potential sources of OTC to these streams. OTC that is loosely bound to sediment is subject to desorption. Other researchers have found desorbed TCs to be biologically active compounds.
Guo, Qi; Tran, An V
2012-12-17
In this paper, we investigate the transmission impairments in a high-speed single-feeder wavelength-division-multiplexed passive optical network (WDM-PON) employing a low-bandwidth upstream transmitter. A 1-GHz reflective semiconductor optical amplifier (RSOA) is operated at rates of 10 Gb/s and 20 Gb/s in the proposed WDM-PON. Since the system performance is seriously limited by its uplink in both capacity and reach owing to inter-symbol interference and reflection noise, we present a novel technique with simultaneous capability of spectral efficiency enhancement and transmission distance extension in the uplink via coding and equalization that exploit the principles of partial-response (PR) signaling. It is experimentally demonstrated that the proposed system supports the delivery of 10 Gb/s and 20 Gb/s upstream signals over 75-km and 25-km bidirectional fiber, respectively. The configuration of the PR equalizer is optimized for its best performance-complexity trade-off. The reflection tolerance of the 10 Gb/s and 20 Gb/s channels is improved by 8 dB and 6 dB, respectively, with PR coding. The proposed cost-effective signal processing scheme has great potential for next-generation access networks.
A postscript to Circulation of the blood: men and ideas.
Riley, R L
1982-10-01
Since 1964, when Fishman and Richards published Circulation of the Blood: Men and Ideas, Guyton's model of the circulation, in which mean circulatory pressure serves as the upstream pressure for venous return, has been extended, and the concept of vascular smooth muscle tone acting like the pressure surrounding a Starling resistor has been postulated. According to this scheme, the positive zero flow intercepts of rapidly determined arterial pressure-flow curves are the effective downstream pressures for arterial flow to different tissues. The arterioles, like Starling resistors, determine the downstream pressures and are followed by abrupt pressure drops, or "waterfalls." Capillary pressures are closely linked to those of the venules into which they flow. Capillary-venular pressures are the upstream pressures for venous return. In exercising muscles, reduced arteriolar tone lowers arteriolar pressure and increases arterial flow. This, in turn, raises capillary-venular pressure and increases venous flow. The arteriolar-capillary waterfall is decreased or eliminated. Total blood flow is increased by diversion of blood from tissues with slow venous drainage to muscles with fast venous drainage (low resistance X compliance). The heart pumps away the increased venous return by shifting to a new ventricular function curve.
Modeling surface trapped river plumes: A sensitivity study
Hyatt, Jason; Signell, Richard P.
2000-01-01
To better understand the requirements for realistic regional simulation of river plumes in the Gulf of Maine, we test the sensitivity of the Blumberg-Mellor hydrodynamic model to choice of advection scheme, grid resolution, and wind, using idealized geometry and forcing. The test case discharges 1500 m3/s of fresh water into a uniform 32 psu ocean along a straight shelf at 43° north. The water depth is 15 m at the coast and increases linearly to 190 m at a distance 100 km offshore. Constant discharge runs are conducted in the presence of ambient alongshore current and with and without periodic alongshore wind forcing. Advection methods tested are CENTRAL, UPWIND, the standard Smolarkiewicz MPDATA and a recursive MPDATA scheme. For the no-wind runs, the UPWIND advection scheme performs poorly for grid resolutions typically used in regional simulations (grid spacing of 1-2 km, comparable to or slightly less than the internal Rossby radius, and vertical resolution of 10% of the water column), damping out much of the plume structure. The CENTRAL difference scheme also has problems when wind forcing is neglected, and generates too much structure, shedding eddies of numerical origin. When a weak 5 cm/s ambient current is present in the no-wind case, both the CENTRAL and standard MPDATA schemes produce a false fresh- and dense-water source just upstream of the river inflow due to a standing two-grid-length oscillation in the salinity field. The recursive MPDATA scheme completely eliminates the false dense water source, and produces results closest to the grid-converged solution. The results are shown to be very sensitive to vertical grid resolution, and the presence of wind forcing dramatically changes the nature of the plume simulations. The implication of these idealized tests for realistic simulations is discussed, as well as ramifications for previous studies of idealized plume models.
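The contrast between diffusive UPWIND and oscillatory CENTRAL differencing that drives these results can be illustrated with a minimal 1-D periodic advection sketch. This is a forward-in-time toy, not the Blumberg-Mellor model's actual time stepping; the FTCS "central" variant is included only to show spurious oscillations of numerical origin.

```python
import numpy as np

def advect(q0, c, n_steps, scheme="upwind"):
    """Advance 1-D advection q_t + u q_x = 0 (u > 0) on a periodic grid
    at Courant number c with first-order UPWIND or central-in-space
    forward-in-time differencing (illustrative toy schemes)."""
    q = q0.copy()
    for _ in range(n_steps):
        if scheme == "upwind":
            q = q - c * (q - np.roll(q, 1))                # one-sided, diffusive
        else:
            q = q - 0.5 * c * (np.roll(q, -1) - np.roll(q, 1))  # FTCS, oscillatory
    return q

q0 = np.zeros(100)
q0[45:55] = 1.0                                            # salinity-like pulse
q_up = advect(q0, c=0.5, n_steps=100, scheme="upwind")
q_c = advect(q0, c=0.5, n_steps=100, scheme="central")
```

The upwind result conserves the tracer but smears the pulse (the "damping out much of the plume structure" above), while the central result develops overshoots and undershoots with no physical source, analogous to the false fresh- and dense-water signal near the inflow.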
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
Multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble.
The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
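The core weighting step of BMA described above can be sketched as follows. This is a deliberately simplified version assuming Gaussian member errors with a known, common variance; the real procedure estimates weights and per-member error variances jointly (typically with an EM algorithm) after the Box-Cox transform, and all names here are illustrative.

```python
import numpy as np

def bma_weights(preds, obs, sigma=1.0):
    """Derive BMA weights from the Gaussian likelihood of each member's
    errors (simplified sketch: variances are fixed, not estimated)."""
    ll = np.array([-0.5 * np.sum((p - obs) ** 2) / sigma**2 for p in preds])
    w = np.exp(ll - ll.max())            # subtract max for numerical stability
    return w / w.sum()                   # normalize to a probability vector

def bma_mean(preds, w):
    """Expected BMA prediction: the weight-averaged member predictions."""
    return np.tensordot(w, preds, axes=1)

obs = np.array([1.0, 2.0, 3.0, 4.0])
preds = np.array([[1.1, 2.1, 2.9, 4.2],   # skillful member
                  [0.5, 1.0, 2.0, 2.5],   # biased member
                  [1.0, 2.2, 3.1, 3.9]])  # skillful member
w = bma_weights(preds, obs)
mix = bma_mean(preds, w)
```

As in the abstract, better performing members receive higher weights, and the biased member is down-weighted rather than discarded, so it still contributes to the predictive spread.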
Criterion for correct recalls in associative-memory neural networks
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1992-12-01
A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple training models are instances of the WOPL model with certain sets of learning weights. A necessary condition on the choice of learning weights for the convergence property of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. In this paper, an important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that each SNRG has its own threshold value, meaning that any fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given and some theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are accordingly obtained. In principle, when all SNRGs or learning weights chosen satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model will grow at the greatest rate under certain known stochastic meaning for AMNNs, and thus the WOPL model can achieve correct recalls for all fundamental memories. The representative computer simulations confirm the criterion and theoretical analysis.
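Weighted outer-product learning can be sketched directly: the connection matrix is a learning-weighted sum of pattern outer products, and the classical Hopfield rule is the special case in which every learning weight equals one. The sketch below, with illustrative names and two orthogonal test patterns, shows a corrupted cue converging back to its fundamental memory.

```python
import numpy as np

def wopl_weights(patterns, learn_w):
    """Weighted outer-product learning: W = sum_k a_k x_k x_k^T with a
    zero diagonal (the Hopfield rule is the case a_k = 1 for all k)."""
    W = sum(a * np.outer(x, x) for a, x in zip(learn_w, patterns))
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, n_iter=10):
    """Synchronous sign updates until a fixed point (or n_iter steps)."""
    for _ in range(n_iter):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1              # break ties toward +1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])   # orthogonal memories
W = wopl_weights(patterns, learn_w=[1.0, 1.0])
probe = patterns[0].copy()
probe[0] = -1                                         # one-bit corrupted cue
out = recall(W, probe)
```

With unequal learning weights, a memory whose weight (and hence SNRG) is larger gains a larger basin of attraction, which is the intuition behind directing correct recall of individual fundamental memories.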
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
NASA Astrophysics Data System (ADS)
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs only local variables, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors and thus mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method in simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
NASA Astrophysics Data System (ADS)
Lin, Guofen; Hong, Hanshu; Xia, Yunhao; Sun, Zhixin
2017-10-01
Attribute-based encryption (ABE) is an interesting cryptographic technique for flexible cloud data sharing access control. However, some open challenges hinder its practical application. In previous schemes, all attributes are treated as having the same status, although they do not in most practical scenarios. Meanwhile, the size of the access policy increases dramatically with the complexity of its expressiveness. In addition, current research hardly notices that mobile front-end devices, such as smartphones, have poor computational performance, while ABE requires a great deal of bilinear pairing computation. In this paper, we propose a key-policy weighted attribute-based encryption scheme without bilinear pairing computation (KP-WABE-WB) for secure cloud data sharing access control. A simple weighted mechanism is presented to describe the different importance of each attribute. We introduce a novel construction of ABE that executes no bilinear pairing computation. Compared to previous schemes, our scheme has better performance in the expressiveness of its access policy and in computational efficiency.
Perfluorinated substance assessment in sediments of a large-scale reservoir in Danjiangkou, China.
He, Xiaomin; Li, Aimin; Wang, Shengyao; Chen, Hao; Yang, Zixin
2018-01-07
The occurrence of eight perfluorinated compounds (PFCs) in the surface sediments from 10 sampling sites spread across the Danjiangkou Reservoir was investigated by isotope dilution ultra-high-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) after solid-phase extraction (SPE). All the sediments from the 10 sites contained detectable levels of PFCs. The total concentration of the target PFCs in each sediment sample (C∑PFCs) ranged from 0.270 to 0.395 ng g-1 of dry weight, and the mean value of C∑PFCs was 0.324 ± 0.045 ng g-1 of dry weight for the whole reservoir. For each perfluorinated compound in one sediment, perfluorooctane sulfonate (PFOS) or perfluoro-n-butanoic acid (PFBA) consistently had a higher concentration than the other six PFCs, while perfluoro-n-octanoic acid (PFOA) was always undetectable. In terms of spatial distribution, the total and individual concentrations of PFCs in sediment from downstream sites of the Danjiangkou Reservoir were higher than those from upstream sites. Factor analysis revealed that PFCs in the sediment samples originated from electroplating and anti-fog agents in industry, food/pharmaceutical packaging and the water/oil repellent paper coating, and the deposition process. The quotient method was utilized to assess the ecological risk of PFCs in the sediments of the Danjiangkou Reservoir, which showed that the concentrations of PFCs were not considered a risk. In this study, detailed information on the concentration level and distribution of PFCs in the sediments of the Danjiangkou Reservoir, which is the source of water for the Middle Route Project of the South-to-North Water Transfer Scheme in China, was reported and analyzed for the first time. These results can provide valuable information for water resource management and pollution control in the Danjiangkou Reservoir.
DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs).
Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer
2018-05-12
Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select the node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with the surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme has high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased network lifetime.
Optimal Weight Assignment for a Chinese Signature File.
ERIC Educational Resources Information Center
Liang, Tyne; And Others
1996-01-01
Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…
Rapid evaluation of high-performance systems
NASA Astrophysics Data System (ADS)
Forbes, G. W.; Ruoff, J.
2017-11-01
System assessment for design often involves averages, such as rms wavefront error, that are estimated by ray tracing through a sample of points within the pupil. Novel general-purpose sampling and weighting schemes are presented and it is also shown that optical design can benefit from tailored versions of these schemes. It turns out that the type of Gaussian quadrature that has long been recognized for efficiency in this domain requires about 40-50% more ray tracing to attain comparable accuracy to generic versions of the new schemes. Even greater efficiency gains can be won, however, by tailoring such sampling schemes to the optical context where azimuthal variation in the wavefront is generally weaker than the radial variation. These new schemes are special cases of what is known in the mathematical world as cubature. Our initial results also led to the consideration of simpler sampling configurations that approximate the newfound cubature schemes. We report on the practical application of a selection of such schemes and make observations that aid in the discovery of novel cubature schemes relevant to optical design of systems with circular pupils.
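A concrete instance of pupil sampling for such averages is sketched below: Gauss-Legendre nodes in u = r² (which absorbs the radial area Jacobian) crossed with equally weighted azimuthal spokes. This is a generic quadrature sketch, not the tailored cubature schemes of the paper; the function name and node counts are illustrative. The test integrand r⁴cos²θ has the exact pupil average 1/6, which this rule reproduces to machine precision.

```python
import numpy as np

def disk_quadrature(n_rad=4, n_azi=8):
    """Sampling points and weights for averages over the unit disk:
    Gauss-Legendre in u = r^2 crossed with uniform azimuthal spokes
    (a generic product rule; weights sum to 1)."""
    u, wu = np.polynomial.legendre.leggauss(n_rad)
    u = 0.5 * (u + 1.0)                       # map nodes from [-1,1] to [0,1]
    wu = 0.5 * wu                             # rescale weights accordingly
    theta = 2 * np.pi * np.arange(n_azi) / n_azi
    r, t = np.meshgrid(np.sqrt(u), theta)     # ring/spoke sample grid
    w = np.repeat(wu[None, :], n_azi, axis=0) / n_azi
    return r.ravel(), t.ravel(), w.ravel()

r, t, w = disk_quadrature()
# Pupil average of W(r, theta) = r^4 cos^2(theta); the exact value is 1/6.
mean_val = np.sum(w * r**4 * np.cos(t)**2)
```

The efficiency argument in the abstract is about doing better than such generic product rules: exploiting the weaker azimuthal variation of real wavefronts allows comparable accuracy from fewer rays.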
Rainey, R C T
2018-01-01
For tidal power barrages, a breast-shot water wheel, with a hydraulic transmission, has significant advantages over a conventional Kaplan turbine. It is better suited to combined operations with pumping that maintain the tidal range upstream of the barrage (important in reducing the environmental impact), and is much less harmful to fish. It also does not require tapered entry and exit ducts, making the barrage much smaller and lighter, so that it can conveniently be built in steel. For the case of the Severn Estuary, UK, it is shown that a barrage at Porlock would generate an annual average power of 4 GW (i.e. 35 TWh yr-1), maintain the existing tidal ranges upstream of it and reduce the tidal ranges downstream of it by only about 10%. The weight of steel required, in relation to the annual average power generated, compares very favourably with a recent offshore wind farm.
NASA Astrophysics Data System (ADS)
Tanji, Hajime; Kiri, Hirohide; Kobayashi, Shintaro
When total supply is smaller than total demand, it is difficult to apply the paddy irrigation water distribution rule. The gap must be narrowed by decreasing demand. Historically, the upstream-served rule, a rotation schedule, or a central schedule weighted by irrigated area was adopted. This paper proposes the hypothesis that these rules depend on notions of social justice, a hypothesis called the "Society-Justice-Water Distribution Rule Hypothesis". Justice, which here means a balance between the efficiency and equity of distribution, is discussed under the political philosophies of utilitarianism, liberalism (Rawls), libertarianism, and communitarianism. The upstream-served rule can be derived from libertarianism. The rotation schedule and central schedule can be derived from communitarianism. Liberalism can provide an arranged schedule to adjust supply and demand based on "the Difference Principle". The authors conclude that, to achieve efficiency and equity, liberalism may provide the best solution after modernization.
NASA Astrophysics Data System (ADS)
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes into the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that with the DC method, the scheme with the lowest CPU time is, in general, the SOU. In contrast to the results of the DC procedure, the CPU time for the DIAMOND and QUICK schemes using the NWF method is shown to be between 3.8 and 23.1% faster and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time-consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them to predict the streamflow generated from a known hydrologic model (the abcd model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model with a different structure than that of the candidate models (i.e., the "abcd" model or the VIC model). Results from the study show that streamflow estimated from single models performed better than the multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model predictions. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the best-performing candidate model in the calibration period.
Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
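The paper's MM-1 scheme conditions the weights on the predictor state; as a rough illustration of the simpler skill-weighting idea underlying optimal combination (closer to MM-O than MM-1), the sketch below weights each candidate model by its inverse mean-squared error over a calibration period. The models and numbers are invented for the example.

```python
import numpy as np

def combine_predictions(preds, obs):
    """Weight each candidate model by its inverse mean-squared error
    over a calibration period, then form the weighted combination."""
    errors = [np.mean((p - obs) ** 2) for p in preds]
    inv = np.array([1.0 / e for e in errors])
    weights = inv / inv.sum()
    combined = sum(w * p for w, p in zip(weights, preds))
    return weights, combined

# Two hypothetical candidate models predicting the same observed flows
obs = np.array([10.0, 12.0, 8.0, 15.0])
model_a = obs + np.array([0.5, -0.5, 0.5, -0.5])   # small errors
model_b = obs + np.array([2.0, -2.0, 2.0, -2.0])   # larger errors
w, pred = combine_predictions([model_a, model_b], obs)
```

The better-performing model receives the larger weight; a state-dependent scheme like MM-1 would instead recompute such weights conditional on the current predictor value.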
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme makes it possible to reduce the number of sampling points while maintaining, or even increasing, the accuracy with which the investigated attribute is estimated. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or combinations of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time.
The use of the bulk ECa gradient as an exhaustive variable, known at every node of an interpolation grid, allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
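A minimal sketch of spatial simulated annealing under the MMSD criterion described above (not the MSANOS implementation): one sample location is perturbed per iteration, worse designs are accepted with a probability that shrinks as the temperature cools, and the best design seen is retained. The grid, cooling law, and starting design are invented for the example.

```python
import math
import random

def mean_shortest_distance(samples, grid):
    """MMSD criterion: mean over all grid nodes of the distance to the nearest sample."""
    return sum(min(math.dist(g, s) for s in samples) for g in grid) / len(grid)

def anneal(samples, grid, candidates, t0=1.0, cooling=0.95, iters=400, seed=0):
    """Spatial simulated annealing: move one sample at a time, accept worse
    configurations with probability exp(-increase / temperature), and keep
    the best configuration encountered."""
    rng = random.Random(seed)
    current = list(samples)
    cost = mean_shortest_distance(current, grid)
    best, best_cost = list(current), cost
    t = t0
    for _ in range(iters):
        trial = list(current)
        trial[rng.randrange(len(trial))] = rng.choice(candidates)
        c = mean_shortest_distance(trial, grid)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            current, cost = trial, c
            if c < best_cost:
                best, best_cost = list(trial), c
        t *= cooling
    return best, best_cost

grid = [(x, y) for x in range(10) for y in range(10)]
start = [(0, 0), (0, 1), (0, 2)]          # deliberately clustered initial design
design, mmsd = anneal(start, grid, candidates=grid)
```

The MWMSD variant would simply multiply each node's shortest distance by a weight derived from the ECa gradient before averaging.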
Sensors with centroid-based common sensing scheme and their multiplexing
NASA Astrophysics Data System (ADS)
Berkcan, Ertugrul; Tiemann, Jerome J.; Brooksby, Glen W.
1993-03-01
The ability to multiplex sensors with different measurands but a common sensing scheme is important in aircraft and aircraft engine applications; this unification of the sensors into a common interface has major implications for weight, cost, and reliability. A new class of sensors based on a common sensing scheme and their electro-optic (E/O) interface has been developed. The approach detects the location of the centroid of a beam of light; the set of fiber optic sensors sharing this sensing scheme includes linear and rotary position, temperature, and pressure, as well as duct Mach number. The sensing scheme provides immunity to intensity variations caused by the source or by environmental effects on the fiber. A common electro-optic interface, spatially multiplexed at the detector, has been demonstrated with a position sensor and a temperature sensor.
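The intensity immunity claimed above follows directly from the centroid being a ratio of intensity-weighted sums, so any overall scale factor cancels. A minimal sketch (the beam profile and pixel grid are invented):

```python
import numpy as np

def centroid_position(intensity, positions):
    """Intensity-weighted centroid of a beam profile on a detector array.
    The ratio cancels any overall intensity scale, which is what makes the
    scheme immune to source drift or fiber attenuation."""
    intensity = np.asarray(intensity, dtype=float)
    return np.sum(positions * intensity) / np.sum(intensity)

x = np.arange(10.0)
beam = np.exp(-0.5 * ((x - 4.0) / 1.5) ** 2)   # beam centered near pixel 4
c1 = centroid_position(beam, x)
c2 = centroid_position(0.3 * beam, x)          # same beam, 70% attenuated
```

The two centroids agree despite the attenuation, which is the property the common interface exploits across the different measurands.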
Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki
2014-01-01
Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques can be applicable either to susceptibility or eddy-current induced distortion alone with a few exceptions. The present study compared the correction efficiency of FSL tools, “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips and non–diffusion-weighted images with the same parameter, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior as measured by increased FA values to the 30 directions encoding scheme, despite comparable acquisition time. 
This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging rather than the eddy_correct tool, especially with trilinear interpolation, using 60 directions encoding scheme. PMID:25405472
Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.
Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu
2016-06-17
Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMGs) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emissions and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and the random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs in the UCS and LCS.
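The Monte Carlo treatment of renewable uncertainty above amounts to a sample-average approximation: candidate decisions are scored by their average cost over sampled scenarios. The toy sketch below illustrates only that idea, not the paper's two-stage SOP model; the cost coefficients, fleet size, and scenario distribution are all invented.

```python
import random

def expected_cost(n_ev, scenarios, ev_cost=2.0, cv_cost=5.0, shortfall_penalty=50.0):
    """Average operating cost over Monte Carlo scenarios of renewable output.
    EVs are assumed cheaper to run than the CVs they replace, but a large
    fleet risks a penalty whenever renewable generation falls short."""
    total = 0.0
    for renewable in scenarios:
        demand = n_ev * 1.0                    # hypothetical per-EV energy demand
        shortfall = max(0.0, demand - renewable)
        total += n_ev * ev_cost + (10 - n_ev) * cv_cost + shortfall * shortfall_penalty
    return total / len(scenarios)

rng = random.Random(1)
scenarios = [rng.uniform(2.0, 8.0) for _ in range(1000)]   # assumed wind/solar output
best = min(range(0, 11), key=lambda n: expected_cost(n, scenarios))
```

The real model would solve a second-stage dispatch problem per scenario rather than apply a flat penalty.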
Intelligent call admission control for multi-class services in mobile cellular networks
NASA Astrophysics Data System (ADS)
Ma, Yufeng; Hu, Xiulin; Zhang, Yunyu
2005-11-01
Scarcity of the spectrum resource and the mobility of users make quality of service (QoS) provision a critical issue in mobile cellular networks. This paper presents a fuzzy call admission control scheme to meet QoS requirements. A performance measure is formed as a weighted linear function of the new-call and handoff-call blocking probabilities of each service class. Simulation compares the proposed fuzzy scheme with complete sharing and guard channel policies. Simulation results show that the fuzzy scheme achieves more robust performance in terms of the average blocking criterion.
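The weighted linear performance measure described above can be sketched directly; the weights and blocking probabilities below are invented, with handoff drops weighted more heavily than new-call blocks, as is conventional.

```python
def performance_measure(classes):
    """Weighted linear combination of new-call and handoff-call blocking
    probabilities across service classes (lower is better)."""
    return sum(c["w_new"] * c["p_block_new"] + c["w_handoff"] * c["p_block_handoff"]
               for c in classes)

classes = [
    {"w_new": 1.0, "w_handoff": 10.0, "p_block_new": 0.02, "p_block_handoff": 0.005},
    {"w_new": 1.0, "w_handoff": 10.0, "p_block_new": 0.05, "p_block_handoff": 0.010},
]
score = performance_measure(classes)
```

An admission controller, fuzzy or otherwise, would then be tuned to minimize this scalar.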
NASA Astrophysics Data System (ADS)
Huang, Juntao; Shu, Chi-Wang
2018-05-01
In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.
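The bound-preserving limiter referenced above works by scaling the polynomial toward its cell average just enough to restore the bounds; a simplified sketch of that Zhang-Shu-type scaling is below. For illustration the cell average is taken as the mean of the nodal values, whereas the actual schemes use the exact quadrature average, which must itself already lie in the bounds.

```python
import numpy as np

def bound_preserving_limiter(node_values, m=0.0, M=1.0):
    """Zhang-Shu-type scaling limiter: squeeze the nodal values of the DG
    polynomial toward the cell average just enough that they stay in [m, M].
    The scaling preserves the cell average by construction."""
    avg = np.mean(node_values)   # stand-in for the exact cell average
    vmax, vmin = np.max(node_values), np.min(node_values)
    theta = 1.0
    if vmax > avg:
        theta = min(theta, (M - avg) / (vmax - avg))
    if vmin < avg:
        theta = min(theta, (avg - m) / (avg - vmin))
    return avg + theta * (node_values - avg)

vals = np.array([-0.2, 0.4, 1.3])   # overshoots [0, 1] on both sides
limited = bound_preserving_limiter(vals)
```

Because only the deviation from the average is scaled, conservation is untouched, which is why the same limiter transfers to WENO finite volume schemes as the abstract notes.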
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
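The reweighted l1 penalty mentioned above (with NC MUSIC-like weights in the paper) belongs to the generic iteratively reweighted l1 family; the sketch below shows that generic scheme with simple 1/(|x|+eps) weight updates and an ISTA inner solver, not the paper's MIMO-radar formulation. The sensing matrix and sparse signal are synthetic.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, outer=5, inner=200, eps=1e-3):
    """Reweighted l1 recovery: repeatedly solve a weighted LASSO with ISTA,
    then update weights w_i = 1/(|x_i| + eps) so that small coefficients
    are penalized more, sharpening the sparse solution."""
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # step below 1/Lipschitz constant
    w = np.ones(n)
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - step * grad, step * lam * w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]         # 3-sparse "spatial spectrum"
y = A @ x_true
x_hat = reweighted_l1(A, y)
```

In the DOA setting, the locations of the large recovered coefficients on the angular grid give the direction estimates.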
Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN
NASA Technical Reports Server (NTRS)
Quinlan, Jesse; McDaniel, James; Baurle, Robert A.
2013-01-01
Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.
Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme
NASA Astrophysics Data System (ADS)
McMahon, Sean
The purpose of this thesis was to develop a code that could be used to gain a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh, with the smallest length scales equal to 2-3 flamelet lengths. Code development linking Chemkin, for the chemical kinetics, to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched experimental observations; however, the simulation was unable to progress past the formation of the triple point.
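For reference, the core of a weighted essentially non-oscillatory scheme is the nonlinear weighting of candidate stencil reconstructions; a sketch of the classic fifth-order Jiang-Shu variant (likely close to, but not necessarily identical to, the scheme used in the thesis) is below.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Classic Jiang-Shu WENO5: reconstruct the interface value v_{i+1/2}
    from five cell averages v[i-2..i+2] as a nonlinear convex combination
    of three third-order substencil reconstructions."""
    v0, v1, v2, v3, v4 = v
    q = np.array([
        (2 * v0 - 7 * v1 + 11 * v2) / 6.0,    # stencil {i-2, i-1, i}
        (-v1 + 5 * v2 + 2 * v3) / 6.0,        # stencil {i-1, i, i+1}
        (2 * v2 + 5 * v3 - v4) / 6.0,         # stencil {i, i+1, i+2}
    ])
    beta = np.array([                          # smoothness indicators
        13 / 12 * (v0 - 2 * v1 + v2) ** 2 + 0.25 * (v0 - 4 * v1 + 3 * v2) ** 2,
        13 / 12 * (v1 - 2 * v2 + v3) ** 2 + 0.25 * (v1 - v3) ** 2,
        13 / 12 * (v2 - 2 * v3 + v4) ** 2 + 0.25 * (3 * v2 - 4 * v3 + v4) ** 2,
    ])
    gamma = np.array([0.1, 0.6, 0.3])          # optimal linear weights
    alpha = gamma / (eps + beta) ** 2
    w = alpha / alpha.sum()
    return w @ q, w

value, w = weno5_reconstruct(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))   # smooth data
value2, w2 = weno5_reconstruct(np.array([1.0, 1.0, 1.0, 10.0, 10.0]))  # jump
```

On smooth data the weights revert to the linear optimum (0.1, 0.6, 0.3) and full fifth-order accuracy; across the jump, nearly all weight shifts to the smooth substencil, which is what suppresses spurious oscillations near detonation fronts.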
NASA Astrophysics Data System (ADS)
Starov, A. V.; Goldfeld, M. A.
2017-10-01
The efficiency of using two variants of hydrogen injection (distributed and non-distributed injection from vertical pylons) is experimentally investigated. The tests are performed in the attached pipeline regime with the Mach number at the model combustor entrance M=2. The combustion chamber has a backward-facing step at the entrance and slotted channels for combustion stabilization. The tested variants of injection differ basically by the shapes of the fuel jets and, correspondingly, by the hydrogen distribution over the combustor. As a result, distributed injection is found to provide faster ignition, upstream displacement of the elevated pressure region, and more intense combustion over the entire combustor volume.
Pressure balanced drag turbine mass flow meter
Dacus, M.W.; Cole, J.H.
1980-04-23
The density of the fluid flowing through a tubular member may be measured by a device comprising a rotor assembly suspended within the tubular member, a fluid bearing medium for the rotor assembly shaft, independent fluid flow lines to each bearing chamber, and a scheme for detection of any difference between the upstream and downstream bearing fluid pressures. The rotor assembly reacts to fluid flow both by rotation and axial displacement; therefore concurrent measurements may be made of the velocity of blade rotation and also bearing pressure changes, where the pressure changes may be equated to the fluid momentum flux imparted to the rotor blades. From these parameters the flow velocity and density of the fluid may be deduced.
NASA Astrophysics Data System (ADS)
Wang, Yiguang; Chi, Nan
2016-10-01
Light emitting diode (LED) based visible light communication (VLC) has been considered a promising technology for indoor high-speed wireless access due to its unique advantages, such as low cost, license-free operation, and high security. To achieve high-speed VLC transmission, carrierless amplitude and phase (CAP) modulation has been utilized for its low complexity and high spectral efficiency. Moreover, to compensate for linear and nonlinear distortions such as frequency attenuation, sampling time offset, and LED nonlinearity, a series of pre- and post-equalization schemes must be employed in high-speed VLC systems. In this paper, we investigate several advanced pre- and post-equalization schemes for high-order CAP modulation based VLC systems. We propose to use a weighted pre-equalization technique to compensate for the LED frequency attenuation. In post-equalization, a hybrid post-equalizer is proposed, which consists of a linear equalizer, a Volterra series based nonlinear equalizer, and a decision-directed least mean square (DD-LMS) equalizer. A modified cascaded multi-modulus algorithm (M-CMMA) is employed to update the weights of the linear and nonlinear equalizers, while DD-LMS can further improve the performance after the pre-convergence. Based on high-order CAP modulation and these equalization schemes, we have experimentally demonstrated 1.35-Gb/s, 4.5-Gb/s, and 8-Gb/s high-speed indoor VLC transmission systems. The results show the benefit and feasibility of the proposed equalization schemes for high-speed VLC systems.
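The adaptive tap-update idea behind the DD-LMS stage can be sketched with a plain training-directed LMS equalizer; this is a simplification (BPSK symbols and a two-tap channel, both invented, rather than high-order CAP over an LED response), and DD-LMS would substitute the hard decision for the training symbol once pre-converged.

```python
import numpy as np

def lms_equalizer(received, training, n_taps=7, mu=0.01):
    """Adaptive FIR equalizer trained by LMS: the error against known
    training symbols drives the tap update. In DD-LMS the hard decision
    on the output replaces the training symbol after pre-convergence."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # newest sample first
        y = w @ x
        out[n] = y
        e = training[n] - y                        # training-directed error
        w += mu * e * x
    return w, out

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=5000)       # BPSK stand-in for CAP symbols
channel = np.array([1.0, 0.4])                     # assumed two-tap channel response
received = np.convolve(symbols, channel)[:len(symbols)]
w, out = lms_equalizer(received, symbols)
```

After convergence the equalizer output tracks the transmitted symbols closely; a Volterra stage would add products of delayed samples to the regressor to capture LED nonlinearity.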
NASA Technical Reports Server (NTRS)
Milner, G. Martin; Black, Mike; Hovenga, Mike; Mcclure, Paul; Miller, Patrice
1988-01-01
The application of vibration monitoring to the rotating machinery typical of ECLSS components in advanced NASA spacecraft was studied. It is found that the weighted summation of the accelerometer power spectrum is the most successful detection scheme for a majority of problem types. Other detection schemes studied included high-frequency demodulation, cepstrum, clustering, and amplitude processing.
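The detection statistic identified as most successful above, a weighted summation of the accelerometer power spectrum, can be sketched as follows; the sampling rate, fault harmonic, and weighting band are invented for illustration.

```python
import numpy as np

def weighted_spectrum_statistic(signal, weights=None):
    """Detection statistic: weighted sum of the one-sided power spectrum.
    Weights can emphasize the bands where a given fault type shows up."""
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    if weights is None:
        weights = np.ones_like(spec)
    return float(np.sum(weights * spec))

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
healthy = np.sin(2 * np.pi * 60 * t)                    # rotation fundamental
faulty = healthy + 0.5 * np.sin(2 * np.pi * 180 * t)    # hypothetical fault harmonic
w = np.zeros(501)
w[170:190] = 1.0                                        # band around 180 Hz
```

Comparing the statistic for the two signals under the band weighting flags the fault; a monitoring system would threshold this value against a healthy baseline.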
A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.
2011-01-01
Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…
Estepp, Jeremie H; Melloni, Chiara; Thornburg, Courtney D; Wiczling, Paweł; Rogers, Zora; Rothman, Jennifer A; Green, Nancy S; Liem, Robert; Brandow, Amanda M; Crary, Shelley E; Howard, Thomas H; Morris, Maurine H; Lewandowski, Andrew; Garg, Uttam; Jusko, William J; Neville, Kathleen A
2016-03-01
Hydroxyurea (HU) is a crucial therapy for children with sickle cell anemia, but its off-label use is a barrier to widespread acceptance. We found HU exposure is not significantly altered by liquid vs capsule formulation, and weight-based dosing schemes provide consistent exposure. HU is recommended for all children with sickle cell anemia (SCA; HbSS and HbSβ0 thalassemia) starting as young as 9 months of age; however, a paucity of pediatric data exists regarding the pharmacokinetics (PK) or the exposure-response relationship of HU. This trial aimed to characterize the PK of HU in children and to evaluate and compare the bioavailability of a liquid vs capsule formulation. This multicenter, prospective, open-label trial enrolled 39 children with SCA who provided 682 plasma samples for PK analysis following administration of HU. Noncompartmental and population PK models are described. We report that liquid and capsule formulations of HU are bioequivalent, weight-based dosing schemes provide consistent drug exposure, and age-based dosing schemes are unnecessary. These data support the use of liquid HU in children unable to swallow capsules and in those whose weight precludes the use of fixed capsule formulations. Taken with the existing safety and efficacy literature, these findings should encourage the use of HU across the spectrum of age and weight in children with SCA, and they should facilitate the expanded use of HU as recommended in the National Heart, Lung, and Blood Institute guidelines for individuals with SCA. © 2015, The American College of Clinical Pharmacology.
OTACT: ONU Turning with Adaptive Cycle Times in Long-Reach PONs
NASA Astrophysics Data System (ADS)
Zare, Sajjad; Ghaffarpour Rahbar, Akbar
2015-01-01
With the expansion of PONs into Long-Reach PONs (LR-PONs), the efficiency of centralized bandwidth allocation algorithms degrades because of the high propagation delay. This is because these algorithms rely on bandwidth negotiation messages frequently exchanged between the optical line terminal (OLT) in the central office and the optical network units (ONUs) near the users, and these messages become seriously delayed when the network is extended. To address this problem, decentralized algorithms have been proposed that instead exchange bandwidth negotiation messages between the Remote Node (RN)/Local Exchange (LX) and the ONUs near the users. The network still suffers a relatively high delay, since the distances between the RN/LX and the ONUs are relatively large, and control messages must therefore travel twice between the ONUs and the RN/LX in order to go from one ONU to another. In this paper, we propose a novel framework, called ONU Turning with Adaptive Cycle Times (OTACT), that uses Power Line Communication (PLC) to connect two adjacent ONUs. Since population density is high in urban areas, ONUs are close to each other, and the efficiency of the proposed method is therefore high. We investigate the performance of the proposed scheme against other decentralized schemes under worst-case conditions. Simulation results show that the average upstream packet delay can be decreased under the proposed scheme.
Weighted statistical parameters for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rimoldini, Lorenzo
2014-01-01
Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
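One simple weighting of the kind described above assigns each observation half the time span between its neighbours, so clumped points share weight and isolated points count more; this is an illustrative interval-based scheme in the spirit of the paper, not its exact formula (which also adapts to noise level).

```python
import numpy as np

def gap_weights(times):
    """Weight each observation by half the span between its neighbours,
    so clumps of measurements share weight and isolated points count more."""
    t = np.asarray(times, dtype=float)
    prev_gap = np.diff(t, prepend=t[0])
    next_gap = np.diff(t, append=t[-1])
    w = 0.5 * (prev_gap + next_gap)
    return w / w.sum()

def weighted_mean(values, times):
    return float(np.sum(gap_weights(times) * np.asarray(values, dtype=float)))

# Clumped sampling: five points bunched early, one isolated late
times = [0.0, 0.1, 0.2, 0.3, 0.4, 10.0]
values = [1.0, 1.0, 1.0, 1.0, 1.0, 5.0]
wm = weighted_mean(values, times)
```

The unweighted mean of these values is about 1.67, dominated by the early clump; the gap-weighted mean (2.92) treats the sampling as a crude linear interpolation in time, which is the stated assumption of the method.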
NASA Astrophysics Data System (ADS)
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the high resolution and wide field of view required to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, as is often the case in microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to designing partial weighted random sampling schemes is to bias sampling toward the high-signal-energy portions of the binarized image geometry after Fourier transformation (i.e., in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or can more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling in image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
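The baseline prescription criticized above, biasing the random sampling toward the high-energy portions of the geometry's k-space representation, can be sketched in one dimension as follows; the channel geometry and sample budget are invented, and the paper's contribution is precisely a better-than-proportional choice of this distribution.

```python
import numpy as np

def weighted_random_mask(kspace_energy, n_samples, rng):
    """Draw phase-encode positions without replacement, with probability
    proportional to the k-space energy of the binarized geometry."""
    p = kspace_energy / kspace_energy.sum()
    idx = rng.choice(len(p), size=n_samples, replace=False, p=p)
    mask = np.zeros(len(p), dtype=bool)
    mask[idx] = True
    return mask

rng = np.random.default_rng(3)
geometry = np.zeros(128)
geometry[48:80] = 1.0                                  # binarized 1D channel
energy = np.abs(np.fft.fftshift(np.fft.fft(geometry))) ** 2
mask = weighted_random_mask(energy, n_samples=32, rng=rng)   # 4x sub-sampling
```

Because the energy of a compact channel concentrates near DC, such masks oversample the k-space center, which is exactly the regime where the abstract notes the prescription can become inefficient.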
Kamalakaran, Sitharthan; Radhakrishnan, Senthil K; Beck, William T
2005-06-03
We developed a pipeline to identify novel genes regulated by the steroid hormone-dependent transcription factor, estrogen receptor, through a systematic analysis of the upstream regions of all human and mouse genes. We built a database of putative promoter regions for 23,077 human and 19,984 mouse transcripts from National Center for Biotechnology Information annotation and 8793 human and 6785 mouse promoters from the Database of Transcriptional Start Sites. We used this database of putative promoters to identify potential targets of estrogen receptor by identifying estrogen response elements (EREs) in their promoters. Our program correctly identified EREs in genes known to be regulated by estrogen, in addition to several new genes whose putative promoters contained EREs. We validated six genes (KIAA1243, NRIP1, MADH9, NME3, TPD52L, and ABCG2) as estrogen-responsive in MCF7 cells using reverse transcription PCR. To allow for extensibility of our program in identifying targets of other transcription factors, we have built a Web interface to access our database and programs. Our Web-based program for Promoter Analysis of Genomes, PAGen@UIC, allows a user to identify putative target genes for vertebrate transcription factors through the analysis of their upstream sequences. The interface allows the user to search the human and mouse promoter databases for potential target genes containing one or more listed transcription factor binding sites (TFBSs) in their upstream elements, using either regular expression-based consensus sequences or position weight matrices. The database can also be searched for promoters harboring user-defined TFBSs given as a consensus or a position weight matrix. Furthermore, the user can retrieve putative promoter sequences for any given gene together with identified TFBSs located on its promoter. Orthologous promoters are also analyzed to determine conserved elements.
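The regular-expression-based consensus search described above can be sketched for the ERE, whose canonical palindromic consensus is GGTCAnnnTGACC; this forward-strand-only sketch uses an invented promoter sequence, and a real pipeline like the one described would also scan the reverse complement and score degenerate sites with position weight matrices.

```python
import re

# Canonical palindromic ERE consensus GGTCAnnnTGACC as a regex
ERE_CONSENSUS = re.compile(r"GGTCA[ACGT]{3}TGACC")

def find_eres(promoter):
    """Return start positions of consensus EREs on the forward strand."""
    return [m.start() for m in ERE_CONSENSUS.finditer(promoter.upper())]

promoter = "ttaacGGTCAgcgTGACCttaa"   # hypothetical upstream sequence
hits = find_eres(promoter)
```

Each hit position would then be reported relative to the transcription start site and cross-checked in orthologous promoters for conservation.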
Spray algorithm without interface construction
NASA Astrophysics Data System (ADS)
Al-Kadhem Majhool, Ahmed Abed; Watkins, A. P.
2012-05-01
This research aims to create a new and robust family of convective schemes that capture the interface between the dispersed and carrier phases in a spray without the need to construct the interface boundary. The Weighted Average Flux (WAF) scheme was selected because it is a random-flux scheme that is second-order accurate in space and time. The convective flux at each cell face uses the WAF scheme blended with the Switching Technique for Advection and Capturing of Surfaces (STACS) scheme for high-resolution flux limiting. The high-resolution scheme is then blended with the WAF scheme, using a switching strategy, to keep the interface sharp and bounded. In this work, the Eulerian-Eulerian framework for a non-reactive turbulent spray is formulated in terms of moments of the drop size distribution, as presented by Beck and Watkins [1]. The computational spray model avoids the need to segregate the local droplet number distribution into parcels of identical droplets. The proposed scheme is tested on capturing the spray edges in modelling hollow-cone sprays without reconstructing the two-phase interface. A simple comparison between the TVD and WAF schemes, using the same flux limiter, is made on a convective hollow-cone spray flow. Results show that the WAF scheme gives a better prediction than the TVD scheme. The only way to check the accuracy of the presented models is by evaluating the spray sheet thickness.
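The flux-limited convective update that WAF- and TVD-type schemes build on can be illustrated with a generic MUSCL step for 1D linear advection. The van Leer limiter below is a stand-in for the STACS limiter, which is not reproduced here; everything is a minimal sketch, not the paper's scheme.

```python
import numpy as np

def van_leer(r):
    """Van Leer flux limiter (one common TVD limiter)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_step(u, c):
    """One periodic step of 1D linear advection at CFL number c (0 < c <= 1)
    using a flux-limited second-order upwind (MUSCL-type) face value."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    du = up - u
    # ratio of upwind slope to local slope (zero where the local slope vanishes)
    r = np.divide(u - um, du, out=np.zeros_like(u), where=np.abs(du) > 1e-12)
    face = u + 0.5 * van_leer(r) * (1.0 - c) * du   # limited value at face i+1/2
    return u - c * (face - np.roll(face, 1))        # conservative update

def total_variation(u):
    return np.abs(np.diff(np.r_[u, u[0]])).sum()

u = np.where(np.arange(50) < 25, 1.0, 0.0)          # advected square wave
tv0 = total_variation(u)
for _ in range(40):
    u = advect_step(u, 0.5)
```

The TVD property is what keeps the captured interface free of oscillations: the total variation never grows, and the profile stays within its initial bounds.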
NASA Astrophysics Data System (ADS)
Vargas, Catarina I. C.; Vaz, Nuno; Dias, João M.
2017-04-01
It is of global interest, for the definition of effective adaptation strategies, to assess climate change impacts in coastal environments. In this study, adjustments in salinity patterns and the corresponding changes in Venice System zonation are evaluated through numerical modelling for Ria de Aveiro, a mesotidal shallow lagoon on the Portuguese coast, for the end of the 21st century in a climate change context. A reference scenario (equivalent to present conditions) and three future scenarios are defined and simulated, for both wet and dry conditions. The future scenarios differ from the reference as follows: scenario 1) projected mean sea level (MSL) rise; scenario 2) projected river flow discharges; and scenario 3) projections for both MSL and river flow discharges. The projections imposed are a MSL rise of 0.42 m and freshwater flow reductions of ∼22% for the wet season and ∼87% for the dry season. Modelling results are analyzed for different tidal ranges. Results indicate: a) upstream salinity intrusion and a generalized salinity increase in the sea level rise scenario, most significant in the middle-to-upper lagoon zones; b) a maximum salinity increase of ∼12 in scenario 3 under wet conditions for the Espinheiro channel, the channel with the largest freshwater contribution; c) an upstream displacement of the saline fronts under wet conditions for all future scenarios, strongest for scenario 3, of ∼2 km in the Espinheiro channel; and d) a landward progression of the saltier physical zones established in the Venice System scheme. Adaptation of the ecosystem to the upstream relocation of physical zones may be blocked by human settlements and other artificial barriers surrounding the estuarine environment.
Audet, Mélisa; Dumas, Alex; Binette, Rachelle; Dionne, Isabelle J
2017-11-01
Socioeconomic inequalities in health persist despite major investments in illness prevention campaigns and universal healthcare systems. In this context, the increased risks of chronic diseases in specific sub-groups of vulnerable populations should be further investigated. The objective of this qualitative study is to examine the interaction between socioeconomic status (SES) and body weight in order to understand underprivileged women's increased vulnerability to chronic diseases after menopause. Drawing specifically on Pierre Bourdieu's sociocultural theory of practice, 20 semi-structured interviews were conducted from May to December of 2013 to investigate the health practices of clinically overweight, postmenopausal women living underprivileged lives in Canada. Findings emphasise that poor life conditions undermine personal investment in preventive health and weight loss, showing the importance for policy makers of giving stronger consideration to upstream determinants of health. © 2017 Foundation for the Sociology of Health & Illness.
Potential applications of the white rot fungus Pleurotus in bioregenerative life support systems
NASA Astrophysics Data System (ADS)
Manukovsky, N. S.; Kovalev, V. S.; Yu, Ch.; Gurevich, Yu. L.; Liu, H.
Earlier we demonstrated the possibility of using a soil-like substrate (SLS) for plant cultivation in bioregenerative life support systems (BLSS). We suggest dividing the process of SLS bioregeneration under BLSS conditions into two stages. At the first stage, plant residues are used for growing white rot fungi (Pleurotus ostreatus, Pleurotus florida, etc.); the fruit bodies can be used as food. At the second stage, the spent mushroom compost is carried into the SLS and treated by microorganisms and worms. The possibility of extending the human food ration is only one reason for adopting the suggested two-stage SLS regeneration scheme: a person's daily consumption of mushrooms is limited to 200-250 g of wet weight, or 20-25 g of dry weight. Multiple tests showed that, more importantly, inclusion of mushrooms into the system cycle contributes through various mechanisms to more stable functioning of the vegetative cenosis in general. Taking the given experimental data into account, we determined the material balance scheme of the mushroom module. The technological peculiarities of mushroom cultivation under BLSS conditions are discussed.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-01-01
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345
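The reweighted l1 idea underlying the scheme above can be sketched on a generic sparse-recovery problem. The sketch below uses the Candes-Wakin rule w = 1/(|x| + eps) as a stand-in for weights taken from an NC MUSIC-like spectrum, and a plain ISTA inner solver; problem sizes and parameters are illustrative assumptions.

```python
import numpy as np

def ista(A, y, w, lam=0.05, n_iter=500):
    """Solve min_x 0.5||Ax - y||^2 + lam * ||w * x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        # weighted soft-thresholding: larger weight -> stronger shrinkage
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def reweighted_l1(A, y, n_rounds=4, eps=1e-2):
    """Reweighted l1: small entries get large weights and are pushed to zero."""
    w = np.ones(A.shape[1])
    for _ in range(n_rounds):
        x = ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)   # 40 measurements, 80 unknowns
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [1.5, -2.0, 1.0]            # 3-sparse ground truth
x_hat = reweighted_l1(A, A @ x_true)
```

Compared to a single unweighted l1 pass, the reweighting rounds sharpen the estimate, which mirrors the resolution gain the paper reports over plain l1 penalties.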
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], in the fluxes presented in [3]. These interpolants use the weighted essentially nonoscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].
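As background for the WENO machinery such schemes build on, here is a sketch of the classical fifth-order WENO reconstruction of a cell-face value from cell averages, with Jiang-Shu smoothness indicators. This is a generic textbook variant, an assumption for illustration, not the exact interpolants of [1] or the central-upwind fluxes of [3].

```python
import numpy as np

def weno5_face(um2, um1, u0, up1, up2, eps=1e-6):
    """Fifth-order WENO reconstruction of u at the right face i+1/2
    from the five cell averages u_{i-2}..u_{i+2} (Jiang-Shu weights)."""
    # three third-order candidate stencils
    p0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    p1 = (-um1 + 5*u0 + 2*up1) / 6.0
    p2 = (2*u0 + 5*up1 - up2) / 6.0
    # smoothness indicators: large where the stencil crosses a discontinuity
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2
    # nonlinear weights collapse onto the smooth stencils near singularities
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w @ np.array([p0, p1, p2])

smooth = weno5_face(1.0, 2.0, 3.0, 4.0, 5.0)   # linear data: exact face value 3.5
```

For smooth data the nonlinear weights revert to the linear weights (0.1, 0.6, 0.3) and the reconstruction is fifth-order accurate; near a discontinuity the weights suppress the offending stencils, which is what avoids the spurious oscillations mentioned above.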
Conformal Electromagnetic Particle in Cell: A Review
Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...
2015-10-26
We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.
NASA Astrophysics Data System (ADS)
Yan, Li; Huang, Wei; Zhang, Tian-tian; Li, Hao; Yan, Xiao-ting
2014-12-01
The mixing and combustion process has an important impact on the engineering realization of the scramjet engine. The nonreacting and reacting flow fields in a transverse injection channel have been investigated numerically, and the predicted results have been compared with the available experimental data in the open literature, namely the wall pressure distributions, the separation length, and the penetration height. Further, the influences of the molecular weight of the fuel and the jet-to-crossflow pressure ratio on the wall pressure distribution have been studied. The predicted results show reasonable agreement with the experimental data, and the trends of the penetration height and the separation distance are almost the same as those obtained in the experiment. The vapor pressure model is suitable for fitting the relationship between the penetration height, the separation distance and the jet-to-crossflow pressure ratio. The combustion process mainly occurs upstream of the injection port, and it makes a great difference to the wall pressure distribution upstream of the injection port, especially when the jet-to-crossflow pressure ratio is large enough, namely 17.72 and 25.15 in the range considered in the current study. For hydrogen, the combustion downstream of the injection port occurs more intensively, and this may be induced by its smaller molecular weight.
IRRA at TREC 2009: Index Term Weighting based on Divergence From Independence Model
2009-11-01
weighting scheme (Salton and Buckley, 1988), where TF stands for the term frequency and IDF stands for the inverse document frequency. In contrast to TF...IDF is a collection-dependent factor, which identifies the terms that concentrate in a few documents of the collection. Salton and Buckley (1988...chapter 4, pages 35–56. Butterworths, Oxford, UK, 1981. G. Salton and C. Buckley. Term-weighting approaches in automatic text retrieval. In Information Processing and Management, pages 513–523, 1988.
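The TF-IDF weighting the snippet refers to can be sketched in its simplest form: raw term frequency times log(N/df), with no length normalization. Salton and Buckley discuss many variants; this minimal one is an illustrative assumption, not the IRRA system's scheme.

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf(t, d) * log(N / df(t))."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                       # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)               # raw term frequency in this document
        weights.append({t: tf[t] * math.log(N / df[t]) for t in tf})
    return weights

docs = ["the cat sat", "the dog sat", "the cat ran fast"]
w = tfidf(docs)
```

A term occurring in every document ("the") gets weight 0, while a term concentrated in one document ("fast") gets the maximum IDF boost, which is exactly the collection-dependent behavior described above.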
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
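Probability-proportional-to-size selection, the size-biased sampling described above, can be sketched in a few lines; the "sizes" below are hypothetical tree diameters, not data from the paper.

```python
import random
from collections import Counter

def pps_sample(units, sizes, n, seed=42):
    """Draw n units with replacement, with selection probability
    proportional to each unit's size (e.g., tree diameter or basal area)."""
    rng = random.Random(seed)
    return rng.choices(units, weights=sizes, k=n)

trees = ["A", "B", "C"]
dbh = [10.0, 20.0, 40.0]            # hypothetical diameters at breast height
counts = Counter(pps_sample(trees, dbh, 70000))
```

Tree C, four times the size of tree A, is selected roughly four times as often; this is the weighted (size-biased) form of the population distribution that such estimators must correct for.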
ERIC Educational Resources Information Center
Greenberg, Kathleen Puglisi
2012-01-01
The scoring instrument described in this article is based on a deconstruction of the seven sections of an American Psychological Association (APA)-style empirical research report into a set of learning outcomes divided into content-, expression-, and format-related categories. A double-weighting scheme used to score the report yields a final grade…
THERAPY WITH P-32 IN POLYCYTHEMIA (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waschulewski, H.; Dorffel, E.W.
1958-01-01
Therapy with P-32 is being used more and more in polycythemia vera rubra. There is no generally valid dosage scheme; body weight, blood picture, and general condition furnish certain clues. At present, we administer about 0.08 mc/kg body weight as an initial dose, adding further quantities later, if necessary, under careful control of the blood picture. (auth)
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
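One of the strong stability preserving Runge-Kutta schemes mentioned above, the third-order Shu-Osher method, is short enough to sketch, together with a convergence check on a toy ODE; the test problem is an illustrative choice, not from the review.

```python
import math

def ssprk3(f, u, dt):
    """One step of third-order SSP Runge-Kutta (Shu-Osher): each stage is a
    convex combination of forward-Euler steps, which preserves stability
    properties (e.g., positivity, TVD) of the underlying Euler update."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

def integrate(n):
    """Integrate u' = -u, u(0) = 1, to t = 1 with n steps."""
    u, dt = 1.0, 1.0 / n
    for _ in range(n):
        u = ssprk3(lambda v: -v, u, dt)
    return u

e1 = abs(integrate(50) - math.exp(-1))
e2 = abs(integrate(100) - math.exp(-1))
ratio = e2 / e1   # ≈ 0.125, i.e. (1/2)^3: third-order convergence
```

Matching this third-order time integrator with a (at least) third-order spatial reconstruction gives the balanced space-time accuracy the review emphasizes.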
NASA Technical Reports Server (NTRS)
Wang, Shugong; Liang, Xu
2013-01-01
A new approach is presented in this paper to effectively obtain parameter estimates for the Multiscale Kalman Smoother (MKS) algorithm. This new approach has demonstrated promising potential in deriving better data products from data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter), rather than the conventional maximum likelihood scheme (called the ML scheme), to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not built simply on strict statistical assumptions about prediction errors and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions in searching for the best parameter estimates for MKS through optimization. In the MO scheme, objective functions are defined to enforce consistency between the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS-fused precipitation performs better with the MO scheme than with the ML scheme. In particular, improvements over the ML scheme are significant for the fused precipitation at fine spatial resolutions. This is mainly due to the additional criteria and constraints involved in the MO scheme compared to the ML scheme. The weakness of the original ML scheme, which blindly puts more weight on the data associated with finer resolutions, is overcome in our new approach.
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.
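The successive over-relaxation (SOR) iteration mentioned for solving the resulting linear systems can be sketched generically; the diagonally dominant tridiagonal test system below is illustrative, not one assembled from actual spline weights.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Successive over-relaxation for Ax = b: a Gauss-Seidel sweep whose
    update is over-relaxed by a factor omega in (0, 2)."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated values for j < i, old values for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# tridiagonal, diagonally dominant system of the kind spline methods produce
A = np.diag(4.0 * np.ones(6)) + np.diag(-np.ones(5), 1) + np.diag(-np.ones(5), -1)
b = np.ones(6)
x = sor(A, b)
```

For symmetric positive definite systems like this one, SOR converges for any omega in (0, 2); choosing omega above 1 accelerates the plain Gauss-Seidel sweep.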
NASA Technical Reports Server (NTRS)
Huff, Ronald G.
1989-01-01
Tests were conducted in the NASA Lewis Research Center's Powered Lift Facility to experimentally evaluate the noise generated by a flight weight, 12 in. butterfly valve installed in a proposed vertical takeoff and landing thrust vectoring system. Fluctuating pressure measurements were made in the circular duct upstream and downstream of the valve. This data report presents the results of these tests. The maximum overall sound pressure level is generated in the duct downstream of the valve and reached a value of 180 dB at a valve pressure ratio of 2.8. At the higher valve pressure ratios the spectrum downstream of the valve is broadband, with its maximum at 1000 Hz.
Jia, Xinzheng; Lin, Huiran; Nie, Qinghua; Zhang, Xiquan; Lamont, Susan J
2016-11-03
Body weight is one of the most important quantitative traits with high heritability in chicken. We previously mapped a quantitative trait locus (QTL) for body weight by genome-wide association study (GWAS) in an F2 chicken resource population. To identify the causal mutations linked to this QTL, expression profiles were determined on livers of high-weight and low-weight chicken lines by microarray. Combining the expression pattern with SNP effects by GWAS, miR-16 was identified as the most likely potential candidate with a 3.8-fold decrease in high-weight lines. Re-sequencing revealed that a 54-bp insertion mutation in the upstream region of miR-15a-16 displayed high allele frequencies in high-weight commercial broiler line. This mutation resulted in lower miR-16 expression by introducing three novel splicing sites instead of the missing 5' terminal splicing of mature miR-16. Elevating miR-16 significantly inhibited DF-1 chicken embryo cell proliferation, consistent with a role in suppression of cellular growth. The 54-bp insertion was significantly associated with increased body weight, bone size and muscle mass. Also, the insertion mutation tended towards fixation in commercial broilers (Fst > 0.4). Our findings revealed a novel causative mutation for body weight regulation that aids our basic understanding of growth regulation in birds.
NASA Astrophysics Data System (ADS)
Iribarren Anacona, P.; Norton, K. P.; Mackintosh, A.
2014-07-01
Glacier retreat since the Little Ice Age has resulted in the development or expansion of hundreds of glacial lakes in Patagonia. Some of these lakes have produced large (≥106 m3) Glacial Lake Outburst Floods (GLOFs) damaging inhabited areas. GLOF hazard studies in Patagonia have been mainly based on the analysis of short-term series (≤50 years) of flood data and until now no attempt has been made to identify the relative susceptibility of lakes to failure. Power schemes and associated infrastructure are planned for Patagonian basins that have historically been affected by GLOFs, and we now require a thorough understanding of the characteristics of dangerous lakes in order to assist with hazard assessment and planning. In this paper, the conditioning factors of 16 outbursts from moraine dammed lakes in Patagonia were analysed. These data were used to develop a classification scheme designed to assess outburst susceptibility, based on image classification techniques, flow routine algorithms and the Analytical Hierarchy Process. This scheme was applied to the Baker Basin, Chile, where at least 7 moraine-dammed lakes have failed in historic time. We identified 386 moraine-dammed lakes in the Baker Basin of which 28 were classified with high or very high outburst susceptibility. Commonly, lakes with high outburst susceptibility are in contact with glaciers and have moderate (>8°) to steep (>15°) dam outlet slopes, akin to failed lakes in Patagonia. The proposed classification scheme is suitable for first-order GLOF hazard assessments in this region. However, rapidly changing glaciers in Patagonia make detailed analysis and monitoring of hazardous lakes and glaciated areas upstream from inhabited areas or critical infrastructure necessary, in order to better prepare for hazards emerging from an evolving cryosphere.
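The Analytical Hierarchy Process step of the classification scheme can be sketched as the principal-eigenvector weighting of a pairwise-comparison matrix; the criteria and judgment values below are hypothetical, not the paper's.

```python
import numpy as np

def ahp_weights(M, tol=1e-12):
    """Criterion weights from an AHP pairwise-comparison matrix via
    power iteration toward the principal eigenvector."""
    w = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(1000):
        w_new = M @ w
        w_new /= w_new.sum()
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w

def consistency_ratio(M, w):
    """Saaty consistency check: CR < 0.1 is conventionally acceptable."""
    n = M.shape[0]
    lam = ((M @ w) / w).mean()            # estimate of the principal eigenvalue
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random consistency indices
    return ci / ri

# hypothetical judgments: glacier contact vs. dam outlet slope vs. lake area
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w = ahp_weights(M)
```

The resulting weights (here favoring glacier contact) would then scale the factor scores of each lake before summing them into an outburst-susceptibility index.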
Microenvironment-Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis
2015-10-01
with a biopolymer (i.e. starch) to improve biocompatibility, and tagged with prostate cancer-targeting ligands. A significant challenge to translation... starch coating of 50 nm and 100 nm SPIONs was crosslinked and coated with amine groups, and then functionalized with NHS-polyethylene glycol (PEG) of...varying molecular weight (i.e., 2k, 5k or 20k Da) as shown in Scheme 1. Scheme 1. Surface modification of starch-coated SPIONs into aminated and
Microenvironment Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis
2016-10-01
coated with a biopolymer (i.e. starch) to improve biocompatibility, and tagged with prostate cancer-targeting ligands. A significant challenge to...The starch coating of 50 nm and 100 nm SPIONs was crosslinked and coated with amine groups, and then functionalized with NHS-polyethylene glycol (PEG...of varying molecular weight (i.e., 2k, 5k or 20k Da) as shown in Scheme 1. Scheme 1. Surface modification of starch-coated SPIONs into aminated
NASA Astrophysics Data System (ADS)
dos Santos, A. F.; Freitas, S. R.; de Mattos, J. G. Z.; de Campos Velho, H. F.; Gan, M. A.; da Luz, E. F. P.; Grell, G. A.
2013-09-01
In this paper we consider an optimization problem applying the metaheuristic Firefly algorithm (FY) to weight an ensemble of rainfall forecasts from daily precipitation simulations with the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) over South America during January 2006. The method is addressed as a parameter estimation problem to weight the ensemble of precipitation forecasts carried out using different options of the convective parameterization scheme. Ensemble simulations were performed using different choices of closures, representing different formulations of dynamic control (the modulation of convection by the environment) in a deep convection scheme. The optimization problem is solved as an inverse problem of parameter estimation. The application and validation of the methodology is carried out using daily precipitation fields, defined over South America and obtained by merging remote sensing estimations with rain gauge observations. The quadratic difference between the model and observed data was used as the objective function to determine the best combination of the ensemble members to reproduce the observations. To reduce the model rainfall biases, the set of weights determined by the algorithm is used to weight members of an ensemble of model simulations in order to compute a new precipitation field that represents the observed precipitation as closely as possible. The validation of the methodology is carried out using classical statistical scores. The algorithm has produced the best combination of the weights, resulting in a new precipitation field closest to the observations.
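A minimal firefly algorithm applied to a toy ensemble-weighting problem illustrates the approach: dimmer fireflies move toward brighter ones with distance-decaying attractiveness plus a shrinking random walk. All parameter values and the synthetic "forecast members" are assumptions for illustration, not those of the BRAMS experiments.

```python
import numpy as np

def firefly_minimize(f, dim, n=20, iters=200, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=1):
    """Minimal firefly algorithm (after Yang): brightness = -cost."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n, dim))
    cost = np.array([f(xi) for xi in x])
    for t in range(iters):
        a = alpha * (1 - t / iters)              # decaying random-step size
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:            # j is brighter: move i toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + a * rng.uniform(-0.5, 0.5, dim)
                    cost[i] = f(x[i])
    best = np.argmin(cost)
    return x[best], cost[best]

# toy ensemble: find weights combining 3 member forecasts into observations
rng = np.random.default_rng(0)
members = rng.standard_normal((3, 50))               # 3 members, 50 grid points
obs = 0.5 * members[0] + 0.3 * members[1] + 0.2 * members[2]
J = lambda w: np.sum((w @ members - obs) ** 2)       # quadratic misfit objective
w_best, j_best = firefly_minimize(J, dim=3)
```

The quadratic misfit J plays the role of the paper's objective function (the squared difference between the weighted ensemble and the observed precipitation field); the recovered weights then define the bias-reduced combined forecast.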
A Highly Flexible and Efficient Passive Optical Network Employing Dynamic Wavelength Allocation
NASA Astrophysics Data System (ADS)
Hsueh, Yu-Li; Rogge, Matthew S.; Yamamoto, Shu; Kazovsky, Leonid G.
2005-01-01
A novel and high-performance passive optical network (PON), the SUCCESS-DWA PON, employs dynamic wavelength allocation to provide bandwidth sharing across multiple physical PONs. In the downstream, tunable lasers, an arrayed waveguide grating, and coarse/fine filtering combine to create a flexible new optical access solution. In the upstream, several distributed and centralized schemes are proposed and investigated. The network performance is compared to conventional TDM-PONs under different traffic models, including the self-similar traffic model and the transaction-oriented model. Broadcast support and deployment issues are addressed. The network's excellent scalability can bridge the gap between conventional TDM-PONs and WDM-PONs. The powerful architecture is a promising candidate for next generation optical access networks.
Mitigating chromatic effects for the transverse focusing of intense charged particle beams
NASA Astrophysics Data System (ADS)
Mitrani, James; Kaganovich, Igor; Davidson, Ronald
2013-09-01
A final focusing scheme designed to minimize chromatic effects is discussed. Solenoids are often used for transverse focusing in accelerator systems that require a charged particle beam with a small focal spot and/or large energy density. A sufficiently large spread in axial momentum will reduce the effectiveness of transverse focusing and result in chromatic effects on the final focal spot. Placing a weaker solenoid upstream of a stronger final focusing solenoid (FFS) mitigates chromatic effects on transverse beam focusing [J. M. Mitrani et al., Nucl. Inst. Meth. Phys. Res. A (2013), http://dx.doi.org/10.1016/j.nima.2013.05.09]. This work was supported by DOE contract DE-AC02-09CH11466.
Inferring Pre-shock Acoustic Field From Post-shock Pitot Pressure Measurement
NASA Astrophysics Data System (ADS)
Wang, Jian-Xun; Zhang, Chao; Duan, Lian; Xiao, Heng; Virginia Tech Team; Missouri Univ of Sci; Tech Team
2017-11-01
Linear interaction analysis (LIA) and iterative ensemble Kalman method are used to convert post-shock Pitot pressure fluctuations to static pressure fluctuations in front of the shock. The LIA is used as the forward model for the transfer function associated with a homogeneous field of acoustic waves passing through a nominally normal shock wave. The iterative ensemble Kalman method is then employed to infer the spectrum of upstream acoustic waves based on the post-shock Pitot pressure measured at a single point. Several test cases with synthetic and real measurement data are used to demonstrate the merits of the proposed inference scheme. The study provides the basis for measuring tunnel freestream noise with intrusive probes in noisy supersonic wind tunnels.
Identification of Natural RORγ Ligands that Regulate the Development of Lymphoid Cells
Santori, Fabio R.; Huang, Pengxiang; van de Pavert, Serge A.; Douglass, Eugene F.; Leaver, David J.; Haubrich, Brad A.; Keber, Rok; Lorbek, Gregor; Konijn, Tanja; Rosales, Brittany N.; Horvat, Simon; Rozman, Damjana; Rahier, Alain; Mebius, Reina E.; Rastinejad, Fraydoon; Nes, W. David; Littman, Dan R.
2015-01-01
SUMMARY Mice deficient in the nuclear hormone receptor RORγt have defective development of thymocytes, lymphoid organs, Th17 cells and type 3 innate lymphoid cells. RORγt binds to oxysterols derived from cholesterol catabolism but it is not clear whether these are its natural ligands. Here, we show that sterol lipids are necessary and sufficient to drive RORγt-dependent transcription. We combined overexpression, RNA interference and genetic deletion of metabolic enzymes to study RORγ-dependent transcription. Our results are consistent with the RORγt ligand(s) being a cholesterol biosynthetic intermediate (CBI) downstream of lanosterol and upstream of zymosterol. Analysis of lipids bound to RORγ identified molecules with molecular weights consistent with CBIs. Furthermore, CBIs stabilized the RORγ ligand-binding domain and induced co-activator recruitment. Genetic deletion of metabolic enzymes upstream of the RORγt-ligand(s) affected the development of lymph nodes and Th17 cells. Our data suggest that CBIs play a role in lymphocyte development potentially through regulation of RORγt. PMID:25651181
A general diagram for estimating pore size of ultrafiltration and reverse osmosis membranes
NASA Technical Reports Server (NTRS)
Sarbolouki, M. N.
1982-01-01
A slit sieve model has been used to develop a general correlation between the average pore size of the upstream surface of a membrane and the molecular weight of the solute which it retains by better than 80%. The pore size is determined by means of the correlation using the high retention data from an ultrafiltration (UF) or a reverse osmosis (RO) experiment. The pore population density can also be calculated from the flux data via appropriate equations.
Influence diagnostics in meta-regression model.
Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua
2017-09-01
This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and the heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residuals and leverage measures are defined. Local influence analyses based on the case-weights, response, covariate, and within-variance perturbation schemes are explored. We introduce a method that simultaneously perturbs the responses, covariates, and within-variances to obtain a local influence measure, which has the advantage of allowing the influence magnitudes of influential studies to be compared across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
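The DerSimonian and Laird step referenced above can be sketched for the simplest intercept-only case (no covariates and no deletion diagnostics); the effect sizes and within-study variances below are invented for illustration.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird estimate of the heterogeneity variance tau^2
    and the resulting random-effects pooled estimate.
    y: study effect sizes; v: within-study variances."""
    w = 1.0 / v                        # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)   # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c) # truncated at zero when Q < k - 1
    w_re = 1.0 / (v + tau2)            # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return tau2, mu_re

y = np.array([0.3, -0.2, 0.8, 0.5])    # hypothetical effect sizes
v = np.array([0.04, 0.05, 0.03, 0.06])
tau2, mu = dersimonian_laird(y, v)
```

Note the truncation at zero: when Q falls below its degrees of freedom, the method-of-moments estimate is negative and is clipped, and the random-effects estimate then coincides with the fixed-effect one.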
Wang, Mingming; Sweetapple, Chris; Fu, Guangtao; Farmani, Raziyeh; Butler, David
2017-10-01
This paper presents a new framework for decision making in sustainable drainage system (SuDS) scheme design. It integrates resilience, hydraulic performance, pollution control, rainwater usage, energy analysis, greenhouse gas (GHG) emissions and costs, and has 12 indicators. The multi-criteria analysis methods of entropy weight and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were selected to support SuDS scheme selection. The effectiveness of the framework is demonstrated with a SuDS case in China. Indicators used include flood volume, flood duration, a hydraulic performance indicator, cost and resilience. Resilience is an important design consideration, and it supports scheme selection in the case study. The proposed framework will help a decision maker to choose an appropriate design scheme for implementation without subjectivity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
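A minimal sketch of pixel-based recall and precision with a per-pixel weighting hook; the weight maps here are generic placeholders, not the paper's specific bias-diminishing weighting scheme.

```python
import numpy as np

def weighted_recall_precision(gt, result, w_gt=None, w_res=None):
    """Pixel-based recall/precision with optional per-pixel weights.
    gt, result: binary arrays (nonzero = text pixel). w_gt weights
    ground-truth pixels (affects recall); w_res weights detected
    pixels (affects precision). Uniform weights reproduce the
    classical definitions."""
    gt, result = gt.astype(bool), result.astype(bool)
    if w_gt is None:
        w_gt = np.ones(gt.shape)
    if w_res is None:
        w_res = np.ones(gt.shape)
    recall = np.sum(w_gt * (gt & result)) / np.sum(w_gt * gt)
    precision = np.sum(w_res * (gt & result)) / np.sum(w_res * result)
    return recall, precision

gt = np.array([[1, 1, 0], [0, 1, 0]])
res = np.array([[1, 0, 0], [0, 1, 1]])
r, p = weighted_recall_precision(gt, res)      # uniform weights: 2/3, 2/3
w = np.array([[2, 1, 1], [1, 1, 1]])           # emphasize one GT pixel
r_w, _ = weighted_recall_precision(gt, res, w_gt=w)
```

In an actual evaluation the weight maps would be derived from the ground truth (e.g. down-weighting pixels prone to annotation ambiguity), which is where a scheme like the paper's removes evaluation bias.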
Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang
2017-12-01
In this paper, a bi-orthogonal symbol mapping and detection scheme is investigated for a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code, whose out-of-phase autocorrelation is zero, is exploited as the signature sequence. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed bi-orthogonal symbol mapping and detection scheme can be developed. The transmitted binary data bits are mapped into the corresponding bi-orthogonal symbols, for which the orthogonal matrix code and its complement are utilized. At the receiver, the received bi-orthogonal data symbol is fed into a maximum likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme greatly enlarges the Euclidean distance; hence, the system performance can be drastically improved.
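The mapping and ML-detection idea can be sketched with a Sylvester Hadamard matrix standing in for the equal-weight orthogonal matrix code. This is only an illustration of the symbol geometry: the paper's code is built from carrier-hopping prime sequences and real detection operates on optical intensities, whereas this is a bipolar baseband sketch with invented noise.

```python
import numpy as np

def sylvester_hadamard(n):
    """n x n Hadamard matrix, n a power of two (Sylvester construction)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def encode(sym, H):
    """Bi-orthogonal mapping: symbols 0..n-1 -> rows of H,
    symbols n..2n-1 -> complemented rows."""
    n = H.shape[0]
    return H[sym] if sym < n else -H[sym - n]

def ml_detect(r, H):
    """ML detection: choose the bi-orthogonal symbol whose codeword
    correlates most strongly (in magnitude and sign) with r."""
    corr = H @ r
    i = int(np.argmax(np.abs(corr)))
    return i if corr[i] >= 0 else i + H.shape[0]

H = sylvester_hadamard(8)              # 16 bi-orthogonal symbols -> 4 bits/symbol
tx = encode(11, H)
rng = np.random.default_rng(0)
rx = tx + 0.3 * rng.standard_normal(8) # additive-noise channel
sym = ml_detect(rx, H)                 # recovers 11
```

With n orthogonal rows and their complements, each symbol carries log2(2n) bits, and the minimum Euclidean distance is larger than for on-off bit-by-bit signaling at the same energy.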
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
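In one dimension the arc-equidistribution idea reduces to a short, iteration-free computation, which is what makes the algebraic scheme cheap. The sketch below uses a generic weight w = sqrt(1 + (alpha f')^2) and omits the user-prescribed stretching and smoothing steps described above.

```python
import numpy as np

def adapt_grid_1d(x, f, alpha=1.0):
    """Algebraic 1-D grid adaptation by arc equidistribution:
    redistribute the points so each interval carries an equal share
    of the weight w = sqrt(1 + (alpha * df/dx)^2)."""
    dfdx = np.gradient(f, x)
    w = np.sqrt(1.0 + (alpha * dfdx) ** 2)
    # cumulative weighted arc length (trapezoidal), normalized to [0, 1]
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s /= s[-1]
    # invert the monotone map s(x) at equal increments of s
    return np.interp(np.linspace(0.0, 1.0, len(x)), s, x)

x = np.linspace(0.0, 1.0, 101)
f = np.tanh(20.0 * (x - 0.5))     # sharp gradient near x = 0.5
xn = adapt_grid_1d(x, f, alpha=5.0)
```

The adapted grid clusters points in the high-gradient region and coarsens elsewhere, with no iteration or differential-equation solve involved.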
Ginzburg, Irina
2017-01-01
Impact of the unphysical tangential advective-diffusion constraint of the bounce-back (BB) reflection on the impermeable solid surface is examined for the first four moments of concentration. Despite the number of recent improvements for the Neumann condition in the lattice Boltzmann method-advection-diffusion equation, the BB rule remains the only known local mass-conserving no-flux condition suitable for staircase porous geometry. We examine the closure relation of the BB rule in straight channel and cylindrical capillary analytically, and show that it excites the Knudsen-type boundary layers in the nonequilibrium solution for full-weight equilibrium stencil. Although the d2Q5 and d3Q7 coordinate schemes are sufficient for the modeling of isotropic diffusion, the full-weight stencils are appealing for their advanced stability, isotropy, anisotropy and anti-numerical-diffusion ability. The boundary layers are not covered by the Chapman-Enskog expansion around the expected equilibrium, but they accommodate the Chapman-Enskog expansion in the bulk with the closure relation of the bounce-back rule. We show that the induced boundary layers introduce first-order errors in two primary transport properties, namely, mean velocity (first moment) and molecular diffusion coefficient (second moment). As a side effect, the Taylor-dispersion coefficient (second moment), skewness (third moment), and kurtosis (fourth moment) deviate from their physical values and predictions of the fourth-order Chapman-Enskog analysis, even though the kurtosis error in pure diffusion does not depend on grid resolution. In two- and three-dimensional grid-aligned channels and open-tubular conduits, the errors of velocity and diffusion are proportional to the diagonal weight values of the corresponding equilibrium terms. The d2Q5 and d3Q7 schemes do not suffer from this deficiency in grid-aligned geometries but they cannot avoid it if the boundaries are not parallel to the coordinate lines. 
In order to eliminate or attenuate the disparity of the modeled transport coefficients with the equilibrium weights without any modification of the BB rule, we propose to use the two-relaxation-times collision operator with a freely tunable product of two eigenfunctions Λ. Two different values Λ_{v} and Λ_{b} are assigned for bulk and boundary nodes, respectively. The rationale behind this is that Λ_{v} is adjustable for stability, accuracy, or other purposes, while the corresponding Λ_{b}(Λ_{v}) controls the primary accommodation effects. Two distinct but similar functional relations Λ_{b}(Λ_{v}) are constructed analytically: they preserve advection velocity in parabolic profile, exactly in the two-dimensional channel and very accurately in a three-dimensional cylindrical capillary. For any velocity-weight stencil, the (local) double-Λ BB scheme produces quasi-identical solutions with the (nonlocal) specular-forward reflection for first four moments in a channel. In a capillary, this strategy allows for the accurate modeling of the Taylor-dispersion and non-Gaussian effects. As an illustrative example, it is shown that in the flow around a circular obstacle, the double-Λ scheme may also eliminate the dependency of mean velocity on the velocity weight; the required value for Λ_{b}(Λ_{v}) can be identified in a few bisection iterations in a given geometry. A positive solution for Λ_{b}(Λ_{v}) may not exist in pure diffusion, but a sufficiently small value of Λ_{b} significantly reduces the disparity in diffusion coefficient with the mass weight in ducts and in the presence of rectangular obstacles. Although Λ_{b} also controls the effective position of straight or curved boundaries, the double-Λ scheme deals with the lower-order effects. 
Its idea and construction may help understanding and amelioration of the anomalous, zero- and first-order behavior of the macroscopic solution in the presence of the bulk and boundary or interface discontinuities, commonly found in multiphase flow and heterogeneous transport.
Assessment strategies for municipal selective waste collection schemes.
Ferreira, Fátima; Avelino, Catarina; Bentes, Isabel; Matos, Cristina; Teixeira, Carlos Afonso
2017-01-01
An important strategy to promote strong sustainable growth relies on efficient municipal waste management, and phasing out waste landfilling through waste prevention and recycling emerges as a major target. For this purpose, effective collection schemes are required, in particular those regarding selective waste collection, pursuing more efficient and higher-quality recycling of reusable materials. This paper addresses the assessment and benchmarking of selective collection schemes, relevant to guide future operational improvements. In particular, the assessment is based on the monitoring and statistical analysis of a core set of performance indicators that highlights collection trends, complemented with a performance index built from a weighted linear combination of these indicators. This combined analysis provides a potential tool to support decision makers involved in the process of selecting the collection scheme with the best overall performance. The presented approach was applied to a case study conducted in Oporto Municipality, with data gathered from two distinct selective collection schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
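Entropy weighting and TOPSIS, as used in the framework of the preceding record on SuDS scheme design, are standard multi-criteria methods; a compact sketch with hypothetical indicator values (all treated as benefit criteria) is below, not the paper's actual indicator data.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: indicators with more dispersion across the
    alternatives receive larger weights. X: alternatives x indicators,
    all entries positive."""
    P = X / X.sum(axis=0)
    E = -np.sum(P * np.log(P), axis=0) / np.log(X.shape[0])
    d = 1.0 - E                     # degree of diversification
    return d / d.sum()

def topsis(X, w):
    """TOPSIS scores: relative closeness to the ideal solution
    (benefit indicators only in this sketch)."""
    V = w * X / np.linalg.norm(X, axis=0)    # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# three candidate schemes x four benefit indicators (invented values)
X = np.array([[0.8, 0.6, 0.9, 0.7],
              [0.6, 0.9, 0.5, 0.8],
              [0.9, 0.7, 0.8, 0.9]])
w = entropy_weights(X)
scores = topsis(X, w)               # higher score = closer to ideal
```

Cost-type indicators would first be inverted or min-max normalized so that larger is better; the entropy step requires strictly positive entries.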
Adaptive neural network motion control of manipulators with experimental evaluations.
Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V
2014-01-01
A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each one of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller.
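A rough single-link simulation of the idea: PD control plus an adaptive network whose weights follow a gradient-type law driven by a filtered tracking error. For brevity this uses a one-layer radial-basis network rather than the paper's two-layer network, and all gains and model parameters are invented.

```python
import numpy as np

# 1-DOF arm: m*l^2*qdd + b*qd + m*g*l*sin(q) = tau (invented parameters)
m, l, b, g = 1.0, 0.5, 0.4, 9.81

# radial-basis features over the state (q, qd); a one-layer network
# stands in here for the paper's two-layer network
C = np.array(np.meshgrid(np.linspace(-2, 2, 5),
                         np.linspace(-2, 2, 5))).reshape(2, -1)

def sigma(q, qd):
    return np.exp(-((q - C[0]) ** 2 + (qd - C[1]) ** 2))

Kp, Kd, lam, Gamma = 25.0, 10.0, 5.0, 2.0
W = np.zeros(C.shape[1])             # adaptive NN weights
q, qd, dt = 0.0, 0.0, 1e-3
err = []
for k in range(10000):               # 10 s of simulated time, explicit Euler
    t = k * dt
    qdes, qddes = np.sin(t), np.cos(t)
    e, de = qdes - q, qddes - qd
    r = de + lam * e                 # filtered tracking error
    s = sigma(q, qd)
    tau = Kp * e + Kd * de + W @ s   # PD action + NN compensation
    W += dt * Gamma * s * r          # gradient-type adaptation law
    qdd = (tau - b * qd - m * g * l * np.sin(q)) / (m * l ** 2)
    qd += dt * qdd
    q += dt * qd
    err.append(abs(e))
```

The network gradually absorbs the gravity and friction torques that the PD terms alone must otherwise fight, which is the mechanism behind the improved tracking reported in the experiments.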
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glover, W. J., E-mail: williamjglover@gmail.com
2014-11-07
State averaged complete active space self-consistent field (SA-CASSCF) is a workhorse for determining the excited-state electronic structure of molecules, particularly for states with multireference character; however, the method suffers from known issues that have prevented its wider adoption. One issue is the presence of discontinuities in potential energy surfaces when a state that is not included in the state averaging crosses with one that is. In this communication I introduce a new dynamical weight with spline (DWS) scheme that mimics SA-CASSCF while removing energy discontinuities due to unweighted state crossings. In addition, analytical gradients for DWS-CASSCF (and other dynamically weighted schemes) are derived for the first time, enabling energy-conserving excited-state ab initio molecular dynamics in instances where SA-CASSCF fails.
Effects of a Simple Convective Organization Scheme in a Two-Plume GCM
NASA Astrophysics Data System (ADS)
Chen, Baohua; Mapes, Brian E.
2018-03-01
A set of experiments is described with the Community Atmosphere Model (CAM5) using a two-plume convection scheme. To represent the differences of organized convection from General Circulation Model (GCM) assumptions of isolated plumes in uniform environments, a dimensionless prognostic "organization" tracer Ω is invoked to lend the second plume a buoyancy advantage relative to the first, as described in Mapes and Neale (2016). When low-entrainment plumes are unconditionally available (Ω = 1 everywhere), deep convection occurs too easily, with consequences including premature (upstream) rainfall in inflows to the deep tropics, excessive convective versus large-scale rainfall, poor relationships to the vapor field, stable bias in the mean state, weak and poor tropical variability, and midday peak in diurnal rainfall over land. Some of these are shown to also be characteristic of CAM4 with its separated deep and shallow convection schemes. When low-entrainment plumes are forbidden by setting Ω = 0 everywhere, some opposite problems can be discerned. In between those extreme cases, an interactive Ω driven by the evaporation of precipitation acts as a local positive feedback loop, concentrating deep convection: In areas of little recent rain, only highly entraining plumes can occur, unfavorable for rain production. This tunable mechanism steadily increases precipitation variance in both space and time, as illustrated here with maps, time-longitude series, and spectra, while avoiding some mean state biases as illustrated with process-oriented diagnostics such as conserved variable profiles and vapor-binned precipitation curves.
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio
1995-01-01
The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
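The weighted least-squares rectification can be posed as an equality-constrained minimization: stay close to the approximate factors in a weighted norm while enforcing closure and reciprocity exactly. A small KKT-based sketch, with invented areas and low-accuracy view factors, is below; it is a generic formulation of this class of algorithms, not necessarily the paper's exact scheme.

```python
import numpy as np

def rectify_view_factors(F0, A, W=None):
    """Find the view-factor matrix F closest to the approximate F0
    (weighted 2-norm) satisfying reciprocity A_i F_ij = A_j F_ji and
    closure sum_j F_ij = 1. W: per-element weights (e.g. inverse
    variances); uniform if None."""
    n = F0.shape[0]
    f0 = F0.ravel()
    w = np.ones(n * n) if W is None else W.ravel()
    rows, d = [], []
    for i in range(n):                    # closure: each row sums to 1
        c = np.zeros(n * n)
        c[i * n:(i + 1) * n] = 1.0
        rows.append(c); d.append(1.0)
    for i in range(n):                    # reciprocity constraints
        for j in range(i + 1, n):
            c = np.zeros(n * n)
            c[i * n + j] = A[i]
            c[j * n + i] = -A[j]
            rows.append(c); d.append(0.0)
    C, d = np.array(rows), np.array(d)
    # KKT system of the equality-constrained weighted least squares
    K = np.block([[2.0 * np.diag(w), C.T],
                  [C, np.zeros((C.shape[0], C.shape[0]))]])
    rhs = np.concatenate([2.0 * w * f0, d])
    return np.linalg.solve(K, rhs)[:n * n].reshape(n, n)

A = np.array([1.0, 2.0, 3.0])             # surface areas (invented)
F0 = np.array([[0.00, 0.42, 0.61],        # low-accuracy "measured" factors
               [0.19, 0.00, 0.80],
               [0.21, 0.55, 0.05]])
F = rectify_view_factors(F0, A)
```

Choosing W from view-factor uncertainty estimates is what distinguishes the weighted variant, which the paper finds vastly superior for reducing heat-flux sensitivity.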
Autonomous learning by simple dynamical systems with delayed feedback.
Kaluza, Pablo; Mikhailov, Alexander S
2014-09-01
A general scheme for the construction of dynamical systems able to learn generation of the desired kinds of dynamics through adjustment of their internal structure is proposed. The scheme involves intrinsic time-delayed feedback to steer the dynamics towards the target performance. As an example, a system of coupled phase oscillators, which can, by changing the weights of connections between its elements, evolve to a dynamical state with the prescribed (low or high) synchronization level, is considered and investigated.
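A loose sketch of the idea (not the authors' exact equations): weighted Kuramoto oscillators whose connection weights drift in a random direction, with a delayed comparison of the synchronization error deciding whether to keep or re-draw the drift direction, and a drift speed that vanishes as the target level is approached.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10
omega = rng.normal(0.0, 0.1, N)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)
w = rng.normal(0.5, 0.1, (N, N))           # adjustable coupling weights
np.fill_diagonal(w, 0.0)

def order_param(theta):
    return abs(np.mean(np.exp(1j * theta)))  # synchronization level R

R_target = 0.3                             # prescribed synchronization level
dt, tau_steps = 0.05, 40                   # integration step, feedback delay
v = rng.normal(0.0, 0.02, (N, N))          # current drift direction of weights
eps_hist = []
for k in range(8000):
    # weighted Kuramoto dynamics
    coupling = (w * np.sin(theta[None, :] - theta[:, None])).mean(axis=1)
    theta = theta + dt * (omega + coupling)
    eps = abs(order_param(theta) - R_target)
    eps_hist.append(eps)
    # delayed feedback: if the error did not improve relative to its
    # delayed value, re-draw the drift direction of the weights
    if k >= tau_steps and eps > eps_hist[k - tau_steps]:
        v = rng.normal(0.0, 0.02, (N, N))
    w += dt * eps * v                      # drift speed vanishes as eps -> 0
    np.fill_diagonal(w, 0.0)
```

The essential ingredients from the abstract are all present: the learning is autonomous (no external teacher), the steering signal is an intrinsic time-delayed comparison, and the structural drift freezes once the prescribed synchronization level is reached.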
A Hybrid OFDM-TDM Architecture with Decentralized Dynamic Bandwidth Allocation for PONs
Cevik, Taner
2013-01-01
One of the major challenges of passive optical networks is to achieve a fair arbitration mechanism that will prevent possible collisions from occurring at the upstream channel when multiple users attempt to access the common fiber at the same time. Therefore, in this study we mainly focus on fair bandwidth allocation among users, and present a hybrid Orthogonal Frequency Division Multiplexed/Time Division Multiplexed architecture with a dynamic bandwidth allocation scheme that provides satisfying service qualities to the users depending on their varying bandwidth requirements. Unnecessary delays in centralized schemes occurring during bandwidth assignment stage are eliminated by utilizing a decentralized approach. Instead of sending bandwidth demands to the optical line terminal (OLT) which is the only competent authority, each optical network unit (ONU) runs the same bandwidth demand determination algorithm. ONUs inform each other via signaling channel about the status of their queues. This information is fed to the bandwidth determination algorithm which is run by each ONU in a distributed manner. Furthermore, Light Load Penalty, which is a phenomenon in optical communications, is mitigated by limiting the amount of bandwidth that an ONU can demand. PMID:24194684
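The decentralized step can be sketched as a deterministic allocation function that every ONU evaluates on the same broadcast queue states, so all ONUs compute identical, collision-free grants without a round trip to the OLT. The demand cap crudely mimics the light-load-penalty mitigation; the numbers and the max-min-style redistribution are illustrative, not the paper's algorithm.

```python
def decentralized_allocation(queue_lengths, cycle_capacity, max_share):
    """Grant computation run identically by every ONU on the queue
    states shared over the signaling channel. Demands are capped at
    max_share, then unused capacity is redistributed fairly among
    ONUs whose demand is not yet satisfied."""
    demands = [min(q, max_share) for q in queue_lengths]
    grants = [0.0] * len(demands)
    remaining = float(cycle_capacity)
    active = [i for i, d in enumerate(demands) if d > 0]
    while active and remaining > 0:
        share = remaining / len(active)
        progressed = False
        for i in list(active):
            give = min(share, demands[i] - grants[i])
            if give > 0:
                grants[i] += give
                remaining -= give
                progressed = True
            if grants[i] >= demands[i]:
                active.remove(i)
        if not progressed:
            break
    return grants

# queue states (bytes) of four ONUs, one cycle of 1000 units
g = decentralized_allocation([500, 100, 0, 900], cycle_capacity=1000, max_share=600)
```

Because every ONU evaluates the same pure function on the same inputs, the grant schedule is consistent across the PON by construction, which is what removes the OLT round-trip delay.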
Wang, Yiqun; Pei, Li; Li, Jing; Li, Yueqin
2017-06-10
A full-duplex radio-over-fiber system is proposed, which provides both the generation of a millimeter-wave (mm-wave) signal with tunable frequency multiplication factors (FMFs) and wavelength reuse for uplink data. A dual-driving Mach-Zehnder modulator and a phase modulator are cascaded to form an optical frequency comb. An acousto-optic tunable filter based on a uniform fiber Bragg grating (FBG-AOTF) is employed to select three target optical sidebands. Two symmetrical sidebands are chosen to generate mm waves with tunable FMFs up to 16, which can be adjusted by changing the frequency of the applied acoustic wave. The optical carrier is reused at the base station for uplink connection. FBG-AOTFs driven by two acoustic wave signals are experimentally fabricated and further applied in the proposed scheme. Results of the research indicate that the 2-Gbit/s data can be successfully transmitted over a 25-km single-mode fiber for bidirectional full-duplex channels with power penalty of less than 2.6 dB. The feasibility of the proposed scheme is verified by detailed simulations and partial experiments.
Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun
1997-01-01
Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength is employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.
Network community-detection enhancement by proper weighting
NASA Astrophysics Data System (ADS)
Khadivi, Alireza; Ajdari Rad, Ali; Hasler, Martin
2011-04-01
In this paper, we show how proper assignment of weights to the edges of a complex network can enhance the detection of communities and how it can circumvent the resolution limit and the extreme degeneracy problems associated with modularity. Our general weighting scheme takes advantage of graph-theoretic measures and introduces two heuristics for tuning its parameters. We use this weighting as a preprocessing step for the greedy modularity optimization algorithm of Newman to improve its performance. The results of experiments with our approach on computer-generated and real-world networks confirm that the proposed approach not only mitigates the problems of modularity but also improves the modularity optimization.
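One simple graph-theoretic weighting of this flavor (not necessarily the paper's exact scheme) is to weight each edge by the number of common neighbors of its endpoints, which boosts edges inside dense, likely intra-community regions before the weighted graph is handed to a modularity optimizer such as Newman's greedy algorithm:

```python
from itertools import combinations

def common_neighbor_weights(adj):
    """adj: dict node -> set of neighbors. Returns {(u, v): weight}
    with each edge weighted by 1 + |common neighbors of u and v|,
    emphasizing edges embedded in dense neighborhoods."""
    weights = {}
    for u in adj:
        for v in adj[u]:
            if u < v:
                weights[(u, v)] = 1.0 + len(adj[u] & adj[v])
    return weights

# two 4-cliques joined by a single bridge edge (3, 4)
adj = {i: set() for i in range(8)}
for group in (range(0, 4), range(4, 8)):
    for u, v in combinations(group, 2):
        adj[u].add(v)
        adj[v].add(u)
adj[3].add(4)
adj[4].add(3)

w = common_neighbor_weights(adj)
```

The bridge edge gets the minimum weight while intra-clique edges are boosted, so a weighted modularity optimizer is less likely to merge the two communities across the bridge.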
Fitzsimons, J.D.; Williston, B.; Amcoff, P.; Balk, L.; Pecor, C.; Ketola, H.G.; Hinterkopf, J.P.; Honeyfield, D.C.
2005-01-01
A diet containing a high proportion of alewives Alosa pseudoharengus results in a thiamine deficiency that has been associated with high larval salmonid mortality, known as early mortality syndrome (EMS), but relatively little is known about the effects of the deficiency on adults. Using thiamine injection (50 mg thiamine/kg body weight) of ascending adult female coho salmon Oncorhynchus kisutch on the Platte River, Michigan, we investigated the effects of thiamine supplementation on migration, adult survival, and thiamine status. The thiamine concentrations of eggs, muscle (red and white), spleen, kidney (head and trunk), and liver and the transketolase activity of the liver, head kidney, and trunk kidney of fish injected with thiamine dissolved in physiological saline (PST) or physiological saline only (PS) were compared with those of uninjected fish. The injection did not affect the number of fish making the 15-km upstream migration to a collection weir but did affect survival once fish reached the upstream weir, where survival of PST-injected fish was almost twice that of controls. The egg and liver thiamine concentrations in PS fish sampled after their upstream migration were significantly lower than those of uninjected fish collected at the downstream weir, but the white muscle thiamine concentration did not differ between the two groups. At the upper weir, thiamine levels in the liver, spleen, head kidney, and trunk kidney of PS fish were indistinguishable from those of uninjected fish (called "wigglers") suffering from a severe deficiency and exhibiting reduced equilibrium, a stage that precedes total loss of equilibrium and death. For PST fish collected at the upstream weir, total thiamine levels in all tissues were significantly elevated over those of PS fish. Based on the limited number of tissues examined, thiamine status was indicated better by tissue thiamine concentration than by transketolase activity. 
The adult injection method we used appears to be a more effective means of increasing egg thiamine levels than immersion of eggs in a thiamine solution. © Copyright by the American Fisheries Society 2005.
A two‐point scheme for optimal breast IMRT treatment planning
2013-01-01
We propose an approach to determining optimal beam weights in breast/chest wall IMRT treatment plans. The goal is to decrease breathing effect and to maximize skin dose if the skin is included in the target or, otherwise, to minimize the skin dose. Two points in the target are utilized to calculate the optimal weights. The optimal plan (i.e., the plan with optimal beam weights) consists of high energy unblocked beams, low energy unblocked beams, and IMRT beams. Six breast and five chest wall cases were retrospectively planned with this scheme in Eclipse, including one breast case where CTV was contoured by the physician. Compared with 3D CRT plans composed of unblocked and field‐in‐field beams, the optimal plans demonstrated comparable or better dose uniformity, homogeneity, and conformity to the target, especially at beam junction when supraclavicular nodes are involved. Compared with nonoptimal plans (i.e., plans with nonoptimized weights), the optimal plans had better dose distributions at shallow depths close to the skin, especially in cases where breathing effect was taken into account. This was verified with experiments using a MapCHECK device attached to a motion simulation table (to mimic motion caused by breathing). PACS number: 87.55 de PMID:24257291
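The two-point idea reduces to a 2x2 linear solve: pick one shallow and one deep target point, tabulate the dose each candidate beam delivers there per unit weight, and solve for the weights that hit the prescription at both points. The dose-per-weight numbers below are invented for illustration; in practice they come from the treatment planning system.

```python
import numpy as np

def two_point_weights(D, prescription):
    """Solve for beam weights from dose contributions at two target
    points. D[i][j]: dose to point i per unit weight of beam j.
    Returns weights w such that D @ w matches the prescription at
    both points."""
    return np.linalg.solve(D, prescription)

# point 1 shallow (near skin), point 2 deep; beam 1 low-energy open,
# beam 2 high-energy open (illustrative dose-per-weight values)
D = np.array([[1.10, 0.85],
              [0.90, 1.05]])
w = two_point_weights(D, np.array([2.0, 2.0]))
```

If a solution comes out with a negative weight, the chosen beam pair cannot satisfy both points simultaneously and a different energy combination (or the IMRT component) must take up the difference.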
Truncation-based energy weighting string method for efficiently resolving small energy barriers
NASA Astrophysics Data System (ADS)
Carilli, Michael F.; Delaney, Kris T.; Fredrickson, Glenn H.
2015-08-01
The string method is a useful numerical technique for resolving minimum energy paths in rare-event barrier-crossing problems. However, when applied to systems with relatively small energy barriers, the string method becomes inconvenient since many images trace out physically uninteresting regions where the barrier has already been crossed and recrossing is unlikely. Energy weighting alleviates this difficulty to an extent, but typical implementations still require the string's endpoints to evolve to stable states that may be far from the barrier, and deciding upon a suitable energy weighting scheme can be an iterative process dependent on both the application and the number of images used. A second difficulty arises when treating nucleation problems: for later images along the string, the nucleus grows to fill the computational domain. These later images are unphysical due to confinement effects and must be discarded. In both cases, computational resources associated with unphysical or uninteresting images are wasted. We present a new energy weighting scheme that eliminates all of the above difficulties by actively truncating the string as it evolves and forcing all images, including the endpoints, to remain within and cover uniformly a desired barrier region. The calculation can proceed in one step without iterating on strategy, requiring only an estimate of an energy value below which images become uninteresting.
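For orientation, a minimal zero-temperature string method on a 2-D double-well potential is sketched below, with plain equal-arc-length reparametrization; the paper's contribution (energy weighting with active truncation of the string) is precisely what this baseline lacks, so all images, including the uninteresting post-barrier ones, are retained here.

```python
import numpy as np

def V(p):
    """Double-well potential (x^2 - 1)^2 + y^2; barrier height 1 at origin."""
    return (p[..., 0] ** 2 - 1) ** 2 + p[..., 1] ** 2

def gradV(p):
    g = np.empty_like(p)
    g[..., 0] = 4 * p[..., 0] * (p[..., 0] ** 2 - 1)
    g[..., 1] = 2 * p[..., 1]
    return g

def string_method(p, n_iter=2000, dt=0.01):
    """Simplified zero-temperature string method: relax the interior
    images by gradient descent, then reparametrize the string to
    equal arc length; endpoints stay at the two minima."""
    p = p.copy()
    for _ in range(n_iter):
        p[1:-1] -= dt * gradV(p[1:-1])
        seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
        s /= s[-1]
        snew = np.linspace(0.0, 1.0, len(p))         # equal-arc-length targets
        p = np.column_stack([np.interp(snew, s, p[:, k]) for k in range(2)])
    return p

t = np.linspace(0.0, 1.0, 21)
p0 = np.column_stack([-1 + 2 * t, 0.3 * np.sin(np.pi * t)])  # bowed initial string
path = string_method(p0)      # converges to the MEP through the saddle (0, 0)
```

An energy-weighted variant would replace the uniform targets `snew` with a density concentrated near high-energy images; the paper's truncation scheme additionally drops images past the barrier region entirely.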
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
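The paper's specific WLLS estimator is not reproduced here; the sketch below only illustrates the generic subtraction idea: estimate the per-voxel noise variance from the difference of two independent acquisitions and subtract the resulting Rician bias (E[S^2] ~ A^2 + 2*sigma^2) from the mean squared signal. The signal level, noise level, and voxel count are assumed values chosen so that the SNR is moderate.

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, n_vox = 2.0, 0.5, 200_000   # true signal, noise sd, voxels (SNR = 4)

def acquire():
    # Magnitude image: Gaussian noise on the real and imaginary channels.
    return np.hypot(A + sigma*rng.standard_normal(n_vox),
                    sigma*rng.standard_normal(n_vox))

S1, S2 = acquire(), acquire()

# Noise variance from the difference of two independent acquisitions:
# Var(S1 - S2) ~ 2*sigma^2 at moderate-to-high SNR.
sigma2_hat = 0.5*np.var(S1 - S2)

mean_sq = np.mean(0.5*(S1**2 + S2**2))
raw = np.sqrt(mean_sq)                      # biased upward by the noise floor
corrected = np.sqrt(mean_sq - 2*sigma2_hat) # bias-subtracted estimate
```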
Weighted divergence correction scheme and its fast implementation
NASA Astrophysics Data System (ADS)
Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun
2017-05-01
Forcing experimental volumetric velocity fields to satisfy mass conservation principles has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, which shows that WDCS outperforms DCS in improving some flow statistics.
Yan, Fei; Christmas, William; Kittler, Josef
2008-10-01
In this paper, we propose a multilayered data association scheme with a graph-theoretic formulation for tracking multiple objects that undergo switching dynamics in clutter. The proposed scheme takes as input object candidates detected in each frame. At the object candidate level, "tracklets" are grown from sets of candidates that have high probabilities of containing only true positives. At the tracklet level, a directed and weighted graph is constructed, where each node is a tracklet and the edge weight between two nodes is defined according to the compatibility of the two tracklets. The association problem is then formulated as an all-pairs shortest path (APSP) problem in this graph. Finally, at the path level, by analyzing the APSPs, all object trajectories are identified, and track initiation and termination are handled automatically. By exploiting a special topological property of the graph, we have also developed an APSP algorithm that is more efficient than general-purpose ones. The proposed data association scheme is applied to tennis sequences to track tennis balls. Experiments show that it works well on sequences where other data association methods perform poorly or fail completely.
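The tracklet-graph stage can be sketched with a generic all-pairs shortest path computation (the paper develops a faster APSP variant exploiting the graph's topology, which is not reproduced here); the tracklet ids and compatibility costs below are hypothetical.

```python
from math import inf

# Hypothetical tracklets and directed "compatibility" costs between tracklets
# that could belong to the same trajectory (lower cost = more compatible).
edges = {('a', 'b'): 1.0, ('b', 'd'): 2.0, ('a', 'c'): 2.5, ('c', 'd'): 1.0}
nodes = ['a', 'b', 'c', 'd']

# Floyd-Warshall all-pairs shortest paths with predecessor recovery.
dist = {(i, j): (0.0 if i == j else edges.get((i, j), inf))
        for i in nodes for j in nodes}
pred = {(i, j): i for i in nodes for j in nodes if i != j and dist[(i, j)] < inf}
for k in nodes:
    for i in nodes:
        for j in nodes:
            if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
                pred[(i, j)] = pred[(k, j)]

def path(i, j):
    # Walk predecessors back from j to i to recover the trajectory.
    p = [j]
    while p[-1] != i:
        p.append(pred[(i, p[-1])])
    return p[::-1]
```

Here the best chain from tracklet 'a' to 'd' goes through 'b' (total cost 3.0), beating the alternative through 'c' (3.5).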
On the use of transition matrix methods with extended ensembles.
Escobedo, Fernando A; Abreu, Charlles R A
2006-03-14
Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate lambda were formulated so that they employ (i) "variable" sampling window schemes (including the "successive umbrella sampling" method) to comprehensively explore the lambda domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy eta landscape (or "importance" weights) associated with lambda. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating eta(lambda). The validity and performance of the different NBS schemes were then assessed using the configurational energy of the Lennard-Jones fluid as the lambda coordinate. For the cases studied, it was found that the convergence rate in the estimation of eta is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader sampling window in the variable window methods. Finally, it is shown how an "elastic" sampling window can be used to effectively enact (nonuniform) preferential sampling over the lambda domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an eta surface over a two-dimensional domain.
Jagannathan, Sarangapani; He, Pingan
2008-12-01
In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller with the tracking error as the feedback variable, whereas in the adaptive critic NN architecture, three feedforward NNs are used. In the adaptive critic architecture, two action NNs produce the virtual and actual control inputs, respectively, whereas the third, critic NN approximates a strategic utility function, and its output is employed for tuning the action NN weights in order to attain the near-optimal control action. Both NN control methods present a well-defined controller design, and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase, and the NN weights can be initialized at zero or randomly. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting closed-loop stability.
Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks
Tham, Mau-Luen
2016-01-01
This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to playback delay and resource constraints, by exploiting multiuser diversity and the video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error-rate (BER), the importance level of each packet, and a resource consumption efficiency factor. The lower level then renders unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum distortion minimization problem, where each user weight reflects the total importance level of the packets that have been scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak signal-to-noise ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature. PMID:26906398
Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2017-12-01
Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, and it can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K = 0.1 cm/s) sand as slow sand (K = 0.01 cm/s), and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error.
Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce error associated with geometric and harmonic means.
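The comparison of averaging rules can be sketched with hypothetical point samples drawn from the study's two K values; the sample size, draw probabilities, and the Keq value below are illustrative, not the study's grids.

```python
import numpy as np

def rmsd(estimates, k_eq):
    """Root-mean-square deviation of a list of K estimates from Keq."""
    return float(np.sqrt(np.mean((np.asarray(estimates) - k_eq)**2)))

rng = np.random.default_rng(0)
K_FAST, K_SLOW = 0.1, 0.01    # cm/s, the two sands used in the study
k_eq = 0.05                   # hypothetical equivalent homogeneous K

arith, geom, harm = [], [], []
for _ in range(1000):
    # Hypothetical sampling realization: 6 point samples, 2/3 fast sand.
    k = rng.choice([K_FAST, K_SLOW], size=6, p=[2/3, 1/3])
    arith.append(float(k.mean()))
    geom.append(float(np.exp(np.log(k).mean())))
    harm.append(float(1.0/np.mean(1.0/k)))
```

For positive samples the three means always satisfy harmonic <= geometric <= arithmetic, which is why the choice of averaging rule systematically shifts the estimate relative to Keq.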
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochnow, Bo; O'Reilly, Ossian; Dunham, Eric M.
In this paper, we develop a high-order finite difference scheme for axisymmetric wave propagation in a cylindrical conduit filled with a viscous fluid. The scheme is provably stable, and overcomes the difficulty of the polar coordinate singularity in the radial component of the diffusion operator. The finite difference approximation satisfies the principle of summation-by-parts (SBP), which is used to establish stability using the energy method. To treat the coordinate singularity without losing the SBP property of the scheme, a staggered grid is introduced and quadrature rules with weights set to zero at the endpoints are considered. Finally, the accuracy of the scheme is studied for a model problem with periodic boundary conditions at the ends of the conduit, and its practical utility is demonstrated by modeling acoustic-gravity waves in a magmatic conduit.
Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees
Ren, Hao; Li, Hongwei; Liang, Xiaohui; He, Shibo; Dai, Yuanshun; Zhao, Lian
2016-01-01
With the rapid growth of the health data scale, the limited storage and computation resources of wireless body area sensor networks (WBANs) are becoming a barrier to their development. Therefore, outsourcing the encrypted health data to the cloud has been an appealing strategy. However, data aggregation then becomes difficult. Some recently proposed schemes try to address this problem; however, there are still some functions and privacy issues that are not discussed. In this paper, we propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. Specifically, we achieve a new aggregation function, weighted average (WAAS), and design a privacy-enhanced aggregation scheme (PAAS) to protect the aggregated data from cloud servers. Besides, a histogram aggregation scheme with high accuracy is proposed. PMHA-DP supports fault tolerance while preserving data privacy. The performance evaluation shows that the proposal leads to less communication overhead than the existing one. PMID:27626417
NASA Astrophysics Data System (ADS)
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional order total variation (FOTV) based model with three different weights in the fractional order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes are used, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA). To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial and error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal to noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may exhibit oscillations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed, leading to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and to compare the different approaches.
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
Reliability analysis of the epidural spinal cord compression scale.
Bilsky, Mark H; Laufer, Ilya; Fourney, Daryl R; Groff, Michael; Schmidt, Meic H; Varga, Peter Paul; Vrionis, Frank D; Yamada, Yoshiya; Gerszten, Peter C; Kuklo, Timothy R
2010-09-01
The evolution of imaging techniques, along with highly effective radiation options, has changed the way metastatic epidural tumors are treated. While high-grade epidural spinal cord compression (ESCC) frequently serves as an indication for surgical decompression, no consensus exists in the literature about the precise definition of this term. The advancement of treatment paradigms in patients with metastatic tumors of the spine requires a clear grading scheme of ESCC. The degree of ESCC often serves as a major determinant in the decision to operate or irradiate. The purpose of this study was to determine the reliability and validity of a 6-point, MR imaging-based grading system for ESCC. To determine the reliability of the grading scale, a survey was distributed to 7 spine surgeons who participate in the Spine Oncology Study Group. The MR images of 25 cervical or thoracic spinal tumors were distributed, consisting of 1 sagittal image and 3 axial images at the identical level, including T1-weighted, T2-weighted, and Gd-enhanced T1-weighted images. The survey was administered 3 times at 2-week intervals. The inter- and intrarater reliability was assessed. The inter- and intrarater reliability ranged from good to excellent when surgeons were asked to rate the degree of spinal cord compression using T2-weighted axial images. The T2-weighted images were superior indicators of ESCC compared with T1-weighted images with and without Gd. The ESCC scale provides a valid and reliable instrument that may be used to describe the degree of ESCC based on T2-weighted MR images. This scale accounts for recent advances in the treatment of spinal metastases and may be used to provide an ESCC classification scheme for multicenter clinical trials and outcome studies.
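The abstract does not state which agreement statistic was used; as one hedged illustration, Cohen's linearly weighted kappa is a common choice for quantifying agreement between two raters on an ordinal scale such as the 6-point ESCC grade. The ratings below are hypothetical.

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels=6, scheme='linear'):
    """Cohen's weighted kappa for two raters on an ordinal scale 0..n_levels-1.
    Disagreements are penalized in proportion to their distance on the scale."""
    O = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                     # observed joint proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))       # expected under independence
    d = np.abs(np.subtract.outer(np.arange(n_levels), np.arange(n_levels)))
    W = d/(n_levels - 1) if scheme == 'linear' else (d/(n_levels - 1))**2
    return 1 - np.sum(W*O)/np.sum(W*E)

# Hypothetical ESCC grades (0-5) assigned by two raters.
kappa_perfect = weighted_kappa([0, 1, 2, 3, 4, 5, 2, 3], [0, 1, 2, 3, 4, 5, 2, 3])
kappa_chance = weighted_kappa([0, 0, 0, 0, 5, 5, 5, 5], [0, 5, 0, 5, 0, 5, 0, 5])
```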
Chinese Version of the EQ-5D Preference Weights: Applicability in a Chinese General Population
Wu, Chunmei; Gong, Yanhong; Wu, Jiang; Zhang, Shengchao; Yin, Xiaoxv; Dong, Xiaoxin; Li, Wenzhen; Cao, Shiyi; Mkandawire, Naomie; Lu, Zuxun
2016-01-01
Objectives This study aimed to test the reliability, validity and sensitivity of the Chinese version of the EQ-5D preference weights in the Chinese general population, examine the differences between the China value set and the UK, Japan and Korea value sets, and provide methods for evaluating and comparing the EQ-5D value sets of different countries. Methods A random sample of 2984 community residents (15 years or older) were interviewed using a questionnaire including the EQ-5D scale. Level of agreement, convergent validity, known-groups validity and sensitivity of the EQ-5D China, United Kingdom (UK), Japan and Korea value sets were determined. Results The mean EQ-5D index scores were significantly (P < 0.05) different among the UK (0.964), Japan (0.981), Korea (0.987), and China (0.985) weights. A high level of agreement (intraclass correlation coefficients > 0.75) and convergent validity (Pearson's correlation coefficients > 0.95) were found between each pair of schemes. The EQ-5D index scores discriminated equally well for the four versions between levels of 10 known-groups (P < 0.05). The effect size and relative efficiency statistics showed that the China weights had better sensitivity. Conclusions The China EQ-5D preference weights show psychometric properties equivalent to those of the UK, Japan and Korea weights, while being slightly more sensitive to known-group differences than the Japan and Korea weights. Considering both psychometric and sociocultural issues, the China scheme should be a priority as an EQ-5D based measure of health-related quality of life in the Chinese general population. PMID:27711169
Weighted SAW reflector gratings for orthogonal frequency coded SAW tags and sensors
NASA Technical Reports Server (NTRS)
Puccio, Derek (Inventor); Malocha, Donald (Inventor)
2011-01-01
Weighted surface acoustic wave reflector gratings for coding identification tags and sensors to enable unique sensor operation and identification in a multi-sensor environment. In one embodiment the weighted reflectors are variable, while in another embodiment the reflector gratings are apodized. The weighting technique allows the designer to decrease reflectivity and allows more chips to be implemented in a device and, consequently, more coding diversity. As a result, more tags and sensors can be implemented in a given bandwidth when compared with uniform reflectors. Use of weighted reflector gratings with OFC makes various phase shifting schemes possible, such as in-phase and quadrature implementations of coded waveforms, resulting in reduced device size and increased coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohatt, D; Malhotra, H
Purpose: Conventional treatment plans for lung radiotherapy are created using either the free breathing (FB) scheme, which represents the tumor at an arbitrary breathing phase of the patient's respiratory cycle, or the average computed tomography (ACT) intensity projection over 10-binned phases. Neither method is entirely accurate because of the absence of time dependence of tumor movement. In the present "Hybrid" method, the HU of the tumor in 3D space is determined by relative weighting of the HU of the tumor and lung in proportion to the time they spend at that location during the entire breathing cycle. Methods: A Quasar respiratory motion phantom was employed to simulate lung tumor movement. Utilizing 4DCT image scans, volumetric modulated arc therapy (VMAT) plans were generated for three treatment planning scenarios: the conventional FB and ACT schemes, along with the third, alternative Hybrid approach. Our internal target volume (ITV) hybrid structure was created using Boolean operations in the Eclipse (ver. 11) treatment planning system, where independent sub-regions created by the gross tumor volume (GTV) overlap from the 10 motion phases were each assigned a time-weighted CT value. The dose-volume histograms (DVH) for each scheme were compared and analyzed. Results: Using our hybrid technique, we have demonstrated a reduction of 1.9%-3.4% in total monitor units with respect to conventional treatment planning strategies, along with a 6-fold improvement in high dose spillage over the FB plan. The higher density ACT and Hybrid schemes also produced a slight enhancement in target conformity and reduction in low dose spillage. Conclusion: All treatment plans created in this study exceeded RTOG protocol criteria. Our results indicate that the free breathing approach yields an inaccurate account of the target treatment density. A significant decrease in unnecessary lung irradiation can be achieved by implementing the Hybrid HU method, with the ACT method second best.
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
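The moment-matching requirement behind the D3Q8 model can be checked numerically: equal-weight summation over the eight cube-corner directions reproduces low-order surface moments of the unit sphere. Which moments the scheme actually requires is detailed in the paper; the check below covers moments up to third order only, with a Monte Carlo estimate of the sphere average as the reference.

```python
import numpy as np

# D3Q8 model: eight unit vectors through the cube corners, equal weights 1/8.
e = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1)
              for sz in (-1, 1)])/np.sqrt(3.0)
w = np.full(8, 1.0/8.0)

def discrete_moment(idx):
    """Weighted sum over the 8 discrete directions of a product of components,
    e.g. idx=[0, 0] gives sum_a w_a * e_ax * e_ax."""
    m = np.ones(8)
    for i in idx:
        m = m*e[:, i]
    return float(np.sum(w*m))

def sphere_moment(idx, n=200_000, seed=0):
    """Monte Carlo average of the same component product over the unit sphere."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    m = np.ones(n)
    for i in idx:
        m = m*v[:, i]
    return float(np.mean(m))
```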
Literature-based concept profiles for gene annotation: the issue of weighting.
Jelier, Rob; Schuemie, Martijn J; Roes, Peter-Jan; van Mulligen, Erik M; Kors, Jan A
2008-05-01
Text-mining has been used to link biomedical concepts, such as genes or biological processes, to each other for annotation purposes or the generation of new hypotheses. To relate two concepts to each other, several authors have used the vector space model, as vectors can be compared efficiently and transparently. Using this model, a concept is characterized by a list of associated concepts, together with weights that indicate the strength of the association. The associated concepts in the vectors and their weights are derived from a set of documents linked to the concept of interest. An important issue with this approach is the determination of the weights of the associated concepts. Various schemes have been proposed to determine these weights, but no comparative studies of the different approaches are available. Here we compare several weighting approaches in a large-scale classification experiment. Three different techniques were evaluated: (1) weighting based on averaging, an empirical approach; (2) the log likelihood ratio, a test-based measure; (3) the uncertainty coefficient, an information-theory based measure. The weighting schemes were applied in a system that annotates genes with Gene Ontology codes. As the gold standard for our study we used the annotations provided by the Gene Ontology Annotation project. Classification performance was evaluated by means of the receiver operating characteristic (ROC) curve, using the area under the curve (AUC) as the measure of performance. All methods performed well, with median AUC scores greater than 0.84, and scored considerably higher than a binary approach without any weighting. Excellent performance was observed especially for the more specific Gene Ontology codes. The differences between the methods were small when considering the whole experiment. However, the number of documents that were linked to a concept proved to be an important variable.
When larger amounts of texts were available for the generation of the concepts' vectors, the performance of the methods diverged considerably, with the uncertainty coefficient then outperforming the two other methods.
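As a hedged sketch of the two test-based weights compared above, both the log likelihood ratio (G-squared) and the uncertainty coefficient can be computed from a 2x2 co-occurrence table of concept and document-set membership; the table counts below are hypothetical, and the exact estimators used in the paper may differ in detail.

```python
import numpy as np

def weights_from_table(n11, n10, n01, n00):
    """Association weights from a 2x2 co-occurrence table:
    n11 = docs containing both concepts, n10/n01 = one only, n00 = neither.
    Returns (log likelihood ratio G^2, uncertainty coefficient)."""
    O = np.array([[n11, n10], [n01, n00]], dtype=float)
    N = O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))/N     # expected under independence
    mask = O > 0
    llr = 2.0*np.sum(O[mask]*np.log(O[mask]/E[mask]))
    # Uncertainty coefficient = mutual information / row entropy.
    P = O/N
    prow, pcol = P.sum(axis=1), P.sum(axis=0)
    I = np.sum(P[mask]*np.log(P[mask]/np.outer(prow, pcol)[mask]))
    H = -np.sum(prow[prow > 0]*np.log(prow[prow > 0]))
    return llr, I/H

llr_indep, u_indep = weights_from_table(10, 10, 10, 10)   # no association
llr_assoc, u_assoc = weights_from_table(50, 0, 0, 50)     # perfect association
```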
Research on comprehensive decision-making of PV power station connecting system
NASA Astrophysics Data System (ADS)
Zhou, Erxiong; Xin, Chaoshan; Ma, Botao; Cheng, Kai
2018-04-01
To address the incomplete index system and the lack of decision methods that account for both subjectivity and objectivity in selecting a grid-connection scheme for a PV power station, a comprehensive approach combining an improved Analytic Hierarchy Process (AHP), Criteria Importance Through Intercriteria Correlation (CRITIC), and grey correlation degree analysis (GCDA) is proposed to select the appropriate connecting scheme. Firstly, the indexes of the PV power station connecting system are organized into a hierarchical structure, and subjective weights are calculated with the improved AHP. Then, CRITIC is adopted to determine the objective weight of each index through the contrast intensity of, and conflict between, the indexes. Finally, the improved GCDA is applied to screen for the optimal scheme, so that the connecting system is selected from both subjective and objective angles. A comprehensive decision analysis of a Xinjiang PV power station is conducted and reasonable results are obtained. The findings may provide a scientific basis for investment decisions.
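The CRITIC step can be sketched as follows: objective weights are proportional to each criterion's contrast intensity (standard deviation after normalization) times its conflict with the other criteria (one minus the pairwise correlations). The decision matrix below is hypothetical, and all criteria are treated as benefit-type for simplicity.

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights for a decision matrix X (alternatives x criteria).
    Columns are min-max normalized; weight_j ~ std_j * sum_k (1 - r_jk)."""
    X = np.asarray(X, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    Z = (X - X.min(axis=0))/np.where(span == 0, 1, span)  # min-max normalize
    std = Z.std(axis=0, ddof=1)                 # contrast intensity
    R = np.corrcoef(Z, rowvar=False)            # criteria correlations
    C = std*np.sum(1 - R, axis=0)               # information content
    return C/C.sum()

# Hypothetical matrix: 4 candidate connecting schemes scored on 3 indexes.
w = critic_weights([[7, 8, 1], [6, 7, 2], [9, 6, 3], [8, 5, 4]])
```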
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosibility, subject to specified weight, volume, power, and cost constraints is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
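The budget-constrained selection can be sketched as a 0/1 knapsack (a one-dimensional simplification of the multidimensional knapsack named in the keywords, ignoring the weight, volume, and power constraints); the sensor names, diagnostic values, and costs below are hypothetical.

```python
# Hypothetical sensors: (name, diagnostic value, integer cost).
sensors = [('temp', 6, 3), ('pressure', 5, 2), ('vibration', 8, 4), ('current', 3, 1)]
budget = 6

def best_subset(sensors, budget):
    """0/1 knapsack dynamic program over integer costs: maximize total
    diagnostic value without exceeding the budget."""
    best = {0: (0, [])}                       # cost -> (value, chosen names)
    for name, value, cost in sensors:
        # Iterate existing costs in descending order so each sensor is used once.
        for c, (v, chosen) in sorted(best.items(), reverse=True):
            nc = c + cost
            if nc <= budget and (nc not in best or best[nc][0] < v + value):
                best[nc] = (v + value, chosen + [name])
    return max(best.values())

value, chosen = best_subset(sensors, budget)  # {temp, pressure, current}: value 14
```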
Capturing planar shapes by approximating their outlines
NASA Astrophysics Data System (ADS)
Sarfraz, M.; Riyazuddin, M.; Baig, M. H.
2006-05-01
A non-deterministic evolutionary approach for approximating the outlines of planar shapes has been developed. Non-uniform rational B-splines (NURBS) are utilized as the underlying curve approximation scheme, with a simulated annealing heuristic as the evolutionary methodology. In addition to independent studies of the optimization of the weight and knot parameters of the NURBS, a separate scheme has been developed to optimize the weights and knots simultaneously. The optimized NURBS models are fitted over the contour data of the planar shapes to produce the final output automatically. The output results are visually pleasing with respect to the threshold provided by the user. A web-based system has also been developed that lets users worldwide visualize the output over the Internet and set the desired input parameters of the algorithm.
Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2), su(3), and g(2)
NASA Astrophysics Data System (ADS)
Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.
2016-08-01
We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a "landscape" of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic "girdle" method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.
Gyroaveraging operations using adaptive matrix operators
NASA Astrophysics Data System (ADS)
Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock
2018-05-01
A new adaptive scheme for carrying out gyroaveraging operations with matrices in particle-in-cell codes is presented. The scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the matrix-based gyroaveraging operations even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from a simple 2D slab to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
NASA Astrophysics Data System (ADS)
Cheraghian, Goshtasp; Khalili Nezhad, Seyyed Shahram; Kamari, Mosayyeb; Hemmati, Mahmood; Masihi, Mohsen; Bazgir, Saeed
2014-07-01
Nanotechnology has been used in many applications, and new possibilities are discovered constantly. Recently, renewed interest has arisen in applying nanotechnology to the upstream petroleum industry, including exploration, drilling, production and distribution. In particular, adding nanoparticles to fluids may significantly benefit enhanced oil recovery and well drilling by changing the properties of the fluid, altering the wettability of rocks, reducing drag, strengthening sand consolidation, reducing interfacial tension and increasing the mobility of capillary-trapped oil. In this study, we focus on the roles of clay and silica nanoparticles in the adsorption process on reservoir rocks. Polymer-flooding schemes for recovering residual oil have in general been less satisfactory owing to the loss of chemicals by adsorption on reservoir rocks, precipitation, and the resulting changes in rheological properties. Adsorption and rheological property changes are mainly determined by the chemical structure of the polymers, the surface properties of the rock, the composition of the oil and reservoir fluids, the nature of the polymers added, and solution conditions such as salinity, pH and temperature. Because this method relies on the adsorption of a polymer layer onto the rock surface, a deeper understanding of the relevant polymer-rock interactions is of primary importance for developing reliable chemical selection rules for field applications. In this paper, the role of nanoparticles in the adsorption of water-soluble polymers onto solid surfaces of carbonate and sandstone is studied. Static adsorption tests show that the adsorption between the polymer molecules and the solid surface is dominated by the nanoclay and nanosilica. The results also show that lithology, brine concentration and polymer viscosity are critical parameters influencing the adsorption behavior at a rock interface.
This study further examines the viscosity, temperature and salinity of polyacrylamide polymer solutions with different nanoparticle contents and molecular weights. Adsorption of the nanopolymer solution is always higher on carbonate rocks than on sandstones, and polymer solutions containing silica nanoparticles show less adsorption on a weight-percent basis than comparable samples containing clay. Normalized by the rock contact area, the adsorption behavior is the same.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
Estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and hence the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for processing Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. That scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weighting factor directly at all wavelengths in the multiple-scattering domain, even though the multiple-scattering aerosol reflectance depends non-linearly on the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR, with no residual errors for the selected aerosol models, and then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS.
To assess the performance of the algorithm with respect to errors in the retrieved surface water reflectance or remote-sensing reflectance, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. In the simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. For the in situ match-ups, the errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
NASA Astrophysics Data System (ADS)
Zamri, Nurnadiah; Abdullah, Lazim
2014-06-01
Flood control projects are complex undertakings that involve economic, social, environmental and technical attributes. Selecting the best flood control project requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are made under uncertainty, it is relatively difficult for them to provide exact numerical values. The interval type-2 fuzzy set (IT2FS) is a strong tool for dealing with subjective, incomplete, and vague information, and it helps in situations where the information about criteria weights for alternatives is completely unknown. This paper therefore adopts the interval type-2 entropy concept in the weighting process of interval type-2 fuzzy TOPSIS. This entropy weight is believed to effectively balance the influence of uncertainty factors when evaluating the attributes. A modified ranking value is then proposed in line with the interval type-2 entropy weight. Quantitative and qualitative factors normally linked with flood control projects are considered for ranking. Data in the form of interval type-2 linguistic variables were collected from three authorised personnel of three Malaysian Government agencies. The study considers the whole of Malaysia. The analysis shows that the diversion scheme yielded the highest closeness coefficient, 0.4807; ranking by the magnitude of the closeness coefficients places the diversion scheme first among the five alternatives.
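The entropy-weighting step can be illustrated with a crisp (type-1) sketch; the interval type-2 version in the paper applies the same computation to lower and upper membership bounds. The decision matrix below is illustrative, not the study's data:

```python
import math

# Illustrative decision matrix: 3 alternatives (rows) scored on 4 criteria (columns).
matrix = [
    [7, 9, 9, 8],
    [8, 7, 8, 7],
    [9, 6, 8, 9],
]

def entropy_weights(matrix):
    """Shannon-entropy criterion weights: criteria whose scores vary more
    across alternatives (lower entropy of the normalized column) get more weight."""
    m = len(matrix)
    cols = list(zip(*matrix))
    divergences = []
    for col in cols:
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1 - e)        # degree of divergence of this criterion
    s = sum(divergences)
    return [d / s for d in divergences]  # normalize to sum to 1
```

These weights would then multiply the normalized decision matrix before computing TOPSIS distances to the ideal and anti-ideal solutions and the resulting closeness coefficients.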
Enhanced performance of a filter-sensor system.
Sasaki, Isao; Josowicz, Mira; Janata, Jirí; Glezer, Ari
2006-06-01
This paper addresses two important but seemingly unrelated issues: the long-term performance of a gas-sensing array and the performance of an air purification unit. It is shown that, when considered together, the system can be regarded as a "smart filter". The enhancement is achieved by periodic differential sampling and measurement of the "upstream" and "downstream" gases of a filter. A correctly functioning filter supplies the "zero gas" from the downstream side for continuous sensor baseline correction. A key element in this scheme is the synthetic jet that delivers well-defined pulses of the two gases. Deterioration of the performance of the "smart filter" can be diagnosed from the response pattern of the sensor. The approach has been demonstrated on the removal and sensing of ammonia gas in air.
NASA Technical Reports Server (NTRS)
Kandula, M.; Pearce, D. G.
1991-01-01
A steady incompressible three-dimensional viscous flow analysis has been conducted for the Space Shuttle external tank/orbiter propellant feed line disconnect flapper valves with upstream elbows. The Navier-Stokes code, INS3D, is modified to handle interior obstacles and a simple turbulence model. The flow solver is tested for stability and convergence in the presence of interior flappers. An under-relaxation scheme has been incorporated to improve the solution stability. Important flow characteristics such as secondary flows, recirculation, vortex and wake regions, and separated flows are observed. Computed values for forces, moments, and pressure drop are in satisfactory agreement with water flow test data covering a maximum tube Reynolds number of 3.5 million. The predicted hydrodynamical stability of the flappers correlates well with the measurements.
NASA Astrophysics Data System (ADS)
Islam, Md. Shahidul; Hibino, Manabu; Nakayama, Kouji; Tanaka, Masaru
2006-02-01
The present study investigates feeding and condition of larval and juvenile Japanese temperate bass Lateolabrax japonicus in relation to spatial distribution in the Chikugo estuary (Japan). Larvae were collected in a wide area covering the nursery grounds of the species in 2002 and 2003. Food habits of the fish were analysed by examining their gut contents. Fish condition was evaluated by using morphometric (the length-weight relationship and condition factor) and biochemical (the RNA:DNA ratio and other nucleic acid based parameters) indices and growth rates. The nucleic-acid contents in individually frozen larvae and juveniles were quantified by standard fluorometric methods. Two distinct feeding patterns, determined by the distribution of prey copepods, were identified. The first pattern showed dependence on the calanoid copepod Sinocalanus sinensis, which was the single dominant prey in low-saline upper river areas. The second pattern involved a multi-specific dietary habit mainly dominated by Acartia omorii, Oithona davisae, and Paracalanus parvus. As in the gut contents analyses, two different sets of values were observed for RNA, DNA, total protein, growth rates and for all the nucleic acid-based indices: one for the high-saline downstream areas and a second for the low-saline upstream areas, which was significantly higher than the first. The proportion of starving fish was lower upstream than downstream. Values of the allometric coefficient (b) and the condition factor (K) obtained from the length-weight relationships increased gradually from the sea to the upper river. Clearly, fish in the upper river had a better condition than those in the lower estuary. RNA:DNA ratios correlated positively with temperature and negatively with salinity. We hypothesise that by migration to the better foraging grounds of the upper estuary (with higher prey biomass, elevated temperature and reduced salinity), the fish reduce early mortality and attain a better condition.
We conclude that utilisation of the copepod S. sinensis in the upstream nursery grounds is one of the key early survival strategies in Japanese temperate bass in the Chikugo estuary.
NASA Astrophysics Data System (ADS)
Bergmann, Michel; Cordier, Laurent; Brancher, Jean-Pierre
2006-02-01
In this Brief Communication we are interested in the maximum mean drag reduction that can be achieved under rotary sinusoidal control for the circular cylinder wake in the laminar regime. For a Reynolds number equal to 200, we give numerical evidence that partial control restricted to an upstream part of the cylinder surface may considerably increase the effectiveness of the control. Indeed, a maximum relative mean drag reduction of 30% is obtained when applying a specific sinusoidal control to the whole cylinder, whereas up to 75% reduction can be obtained when the same control law is applied only to a well-selected upstream part of the cylinder. This result suggests that a mean flow correction field with negative drag is observable for this controlled flow configuration. The significant thrust force that is locally generated in the near wake corresponds to a reverse von Kármán vortex street, as commonly observed in fish-like locomotion or flapping-wing flight. Finally, the energetic efficiency of the control is quantified by examining the power saving ratio: it is shown that our approach is energetically inefficient. However, it is also demonstrated that for this control scheme the improvement of the effectiveness generally occurs along with an improvement of the efficiency.
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-12-07
Pathological brain detection has made notable strides in the past years, and as a consequence many pathological brain detection systems (PBDSs) have been proposed. However, the accuracy of these systems still needs significant improvement to meet the needs of real-world diagnostic situations. In this paper, an efficient PBDS based on MR images is proposed that markedly improves recent results. The proposed system makes use of contrast limited adaptive histogram equalization (CLAHE) to enhance the quality of the input MR images. Thereafter, a two-dimensional PCA (2DPCA) strategy is employed to extract features, and subsequently a PCA+LDA approach is used to generate a compact and discriminative feature set. Finally, a new learning algorithm called MDE-ELM is suggested that combines modified differential evolution (MDE) and the extreme learning machine (ELM) to classify MR images as pathological or healthy. The MDE is utilized to optimize the input weights and hidden biases of single-hidden-layer feed-forward neural networks (SLFNs), whereas an analytical method is used for determining the output weights. The proposed algorithm performs optimization based on both the root mean squared error (RMSE) and the norm of the output weights of the SLFNs. The suggested scheme is benchmarked on three standard datasets and the results are compared against other competent schemes. The experimental outcomes show that the proposed scheme offers superior results compared to its counterparts. Further, the proposed MDE-ELM classifier obtains better accuracy with a more compact network architecture than conventional algorithms.
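The ELM core of such a classifier is simple to sketch: a random hidden layer followed by analytic least-squares output weights. This is an illustrative toy (random input weights on synthetic data), not the authors' MDE-ELM, which would replace the random input weights and biases with evolved ones:

```python
import numpy as np

# Illustrative ELM sketch on a synthetic, linearly separable two-class problem.
rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=25):
    """Train a single-hidden-layer ELM: random W, b; analytic output weights beta."""
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights
    b = rng.normal(size=hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                      # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ y                # least-squares output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy data: label is the sign of the feature sum.
X = rng.normal(size=(200, 4))
y = np.where(X.sum(axis=1) > 0, 1.0, -1.0)
model = elm_fit(X, y)
acc = np.mean(np.sign(elm_predict(X, model)) == y)
```

The only trained quantity is beta, obtained in closed form via the pseudo-inverse, which is what makes ELM training fast and what leaves the input weights free for an outer optimizer such as MDE.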
Qureshi, Adnan I
2007-10-01
Imaging of head and neck vasculature continues to improve with the application of new technology. To judge the value of new technologies reported in the literature, it is imperative to develop objective standards optimized against bias and favoring statistical power and clinical relevance. A review of the existing literature identified the following items as lending scientific value to a report on imaging technology: prospective design, comparison with an accepted modality, unbiased patient selection, standardized image acquisition, blinded interpretation, and measurement of reliability. These were incorporated into a new grading scheme. Two physicians tested the new scheme and an established scheme to grade reports published in the medical literature. Inter-observer reliability for both methods was calculated using the kappa coefficient. A total of 22 reports evaluating imaging modalities for cervical internal carotid artery stenosis were identified from a literature search and graded by both schemes. Agreement between the two physicians in grading the level of scientific evidence using the new scheme was excellent (kappa coefficient: 0.93, p<0.0001). Agreement using the established scheme was less rigorous (kappa coefficient: 0.39, p<0.0001). The weighted kappa coefficients were 0.95 and 0.38 for the new and established schemes, respectively. Overall agreement was higher for the newer scheme (95% versus 64%). The new grading scheme can be used reliably to categorize the strength of scientific knowledge provided by individual studies of vascular imaging. The new method could assist clinicians and researchers in determining appropriate clinical applications of newly reported technical advances.
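The kappa coefficient used above to measure inter-observer reliability corrects raw agreement for the agreement expected by chance. A minimal unweighted (Cohen's) sketch on hypothetical paired grades from two raters:

```python
def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical gradings."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0; the weighted variant reported in the study additionally credits near-miss grades according to a distance weighting over the categories.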
NASA Astrophysics Data System (ADS)
Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.
2015-12-01
Numerical tools are very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend strongly on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention in the effort to provide robust and accurate solutions. In this study we compare the efficiency of two finite volume codes with second-order discretization, implemented with different methods for solving the non-conservative shallow water equations: the MUSCL method (Monotonic Upstream-centered Scheme for Conservation Laws) and the MOOD method (Multi-dimensional Optimal Order Detection), which optimizes the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria, where the limiting procedure is performed before updating the solution to the next time step, which can lead to unnecessary accuracy reduction. By contrast, the MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities: a candidate solution is computed, and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, which provides a better approximation with sharper shocks and less numerical diffusion. For code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply affect the scenario one assesses. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
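The a priori MUSCL limiting can be sketched on scalar linear advection, as an illustrative stand-in for the shallow-water solver: piecewise-linear reconstruction with a minmod limiter, an upwind flux, and a forward-Euler update on a periodic grid (all choices here are for illustration only):

```python
def minmod(a, b):
    """Minmod limiter: the smallest-magnitude slope, or zero at an extremum."""
    if a * b <= 0:
        return 0.0
    return min(a, b) if a > 0 else max(a, b)

def muscl_step(u, c):
    """One MUSCL step for u_t + u_x = 0 with CFL number c, periodic boundaries."""
    n = len(u)
    # Limited cell slopes from left and right one-sided differences.
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # Reconstructed interface states: upwind side for positive advection speed.
    uL = [u[i] + 0.5 * slopes[i] for i in range(n)]
    # Conservative flux-difference update (fluxes telescope, so mass is conserved).
    return [u[i] - c * (uL[i] - uL[i - 1]) for i in range(n)]
```

The limiter is exactly the "a priori" step the abstract describes: slopes are clipped before the update, everywhere, whereas MOOD would first advance an unlimited candidate and reduce the order only in cells flagged a posteriori.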
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
Significant parent-of-origin effects in cucumber
USDA-ARS?s Scientific Manuscript database
Cucumber is a useful plant to study organellar effects because chloroplasts are maternally and mitochondria paternally transmitted. We produced doubled haploids (DH) from divergent cucumber populations, generated reciprocal crosses in a diallel mating scheme, measured weights of plants approximately...
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
High-performance packaging for monolithic microwave and millimeter-wave integrated circuits
NASA Technical Reports Server (NTRS)
Shalkhauser, K. A.; Li, K.; Shih, Y. C.
1992-01-01
Packaging schemes are developed that provide low-loss, hermetic enclosure for enhanced monolithic microwave and millimeter-wave integrated circuits. These package schemes are based on a fused quartz substrate material offering improved RF performance through 44 GHz. The small size and weight of the packages make them useful for a number of applications, including phased array antenna systems. As part of the packaging effort, a test fixture was developed to interface the single chip packages to conventional laboratory instrumentation for characterization of the packaged devices.
Fast rerouting schemes for protected mobile IP over MPLS networks
NASA Astrophysics Data System (ADS)
Wen, Chih-Chao; Chang, Sheng-Yi; Chen, Huan; Chen, Kim-Joan
2005-10-01
Fast rerouting is a critical traffic engineering operation in MPLS networks. To implement Mobile IP service over an MPLS network, fast rerouting can be used to enhance availability and survivability: MPLS can protect the critical LSP tunnel between the Home Agent (HA) and Foreign Agent (FA) using a fast rerouting scheme. In this paper, we propose a simple but efficient algorithm to address the triangle routing problem for Mobile IP over MPLS networks. We treat this routing issue as a link weighting and capacity assignment (LW-CA) problem, and use the derived solution to plan the fast restoration mechanism that protects against link or node failure. We first model the LW-CA problem as a mixed integer optimization problem; our goal is to minimize the call blocking probability on the most congested working trunk for the Mobile IP connections. Many existing network topologies are used to evaluate the performance of our scheme. Results show that our proposed scheme obtains the best performance, i.e., the smallest blocking probability, compared to other schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang P.; Breithaupt, Stephen A.; Kristanovich, Felix C.
A hydrodynamic and hydrologic modeling analysis was conducted to evaluate the feasibility of restoring natural estuarine functions and tidal marine wetlands habitat in the Chinook River estuary, located near the mouth of the Columbia River in Washington. The reduction in salmonid populations is attributable primarily to the construction of a Highway 101 overpass across the mouth of the Chinook River in the early 1920s, with a tide gate under the overpass. This construction, which was designed to eliminate tidal action in the estuary, has impeded the upstream passage of salmonids. The goal of the Chinook River Restoration Project is to restore tidal functions through the estuary by removing the tide gate at the mouth of the river, filling drainage ditches, restoring tidal swales, and reforesting riparian areas. The hydrologic model (HEC-HMS) was used to compute Chinook River and tributary inflows for use as input to the hydrodynamic model at the project area boundary. The hydrodynamic model (RMA-10) was used to generate information on water levels, velocities, salinity, and inundation during both normal tides and 100-year storm conditions under existing conditions and under the restoration alternatives. The RMA-10 model was extended well upstream of the normal tidal flats into the watershed domain to correctly simulate flooding and drainage with tidal effects included, using the wetting and drying schemes. The major conclusion of the hydrologic and hydrodynamic modeling study was that restoration of the tidal functions in the Chinook River estuary would be feasible through opening or removal of the tide gate. Implementation of the preferred alternative (removal of the tide gate, restoration of the channel under Hwy 101 to a 200-foot width, and construction of an internal levee inside the project area) would provide the required restoration benefits (inundation, habitat, velocities, salinity penetration, etc.) and meet flood protection requirements.
The alternative design incorporated storage such that relatively little difference in drainage or inundation upstream of Chinook River Valley Road would occur as a result of the proposed restoration activities.
A regional coupled surface water/groundwater model of the Okavango Delta, Botswana
NASA Astrophysics Data System (ADS)
Bauer, Peter; Gumbricht, Thomas; Kinzelbach, Wolfgang
2006-04-01
In the endorheic Okavango River system in southern Africa a balance between human and environmental water demands has to be achieved. The runoff generated in the humid tropical highlands of Angola flows through arid Namibia and Botswana before forming a large inland delta and eventually being consumed by evapotranspiration. With an approximate size of about 30,000 km2, the Okavango Delta is the world's largest site protected under the convention on wetlands of international importance, signed in 1971 in Ramsar, Iran. The extended wetlands of the Okavango Delta, which sustain a rich ecology, spectacular wildlife, and a first-class tourism infrastructure, depend on the combined effect of the highly seasonal runoff in the Okavango River and variable local climate. The annual fluctuations in the inflow are transformed into vast areas of seasonally inundated floodplains. Water abstraction and reservoir building in the upstream countries are expected to reduce and/or redistribute the available flows for the Okavango Delta ecosystem. To study the impacts of upstream and local interventions, a large-scale (1 km2 grid), coupled surface water/groundwater model has been developed. It is composed of a surface water flow component based on the diffusive wave approximation of the Saint-Venant equations, a groundwater component, and a relatively simple vadose zone component for calculating the net water exchange between land and atmosphere. The numerical scheme is based on the groundwater simulation software MODFLOW-96. Since the primary model output is the spatiotemporal distribution of flooded areas and since hydrologic data on the large and inaccessible floodplains and tributaries are sparse and unreliable, the model was not calibrated with point hydrographs but with a time series of flooding patterns derived from satellite imagery (NOAA advanced very high resolution radiometer). 
Scenarios were designed to study major upstream and local interventions and their expected impacts in the Delta. The scenarios' results can help decision makers strike a balance between environmental and human water demands in the basin.
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-09-07
In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented, using a novel hybrid learning scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and subsequently, a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.
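The PSO half of the proposed PSO-SDR can be illustrated with a minimal, self-contained particle swarm optimizer. A toy quadratic objective stands in for the weighted sum harvested energy, and all parameter values (inertia 0.7, acceleration coefficients 1.5, swarm size) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso_maximize(f, dim, n_particles=30, iters=200, lo=-1.0, hi=1.0, seed=0):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()        # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, float(pbest_val.max())

# Toy stand-in objective: maximum 0 at x = (0.3, -0.2)
best_x, best_val = pso_maximize(
    lambda p: -np.sum((p - np.array([0.3, -0.2])) ** 2), dim=2)
```

In the paper, each swarm evaluation would instead solve an SDR subproblem; here the objective is a plain function to keep the sketch dependency-free.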
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
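The first recommended scheme, effective sample size weighting of Z-scores, can be sketched as follows (a METAL-style combination; the study sizes and Z-scores below are hypothetical):

```python
import numpy as np

def effective_n(n_cases, n_controls):
    """Effective sample size of a case-control study."""
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z(z_scores, n_cases, n_controls):
    """Effective-sample-size-weighted Z-score meta-analysis."""
    w = np.sqrt([effective_n(a, b) for a, b in zip(n_cases, n_controls)])
    return float(np.sum(w * np.asarray(z_scores)) / np.sqrt(np.sum(w ** 2)))

# Three hypothetical studies with strong case-control imbalance
z_combined = meta_z([2.1, 1.8, 2.5],
                    n_cases=[500, 120, 900],
                    n_controls=[5000, 4000, 1100])
```

The effective sample size down-weights heavily imbalanced studies, which is what protects the combined statistic when case-control ratios differ across cohorts.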
Recent Research on the Automated Mass Measuring System
NASA Astrophysics Data System (ADS)
Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An
This paper introduces the research and development of robotic mass measurement systems as well as representative automatic systems, and then discusses a sub-multiple calibration scheme adopted on a fully automatic CCR10 system. An automatic robot system can perform the dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.
Advances in Inertial Navigation Systems and Components
1981-04-01
[Scanned-report fragments, partially recoverable:] directions: (a) improvement of the classical fringe-shift reading through differential two-detector schemes and application of very low loss components... References: 29. Aronowitz, F., "Loss Lock-In in Ringlaser," J. Appl. Physics 41, 130 (1970); 30. Malota, F., "Ringlaser and Ringinterferometer," Laser and... Table I, Copperhead RRS characteristics summary: weight of jet, 3.8 oz; weight of total package, 12.0 oz; volume of total package, 10.5 in
Physiological Responses to Salinity Vary with Proximity to the Ocean in a Coastal Amphibian.
Hopkins, Gareth R; Brodie, Edmund D; Neuman-Lee, Lorin A; Mohammadi, Shabnam; Brusch, George A; Hopkins, Zoë M; French, Susannah S
2016-01-01
Freshwater organisms are increasingly exposed to elevated salinity in their habitats, presenting physiological challenges to homeostasis. Amphibians are particularly vulnerable to osmotic stress and yet are often subject to high salinity in a variety of inland and coastal environments around the world. Here, we examine the physiological responses to elevated salinity of rough-skinned newts (Taricha granulosa) inhabiting a coastal stream on the Pacific coast of North America and compare the physiological responses to salinity stress of newts living in close proximity to the ocean with those of newts living farther upstream. Although elevated salinity significantly affected the osmotic (body weight, plasma osmolality), stress (corticosterone), and immune (bactericidal ability) responses of newts, animals found closer to the ocean were generally less reactive to salt stress than those found farther upstream. Our results provide possible evidence for some physiological tolerance in this species to elevated salinity in coastal environments. As freshwater environments become increasingly saline and more stressful, understanding the physiological tolerances of vulnerable groups such as amphibians will become increasingly important to our understanding of their abilities to respond, to adapt, and, ultimately, to survive.
[Estimation of spur dike-affected fish habitat area].
Ray-Shyan, Wu; Yan-Ru, Chen; Yi-Liang, Ge
2012-04-01
Based on the HEC-RAS and River2D models, and taking a 5% change rate of weighted usable area (WUA) as the threshold to define the spur-dike-affected area of the target fish species Acrossocheilus paradoxus in the Fazi River in Taiwan, this paper studied the area of fish habitat affected by spur dikes and, drawing on references about the installation of spur dikes in Taiwan over the past 10 years, analyzed the relative importance of affecting factors such as dike height, dike length (water block rate), average slope gradient of the river way, single versus double spur dikes, and flow discharge. Regardless of dike length, the affected area extended farther downstream than upstream, and was about 2-6 times as large. The downstream/upstream ratio of the affected area decreased with increasing slope gradient, but increased with increasing dike length and flow discharge. When the discharge approached a 10-year return period, the ratio of the affected areas approached a constant of 2. Building a double spur dike produced a better WUA than building a single spur dike.
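The weighted usable area metric underlying the 5% threshold can be sketched generically. The cell values and suitability numbers below are hypothetical, and the composite suitability index is assumed to be a simple product of depth and velocity suitabilities, one common convention among several:

```python
import numpy as np

def wua(cell_areas, depth_suit, vel_suit):
    """Weighted usable area: sum of cell areas weighted by a composite
    suitability index (here the product of depth and velocity suitability,
    each in [0, 1])."""
    csi = np.asarray(depth_suit) * np.asarray(vel_suit)
    return float(np.sum(np.asarray(cell_areas) * csi))

# Hypothetical habitat cells (m^2) before and after dike installation
areas = [10.0, 12.0, 8.0]
wua_before = wua(areas, [0.9, 0.5, 0.2], [0.8, 0.6, 0.4])
wua_after = wua(areas, [0.7, 0.4, 0.1], [0.8, 0.5, 0.3])
change = abs(wua_after - wua_before) / wua_before  # > 0.05 flags an affected reach
```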
Maekawa, T; Sudo, T; Kurimoto, M; Ishii, S
1991-09-11
The transcription factor HIV-TF1, which binds to a region about 60 bp upstream of the enhancer of human immunodeficiency virus-1 (HIV-1), was purified from human B cells. HIV-TF1 had a molecular weight of 39,000. Binding of HIV-TF1 to the HIV long terminal repeat (LTR) activated transcription from the HIV promoter in vitro. The HIV-TF1-binding site in the HIV LTR was similar to the site recognized by upstream stimulatory factor (USF) in the adenovirus major late promoter. The DNA-binding properties of HIV-TF1 suggested that it might be identical to or related to USF. Interestingly, treatment of purified HIV-TF1 with phosphatase greatly reduced its DNA-binding activity, suggesting that phosphorylation of HIV-TF1 is essential for DNA binding. Disruption of the HIV-TF1-binding site induced a 60% decrease in the level of transcription from the HIV promoter in vivo. These results suggest that HIV-TF1 is involved in the transcriptional regulation of HIV-1.
Franco, J.N.; Ceia, F.R.; Patricio, J.; Thompson, John; Marques, J.C.; Neto, J.M.
2012-01-01
Due to its range expansion and potential ecological effects, Corbicula fluminea is considered one of the most important non-indigenous species (NIS) in aquatic ecosystems. Its presence since 2003 in the upstream area of the Mondego estuary (oligohaline and mesohaline sectors) was studied over thirteen months, from December 2007 to December 2008. Monthly mean abundance and biomass ranged from 542 to 11142 individuals m⁻² and from 13.1 to 20.4 g ash-free dry weight m⁻², respectively. Populations of C. fluminea were composed mostly of juveniles, always present in extremely high densities compared to other estuarine ecosystems (e.g. the Minho estuary), suggesting a continuous recruitment pattern. The hydraulic regime of the River Mondego favours the downstream colonization of the upper Mondego estuary by recruits produced upstream. However, salinity in these sectors of the estuary apparently favours neither growth nor the establishment of structured populations of this species. Other factors, such as contaminants and predation, which were not studied, could also contribute to the community structure observed.
NASA Astrophysics Data System (ADS)
Richey, A. S.; Richey, J. E.; Tan, A.; Liu, M.; Adam, J. C.; Sokolov, V.
2015-12-01
Central Asia presents a perfect case study to understand the dynamic, and often conflicting, linkages between food, energy, and water in natural systems. The destruction of the Aral Sea is a well-known environmental disaster, largely driven by increased irrigation demand on the rivers that feed the endorheic sea. Continued reliance on these rivers, the Amu Darya and Syr Darya, often places available water resources at odds between hydropower demands upstream and irrigation requirements downstream. A combination of tools is required to understand these linkages and how they may change in the future as a function of climate change and population growth. In addition, the region is geopolitically complex as the former Soviet basin states develop management strategies to sustainably manage shared resources. This complexity increases the importance of relying upon publicly available information sources and tools. Preliminary work has shown potential for the Variable Infiltration Capacity (VIC) model to recreate the natural water balance in the Amu Darya and Syr Darya basins by comparing results to total terrestrial water storage changes observed from NASA's Gravity Recovery and Climate Experiment (GRACE) satellite mission. Modeled streamflow is well correlated to observed streamflow at upstream gauges prior to the large-scale expansion of irrigation and hydropower. However, current modeled results are unable to capture the human influence of water use on downstream flow. This study examines the utility of a crop simulation model, CropSyst, to represent irrigation demand and GRACE to improve modeled streamflow estimates in the Amu Darya and Syr Darya basins. Specifically, we determine crop water demand with CropSyst utilizing available data on irrigation schemes and cropping patterns. We determine how this demand can be met either by surface water, modeled by VIC with a reservoir operation scheme, and/or by groundwater derived from GRACE. 
Finally, we assess how the inclusion of CropSyst and groundwater to model and meet irrigation demand improves modeled streamflow from VIC throughout the basins. The results of this work are integrated into a decision support platform to assist the basin states in understanding water availability and the impact of management decisions on available resources.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. The performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
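The two weighting decisions described, first an F-test for heteroscedasticity, then the choice between 1/x and 1/x², can be sketched as follows. The replicate values, the significance level, the one-sided test direction, and the max/min spread measure are illustrative assumptions, not the published procedure's exact settings:

```python
import numpy as np
from scipy import stats

def needs_weighting(lloq_reps, uloq_reps, alpha=0.01):
    """One-sided F-test: is the replicate variance at the ULOQ significantly
    larger than at the LLOQ (heteroscedasticity -> weighting required)?"""
    s2_l = np.var(lloq_reps, ddof=1)
    s2_u = np.var(uloq_reps, ddof=1)
    crit = stats.f.ppf(1 - alpha, len(uloq_reps) - 1, len(lloq_reps) - 1)
    return bool(s2_u / s2_l > crit)

def pick_weight(concs, variances):
    """Choose 1/x vs 1/x^2: keep the weighting whose weighted variances are
    most homogeneous across the range (smallest max/min spread)."""
    concs = np.asarray(concs, float)
    variances = np.asarray(variances, float)
    spreads = {}
    for name, w in (("1/x", 1.0 / concs), ("1/x^2", 1.0 / concs ** 2)):
        wv = w * variances
        spreads[name] = wv.max() / wv.min()
    return min(spreads, key=spreads.get)

# Hypothetical replicates at the two ends of a calibration range
weight_needed = needs_weighting([10.0, 10.1, 9.9, 10.05, 9.95],
                                [100.0, 104.0, 96.0, 102.0, 98.0])
# Variance growing as x^2 should select 1/x^2
best_weight = pick_weight([1.0, 10.0, 100.0], [1e-4, 1e-2, 1.0])
```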
Predictors of 2,4-dichlorophenoxyacetic acid exposure among herbicide applicators
BHATTI, PARVEEN; BLAIR, AARON; BELL, ERIN M.; ROTHMAN, NATHANIEL; LAN, QING; BARR, DANA B.; NEEDHAM, LARRY L.; PORTENGEN, LUTZEN; FIGGS, LARRY W.; VERMEULEN, ROEL
2009-01-01
To determine the major factors affecting urinary levels of 2,4-dichlorophenoxyacetic acid (2,4-D) among county noxious weed applicators in Kansas, we used a regression technique that accounted for multiple days of exposure. We collected 136 12-h urine samples from 31 applicators during the course of two spraying seasons (April to August of 1994 and 1995). Using mixed-effects models, we constructed exposure models that related urinary 2,4-D measurements to weighted self-reported work activities from daily diaries collected over 5 to 7 days before the collection of the urine sample. Our primary weights were based on an earlier pharmacokinetic analysis of turf applicators; however, we examined a series of alternative weighting schemes to assess the impact of the specific weights and the number of days before urine sample collection that were considered. The derived models accounting for multiple days of exposure related to a single urine measurement seemed robust with regard to the exact weights, but less so with regard to the number of days considered, albeit the determinants from the primary model could be fitted, with marginal losses of fit, to the data from the other weighting schemes that considered different numbers of days. In the primary model, the total time of all activities (spraying, mixing, other activities), spraying method, month of observation, application concentration, and wet gloves were significant determinants of urinary 2,4-D concentration and explained 16% of the between-worker variance and 23% of the within-worker variance of urinary 2,4-D levels. As a large proportion of the variance remained unexplained, further studies should be conducted to systematically assess other exposure determinants. PMID:19319162
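The idea of collapsing several diary days into one exposure metric with pharmacokinetic weights can be sketched as follows. The one-compartment decay and the one-day half-life are assumptions for illustration, not the weights derived from the turf-applicator analysis:

```python
import numpy as np

def weighted_exposure(daily_hours, weights):
    """Collapse several days of diary-reported activity into a single
    exposure metric for one urine sample: a normalized weighted sum, with
    weights reflecting how much each prior day still contributes."""
    d, w = np.asarray(daily_hours, float), np.asarray(weights, float)
    return float(np.sum(d * w) / np.sum(w))

# Hypothetical exponential-decay weights for days 1..5 before sampling
half_life_days = 1.0                       # assumed, for illustration only
days_before = np.arange(1, 6)
w = 0.5 ** (days_before / half_life_days)  # most recent day weighs most
metric = weighted_exposure([4.0, 0.0, 6.0, 2.0, 0.0], w)
```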
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish
2017-04-01
Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D-MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated 25% shorter acquisition time while maintaining comparable SNR in humans (+0.3%) and phantom studies (+6.0%) compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY, by facilitating a reduction in scan time without affecting signal quality.
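A non-uniformly weighted sampling schedule of this kind can be sketched by assigning more averages to early t1 increments, where the 2D signal envelope is strongest, and fewer to late ones. The Gaussian window and the 64-increment, 8-average figures below are assumptions for illustration, not the sequence's actual parameters:

```python
import numpy as np

def nuws_schedule(n_increments, max_avg, min_avg, sigma_frac=0.4):
    """Averages per t1 increment under a Gaussian weighting window:
    more averages early (strong signal), fewer late (weak signal)."""
    t = np.arange(n_increments)
    g = np.exp(-0.5 * (t / (sigma_frac * n_increments)) ** 2)
    return np.rint(min_avg + (max_avg - min_avg) * g).astype(int)

sched = nuws_schedule(64, max_avg=8, min_avg=2)
# Fraction of scan time saved versus uniform 8 averages per increment
savings = 1.0 - sched.sum() / (64 * 8)
```

The schedule is monotonically non-increasing, so total scan time drops while the early, signal-rich increments keep full averaging.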
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
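The core idea, turning the envelope attribute into directional smoothness weights so that regularization relaxes across seismic reflectors, can be sketched as follows. The specific weight formula and the β parameter are illustrative assumptions rather than the published operator:

```python
import numpy as np

def smoothness_weights(envelope, beta=1.0, eps=1e-6):
    """Directional regularization weights from a seismic envelope image:
    strong envelope gradients (reflectors) get small smoothness weights,
    letting the inverted resistivity model change sharply there."""
    dz, dx = np.gradient(np.asarray(envelope, float))  # axis 0 = depth
    wz = 1.0 / (1.0 + beta * np.abs(dz) / (np.abs(dz).max() + eps))
    wx = 1.0 / (1.0 + beta * np.abs(dx) / (np.abs(dx).max() + eps))
    return wx, wz

# Synthetic envelope with one horizontal reflector at row 16
env = np.zeros((32, 32))
env[16, :] = 1.0
wx, wz = smoothness_weights(env)
```

The vertical weights wz drop to about one half just above and below the reflector, while the horizontal weights stay at 1 along it, which is the directional behaviour described above.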
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
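The combined-innovation update at the heart of probabilistic data association can be sketched for a single tracked object. This simplified version omits the spread-of-the-innovations term of the full PDAF covariance update, and the matrices and weights below are hypothetical:

```python
import numpy as np

def pda_update(x, P, zs, betas, H, R):
    """Kalman update with a combined innovation: candidate measurements zs
    are merged using association probabilities betas. Simplified: the full
    PDAF adds a spread-of-the-innovations term to the covariance update."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    nu = sum(b * (z - H @ x) for b, z in zip(betas, zs))  # combined innovation
    x_new = x + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Two nearby detections with association weights 0.7 / 0.3
x0, P0 = np.zeros(2), np.eye(2)
zs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x1, P1 = pda_update(x0, P0, zs, [0.7, 0.3], np.eye(2), 0.5 * np.eye(2))
```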
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing minimal degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
NASA Astrophysics Data System (ADS)
van de Giesen, N.; Andreini, M.; Liebe, J.; Steenhuis, T.; Huber-Lee, A.
2005-12-01
After a strong reduction in investments in water infrastructure in Sub-Saharan Africa, we now see a revival and increased interest in starting water-related projects. The global political willingness to work towards the UN millennium goals is an important driver behind this recent development. Large-scale irrigation projects, such as were constructed at tremendous cost in the 1970s and early 1980s, are no longer seen as the way forward. Instead, the construction of a large number of small, village-level irrigation schemes is thought to be a more effective way to improve food production. Such small schemes would fit better into existing and functioning governance structures. An important question now becomes what the cumulative (downstream) impact of a large number of small irrigation projects is, especially when they threaten to deplete transboundary water resources. The Volta Basin in West Africa is a transboundary river catchment, divided over six countries. Of these six countries, upstream Burkina Faso and downstream Ghana are the most important and cover 43% and 42% of the basin, respectively. In Burkina Faso (and also North Ghana), small reservoirs and associated irrigation schemes are already an important means to improve the livelihoods of the rural population. In fact, over two thousand such schemes have already been constructed in Burkina Faso, and further construction is to be expected in the light of the UN millennium goals. The cumulative impact of these schemes would affect the Akosombo Reservoir, one of the largest manmade lakes in the world and an important motor behind the economic development in (South) Ghana. This presentation will put forward an analytical framework that allows for the impact assessment of (large) ensembles of small reservoirs. It will be shown that despite their relatively low water use efficiencies, the overall impact remains low compared to the impact of large dams. 
The tools developed can be used in similar settings elsewhere in the developing world. The methods are mainly based on relatively objective observations as provided by satellites. As such, these tools provide a good basis for transboundary impact assessment and conflict avoidance.
Distributed multi-criteria model evaluation and spatial association analysis
NASA Astrophysics Data System (ADS)
Scherer, Laura; Pfister, Stephan
2015-04-01
Model performance, if evaluated, is often communicated by a single indicator and at an aggregated level; however, this does not embrace the trade-offs between different indicators and the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations. Its time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration, which was available as a continuous raster based on remote sensing. The model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), as well as 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR²). Conditions that might lead to poor model performance include aridity, very flat or very steep relief, snowfall, and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. In order to assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to the efficiency criterion. 
The model performed much better in the humid eastern region than in the arid western region, which was confirmed by the high spatial association with the aridity index (ratio of mean annual precipitation to mean annual potential evapotranspiration). This association was still significant when controlling for slope, which manifested the second-highest spatial association. In line with these findings, the overall model efficiency of the entire Mississippi watershed appeared better when weighted with mean observed river discharge. Furthermore, the model received the highest rating with regard to PBIAS and was judged worst when considering NSE as the most comprehensive indicator. No universal performance indicator exists that considers all aspects of a hydrograph. Therefore, sound model evaluation must take into account multiple criteria. Since model efficiency varies in space, which is masked by aggregated ratings, spatially explicit model goodness should be communicated as standard practice, at least as a measure of the spatial variability of indicators. Furthermore, transparent documentation of the evaluation procedure, also with regard to the weighting of aggregated model performance, is crucial but often lacking in published research. Finally, the high spatial association between model performance and aridity highlights the need to improve modelling schemes for arid conditions as a priority over other aspects that might weaken model goodness.
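Three of the four efficiency criteria used above have compact closed forms. Note that RSR = sqrt(1 − NSE) by construction, and that PBIAS sign conventions vary between authors; the values below are toy data:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias (positive means underestimation with this sign convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)))

obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 3.8]
scores = nse(obs, sim), pbias(obs, sim), rsr(obs, sim)
```

Because each criterion rewards a different aspect of the hydrograph (variance explained, volume balance, error magnitude), evaluating all of them is what motivates the multi-criteria approach above.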
Step to improve neural cryptography against flipping attacks.
Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold
2004-12-01
Synchronization of neural networks by mutual learning has been demonstrated to be a possible basis for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not fully secure under regular flipping attack (RFA) and are completely insecure under majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of the neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute force attack (BFA), and that the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
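The underlying synchronization-by-mutual-learning primitive (before the paper's split-information hardening) can be sketched with two tree parity machines trained on shared random inputs. The network sizes and the Hebbian rule shown are standard textbook choices, not the proposed scheme itself:

```python
import numpy as np

def tpm_output(W, X):
    """Tree parity machine: K hidden units; the output is the product of
    the signs of the K local fields."""
    h = np.sign(np.sum(W * X, axis=1))
    h[h == 0] = 1                          # break ties deterministically
    return h, int(np.prod(h))

def tpm_sync(K=3, N=10, L=3, max_steps=20000, seed=1):
    """Mutual learning of two tree parity machines A and B on shared random
    inputs; Hebbian updates are applied only when their outputs agree."""
    rng = np.random.default_rng(seed)
    A = rng.integers(-L, L + 1, (K, N))
    B = rng.integers(-L, L + 1, (K, N))
    for step in range(max_steps):
        if np.array_equal(A, B):
            return step                    # weights synchronized -> shared key
        X = rng.choice([-1, 1], (K, N))    # public random input
        hA, tA = tpm_output(A, X)
        hB, tB = tpm_output(B, X)
        if tA == tB:
            for W, h in ((A, hA), (B, hB)):
                for k in range(K):
                    if h[k] == tA:         # update only units matching the output
                        W[k] = np.clip(W[k] + tA * X[k], -L, L)
    return None

steps = tpm_sync()
```

Flipping attackers exploit exactly these public update steps, which is what the split-training countermeasure above is designed to frustrate.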
Gyroaveraging operations using adaptive matrix operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dominski, Julien; Ku, Seung -Hoe; Chang, Choong -Seock
A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium, has been studied. As a result, a successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
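The gyroaveraging operation that such matrices encode can be sketched as a quadrature over the Larmor ring; the matrix construction, field-line-following projection, and adaptive velocity grid of the paper are not reproduced here:

```python
import numpy as np

def gyroaverage(field, x, y, rho, n_points=8):
    """Approximate the gyroaverage of a 2D field at guiding centre (x, y):
    the mean of the field over n_points equally spaced on the Larmor ring
    of radius rho."""
    theta = 2 * np.pi * np.arange(n_points) / n_points
    xs = x + rho * np.cos(theta)
    ys = y + rho * np.sin(theta)
    return float(np.mean(field(xs, ys)))

# For field = x^2 + y^2 the exact gyroaverage is x^2 + y^2 + rho^2,
# so the ring quadrature should return 1 + 4 + 0.25 = 5.25 here.
f = lambda x, y: x ** 2 + y ** 2
avg = gyroaverage(f, 1.0, 2.0, rho=0.5)
```

In a matrix formulation, each ring point would instead contribute interpolation weights to the neighbouring grid nodes, giving one sparse row per guiding-centre node.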
Design of an anti-Rician-fading modem for mobile satellite communication systems
NASA Technical Reports Server (NTRS)
Kojima, Toshiharu; Ishizu, Fumio; Miyake, Makoto; Murakami, Keishi; Fujino, Tadashi
1995-01-01
To design a demodulator applicable to mobile satellite communication systems using differential phase shift keying modulation, we have developed key technologies including an anti-Rician-fading demodulation scheme, an initial acquisition scheme, automatic gain control (AGC), automatic frequency control (AFC), and bit timing recovery (BTR). Using these technologies, we have developed a one-chip digital signal processor (DSP) modem for mobile terminals that is compact, lightweight, and low in power consumption. Results of performance tests show that the developed DSP modem achieves a good bit error ratio in the mobile satellite communication environment, i.e., a Rician fading channel. It is also shown that the initial acquisition scheme acquires the received signal rapidly even when the carrier-to-noise power ratio (CNR) of the received signal is considerably low.
Gyroaveraging operations using adaptive matrix operators
Dominski, Julien; Ku, Seung -Hoe; Chang, Choong -Seock
2018-05-17
A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. As a result, a successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
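As a rough illustration of the grid-adaptation idea, the sketch below sizes a local velocity-grid spacing from the local thermal Larmor radius. This is plain NumPy with a deuterium mass and illustrative parameter names; none of it is taken from the XGC implementation.

```python
import numpy as np

# Illustrative sketch (not XGC code): choose the intermediate velocity-grid
# resolution proportional to the local thermal Larmor radius
# rho = sqrt(m*T) / (|q|*B), so the gyro-ring stays resolved wherever the
# temperature T and magnetic field B vary in space.
def larmor_radius(T_eV, B, m=3.344e-27, q=1.602e-19):
    """Thermal Larmor radius in metres for a deuterium ion (assumed mass)."""
    T_J = T_eV * 1.602e-19            # convert eV -> joules
    return np.sqrt(m * T_J) / (abs(q) * B)

def grid_spacing(T_eV, B, points_per_rho=4):
    """Hypothetical rule: resolve each Larmor radius with a fixed point count."""
    return larmor_radius(T_eV, B) / points_per_rho

rho = larmor_radius(1000.0, 2.0)      # 1 keV ions in a 2 T field: a few mm
```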
NASA Astrophysics Data System (ADS)
Li, Haifeng; Cui, Guixiang; Zhang, Zhaoshun
2018-04-01
A coupling scheme is proposed for the simulation of microscale flow and dispersion in which both the mesoscale field and small-scale turbulence are specified at the boundary of a microscale model. The small-scale turbulence is obtained individually in the inner and outer layers by the transformation of pre-computed databases, and then combined in a weighted sum. Validation of the results of a flow over a cluster of model buildings shows that the inner- and outer-layer transition height should be located in the roughness sublayer. Both the new scheme and the previous scheme are applied in the simulation of the flow over the central business district of Oklahoma City (a point source during intensive observation period 3 of the Joint Urban 2003 experimental campaign), with results showing that the wind speed is well predicted in the canopy layer. Compared with the previous scheme, the new scheme improves the prediction of the wind direction and turbulent kinetic energy (TKE) in the canopy layer. The flow field influences the scalar plume in two ways, i.e. the averaged flow field determines the advective flux and the TKE field determines the turbulent flux. Thus, the mean, root-mean-square and maximum of the concentration agree better with the observations with the new scheme. These results indicate that the new scheme is an effective means of simulating the complex flow and dispersion in urban canopies.
NASA Astrophysics Data System (ADS)
Lange, Heiner; Craig, George
2014-05-01
This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale data assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect-model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in the position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for the analysis weights, a larger localization radius and a larger observation error, which together allow a broadening of the Gaussian error statistics. Three-hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be superior to those of the coarse scheme during the first 1-2 hours with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and verification methods for rain fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhances the predictability of storm occurrence, analyses from a coarser scheme might suffice.
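The superobservation step of the coarse scheme can be sketched as block-averaging the simulated radar observations onto a coarser grid. The error-inflation rule below is an assumption for illustration only, not the study's actual choice.

```python
import numpy as np

def superob(obs, block=4, obs_err=1.0):
    """Block-average a 2D field of radar observations ("superobbing").

    Hypothetical helper: averages `block` x `block` cells into one
    superobservation, and returns a correspondingly inflated observation
    error (the linear inflation rule here is an illustrative assumption).
    """
    ny, nx = obs.shape
    ny, nx = ny - ny % block, nx - nx % block        # trim ragged edges
    coarse = obs[:ny, :nx].reshape(ny // block, block,
                                   nx // block, block).mean(axis=(1, 3))
    return coarse, obs_err * block

obs = np.arange(64, dtype=float).reshape(8, 8)        # toy radar field
coarse, err = superob(obs, block=4, obs_err=1.0)      # 8x8 -> 2x2 superobs
```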
Chandoevwit, Worawan; Phatchana, Phasith
2018-06-01
The Thai elderly are eligible for the Civil Servant Medical Benefit Scheme (CS) or Universal Coverage Scheme (UCS) depending on their pre-retirement or their children work status. This study aimed to investigate the disparity in inpatient care expenditures in the last year of life among Thai elderly individuals who used the two public health insurance schemes. Using death registration and inpatient administrative data from 2007 to 2011, our subpopulation group included the elderly with four chronic disease groups: diabetes mellitus, hypertension and cardiovascular disease, heart disease, and cancer. Among 1,242,150 elderly decedents, about 40% of them had at least one of the four chronic disease conditions and were hospitalized in their last year of life. The results showed that the means of inpatient care expenditures in the last year of life paid by CS and UCS per decedent were 99,672 Thai Baht and 52,472 Thai Baht, respectively. On average, UCS used higher healthcare resources by diagnosis-related group relative weight measure per decedent compared with CS. In all cases, the rates of payment for inpatient treatment per diagnosis-related group adjusted relative weight were higher for CS than UCS. This study found that the disparities in inpatient care expenditures in the last year of life stemmed mainly from the difference in payment rates. To mitigate this disparity, unified payment rates for various types of treatment that reflect costs of hospital care across insurance schemes were recommended. Copyright © 2018 Elsevier Ltd. All rights reserved.
Self-formed meandering river created in the laboratory using an upstream migrating boundary
NASA Astrophysics Data System (ADS)
van Dijk, W. M.; van de Lageweg, W. I.; Kleinhans, M. G.
2010-12-01
Braided rivers are relatively easily formed in the laboratory, whereas self-formed meandering rivers in the lab have proven very difficult to form, indicating a lack of understanding of the necessary and sufficient conditions for meandering. Our objective is to create self-formed dynamic meandering rivers and floodplains in a laboratory. Early experiments attempted to initiate meandering with upstream inflow at a fixed angle different from the general flow direction. The resulting bends were fixed at one position, which is not the dynamic meandering observed in nature. Another important condition for meandering is to have banks stronger than the non-cohesive bed sediment, which has been attained by growing vegetation. Furthermore, finer or lightweight sediment has been used to let chute channels fill up where otherwise multi-thread channels would have evolved, which is braiding. Yet the fixed-angle inflow kept meander migration and channel-belt width and complexity limited. We accomplished dynamic meandering in the laboratory by using an upstream migrating boundary, which simulates a meander migrating into the flume. Our experiments were conducted in a recirculating flume of 11 × 6 m, with a constant discharge and a sediment feed consisting of a sediment mixture ranging from silt to fine gravel (Kleinhans et al., 2010, this conference). The downstream boundary is a lake into which the river built a branched fan delta (Van de Lageweg et al., 2010, this conference). The morphology was recorded at 8-hour intervals by high-resolution (0.5 mm) line-laser scanning and a digital single-lens reflex (SLR) camera, used for channel-floodplain segmentation and particle-size estimation. Furthermore, a large number of smaller-scale auxiliary experiments were conducted to explore meandering tendency over a large range of parameters. Initial alternate ‘forced’ bars formed at fixed positions with low sinuosity when the upstream boundary was at one fixed position.
Migration of the upstream boundary caused further erosion of the outer banks and formation of point bars in inner bends, so that sinuosity increased to about 1.25. When the upstream boundary reversed migration direction, chute cut-offs formed and meander bends reformed in the opposite direction. Hence, in the first meander sweep the reworked floodplain showed nodes and antinodes at a wavelength in agreement with linear bar stability analysis. After 260 hours of experimental time the floodplain had become much more complex, exhibiting meandering channels, point bars, chutes, abandoned and partially filled channels, and slightly cohesive floodplains similar to natural meandering gravel-bed rivers such as the Allier near Moulins (France) and the Rhine near Emmerich (Germany). The flow became even more confined to a single-thread channel when pulses of silica flour were fed during short flood events, which significantly enhanced cohesive floodplain formation. The strengthened floodplains decreased channel mobility, however. We conclude that the necessary and sufficient conditions for meandering are a dynamic upstream boundary and active floodplain formation by fines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlen, Lisa, E-mail: lisa.dahlen@ltu.s; Lagerkvist, Anders
2010-01-15
Householders' response to weight-based billing for the collection of household waste was investigated with the aim of providing decision support for waste management policies. Three questions were addressed: How much and what kind of information on weight-based billing is discernible in generic Swedish waste collection statistics? Why do local authorities implement weight-based billing, and how do they perceive the results? And which strengths and weaknesses of weight-based billing have been observed at the local level? The study showed that municipalities with pay-by-weight schemes collected 20% less household waste per capita than other municipalities. Surprisingly, no part of this difference could be explained by higher recycling rates. Nevertheless, the majority of waste management professionals were convinced that recycling had increased as a result of the billing system. A number of contradictory strengths and weaknesses of weight-based billing were revealed.
Equivalent ZF precoding scheme for downlink indoor MU-MIMO VLC systems
NASA Astrophysics Data System (ADS)
Fan, YangYu; Zhao, Qiong; Kang, BoChao; Deng, LiJun
2018-01-01
In indoor visible light communication (VLC) systems, the channels of the photodetectors (PDs) at one user are highly correlated, which motivates the choice of a spatial diversity model for individual users. In a spatial diversity model, the signals received by the PDs belonging to one user carry the same information and can be combined directly. Based on the above, we propose an equivalent zero-forcing (ZF) precoding scheme for multiple-user multiple-input multiple-output (MU-MIMO) VLC systems by transforming an indoor MU-MIMO VLC system into an indoor multiple-user multiple-input single-output (MU-MISO) VLC system through simple processing. The power constraints of the light-emitting diodes (LEDs) are also taken into account. Comprehensive computer simulations in three scenarios indicate that our scheme can not only reduce the computational complexity but also guarantee the system performance. Furthermore, the proposed scheme does not require noise information when calculating the precoding weights, and it places no restrictions on the numbers of APs and PDs.
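A minimal numerical sketch of zero-forcing precoding for the reduced MU-MISO system follows; the channel values, dimensions and the power normalization are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical 4-LED, 3-user indoor VLC downlink: each user's PD signals are
# assumed already combined (spatial diversity), leaving one effective channel
# row per user, so the MU-MIMO system reduces to MU-MISO.
rng = np.random.default_rng(0)
H = rng.uniform(0.1, 1.0, size=(3, 4))        # 3 users x 4 LEDs, real (IM/DD)

# Zero-forcing precoder: W = H^T (H H^T)^{-1}. The scalar power
# normalization below is a crude stand-in for the LED power constraint.
W = H.T @ np.linalg.inv(H @ H.T)
W /= np.abs(W).sum()

# After precoding, the effective channel H @ W is diagonal:
# inter-user interference is removed.
E = H @ W
```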
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
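The kriging step described above can be sketched as follows. This is a generic simple-kriging estimator with a Gaussian covariance, not the D²KAS implementation, and all parameter names are illustrative.

```python
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, sigma2=1.0, ell=1.0):
    """Minimal simple-kriging sketch (zero-mean assumption, Gaussian
    covariance): returns the estimate at each x_new together with its
    kriging variance, i.e. the uncertainty used to decide whether the
    table lookup suffices or a new micro-scale simulation is needed."""
    cov = lambda a, b: sigma2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    C = cov(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))   # jitter for stability
    c = cov(x_obs, x_new)
    w = np.linalg.solve(C, c)                            # kriging weights
    est = w.T @ y_obs                                    # weighted average of neighbors
    var = sigma2 - np.sum(c * w, axis=0)                 # predictive uncertainty
    return est, var

x = np.array([0.0, 1.0, 2.0])
est, var = simple_kriging(x, np.sin(x), np.array([1.0, 10.0]))
# Near an observed point the variance collapses; far from the data it
# returns to the prior variance, flagging the prediction as unreliable.
```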
NASA Astrophysics Data System (ADS)
Mehari, Abraham; Koppen, Barbara Van; McCartney, Matthew; Lankford, Bruce
Tanzania is currently attempting to improve water resources management through formal water rights and water fees systems, and formal institutions. The water rights system is expected to facilitate water allocation. The water fees system aims at cost-recovery for water resources management services. To enhance community involvement in water management, Water User Associations (WUAs) are being established and, in areas with growing upstream-downstream conflicts, apex bodies of all users along the stressed river stretch. The Mkoji sub-catchment (MSC) in the Rufiji basin is one of the first where these formal water management systems are being attempted. This paper analyzes the effectiveness of these systems in the light of their expected merits and the consequences of the juxtaposition of contemporary laws with traditional approaches. The study employed mainly qualitative, but also quantitative approaches on social and technical variables. Major findings were: (1) a good mix of formal (water fees and WUAs) and traditional (rotation-based water sharing, the Zamu) systems improved village-level water management services and reduced intra-scheme conflicts; (2) the water rights system has not brought abstractions into line with allocations and (3) so far, the MSC Apex body failed to mitigate inter-scheme conflicts. A more sophisticated design of allocation infrastructure and institutions is recommended.
Global Distributions of Ionospheric Electrostatic Potentials for Various Interplanetary Conditions
NASA Astrophysics Data System (ADS)
Kartalev, M.; Papitashvili, V.; Keremidarska, V.; Grigorov, K.; Romanov, D.
2001-12-01
We report on a study of the global ionospheric electrostatic potential distributions obtained from combining two algorithms used for the mapping of high-latitude and middle-latitude ionospheric electrodynamics; that is, the LiMIE (http://www.sprl.umich.edu/mist/) and IMEH (http://geospace.nat.bg) models, respectively. In this combination, the latter model utilizes the LiMIE high-latitude field-aligned current distributions for various IMF conditions and different seasons (summer, winter, equinox). The IMEH model is a mathematical tool, allowing us to study conjugacy (or non-conjugacy) of the ionospheric electric fields on a global scale, from the northern and southern polar regions to the middle- and low-latitudes. The proposed numerical scheme permits testing of different mechanisms of the interhemispheric coupling and mapping to the ionosphere through the appropriate current systems. The scheme is convenient for determining self-consistently the separatrices in both the northern and southern hemispheres. In this study we focus on the global ionospheric electrostatic field distributions neglecting other possible electric field sources. Considering some implications of the proposed technique for the space weather specification and forecasting, we developed a Web-based interface providing global distributions of the ionospheric electrostatic potentials in near-real time from the ACE upstream solar wind observations at L1.
On the Relationship between Observed NLDN Lightning ...
Lightning-produced nitrogen oxides (NOX = NO + NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist in the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes lightning NO emissions using local scaling factors adjusted by the convective precipitation rate predicted by the upstream meteorological model; the adjustment is based on observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates is needed. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
Implicit preconditioned WENO scheme for steady viscous flow computation
NASA Astrophysics Data System (ADS)
Huang, Juan-Chen; Lin, Herng; Yang, Jaw-Yen
2009-02-01
A class of lower-upper symmetric Gauss-Seidel implicit weighted essentially nonoscillatory (WENO) schemes is developed for solving the preconditioned Navier-Stokes equations in primitive variables with the Spalart-Allmaras one-equation turbulence model. The numerical flux of the present preconditioned WENO schemes consists of a first-order part and a high-order part. For the first-order part we adopt the preconditioned Roe scheme, and for the high-order part we employ preconditioned WENO methods. For comparison purposes, a preconditioned TVD scheme is also given and tested. A time-derivative preconditioning algorithm is devised, with a discriminant for adjusting the preconditioning parameters at low Mach numbers and turning off the preconditioning at intermediate or high Mach numbers. The computations are performed for two-dimensional lid-driven cavity flow, low subsonic viscous flow over the S809 airfoil, three-dimensional low-speed viscous flow over a 6:1 prolate spheroid, transonic flow over the ONERA-M6 wing, and hypersonic flow over the HB-2 model. The solutions of the present algorithms are in good agreement with experimental data. The application of the preconditioned WENO schemes to viscous flows at all speeds not only enhances the accuracy and robustness of resolving shocks and discontinuities in supersonic flows, but also improves the accuracy for low Mach number flows with complicated smooth solution structures.
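The Mach-number switching logic can be illustrated with a hedged sketch: a discriminant picks a low-Mach preconditioning parameter and disables preconditioning (parameter equal to 1) at intermediate or high speeds. The thresholds, the parameter name and the M² scaling are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of a low-Mach preconditioning discriminant. Below m_off the
# preconditioning parameter follows the usual beta ~ M^2 scaling; above it,
# beta = 1 turns preconditioning off. All constants are illustrative.
def precondition_beta(mach, m_cutoff=1e-3, m_off=0.5):
    if mach >= m_off:            # intermediate/high speed: no preconditioning
        return 1.0
    m = max(mach, m_cutoff)      # floor avoids the singular limit M -> 0
    return m * m                 # assumed low-Mach scaling

assert precondition_beta(0.8) == 1.0   # preconditioning switched off
```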
Weighted-outer-product associative neural network
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1991-11-01
A weighted outer product learning (WOPL) scheme for associative memory neural networks is presented, in which learning orders are incorporated into the Hopfield model. WOPL can be guaranteed to achieve correct recall of some stored patterns regardless of whether they are stable in the Hopfield model, and regardless of whether the number of stored patterns is small or large. A technically sufficient condition is also discussed for how to suitably choose learning orders to fully utilize WOPL for correct recall of as many stored patterns as possible.
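A toy version of the weighted outer-product rule can be sketched as below: each stored pattern contributes its outer product scaled by a per-pattern weight. The example weights are arbitrary; the paper's actual rule for choosing learning orders is not reproduced here.

```python
import numpy as np

# Weighted outer-product learning (WOPL) sketch for a Hopfield-type
# associative memory: W = sum_k lambda_k * x_k x_k^T, zero diagonal.
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
lams = np.array([2.0, 1.0])                    # illustrative per-pattern weights

W = sum(l * np.outer(p, p) for l, p in zip(lams, patterns))
np.fill_diagonal(W, 0.0)                       # standard Hopfield convention

def recall(s, W, steps=5):
    """Synchronous recall: iterate the sign rule from a (noisy) probe."""
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

probe = patterns[0].copy()
probe[0] *= -1                                 # corrupt one bit
out = recall(probe, W)                         # converges back to pattern 0
```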
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
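Puncturing itself is easy to illustrate: a rate-1/3 turbo encoder emits one systematic bit and two parity bits per information bit, and deleting half of the parity bits raises the rate to 1/2. The pattern below (keep every systematic bit, alternate between the two parity streams) is a textbook-style example, not the specific patterns studied in the report.

```python
# Sketch of puncturing a rate-1/3 turbo-code output (systematic stream plus
# two parity streams) down to rate 1/2. The alternating pattern is a common
# textbook choice, used here purely for illustration.
def puncture(sys_bits, par1, par2):
    out = []
    for i, s in enumerate(sys_bits):
        out.append(s)                                  # systematic bit always kept
        out.append(par1[i] if i % 2 == 0 else par2[i]) # alternate parity streams
    return out

sys_bits = [1, 0, 1, 1]
par1     = [0, 1, 1, 0]
par2     = [1, 1, 0, 0]
tx = puncture(sys_bits, par1, par2)   # 8 bits sent for 4 info bits: rate 1/2
```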
Simulations of Turbine Cooling Flows Using a Multiblock-Multigrid Scheme
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Ameri, Ali A.; Rigby, David L.
1996-01-01
Results from numerical simulations of air flow and heat transfer in a 'branched duct' geometry are presented. The geometry contains features, including pins and a partition, as are found in coolant passages of turbine blades. The simulations were performed using a multi-block structured grid system and a finite volume discretization of the governing equations (the compressible Navier-Stokes equations). The effects of turbulence on the mean flow and heat transfer were modeled using the Baldwin-Lomax turbulence model. The computed results are compared to experimental data. It was found that the extent of some regions of high heat transfer was somewhat under predicted. It is conjectured that the underlying reason is the local nature of the turbulence model which cannot account for upstream influence on the turbulence field. In general, however, the comparison with the experimental data is favorable.
NASA Astrophysics Data System (ADS)
Daiguji, Hisaaki; Yamamoto, Satoru
1988-12-01
The implicit time-marching finite-difference method for solving the three-dimensional compressible Euler equations developed by the authors is extended to the Navier-Stokes equations. The distinctive features of this method are the use of momentum equations for the contravariant velocities instead of the physical velocities, and the ability to treat the periodic boundary condition of the three-dimensional impeller flow easily. These equations can be solved using the same techniques as the Euler equations, such as the delta-form approximate factorization, diagonalization and upstream differencing. In addition, a simplified total variation diminishing scheme by the authors is applied to the present method in order to capture strong shock waves clearly. Finally, computed results for the three-dimensional flow through a transonic compressor rotor with tip clearance are shown.
1.25-3.125 Gb/s per user PON with RSOA as phase modulator for statistical wavelength ONU
NASA Astrophysics Data System (ADS)
Chu, Guang Yong; Polo, Victor; Lerín, Adolfo; Tabares, Jeison; Cano, Iván N.; Prat, Josep
2015-12-01
We report a new scheme to support, cost-efficiently, ultra-dense wavelength division multiplexing (UDWDM) for optical access networks. As a validating experiment, we apply phase modulation of a reflective semiconductor optical amplifier (RSOA) at the ONU with a single DFB, and a simplified coherent receiver at the OLT for upstream. We extend the limited 3-dB modulation bandwidth of an available uncooled TO-can packaged RSOA (~400 MHz) and operate it at 3.125 Gb/s with optimal performance for phase modulation, using small- and large-signal measurement characteristics. The optimal operating condition is an input power of 0 dBm with a 70 mA bias. The sensitivities at 3.125 Gb/s (at BER = 10^-3) for heterodyne and intradyne detection reach -34.3 dBm and -38.8 dBm, respectively.
Simulation of violent free surface flow by AMR method
NASA Astrophysics Data System (ADS)
Hu, Changhong; Liu, Cheng
2018-05-01
A novel CFD approach based on adaptive mesh refinement (AMR) technique is being developed for numerical simulation of violent free surface flows. CIP method is applied to the flow solver and tangent of hyperbola for interface capturing with slope weighting (THINC/SW) scheme is implemented as the free surface capturing scheme. The PETSc library is adopted to solve the linear system. The linear solver is redesigned and modified to satisfy the requirement of the AMR mesh topology. In this paper, our CFD method is outlined and newly obtained results on numerical simulation of violent free surface flows are presented.
Information Security Scheme Based on Computational Temporal Ghost Imaging.
Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing
2017-08-09
An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The cipher text is obtained by summing the weighted encryption key. The decryption process can be realized by a correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is greatly guaranteed. The feasibility of this method and its robustness against both occlusion and additional noise attacks are discussed with simulations.
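The encrypt/decrypt cycle can be sketched numerically: each 2D binary key pattern is weighted by one sample of the 1D data stream, the weighted patterns are summed into the cipher, and decryption correlates the cipher with each key pattern. The sizes, seed and mean-removal step are illustrative assumptions; the recovery is only up to an affine scale, as is usual in ghost imaging.

```python
import numpy as np

# Toy computational temporal ghost imaging cipher (illustrative sizes).
rng = np.random.default_rng(1)
N, H, W = 50, 32, 32
key = rng.integers(0, 2, size=(N, H, W)).astype(float)   # 2D binary key patterns
data = rng.uniform(0, 1, size=N)                          # 1D data stream

# Encryption: sum of key patterns weighted by the data samples.
cipher = np.tensordot(data, key, axes=1)                  # shape (H, W)

# Decryption: correlate the cipher with each (mean-removed) key pattern;
# the result tracks the data stream up to scale and offset.
flat = key.reshape(N, -1)
flatc = flat - flat.mean(axis=1, keepdims=True)
recovered = flatc @ cipher.ravel()
```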
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Z.; Ching, W.Y.
Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme for the orthogonalized linear-combination-of-atomic-orbitals-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g
REMOVAL OF CHLORINATED ALKENE SOLVENTS FROM DRINKING WATER BY VARIOUS REVERSE OSMOSIS MEMBRANES
Historically, membranes have been used to desalinate water. As new membrane materials are developed, traditional water treatment schemes may incorporate membrane technologies, such as reverse osmosis, to address a variety of new concerns such as low molecular weight volatile org...
Triple collocation based merging of satellite soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
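The merging step described above reduces to a least-squares weighted average: each retrieval is weighted inversely to its estimated error variance. The numbers below are illustrative, not values from the manuscript.

```python
import numpy as np

def merge(retrievals, err_vars):
    """Merge collocated soil-moisture retrievals by inverse-error-variance
    weighting (the optimal least-squares combination for independent errors).
    `err_vars` would come from, e.g., triple collocation; values here are
    purely illustrative."""
    w = 1.0 / np.asarray(err_vars, dtype=float)
    w /= w.sum()                        # normalize weights to sum to 1
    return np.dot(w, retrievals), w

# Three hypothetical retrievals (m^3/m^3) with triple-collocation error
# variances: the most accurate product dominates the merged estimate.
merged, w = merge([0.20, 0.26, 0.35], err_vars=[0.001, 0.004, 0.010])
```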
Zong, Guo; Wang, Ahong; Wang, Lu; Liang, Guohua; Gu, Minghong; Sang, Tao; Han, Bin
2012-07-20
1000-Grain weight and spikelet number per panicle are two important components for rice grain yield. In our previous study, eight quantitative trait loci (QTLs) conferring spikelet number per panicle and 1000-grain weight were mapped through sequencing-based genotyping of 150 rice recombinant inbred lines (RILs). In this study, we validated the effects of four QTLs from Nipponbare using chromosome segment substitution lines (CSSLs), and pyramided eight grain yield related QTLs. The new lines containing the eight QTLs with positive effects showed increased panicle and spikelet size as compared with the parent variety 93-11. We further proposed a novel pyramid breeding scheme based on marker-assistant and phenotype selection (MAPS). This scheme allowed pyramiding of as many as 24 QTLs at a single hybridization without massive cross work. This study provided insights into the molecular basis of rice grain yield for direct wealth for high-yielding rice breeding. Copyright © 2012. Published by Elsevier Ltd.
How Molecular Size Impacts RMSD Applications in Molecular Dynamics Simulations.
Sargsyan, Karen; Grauffel, Cédric; Lim, Carmay
2017-04-11
The root-mean-square deviation (RMSD) is a similarity measure widely used in the analysis of macromolecular structures and dynamics. As increasingly larger macromolecular systems are being studied, dimensionality effects such as the "curse of dimensionality" (a diminishing ability to discriminate pairwise differences between conformations with increasing system size) may exist and significantly impact RMSD-based analyses. For such large biomolecular systems, whether the RMSD or other alternative similarity measures might suffer from this "curse" and lose the ability to discriminate different macromolecular structures had not been explicitly addressed. Here, we show such dimensionality effects for both weighted and nonweighted RMSD schemes. We also provide a mechanism for the emergence of the "curse of dimensionality" for RMSD from the law of large numbers by showing that the conformational distributions from which RMSDs are calculated become increasingly similar as the system size increases. Our findings suggest the use of weighted RMSD schemes for small proteins (less than 200 residues) and nonweighted RMSD for larger proteins when analyzing molecular dynamics trajectories.
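The weighted and nonweighted RMSD variants compared in the abstract can be sketched in a few lines; the coordinates and the mass-like weights below are illustrative, and superposition (alignment) is assumed to have been done beforehand.

```python
import numpy as np

def rmsd(a, b, weights=None):
    """RMSD between two conformations given as (n_atoms, 3) arrays.
    With `weights=None` this is the plain (nonweighted) RMSD; otherwise a
    weighted RMSD, e.g. mass-weighted. Alignment is assumed already done."""
    d2 = np.sum((a - b) ** 2, axis=1)          # per-atom squared deviation
    if weights is None:
        return np.sqrt(d2.mean())              # nonweighted RMSD
    w = np.asarray(weights, dtype=float)
    return np.sqrt(np.sum(w * d2) / w.sum())   # weighted RMSD

a = np.zeros((3, 3))                           # toy 3-atom conformations
b = np.array([[1.0, 0, 0], [0, 2.0, 0], [0, 0, 2.0]])
plain = rmsd(a, b)                             # sqrt((1 + 4 + 4) / 3)
mass  = rmsd(a, b, weights=[12.0, 1.0, 1.0])   # hypothetical atomic masses
```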
NASA Astrophysics Data System (ADS)
Tormos, T.; Kosuth, P.; Souchon, Y.; Villeneuve, B.; Durrieu, S.; Chandesris, A.
2010-12-01
Preservation and restoration of river ecosystems require an improved understanding of the mechanisms through which they are influenced by the landscape at multiple spatial scales, particularly at the river-corridor scale, considering the role of riparian vegetation in regulating and protecting river ecological status and the relevance of this specific area for implementing efficient and realistic strategies. Assessing this influence correctly over large river networks requires accurate broad-scale (i.e. at least regional) information on Land Cover within Riparian Areas (LCRA). As the structure of land cover along rivers is generally not accessible using moderate-scale satellite imagery, finer spatial resolution imagery and specific mapping techniques are needed. For this purpose we developed a generic multi-scale Object Based Image Analysis (OBIA) scheme able to produce LCRA maps in different geographic contexts by exploiting information available from very high spatial resolution imagery (satellite or airborne) and/or metric to decametric spatial thematic data on a given study zone, thanks to fuzzy expert-knowledge classification rules. A first experiment was carried out on the Herault river watershed (southern France), a 2650 square kilometer basin that presents a contrasted landscape (different ecoregions) and a total stream length of 1150 km, using high and very high resolution multispectral remotely sensed images (10 m Spot5 multispectral images and 0.5 m aerial photography) and existing spatial thematic data. Application of the OBIA scheme produced a detailed (22 classes) LCRA map with an overall accuracy of 89% and a Kappa index of 83% according to a land cover pressures typology (six categories). A second experiment (using the same data sources) was carried out on a larger test zone, a part of the Normandy river network (25,000 square kilometer basin; 6000 km long river network; 155 ecological stations).
This second work aimed at elaborating a robust statistical eco-regional model to study the links between land cover spatial indicators, calculated at local and watershed scales, and river ecological status assessed with macroinvertebrate indicators. Application of the OBIA scheme produced a detailed (62 classes) LCRA map, which allowed the model to highlight the influence of specific land use patterns: (i) the significant beneficial effect of a 20-m riparian tree vegetation strip near a station and of a 20-m riparian grassland strip along the upstream network of a station, and (ii) the negative impact on river ecological status of urban areas and roads on the upstream flood plain of a station. The results of these two experiments highlight (i) that the application of an OBIA scheme using multi-source spatial data provides an efficient approach for mapping and monitoring LCRA that can be implemented operationally at regional or national scale, and (ii) the interest of using LCRA maps derived from very high spatial resolution imagery (satellite or airborne) and/or metric spatial thematic data to study landscape influence on river ecological status and to support managers in defining optimized riparian preservation and restoration strategies.
Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.
Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D
2006-08-01
A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.
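The core of such an electrostatic model can be sketched as follows. This is a minimal illustration assuming a simple Coulomb-like repulsion with antipodal symmetry; it omits the temporal-order weighting described above, and the function names are ours, not the authors'.

```python
import math

def pair_energy(u, v):
    """Coulomb-like repulsion between two gradient directions; since +v and
    -v encode the same diffusion measurement, the antipodal image repels too."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) or 1e-12
    return 1.0 / dist(u, v) + 1.0 / dist(u, [-x for x in v])

def total_energy(dirs):
    """Total electrostatic energy of a set of unit gradient directions;
    lower energy means a more uniform angular distribution."""
    n = len(dirs)
    return sum(pair_energy(dirs[i], dirs[j])
               for i in range(n) for j in range(i + 1, n))
```

Minimizing `total_energy` over the sphere, with pairwise weights depending on temporal order as in the paper, yields orientation schemes whose leading subsets remain usable if the scan is cut short.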
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviation-based schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
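The idea of correlating an auxiliary process of known mean with the main one is, in essence, a control-variate construction. A toy sketch (our own example distributions, not the paper's gas-dynamic processes):

```python
import math, random

def control_variate_demo(n=20000, seed=1):
    """Estimate E[X] for X = exp(Z), Z ~ N(0,1), using the correlated
    auxiliary process Y = 1 + Z + Z^2/2 whose mean E[Y] = 1.5 is known."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        xs.append(math.exp(z))              # main process sample
        ys.append(1.0 + z + 0.5 * z * z)    # auxiliary sample built from the same z
    mx = sum(xs) / n
    my = sum(ys) / n
    var_y = sum((y - my) ** 2 for y in ys) / (n - 1)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    c = cov / var_y                         # optimal control coefficient
    zs = [x - c * (y - 1.5) for x, y in zip(xs, ys)]
    mz = sum(zs) / n
    var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)
    var_z = sum((w - mz) ** 2 for w in zs) / (n - 1)
    return mx, mz, var_x, var_z             # plain/corrected means and variances
```

Because the main and auxiliary samples share the same random increments, the corrected samples have much smaller variance than the plain ones while remaining unbiased.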
Hemmingsson, Erik
2018-06-01
To explore the sequence and interaction of infancy and early childhood risk factors, particularly relating to disturbances in the social environment, and how the consequences of such exposures can promote weight gain and obesity. This review will argue that socioeconomic adversity is a key upstream catalyst that sets the stage for critical midstream risk factors such as family strain and dysfunction, offspring insecurity, stress, emotional turmoil, low self-esteem, and poor mental health. These midstream risk factors, particularly stress and emotional turmoil, create a more or less perfect foil for calorie-dense junk food self-medication and subtle addiction, to alleviate uncomfortable psychological and emotional states. Disturbances in the social environment during infancy and early childhood appear to play a critical role in weight gain and obesity, through such mechanisms as insecurity, stress, and emotional turmoil, eventually leading to junk food self-medication and subtle addiction.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combined methods that optimize a theoretical calculation and experimental data simultaneously. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that is totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity", formulated with only the peak positions within a small range of the x-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited diffraction-peak information. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
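A toy version of such a cost function can illustrate the idea: a weighted sum of a stand-in interatomic potential energy and a penalty on diffraction peak positions. Everything below (the 1-D chain, the Lennard-Jones stand-in potential, the brute-force scan) is an illustrative assumption, not the authors' implementation.

```python
import math

def lj(r, eps=1.0, r0=1.0):
    """Toy pair potential with a minimum of -eps at r = r0 (stand-in for the
    interatomic potentials used in the paper)."""
    s = (r0 / r) ** 6
    return eps * (s * s - 2.0 * s)

def cost(a, observed_peaks, w=0.7):
    """Weighted sum of the lattice energy of a 1-D chain with spacing `a` and
    a penalty on x-ray peak positions q = 2*pi*n/a (a toy "crystallinity")."""
    energy = lj(a)                                  # one bond per atom in the chain
    penalty = 0.0
    for q_obs in observed_peaks:
        n = max(1, round(q_obs * a / (2.0 * math.pi)))
        penalty += (2.0 * math.pi * n / a - q_obs) ** 2
    return w * energy + (1.0 - w) * penalty

def best_lattice_constant(observed_peaks, w=0.7):
    """Brute-force scan; in the paper this role is played by a global optimizer."""
    grid = [i / 1000.0 for i in range(800, 1301)]
    return min(grid, key=lambda a: cost(a, observed_peaks, w))
```

With one observed peak corresponding to a spacing of 1.02 and an energy minimum at 1.0, the combined cost is minimized between the two, as expected of a weighted compromise.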
Advanced control design for hybrid turboelectric vehicle
NASA Technical Reports Server (NTRS)
Abban, Joseph; Norvell, Johnesta; Momoh, James A.
1995-01-01
The new environmental standards present both a challenge and an opportunity for the industries and government agencies that manufacture and operate urban mass transit vehicles. A research investigation to provide a control scheme for efficient power management of the vehicle is in progress. Design requirements have been evaluated using functional analysis and trade studies of alternative power sources and controls. The design issues include portability, weight, and emission/fuel efficiency of induction motors, permanent magnets, and batteries. A strategic design scheme to manage power requirements using advanced control systems is presented. It exploits fuzzy logic technology and a rule-based decision support scheme. The results of our study will enhance the economic and technical feasibility of low-emission, fuel-efficient urban mass transit buses. The design team includes undergraduate researchers in our department. Sample results using the NASA HTEV simulation tool are presented.
Numerical simulation of electrophoresis separation processes
NASA Technical Reports Server (NTRS)
Ganjoo, D. K.; Tezduyar, T. E.
1986-01-01
A new Petrov-Galerkin finite element formulation has been proposed for transient convection-diffusion problems. Most Petrov-Galerkin formulations take into account only the spatial discretization, and the weighting functions so developed give satisfactory solutions for steady-state problems. Though these schemes can be used for transient problems, there is scope for improvement. The schemes proposed here, which consider temporal as well as spatial discretization, provide improved solutions. Electrophoresis, which involves the motion of charged entities under the influence of an applied electric field, is governed by equations similar to those encountered in fluid flow problems, i.e., transient convection-diffusion equations. Test problems are solved in electrophoresis and fluid flow. The results obtained are satisfactory. It is also expected that these schemes, suitably adapted, will improve the numerical solutions of the compressible Euler and Navier-Stokes equations.
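For the steady-state limit, the classical result behind Petrov-Galerkin upwind weighting can be sketched: choosing the upwind parameter alpha = coth(Pe) - 1/Pe makes the 1-D convection-diffusion discretization nodally exact. A minimal sketch of our own (assuming at least three interior nodes), not the paper's transient scheme:

```python
import math

def solve_conv_diff(u=1.0, k=0.01, n=10):
    """Steady 1-D convection-diffusion u*c' = k*c'' on [0,1], c(0)=0, c(1)=1,
    discretized with central differences plus the optimal Petrov-Galerkin
    upwind weighting alpha = coth(Pe) - 1/Pe (nodally exact). Assumes n >= 3."""
    h = 1.0 / n
    pe = u * h / (2.0 * k)                    # element Peclet number
    alpha = 1.0 / math.tanh(pe) - 1.0 / pe    # optimal upwind parameter
    kt = k * (1.0 + alpha * pe)               # modified (added) diffusivity
    lo = -kt / h**2 - u / (2.0 * h)           # sub-diagonal coefficient
    di = 2.0 * kt / h**2                      # diagonal coefficient
    up = -kt / h**2 + u / (2.0 * h)           # super-diagonal coefficient
    m = n - 1                                 # number of unknown interior nodes
    cp, dp = [0.0] * m, [0.0] * m             # Thomas-algorithm sweep arrays
    cp[0], dp[0] = up / di, 0.0               # c(0)=0 contributes zero rhs
    for i in range(1, m):
        rhs = -up if i == m - 1 else 0.0      # c(1)=1 enters the last row
        den = di - lo * cp[i - 1]
        cp[i] = up / den
        dp[i] = (rhs - lo * dp[i - 1]) / den
    x = [0.0] * m
    x[m - 1] = dp[m - 1]
    for j in range(m - 2, -1, -1):            # back substitution
        x[j] = dp[j] - cp[j] * x[j + 1]
    return [0.0] + x + [1.0]                  # full nodal solution including BCs
```

At an element Peclet number of 5 (a strongly convection-dominated case where central differencing oscillates), the weighted scheme matches the exact exponential boundary-layer solution at every node.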
Wash-away of contaminant downstream of a backward-facing step over a range of Schmidt number
NASA Astrophysics Data System (ADS)
Min, Hannah; Fischer, Paul F.; Pearlstein, Arne J.
2017-11-01
We report computations of two-dimensional unsteady convective mass transfer in flow over a backward-facing step, in which a contaminant initially present downstream of the step is "washed away". Results are presented for a range of Schmidt numbers, showing how the recirculation region downstream of the step not only serves to retain contaminant near the step, but also transports contaminant upstream towards the step. The results for the highest Schmidt number considered (2650) are relevant to wash-away of low-molecular-weight species in liquids, for which some implications are discussed.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find that the choice of heteroscedastic error modeling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of severely degraded precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
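The Box-Cox schemes referred to above transform flows before computing residuals, so that the larger-errors-at-higher-flows pattern is (approximately) removed in the transformed space. A minimal sketch of the transform and its inverse (function names are ours):

```python
import math

def boxcox(q, lam):
    """Box-Cox transform of a positive streamflow value; lam = 0 is the log scheme."""
    return math.log(q) if lam == 0.0 else (q ** lam - 1.0) / lam

def boxcox_inv(z, lam):
    """Inverse Box-Cox transform."""
    return math.exp(z) if lam == 0.0 else (lam * z + 1.0) ** (1.0 / lam)

def transformed_residuals(q_obs, q_sim, lam=0.2):
    """Residual errors computed in the transformed space, where the
    heteroscedasticity with respect to simulated streamflow is reduced."""
    return [boxcox(o, lam) - boxcox(s, lam) for o, s in zip(q_obs, q_sim)]
```

With λ = 0.2, a 10-unit error on a flow of 100 and a 0.1-unit error on a flow of 1 map to transformed residuals of comparable magnitude, which is the point of the scheme.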
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert;
2017-01-01
This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for the shallow cumulus component in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component within a specific scheme had more impact than the choice among the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize the full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so the simulation errors there were mostly attributable to the overall bias. In contrast, the spatial patterns over the Great Plains regions, as well as the temporal variation over most parts of the CONUS, were relatively sensitive to the cumulus parameterization selection. Overall, for the seasonally averaged daily rainfall, adopting a single simulation result was preferable to generating an ensemble as long as the simulations' overall biases shared the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors when temporal variation was also considered.
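For two ensemble members, an optimally weighted ensemble of this kind reduces to a small least-squares problem. A sketch under simplifying assumptions (two simulations, unconstrained weights, RMSE against the reference as the target metric):

```python
def optimal_weights(sim1, sim2, ref):
    """Least-squares weights minimizing the RMSE of w1*sim1 + w2*sim2 vs ref,
    via the closed-form solution of the 2x2 normal equations."""
    a11 = sum(x * x for x in sim1)
    a22 = sum(x * x for x in sim2)
    a12 = sum(x * y for x, y in zip(sim1, sim2))
    b1 = sum(x * r for x, r in zip(sim1, ref))
    b2 = sum(x * r for x, r in zip(sim2, ref))
    det = a11 * a22 - a12 * a12          # assumes the members are not collinear
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2
```

If the reference happens to be an exact linear combination of the members, the weights are recovered exactly; in practice they simply minimize the residual RMSE.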
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion-compensated bi-prediction have suggested that further rate-distortion performance improvements are possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
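The weighted bi-prediction itself amounts to an integer-weighted average of two predictor blocks. A minimal HEVC-style sketch (the particular weights and shift below are illustrative assumptions, not the paper's coefficient sets):

```python
def biprediction(block0, block1, w0=1, w1=1, shift=1):
    """Integer weighted bi-prediction of two predictor blocks:
    pred = (w0*p0 + w1*p1 + rounding_offset) >> shift,
    where w0 + w1 == 1 << shift so the weights sum to one."""
    assert w0 + w1 == 1 << shift
    off = 1 << (shift - 1)                     # rounding offset
    return [[(w0 * p0 + w1 * p1 + off) >> shift
             for p0, p1 in zip(r0, r1)]
            for r0, r1 in zip(block0, block1)]
```

With the defaults this is plain averaging of the two predictors; unequal weights (e.g. 3/4 and 1/4) shift the prediction toward the better-matching block.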
Role of atmosphere-ocean interactions in supermodeling the tropical Pacific climate
NASA Astrophysics Data System (ADS)
Shen, Mao-Lin; Keenlyside, Noel; Bhatt, Bhuwan C.; Duane, Gregory S.
2017-12-01
The supermodel strategy interactively combines several models to outperform the individual models comprising it. A key advantage of the approach is that nonlinear improvements can be achieved, in contrast to the linear weighted combination of individual unconnected models. This property is found in a climate supermodel constructed by coupling two versions of an atmospheric model differing only in their convection scheme to a single ocean model. The ocean model receives a weighted combination of the momentum and heat fluxes. Optimal weights can produce a supermodel with a basic state similar to observations: a single Intertropical Convergence zone (ITCZ), with a western Pacific warm pool and an equatorial cold tongue. This is in stark contrast to the erroneous double ITCZ pattern simulated by both of the two stand-alone coupled models. By varying weights, we develop a conceptual scheme to explain how combining the momentum fluxes of the two different atmospheric models affects equatorial upwelling and surface wind feedback so as to give a realistic basic state in the tropical Pacific. In particular, we propose a mechanism based on the competing influences of equatorial zonal wind and off-equatorial wind stress curl in driving equatorial upwelling in the coupled models. Our results show how nonlinear ocean-atmosphere interaction is essential in combining these two effects to build different sea surface temperature structures, some of which are realistic. They also provide some insight into observed and modelled tropical Pacific climate.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2009-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Results from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. 
For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or modifying the scheme stencil to reflect the direction of strong coupling.
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
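The weighted CQR objective combines quantile check losses across several quantile levels. A scalar-covariate sketch, taking the paper's data-driven weights as given (the per-quantile intercepts b_k are passed in explicitly):

```python
def check_loss(r, tau):
    """Quantile check loss rho_tau(r) = r * (tau - 1{r < 0})."""
    return r * (tau - (1.0 if r < 0.0 else 0.0))

def weighted_cqr_objective(beta, bs, xs, ys, taus, weights):
    """Weighted composite quantile regression objective for a scalar covariate:
        sum_k w_k * sum_i rho_{tau_k}(y_i - b_k - beta * x_i)."""
    return sum(w * sum(check_loss(y - b - beta * x, tau)
                       for x, y in zip(xs, ys))
               for tau, b, w in zip(taus, bs, weights))
```

On noise-free data y = 2x the objective vanishes exactly at beta = 2 for any set of quantile levels, so a simple grid scan recovers the slope; with noisy data the composite loss pools information across quantiles, which is the source of the efficiency gains reported above.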
A weighted ℓ1-minimization approach for sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2014-06-15
This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
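Weighted ℓ1-minimization penalizes each coefficient according to prior knowledge of its decay: a larger weight pushes that coefficient harder toward zero. A minimal ISTA-style sketch of the weighted problem (a generic dense solver sketch of our own, not the authors' algorithm):

```python
def soft(x, t):
    """Soft-thresholding operator (proximal map of t*|x|)."""
    return (x - t) if x > t else (x + t) if x < -t else 0.0

def weighted_ista(A, y, w, lam=0.1, step=None, iters=500):
    """ISTA for min_c 0.5*||A c - y||^2 + lam * sum_j w_j |c_j|.
    A is a dense row-major matrix (list of rows); w_j are the a priori weights."""
    m, n = len(A), len(A[0])
    if step is None:
        # crude but safe step size: 1 / squared Frobenius norm of A
        step = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    c = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * c[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]     # gradient
        c = [soft(c[j] - step * g[j], step * lam * w[j]) for j in range(n)]
    return c
```

On an identity system the solution is simply the soft-thresholded data, so small coefficients (below their weighted threshold) are driven exactly to zero, which is the sparsity mechanism the weighting tunes.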
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
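The IRLS idea of down-weighting cases by their distance can be sketched in the simplest setting, a robust location estimate with Huber weights (a one-dimensional illustration only, not the structural-equation estimator itself):

```python
def irls_location(xs, k=1.345, iters=50):
    """Iteratively reweighted least squares for a robust mean:
    w_i = 1 if |x_i - mu| <= k*s, else k*s/|x_i - mu| (Huber weights),
    with the scale s fixed at the MAD of an initial median fit."""
    xs_sorted = sorted(xs)
    mu = xs_sorted[len(xs) // 2]                       # start at the median
    s = sorted(abs(x - mu) for x in xs)[len(xs) // 2] / 0.6745 or 1.0
    for _ in range(iters):
        ws = []
        for x in xs:
            r = abs(x - mu)
            ws.append(1.0 if r <= k * s else k * s / r)
        mu = sum(w * x for w, x in zip(ws, xs)) / sum(ws)  # weighted LS update
    return mu
```

A gross outlier receives a small weight and barely moves the estimate, whereas it drags the ordinary mean far from the bulk of the data.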
An Introduction to "Benefit of the Doubt" Composite Indicators
ERIC Educational Resources Information Center
Cherchye, Laurens; Moesen, Willem; Rogge, Nicky; Van Puyenbroeck, Tom
2007-01-01
Despite their increasing use, composite indicators remain controversial. The undesirable dependence of countries' rankings on the preliminary normalization stage, and the disagreement among experts/stakeholders on the specific weighting scheme used to aggregate sub-indicators, are often invoked to undermine the credibility of composite indicators.…
SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM
A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. Fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.
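The QR step can be sketched with a thin QR via modified Gram-Schmidt; this is a generic stand-in for whatever factorization routine the authors use, illustrating only that QR is a cheaper route to an orthogonal subspace basis than an SVD:

```python
def qr_gram_schmidt(A):
    """Thin QR factorization A = Q R via modified Gram-Schmidt.
    A is an m x n row-major matrix (list of rows) with m >= n."""
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    v = [row[:] for row in A]                     # working copy of the columns
    for j in range(n):
        R[j][j] = sum(v[i][j] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][j] = v[i][j] / R[j][j]           # normalize column j
        for k in range(j + 1, n):
            R[j][k] = sum(Q[i][j] * v[i][k] for i in range(m))
            for i in range(m):
                v[i][k] -= R[j][k] * Q[i][j]      # remove the Q_j component
    return Q, R
```

The columns of Q form an orthonormal basis of the column space, which is what subspace-weighting schemes of this kind operate on.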
What are the emerging features of community health insurance schemes in East Africa?
Basaza, Robert; Pariyo, George; Criel, Bart
2009-01-01
Background: The three East African countries of Uganda, Tanzania, and Kenya are characterized by high poverty levels, high population growth rates, high prevalence of HIV/AIDS, under-funding of the health sector, poor access to quality health care, and low health insurance coverage. Tanzania and Kenya have user fees, whereas Uganda abolished user fees in publicly owned health units. Objective: To provide a comparative description of community health insurance (CHI) schemes in the three East African countries of Uganda, Tanzania, and Kenya, and thereby provide a basis for future policy research on the development of CHI schemes. Methods: An analytical grid of 10 distinctive items pertaining to the nature of CHI schemes was developed so as to have a uniform lens for comparing the CHI situation in each country. Results and conclusions: The majority of the schemes have been in existence for a relatively short time (less than 10 years) and their number remains small. Further research is needed to identify the mix and weight of factors that cause people to refrain from joining schemes. Specific issues that could be addressed in subsequent studies are whether the current schemes provide financial protection, increase access to quality care, and affect the equity of health services financing and delivery. On the basis of this knowledge, rational policy decisions can be taken. Governments could then consider the option of playing a larger role in advocacy, paying for the poorest, and developing an enabling policy and legal framework. PMID:22312207
Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
NASA Astrophysics Data System (ADS)
Gao, Ligang; Wang, I.-Ting; Chen, Pai-Yu; Vrudhula, Sarma; Seo, Jae-sun; Cao, Yu; Hou, Tuo-Hung; Yu, Shimeng
2015-11-01
A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states could be continuously tuned by identical programming pulses. In order to demonstrate the advantages of parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. If realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy of MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li
2014-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphological and kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, together with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning pipeline, including feature subset selection and various classification schemes, was employed to build a prognostic model, which served as a foundation for evaluating the combined effect of the multi-sided features in predicting lesion type. Various measurements, including cross validation and receiver operating characteristics, were used to quantify the diagnostic performance of each feature as well as their combination. All seven selected features were found to be statistically different between the malignant and benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable (age), one morphological variable (slope), three texture features (entropy, inverse difference, and information correlation), one kinetic feature (SER), and one DWI feature (the apparent diffusion coefficient, ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. 
Multi-sided variables which characterize the morphological, kinetic, pathological properties and DWI measurement of ADC can dramatically improve the discriminatory power of breast lesions.
Genetic Parameter Estimates of Carcass Traits under National Scale Breeding Scheme for Beef Cattle.
Do, ChangHee; Park, ByungHo; Kim, SiDong; Choi, TaeJung; Yang, BohSuk; Park, SuBong; Song, HyungJun
2016-08-01
Carcass and price traits of 72,969 Hanwoo cows, bulls and steers aged 16 to 80 months at slaughter collected from 2002 to 2013 at 75 beef packing plants in Korea were analyzed to determine heritability, correlation and breeding value using the Multi-Trait restricted maximum likelihood (REML) animal model procedure. The traits included carcass measurements, scores and grades at 24 h postmortem and bid prices at auction. Relatively high heritability was found for maturity (0.41±0.031), while moderate heritability estimates were obtained for backfat thickness (0.20±0.018), longissimus muscle (LM) area (0.23±0.020), carcass weight (0.28±0.019), yield index (0.20±0.018), yield grade (0.16±0.017), marbling (0.28±0.021), texture (0.14±0.016), quality grade (0.26±0.016) and price/kg (0.24±0.025). Relatively low heritability estimates were observed for meat color (0.06±0.013) and fat color (0.06±0.012). Heritability estimates for most traits were lower than those in the literature. Genetic correlations of carcass measurements with characteristic scores or quality grade of carcass ranged from -0.27 to +0.21. Genetic correlations of yield grade with backfat thickness, LM area and carcass weight were 0.91, -0.43, and -0.09, respectively. Genetic correlations of quality grade with scores of marbling, meat color, fat color and texture were -0.99, 0.48, 0.47, and 0.98, respectively. Genetic correlations of price/kg with LM area, carcass weight, marbling, meat color, texture and maturity were 0.57, 0.64, 0.76, -0.41, -0.79, and -0.42, respectively. Genetic correlations of carcass price with LM area, carcass weight, marbling and texture were 0.61, 0.57, 0.64, and -0.73, respectively, with standard errors ranging from ±0.047 to ±0.058. The mean carcass weight breeding values increased by more than 8 kg, whereas the mean marbling scores decreased by approximately 0.2 from 2000 through 2009. 
Overall, the results suggest that genetic improvement of productivity and carcass quality could be obtained under the national scale breeding scheme of Korea for Hanwoo and that continuous efforts to improve the breeding scheme should be made to increase genetic progress.
Navier-Stokes analysis of cold scramjet-afterbody flows
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.
1989-01-01
The progress of two efforts in coding solutions of the Navier-Stokes equations is summarized. The first effort concerns a 3-D space-marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, computing the flow inside the nozzle for both a N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code can use high-order numerical integration schemes such as the 4th-order MacCormack and Gottlieb-MacCormack schemes. Changes to the work include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entry of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification of the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adoption of a relaxation formula to account for the turbulence in the mixing shear layer.
Energetic-ion acceleration and transport in the upstream region of Jupiter: Voyager 1 and 2
NASA Technical Reports Server (NTRS)
Baker, D. N.; Zwickl, R. D.; Carbary, J. F.; Krimigis, S. M.; Lepping, R. P.
1982-01-01
Long-lived upstream energetic ion events at Jupiter appear to be very similar in nearly all respects to upstream ion events at Earth. A notable difference between the two planetary systems is the enhanced heavy ion compositional signature reported for the Jovian events. This compositional feature has suggested that ions escaping from the Jovian magnetosphere play an important role in forming upstream ion populations at Jupiter. In contrast, models of energetic upstream ions at Earth emphasize in situ acceleration of reflected solar wind ions within the upstream region itself. Using Voyager 1 and 2 energetic (approximately 30 keV) ion measurements near the magnetopause, in the magnetosheath, and immediately upstream of the bow shock, the compositional patterns are examined together with typical energy spectra in each of these regions. A model involving upstream Fermi acceleration early in events and emphasizing energetic particle escape in the prenoon part of the Jovian magnetosphere late in events is presented to explain many of the features in the upstream region of Jupiter.
White-nose syndrome pathology grading in Nearctic and Palearctic bats
Pikula, Jiri; Amelon, Sybill K.; Bandouchova, Hana; Bartonička, Tomáš; Berkova, Hana; Brichta, Jiri; Hooper, Sarah; Kokurewicz, Tomasz; Kolarik, Miroslav; Köllner, Bernd; Kovacova, Veronika; Linhart, Petr; Piacek, Vladimir; Turner, Gregory G.; Zukal, Jan; Martínková, Natália
2017-01-01
While white-nose syndrome (WNS) has decimated hibernating bat populations in the Nearctic, species from the Palearctic appear to cope better with the fungal skin infection causing WNS. This has encouraged multiple hypotheses on the mechanisms leading to differential survival of species exposed to the same pathogen. To facilitate intercontinental comparisons, we proposed a novel pathogenesis-based grading scheme consistent with WNS diagnosis histopathology criteria. UV light-guided collection was used to obtain single biopsies from Nearctic and Palearctic bat wing membranes non-lethally. The proposed scheme scores eleven grades associated with WNS on histopathology. Given weights reflective of grade severity, the sum of findings from an individual results in a weighted cumulative WNS pathology score. The probability of finding fungal skin colonisation and single, multiple or confluent cupping erosions increased with increasing Pseudogymnoascus destructans load. Increasing fungal load mimicked progression of skin infection from epidermal surface colonisation to deep dermal invasion. Similarly, the number of UV-fluorescent lesions increased with increasing weighted cumulative WNS pathology score, demonstrating congruence between WNS-associated tissue damage and extent of UV fluorescence. In a case report, we demonstrated that UV fluorescence disappears within two weeks of euthermy. The change in fluorescence was coupled with a reduction in the weighted cumulative WNS pathology score, whereby both methods lost diagnostic utility. While weighted cumulative WNS pathology scores were greater in the Nearctic than in the Palearctic, values for Nearctic bats were within the range of those for Palearctic species. Accumulation of wing damage probably influences mortality in affected bats, as demonstrated by a fatal case of Myotis daubentonii with natural WNS infection and healing in Myotis myotis.
The proposed semi-quantitative pathology score provided good agreement between experienced raters, showing it to be a powerful and widely applicable tool for defining WNS severity. PMID:28767673
Frankel, Arthur D.; Petersen, Mark D.
2008-01-01
The geometry and recurrence times of large earthquakes associated with the Cascadia Subduction Zone (CSZ) were discussed and debated at a March 28-29, 2006 Pacific Northwest workshop for the USGS National Seismic Hazard Maps. The CSZ is modeled from Cape Mendocino in California to Vancouver Island in British Columbia. We include the same geometry and weighting scheme as was used in the 2002 model (Frankel and others, 2002) based on thermal constraints (Fig. 1; Fluck and others, 1997, and a reexamination by Wang et al., 2003, Fig. 11, eastern edge of intermediate shading). This scheme includes four possibilities for the lower (eastern) limit of seismic rupture: the base of the elastic zone (weight 0.1), the base of the transition zone (weight 0.2), the midpoint of the transition zone (weight 0.2), and a model with a long north-south segment at 123.8° W in the southern and central portions of the CSZ, with a dogleg to the northwest in the northern portion of the zone (weight 0.5). The latter model was derived from the approximate average longitude of the contour of the 30 km depth of the CSZ as modeled by Fluck et al. (1997). A global study of the maximum depth of thrust earthquakes on subduction zones by Tichelaar and Ruff (1993) indicated maximum depths of about 40 km for most of the subduction zones studied, although the Mexican subduction zone had a maximum depth of about 25 km (R. LaForge, pers. comm., 2006). The recent inversion of GPS data by McCaffrey et al. (2007) shows a significant amount of coupling (a coupling factor of 0.2-0.3) as far east as 123.8° W in some portions of the CSZ. Both of these lines of evidence lend support to the model with a north-south segment at 123.8° W.
NASA Astrophysics Data System (ADS)
Chao, I.-Fen; Zhang, Tsung-Min
2015-06-01
Long-reach passive optical networks (LR-PONs) have been considered promising solutions for future access networks. In this paper, we propose a distributed medium access control (MAC) scheme over an advantageous LR-PON network architecture that reroutes the control information from and back to all ONUs through an (N + 1) × (N + 1) star coupler (SC) deployed near the ONUs, thereby overcoming the extremely long propagation delay problem in LR-PONs. In the network, the control slot is designed to contain all bandwidth requirements of all ONUs and is in-band time-division-multiplexed with a number of data slots within a cycle. In the proposed MAC scheme, a novel profit-weight-based dynamic bandwidth allocation (P-DBA) scheme is presented. The algorithm is designed to efficiently and fairly distribute the amount of excess bandwidth based on a profit value derived from the excess bandwidth usage of each ONU, which resolves the problems of previously reported DBA schemes that are either unfair or inefficient. The simulation results show that the proposed decentralized algorithms exhibit a nearly three-order-of-magnitude improvement in delay performance compared to centralized algorithms over LR-PONs. Moreover, the newly proposed P-DBA scheme guarantees low delay and fairness even under attack by a malicious ONU, irrespective of traffic load and burstiness.
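The core idea of a profit-weighted excess-bandwidth distribution can be sketched as follows. This is a minimal illustration, not the paper's P-DBA algorithm: the function name, the guaranteed-minimum rule, and the proportional-to-profit split are assumptions, since the abstract does not give the exact formula.

```python
def allocate_excess(requests, guaranteed, excess_pool, profits):
    """Grant each ONU min(request, guaranteed), then split an excess pool
    among overloaded ONUs in proportion to a per-ONU profit weight
    (hypothetical rule standing in for the paper's P-DBA formula)."""
    # ONUs asking for more than their guaranteed share compete for the pool
    over = {i: requests[i] - guaranteed[i]
            for i in range(len(requests)) if requests[i] > guaranteed[i]}
    grants = [min(r, g) for r, g in zip(requests, guaranteed)]
    if not over:
        return grants
    total_profit = sum(profits[i] for i in over)
    for i, need in over.items():
        share = excess_pool * profits[i] / total_profit
        grants[i] += min(need, share)  # never grant beyond the request
    return grants
```

With requests [5, 15, 20], a guaranteed 10 each, a pool of 10, and profits [1, 1, 3], the two overloaded ONUs split the pool 1:3, yielding grants [5, 12.5, 17.5].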
Clark, William R; Winchester, James F
2003-10-01
Molecular weight has traditionally been the parameter most commonly used to classify uremic toxins, with a value of approximately 500 Da frequently used as a demarcation point below which the molecular weights of small nitrogenous waste products fall. This toxin group, the most extensively studied from a clinical perspective, is characterized by a high degree of water solubility and the absence of protein binding. However, uremia is mediated by the retention of a plethora of other compounds having characteristics that differ significantly from those of the previously mentioned group. As opposed to the relative homogeneity of the nitrogenous metabolite class, other uremic toxins collectively are a very heterogeneous group, not only with respect to molecular weight but also other characteristics, such as protein binding and hydrophobicity. A recently proposed classification scheme by the European Uraemic Toxin Work Group subdivides the remainder of molecules into 2 categories: protein-bound solutes and middle molecules. For the latter group, the Work Group proposes a molecular weight range (500-60,000 Da) that incorporates many toxins identified since the original middle molecule hypothesis, for which the upper molecular weight limit was approximately 2,000 Da. In fact, low-molecular-weight peptides and proteins (LMWPs) comprise nearly the entire middle molecule category in the new scheme. The purpose of this article is to provide an overview of the middle molecule class of uremic toxins, with the focus on LMWPs. A brief review of LMWP metabolism under conditions of normal (and in a few cases, abnormal) renal function will be presented. The physical characteristics of several LMWPs will also be presented, including molecular weight, conformation, and charge. Specific LMWPs to be covered will include beta 2-microglobulin, complement proteins (C3a and Factor D), leptin, and proinflammatory cytokines. 
The article will also include a discussion of the treatment-related factors influencing dialytic removal of middle molecules. Once these factors, which include membrane characteristics, protein-membrane interactions, and solute removal mechanisms, are discussed, an overview of the different therapeutic strategies used to enhance clearance of these compounds is provided.
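The classification scheme summarized above (small water-soluble solutes below 500 Da, protein-bound solutes, and middle molecules in the 500-60,000 Da range) can be written as a small lookup. This is an illustrative sketch only, assuming the cited cut-offs; the function name and category labels are not clinical terminology from the article.

```python
def classify_uremic_toxin(mol_weight_da, protein_bound=False):
    """Rough classifier following the European Uraemic Toxin Work Group
    scheme as described in the text (500 Da and 60,000 Da cut-offs);
    an illustrative sketch, not a clinical tool."""
    if protein_bound:
        return "protein-bound solute"
    if mol_weight_da < 500:
        return "small water-soluble solute"
    if mol_weight_da <= 60000:
        # includes most low-molecular-weight peptides and proteins (LMWPs)
        return "middle molecule"
    return "outside scheme (> 60 kDa)"
```

For example, urea (~60 Da) falls in the small water-soluble class, while beta 2-microglobulin (~11,800 Da) is a middle molecule under the new scheme.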
The Weighted-Average Lagged Ensemble.
DelSole, T; Trenary, L; Tippett, M K
2017-11-01
A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
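For a two-member lagged ensemble the sum-to-one, minimum-mean-square-error weights have a standard closed form, which also shows how correlated, rapidly growing errors produce a negative weight. This is a generic sketch of the idea in the abstract, not the paper's derivation.

```python
def two_member_weights(var1, var2, cov12):
    """Optimal sum-to-one weights for two bias-corrected forecasts with
    error variances var1, var2 and error covariance cov12, minimizing
    the mean-square error of the weighted mean (standard minimum-variance
    result)."""
    denom = var1 + var2 - 2.0 * cov12
    w1 = (var2 - cov12) / denom
    return w1, 1.0 - w1
```

With uncorrelated errors (cov12 = 0) this reduces to inverse-variance weighting, e.g. variances 1 and 4 give weights (0.8, 0.2). If the longer-lead error is both larger and highly correlated with the shorter-lead error (say variances 1 and 4 with covariance 1.8), the weight on the longer-lead forecast becomes negative, as described in the abstract.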
Ma, Long-Sheng; Robertsson, Lennart; Picard, Susanne; Zucco, Massimo; Bi, Zhiyi; Wu, Shenghai; Windeler, Robert S
2004-03-15
The first international comparison of femtosecond laser combs has been carried out at the International Bureau of Weights and Measures (BIPM). Three comb systems were involved: BIPM-C1 and BIPM-C2 from the BIPM and ECNU-C1 from the East China Normal University (ECNU). The agreement among the three combs was found to be on the subhertz level in the vicinity of 563 THz. A frequency difference measurement scheme was demonstrated that is suitable for general comb comparisons.
Low Temperature Performance of High-Speed Neural Network Circuits
NASA Technical Reports Server (NTRS)
Duong, T.; Tran, M.; Daud, T.; Thakoor, A.
1995-01-01
Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for such tasks as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, they offer an orders-of-magnitude speed advantage. The basic building blocks of the new architecture are processing elements called neurons, implemented as nonlinear operational amplifiers with a sigmoidal transfer function, interconnected through weighted connections called synapses, implemented using circuitry for weight storage and multiply functions in an analog, digital, or hybrid scheme.
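The building block described above, a weighted sum over synapses followed by a sigmoidal transfer function, can be modeled in a few lines of software. This is a behavioral sketch of the analog hardware element, with illustrative function and parameter names.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Model of one processing element: synapse weights multiply the
    inputs, the results are summed, and a sigmoid squashes the sum
    into (0, 1), mimicking the op-amp transfer function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))
```

A zero net activation yields an output of 0.5, the midpoint of the sigmoid, while strongly positive activations saturate toward 1, as the hardware amplifier does.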
Katiyar, Prateek; Divine, Mathew R; Kohlhofer, Ursula; Quintanilla-Martinez, Leticia; Schölkopf, Bernhard; Pichler, Bernd J; Disselhorst, Jonathan A
2017-04-01
In this study, we described and validated an unsupervised segmentation algorithm for the assessment of tumor heterogeneity using dynamic 18F-FDG PET. The aim of our study was to objectively evaluate the proposed method and make comparisons with compartmental modeling parametric maps and SUV segmentations using simulations of clinically relevant tumor tissue types. Methods: An irreversible 2-tissue-compartmental model was implemented to simulate clinical and preclinical 18F-FDG PET time-activity curves using population-based arterial input functions (80 clinical and 12 preclinical) and the kinetic parameter values of 3 tumor tissue types. The simulated time-activity curves were corrupted with different levels of noise and used to calculate the tissue-type misclassification errors of spectral clustering (SC), parametric maps, and SUV segmentation. The utility of the inverse noise variance- and Laplacian score-derived frame weighting schemes before SC was also investigated. Finally, the SC scheme with the best results was tested on a dynamic 18F-FDG measurement of a mouse bearing subcutaneous colon cancer and validated using histology. Results: In the preclinical setup, the inverse noise variance-weighted SC exhibited the lowest misclassification errors (8.09%-28.53%) at all noise levels in contrast to the Laplacian score-weighted SC (16.12%-31.23%), unweighted SC (25.73%-40.03%), parametric maps (28.02%-61.45%), and SUV (45.49%-45.63%) segmentation. The classification efficacy of both weighted SC schemes in the clinical case was comparable to the unweighted SC. When applied to the dynamic 18F-FDG measurement of colon cancer, the proposed algorithm accurately identified densely vascularized regions from the rest of the tumor. In addition, the segmented regions and clusterwise average time-activity curves showed excellent correlation with the tumor histology.
Conclusion: The promising results of SC mark its position as a robust tool for quantification of tumor heterogeneity using dynamic PET studies. Because SC tumor segmentation is based on the intrinsic structure of the underlying data, it can be easily applied to other cancer types as well. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Magam, Sami M; Zakaria, Mohamad Pauzi; Halimoon, Normala; Aris, Ahmad Zaharin; Kannan, Narayanan; Masood, Najat; Mustafa, Shuhaimi; Alkhadher, Sadeq; Keshavarzifard, Mehrzad; Vaezzadeh, Vahab; Sani, Muhamad S A; Latif, Mohd Talib
2016-03-01
This is the first extensive report on linear alkylbenzenes (LABs) as sewage molecular markers in surface sediments collected from the Perlis, Kedah, Merbok, Prai, and Perak Rivers and Estuaries in the west of Peninsular Malaysia. Sediment samples were extracted, fractionated, and analyzed using gas chromatography mass spectrometry (GC-MS). The concentrations of total LABs ranged from 68 to 154 (Perlis River), 103 to 314 (Kedah River), 242 to 1062 (Merbok River), 1985 to 2910 (Prai River), and 217 to 329 ng g⁻¹ (Perak River) dry weight (dw). The highest levels of LABs were found at PI3 (Prai Estuary) due to the rapid industrialization and population growth in this region, while the lowest concentrations of LABs were found at PS1 (upstream of Perlis River). The LABs ratio of internal to external isomers (I/E) in this study ranged from 0.56 at KH1 (upstream of Kedah River) to 1.35 at MK3 (Merbok Estuary), indicating that the rivers receive raw sewage and primary treatment effluents in the study area. In general, the results of this paper highlighted the necessity of continued improvement of the water treatment system in Malaysia.
Hyperspectral proximal sensing of Salix alba trees in the Sacco river valley (Latium, Italy).
Moroni, Monica; Lupo, Emanuela; Cenedese, Antonio
2013-10-29
Recent developments in hardware and software have increased the possibilities and reduced the costs of hyperspectral proximal sensing. Through the analysis of high resolution spectroscopic measurements at the laboratory or field scales, this monitoring technique is suitable for quantitative estimates of biochemical and biophysical variables related to the physiological state of vegetation. Two systems for hyperspectral imaging have been designed and developed at DICEA-Sapienza University of Rome, one based on the use of spectrometers, the other on tunable interference filters. Both systems provide high spectral and spatial resolution with low weight, power consumption and cost. This paper describes the set-up of the tunable filter platform and its application to the investigation of the environmental status of the region crossed by the Sacco river (Latium, Italy). This was achieved by analyzing the spectral response given by tree samples, with roots partly or wholly submerged in the river, located upstream and downstream of an industrial area affected by contamination. The acquired data are represented as reflectance indices as well as reflectance values. Broadband and narrowband indices based on pigment content and carotenoids vs. chlorophyll content suggest that tree samples located upstream of the contaminated area are 'healthier' than those downstream.
Scott, Alison J; Wilson, Rebecca F
2011-01-01
Few studies have focused on overweight and obesity among rural African American youth in the Deep South, despite disproportionately high rates in this group. In addition, few studies have been conducted to elucidate how these disparities are created and perpetuated within rural communities in this region. This descriptive study explores community-based risks for overweight and obesity among African American youth in a rural town in the Deep South. We used ecological theory in conjunction with embodiment theory to explore how upstream ecological factors may contribute to risk of overweight and obesity for African American youth in a rural town in the Deep South. We conducted and analyzed in-depth interviews with African American community members who interact with youth in varying contexts (home, school, church, community). Participants most commonly stated that race relations, poverty, and the built environment were barriers to maintaining a healthy weight. Findings suggested the need for rural, community-based interventions that target obesity at multiple ecological levels and incorporate issues related to race, poverty, and the built environment. More research is needed to determine how disparities in obesity are created and perpetuated in specific community contexts.
Effects of commercial harvest on shovelnose sturgeon populations in the Upper Mississippi River
Koch, Jeff D.; Quist, Michael C.; Pierce, Clay L.; Hansen, Kirk A.; Steuck, Michael J.
2009-01-01
Shovelnose sturgeon Scaphirhynchus platorynchus have become an increasingly important commercial species in the upper Mississippi River (UMR) because of the collapse of foreign sturgeon (family Acipenseridae) populations and bans on imported caviar. In response to concerns about the sustainability of the commercial shovelnose sturgeon fishery in the UMR, we undertook this study to describe the demographics of the shovelnose sturgeon population and evaluate the influence of commercial harvest on shovelnose sturgeon populations in the UMR. A total of 1,682 shovelnose sturgeon were collected from eight study pools in 2006 and 2007 (Pools 4, 7, 9, 11, 13, 14, 16, and 18). Shovelnose sturgeon from upstream pools generally had greater lengths, weights, and ages than those from downstream pools. Additionally, mortality estimates were lower in upstream pools (Pools 4, 7, 9, and 11) than in downstream pools (Pools 13, 14, 16, and 18). Linear regression suggested that the slower growth of shovelnose sturgeon is a consequence of commercial harvest in the UMR. Modeling of potential management scenarios suggested that a 685-mm minimum length limit is necessary to prevent growth and recruitment overfishing of shovelnose sturgeon in the UMR.
Area under precision-recall curves for weighted and unweighted data.
Keilwagen, Jens; Grosse, Ivo; Grau, Jan
2014-01-01
Precision-recall curves are highly informative about the performance of binary classifiers, and the area under these curves is a popular scalar performance measure for comparing different classifiers. However, for many applications class labels are not provided with absolute certainty, but with some degree of confidence, often reflected by weights or soft labels assigned to data points. Computing the area under the precision-recall curve requires interpolating between adjacent supporting points, but previous interpolation schemes are not directly applicable to weighted data. Hence, even in cases where weights were available, they had to be neglected for assessing classifiers using precision-recall curves. Here, we propose an interpolation for precision-recall curves that can also be used for weighted data, and we derive conditions for classification scores yielding the maximum and minimum area under the precision-recall curve. We investigate accordances and differences of the proposed interpolation and previous ones, and we demonstrate that taking into account existing weights of test data is important for the comparison of classifiers.
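Weighted precision-recall supporting points can be computed by letting each example contribute its weight, rather than a unit count, to the true- and false-positive totals. This sketch shows only the supporting points; the paper's interpolation between adjacent points is not reproduced here, and the function name is illustrative.

```python
def weighted_pr_points(scores, labels, weights):
    """Sweep the decision threshold over descending scores and return
    (recall, precision) supporting points, with each example counting
    toward TP or FP in proportion to its weight."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(w for l, w in zip(labels, weights) if l == 1)
    tp = fp = 0.0
    points = []
    for i in order:
        if labels[i] == 1:
            tp += weights[i]
        else:
            fp += weights[i]
        points.append((tp / total_pos, tp / (tp + fp)))
    return points
```

With unit weights this reduces to the usual unweighted supporting points, which is the consistency property one would want before applying any interpolation scheme between them.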
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e. an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be recursively solved using a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed off-line. The proposed scheme can simultaneously extract multiple spectral components, and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. A law is found whereby the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, because each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
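The prefiltering step that makes B-spline interpolation exact at the samples can be illustrated in 1-D: the cubic B-spline coefficients are found by solving a tridiagonal system, after which the spline passes through every sample. This is a generic sketch under a mirror boundary assumption, not the paper's optimized recursive filter; all names are illustrative.

```python
def bspline3(t):
    """Cubic B-spline basis function (support |t| < 2)."""
    t = abs(t)
    if t < 1.0:
        return (4.0 - 6.0 * t * t + 3.0 * t ** 3) / 6.0
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

def prefilter(f):
    """Solve (c[k-1] + 4 c[k] + c[k+1]) / 6 = f[k] with mirror
    boundaries via the Thomas algorithm, so the resulting spline
    interpolates (not merely smooths) the samples."""
    n = len(f)
    lower = [0.0] + [1.0] * (n - 2) + [2.0]
    diag = [4.0] * n
    upper = [2.0] + [1.0] * (n - 2) + [0.0]
    rhs = [6.0 * v for v in f]
    for k in range(1, n):  # forward elimination
        m = lower[k] / diag[k - 1]
        diag[k] -= m * upper[k - 1]
        rhs[k] -= m * rhs[k - 1]
    c = [0.0] * n
    c[-1] = rhs[-1] / diag[-1]
    for k in range(n - 2, -1, -1):  # back substitution
        c[k] = (rhs[k] - upper[k] * c[k + 1]) / diag[k]
    return c

def spline_eval(c, x):
    """Evaluate the spline at x with mirrored coefficient extension."""
    n = len(c)
    def mirror(k):
        if k < 0:
            return -k
        if k >= n:
            return 2 * (n - 1) - k
        return k
    return sum(c[mirror(k)] * bspline3(x - k)
               for k in range(int(x) - 1, int(x) + 3))
```

Without the prefilter, directly using the samples as coefficients would blur the signal; with it, evaluating at any integer sample position recovers the original value exactly.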
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulated turbulent wake eddies.
Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)
NASA Astrophysics Data System (ADS)
Yan, Weizhong
2001-03-01
UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level comparable to that of a piloted aircraft. This paper applies multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the fault detection of the UAV are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely, majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The three fusion schemes (except the majority vote, which gives an average performance of the three classifiers) show classification performance better than or equal to that of the best individual classifier. The unavoidable correlation between classifiers with binary outputs is observed in this study. We conclude that it is the correlation between the classifiers that limits the fusion schemes from achieving even better performance.
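The first two fusion rules named above, majority vote and weighted majority vote over binary classifier outputs, are simple enough to sketch directly. The weighting source (e.g. validation accuracy) is an assumption; the abstract does not specify how the weights were derived.

```python
def majority_vote(decisions):
    """Plain majority vote over binary (0/1) classifier decisions."""
    return 1 if sum(decisions) * 2 > len(decisions) else 0

def weighted_majority_vote(decisions, weights):
    """Weighted vote: each classifier's 0/1 decision counts with a
    weight (for instance derived from its validation accuracy)."""
    yes = sum(w for d, w in zip(decisions, weights) if d == 1)
    return 1 if yes * 2 > sum(weights) else 0
```

With equal weights the weighted vote reduces to the plain majority vote; unequal weights let a single trusted classifier overrule the others, which is precisely where the two schemes can diverge.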
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
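Sparse regularization problems of this type are typically solved with proximal splitting methods; the core primitive is the proximal operator of the weighted l1 norm mentioned above, which is entrywise soft-thresholding (a generic sketch, not the authors' solver):

```python
import numpy as np

def prox_weighted_l1(x, step, w):
    """Proximal operator of step * ||w * x||_1: entrywise soft-thresholding
    with per-coefficient thresholds step * w.  In a sparse-regularization
    solver this is applied to the wavelet coefficients at each iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - step * w, 0.0)
```

Coefficients whose magnitude falls below their threshold are set exactly to zero, which is what produces sparse solutions; the per-coefficient weights w let the regularizer penalize different wavelet scales differently.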
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use the recently developed energy relaxation theory of Coquel and Perthame and high-order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gases. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law, and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The characteristic decomposition necessary for the high-order WENO schemes is performed on the characteristic fields based on the first part. The algorithm calls the original pressure law only once per grid point per time step, without needing its derivatives or any Riemann solvers. One- and two-dimensional numerical examples are shown to illustrate the effectiveness of this approach.
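For reference, the fifth-order WENO reconstruction underlying such schemes can be sketched in a few lines (classical Jiang-Shu smoothness indicators; a scalar illustration, not the authors' implementation, and omitting the characteristic decomposition):

```python
def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO-JS reconstruction of the left state at the i+1/2
    interface from five cell averages v[i-2..i+2]."""
    # three third-order candidate reconstructions
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13 / 12 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
    b1 = 13 / 12 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13 / 12 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
    # nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)
```

Near discontinuities the smoothness indicators drive the weight of any stencil crossing the jump toward zero, which is what gives the scheme its non-oscillatory behavior.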
Scattering-free optical levitation of a cavity mirror.
Guccione, G; Hosseini, M; Adlong, S; Johnsson, M T; Hope, J; Buchler, B C; Lam, P K
2013-11-01
We demonstrate the feasibility of levitating a small mirror using only radiation pressure. In our scheme, the mirror is supported by a tripod where each leg of the tripod is a Fabry-Perot cavity. The macroscopic state of the mirror is coherently coupled to the supporting cavity modes allowing coherent interrogation and manipulation of the mirror motion. The proposed scheme is an extreme example of the optical spring, where a mechanical oscillator is isolated from the environment and its mechanical frequency and macroscopic state can be manipulated solely through optical fields. We model the stability of the system and find a three-dimensional lattice of trapping points where cavity resonances allow for buildup of optical field sufficient to support the weight of the mirror. Our scheme offers a unique platform for studying quantum and classical optomechanics and can potentially be used for precision gravitational field sensing and quantum state generation.
Cycle of a closed gas-turbine plant with a gas-dynamic energy-separation device
NASA Astrophysics Data System (ADS)
Leontiev, A. I.; Burtsev, S. A.
2017-09-01
The efficiency of closed gas-turbine space-based plants is analyzed. The weight and size characteristics of closed gas-turbine plants are shown to be determined in many respects by the refrigerator-radiator parameters. A scheme for a closed gas-turbine plant with a gas-dynamic temperature-stratification device is proposed, and a calculation model is developed. This model shows that the cycle efficiency decreases by 2% in comparison with that of a closed gas-turbine plant operating by the traditional scheme, while the temperature at the output of the refrigerator-radiator increases by 28 K and its area decreases by 13.7%.
NASA Technical Reports Server (NTRS)
Berk, G.; Jean, P. N.; Rotholz, E.
1982-01-01
Several satellite uplink and downlink accessing schemes for customer premises service are compared. Four conceptual system designs are presented: satellite-routed frequency division multiple access (FDMA), satellite-switched time division multiple access (TDMA), processor-routed TDMA, and frequency-routed TDMA, operating in the 30/20 GHz band. The designs are compared on the basis of estimated satellite weight, system capacity, power consumption, and cost. The systems are analyzed for fixed multibeam coverage of the continental United States. Analysis shows that the system capacity is limited by the available satellite resources and by the terminal size and cost.
Experimental investigation of an astronaut maneuvering scheme.
NASA Technical Reports Server (NTRS)
Kane, T. R.; Headrick, M. R.; Yatteau, J. D.
1972-01-01
A new concept for astronaut maneuvering in space is proposed, and an experimental study undertaken to test this concept is described. The experiments performed appear to promise advantages over previously proposed schemes in terms of propellant economy, system weight, reliability, and safety. The simulation tests established the feasibility of the proposed maneuvering concept by showing that test subjects were able to place their bodies sufficiently near the reference position to avoid excessive angular-momentum build-up; that no difficulties were encountered in selecting self-rotation maneuvers suitable for effecting desired changes in orientation; and that the execution of these maneuvers produced the predicted reorientations without tiring the test subjects significantly.
Achromatic diffractive lens written onto a liquid crystal display.
Márquez, A; Iemmi, C; Campos, J; Yzuel, M J
2006-02-01
We propose a programmable diffractive lens written onto a liquid crystal display (LCD) that is able to provide equal focal lengths for several wavelengths simultaneously. To achieve this goal it is necessary that the LCD operate in the phase-only regime simultaneously for the different wavelengths. We design the appropriate lens for each wavelength, and then the lenses are spatially multiplexed onto the LCD. Various multiplexing schemes have been analyzed, and the random scheme shows the best performance. We further show the possibility of finely tuning the chromaticity of the focal spot by changing the relative weights of the multiplexing among the various wavelengths.
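The random multiplexing scheme described above can be sketched numerically: each LCD pixel is assigned, at random, the lens phase of one wavelength, with assignment probabilities playing the role of the relative weights (the wavelengths, focal length, and weights below are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def fresnel_phase(r2, wavelength, focal):
    """Quadratic (Fresnel) lens phase at squared radius r2, wrapped to [0, 2*pi)."""
    return (-np.pi * r2 / (wavelength * focal)) % (2 * np.pi)

def random_multiplex(r2, wavelengths, focal, weights):
    """Randomly assign each pixel to one wavelength's lens; the probabilities
    (weights) control the relative energy sent to each focal spot."""
    phases = np.stack([fresnel_phase(r2, w, focal) for w in wavelengths])
    choice = rng.choice(len(wavelengths), size=r2.shape, p=weights)
    return np.take_along_axis(phases, choice[None, ...], axis=0)[0]
```

Changing the probability vector shifts energy between the wavelengths' common focus, which is the mechanism the abstract uses to fine-tune the chromaticity of the focal spot.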
Wave-optics modeling of the optical-transport line for passive optical stochastic cooling
NASA Astrophysics Data System (ADS)
Andorf, M. B.; Lebedev, V. A.; Piot, P.; Ruan, J.
2018-03-01
Optical stochastic cooling (OSC) is expected to enable fast cooling of dense particle beams. Transitioning from microwave to optical frequencies enables stochastic cooling rates orders of magnitude higher than those achievable with classical microwave-based stochastic cooling systems. A subsystem critical to the OSC scheme is the focusing optics used to image radiation from the upstream "pickup" undulator to the downstream "kicker" undulator. In this paper, we present simulation results from wave-optics calculations carried out with the Synchrotron Radiation Workshop (SRW). Our simulations are performed in support of a proof-of-principle experiment planned at the Integrable Optics Test Accelerator (IOTA) at Fermilab. The calculations provide an estimate of the energy kick received by a 100-MeV electron as it propagates in the kicker undulator and interacts with the electromagnetic pulse it radiated at an earlier time while traveling through the pickup undulator.
A full-duplex optical access system with hybrid 64/16/4QAM-OFDM downlink
NASA Astrophysics Data System (ADS)
He, Chao; Tan, Ze-fu; Shao, Yu-feng; Cai, Li; Pu, He-sheng; Zhu, Yun-le; Huang, Si-si; Liu, Yu
2016-09-01
A full-duplex passive optical access scheme is proposed and verified by simulation, in which a hybrid 64/16/4-quadrature amplitude modulation (64/16/4QAM) orthogonal frequency division multiplexing (OFDM) optical signal is used for downstream transmission and a non-return-to-zero (NRZ) optical signal for upstream transmission. In the transmitting and receiving process for the downlink optical signal, in-phase/quadrature-phase (I/Q) modulation based on a Mach-Zehnder modulator (MZM) and homodyne coherent detection are employed, respectively. The simulation results show that a bit error ratio (BER) below the hard-decision forward error correction (HD-FEC) threshold is successfully obtained for the hybrid-modulation downlink OFDM optical signal over a transmission path of 20-km-long standard single-mode fiber (SSMF). In addition, by dividing the system bandwidth into several subchannels consisting of contiguous subcarriers, users can conveniently select different channels depending on their communication requirements.
Study on the capability of four-level partial response equalization in RSOA-based WDM-PON
NASA Astrophysics Data System (ADS)
Guo, Qi; Tran, An Vu
2010-12-01
The expected development of advanced video services with HDTV quality demands the delivery of Gb/s-class links to end users across the last-mile connection. Future access networks are also required to have long reach to reduce the number of central offices (COs). Driven by those requirements, we propose a novel equalization scheme that increases the capacity and reach of the wavelength division multiplexing passive optical network (WDM-PON) based on a low-bandwidth reflective semiconductor optical amplifier (RSOA). We investigate the characteristics of 10-Gb/s upstream transmission in a WDM-PON using an RSOA with only 1.2 GHz of electrical bandwidth and various lengths of fiber. It is shown that the proposed four-level partial response equalizer (PRE) is capable of mitigating the impact of intersymbol interference (ISI) in the signals received from optical network units (ONUs) located 0 km to 75 km away from the optical line terminal (OLT).
A Partial Least Squares Based Procedure for Upstream Sequence Classification in Prokaryotes.
Mehmood, Tahir; Bohlin, Jon; Snipen, Lars
2015-01-01
The upstream region of coding genes is important for several reasons, for instance for locating transcription factor binding sites and start-site initiation in genomic DNA. Motivated by a recently conducted study, where a multivariate approach was successfully applied to coding-sequence modeling, we introduce a partial least squares (PLS) based procedure for classifying true upstream prokaryotic sequences from background upstream sequences. The upstream sequences of coding genes conserved across genomes were considered in the analysis, where conserved coding genes were identified using the pan-genomics concept for each prokaryotic species considered. PLS uses a position-specific scoring matrix (PSSM) to study the characteristics of the upstream region. Results obtained by the PLS-based method were compared with the Gini importance of random forest (RF) and with support vector machines (SVM), a much-used method for sequence classification. The upstream-sequence classification performance was evaluated using cross-validation, and the suggested approach identifies the prokaryotic upstream region significantly better than RF (p-value < 0.01) and SVM (p-value < 0.01). Further, the proposed method also produced results that concurred with known biological characteristics of the upstream region.
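A PSSM of the kind used as input features above can be sketched as follows (toy sequences and a simple add-one pseudocount against a uniform background; the paper's exact construction may differ):

```python
import numpy as np

BASES = "ACGT"

def pssm(seqs, pseudo=1.0):
    """Position-specific scoring matrix (log2-odds against a uniform
    background) built from equal-length aligned sequences."""
    length = len(seqs[0])
    counts = np.full((4, length), pseudo)   # pseudocounts avoid log(0)
    for s in seqs:
        for j, ch in enumerate(s):
            counts[BASES.index(ch), j] += 1
    probs = counts / counts.sum(axis=0)     # per-position base probabilities
    return np.log2(probs / 0.25)

def score(seq, matrix):
    """Score one sequence by summing its per-position log-odds entries."""
    return sum(matrix[BASES.index(ch), j] for j, ch in enumerate(seq))
```

Sequences resembling the training set score high, background-like sequences score low; in the PLS setting these per-position entries serve as the predictor variables.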
Pope, L.M.; Brewer, L.D.; Foley, G.A.; Morgan, S.C.
1996-01-01
A study of the distribution and transport of atrazine in surface water in the 1,117-square-mile Delaware River Basin in northeast Kansas was conducted from July 1992 through September 1995. The purpose of this report is to present information to assess the present (1992-95) conditions and possible future changes in the distribution and magnitude of atrazine concentrations, loads, and yields spatially, temporally, and in relation to hydrologic conditions and land-use characteristics. A network of 11 stream-monitoring and sample-collection sites was established within the basin. Stream-water samples were collected during a wide range of hydrologic conditions throughout the study. Nearly 5,000 samples were analyzed by enzyme-linked immunosorbent assay (ELISA) for triazine herbicide concentrations. Daily mean triazine herbicide concentrations were calculated for all sampling sites and subsequently used to estimate daily mean atrazine concentrations with a linear-regression relation between ELISA-derived triazine concentrations and atrazine concentrations determined by gas chromatography/mass spectrometry for 141 dual-analyzed surface-water samples. During May, June, and July, time-weighted, daily mean atrazine concentrations in streams in the Delaware River Basin commonly exceeded the 3.0-ug/L (micrograms per liter) annual mean Maximum Contaminant Level (MCL) established by the U.S. Environmental Protection Agency for drinking-water supplies. Time-weighted, daily mean concentrations equal to or greater than 20 ug/L were not uncommon. However, most time-weighted, daily mean concentrations were less than 1.0 ug/L from August through April. The largest time-weighted, monthly mean atrazine concentrations occurred during May, June, and July. Most monthly mean concentrations between August and April were less than 0.50 ug/L. Large differences were documented in monthly mean concentrations within the basin.
Sites receiving runoff from the northern and northeastern parts of the Delaware River Basin had the largest monthly and annual mean atrazine concentrations. Time-weighted, annual mean atrazine concentrations did not exceed the MCL in water from any sampling site for either the 1993 or 1994 crop year (April-March); however, concentrations were larger during 1994 than during 1993. Time-weighted, annual mean concentrations in water from among the 11 sampling sites ranged from 0.27 to 1.5 ug/L during the 1993 crop year and from 0.36 to 2.8 ug/L during the 1994 crop year. Furthermore, concentrations in samples from the outflow of Perry Lake were larger during the first 6 months of the 1995 crop year than during the previous year. Flow-weighted, annual mean atrazine concentrations were larger than time-weighted, annual mean concentrations in water from all sampling sites upstream of Perry Lake, and samples from several sites had concentrations that were substantially larger than the MCL. This difference explained why time-weighted, annual mean concentrations in the outflow of Perry Lake were larger than corresponding time-weighted concentrations in water from sampling sites upstream of Perry Lake. Flow-weighted, annual mean concentrations in water from among the 11 sampling sites ranged from 1.0 to 4.4 ug/L during the 1993 crop year and from 1.0 to 8.9 ug/L during the 1994 crop year. Statistically significant linear-regression equations were identified relating the percentage of subbasin in cropland to time- and flow-weighted, average annual mean atrazine concentrations. The relations indicate that time-weighted, average annual mean atrazine concentrations may not exceed the MCL except in water from subbasins with at least about 70-percent cropland. However, flow-weighted, average annual mean atrazine concentrations may exceed the MCL when the percentage of cropland is greater than about 40 percent. Approximately 90 percent of the annual atrazine load is transported from May through July.
Atrazine loads and yields were larger during the 1993 cro
Scale-Free Compact Routing Schemes in Networks of Low Doubling Dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konjevod, Goran; Richa, Andréa W.; Xia, Donglin
In this work, we consider compact routing schemes in networks of low doubling dimension, where the doubling dimension is the least value α such that any ball in the network can be covered by at most 2^α balls of half the radius. There are two variants of routing-scheme design: (i) labeled (name-dependent) routing, in which the designer is allowed to rename the nodes so that the names (labels) can contain additional routing information, for example, topological information; and (ii) name-independent routing, which works on top of the arbitrary original node names in the network, that is, the node names are independent of the routing scheme. In this article, given any constant ε ∈ (0, 1) and an n-node edge-weighted network of doubling dimension α ∈ O(log log n), we present (i) a (1 + ε)-stretch labeled compact routing scheme with ⌈log n⌉-bit routing labels, O(log² n/log log n)-bit packet headers, and ((1/ε)^O(α) log³ n)-bit routing information at each node; and (ii) a (9 + ε)-stretch name-independent compact routing scheme with O(log² n/log log n)-bit packet headers and ((1/ε)^O(α) log³ n)-bit routing information at each node. In addition, we prove a lower bound: any name-independent routing scheme with o(n^((ε/60)²)) bits of storage at each node has stretch no less than 9 − ε for any ε ∈ (0, 8). Therefore, our name-independent routing scheme achieves asymptotically optimal stretch with polylogarithmic storage at each node and polylogarithmic packet headers. Note that both schemes are scale-free in the sense that their space requirements do not depend on the normalized diameter Δ of the network. Finally, we also present a simpler non-scale-free (9 + ε)-stretch name-independent compact routing scheme with improved space requirements if Δ is polynomial in n.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levakhina, Y. M.; Mueller, J.; Buzug, T. M.
Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited-angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected-space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets were acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane that contains the artifact-causing features. Artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate a reduction of out-of-focus artifacts, lower STD (meaning fewer artifacts), and narrower ASF compared to nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone.
Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically, and no segmentation or tissue-classification steps are required. The algorithm can be included in various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to the computed-tomography case with a complete set of angular data.
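As a rough sketch of how a per-(view, voxel) weight enters the additive SART update (the weight matrix W is an assumed input here; the paper derives it from a dissimilarity measure in backprojected space):

```python
import numpy as np

def weighted_sart(A, b, W, n_iters=50, lam=1.0):
    """SART with an extra per-(view, voxel) weight W applied to the
    backprojected update term.  A is the (views x voxels) system matrix,
    b the measured projections; toy setup with one ray per view."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1) + 1e-12          # forward-projection normalization
    col_sums = (W * A).sum(axis=0) + 1e-12    # weighted backprojection normalization
    for _ in range(n_iters):
        resid = (b - A @ x) / row_sums        # normalized residual per view
        x += lam * ((W * A).T @ resid) / col_sums  # weighted additive update
    return x
```

Setting W to all ones recovers standard SART; lowering an entry W[view, voxel] damps that view's contribution to that voxel, which is the mechanism used above to suppress limited-angle artifacts.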
Willardson, Jeffrey M; Simão, Roberto; Fontana, Fabio E
2012-11-01
The purpose of this study was to compare 4 different loading schemes for the free weight bench press, wide grip front lat pull-down, and free weight back squat to determine the extent of progressive load reductions necessary to maintain repetition performance. Thirty-two recreationally trained women (age = 29.34 ± 4.58 years, body mass = 59.61 ± 4.72 kg, height = 162.06 ± 4.04 cm) performed 4 resistance exercise sessions that involved 3 sets of the free weight bench press, wide grip front lat pull-down, and free weight back squat, performed in this exercise order during all 4 sessions. Each of the 4 sessions was conducted under different randomly ordered loading schemes, including (a) a constant 10 repetition maximum (RM) load for all 3 sets and for all 3 exercises, (b) a 5% reduction after the first and second sets for all the 3 exercises, (c) a 10% reduction after the first and second sets for all the 3 exercises, and (d) a 15% reduction after the first and second sets for all the 3 exercises. The results indicated that for the wide grip front lat pull-down and free weight back squat, a 10% load reduction was necessary after the first and second sets to accomplish 10 repetitions on all the 3 sets. For the free weight bench press, a load reduction between 10 and 15% was necessary; specifically, a 10% reduction was insufficient and a 15% reduction was excessive, as evidenced by significantly >10 repetitions on the second and third sets for this exercise (p ≤ 0.05). In conclusion, the results of this study indicate that a resistance training prescription that involves 1-minute rest intervals between multiple 10RM sets does require load reductions to maintain repetition performance. Practitioners might apply these results by considering an approximate 10% load reduction after the first and second sets for the exercises examined, when training women of similar characteristics as in this study.
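As a worked example of the loading schemes compared above (assuming each reduction is applied to the preceding set's load and that loads are rounded to 0.1 kg; the study's exact load-setting procedure may differ):

```python
def set_loads(ten_rm, reduction):
    """Loads for 3 sets starting from a 10RM load, with the given fractional
    reduction (0, 0.05, 0.10, or 0.15) applied after sets 1 and 2."""
    loads = [ten_rm]
    for _ in range(2):
        loads.append(round(loads[-1] * (1.0 - reduction), 1))
    return loads
```

For example, under the 10% scheme a 100-kg 10RM bench press would use roughly 100, 90, and 81 kg across the three sets, whereas the constant-load scheme keeps all three sets at 100 kg.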
NASA Astrophysics Data System (ADS)
David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera
2017-04-01
This study provides guidance that enables hydrological researchers to produce probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g., high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. It is well known that hydrological model residual errors are heteroscedastic, i.e., there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto-optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of the raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
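The Box-Cox family referred to above is compact enough to sketch: lambda = 0.2 and 0.5 are the Pareto-optimal values reported, and lambda = 0 recovers the log scheme (the residual function below is a generic sketch, not the authors' full error model):

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox transform, used to stabilize heteroscedastic streamflow
    residuals; lam = 0 is the logarithmic limit."""
    q = np.asarray(q, dtype=float)
    if lam == 0.0:
        return np.log(q)
    return (q ** lam - 1.0) / lam

def residuals(q_obs, q_sim, lam):
    """Residual errors computed in transformed space, so that large and
    small flows contribute on a more comparable scale."""
    return boxcox(q_obs, lam) - boxcox(q_sim, lam)
```

In raw space a 25% error on a large flood dwarfs a 25% error on a low flow; after the transform (exactly so for lam = 0) equal relative errors yield comparable residuals, which is the heteroscedasticity correction the schemes compete on.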
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU-intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
Schmid-Burgk, Jonathan L; Chauhan, Dhruv; Schmidt, Tobias; Ebert, Thomas S; Reinhardt, Julia; Endl, Elmar; Hornung, Veit
2016-01-01
Inflammasomes are high molecular weight protein complexes that assemble in the cytosol upon pathogen encounter. This results in caspase-1-dependent pro-inflammatory cytokine maturation, as well as a special type of cell death known as pyroptosis. The Nlrp3 inflammasome plays a pivotal role in pathogen defense, but at the same time its activity has been implicated in many common sterile inflammatory conditions. To this effect, several studies have identified Nlrp3 inflammasome engagement in a number of common human diseases such as atherosclerosis, type 2 diabetes, Alzheimer disease, or gout. Although it has been shown that known Nlrp3 stimuli converge on potassium ion efflux upstream of Nlrp3 activation, the exact molecular mechanism of Nlrp3 activation remains elusive. Here, we describe a genome-wide CRISPR/Cas9 screen in immortalized mouse macrophages aiming at the unbiased identification of gene products involved in Nlrp3 inflammasome activation. We employed a FACS-based screen for Nlrp3-dependent cell death, using the ionophoric compound nigericin as a potassium efflux-inducing stimulus. Using a genome-wide guide RNA (gRNA) library, we found that targeting Nek7 rescued macrophages from nigericin-induced lethality. Subsequent studies revealed that murine macrophages deficient in Nek7 displayed a largely blunted Nlrp3 inflammasome response, whereas Aim2-mediated inflammasome activation proved to be fully intact. Although the mechanism by which Nek7 functions upstream of Nlrp3 remains elusive, these studies provide a first genetic handle on a component that functions specifically upstream of Nlrp3. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
NASA Astrophysics Data System (ADS)
Yokokawa, Miwa; Yamano, Junpei; Miyai, Masatomo; Hughes Clarke, John; Izumi, Norihiro
2017-04-01
Field observations of turbidity currents and seabed topography on the Squamish delta in British Columbia, Canada, revealed that cyclic steps are formed by surge-type turbidity currents (e.g., Hughes Clarke et al., 2014). The high-density portion of the flow, which affects the sea-floor morphology, lasted only 30-60 seconds. We are conducting flume experiments to investigate the relationship between the conditions of the surges and the topography of the resultant steps. In this presentation, we discuss the effect of surge duration on the topography of the steps. The experiments have been performed at Osaka Institute of Technology. A flume 7.0 m long, 0.3 m deep, and 2 cm wide was suspended in a larger tank, 7.6 m long, 1.2 m deep, and 0.3 m wide, filled with water. The inner flume was tilted at 7 degrees. As a source of turbidity currents, a mixture of salt water (1.17 g/cm^3) and plastic particles (1.3 g/cm^3, 0.1-0.18 mm in diameter) was prepared. The concentration of the sediment was 6.1 weight % (5.5 volume %) in the head tank. This mixture of salt water and plastic particles was poured into the upstream end of the inner flume from the head tank for 3 seconds or 7 seconds, and 140 surges were made in each case. Discharges of the currents fluctuated, ranging from 306 to 870 mL for the 3-s surges and from 1134 to 2030 mL for the 7-s surges. As a result, five or six steps were formed, respectively. In the 3-s-surge case, steps located in the upstream portion of the flume migrated vigorously in the upstream direction, whereas in the 7-s-surge case, steps in the downstream portion of the flume migrated upstream. The wavelengths and wave heights of the steps formed by the 3-s surges were larger than those of the 7-s surges in the upstream portion of the flume, but smaller in the downstream portion. Under these conditions of slope and concentration, a longer surge duration, i.e., a larger discharge of the current, transports the sediment farther and makes the steps larger and more active farther from the source of the currents.
75 FR 68714 - Final Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
The 'upstream wake' of swimming and flying animals and its correlation with propulsive efficiency.
Peng, Jifeng; Dabiri, John O
2008-08-01
The interaction between swimming and flying animals and their fluid environments generates downstream wake structures such as vortices. In most studies, the upstream flow in front of the animal is neglected. In this study, we demonstrate the existence of upstream fluid structures even when the upstream flow is quiescent or possesses a uniform incoming velocity. Using a computational model, the flow generated by a swimmer (an oscillating flexible plate) is simulated, and a new fluid mechanical analysis is applied to the flow to identify the upstream fluid structures. These upstream structures delineate the exact portion of fluid that will interact with the swimmer. A mass flow rate is then defined based on the upstream structures, and a metric for propulsive efficiency is established using the mass flow rate and the kinematics of the swimmer. We propose that the unsteady mass flow rate defined by the upstream fluid structures can be used as a metric to measure and objectively compare the efficiency of locomotion in water and air.
Fifty-year flood-inundation maps for Juticalpa, Honduras
Kresch, David L.; Mastin, M.C.; Olsen, T.D.
2002-01-01
After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Juticalpa that would be inundated by a 50-year flood of Rio Juticalpa. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Juticalpa as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for a 50-year flood on Rio Juticalpa at Juticalpa were estimated using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area. The estimated 50-year-flood discharge for Rio Juticalpa at Juticalpa, 1,360 cubic meters per second, was computed by drainage-area adjustment of the weighted average of two independently estimated 50-year-flood discharges for the gaging station Rio Juticalpa en El Torito, located about 2 kilometers upstream from Juticalpa. One discharge, 1,551 cubic meters per second, was estimated from a frequency analysis of the 33 years of peak-discharge record for the gage, and the other, 486 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted average of the two discharges at the gage is 1,310 cubic meters per second.
The 50-year flood discharge for the study area reach of Rio Juticalpa was estimated by multiplying the weighted discharge at the gage by the ratio of the drainage areas upstream from the two locations.
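The drainage-area-ratio transfer described here is simple enough to sketch directly. The two drainage-area values below are hypothetical placeholders (the report does not state them), chosen so that their ratio reproduces the step from the 1,310 m^3/s gage value to the 1,360 m^3/s study-reach value.

```python
# Drainage-area-ratio transfer of a flood-quantile discharge from a gaged
# site to an ungaged study reach. The area values are hypothetical; only
# their ratio (about 1.04) matters for the illustration.

def transfer_discharge(q_gage, area_site, area_gage):
    """Scale the gage discharge by the ratio of upstream drainage areas."""
    return q_gage * (area_site / area_gage)

q_gage = 1310.0        # weighted 50-year discharge at the gage, m^3/s
area_gage = 480.0      # hypothetical drainage area above the gage, km^2
area_site = 498.3      # hypothetical drainage area above the study reach, km^2

q_site = transfer_discharge(q_gage, area_site, area_gage)  # about 1360 m^3/s
```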
NASA Astrophysics Data System (ADS)
Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Higher recall rates are a major challenge in mammography screening. Developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can therefore play an important role in improving the efficacy of mammography screening. The objective of this study was to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images; 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to the spatial and frequency characteristics of the mass regions was computed. Third, a generalized linear regression model (GLM) based machine learning classifier, combined with a bat optimization algorithm, was used to optimally fuse the selected image features according to a predefined performance assessment index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset yielded an AUC of 0.75+/-0.04, significantly higher than using the single best feature (AUC=0.69+/-0.05) or the classifier with equally weighted features (AUC=0.73+/-0.05). This study demonstrated that, compared to the conventional equal-weighted approach, an unequal-weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
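The core contrast between equal-weighted and unequal-weighted linear fusion can be sketched with synthetic data; this is a minimal illustration, not the paper's GLM plus bat-optimization pipeline, and all data and weights below are assumptions.

```python
import numpy as np

# Synthetic two-feature example: feature 1 separates the classes well,
# feature 2 is mostly noise. Down-weighting the noisy feature in a linear
# fusion should raise the AUC relative to equal weighting.

rng = np.random.default_rng(0)
n = 1000
labels = np.repeat([0, 1], n // 2)

f1 = rng.normal(loc=labels * 1.5, scale=1.0)   # discriminative feature
f2 = rng.normal(loc=labels * 0.1, scale=1.0)   # nearly uninformative feature
X = np.column_stack([f1, f2])

def fused_score(X, w):
    """Linear fusion of feature columns with normalized weights w."""
    w = np.asarray(w, dtype=float)
    return X @ (w / w.sum())

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

auc_equal = auc(fused_score(X, [0.5, 0.5]), labels)
auc_unequal = auc(fused_score(X, [0.9, 0.1]), labels)
# auc_unequal should exceed auc_equal on this synthetic data.
```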
Tummers, Jeroen S; Hudson, Steve; Lucas, Martyn C
2016-11-01
A more holistic approach towards testing longitudinal connectivity restoration is needed in order to establish that the intended ecological functions of such restoration are achieved. We illustrate the use of a multi-method scheme to evaluate the effectiveness of 'nature-like' connectivity restoration for stream fish communities in the River Deerness, NE England. Electric-fishing, capture-mark-recapture, PIT telemetry and radio-telemetry were used to measure fish community composition, dispersal, fishway efficiency and upstream migration, respectively. For measuring passage and dispersal, our rationale was to evaluate a wide size range of strong swimmers (exemplified by brown trout Salmo trutta) and weak swimmers (exemplified by bullhead Cottus perifretum) in situ in the stream ecosystem. Radio-tracking of adult trout during the spawning migration showed that passage efficiency at each of five connectivity-restored sites was 81.3-100%. Unaltered (experimental control) structures on the migration route had a bottleneck effect on upstream migration, especially during low flows. However, even during low flows, displaced PIT-tagged juvenile trout (total n=153) exhibited a passage efficiency of 70.1-93.1% at two nature-like passes. In mark-recapture experiments, tagged juvenile brown trout and bullhead (total n=5303) succeeded in dispersing upstream more often at most structures following obstacle modification, but not at the two control sites, based on a Laplace kernel modelling approach applied to observed dispersal distances and barrier traverses. Medium-term post-restoration data (2-3 years) showed that the fish assemblage remained similar at five of six connectivity-restored sites and two control sites, but at one connectivity-restored headwater site previously inhabited by trout only, three native non-salmonid species colonized.
We conclude that stream habitat reconnection should support free movement of a wide range of species and life stages, wherever retention of such obstacles is not needed to manage non-native invasive species. Evaluation of the effectiveness of fish community restoration in degraded streams benefits from a similarly holistic approach. Copyright © 2016 Elsevier B.V. All rights reserved.
Thinking Upstream: A 25-Year Retrospective and Conceptual Model Aimed at Reducing Health Inequities.
Butterfield, Patricia G
Thinking upstream was first introduced into the nursing vernacular in 1990 with the goal of advancing broad and context-rich perspectives of health. Initially invoked as conceptual framing language, upstream precepts were subsequently adopted and adapted by a generation of thoughtful nursing scholars. Their work reduced health inequities by redirecting actions further up etiologic pathways and by emphasizing economic, political, and environmental health determinants. US health care reform has fostered a much broader adoption of upstream language in policy documents. This article includes a semantic exploration of thinking upstream and a new model, the Butterfield Upstream Model for Population Health (BUMP Health).
Cancer cachexia: mediators, signaling, and metabolic pathways.
Fearon, Kenneth C H; Glass, David J; Guttridge, Denis C
2012-08-08
Cancer cachexia is characterized by a significant reduction in body weight resulting predominantly from loss of adipose tissue and skeletal muscle. Cachexia causes reduced cancer treatment tolerance and reduced quality and length of life, and remains an unmet medical need. Therapeutic progress has been impeded, in part, by the marked heterogeneity of mediators, signaling, and metabolic pathways both within and between model systems and the clinical syndrome. Recent progress in understanding conserved, molecular mechanisms of skeletal muscle atrophy/hypertrophy has provided a downstream platform for circumventing the variations and redundancy in upstream mediators and may ultimately translate into new targeted therapies. Copyright © 2012 Elsevier Inc. All rights reserved.
Analysis of key thresholds leading to upstream dependencies in global transboundary water bodies
NASA Astrophysics Data System (ADS)
Munia, Hafsa Ahmed; Guillaume, Joseph; Kummu, Matti; Mirumachi, Naho; Wada, Yoshihide
2017-04-01
Transboundary water bodies supply 60% of global fresh water flow and are home to about one third of the world's population, creating hydrological, social and economic interdependencies between countries. Trade-offs between water users are delimited by certain thresholds that, when crossed, result in changes in system behavior, often related to undesirable impacts. A wide variety of thresholds are potentially related to water availability and scarcity. Scarcity can occur because of a country's own water use, potentially intensified by upstream water use. In general, increased water scarcity escalates reliance on shared water resources, which increases interdependencies between riparian states. In this paper the upstream dependencies of global transboundary river basins are examined at the scale of sub-basin areas. We aim to assess how upstream water withdrawals cause changes in scarcity categories, such that crossing a threshold is interpreted in terms of downstream dependency on upstream water availability. The thresholds are defined for the different types of water availability on which a sub-basin may rely: reliable local runoff (available even in a dry year), less reliable local water (available only in a wet year), reliable dry-year inflows from the upstream area, and less reliable wet-year inflows from upstream. Upstream withdrawals reduce the water available downstream, influencing the latter two availability types. Upstream dependencies are then categorized by comparing a sub-basin's scarcity category across the different water availability types. When population (or water consumption) grows, a sub-basin satisfies its needs using progressively less reliable water. Thus, the factors affecting the type of water availability being used differ not only for each dependency category, but possibly for every sub-basin.
Our results show that, in the case of stress (impacts from high water use), 104 (12%) of 886 sub-basins are dependent on upstream water, while in the case of shortage (impacts from insufficient water availability per person), 79 (9%) of 886 sub-basins are dependent on upstream water. Categorizing the upstream dependency of sub-basins helps to differentiate between areas where i) there is currently no dependency on upstream water, ii) upstream water withdrawals are sufficiently high that they alter the scarcity and dependency status, and iii) the sub-basin is always dependent on upstream water regardless of upstream withdrawals. Our dependency assessment is expected to considerably support studies and discussions of hydro-political power relations and management practices in transboundary basins.
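The threshold-crossing logic behind the three dependency categories can be sketched as follows. This is my own schematic reading of the categorization, not the authors' algorithm; the 1700 m^3 per capita per year line is the commonly used Falkenmark stress threshold, and all numbers are illustrative.

```python
# Schematic dependency categorization: evaluate scarcity with local water
# only, with natural upstream inflow, and with inflow reduced by upstream
# withdrawals, then compare. Thresholds and values are illustrative.

STRESS_THRESHOLD = 1700.0  # m^3 per capita per year (Falkenmark line)

def scarcity(available_m3, population):
    return "scarce" if available_m3 / population < STRESS_THRESHOLD else "ok"

def dependency_category(local_water, upstream_natural, upstream_after_use,
                        population):
    local_only = scarcity(local_water, population)
    with_full_inflow = scarcity(local_water + upstream_natural, population)
    with_withdrawals = scarcity(local_water + upstream_after_use, population)
    if local_only == "ok":
        return "no dependency"
    if with_full_inflow == "ok" and with_withdrawals == "scarce":
        return "dependency status altered by upstream withdrawals"
    return "always dependent on upstream water"
```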
Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.
ERIC Educational Resources Information Center
Tseng, Yuen-Hsien
2001-01-01
Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…
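The n-gram indexing idea mentioned above can be illustrated briefly: character n-grams tolerate isolated OCR character errors better than whole-word indexing, because most n-grams of a corrupted word still match. This is a minimal sketch, not the contest system itself.

```python
from collections import Counter
import math

# Character-bigram indexing with simple term-frequency weighting and cosine
# similarity, as a toy model of OCR-robust retrieval.

def ngrams(text, n=2):
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

clean = ngrams("information retrieval")
ocr   = ngrams("inforrnation retrieva1")   # typical OCR confusions: m->rn, l->1
other = ngrams("automatic cataloguing")

# The OCR-degraded string remains far closer to its clean form than to an
# unrelated title.
```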
An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems
NASA Technical Reports Server (NTRS)
Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.
2006-01-01
Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.
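The central idea, updating a scalar weight per ensemble member retrospectively while leaving the member trajectories untouched, can be sketched on a toy problem. The random-walk dynamics, Gaussian observation model, and all parameters below are illustrative assumptions, not the double-well or Lorenz systems of the paper.

```python
import numpy as np

# Weighted-ensemble smoother sketch: each member keeps its full trajectory;
# one log-weight per member is updated as each observation arrives. The
# smoothed estimate of any past state uses weights informed by all later
# observations, with no retrospective change to the members themselves.

rng = np.random.default_rng(1)
n_members, n_steps = 500, 20

truth = np.cumsum(rng.normal(size=n_steps))
trajectories = np.cumsum(rng.normal(size=(n_members, n_steps)), axis=1)

log_w = np.zeros(n_members)
obs_var = 0.5
for t in range(n_steps):
    y = truth[t] + rng.normal(scale=np.sqrt(obs_var))
    # Retrospective update: reweight whole trajectories by this
    # observation's Gaussian log-likelihood.
    log_w += -0.5 * (trajectories[:, t] - y) ** 2 / obs_var
    w = np.exp(log_w - log_w.max())   # stabilized normalization
    w /= w.sum()

# Smoothed estimate of the initial state.
smoothed_x0 = np.sum(w * trajectories[:, 0])
```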
Pietryk, Edward W; Clement, Kiristin; Elnagheeb, Marwa; Kuster, Ryan; Kilpatrick, Kayla; Love, Michael I; Ideraabdullah, Folami Y
2018-03-10
In utero exposure to vinclozolin (VIN), an antiandrogenic fungicide, is linked to multigenerational phenotypic and epigenetic effects. Mechanisms remain unclear. We assessed the role of antiandrogenic activity and DNA sequence context by comparing effects of VIN vs. M2 (metabolite with greater antiandrogenic activity) and wild-type C57BL/6 (B6) mice vs. mice carrying mutations at the previously reported VIN-responsive H19/Igf2 locus. First generation offspring from VIN-treated 8nrCG mutant dams exhibited increased body weight and decreased sperm ICR methylation. Second generation pups sired by affected males exhibited decreased neonatal body weight but only when dam was unexposed. Offspring from M2 treatments, B6 dams, 8nrCG sires or additional mutant lines were not similarly affected. Therefore, pup response to VIN over two generations detected here was an 8nrCG-specific maternal effect, independent of antiandrogenic activity. These findings demonstrate that maternal effects and crossing scheme play a major role in multigenerational response to in utero exposures. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cordier, G.; Choi, J.; Raguin, L. G.
2008-11-01
Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least-squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte-Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.
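The D-optimality machinery can be shown on the simplest possible case: choosing two b-values for a mono-exponential signal model S(b) = S0*exp(-b*D) under nonlinear least squares with unit noise. The article's three microcirculation models are richer; this sketch, with an assumed diffusivity, only illustrates maximizing the determinant of the Fisher information.

```python
import numpy as np
from itertools import combinations

# D-optimal two-point design for S(b) = S0 * exp(-b * D): build the 2x2
# Fisher information from the sensitivities w.r.t. (S0, D) and pick the
# b-value pair maximizing its determinant. S0, D and the candidate grid
# are assumptions for illustration.

S0, D = 1.0, 1.0e-3                         # assumed diffusivity, mm^2/s
candidates = np.arange(0.0, 3001.0, 50.0)   # candidate b-values, s/mm^2

def fisher_det(b1, b2):
    F = np.zeros((2, 2))
    for b in (b1, b2):
        e = np.exp(-b * D)
        g = np.array([e, -S0 * b * e])      # dS/dS0, dS/dD
        F += np.outer(g, g)
    return np.linalg.det(F)

best = max(combinations(candidates, 2), key=lambda p: fisher_det(*p))
# For this model the optimum pairs b = 0 with b = 1/D.
```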
NASA Astrophysics Data System (ADS)
Song, Qingguana; Wang, Cheng; Han, Yong; Gao, Dayuan; Duan, Yingliang
2017-06-01
Since detonations often initiate and propagate in non-homogeneous mixtures, investigating detonation behavior in non-uniform mixtures is significant not only for industrial explosions in leaked combustible gas, but also for experimental investigations in which a vertical concentration gradient arises from differences in the molecular weights of the gas components. The objective of this work is to show detonation behavior in mixtures with different concentration gradients using a detailed chemical reaction mechanism. A globally planar detonation in the H2-O2 system is simulated by a high-resolution code based on the fifth-order weighted essentially non-oscillatory (WENO) scheme for spatial discretization and a third-order additive Runge-Kutta scheme for time discretization. Different shocked combustion modes appear in the fuel-rich and fuel-lean layers due to the concentration gradient. Globally, for cases with a lower gradient the detonation is sustained through alternation between a multi-head mode and a single-head mode, whereas for cases with a higher gradient the detonation propagates in a single-head mode. Institute of Chemical Materials, CAEP.
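The fifth-order WENO reconstruction at the heart of such a solver can be sketched in scalar form. Below is the classic Jiang-Shu left-biased reconstruction of the interface value from five cell values; the paper's reactive Euler solver additionally needs flux splitting, characteristic projection, and the Runge-Kutta time stepping.

```python
import numpy as np

# Jiang-Shu fifth-order WENO reconstruction of v_{i+1/2} from the five cell
# values v_{i-2}..v_{i+2}: three third-order candidate stencils, smoothness
# indicators, and nonlinear weights around the ideal weights (1/10, 6/10, 3/10).

def weno5(v):
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate reconstructions on the three sub-stencils.
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Smoothness indicators (large where the sub-stencil is non-smooth).
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    eps = 1e-6
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w[0] * p0 + w[1] * p1 + w[2] * p2
```

On smooth data the nonlinear weights revert to the ideal ones; near a discontinuity the weight of the stencil crossing it collapses, suppressing oscillations.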
A flooded-starved design for nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Thaller, L. H.
1986-01-01
A somewhat analogous situation among groupings of alkaline fuel cells is described, where the stochastic aspects were much more accurately documented, and it is illustrated how this problem was eliminated using straightforward principles of pore-size engineering. This is followed by a suggested method of adapting these same design principles to nickel-cadmium cells. It must be kept in mind that when cells are cycled to a typical twenty percent depth of discharge, eighty percent of the weight of the cell is simply dead weight. Some of this dead weight might be put to better use by trading it for a scheme that would increase the time during which the cell works close to its optimum set of operating parameters.
Multi-Shell Hybrid Diffusion Imaging (HYDI) at 7 Tesla in TgF344-AD Transgenic Alzheimer Rats.
Daianu, Madelaine; Jacobs, Russell E; Weitz, Tara M; Town, Terrence C; Thompson, Paul M
2015-01-01
Diffusion weighted imaging (DWI) is widely used to study microstructural characteristics of the brain. Diffusion tensor imaging (DTI) and high-angular resolution imaging (HARDI) are frequently used in radiology and neuroscience research but can be limited in describing the signal behavior in composite nerve fiber structures. Here, we developed and assessed the benefit of a comprehensive diffusion encoding scheme, known as hybrid diffusion imaging (HYDI), composed of 300 DWI volumes acquired at 7-Tesla with diffusion weightings at b = 1000, 3000, 4000, 8000 and 12000 s/mm2 and applied it in transgenic Alzheimer rats (line TgF344-AD) that model the full clinico-pathological spectrum of the human disease. We studied and visualized the effects of the multiple concentric "shells" when computing three distinct anisotropy maps-fractional anisotropy (FA), generalized fractional anisotropy (GFA) and normalized quantitative anisotropy (NQA). We tested the added value of the multi-shell q-space sampling scheme, when reconstructing neural pathways using mathematical frameworks from DTI and q-ball imaging (QBI). We show a range of properties of HYDI, including lower apparent anisotropy when using high b-value shells in DTI-based reconstructions, and increases in apparent anisotropy in QBI-based reconstructions. Regardless of the reconstruction scheme, HYDI improves FA-, GFA- and NQA-aided tractography. HYDI may be valuable in human connectome projects and clinical research, as well as magnetic resonance research in experimental animals.
Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids
NASA Astrophysics Data System (ADS)
Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.
2017-12-01
Stabilized gradient-based methods have proven efficient for inverse problems. In these methods, driving the gradient toward zero minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth weighting function on the conventional gradient can improve the depth resolution to some extent. However, the improvement depends on the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown in Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. In addition, a fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested the new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown in Figure 1(b)). Acknowledgments: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4): 730-741.
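Why depth weighting matters for a gradient-based scheme can be shown with a toy sensitivity matrix. The 1/z^2-style kernel and the beta = 2 weighting exponent below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Surface gravity sensitivity decays rapidly with cell depth, so the
# unweighted data-misfit gradient is largest for shallow cells and a
# gradient-based inversion piles structure near the surface. Re-expressing
# the model through a depth weighting W_z rebalances the gradient.

n_cells = 20
z = np.arange(1.0, n_cells + 1.0)        # cell depths (arbitrary units)
J = 1.0 / z[None, :] ** 2                # toy depth-decaying sensitivity
d = np.array([0.5])                      # single datum (toy)

m = np.zeros(n_cells)
residual = J @ m - d

grad_plain = J.T @ residual              # standard least-squares gradient
Wz = z ** (2 / 2.0)                      # depth weighting, beta = 2
grad_weighted = (J * Wz[None, :]).T @ residual   # gradient w.r.t. m~ = m / Wz

ratio_plain = abs(grad_plain[0]) / abs(grad_plain[-1])
ratio_weighted = abs(grad_weighted[0]) / abs(grad_weighted[-1])
# The weighted gradient is far less biased toward the shallow cells.
```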
Speed Sensorless Induction Motor Drives for Electrical Actuators: Schemes, Trends and Tradeoffs
NASA Technical Reports Server (NTRS)
Elbuluk, Malik E.; Kankam, M. David
1997-01-01
For a decade, induction motor drive-based electrical actuators have been under investigation as potential replacements for the conventional hydraulic and pneumatic actuators in aircraft. Advantages of electric actuators include lower weight and size, reduced maintenance and operating costs, improved safety due to the elimination of hazardous fluids and high-pressure hydraulic and pneumatic lines, and increased efficiency. Recently, the emphasis of research on induction motor drives has been on sensorless vector control, which eliminates the flux and speed sensors mounted on the motor. The development of effective speed and flux estimators has allowed good rotor flux-oriented (RFO) performance at all speeds except those close to zero. Sensorless control has improved motor performance compared to Volts/Hertz (constant-flux) controls. This report evaluates documented schemes for speed sensorless drives and discusses the trends and tradeoffs involved in selecting a particular scheme. These schemes combine the attributes of direct and indirect field-oriented control (FOC), or use model reference adaptive systems (MRAS) with a speed-dependent current model for flux estimation that tracks the voltage-model-based flux estimator. Many factors are important in comparing the effectiveness of a speed sensorless scheme, among them wide speed-range capability, motor parameter insensitivity and noise reduction. Although a number of schemes have been proposed for speed estimation, zero-speed FOC with robustness against parameter variations remains an open area of research for sensorless control.
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its attractive characteristics: non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system designed around these sensor node constraints is highly desirable. In this paper, we propose a novel log-structured external NAND flash memory based file system called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
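Two policies that any such flash file system must combine, greedy garbage collection and wear leveling, can be sketched together. This is an illustrative toy with assumed block sizes and counters, not the PIYAS algorithms themselves.

```python
# Toy flash block model: greedy GC reclaims the block with the most invalid
# pages, and a wear-leveling tie-break prefers the block with the fewest
# erases, spreading erase cycles across the device.

class FlashBlock:
    def __init__(self, pages_per_block=4):
        self.pages = pages_per_block
        self.valid = 0
        self.invalid = 0
        self.erase_count = 0

    @property
    def free(self):
        return self.pages - self.valid - self.invalid

    def erase(self):
        self.valid = self.invalid = 0
        self.erase_count += 1

def pick_victim(blocks):
    """Most invalid pages first; ties broken toward the least-worn block."""
    return max(blocks, key=lambda b: (b.invalid, -b.erase_count))

blocks = [FlashBlock() for _ in range(3)]
blocks[0].invalid = 4                  # fully invalid, never erased
blocks[1].valid, blocks[1].invalid = 2, 2
blocks[2].invalid = 4                  # fully invalid, but heavily worn
blocks[2].erase_count = 5

victim = pick_victim(blocks)           # tie on invalid pages -> less-worn block
```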
NASA Astrophysics Data System (ADS)
Ragan-Kelley, Benjamin
Space-charge limited flow is a topic of much interest and varied application. We extend existing understanding of space-charge limits by simulations, and develop new tools and techniques for doing these simulations along the way. The Child-Langmuir limit is a simple analytic solution for space-charge limited current density in a one-dimensional diode. It has been previously extended to two dimensions by numerical calculation in planar geometries. By considering an axisymmetric cylindrical system with axial emission from a circular cathode of finite radius r and outer drift tube R > r and gap length L, we further examine the space charge limit in two dimensions. We simulate a two-dimensional axisymmetric parallel plate diode of various aspect ratios (r/L), and develop a scaling law for the measured two-dimensional space-charge limit (2DSCL) relative to the Child-Langmuir limit as a function of the aspect ratio of the diode. These simulations are done with a large (100T) longitudinal magnetic field to restrict electron motion to 1D, with the two-dimensional particle-in-cell simulation code OOPIC. We find a scaling law that is a monotonically decreasing function of this aspect ratio, and the one-dimensional result is recovered in the limit as r >> L. The result is in good agreement with prior results in planar geometry, where the emission area is proportional to the cathode width. We find a weak contribution from the effects of the drift tube for current at the beam edge, and a strong contribution of high current-density "wings" at the outer-edge of the beam, with a very large relative contribution when the beam is narrow. Mechanisms for enhancing current beyond the Child-Langmuir limit remain a matter of great importance. We analyze the enhancement effects of upstream ion injection on the transmitted current in a one-dimensional parallel plate diode. Electrons are field-emitted at the cathode, and ions are injected at a controlled current from the anode. 
An analytic solution is derived for maximizing the electron current throughput in terms of the ion current. This analysis accounts for various energy regimes, from classical to fully relativistic. The analytical result is then confirmed by simulation of the diode in each energy regime. Field-limited emission is an approach for using Gauss's law to satisfy the space charge limit for emitting current in particle-in-cell simulations. We find that simple field-limited emission models make several assumptions, which introduce small, systematic errors in the system. We make a thorough analysis of each assumption, and ultimately develop and test a new emission scheme that accounts for each. The first correction we make is to allow for a non-zero surface field at the boundary. Since traditional field-emission schemes only aim to balance Gauss's law at the surface, a zero surface field is an assumed condition. But for many systems, this is not appropriate, so the addition of a target surface field is made. The next correction is to account for nonzero initial velocity, which, if neglected, results in a systematic underestimation of the current, due to assuming that all emitted charge will be weighted to the boundary, when in fact it will be weighted as a fraction strictly less than unity, depending on the distance across the initial cell the particle travels in its initial fractional timestep. A correction is made to the scheme, to use the actual particle weight to adjust the target emission. The final analyses involve geometric terms, analyzing the effects of cylindrical coordinates, and taking particular care to analyze the center of a cylindrical beam, as well as the outer edge of the beam, in Cartesian coordinates. We find that balancing Gauss's law at the edge of the beam is not the correct behavior, and that it is important to resolve the profile of the emitted current, in order to avoid systematic errors. 
A thorough analysis is done of the assumptions made in prior implementations, and corrections are introduced for cylindrical geometry, non-zero injection velocity, and non-zero surface field. Particular care is taken to determine special conditions for the outermost node, where we find that forcing a balance of Gauss's law would be incorrect. (Abstract shortened by UMI.)
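The one-dimensional Child-Langmuir limit that anchors this scaling study has a well-known closed form, J_CL = (4ε₀/9)·√(2e/m)·V^(3/2)/d². A minimal sketch of that formula follows; the function name and the example values are illustrative, not taken from the dissertation:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_j(voltage_v, gap_m):
    """1D Child-Langmuir space-charge-limited current density in A/m^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage_v ** 1.5 / gap_m ** 2
```

For a 1 kV gap of 1 cm this gives roughly 7.4 x 10^2 A/m^2, and halving the gap quadruples the limit, reflecting the 1/d^2 dependence that the two-dimensional scaling law modifies.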
Spittal, Matthew J; Carlin, John B; Currier, Dianne; Downes, Marnie; English, Dallas R; Gordon, Ian; Pirkis, Jane; Gurrin, Lyle
2016-10-31
The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. We describe the sampling scheme used in Ten to Men focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure-outcome association, our advice is to adopt an analysis that respects the sampling design.
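The weighted estimates contrasted above follow the standard design-weighted form, in which each respondent contributes in proportion to his sampling weight. A minimal sketch (the numbers below are illustrative, not Ten to Men data):

```python
def weighted_mean(values, weights):
    """Design-weighted mean: sum(w_i * y_i) / sum(w_i)."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

def weighted_prevalence(flags, weights):
    """Weighted prevalence of a binary outcome (flags of 0/1, e.g. smoker)."""
    return weighted_mean(flags, weights)
```

With equal weights this reduces to the ordinary (unweighted) mean; unequal weights shift the estimate toward over- or under-sampled strata, which is the source of the weighted-versus-unweighted differences reported above.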
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more of the desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables that works with any type of hashing algorithm. Meanwhile, multiple-table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve these problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With a graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasizing the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complementarity for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast, yet reciprocal, table lookup algorithm within the adaptively weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings.
Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.
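The query-adaptive bitwise weighting idea can be sketched as a weighted Hamming distance, in which each bit position carries a query-specific weight and buckets are ranked by their weighted distance from the query code. This is a simplified illustration of the general idea, not the authors' algorithm:

```python
def weighted_hamming(query_bits, db_bits, bit_weights):
    """Sum of per-bit weights over the positions where two codes disagree."""
    return sum(w for q, d, w in zip(query_bits, db_bits, bit_weights) if q != d)

def rank_buckets(query_bits, buckets, bit_weights):
    """Rank bucket codes by weighted Hamming distance to the query code.

    buckets: mapping from binary code (tuple of 0/1) to a list of item ids.
    """
    return sorted(buckets,
                  key=lambda code: weighted_hamming(query_bits, code, bit_weights))
```

With uniform weights this degenerates to plain Hamming ranking; query-adaptive weights break ties among buckets at the same Hamming radius, which is the fine-grained ranking the scheme provides.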
Variations of archived static-weight data and WIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; Gillmann, R.; Kent, P.M.
1998-12-01
Using seven-card archived static-weight and weigh-in-motion (WIM) truck data received by FHWA for 1966-1992, the authors examine the fluctuations of four fiducial weight measures reported at weight sites in the 50 states. The reduced 172 MB Class 9 (332000) database was prepared and ordered from 2 CD-ROMs with duplicate records removed. Front-axle weight and gross-vehicle weight (GVW) are combined conceptually by determining the front-axle weight in four quartile GVW categories. The four categories of front-axle weight from the four GVW categories are combined in four ways: three linear combinations use fixed-coefficient fiducials, and one is the optimal linear combination producing the smallest ratio of standard deviation to mean value. The best combination gives coefficients of variation of 2-3% for samples of 100 trucks, below the expected accuracy of single-event WIM measurements. Time tracking of the data shows that some high-variation sites have seasonal variations, or linear variations over the time-ordered samples. Modeling of these effects is very site-specific but provides a way to reduce high variations. Some automatic calibration schemes would erroneously remove such seasonal or linear variations as if they were static effects.
Chirp-aided power fading mitigation for upstream 100 km full-range long reach PON with DBR DML
NASA Astrophysics Data System (ADS)
Zhang, Kuo; He, Hao; Xin, Haiyun; Hu, Weisheng; Liang, Song; Lu, Dan; Zhao, Lingjuan
2018-01-01
The DML is a promising option for cost-sensitive ONUs in optical access networks, but suffers from severe power fading due to dispersion and chirp. In this work, we investigate mitigating the power fading by optimizing the chirp. Theoretical analysis indicates that a see-saw effect, governed by the bias, exists between adiabatic notch-induced fading (A-fading) and transient notch-induced fading (T-fading). High bias can mitigate T-fading but causes large A-fading; low bias can avoid A-fading but cannot completely mitigate T-fading. For each transmission distance, a balance should be struck to favor transmission: the ~20 km short distance requires a high bias to obtain large adiabatic chirp to counteract T-fading, while the ~100 km long distance requires a relatively low bias to avoid A-fading. With this power fading mitigation technique, we conduct an upstream transmission experiment on an LR-PON. Experiments show that, although signal contamination is inevitable, clear "1" and "0" levels are obtained with this scheme at any distance from 0 to 100 km with a 10 Gb/s OOK signal and a DBR DML. The optical power budget penalty induced by 0-100 km of fiber is limited to only 2.2 dB with the optimum bias for each distance. More than 10 dB and 15 dB of improvement is achieved at BERs of 10^-3 and 10^-6, respectively. A method is also proposed to automatically obtain the optimum bias from the ranging procedure of the PON protocol.
NASA Astrophysics Data System (ADS)
Wei, C.; Cheng, K. S.
Using meteorological radar and satellite imagery has become an efficient tool for rainfall forecasting. However, few studies have aimed to predict quantitative rainfall in small watersheds for flood forecasting using remote sensing data. Due to the terrain shelter and ground clutter effects of the Central Mountain Ridges, the application of meteorological radar data is limited in the mountainous areas of central Taiwan. This study devises a new scheme to predict rainfall over a small upstream watershed by combining GOES-9 geostationary weather satellite imagery with ground rainfall records, which can be applied to local quantitative rainfall forecasting during typhoons and heavy rainfall. Imagery of two typhoon events in 2004 and the records of five corresponding rain gauges in the Chitou Forest Recreational Area, located in the upstream region of the Bei-Shi River, were analyzed. The watershed covers 12.7 square kilometers, with altitudes ranging from 1000 m to 1800 m. Basin-wide average rainfall (BAR) in the study area was estimated by block kriging. Cloud-top temperature (CTT) from satellite imagery and ground hourly rainfall records were moderately correlated: the regression coefficient ranges from 0.5 to 0.7, and it decreases as the altitude of the gauge site increases. The regression coefficient between CTT and the next 2- to 6-hour accumulated BAR decreases as the time scale increases. Rainfall forecasts of BAR were produced with a Kalman filtering technique. The correlation coefficient and average hourly deviations between estimated and observed BAR for
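The Kalman filtering step used for the BAR forecast can be illustrated with a scalar update that blends a prior estimate (e.g., a CTT-regression prediction) with a noisy gauge observation. A minimal sketch under standard scalar Kalman assumptions; the numbers in the usage note are illustrative:

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman update.

    x_prior, p_prior: prior estimate and its variance.
    z, r: observation and its variance.
    Returns the posterior estimate and variance.
    """
    k = p_prior / (p_prior + r)           # Kalman gain in [0, 1]
    x_post = x_prior + k * (z - x_prior)  # blend prior toward observation
    p_post = (1.0 - k) * p_prior          # uncertainty shrinks after update
    return x_post, p_post
```

With equal prior and observation variances the gain is 0.5, so a prior of 10 mm/h and an observation of 14 mm/h yield a posterior of 12 mm/h with half the prior variance.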
Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz
2014-01-01
Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, rendering the waters unsuitable for their intended use. Multivariate techniques were applied to interpret the complex data. The Arenales River, due to the variability observed in the data, was divided into two sections, upstream and downstream, representing low- and high-pollution sites, respectively; Cluster Analysis supported that differentiation. The Arenales River downstream and Campo Alegre Reservoir were the most different environments, and the Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables, except in the two parts of the Arenales River, and Principal Component Analysis revealed relationships among the 9 measured variables in all aquatic environments. Variable loadings showed that the Arenales River downstream was impacted by industrial and domestic activities, the Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and the La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis identified the subgroups of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to describe the quality of the aquatic environments spatially. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, while dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus serve during the wet season. Thus, the use of multivariate techniques allowed monitoring tasks to be optimized and the costs involved to be minimized. PMID:25190636
The fallacy of using NII in analyzing aircraft operations. [Noise Impact Index
NASA Technical Reports Server (NTRS)
Melton, R. G.; Jacobson, I. D.
1984-01-01
Three measures of noise annoyance (Noise Impact Index, Level-Weighted Population, and Annoyed Population Number) are compared with regard to their utility in assessing noise reduction schemes for aircraft operations. While the NII is intended to measure the average annoyance per person in a community, the method of averaging can lead to erroneous conclusions, particularly if the population is not uniformly distributed in space. Level-Weighted Population and Annoyed Population Number are shown to be better indicators of noise annoyance when rating different strategies for noise reduction in a given community.
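The averaging pitfall identified here is easiest to see in code: Level-Weighted Population sums population times an annoyance weight over noise zones, while NII divides that sum by the total population, so adding a large unannoyed population lowers the NII without reducing anyone's exposure. The annoyance-weight curve below is hypothetical, purely for illustration:

```python
def annoyance_weight(ldn_db):
    # hypothetical fractional-annoyance curve; real indices fit this to survey data
    return max(0.0, (ldn_db - 55.0) / 30.0)

def level_weighted_population(zones):
    """LWP: sum over zones of population times the annoyance weight of its Ldn."""
    return sum(pop * annoyance_weight(ldn) for pop, ldn in zones)

def noise_impact_index(zones):
    """NII: LWP normalized by total population, i.e. average annoyance per person."""
    total_pop = sum(pop for pop, _ in zones)
    return level_weighted_population(zones) / total_pop
```

Appending a zone of 90,000 people at a quiet 40 dB Ldn to a 10,000-person community leaves the LWP unchanged but cuts the NII tenfold, which is exactly the distortion from non-uniform population distribution described above.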
Rest requirements and rest management of personnel in shift work
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammell, B.D.; Scheuerle, A.
1995-12-31
A difficulty-weighted shift assignment scheme is proposed for use in prolonged and strenuous field operations such as emergency response, site testing, and short-term hazardous waste remediation projects. The purpose of the work rotation plan is to increase the productivity, safety, and morale of workers. Job weighting is accomplished by assigning adjustments for the mental and physical intensity of the task, the protective equipment worn, and the climatic conditions. The plan is based on medical studies of sleep deprivation, the effects of rest adjustments, and programs to reduce sleep deprivation and normalize shift schedules.
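The job-weighting idea can be sketched as multiplicative difficulty factors applied to nominal shift hours; the factor names and values below are hypothetical, not taken from the proposed plan:

```python
def shift_difficulty(base_hours, mental_factor, ppe_factor, climate_factor):
    """Difficulty-weighted shift length: hours scaled by hypothetical adjustments
    for task intensity, protective equipment, and climate."""
    return base_hours * mental_factor * ppe_factor * climate_factor

def max_shift_hours(limit_weighted_hours, mental_factor, ppe_factor, climate_factor):
    """Longest nominal shift that keeps the weighted load under a fixed limit."""
    return limit_weighted_hours / (mental_factor * ppe_factor * climate_factor)
```

Under this sketch, an 8-hour task with 1.2x mental intensity and 1.5x PPE burden counts as 14.4 weighted hours, so the allowable nominal shift shrinks accordingly.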
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
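The MWDS subroutine can be approximated centrally with Chvátal-style greedy set cover, repeatedly choosing the node with the best weight per newly dominated node; the paper's distributed primal-dual algorithm achieves a comparable guarantee asynchronously. A minimal centralized sketch for illustration:

```python
def greedy_mwds(adj, weights):
    """Greedy minimum-weight dominating set (Chvatal-style set cover).

    adj: dict mapping each node to the set of its neighbors.
    weights: dict mapping each node to a positive weight.
    A node dominates itself and its neighbors.
    """
    uncovered = set(adj)
    chosen = []
    while uncovered:
        # pick the node minimizing weight per newly dominated node
        best = min(
            (n for n in adj if ({n} | adj[n]) & uncovered),
            key=lambda n: weights[n] / len(({n} | adj[n]) & uncovered),
        )
        chosen.append(best)
        uncovered -= {best} | adj[best]
    return chosen
```

On a star graph the hub dominates everything at the best weight-per-node ratio, so the greedy rule selects it immediately.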
Optimal weighting in fNL constraints from large scale structure in an idealised case
NASA Astrophysics Data System (ADS)
Slosar, Anže
2009-03-01
We consider the problem of optimally weighting tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism, expanded around a fiducial model with fNL = 0, and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that a simple division into two equal samples is considerably suboptimal when the sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.
Weighted stacking of seismic AVO data using hybrid AB semblance and local similarity
NASA Astrophysics Data System (ADS)
Deng, Pan; Chen, Yangkang; Zhang, Yu; Zhou, Hua-Wei
2016-04-01
The common-midpoint (CMP) stacking technique plays an important role in enhancing the signal-to-noise ratio (SNR) in seismic data processing and imaging. Weighted stacking is often used to improve on conventional equal-weight stacking by further attenuating random noise and handling the amplitude variations in real seismic data. In this study, we propose a hybrid framework combining AB semblance with a local-similarity-weighted stacking scheme. The objective is to achieve an optimal stacking of CMP gathers exhibiting the class II amplitude-variation-with-offset (AVO) polarity-reversal anomaly. The selection of a high-quality near-offset reference trace is another innovation of this work, because such a trace better preserves the useful energy. Applications to synthetic and field seismic data demonstrate that our method markedly improves the ability to capture the true locations of weak reflections, distinguish thin-bed tuning artifacts, and effectively attenuate random noise.
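In its simplest form, the weighted-stacking framework reduces to a per-trace weighted average in which weights come from similarity to a reference trace. The sketch below uses a crude zero-lag normalized cross-correlation as the similarity measure; the paper's local similarity is computed sample-by-sample and is more elaborate:

```python
import math

def similarity_weight(trace, reference):
    """Crude global similarity: zero-lag normalized cross-correlation, clipped at 0."""
    num = sum(a * b for a, b in zip(trace, reference))
    den = math.sqrt(sum(a * a for a in trace) * sum(b * b for b in reference))
    return max(num / den, 0.0) if den > 0 else 0.0

def weighted_stack(traces, weights):
    """Stack traces sample-by-sample with per-trace weights."""
    n = len(traces[0])
    wsum = sum(weights)
    return [sum(w * tr[i] for tr, w in zip(traces, weights)) / wsum
            for i in range(n)]
```

A dissimilar (noisy) trace gets a low weight and contributes little to the stack, which is how the scheme attenuates random noise relative to equal-weight stacking.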
Analysis of fault-tolerant neurocontrol architectures
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1992-01-01
The fault tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault tolerance could only be obtained by retraining the degraded VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight or neuron failures was demonstrated through an automated retraining procedure based on a pre-programmed choice and sequence of the training parameters.
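Weight-failure simulation of the kind described can be illustrated on a single linear neuron: zero out ("stuck-at-zero") a random subset of weights and compare the output before and after. This toy sketch is not the ETANN model; all names are illustrative:

```python
import random

def forward(weights, x):
    """Single linear neuron: y = sum(w_i * x_i)."""
    return sum(w * xi for w, xi in zip(weights, x))

def fail_weights(weights, fail_fraction, rng):
    """Simulate stuck-at-zero failures on a random subset of weights."""
    n_fail = int(len(weights) * fail_fraction)
    failed = set(rng.sample(range(len(weights)), n_fail))
    return [0.0 if i in failed else w for i, w in enumerate(weights)]
```

Sweeping the failure fraction and recording the output error gives a degradation curve; retraining the surviving weights (as in the automated procedure above) would then recover part of the lost performance.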
A similarity retrieval approach for weighted track and ambient field of tropical cyclones
NASA Astrophysics Data System (ADS)
Li, Ying; Xu, Luan; Hu, Bo; Li, Yuejun
2018-03-01
Retrieving historical tropical cyclones (TCs) with a track and hazard intensity similar to those of the target TC is an important tool in TC track forecasting and TC disaster assessment. A new similarity retrieval scheme is put forward based on historical TC track data and ambient field data, including the ERA-Interim reanalysis and the GFS and EC-fine forecasts. It takes into account both TC track similarity and ambient field similarity, and the optimal weight combination is then explored. Results show that both the distance and direction errors of the 24-hour TC track forecast follow an approximately U-shaped distribution: they tend to be large when the weight assigned to track similarity is close to 0 or 1.0, and relatively small when the track similarity weight is 0.2~0.7 for the distance error and 0.3~0.6 for the direction error.
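The weight-combination study amounts to scanning a single blending weight between track similarity and ambient-field similarity and picking the historical analog with the best combined score. A minimal sketch with illustrative scores (not data from the paper):

```python
def combined_similarity(track_sim, field_sim, track_weight):
    """Blend track and ambient-field similarity with one weight in [0, 1]."""
    assert 0.0 <= track_weight <= 1.0
    return track_weight * track_sim + (1.0 - track_weight) * field_sim

def best_analog(candidates, track_weight):
    """candidates: list of (tc_id, track_sim, field_sim); return the best id."""
    return max(candidates,
               key=lambda c: combined_similarity(c[1], c[2], track_weight))[0]
```

Moving the weight toward 0 or 1 makes retrieval rely on one similarity alone, which matches the U-shaped error behavior: intermediate weights retrieve better-balanced analogs.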
Integrating Iris and Signature Traits for Personal Authentication Using User-Specific Weighting
Viriri, Serestina; Tapamo, Jules R.
2012-01-01
Biometric systems based on uni-modal traits are characterized by noisy sensor data, restricted degrees of freedom and non-universality, and are susceptible to spoof attacks. Multi-modal biometric systems seek to alleviate some of these drawbacks by providing multiple pieces of evidence of the same identity. In this paper, a user-score-based weighting technique for integrating the iris and signature traits is presented. This user-specific weighting technique has proved to be an efficient and effective fusion scheme that increases the authentication accuracy rate of multi-modal biometric systems. The weights are used to indicate the importance of the matching scores output by each biometric trait. The experimental results show that our biometric system based on the integration of iris and signature traits achieves a false rejection rate (FRR) of 0.08% and a false acceptance rate (FAR) of 0.01%. PMID:22666032
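User-specific weighted score fusion, in its simplest sum-rule form, blends the two matcher scores with a per-user weight learned at enrollment. A minimal sketch (the weight and threshold values are illustrative, not the paper's):

```python
def fuse_scores(iris_score, sig_score, iris_weight):
    """User-specific weighted-sum fusion of two matcher scores in [0, 1]."""
    return iris_weight * iris_score + (1.0 - iris_weight) * sig_score

def decide(fused, threshold=0.5):
    """Accept/reject decision on the fused score."""
    return "accept" if fused >= threshold else "reject"
```

A user whose iris matcher is more reliable gets a larger iris weight, so a weak signature sample cannot by itself sink a genuine match, which is how per-user weights raise overall accuracy.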
Integrated neuron circuit for implementing neuromorphic system with synaptic device
NASA Astrophysics Data System (ADS)
Lee, Jeong-Jun; Park, Jungjin; Kwon, Min-Woo; Hwang, Sungmin; Kim, Hyungjin; Park, Byung-Gook
2018-02-01
In this paper, we propose and fabricate an Integrate & Fire neuron circuit for implementing a neuromorphic system. The overall operation of the circuit is verified by measuring discrete devices and the output characteristics of the circuit. Since the neuron circuit shows an asymmetric output characteristic that can drive a synaptic device with Spike-Timing-Dependent Plasticity (STDP), the autonomous weight update process is also verified by connecting the synaptic device to the neuron circuit. The timing difference between the pre-neuron and post-neuron spikes induces an autonomous weight change in the synaptic device. Unlike the 2-terminal devices frequently used to implement neuromorphic systems, the proposed scheme enables autonomous weight updates and a simple configuration by using a 4-terminal synapse device and an appropriate neuron circuit. The weight update process in the multi-layer neuron-synapse connection supports the implementation of hardware-based artificial intelligence based on Spiking Neural Networks (SNNs).
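The pair-based STDP rule underlying the autonomous weight update can be sketched with the standard exponential window: potentiation when the pre-synaptic spike precedes the post-synaptic spike, depression otherwise. The amplitudes and time constant below are illustrative, not device measurements:

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pair-based STDP weight change for spike-time difference dt = t_post - t_pre.

    dt > 0 (pre before post) potentiates; otherwise depresses.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)
```

Closely timed pre-then-post pairs produce the largest potentiation, and the effect decays exponentially with the timing difference, mirroring the asymmetric output characteristic the circuit exploits.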
Effects of 1980 technology on weight of a recovery system for a one million pound booster
NASA Technical Reports Server (NTRS)
Eckstrom, C. V.
1975-01-01
The effects of 1980 technology on the weight of recovery systems capable of decelerating a one-million-pound booster to vertical velocities of 60 or 30 ft/sec at sea-level impact were evaluated. A nominal set of booster staging conditions was assumed, and there were no constraints on parachute size, number, or type. The study evaluated the effects of new materials that would be available by 1980, the effects of booster attitude during entry, various parachute staging methods, parachute reefing schemes, parachute-retro-rocket hybrid systems, and the effects of dividing the booster into separate pieces for recovery. It was determined that, of the systems considered, a hybrid parachute-retro-rocket recovery system would have the minimum weight. New materials now becoming available for parachute fabrication should result in a 37-percent reduction in hybrid recovery system weight for an impact velocity of 30 fps.
Genetic Parameter Estimates of Carcass Traits under National Scale Breeding Scheme for Beef Cattle
Do, ChangHee; Park, ByungHo; Kim, SiDong; Choi, TaeJung; Yang, BohSuk; Park, SuBong; Song, HyungJun
2016-01-01
Carcass and price traits of 72,969 Hanwoo cows, bulls and steers aged 16 to 80 months at slaughter collected from 2002 to 2013 at 75 beef packing plants in Korea were analyzed to determine heritability, correlation and breeding value using the Multi-Trait restricted maximum likelihood (REML) animal model procedure. The traits included carcass measurements, scores and grades at 24 h postmortem and bid prices at auction. Relatively high heritability was found for maturity (0.41±0.031), while moderate heritability estimates were obtained for backfat thickness (0.20±0.018), longissimus muscle (LM) area (0.23±0.020), carcass weight (0.28±0.019), yield index (0.20±0.018), yield grade (0.16±0.017), marbling (0.28±0.021), texture (0.14±0.016), quality grade (0.26±0.016) and price/kg (0.24±0.025). Relatively low heritability estimates were observed for meat color (0.06±0.013) and fat color (0.06±0.012). Heritability estimates for most traits were lower than those in the literature. Genetic correlations of carcass measurements with characteristic scores or quality grade of carcass ranged from −0.27 to +0.21. Genetic correlations of yield grade with backfat thickness, LM area and carcass weight were 0.91, −0.43, and −0.09, respectively. Genetic correlations of quality grade with scores of marbling, meat color, fat color and texture were −0.99, 0.48, 0.47, and 0.98, respectively. Genetic correlations of price/kg with LM area, carcass weight, marbling, meat color, texture and maturity were 0.57, 0.64, 0.76, −0.41, −0.79, and −0.42, respectively. Genetic correlations of carcass price with LM area, carcass weight, marbling and texture were 0.61, 0.57, 0.64, and −0.73, respectively, with standard errors ranging from ±0.047 to ±0.058. The mean carcass weight breeding values increased by more than 8 kg, whereas the mean marbling scores decreased by approximately 0.2 from 2000 through 2009. 
Overall, the results suggest that genetic improvement of productivity and carcass quality could be obtained under the national scale breeding scheme of Korea for Hanwoo and that continuous efforts to improve the breeding scheme should be made to increase genetic progress. PMID:27004809