Heerkens, Yvonne F; de Weerd, Marjolein; Huber, Machteld; de Brouwer, Carin P M; van der Veen, Sabina; Perenboom, Rom J M; van Gool, Coen H; Ten Napel, Huib; van Bon-Martens, Marja; Stallinga, Hillegonda A; van Meeteren, Nico L U
2018-03-01
The ICF (International Classification of Functioning, Disability and Health) framework (used worldwide to describe 'functioning' and 'disability'), including the ICF scheme (a visualization of functioning as the result of the interaction between a health condition and contextual factors), needs reconsideration. The purpose of this article is to discuss alternative ICF schemes. The ICF was reconsidered via a literature review and discussions with 23 Dutch ICF experts; twenty-six experts were then invited to rank the three resulting alternative schemes. The literature review yielded five themes: 1) societal developments; 2) health and research influences; 3) conceptualization of health; 4) models/frameworks of health and disability; and 5) ICF criticism (e.g., the position of 'health condition' at the top and the role of 'contextual factors'). The experts concluded that the ICF scheme gives the impression that the medical perspective, rather than the biopsychosocial perspective, is dominant. Three alternative ICF schemes were ranked by 16 (62%) experts, resulting in one preferred scheme. There is a need for a new ICF scheme that better reflects the ICF framework, for further (inter)national consideration. These Dutch schemes should be reviewed on a global scale, to develop a scheme that is more consistent with current and foreseen developments and changing ideas on health. Implications for Rehabilitation: We propose that policy makers at community, regional, and (inter)national levels consider using the alternative schemes of the International Classification of Functioning, Disability and Health in their plans to promote the functioning and health of their citizens, and that researchers and teachers incorporate the alternative schemes into their research and education to emphasize the biopsychosocial paradigm. We propose setting up an international Delphi procedure, involving citizens (including patients) and experts in healthcare, occupational care, research, education, policy, and planning, to reach consensus on an alternative scheme of the International Classification of Functioning, Disability and Health. We recommend discussing the alternatives to the present scheme of the International Classification of Functioning, Disability and Health within the present update and revision process at the World Health Organization, as part of the discussion on the future of the International Classification of Functioning, Disability and Health framework (including its ontology, title, and relation to the International Classification of Diseases). We recommend revising the definition of personal factors and drafting a list of personal factors that can be used in policy making, clinical practice, research, and education, and putting effort into revising the present list of environmental factors to make it more useful in, e.g., occupational health care.
Nuclear Explosion and Infrasound Event Resources of the SMDC Monitoring Research Program
2008-09-01
2008 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies. The excerpt reports dozens of detected infrasound signals and investigates alternative detection schemes at the two infrasound arrays based on frequency-wavenumber (f-k) processing and the F-statistic, comparing the results of the infrasound signal-detection processing schemes.
Plasmonic Antenna Coupling for QWIPs
NASA Technical Reports Server (NTRS)
Hong, John
2007-01-01
In a proposed scheme for coupling light into a quantum-well infrared photodetector (QWIP), an antenna or an array of antennas made of a suitable metal would be fabricated on the face of what would otherwise be a standard QWIP. This or any such coupling scheme is required to effect polarization conversion: light incident perpendicularly to the face is necessarily polarized in the plane of the face, whereas, as a matter of fundamental electrodynamics and related quantum selection rules, light must have a non-zero component of perpendicular polarization in order to be absorbed in the photodetection process. In a prior coupling scheme, gratings in the form of surface corrugations diffract normally incident light to oblique angles, thereby imparting some perpendicular polarization. Unfortunately, the corrugation-fabrication process increases the overall nonuniformity of a large QWIP array. The proposed scheme is an alternative to the use of surface corrugations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sansone, M.J.
1979-02-01
On the basis of simple, first-approximation calculations, it has been shown that catalytic gasification and hydrogasification are inherently superior to conventional gasification with respect to carbon utilization and thermal efficiency. However, most processes which are directed toward the production of substitute natural gas (SNG) by direct combination of coal with steam at low temperatures (catalytic processes) or with hydrogen (hydrogasification) will require a step for separation of product SNG from a recycle stream. The success or failure of the process could well depend upon the economics of this separation scheme. The energetics of the separation of mixtures of ideal gases has been considered in some detail. Minimum energies for complete separation of representative effluent mixtures have been calculated, as well as energies for separation into product and recycle streams. The gas mixtures include binary systems of H2 and CH4 and ternary mixtures of H2, CH4, and CO. A brief summary of a number of different real separation schemes has also been included. We have arbitrarily divided these into five categories: liquefaction, absorption, adsorption, chemical, and diffusional methods. These separation methods will be screened and the more promising methods examined in more detail in later reports. Finally, a brief mention of alternative coal conversion processes concludes this report.
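The minimum (reversible, isothermal) work to completely separate an ideal gas mixture follows from the Gibbs free energy of mixing. As a rough illustration of the kind of first-approximation calculation described above (the compositions and temperature here are hypothetical, not the report's), a short sketch:

```python
import math

R = 8.314  # J/(mol K)

def min_separation_work(x, T=600.0):
    """Minimum isothermal work (J per mole of mixture) to completely
    separate an ideal gas mixture with mole fractions x at temperature T.
    Reversible limit: W_min = -R*T*sum(x_i * ln(x_i))."""
    return -R * T * sum(xi * math.log(xi) for xi in x if xi > 0)

# Hypothetical effluent compositions (mole fractions), for illustration only.
binary_h2_ch4 = [0.6, 0.4]           # H2 / CH4
ternary_h2_ch4_co = [0.5, 0.4, 0.1]  # H2 / CH4 / CO

for name, x in [("H2/CH4", binary_h2_ch4), ("H2/CH4/CO", ternary_h2_ch4_co)]:
    print(f"{name}: W_min = {min_separation_work(x):.0f} J/mol of mixture")
```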
Range Sidelobe Suppression Using Complementary Sets in Distributed Multistatic Radar Networks
Wang, Xuezhi; Song, Yongping; Huang, Xiaotao; Moran, Bill
2017-01-01
We propose an alternative waveform scheme built on mutually-orthogonal complementary sets for a distributed multistatic radar. Our analysis and simulation show a reduced frequency band requirement for signal separation between antennas with centralized signal processing using the same carrier frequency. While the scheme can tolerate fluctuations of carrier frequencies and phases, range sidelobes arise when carrier frequencies between antennas are significantly different. PMID:29295566
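The defining property exploited by such waveform schemes is that the autocorrelations of a complementary pair sum to an impulse, cancelling range sidelobes. A minimal sketch using the standard Golay pair construction (not the paper's mutually-orthogonal complementary sets):

```python
import numpy as np

def golay_pair(n_iter):
    """Recursively build a Golay complementary pair of length 2**n_iter."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)  # length-16 pair
acf = lambda s: np.correlate(s, s, mode="full")
total = acf(a) + acf(b)
# Sidelobes cancel: the sum is 2N at zero lag and ~0 at all other lags.
print(np.round(total, 10))
```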
Nagy-Soper subtraction scheme for multiparton final states
NASA Astrophysics Data System (ADS)
Chung, Cheng-Han; Robens, Tania
2013-04-01
In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme was restricted to cases with at most two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three-jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.
Comparative efficiency of a scheme of cyclic alternating-period subtraction
NASA Astrophysics Data System (ADS)
Golikov, V. S.; Artemenko, I. G.; Malinin, A. P.
1986-06-01
The estimation of the detection quality of a signal on a background of correlated noise according to the Neumann-Pearson criterion is examined. It is shown that, in a number of cases, the cyclic alternating-period subtraction scheme has a higher noise immunity than the conventional alternating-period subtraction scheme.
Microelectromechanical reprogrammable logic device.
Hafiz, M A A; Kosuru, L; Younis, M I
2016-03-29
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As miniaturization at the component level to enhance computational power rapidly approaches physical limits, alternative computing methods are being vigorously pursued. One desired aspect of future computing approaches is provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c. driving frequency. The device is fabricated using a complementary metal oxide semiconductor (CMOS)-compatible mass fabrication process, is suitable for on-chip integration, and promises an alternative electromechanical computing scheme.
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e. an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be recursively solved by using a Kalman filter based on the one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters, such as the weighting factor and the correlation matrix of process noise, and data conditions, like the sampling frequency, which influence tracking behavior, are explored. Merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed under off-line conditions. The proposed scheme can simultaneously extract multiple spectral components and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
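As a schematic illustration of the recursive, one-step-prediction idea (not the authors' VKF_OT formulation), a scalar Kalman filter can track the slowly varying complex envelope of one order component whose instantaneous phase is known from the rotating-speed reference; all parameters below are illustrative:

```python
import numpy as np

def track_order(y, theta, q=1e-4, r=1e-1):
    """Track the complex envelope a_k of one order component in signal y,
    where y_k ~= a_k * exp(j*theta_k) + noise and theta_k comes from the
    rotating-speed reference. Random-walk state model: a_k = a_{k-1} + w_k."""
    a_hat, P = 0.0 + 0.0j, 1.0
    est = np.empty(len(y), dtype=complex)
    for k in range(len(y)):
        P = P + q                      # one-step state prediction
        h = np.exp(1j * theta[k])      # measurement model y_k = h*a_k + v_k
        K = P * np.conj(h) / (P + r)   # Kalman gain (|h| = 1)
        a_hat = a_hat + K * (y[k] - h * a_hat)
        P = (1 - (K * h).real) * P
        est[k] = a_hat
    return est

# Synthetic run-up test: order with slowly increasing amplitude.
t = np.linspace(0, 1, 4000)
theta = 2 * np.pi * 60 * t**2          # speed ramps with time
y = (1 + t) * np.exp(1j * theta) + 0.3 * (np.random.randn(len(t))
                                          + 1j * np.random.randn(len(t)))
envelope = np.abs(track_order(y, theta))
```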
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foglietta, J.H.
1999-07-01
A new LNG cycle has been developed for base load liquefaction facilities. This new design offers a different technical and economic solution, comparable in efficiency with the classical technologies. The new LNG scheme could offer attractive business opportunities to oil and gas companies that are trying to find paths to monetize gas sources more effectively, particularly for remote or offshore locations where smaller-scale LNG facilities might be applicable. This design also offers an alternative route to classic LNG projects, as well as alternative fuel sources. Conceived to offer simplicity and access to industry-standard equipment, this design is a hybrid, combining a standard refrigeration system and turboexpander technology.
NASA Technical Reports Server (NTRS)
Brooner, W. G.; Nichols, D. A.
1972-01-01
Development of a scheme for utilizing remote sensing technology in an operational program for regional land use planning and land resource management program applications. The scheme utilizes remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data, using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
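A toy version of the core idea (hypothetical operator graph; exhaustive minimum-ratio-cut bisection instead of the paper's subroutine, with recursion until each PE fits a size budget):

```python
from itertools import combinations

def ratio_cut_bisect(nodes, edges):
    """Exhaustive min-ratio-cut bisection: cut weight / (|S| * |complement|).
    edges: dict mapping (u, v) -> stream traffic cost. Toy-scale only."""
    best = None
    for k in range(1, len(nodes) // 2 + 1):
        for side in combinations(nodes, k):
            s = set(side)
            cut = sum(w for (u, v), w in edges.items() if (u in s) != (v in s))
            ratio = cut / (len(s) * (len(nodes) - len(s)))
            if best is None or ratio < best[0]:
                best = (ratio, s, set(nodes) - s)
    return best[1], best[2]

def fuse(nodes, edges, max_pe_size):
    """Recursively partition operators into PEs no larger than max_pe_size."""
    if len(nodes) <= max_pe_size:
        return [set(nodes)]
    left, right = ratio_cut_bisect(sorted(nodes), edges)
    return fuse(left, edges, max_pe_size) + fuse(right, edges, max_pe_size)

# Hypothetical 6-operator streaming job with inter-operator traffic costs.
ops = {"src", "parse", "filter", "join", "agg", "sink"}
traffic = {("src", "parse"): 10, ("parse", "filter"): 8, ("filter", "join"): 3,
           ("join", "agg"): 6, ("agg", "sink"): 2}
print(fuse(ops, traffic, max_pe_size=3))
```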
Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M
2007-08-25
Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic, and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternative schemes, and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple-criteria outranking methods. For this purpose, level, linear, and Gaussian functions are used as preference functions.
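For concreteness, the core PROMETHEE II computation can be sketched with hypothetical data (the criteria weights, preference-function parameters, and alternative scores below are made up; the paper's actual criteria and values are not given in the abstract):

```python
import math
import numpy as np

# Preference functions mentioned in the paper (linear, level, Gaussian).
def linear(d, p):    return min(max(d / p, 0.0), 1.0)
def level(d, q, p):  return 0.0 if d <= q else (0.5 if d <= p else 1.0)
def gaussian(d, s):  return 0.0 if d <= 0 else 1.0 - math.exp(-d**2 / (2*s**2))

# Hypothetical ELV-treatment schemes scored on 3 criteria (higher = better).
scores = np.array([[0.7, 0.4, 0.9],   # scheme A
                   [0.5, 0.8, 0.6],   # scheme B
                   [0.9, 0.3, 0.4]])  # scheme C
weights = [0.5, 0.3, 0.2]
prefs = [lambda d: linear(d, 0.3),
         lambda d: level(d, 0.1, 0.4),
         lambda d: gaussian(d, 0.25)]

n = len(scores)
pi = np.zeros((n, n))                 # aggregated preference indices
for a in range(n):
    for b in range(n):
        if a != b:
            pi[a, b] = sum(w * P(scores[a, j] - scores[b, j])
                           for j, (w, P) in enumerate(zip(weights, prefs)))

phi_plus = pi.sum(axis=1) / (n - 1)   # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative outranking flow
net_flow = phi_plus - phi_minus       # PROMETHEE II complete ranking
print(net_flow)
```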
New coherent laser communication detection scheme based on channel-switching method.
Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren
2015-04-01
A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I- or Q-channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit. Using the phase error signal, the signals of the I and Q channels are switched alternately. The principle of this detection scheme is presented. Moreover, a comparison of the sensitivity of this scheme with that of homodyne detection with an optical phase-locked loop is discussed. An experimental setup was constructed to verify the proposed detection scheme. The offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.
78 FR 60670 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
Request To Approve an Alternate Generic Repair Scheme as an AMOC: British Airways requested that an alternate generic repair scheme be approved as an alternative method of compliance (AMOC) to this final rule; a previously approved scheme had allowed British Airways to manufacture certain repair parts.
Alternative Packaging for Back-Illuminated Imagers
NASA Technical Reports Server (NTRS)
Pain, Bedabrata
2009-01-01
An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal oxide/semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image-detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The frontside integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitute a solution of the problem.
Development of a methodology for classifying software errors
NASA Technical Reports Server (NTRS)
Gerhart, S. L.
1976-01-01
A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm [1]. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high quality image scanned at a high dose level. Image enhancement is achieved by predicting the high quality image by classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image details. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
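The regularized predictor design reduces, per class, to ridge regression. A minimal sketch of that building block with synthetic patch data (the classification step and the MDL model selection are omitted):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic training pairs: low-dose patch features X -> high-dose pixel y.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 25))        # 5x5 neighborhoods, flattened
w_true = rng.standard_normal(25)
y = X @ w_true + 0.1 * rng.standard_normal(500)

w = ridge_fit(X, y, lam=1.0)
enhanced_pixel = X[0] @ w                 # prediction for one neighborhood
```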
ERIC Educational Resources Information Center
Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L.
2016-01-01
The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…
NASA Astrophysics Data System (ADS)
Ford, Neville J.; Connolly, Joseph A.
2009-07-01
We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
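The abstract does not spell out the three schemes; as background, a common building block for such solvers is the L1 discretization of the Caputo derivative of order 0 < alpha < 1, sketched here for the single-term test equation D^alpha u = -lambda*u (illustrative only, not one of the compared schemes):

```python
import math
import numpy as np

def solve_caputo_l1(alpha, lam, u0, T, n):
    """Explicit L1 scheme for the Caputo test equation D^alpha u = -lam*u.
    Uses D^alpha u(t_k) ~ (dt^-alpha / Gamma(2-alpha))
                          * sum_j b_j * (u_{k-j} - u_{k-j-1}),
    with weights b_j = (j+1)**(1-alpha) - j**(1-alpha)."""
    dt = T / n
    c = dt**alpha * math.gamma(2.0 - alpha)
    b = [(j + 1)**(1 - alpha) - j**(1 - alpha) for j in range(n)]
    u = np.empty(n + 1)
    u[0] = u0
    for k in range(1, n + 1):
        history = sum(b[j] * (u[k - j] - u[k - j - 1]) for j in range(1, k))
        u[k] = u[k - 1] - history + c * (-lam * u[k - 1])
    return u

u = solve_caputo_l1(alpha=0.8, lam=1.0, u0=1.0, T=5.0, n=500)
```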
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
We present a review of an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
77 FR 52636 - Hazardous Materials: Revision to Fireworks Regulations (RRR)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
Applications may be rejected for mathematical errors or denied for safety issues; if an application is rejected, the applicant often resubmits. Regarding the processing of EX approval applications under the current regulatory scheme, PHMSA proposes an alternative option for Division 1.4G consumer fireworks in which applicants would submit applications for...
Intercomparison of land-surface parameterizations launched
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Dickinson, R. E.
One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating land surface process parameterizations used in climate models. There is not, necessarily, a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the available (and proposed) schemes' performance is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single “best” scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.
Feasibility Studies of Optical Processing of Image Bandwidth Compression Schemes.
1983-05-15
R. N. Strickland and R. A. Schowengerdt, AFOSR-81-0170. It is the intent of research sponsored under this Grant to direct investigation into the following issues: (1) formulation of alternative architectural...
Techniques for improving transients in learning control systems
NASA Technical Reports Server (NTRS)
Chang, C.-K.; Longman, Richard W.; Phan, Minh
1992-01-01
A discrete modern control formulation is used to study the nature of the transient behavior of the learning process during repetitions. Several alternative learning control schemes are developed to improve the transient performance. These include a new method using an alternating sign on the learning gain, which is very effective in limiting peak transients and also very useful in multiple-input, multiple-output systems. Other methods include learning at an increasing number of points progressing with time, or an increasing number of points of increasing density.
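A toy discrete ILC loop showing the alternating-sign idea (the plant, gains, and trajectory are all made up for illustration; this is not the paper's formulation, and convergence behavior depends on the gain chosen):

```python
import numpy as np

# Toy LTI plant over one repetition: y = P @ u, with P the lower-triangular
# Toeplitz matrix of Markov parameters (hypothetical values).
N = 50
markov = 0.5 * 0.9 ** np.arange(N)
P = np.array([[markov[i - j] if i >= j else 0.0
               for j in range(N)] for i in range(N)])

y_des = np.sin(np.linspace(0, 2 * np.pi, N))   # desired repetition trajectory
u = np.zeros(N)
gamma = 1.0                                     # learning gain magnitude

for rep in range(40):
    e = y_des - P @ u
    sign = -1.0 if rep % 2 else 1.0             # alternate the gain sign
    u = u + sign * gamma * e                    # ILC update between repetitions
    print(rep, np.max(np.abs(e)))               # watch the transient peak
```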
NASA Technical Reports Server (NTRS)
Traversi, M.; Piccolo, R.
1980-01-01
Tradeoff study activities and the analysis process used are described, with emphasis on (1) review of the alternatives; (2) vehicle architecture; and (3) evaluation of the propulsion system alternatives. Interim results are presented for the basic hybrid vehicle characterization; vehicle scheme development; propulsion system power and transmission ratios; vehicle weight; energy consumption and emissions; performance; production costs; reliability, availability, and maintainability; life cycle costs; and operational quality. The final vehicle conceptual design is examined.
A channel dynamics model for real-time flood forecasting
Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.
1989-01-01
A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
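For reference, the Muskingum scheme used as the comparison baseline can be sketched as follows (textbook formulation; K, x, and the inflow series are illustrative):

```python
def muskingum_route(inflow, K=12.0, x=0.2, dt=6.0, O0=None):
    """Textbook Muskingum storage routing:
    O_{t+1} = C0*I_{t+1} + C1*I_t + C2*O_t, with coefficients derived
    from the storage relation S = K*(x*I + (1-x)*O)."""
    denom = 2 * K * (1 - x) + dt
    C0 = (dt - 2 * K * x) / denom
    C1 = (dt + 2 * K * x) / denom
    C2 = (2 * K * (1 - x) - dt) / denom
    out = [inflow[0] if O0 is None else O0]
    for t in range(len(inflow) - 1):
        out.append(C0 * inflow[t + 1] + C1 * inflow[t] + C2 * out[-1])
    return out

# Illustrative flood wave (m^3/s) at 6-hour steps.
I = [10, 30, 70, 100, 90, 60, 40, 25, 15, 10]
print([round(q, 1) for q in muskingum_route(I)])
```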
Diffusion of Zonal Variables Using Node-Centered Diffusion Solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, T B
2007-08-06
Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient could be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
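A compact sketch of that idea, with a made-up linear forward model standing in for the press (in reality the flange geometry would come from measurements of produced parts, and the parameter names are hypothetical):

```python
import numpy as np

def flange_geometry(p):
    """Hypothetical forward model: process parameters p (e.g. blankholder
    force, draw depth) -> sampled flange edge radii. Stand-in for the press."""
    s = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    return 50 + 0.04 * p[0] * np.cos(2 * s) + 0.02 * p[1] * np.sin(s)

def ilc_step(p, reference, eps=1e-3, damping=1e-6):
    """One learning iteration: damped Gauss-Newton step on the least-squares
    error between the produced flange geometry and the reference geometry."""
    e = flange_geometry(p) - reference
    J = np.column_stack([(flange_geometry(p + eps * dp) - flange_geometry(p)) / eps
                         for dp in np.eye(len(p))])   # numerical Jacobian
    dp = np.linalg.solve(J.T @ J + damping * np.eye(len(p)), J.T @ e)
    return p - dp, np.linalg.norm(e)

reference = flange_geometry(np.array([100.0, 80.0]))  # target geometry
p = np.array([70.0, 120.0])                           # drifted parameters
for part in range(5):                                 # one step per produced part
    p, err = ilc_step(p, reference)
    print(part, round(err, 6))
```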
Studies in integrated line-and packet-switched computer communication systems
NASA Astrophysics Data System (ADS)
Maglaris, B. S.
1980-06-01
The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.
Nanopositioning for polarimetric characterization.
Qureshi, Naser; Kolokoltsev, Oleg V; Ortega-Martínez, Roberto; Ordoñez-Romero, C L
2008-12-01
A positioning system with approximately nanometer resolution has been developed based on a new implementation of a motor-driven screw scheme. In contrast to conventional positioning systems based on piezoelectric elements, this system shows remarkably low levels of drift and vibration, and eliminates the need for position feedback during typical data acquisition processes. During positioning or scanning processes, non-repeatability and hysteresis problems inherent in mechanical positioning systems are greatly reduced using a software feedback scheme. As a result, we are able to demonstrate an average mechanical resolution of 1.45 nm and near diffraction-limited imaging using scanning optical microscopy. We propose this approach to nanopositioning as a readily accessible alternative enabling high spatial resolution scanning probe characterization (e.g., polarimetry) and provide practical details for its implementation.
Søgaard, Rikke; Kristensen, Søren Rud; Bech, Mickael
2015-08-01
This paper is a first examination of the development of an alternative to activity-based remuneration in public hospitals, which is currently being tested at nine hospital departments in a Danish region. The objective is to examine the process of delegating the authority of designing new incentive schemes from the principal (the regional government) to the agents (the hospital departments). We adopt a theoretical framework where, when deciding about delegation, the principal should trade off an initiative effect against the potential cost of loss of control. The initiative effect is evaluated by studying the development process and the resulting incentive schemes for each of the departments. Similarly, the potential cost of loss of control is evaluated by assessing the congruence between focus of the new incentive schemes and the principal's objectives. We observe a high impact of the effort incentive in the form of innovative and ambitious selection of projects by the agents, leading to nine very different solutions across departments. However, we also observe some incongruence between the principal's stated objectives and the revealed private interests of the agents. Although this is a baseline study involving high uncertainty about the future, the findings point at some issues with the delegation approach that could lead to inefficient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Dong; Shang-Hong, Zhao; MengYi, Deng
2018-03-01
The multiple-crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric downconversion process, we present a modified MDI-QKD scheme with an MHPS, for which two architectures are proposed: a symmetric scheme and an asymmetric scheme. The symmetric scheme, which is linked by photon switches in a log-tree structure, is adopted to overcome the limitation of the current low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue, suffered by the symmetric scheme, as the number of crystals increases. The numerical simulations show that our modified scheme has clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source and the traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source in quantum key distribution tasks requiring single photons to work.
A search for space energy alternatives
NASA Technical Reports Server (NTRS)
Gilbreath, W. P.; Billman, K. W.
1978-01-01
This paper takes a look at a number of schemes for converting radiant energy in space to useful energy for man. These schemes are possible alternatives to the currently most studied solar power satellite concept. Possible primary collection and conversion devices discussed include the space particle flux devices, solar windmills, photovoltaic devices, photochemical cells, photoemissive converters, heat engines, dielectric energy conversion, electrostatic generators, plasma solar collectors, and thermionic schemes. Transmission devices reviewed include lasers and masers.
An Efficient Variable Length Coding Scheme for an IID Source
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary Huffman coding in certain circumstances.
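A rough sketch of the flavor of such a strategy: run-length extraction of the dominant symbol followed by a Huffman code over the resulting tokens (the exact ARH construction in the paper alternates two Huffman codes and is not reproduced here):

```python
import heapq
import itertools
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from {symbol: count}."""
    tie = itertools.count()                     # unique tiebreaker for the heap
    heap = [(n, next(tie), {s: ""}) for s, n in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, next(tie), merged))
    return heap[0][2]

# IID source with dominant symbol 'a': tokenize as runs of 'a' plus literals.
data = "aaaabaaacaaaaabaaaaaaacab"
tokens, count = [], 0
for ch in data:
    if ch == "a":
        count += 1
    else:
        tokens += [("run", count), ch]
        count = 0
if count:
    tokens.append(("run", count))
code = huffman_code(Counter(tokens))
bits = "".join(code[t] for t in tokens)
print(len(bits), "bits for", len(data), "source symbols")
```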
NASA Astrophysics Data System (ADS)
Schaefer, S.; Gregory, M.; Rosenkranz, W.
2017-09-01
Due to higher data rates, better data security, and unlicensed spectral usage, optical inter-satellite links (OISLs) offer an attractive alternative to conventional RF communication. However, the very high transmission distances necessitate an optical receiver design enabling high receiver sensitivity, which requires careful carrier synchronization and a quasi-coherent detection scheme.
Minimum Disclosure Counting for the Alternative Vote
NASA Astrophysics Data System (ADS)
Wen, Roland; Buckland, Richard
Although there is a substantial body of work on preventing bribery and coercion of voters in cryptographic election schemes for plurality electoral systems, there are few attempts to construct such schemes for preferential electoral systems. The problem is preferential systems are prone to bribery and coercion via subtle signature attacks during the counting. We introduce a minimum disclosure counting scheme for the alternative vote preferential system. Minimum disclosure provides protection from signature attacks by revealing only the winning candidate.
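For readers unfamiliar with the electoral system itself, plain (non-cryptographic) alternative-vote counting looks like the sketch below; the paper's contribution is performing this tally while revealing only the winner:

```python
from collections import Counter

def alternative_vote(ballots):
    """Plain instant-runoff count: ballots are preference-ordered candidate
    lists. Repeatedly eliminate the candidate with fewest first preferences
    until some candidate holds a majority of the continuing ballots."""
    candidates = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in candidates)
                        for b in ballots if any(c in candidates for c in b))
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        candidates.remove(min(tally, key=tally.get))

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
           ["C", "B", "A"], ["C", "B", "A"]]
print(alternative_vote(ballots))   # prints 'C': B's ballot transfers to C
```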
Quantum Iterative Deepening with an Application to the Halting Problem
Tarrataca, Luís; Wichert, Andreas
2013-01-01
Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) a detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation and (3) an inherent speedup to occur during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465
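The amplitude-amplification ingredient can be illustrated with a small state-vector simulation of Grover's search (a generic textbook iteration, not the authors' production-system construction):

```python
import numpy as np

n_qubits, marked = 4, 11            # search a 16-item space for index 11
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))  # uniform superposition

oracle = np.ones(N)
oracle[marked] = -1.0                               # phase-flip the target
n_iter = int(np.round(np.pi / 4 * np.sqrt(N)))      # ~optimal iteration count

for _ in range(n_iter):
    state *= oracle                                 # oracle: mark the state
    state = 2 * state.mean() - state                # diffusion: invert about mean
print(np.argmax(state**2), (state**2)[marked])      # target found w.h.p.
```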
NASA Astrophysics Data System (ADS)
Du, X.; Savich, G. R.; Marozas, B. T.; Wicks, G. W.
2018-02-01
Surface leakage and lateral diffusion currents in InAs-based nBn photodetectors have been investigated. Devices fabricated using a shallow etch processing scheme that etches through the top contact and stops at the barrier exhibited large lateral diffusion current but undetectably low surface leakage. Such large lateral diffusion current significantly increased the dark current, especially in small devices, and causes pixel-to-pixel crosstalk in detector arrays. To eliminate the lateral diffusion current, two different approaches were examined. The conventional solution utilized a deep etch process, which etches through the top contact, barrier, and absorber. This deep etch processing scheme eliminated lateral diffusion, but introduced high surface current along the device mesa sidewalls, increasing the dark current. High device failure rate was also observed in deep-etched nBn structures. An alternative approach to limit lateral diffusion used an inverted nBn structure that has its absorber grown above the barrier. Like the shallow etch process on conventional nBn structures, the inverted nBn devices were fabricated with a processing scheme that only etches the top layer (the absorber, in this case) but avoids etching through the barrier. The results show that inverted nBn devices have the advantage of eliminating the lateral diffusion current without introducing elevated surface current.
Break-even cost of cloning in genetic improvement of dairy cattle.
Dematawewa, C M; Berger, P J
1998-04-01
Twelve different models for alternative progeny-testing schemes based on genetic and economic gains were compared. The first 10 alternatives were considered to be optimally operating progeny-testing schemes. Alternatives 1 to 5 considered the following combinations of technologies: 1) artificial insemination, 2) artificial insemination with sexed semen, 3) artificial insemination with embryo transfer, 4) artificial insemination and embryo transfer with few bulls as sires, and 5) artificial insemination, embryo transfer, and sexed semen with few bulls, respectively. Alternatives 6 to 12 considered cloning from dams. Alternatives 11 and 12 considered a regular progeny-testing scheme that had selection gains (intensity x accuracy x genetic standard deviation) of 890, 300, 600, and 89 kg, respectively, for the four paths. The sums of the generation intervals of the four paths were 19 yr for the first 8 alternatives and 19.5, 22, 29, and 29.5 yr for alternatives 9 to 12, respectively. Rates of genetic gain in milk yield for alternatives 1 to 5 were 257, 281, 316, 327, and 340 kg/yr, respectively. The rate of gain for other alternatives increased as number of clones increased. The use of three records per clone increased both accuracy and generation interval of a path. Cloning was highly beneficial for progeny-testing schemes with lower intensity and accuracy of selection. The discounted economic gain (break-even cost) per clone was the highest ($84) at current selection levels using sexed semen and three records on clones of the dam. The total cost associated with cloning has to be below $84 for cloning to be an economically viable option.
On the convergence of nonconvex minimization methods for image recovery.
Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei
2015-05-01
Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
Fault-tolerant quantum computation with nondeterministic entangling gates
NASA Astrophysics Data System (ADS)
Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.
2018-03-01
Performing entangling gates between physical qubits is necessary for building a large-scale universal quantum computer, but in some physical implementations—for example, those that are based on linear optics or networks of ion traps—entangling gates can only be implemented probabilistically. In this work, we study the fault-tolerant performance of a topological cluster state scheme with local nondeterministic entanglement generation, where failed entangling gates (which correspond to bonds on the lattice representation of the cluster state) lead to a defective three-dimensional lattice with missing bonds. We present two approaches for dealing with missing bonds; the first is a nonadaptive scheme that requires no additional quantum processing, and the second is an adaptive scheme in which qubits can be measured in an alternative basis to effectively remove them from the lattice, hence eliminating their damaging effect and leading to better threshold performance. We find that a fault-tolerance threshold can still be observed with a bond-loss rate of 6.5% for the nonadaptive scheme, and a bond-loss rate as high as 14.5% for the adaptive scheme.
ERIC Educational Resources Information Center
Mertler, Craig A.
This study attempted to (1) expand the dichotomous classification scheme typically used by educators and researchers to describe teaching incentives and (2) offer administrators and teachers an alternative framework within which to develop incentive systems. Elementary, middle, and high school teachers in Ohio rated 10 commonly instituted teaching…
NASA Astrophysics Data System (ADS)
Schaefer, Semjon; Gregory, Mark; Rosenkranz, Werner
2016-11-01
We present simulative and experimental investigations of different coherent receiver designs for high-speed optical intersatellite links. We focus on frequency offset (FO) compensation in homodyne and intradyne detection systems. The considered laser communication terminal uses an optical phase-locked loop (OPLL), which ensures stable homodyne detection. However, the hardware complexity increases with the modulation order. Therefore, we show that software-based intradyne detection is an attractive alternative to OPLL-based homodyne systems. Our approach is based on digital FO and phase noise compensation, in order to achieve a more flexible coherent detection scheme. Analytic results further show the theoretical impact of the different detection schemes on receiver sensitivity. Finally, we compare the schemes in terms of bit error ratio measurements and optimal receiver design.
Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.
Shafiey, Hassan; Gan, Xinjun; Waxman, David
2017-11-01
To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
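The "standard" approach criticized above is easy to state in code. A minimal Euler-Maruyama sketch for a square-root diffusion with a natural boundary at x = 0 (the naive reset on the flagged line is exactly the step the paper shows to introduce a spurious force; the corrected scheme itself is not reproduced here, and the drift/noise coefficients are illustrative):

```python
import numpy as np

def euler_with_reset(x0, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama for dX = mu(X) dt + sigma(X) dW with a naturally
    occurring boundary at x = 0, using the naive 'reset to the boundary'
    treatment of excursions into the forbidden region x < 0."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dw
        if x[k + 1] < 0.0:
            x[k + 1] = 0.0      # naive reset: introduces a spurious force
    return x

rng = np.random.default_rng(1)
# Square-root noise keeps the exact process nonnegative, but the discretized
# trajectory can still jump below zero, which triggers the reset.
path = euler_with_reset(x0=0.05, mu=lambda x: 0.5 * (0.2 - x),
                        sigma=lambda x: 0.4 * np.sqrt(max(x, 0.0)),
                        dt=1e-3, n_steps=10_000, rng=rng)
```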
A novel high-speed CMOS circuit based on a gang of capacitors
NASA Astrophysics Data System (ADS)
Sharroush, Sherif M.
2017-08-01
There is no doubt that complementary metal-oxide semiconductor (CMOS) circuits with wide fan-in suffer from relatively sluggish operation. In this paper, a circuit that contains a gang of capacitors sharing their charge with each other is proposed as an alternative to long N-channel MOS and P-channel MOS stacks. The proposed scheme is investigated quantitatively and verified by simulation using the 45-nm CMOS technology with VDD = 1 V. The time delay, area, and power consumption of the proposed scheme are investigated and compared with the conventional static CMOS logic circuit. It is verified that the proposed scheme achieves a 52% saving in average propagation delay for eight inputs, has a smaller area than conventional CMOS logic when the number of inputs exceeds three, and has smaller power consumption when the number of inputs exceeds two. The impacts of process variations, component mismatches, and technology scaling on the proposed scheme are also investigated.
Fault Diagnosis for Centre Wear Fault of Roll Grinder Based on a Resonance Demodulation Scheme
NASA Astrophysics Data System (ADS)
Wang, Liming; Shao, Yimin; Yin, Lei; Yuan, Yilin; Liu, Jing
2017-05-01
The roll grinder is one of the important parts of rolling machinery, and the grinding precision of the roll surface has a direct influence on the surface quality of the steel strip. However, during the grinding process, the centre bears the weight of the roll and alternating stress. Therefore, wear or spalling faults are easily observed on the centre, which will lead to anomalous vibration of the roll grinder. In this study, a resonance demodulation scheme is proposed to detect the centre wear fault of a roll grinder. Firstly, the fast kurtogram method is employed to help select the sub-band filter parameters for optimal resonance demodulation. Then, the envelope spectrum is derived from the filtered signal. Finally, two health indicators are designed to perform fault diagnosis for the centre wear fault. The proposed scheme is assessed by analysing experimental data from a roll grinder of a twenty-high rolling mill. The results show that the proposed scheme can effectively detect the centre wear fault of the roll grinder.
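The resonance-demodulation core (band-pass filtering around the resonance selected, e.g., by the kurtogram, then the Hilbert envelope and its spectrum) can be sketched as follows; the band edges, sampling rate, and synthetic fault signature are placeholders, not the paper's data:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs, band):
    """Resonance demodulation: band-pass around the excited resonance,
    take the Hilbert envelope, and return its amplitude spectrum."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, spec

# Synthetic fault signal: 3 kHz resonance bursts repeating at 25 Hz.
fs, T = 20_000, 2.0
t = np.arange(0, T, 1 / fs)
bursts = (np.sin(2 * np.pi * 3000 * t)
          * (np.mod(t, 1 / 25) < 0.002)) + 0.2 * np.random.randn(len(t))
freqs, spec = envelope_spectrum(bursts, fs, band=(2500, 3500))
mask = freqs > 5
print(freqs[mask][np.argmax(spec[mask])])   # expect a peak near 25 Hz
```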
Thompson, John
2002-11-01
This paper discusses the management of meat tenderness using a carcass grading scheme which utilizes the concept of total quality management of those factors which impact on beef palatability. The scheme, called Meat Standards Australia (MSA), has identified the Critical Control Points (CCPs) from the production, pre-slaughter, processing and value adding sectors of the beef supply chain and quantified their relative importance using large-scale consumer testing. These CCPs have been used to manage beef palatability in two ways. Firstly, CCPs from the pre-slaughter and processing sectors have been used as mandatory criteria for carcasses to be graded. Secondly, other CCPs from the production and processing sectors have been incorporated into a model to predict palatability for individual muscles. The evidence for the importance of CCPs from the production (breed, growth path and HGP implants), pre-slaughter and processing (pH/temperature window, alternative carcass suspension, marbling and ageing) sectors are reviewed and the accuracy of the model to predict palatability for specific muscle × cooking techniques is presented.
Andrade, Xavier; Aspuru-Guzik, Alán
2013-10-08
We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.
Borole, Abhijeet P.
2015-08-25
Conversion of biomass into bioenergy is possible via multiple pathways resulting in production of biofuels, bioproducts and biopower. Efficient and sustainable conversion of biomass, however, requires consideration of many environmental and societal parameters in order to minimize negative impacts. Integration of multiple conversion technologies and inclusion of upcoming alternatives such as bioelectrochemical systems can minimize these impacts and improve conservation of resources such as hydrogen, water and nutrients via recycle and reuse. This report outlines alternate pathways integrating microbial electrolysis in biorefinery schemes to improve energy efficiency while evaluating environmental sustainability parameters.
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
NASA Astrophysics Data System (ADS)
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
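A generic Peaceman-Rachford ADI step for the 2D diffusion part (without the osmosis drift terms) shows the structure: each half-step is implicit in one direction only, so in practice only tridiagonal systems are solved. Grid size and time step below are illustrative, and dense solves stand in for the tridiagonal factorisations:

```python
import numpy as np

def adi_step(U, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square grid
    (homogeneous Dirichlet boundary), with r = tau / (2*h^2). Each half-step
    is implicit in a single direction -> tridiagonal solves in practice."""
    n = U.shape[0]
    L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))          # 1D Laplacian stencil
    A = np.eye(n) - r * L                        # implicit operator
    B = np.eye(n) + r * L                        # explicit operator
    U_half = np.linalg.solve(A, U @ B.T)         # implicit in x, explicit in y
    return np.linalg.solve(A, U_half.T @ B.T).T  # implicit in y, explicit in x

n = 64
U = np.zeros((n, n))
U[n // 2, n // 2] = 1.0                          # point source
for _ in range(100):
    U = adi_step(U, r=0.5)
```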
Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris
2017-12-15
Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible and that the optimization converges quickly when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads for a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls.
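As a sketch of the second-stage search, here is a minimal generalized pattern search in Python (assuming NumPy); the quadratic objective is an illustrative stand-in, whereas the paper's objective would wrap SWMM runs and combine flooding, TSS load and storage cost into a single score:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-3, max_iter=200):
    """Minimal generalized pattern search (GPS): poll f at +/- step
    along each coordinate, move to any improving point, otherwise
    halve the step until it falls below tol."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            cand = x + step * d
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Hypothetical usage: x holds candidate storage-tank volumes, and a good
# preliminary scheme (here [2.5, 2.5]) shortens the search considerably.
best, cost = pattern_search(lambda v: np.sum((v - 3.0) ** 2), x0=[2.5, 2.5])
```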
Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction
NASA Astrophysics Data System (ADS)
Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho
2016-11-01
This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms, which provided an unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of the higher sampling efficiencies of non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging, which was regarded as the gold standard (difference within 1.8 mm on average).
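As an illustration of the surrogate-extraction step, the sketch below recovers a 1-D respiratory signal from a stack of dynamic 2-D frames via PCA, one common dimensionality-reduction choice (the abstract does not commit to a specific DR method). It assumes NumPy and scikit-learn; the data are synthetic and all parameter values are invented for the example:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.arange(300) * 0.3                      # hypothetical 0.3 s frame spacing
phase = np.sin(2 * np.pi * t / 4.0)           # ~4 s breathing period
images = phase[:, None, None] * rng.random((1, 32, 32)) \
         + 0.05 * rng.standard_normal((300, 32, 32))

X = images.reshape(len(images), -1)           # one row per frame
X = X - X.mean(axis=0)                        # centre the data (PCA also does this internally)
surrogate = PCA(n_components=1).fit_transform(X).ravel()

# The sign of a principal component is arbitrary, so align it with
# inhale/exhale before using it to sort slices into respiratory bins.
if np.corrcoef(surrogate, phase)[0, 1] < 0:
    surrogate = -surrogate
```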
Effective crisis decision-making.
Kaschner, Holger
2017-01-01
When an organisation's reputation is at stake, crisis decision-making (CDM) is challenging and prone to failure. Most CDM schemes are strong at certain aspects of the overall CDM process, but almost none are strong at all of them. This paper defines criteria for good CDM schemes, analyses common approaches and introduces an alternative, stakeholder-driven scheme. Focusing on the most important stakeholders and directing any actions to preserve the relationships with them is crucial. When doing so, the interdependencies between the stakeholders must be identified and considered. Without knowledge of the sometimes less than obvious links, well-meaning actions can cause adverse effects, so a cross-check for the impacts of potential options is recommended before making the final decision. The paper also gives recommendations on how to implement these steps at any organisation in order to enhance the quality of CDM and thus protect the organisation's reputation.
A Bookmarking Service for Organizing and Sharing URLs
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.
1997-01-01
Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.
A low knee voltage and high breakdown voltage of 4H-SiC TSBS employing poly-Si/Ni Schottky scheme
NASA Astrophysics Data System (ADS)
Kim, Dong Young; Seok, Ogyun; Park, Himchan; Bahng, Wook; Kim, Hyoung Woo; Park, Ki Cheol
2018-02-01
We report a low knee voltage and high breakdown voltage 4H-SiC TSBS employing poly-Si/Ni dual Schottky contacts. The knee voltage was significantly improved from 0.75 to 0.48 V by utilizing an alternative low-work-function material, poly-Si, as the anode electrode. Also, the reverse breakdown voltage was improved from 901 to 1154 V owing to a shrunken low-work-function Schottky region produced by the proposed self-aligned etching process between poly-Si and SiC. A SiC TSBS with the poly-Si/Ni dual Schottky scheme is a suitable structure for high-efficiency rectification and high-voltage blocking operation.
An agent-based model for water management and planning in the Lake Naivasha basin, Kenya
NASA Astrophysics Data System (ADS)
van Oel, Pieter; Mulatu, Dawit; Odongo, Vincent; Onyando, Japheth; Becht, Robert; van der Veen, Anne
2013-04-01
A variety of human and natural processes influence the ecological and economic state of the Lake Naivasha basin. The ecological wealth and recent economic developments in the area are strongly connected to Lake Naivasha, which supports a rich variety of flora, mammal and bird species. Many human activities depend on clean freshwater from the lake, whereas the availability of freshwater of good quality has recently been seriously affected by water abstractions and the use of fertilizers in agriculture. Management alternatives include those aiming at limiting water abstractions and fertilizer use. A possible way to achieve reduced use of water and fertilizers is the introduction of Payment for Environmental Services (PES) schemes. As the Lake Naivasha basin and its population have experienced increasing pressures, various disputes and disagreements have arisen about the processes responsible for the problems experienced and the effectiveness of management alternatives. Besides conflicts of interest and disagreements on responsibilities, there are serious factual disagreements. To share scientific knowledge on the effects of the socio-ecological system processes on the Lake Naivasha basin, tools may be used that expose information at temporal and spatial scales that are meaningful to stakeholders. In this study we use a spatially-explicit agent-based modelling (ABM) approach to depict the interactions between socio-economic and natural subsystems for supporting a more sustainable governance of the river basin resources. Agents consider alternative livelihood strategies and decide to pursue the one they perceive as likely to be most profitable. Agents may predict and sense the availability of resources and can also observe the economic performance achieved by neighbouring agents. Results are presented at the basin and subbasin level to provide relevant knowledge to Water Resources Users Associations, which are important collective forums for water management through which PES schemes are managed.
Webcams for Bird Detection and Monitoring: A Demonstration Study
Verstraeten, Willem W.; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol
2010-01-01
Better insights into bird migration can be a tool for assessing the spread of avian-borne infections or ecological/climatological issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision and lens-distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed, and future research should strengthen the potential of the processing scheme by exploring and testing alternatives for each individual module or processing step. PMID:22319308
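A toy version of the motion-detection module, assuming NumPy and grayscale frames, is sketched below; the running-average background and the threshold rule are generic choices, not the paper's exact pipeline:

```python
import numpy as np

def detect_moving_objects(frames, alpha=0.05, k=4.0):
    """Background subtraction: keep a running-average background and
    flag pixels that deviate from it by more than k standard
    deviations of the current difference image."""
    bg = None
    for frame in frames:
        f = frame.astype(float)
        if bg is None:
            bg = f.copy()
            continue
        diff = np.abs(f - bg)
        mask = diff > k * max(diff.std(), 1e-6)   # candidate bird pixels
        yield mask
        bg = (1 - alpha) * bg + alpha * f         # slowly adapt the background
```

In a full system, the mask would feed the tracking stage, with stereo vision and lens-distortion correction applied before positions are triangulated.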
NASA Technical Reports Server (NTRS)
Mayo, L. H.
1971-01-01
A preliminary provisional assessment of the prospects for the establishment of an adequate technology assessment function and the implications of the assessment function for the public decision process are presented. Effects of the technology assessment function on each phase of the public decision process are briefly explored. Significant implications during the next decade are projected with respect to the following phases: invention and development of alternative means (technological configurations); evaluation, selection and promotion of preferred courses of action; and modification of statutory scheme or social action program as an outcome of continuing monitoring and appraisal.
Control scheme for power modulation of a free piston Stirling engine
Dhar, Manmohan
1989-01-01
The present invention relates to a control scheme for power modulation of a free-piston Stirling engine-linear alternator power generator system. The present invention includes connecting an autotransformer in series with a tuning capacitance between the linear alternator and a utility grid to maintain a constant displacer-to-piston stroke ratio and their relative phase angle over a wide range of operating conditions.
Data multiplexing in radio interferometric calibration
NASA Astrophysics Data System (ADS)
Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.
2018-03-01
New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers (ADMM) with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
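A toy consensus-ADMM loop with cyclic updates, assuming NumPy, is sketched below. The per-frequency subproblem here is linear least squares, whereas real interferometric calibration is non-linear in the gains; the point illustrated is only the scheduling idea of updating a few "compute agents" per iteration while still driving all frequencies toward consensus:

```python
import numpy as np

def consensus_admm(A, b, n_agents=3, rho=1.0, n_iter=60):
    """Consensus ADMM where only n_agents of the len(A) frequency
    blocks are updated per outer iteration, cycling through all."""
    nf, n = len(A), A[0].shape[1]
    x = [np.zeros(n) for _ in range(nf)]
    u = [np.zeros(n) for _ in range(nf)]
    z = np.zeros(n)
    for it in range(n_iter):
        idx = [(it * n_agents + j) % nf for j in range(n_agents)]
        for f in idx:                      # local step with consensus penalty
            x[f] = np.linalg.solve(A[f].T @ A[f] + rho * np.eye(n),
                                   A[f].T @ b[f] + rho * (z - u[f]))
        z = np.mean([x[f] + u[f] for f in range(nf)], axis=0)
        for f in idx:                      # dual update for the touched blocks
            u[f] += x[f] - z
    return z

rng = np.random.default_rng(0)
x_true = rng.standard_normal(4)
A = [rng.standard_normal((10, 4)) for _ in range(8)]        # 8 'frequencies'
b = [Af @ x_true + 0.01 * rng.standard_normal(10) for Af in A]
z = consensus_admm(A, b, n_agents=3)                        # 3 agents suffice
```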
Processable Electronically Conducting Polymers
1991-01-01
polyheterocycles and will be discussed in detail later. The Grignard coupling reaction of alkyl substituted 1,4-dibromobenzenes was initially employed, as...R2, R3) along with alkyl, aryl, benzyl, and -CH2CN substituents on the nitrogen (R1). Since aniline preferentially oxidatively polymerizes at...but, in the case of N-aryl substituted anilines, an alternative mechanism has been proposed [171], which is outlined in Scheme 9. Both poly(N
Comment on "Scrutinizing the carbon cycle and CO2residence time in the atmosphere" by H. Harde
NASA Astrophysics Data System (ADS)
Köhler, Peter; Hauck, Judith; Völker, Christoph; Wolf-Gladrow, Dieter A.; Butzin, Martin; Halpern, Joshua B.; Rice, Ken; Zeebe, Richard E.
2018-05-01
Harde (2017) proposes an alternative accounting scheme for the modern carbon cycle and concludes that only 4.3% of today's atmospheric CO2 is a result of anthropogenic emissions. As we will show, this alternative scheme is too simple, is based on invalid assumptions, and does not address many of the key processes involved in the global carbon cycle that are important on the timescale of interest. Harde (2017) therefore reaches an incorrect conclusion about the role of anthropogenic CO2 emissions. Harde (2017) tries to explain changes in atmospheric CO2 concentration with a single equation, while even the simplest model of the carbon cycle must at minimum contain equations for at least two reservoirs (the atmosphere and the surface ocean), which are solved simultaneously. A single equation is fundamentally at odds with basic theory and observations. In the following we will (i) clarify the difference between CO2 atmospheric residence time and adjustment time, (ii) present recently published information about anthropogenic carbon, (iii) present details about the processes that are missing in Harde (2017), (iv) briefly discuss shortcomings in Harde's generalization to paleo timescales, and (v) comment on deficiencies in some of the literature cited in Harde (2017).
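The two-reservoir point can be made in a few lines of code. The sketch below (assuming NumPy; all rates and inventories are illustrative round numbers, not calibrated values) integrates coupled atmosphere and surface-ocean boxes; the airborne fraction then emerges from the coupled system rather than from any single residence-time equation:

```python
import numpy as np

dt, years = 0.1, 200
Ca, Co = 600.0, 900.0                  # GtC in atmosphere / surface ocean (illustrative)
k_ao = 0.10                            # atmosphere -> ocean exchange rate, 1/yr
k_oa = k_ao * 600.0 / 900.0            # chosen so the fluxes balance initially
E = 10.0                               # anthropogenic emissions, GtC/yr (illustrative)
hist = []
for _ in range(int(years / dt)):
    F_ao, F_oa = k_ao * Ca, k_oa * Co  # the boxes are coupled through the fluxes,
    Ca += dt * (E - F_ao + F_oa)       # so both equations must be stepped together
    Co += dt * (F_ao - F_oa)
    hist.append(Ca)
airborne_fraction = (hist[-1] - hist[0]) / (E * years)
```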
Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-01-01
The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable technology to deploy wide-scale Wireless Sensor Networks (WSNs). These networks are usually designed to support convergecast traffic, where all communication paths go through the PAN (Personal Area Network) coordinator. Nevertheless, peer-to-peer communication relationships may be also required for different types of WSN applications. That is the typical case of sensor and actuator networks, where local control loops must be closed using a reduced number of communication hops. The use of communication schemes optimised just for the support of convergecast traffic may result in higher network congestion and in a potentially higher number of communication hops. Within this context, this paper proposes an Alternative-Route Definition (ARounD) communication scheme for WSNs. The underlying idea of ARounD is to setup alternative communication paths between specific source and destination nodes, avoiding congested cluster-tree paths. These alternative paths consider shorter inter-cluster paths, using a set of intermediate nodes to relay messages during their inactive periods in the cluster-tree network. Simulation results show that the ARounD communication scheme can significantly decrease the end-to-end communication delay, when compared to the use of standard cluster-tree communication schemes. Moreover, the ARounD communication scheme is able to reduce the network congestion around the PAN coordinator, enabling the reduction of the number of message drops due to queue overflows in the cluster-tree network. PMID:28481245
Description of the Prometheus Program Alternator/Thruster Integration Laboratory (ATIL)
NASA Technical Reports Server (NTRS)
Baez, Anastacio N.; Birchenough, Arthur G.; Lebron-Velilla, Ramon C.; Gonzalez, Marcelo C.
2005-01-01
The Project Prometheus Alternator/Thruster Integration Laboratory (ATIL) has two primary objectives: to obtain test data to influence the power conversion and electric propulsion systems design, and to assist in developing the primary power quality specifications prior to the system Preliminary Design Review (PDR). ATIL is being developed in stages, or configurations of increasing fidelity and complexity, in order to support the various phases of the Prometheus program. ATIL provides timely insight into the electrical interactions between a representative Permanent Magnet Generator, its associated control schemes, realistic electric system loads, and an operating electric propulsion thruster. The ATIL main elements are an electrically driven 100 kWe Alternator Test Unit (ATU), an alternator controller using parasitic loads, and a thruster Power Processing Unit (PPU) breadboard. This paper describes the ATIL components, its development approach, preliminary integration test results, and current status.
Patterning and templating for nanoelectronics.
Galatsis, Kosmas; Wang, Kang L; Ozkan, Mihri; Ozkan, Cengiz S; Huang, Yu; Chang, Jane P; Monbouquette, Harold G; Chen, Yong; Nealey, Paul; Botros, Youssry
2010-02-09
The semiconductor industry will soon be launching the 32 nm complementary metal oxide semiconductor (CMOS) technology node, using 193 nm lithography patterning technology to fabricate microprocessors with more than 2 billion transistors. To ensure the survival of Moore's law, alternative patterning techniques that offer advantages beyond conventional top-down patterning are aggressively being explored. It is evident that most alternative patterning techniques may not offer compelling advantages to succeed conventional top-down lithography for silicon integrated circuits, but alternative approaches may well indeed offer functional advantages in realising next-generation information processing nanoarchitectures such as those based on cellular, bio-inspired, magnetic dot logic, and crossbar schemes. This paper highlights and evaluates some patterning methods from the Center on Functional Engineered Nano Architectonics in Los Angeles and discusses key benchmarking criteria with respect to CMOS scaling.
What Is the Reference? An Examination of Alternatives to the Reference Sources Used in IES TM-30-15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royer, Michael P.
A study was undertaken to document the role of the reference illuminant in the IES TM-30-15 method for evaluating color rendition. TM-30-15 relies on a relative reference scheme; that is, the reference illuminant and test source always have the same correlated color temperature (CCT). The reference illuminant is a Planckian radiator, model of daylight, or combination of those two, depending on the exact CCT of the test source. Three alternative reference schemes were considered: 1) either using all Planckian radiators or all daylight models; 2) using only one of ten possible illuminants (Planckian, daylight, or equal energy), regardless of the CCT of the test source; 3) using an off-Planckian reference illuminant (i.e., a source with a negative Duv). No reference scheme is inherently superior to another, with differences in metric values largely a result of small differences in gamut shape of the reference alternatives. While using any of the alternative schemes is more reasonable in the TM-30-15 evaluation framework than it was with the CIE CRI framework, the differences still ultimately manifest only as changes in interpretation of the results. References are employed in color rendering measures to provide a familiar point of comparison, not to establish an ideal source.
Comparison of two matrix data structures for advanced CSM testbed applications
NASA Technical Reports Server (NTRS)
Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.
1989-01-01
The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.
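For readers unfamiliar with the skyline (profile) layout mentioned above, a minimal Python packing of a symmetric matrix is sketched below; the array names and helpers are illustrative, not the testbed's actual facilities:

```python
import numpy as np

def to_skyline(A):
    """Pack the upper triangle of a symmetric matrix column by column,
    from the top of each column's nonzero profile down to the diagonal;
    ptr[j] is the index of the diagonal entry A[j, j] in vals."""
    n = A.shape[0]
    vals, ptr, first = [], [], []
    for j in range(n):
        rows = np.nonzero(A[:j + 1, j])[0]
        i0 = int(rows[0]) if len(rows) else j    # top of the column profile
        first.append(i0)
        vals.extend(A[i0:j + 1, j])
        ptr.append(len(vals) - 1)
    return np.array(vals), ptr, first

def skyline_get(vals, ptr, first, i, j):
    """Random access; symmetry lets us mirror (i, j) when i > j."""
    if i > j:
        i, j = j, i
    return vals[ptr[j] - (j - i)] if i >= first[j] else 0.0

A = np.array([[4., 1., 0.], [1., 5., 2.], [0., 2., 6.]])
vals, ptr, first = to_skyline(A)                 # stores 5 of the 9 entries
assert skyline_get(vals, ptr, first, 2, 0) == 0.0
```

Compressed sparse formats index arbitrary nonzeros instead; which layout wins depends on the fill-in behaviour of the factorization, the kind of trade-off such comparisons weigh.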
Robust energy-absorbing compensators for the ACTEX II test article
NASA Astrophysics Data System (ADS)
Blaurock, Carl A.; Miller, David W.; Nye, Ted
1995-05-01
The paper addresses the problem of satellite solar panel vibration. A multi-layer vibration control scheme is investigated using a flight test article. Key issues in the active control portion are presented in the paper. The paper discusses the primary control design drivers, which are the time variations in modal frequencies due to configuration and thermal changes. A local control design approach is investigated, but found to be unworkable due to sensor/actuator non-collocation. An alternate design process uses linear robust control techniques, by describing the modal shifts as uncertainties. Multiple modal design, alpha-shifted multiple model, and a feedthrough compensation scheme are examined. Ground and simulation tests demonstrate that the resulting controllers provide significant vibration reduction in the presence of expected system variations.
ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1990-01-01
For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
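As a concrete instance of flux limiting, the Python sketch below (assuming NumPy) applies a classical second-order TVD limiter to 1-D linear advection; the ULTRA-SHARP universal limiter plays the same monotonicity-enforcing role but admits base schemes of arbitrarily high order:

```python
import numpy as np

def limiter(r, kind="vanleer"):
    if kind == "minmod":
        return np.maximum(0.0, np.minimum(1.0, r))
    return (r + np.abs(r)) / (1.0 + np.abs(r))    # van Leer

def advect(u, c, steps, kind="vanleer"):
    """Flux-limited upwind scheme for u_t + a*u_x = 0 on a periodic
    grid, with Courant number c = a*dt/dx in (0, 1]."""
    for _ in range(steps):
        du = np.roll(u, -1) - u                   # u[i+1] - u[i]
        dl = u - np.roll(u, 1)                    # u[i] - u[i-1]
        r = dl / np.where(np.abs(du) > 1e-12, du, 1e-12)
        phi = limiter(r, kind)                    # phi=0: upwind, phi=1: Lax-Wendroff
        flux = u + 0.5 * (1.0 - c) * phi * du     # limited flux at i+1/2
        u = u - c * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
u = advect(u0.copy(), c=0.5, steps=100)           # square wave stays monotone
```

With phi clamped to zero the scheme degrades to diffusive first-order upwinding; the limiter restores second-order accuracy in smooth regions while suppressing the oscillations the abstract describes.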
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regards to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
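The back-fitting idea of scheme V can be shown in a few lines. In the sketch below (assuming NumPy), a toy linear reservoir stands in for the hydrological model and a grid search for its calibrator; both are hypothetical simplifications of a real rainfall-runoff setup:

```python
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, size=300)

def simulate(k, rain):
    """Toy hydrological model: a linear reservoir with recession k."""
    q = np.zeros(len(rain))
    for t in range(1, len(rain)):
        q[t] = k * q[t - 1] + (1 - k) * rain[t]
    return q

obs = simulate(0.7, rain) + 0.3 * rng.standard_normal(300)

def fit_ar1(resid):
    """Least-squares AR(1) coefficient of the residual series."""
    return float(resid[:-1] @ resid[1:] / (resid[:-1] @ resid[:-1]))

def calibrate(obs, rain, phi, grid=np.linspace(0.4, 0.95, 56)):
    """Pick k minimising one-step-ahead error after AR(1) correction."""
    errs = []
    for k in grid:
        sim = simulate(k, rain)
        e = obs - sim
        forecast = sim[1:] + phi * e[:-1]      # AR(1) error forecast
        errs.append(np.mean((obs[1:] - forecast) ** 2))
    return grid[int(np.argmin(errs))]

k_hat, phi_hat = 0.5, 0.0
for _ in range(5):                             # alternate the two fits
    k_hat = calibrate(obs, rain, phi_hat)
    phi_hat = fit_ar1(obs - simulate(k_hat, rain))
```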
Unstructured grids for sonic-boom analysis
NASA Technical Reports Server (NTRS)
Fouladi, Kamran
1993-01-01
A fast and efficient unstructured grid scheme is evaluated for sonic-boom applications. The scheme is used to predict the near-field pressure signatures of a body of revolution at several body lengths below the configuration, and those results are compared with experimental data. The introduction of the 'sonic-boom grid topology' to this scheme makes it well suited for sonic-boom applications, thus providing an alternative to conventional multiblock structured grid schemes.
The Politico-Economic Challenges of Ghana’s National Health Insurance Scheme Implementation
Fusheini, Adam
2016-01-01
Background: National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme with positive ratings, while also noting challenges. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Methods: Qualitative in-depth interviews were held with 33 participants from different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana. This was to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data was analysed using a thematic grouping procedure. Results: Participants identified political issues of over-politicisation and political interference as main challenges. The main economic issues participants identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. Conclusion: The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. Thus, we conclude that there is a synergy between implementation and politics, and achieving UHC under the NHIS requires political stewardship. Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing with minimal interference in the operations. For sustainability of the scheme, authorities need to review the exemption policy, the rate of contributions (especially from informal sector employees) and the recruitment criteria of scheme workers, explore additional sources of funding, and re-examine training needs of employees to strengthen their competences, among others. PMID:27694681
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-16
... continuation of electronic newsgathering operations, and the appropriate channelization scheme, coordination... also sought comment on alternative channelization schemes. Several commenters, including FWCC and...
Anokye, Nana; de Bekker-Grob, Esther W.; Higgins, Ailish; Relton, Clare; Strong, Mark; Fox-Rushby, Julia
2018-01-01
Background: Increasing breastfeeding rates have been associated with reductions in disease in babies and mothers, as well as in related costs. ‘Nourishing Start for Health (NoSH)’, a financial incentive scheme, has been proposed as a potentially effective way to increase both the number of mothers breastfeeding and the duration of breastfeeding. Aims: To establish women’s relative preferences for different aspects of a financial incentive scheme for breastfeeding and to identify the importance of scheme characteristics for the probability of participation in an incentive scheme. Methods: A discrete choice experiment (DCE) obtained information on alternative specifications of the NoSH scheme designed to promote continued breastfeeding until at least 6 weeks after birth. Four attributes framed alternative scheme designs: value of the incentive; minimum breastfeeding duration required to receive the incentive; method of verifying breastfeeding; and type of incentive. Three versions of the DCE questionnaire, each containing 8 different choice sets, provided 24 choice sets for analysis. The questionnaire was mailed to 2,531 women in the South Yorkshire Cohort (SYC) aged 16–45 years in IMD quintiles 3–5. The analytic approach considered conditional and mixed effects logistic models to account for preference heterogeneity that may be associated with a variation in effects mediated by respondents’ characteristics. Results: 564 women completed the questionnaire, giving a response rate of 22%. Most of the included attributes were found to affect utility and therefore the probability of participation in the incentive scheme. Higher rewards were preferred, although the type of incentive significantly affected women’s preferences on average. We found evidence for preference heterogeneity based on individual characteristics that mediated preferences for an incentive scheme. Conclusions: Although participants’ opinions in our sample were mixed, financial incentives for breastfeeding may be an acceptable and effective instrument to change behaviour. However, individual characteristics could mediate the effect and should therefore be considered when developing and targeting future interventions. PMID:29649245
Force analysis of magnetic bearings with power-saving controls
NASA Technical Reports Server (NTRS)
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1992-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
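A short worked example of why the bias current linearizes the actuator: each opposing electromagnet pulls with a force roughly proportional to (current/gap) squared, so biasing the pair makes the net force exactly linear in the control current. The constants below are illustrative (assuming NumPy):

```python
import numpy as np

K, g, i_b = 1.0, 1.0, 2.0                  # illustrative gain, gap and bias current
i_c = np.linspace(-1.0, 1.0, 5)            # control current sweep
f_net = K * ((i_b + i_c) ** 2 - (i_b - i_c) ** 2) / g ** 2
print(np.allclose(f_net, 4 * K * i_b * i_c / g ** 2))   # True: exactly linear in i_c
```

The price of this linearity is the steady power dissipated by the bias current itself, which is what the low-bias and zero-bias alternatives studied here attempt to recover.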
NASA Astrophysics Data System (ADS)
Rewieński, M.; Lamecki, A.; Mrozowski, M.
2013-09-01
This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
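For orientation, SciPy's eigsh already provides (exact) shift-invert Lanczos for the generalized symmetric problem, as sketched below on a synthetic stiffness-like matrix; the paper's contribution is, in effect, replacing the exact inner solves of such a method with cheap preconditioned iterative solves, which SciPy does not do out of the box:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
main = 2.0 + 1e-3 * np.arange(n)                       # synthetic SPD tridiagonal K
K = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")
M = sp.identity(n, format="csc")                       # mass matrix (identity here)

# Shift-invert about sigma: eigsh factorizes (K - sigma*M) exactly and
# runs Lanczos on its inverse, returning the eigenvalues nearest sigma.
vals, vecs = eigsh(K, k=5, M=M, sigma=0.5, which="LM")
```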
Reeve, Belinda
2013-06-01
Reducing non-core food advertising to children is an important priority in strategies to address childhood obesity. Public health researchers argue for government intervention on the basis that food industry self-regulation is ineffective; however, the industry contends that the existing voluntary scheme adequately addresses community concerns. This paper examines the operation of two self-regulatory initiatives governing food advertising to children in Australia, in order to determine whether these regulatory processes foster transparent and accountable self-regulation. The paper concludes that while both codes appear to establish transparency and accountability mechanisms, they do not provide for meaningful stakeholder participation in the self-regulatory scheme. Accordingly, food industry self-regulation is unlikely to reflect public health concerns or to be perceived as a legitimate form of governance by external stakeholders. If industry regulation is to remain a feasible alternative to statutory regulation, there is a strong argument for strengthening government oversight and implementing a co-regulatory scheme.
NASA Astrophysics Data System (ADS)
Kalina, E. A.; Biswas, M.; Newman, K.; Grell, E. D.; Bernardet, L.; Frimel, J.; Carson, L.
2017-12-01
The parameterization of moist physics in numerical weather prediction models plays an important role in modulating tropical cyclone structure, intensity, and evolution. The Hurricane Weather Research and Forecast system (HWRF), the National Oceanic and Atmospheric Administration's operational model for tropical cyclone prediction, uses the Scale-Aware Simplified Arakawa-Schubert (SASAS) cumulus scheme and a modified version of the Ferrier-Aligo (FA) microphysics scheme to parameterize moist physics. The FA scheme contains a number of simplifications that allow it to run efficiently in an operational setting, which includes prescribing values for hydrometeor number concentrations (i.e., single-moment microphysics) and advecting the total condensate rather than the individual hydrometeor species. To investigate the impact of these simplifying assumptions on the HWRF forecast, the FA scheme was replaced with the more complex double-moment Thompson microphysics scheme, which individually advects cloud ice, cloud water, rain, snow, and graupel. Retrospective HWRF forecasts of tropical cyclones that occurred in the Atlantic and eastern Pacific ocean basins from 2015-2017 were then simulated and compared to those produced by the operational HWRF configuration. Both traditional model verification metrics (i.e., tropical cyclone track and intensity) and process-oriented metrics (e.g., storm size, precipitation structure, and heating rates from the microphysics scheme) will be presented and compared. The sensitivity of these results to the cumulus scheme used (i.e., the operational SASAS versus the Grell-Freitas scheme) also will be examined. Finally, the merits of replacing the moist physics schemes that are used operationally with the alternatives tested here will be discussed from a standpoint of forecast accuracy versus computational resources.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, the utilization of two near-infrared (NIR) bands to estimate the aerosol optical properties has been adopted for the estimation of the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from NIR to visible (VIS) bands using the SSE. However, it directly applies the weighting factor to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in NIR, with no residual errors for the selected aerosol models. Then it spectrally extrapolates the reflectance contribution from NIR to visible bands for each selected model using the SRAMS. To assess the performance of the algorithm regarding the errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
1992-06-01
part, the composition, structure, and mechanisms for formation of the geopolymers are generally not understood (Henrichs, submitted). A possible scheme...unknown. A major process appears to be the incorporation of organic compounds by refractory geopolymers or humic substances. However, for the most... geopolymers (Ertel and Hedges 1983; Rubinsztain et al. 1984; Taguchi and Sampei 1986). Alternative pathways of geopolymerization could involve alteration
Conservative zonal schemes for patched grids in 2 and 3 dimensions
NASA Technical Reports Server (NTRS)
Hessenius, Kristin A.
1987-01-01
The computation of flow over complex geometries, such as realistic aircraft configurations, poses difficult grid generation problems for computational aerodynamicists. The creation of a traditional, single-module grid of acceptable quality about an entire configuration may be impossible even with the most sophisticated of grid generation techniques. A zonal approach, wherein the flow field is partitioned into several regions within which grids are independently generated, is a practical alternative for treating complicated geometries. This technique not only alleviates the problems of discretizing a complex region, but also facilitates a block processing approach to computation thereby circumventing computer memory limitations. The use of such a zonal scheme, however, requires the development of an interfacing procedure that ensures a stable, accurate, and conservative calculation for the transfer of information across the zonal borders.
Improved performance of laser wakefield acceleration by tailored self-truncated ionization injection
NASA Astrophysics Data System (ADS)
Irman, A.; Couperus, J. P.; Debus, A.; Köhler, A.; Krämer, J. M.; Pausch, R.; Zarini, O.; Schramm, U.
2018-04-01
We report on tailoring ionization-induced injection in laser wakefield acceleration so that the electron injection process is self-truncating following the evolution of the plasma bubble. Robust generation of high-quality electron beams with shot-to-shot fluctuations of the beam parameters better than 10% is presented in detail. As a novelty, the scheme was found to enable well-controlled yet simple tuning of the injected charge while preserving acceleration conditions and beam quality. Quasi-monoenergetic electron beams at several 100 MeV energy and 15% relative energy spread were routinely demonstrated with a total charge of the monoenergetic feature reaching 0.5 nC. Finally these unique beam parameters, suggesting unprecedented peak currents of several 10 kA, are systematically related to published data on alternative injection schemes.
Alternating direction implicit methods for parabolic equations with a mixed derivative
NASA Technical Reports Server (NTRS)
Beam, R. M.; Warming, R. F.
1980-01-01
Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.
Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.
Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing
2007-01-01
This paper describes an innovative systematic approach, namely hierarchy grey relational analysis, for optimal selection of wastewater treatment alternatives, based on the application of the analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied to complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The results illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
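A compact sketch of the GRA step is given below (assuming NumPy); the scores, weights and criteria are invented for illustration, with the weights standing in for what the AHP hierarchy would supply:

```python
import numpy as np

def grey_relational_grades(X, benefit, w, rho=0.5):
    """Grey relational analysis: normalise each criterion, measure each
    alternative's deviation from the ideal series, convert deviations
    to grey relational coefficients, and weight them into grades."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    N = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    delta = np.abs(1.0 - N)                       # distance to the ideal value 1
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi @ np.asarray(w, dtype=float)

# Hypothetical scores for four schemes on (capital cost, O&M cost,
# nutrient removal); only the last criterion is larger-is-better.
X = [[480, 55, 0.80], [520, 50, 0.85], [430, 60, 0.88], [500, 52, 0.75]]
grades = grey_relational_grades(X, benefit=[False, False, True], w=[0.4, 0.3, 0.3])
print(int(np.argmax(grades)))                     # index of the preferred alternative
```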
NASA Astrophysics Data System (ADS)
Agrawal, Anuj; Bhatia, Vimal; Prakash, Shashi
2018-01-01
Efficient utilization of spectrum is a key concern in the soon to be deployed elastic optical networks (EONs). To perform routing in EONs, various fixed routing (FR) and fixed-alternate routing (FAR) schemes are ubiquitously used. FR and FAR schemes calculate a fixed route, and a prioritized list of a number of alternate routes, respectively, between different pairs of origin o and target t nodes in the network. The route calculation performed using FR and FAR schemes is predominantly based on either the physical distance, known as k-shortest paths (KSP), or on the hop count (HC). For survivable optical networks, FAR usually calculates link-disjoint (LD) paths. These conventional routing schemes have been efficiently used for decades in communication networks. However, in this paper, it is demonstrated that these commonly used routing schemes cannot utilize the network spectral resources optimally in the newly introduced EONs. Thus, we propose a new routing scheme for EONs, namely, k-distance adaptive paths (KDAP), that efficiently exploits the distance-adaptive modulation and bit rate-adaptive superchannel capability inherent to EONs to improve spectrum utilization. In the proposed KDAP, routes are found and prioritized on the basis of bit rate, distance, spectrum granularity, and the number of links used for a particular route. To evaluate the performance of KSP, HC, LD, and the proposed KDAP, simulations have been performed for three different sized networks, namely, the 7-node test network (TEST7), NSFNET, and the 24-node US backbone network (UBN24). We comprehensively assess the performance of the conventional routing schemes and the proposed one by solving both the RSA and the dual RSA problems under homogeneous and heterogeneous traffic requirements. Simulation results demonstrate that there is a variation amongst the performance of KSP, HC, and LD, depending on the o-t pair, and the network topology and its connectivity. However, the proposed KDAP always performs better for all the considered networks and traffic scenarios, as compared to the conventional routing schemes, namely, KSP, HC, and LD. The proposed KDAP achieves up to 60% and 10.46% improvement in terms of spectrum utilization and resource utilization ratio, respectively, over the conventional routing schemes.
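The routing idea can be prototyped with NetworkX, as sketched below: candidate paths come from Yen's k-shortest-paths, and each is re-ranked by the spectrum it would consume under a distance-adaptive modulation table. The reach thresholds, slot width and link lengths are hypothetical, not the paper's parameters:

```python
import networkx as nx
from itertools import islice

def slots_needed(length_km, bit_rate_gbps):
    """Hypothetical distance-adaptive table: shorter paths admit denser
    modulation (more bits/symbol), hence fewer 12.5 GHz slots."""
    reach = [(600, 4), (1200, 3), (3500, 2), (9600, 1)]   # (km, bits/symbol)
    bps = next(b for r, b in reach if length_km <= r)
    return int(-(-bit_rate_gbps // (12.5 * bps)))          # ceiling division

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 400), ("b", "c", 500),
                           ("a", "d", 900), ("d", "c", 900)])
candidates = islice(nx.shortest_simple_paths(G, "a", "c", weight="weight"), 3)
ranked = sorted(candidates, key=lambda p: slots_needed(
    nx.path_weight(G, p, weight="weight"), bit_rate_gbps=100))
```

Ranking by consumed slots rather than by raw distance or hop count is the essential difference between KDAP-style prioritisation and KSP or HC.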
Dailey, James M; Power, Mark J; Webb, Roderick P; Manning, Robert J
2011-12-19
We report on the novel all-optical generation of duobinary (DB) and alternate-mark-inversion (AMI) modulation formats at 42.6 Gb/s from an input on-off keyed signal. The modulation converter consists of two semiconductor optical amplifier (SOA)-based Mach-Zehnder interferometer gates. A detailed SOA model numerically confirms the operational principles and experimental data shows successful AMI and DB conversion at 42.6 Gb/s. We also predict that the operational bandwidth can be extended beyond 40 Gb/s by utilizing a new pattern-effect suppression scheme, and demonstrate dramatic reductions in patterning up to 160 Gb/s. We show an increasing trade-off between pattern-effect reduction and mean output power with increasing bitrate.
Cooling schemes for two-component fermions in layered optical lattices
NASA Astrophysics Data System (ADS)
Goto, Shimpei; Danshita, Ippei
2017-12-01
Recently, a cooling scheme for ultracold atoms in a bilayer optical lattice has been proposed (A. Kantian et al., arXiv:1609.03579). In their scheme, the energy offset between the two layers is increased dynamically such that the entropy of one layer is transferred to the other layer. Using the full-Hilbert-space approach, we compute cooling dynamics subjected to the scheme in order to show that their scheme fails to cool down two-component fermions. We develop an alternative cooling scheme for two-component fermions, in which the spin-exchange interaction of one layer is significantly reduced. Using both full-Hilbert-space and matrix-product-state approaches, we find that our scheme can decrease the temperature of the other layer by roughly half.
Qualitative Analysis: The Current Status.
ERIC Educational Resources Information Center
Cole, G. Mattney, Jr.; Waggoner, William H.
1983-01-01
To assist in designing/implementing qualitative analysis courses, examines reliability/accuracy of several published separation schemes, notes methods where particular difficulties arise (focusing on Groups II/III), and presents alternative schemes for the separation of these groups. Only cation analyses are reviewed. Figures are presented in…
A Controlled-Phase Gate via Adiabatic Rydberg Dressing of Neutral Atoms
NASA Astrophysics Data System (ADS)
Keating, Tyler; Deutsch, Ivan; Cook, Robert; Biederman, Grant; Jau, Yuan-Yu
2014-05-01
The dipole blockade effect between Rydberg atoms is a promising tool for quantum information processing in neutral atoms. So far, most efforts to perform a quantum logic gate with this effect have used resonant laser pulses to excite the atoms, which makes the system particularly susceptible to decoherence through thermal motional effects. We explore an alternative scheme in which the atomic ground states are adiabatically ``dressed'' by turning on an off-resonant laser. We analyze the implementation of a CPHASE gate using this mechanism and find that fidelities of >99% should be possible with current technology, owing primarily to the suppression of motional errors. We also discuss how such a scheme could be generalized to perform more complicated, multi-qubit gates; in particular, a simple generalization would allow us to perform a Toffoli gate in a single step.
Alternative irradiation schemes for NIF and LMJ hohlraums
NASA Astrophysics Data System (ADS)
Bourgade, Jean-Luc; Bowen, Christopher; Gauthier, Pascal; Landen, Otto
2018-02-01
We explore two alternative irradiation schemes for the large (‘outer’) and small (‘inner’) angle beams that currently illuminate National Ignition Facility (NIF) and Laser Mégajoule cavities. In the first, while the outer laser beams enter through the usual end laser entrance holes (LEH), the inner beams enter through slots along the cavity axis wall, illuminating the back wall of the cavity. This avoids the current interaction of the inner laser beams with the gold wall bubbles generated by the outer beams, which leads to large time-dependent changes in drive symmetry. Another scheme potentially useful for NIF uses only the outer beams. The radiative losses through the slots or from the use of outer beams only are compensated by using a smaller cavity and LEH.
Hanford Spent Nuclear Fuel Project recommended path forward
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, J.C.
The Spent Nuclear Fuel Project (the Project), in conjunction with the U.S. Department of Energy-commissioned Independent Technical Assessment (ITA) team, has developed engineered alternatives for expedited removal of spent nuclear fuel, including sludge, from the K Basins at Hanford. These alternatives, along with a foreign processing alternative offered by British Nuclear Fuels Limited (BNFL), were extensively reviewed and evaluated. Based on these evaluations, a Westinghouse Hanford Company (WHC) Recommended Path Forward for K Basins spent nuclear fuel has been developed and is presented in Volume I of this document. The recommendation constitutes an aggressive series of projects to construct and operate systems and facilities to safely retrieve, package, transport, process, and store K Basins fuel and sludge. The overall processing and storage scheme is based on the ITA team's proposed passivation and vault storage process. A dual-purpose staging and vault storage facility provides an innovative feature which allows accelerated removal of fuel and sludge from the basins and minimizes programmatic risks beyond any of the originally proposed alternatives. The projects fit within a regulatory and National Environmental Policy Act (NEPA) overlay which mandates a two-phased approach to construction and operation of the needed facilities. The two-phase strategy packages and moves K Basins fuel and sludge to a newly constructed Staging and Storage Facility by the year 2000, where it is staged for processing. When an adjoining facility is constructed, the fuel is cycled through a stabilization process and returned to the Staging and Storage Facility for dry interim (40-year) storage. The estimated total expenditure for this Recommended Path Forward, including necessary new construction, operations, and deactivation of Project facilities through 2012, is approximately $1,150 million (unescalated).
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
NASA Astrophysics Data System (ADS)
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
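A hedged sketch of the computing principle under stated assumptions: Kuramoto-style phase oscillators stand in for the nano-oscillators, an input pattern is encoded as frequency detunings, and the degree of synchronization serves as the recognition readout; the coupling strength, noise level, and readout rule are illustrative, not the paper's design values.

```python
# Sketch: pattern readout from a weakly coupled oscillator network.
# Kuramoto-style phase dynamics stand in for the nano-oscillators; the
# coupling strength and readout rule are illustrative assumptions.
import numpy as np

def run_network(natural_freqs, coupling=0.05, steps=2000, dt=0.01, rng=None):
    """Integrate dtheta_i/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i) + noise."""
    rng = rng or np.random.default_rng(0)
    n = len(natural_freqs)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        pairwise = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (natural_freqs + coupling / n * pairwise)
        theta += 0.01 * rng.standard_normal(n)  # device noise is tolerated
    return theta

def synchrony(theta):
    """Kuramoto order parameter r in [0, 1]: the classification readout."""
    return abs(np.exp(1j * theta).mean())

# An input pattern is encoded as frequency detunings; patterns close to a
# stored reference pull the weakly coupled network toward higher synchrony.
pattern = np.array([1.0, 1.02, 0.98, 1.01, 0.99])
print(synchrony(run_network(pattern)))
```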
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.
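As a hedged illustration of the trade-off mechanism described above, the sketch below collapses interval coefficients to a scenario chosen by an aspiration level and re-solves a toy two-industry LP with scipy; the model, its numbers, and the interpolation rule are hypothetical stand-ins, not the study's REILP formulation.

```python
# Sketch: interval LP resolved at a chosen aspiration (risk) level.
# Interval coefficients [lo, hi] are collapsed to a scenario by lam in [0, 1];
# the tiny two-industry model and its numbers are hypothetical.
from scipy.optimize import linprog

def solve_at_risk_level(lam):
    """lam=0: most conservative pollution coefficients; lam=1: most optimistic."""
    returns = [-3.0, -5.0]                      # negate: linprog minimizes
    pollution_lo, pollution_hi = [1.0, 2.0], [2.0, 4.0]
    pollution = [hi - lam * (hi - lo)           # interpolate inside the interval
                 for lo, hi in zip(pollution_lo, pollution_hi)]
    res = linprog(returns, A_ub=[pollution], b_ub=[100.0],
                  bounds=[(0, None), (0, None)])
    return -res.fun, res.x                      # (system return, industry scales)

for lam in (0.0, 0.5, 1.0):                     # trace the risk-return trade-off
    ret, x = solve_at_risk_level(lam)
    print(f"lambda={lam:.1f}  return={ret:.1f}  plan={x}")
```

Sweeping the aspiration level traces exactly the kind of risk-return curve on which the study locates its "low risk and high return efficiency" window.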
DISSECT: a new mnemonic-based approach to the categorization of aortic dissection.
Dake, M D; Thompson, M; van Sambeek, M; Vermassen, F; Morales, J P
2013-08-01
Classification systems for aortic dissection provide important guides to clinical decision-making, but the relevance of traditional categorization schemes is being questioned in an era when endovascular techniques are assuming a growing role in the management of this frequently complex and catastrophic entity. In recognition of the expanding range of interventional therapies now used as alternatives to conventional treatment approaches, the Working Group on Aortic Diseases of the DEFINE Project developed a categorization system that features the specific anatomic and clinical manifestations of the disease process that are most relevant to contemporary decision-making. The DISSECT classification system is a mnemonic-based approach to the evaluation of aortic dissection. It guides clinicians through an assessment of six critical characteristics that facilitate optimal communication of the most salient details that currently influence the selection of a therapeutic option, including those findings that are key when considering an endovascular procedure, but are not taken into account by the DeBakey or Stanford categorization schemes. The six features of aortic dissection include: duration of disease; intimal tear location; size of the dissected aorta; segmental extent of aortic involvement; clinical complications of the dissection, and thrombus within the aortic false lumen. In current clinical practice, endovascular therapy is increasingly considered as an alternative to medical management or open surgical repair in select cases of type B aortic dissection. Currently, endovascular aortic repair is not used for patients with type A aortic dissection, but catheter-based techniques directed at peripheral branch vessel ischemia that may complicate type A dissection are considered valuable adjunctive interventions, when indicated. The use of a new system for categorization of aortic dissection, DISSECT, addresses the shortcomings of well-known established schemes devised more than 40 years ago, before the introduction of endovascular techniques. It will serve as a guide to support a critical analysis of contemporary therapeutic options and inform management decisions based on specific features of the disease process. Copyright © 2013 European Society for Vascular Surgery. All rights reserved.
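A minimal sketch of how the six DISSECT features could be captured as a structured record for communication or audit; the field names follow the mnemonic in the abstract, while the example values and category labels are illustrative guesses rather than the Working Group's official coding.

```python
# Sketch: the six DISSECT features as a structured record. Field names follow
# the mnemonic in the abstract; the enumerated values are illustrative guesses,
# not the Working Group's official coding.
from dataclasses import dataclass

@dataclass
class DissectRecord:
    duration: str           # D: e.g. "acute", "subacute", "chronic"
    intimal_tear: str       # I: tear location, e.g. "ascending", "arch", "descending"
    size_mm: float          # S: maximum diameter of the dissected aorta
    segmental_extent: str   # SE: aortic segments involved
    clinical_complications: list  # C: e.g. ["malperfusion", "rupture"]
    thrombus_false_lumen: str     # T: e.g. "patent", "partial", "complete"

case = DissectRecord("acute", "descending", 42.0, "zones 3-5",
                     ["malperfusion"], "partial")
```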
Automated array assembly task, phase 1
NASA Technical Reports Server (NTRS)
Carbajal, B. G.
1977-01-01
An assessment of state-of-the-art technologies that are applicable to silicon solar cell and solar cell module fabrication is provided. The assessment consists of a technical feasibility evaluation and a cost projection for high-volume production of silicon solar cell modules. The cost projection was approached from two directions: a design-to-cost analysis assigned cost goals to each major process element in the fabrication scheme, and a cost analysis built up projected costs for alternate technologies for each process element. A technical evaluation was used in combination with the cost analysis to identify a baseline low cost process. A novel approach to metal pattern design based on minimum power loss was developed. These design equations were used as a tool in the evaluation of metallization technologies.
NASA Astrophysics Data System (ADS)
Jiang, Jiamin; Younis, Rami M.
2017-10-01
In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes. This may lead to cycles or divergence in the Newton iterations. Recently proposed methods address improved smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme called C1-PPU that, in addition to improving smoothness with respect to saturations and phase potentials, also improves the level of scalar nonlinearity and accuracy. C1-PPU involves a novel use of the flux limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current/counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme, and derive saturation and pressure estimates, which are used to prove the solution existence. Several numerical examples for two- and three-phase flows in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.
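A hedged sketch of the core idea, smoothness of the numerical flux across flow reversal: the hard donor-cell switch of plain PPU is replaced by a smooth blending of the upwind mobilities. The tanh blending and its width are assumptions for illustration, not the paper's actual C1-PPU construction.

```python
# Sketch: a differentiable upwind flux for a scalar two-point stencil.
# Plain PPU picks the donor cell with a hard switch on the potential
# difference; a smooth blend of width eps removes the kink, in the spirit
# of C1-PPU. The smoothing function is an assumption, not the paper's.
import numpy as np

def ppu_flux(dphi, mob_left, mob_right, trans):
    """Non-smooth reference: donor mobility switches abruptly at dphi = 0."""
    mob = mob_left if dphi >= 0 else mob_right
    return trans * mob * dphi

def smooth_flux(dphi, mob_left, mob_right, trans, eps=1e-3):
    """C1 blend: weight w -> 1 for strongly co-current, 0 for counter-current."""
    w = 0.5 * (1.0 + np.tanh(dphi / eps))
    mob = w * mob_left + (1.0 - w) * mob_right
    return trans * mob * dphi

# Near dphi = 0 the smooth flux and its derivative vary continuously,
# which is what helps Newton avoid cycling at flow reversals.
```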
NASA Astrophysics Data System (ADS)
Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.
2017-11-01
Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.
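A minimal sketch of the generation principle under stated assumptions: the junction is modeled as a two-state telegraph signal sampled at fixed intervals, and a simple XOR step whitens the stream; the flip probability and the whitening choice are illustrative, not the experimental parameters.

```python
# Sketch: random bits from a superparamagnetic two-state device.
# The junction is modeled as a telegraph signal with thermally driven flips;
# the flip probability and XOR whitening step are illustrative assumptions.
import numpy as np

def telegraph_bits(n, p_flip=0.3, rng=None):
    """Sample the device state at fixed intervals; consecutive reads correlate."""
    rng = rng or np.random.default_rng()
    state, out = 0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        if rng.random() < p_flip:
            state ^= 1
        out[i] = state
    return out

def whiten(bits):
    """XOR two independent halves of the stream to dilute bias and correlation."""
    half = len(bits) // 2
    return bits[:half] ^ bits[half:2 * half]

raw = telegraph_bits(10000)
print(whiten(raw).mean())  # should sit close to 0.5
```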
Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng
2013-06-01
The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only retains a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems of previous schemes and withstands possible attacks.
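To illustrate the "no verifier table" property claimed above, here is a generic challenge-response sketch in which the server derives each card's secret from a single master key on demand; this is a stand-in construction to show the idea, not the authors' actual protocol.

```python
# Sketch: a verifier-table-free challenge-response exchange. The server keeps
# one master key and derives each card's secret on the fly, so there is no
# per-user verifier table to steal. Generic construction for illustration,
# not the protocol proposed in the paper.
import hmac, hashlib, os

MASTER_KEY = os.urandom(32)  # held only by the EPR server

def issue_card(user_id: str) -> bytes:
    """Card personalization: derive the card secret from the master key."""
    return hmac.new(MASTER_KEY, user_id.encode(), hashlib.sha256).digest()

def card_response(card_secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(card_secret, challenge, hashlib.sha256).digest()

def server_verify(user_id: str, challenge: bytes, response: bytes) -> bool:
    expected = card_response(issue_card(user_id), challenge)
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
card = issue_card("nurse-0042")
assert server_verify("nurse-0042", challenge, card_response(card, challenge))
```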
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pern, F.J.; Glick, S.H.; Czanderna, A.W.
The stabilization effects of various superstrate materials against UV-induced EVA discoloration and the effect of photocurrent enhancement by white light-reflecting substrates are summarized. Based on the results, some alternative PV module encapsulation schemes are proposed for improved module performance, where the current or modified formulations of EVA encapsulants can still be used so that the typical processing tools and conditions need not be changed significantly. The schemes are designed in an attempt to eliminate or minimize the EVA yellow-browning and to increase the module power output. Four key approaches drawn from the studies of EVA discoloration and encapsulation are to employ: (1) UV-absorbing (filtering) glasses as superstrates to protect EVA from UV-induced discoloration, (2) gas-permeable polymer films as superstrates and/or substrates to prevent EVA yellowing by permitting photobleaching reactions, (3) modified EVA formulations, and (4) internal reflection of the light by white substrates. © 1996 American Institute of Physics.
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
Hydrogen production from solar energy
NASA Technical Reports Server (NTRS)
Eisenstadt, M. M.; Cox, K. E.
1975-01-01
Three alternatives for hydrogen production from solar energy have been analyzed on both efficiency and economic grounds. The analysis shows that the alternative using solar energy followed by thermochemical decomposition of water to produce hydrogen is the optimum one. The other schemes considered were the direct conversion of solar energy to electricity by silicon cells and water electrolysis, and the use of solar energy to power a vapor cycle followed by electrical generation and electrolysis. The capital cost of hydrogen via the thermochemical alternative was estimated at $575/kW of hydrogen output or $3.15/million Btu. Although this cost appears high when compared with hydrogen from other primary energy sources or from fossil fuel, environmental and social costs which favor solar energy may prove this scheme feasible in the future.
A high-efficiency high-power-generation system for automobiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naidu, M.; Boules, N.; Henry, R.
This paper presents a new scheme for the efficient generation of the high electric power demanded by future automobiles. The new system consists of a permanent-magnet (PM) alternator having high-energy MAGNEQUENCH (MQ) magnets and split winding, and a novel electronic voltage-regulation scheme. A proof-of-concept system, capable of providing 100/250 A (idle/cruising) at 14 V, has been built and tested in the laboratory with encouraging results. This high output is provided at 15-20 percentage points higher efficiencies than conventional automotive alternators, which translates into considerable fuel economy savings. The system is 8 dB quieter and has a rotor inertia of only 2/3 that of an equivalent production alternator, thus allowing for a belt drive without excessive slippage.
Approximate optimal guidance for the advanced launch system
NASA Technical Reports Server (NTRS)
Feeley, T. S.; Speyer, J. L.
1993-01-01
A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme over alternative numerical iterative optimization schemes because of the unreliable convergence properties of those iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.
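As a schematic of the expansion structure described above (with notation assumed here rather than taken from the report), the value function V is expanded in the small parameter ε that separates the primary dynamics f0 from the perturbation dynamics f1:

```latex
% Schematic only: notation assumed, not taken from the report.
% Dynamics split by the small parameter:  dx/dt = f_0(x,u) + \epsilon f_1(x,u).
\begin{align*}
  0 &= \min_u \Big[ \partial_t V + \nabla V \cdot \big( f_0(x,u)
        + \epsilon f_1(x,u) \big) \Big],
  \qquad V = V_0 + \epsilon V_1 + \epsilon^2 V_2 + \cdots \\
  \epsilon^0:&\quad 0 = \min_u \big[ \partial_t V_0 + \nabla V_0 \cdot f_0 \big]
  \quad \text{(closed form; always meets the terminal constraints)} \\
  \epsilon^1:&\quad 0 = \partial_t V_1 + \nabla V_1 \cdot f_0^\ast
        + \nabla V_0 \cdot f_1
  \quad \text{(linear first-order PDE; solvable by quadrature)}
\end{align*}
```

Here f0* denotes the primary dynamics evaluated along the zeroth-order optimal control, which is why each correction term reduces to quadratures along known trajectories rather than requiring iteration.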
Lattice design for the CEPC double ring scheme
NASA Astrophysics Data System (ADS)
Wang, Yiwei; Su, Feng; Bai, Sha; Zhang, Yuan; Bian, Tianjian; Wang, Dou; Yu, Chenghui; Gao, Jie
2018-01-01
A future Circular Electron Positron Collider (CEPC) has been proposed by China with the main goal of studying the Higgs boson. Its baseline design, chosen on the basis of its performance, is a double ring scheme; an alternative design is a partial double ring scheme which reduces the budget while maintaining an adequate performance. This paper will present the collider ring lattice design for the double ring scheme. The CEPC will also work as a W and a Z factory. For the W and Z modes, except in the RF region, compatible lattices were obtained by scaling down the magnet strength with energy.
Causation and Validation of Nursing Diagnoses: A Middle Range Theory.
de Oliveira Lopes, Marcos Venícios; da Silva, Viviane Martins; Herdman, T Heather
2017-01-01
To describe a predictive middle range theory (MRT) that provides a process for validation and incorporation of nursing diagnoses in clinical practice. Literature review. The MRT includes definitions, a pictorial scheme, propositions, causal relationships, and translation to nursing practice. The MRT can be a useful alternative for education, research, and translation of this knowledge into practice. This MRT can assist clinicians in understanding clinical reasoning, based on temporal logic and spectral interaction among elements of nursing classifications. In turn, this understanding will improve the use and accuracy of nursing diagnosis, which is a critical component of the nursing process that forms a basis for nursing practice standards worldwide. © 2015 NANDA International, Inc.
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires a manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
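A hedged sketch of the genetic search loop described above: candidate faces are feature vectors, and the witness's similarity judgment, simulated here by distance to a hidden target, serves as the fitness function. The population size, rates, and encoding are illustrative, not those of the FACES trials.

```python
# Sketch: genetic search driven by witness similarity ratings instead of a
# verbal description. Faces are stand-in feature vectors; the rating step is
# simulated by distance to a hidden target. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
DB = rng.random((12000, 32))          # encoded album: 12k faces, 32 features
TARGET = DB[777]                      # the face the witness actually saw

def witness_rating(candidates):
    """Stand-in for the witness: higher rating = more similar to the target."""
    return -np.linalg.norm(candidates - TARGET, axis=1)

def evolve(generations=30, pop_size=12, mutation=0.05):
    pop = DB[rng.choice(len(DB), pop_size)]
    for _ in range(generations):
        fit = witness_rating(pop)
        parents = pop[np.argsort(fit)[-pop_size // 2:]]    # keep the best half
        cross = rng.integers(0, len(parents), (pop_size, 2))
        mask = rng.random((pop_size, pop.shape[1])) < 0.5  # uniform crossover
        pop = np.where(mask, parents[cross[:, 0]], parents[cross[:, 1]])
        pop += mutation * rng.standard_normal(pop.shape)
    return pop[np.argmax(witness_rating(pop))]

best = evolve()
print(np.linalg.norm(best - TARGET))  # shrinks as generations pass
```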
An alternate lining scheme for solar ponds - Results of a liner test rig
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raman, P.; Kishore, V.V.N.
1990-01-01
Solar pond lining schemes consisting of combinations of clays and Low Density Polyethylene (LDPE) films have been experimentally evaluated by means of a Solar Pond Liner Test Rig. Results indicate that LDPE film sandwiched between two layers of clay can be effectively used for lining solar ponds.
An Innovative Approach to Scheme Learning Map Considering Tradeoff Multiple Objectives
ERIC Educational Resources Information Center
Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping
2016-01-01
An important issue in personalized learning is to provide learners with customized learning according to their learning characteristics. This paper focuses attention on scheming the learning map as follows. The learning goal can be achieved via different pathways based on alternative materials, which have the relationships of prerequisite, dependence,…
Analysis of a Teacher's Pedagogical Arguments Using Toulmin's Model and Argumentation Schemes
ERIC Educational Resources Information Center
Metaxas, N.; Potari, D.; Zachariades, T.
2016-01-01
In this article, we elaborate methodologies to study the argumentation speech of a teacher involved in argumentative activities. The standard tool of analysis of teachers' argumentation concerning pedagogical matters is Toulmin's model. The theory of argumentation schemes offers an alternative perspective on the analysis of arguments. We propose…
Problems Associated with Grid Convergence of Functionals
NASA Technical Reports Server (NTRS)
Salas, Manuel D.; Atkins, Harld L.
2008-01-01
The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order-property of a scheme are presented. The problem is studied within the context of the inviscid supersonic flow over a blunt body; however, the problem and solutions presented are not unique to this example.
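For reference, the three-grid estimate that such studies typically use to deduce the observed order, and that quadrature error in evaluating the functional can corrupt, is sketched below; the sample values are illustrative.

```python
# Sketch: the standard three-grid estimate of observed order for a functional J
# computed on grids of spacing h, h/2, h/4. The abstract's point is that
# quadrature error in evaluating J can corrupt exactly this estimate.
import math

def observed_order(J_h, J_h2, J_h4, refinement=2.0):
    """p = log(|J_h - J_{h/2}| / |J_{h/2} - J_{h/4}|) / log(r)."""
    return math.log(abs(J_h - J_h2) / abs(J_h2 - J_h4)) / math.log(refinement)

# A clean second-order sequence recovers p = 2:
print(observed_order(1.0400, 1.0100, 1.0025))  # -> 2.0
```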
Compiler-directed cache management in multiprocessors
NASA Technical Reports Server (NTRS)
Cheong, Hoichi; Veidenbaum, Alexander V.
1990-01-01
The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.
Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle
NASA Astrophysics Data System (ADS)
Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai
2018-03-01
Deck assembly is a critical quality control point in the final satellite assembly process, and cable extrusion and structure collision problems during assembly directly affect the development quality and progress of the satellite. Aimed at the problems existing in the deck assembly process, an assembly scheme for the satellite deck based on the robot flexibility control principle is proposed in this paper. The scheme is introduced first; secondly, the key technologies of end force perception and flexible docking control in the scheme are studied; then, the implementation process of the assembly scheme for the satellite deck is described in detail; finally, an actual application case of the assembly scheme is given. The results show that, compared with the traditional assembly scheme, the assembly scheme for the satellite deck based on the robot flexibility control principle has obvious advantages in terms of work efficiency, reliability, and universality.
He, Alex Jingwei; Wu, Shaolong
2017-12-01
China's remarkable progress in building a comprehensive social health insurance (SHI) system was swift and impressive. Yet the country's decentralized and incremental approach towards universal coverage has created a fragmented SHI system under which a series of structural deficiencies have emerged with negative impacts. First, contingent on local conditions and financing capacity, benefit packages vary considerably across schemes, leading to systematic inequity. Second, the existence of multiple schemes, complicated by massive migration, has resulted in weak portability of SHI, creating further barriers to access. Third, many individuals are enrolled on multiple schemes, which causes inefficient use of government subsidies. Moral hazard and adverse selection are not effectively managed. The Chinese government announced its blueprint for integrating the urban and rural resident schemes in early 2016, paving the way for the ultimate consolidation of all SHI schemes and equal benefits for all. This article proposes three policy alternatives to inform the consolidation: (1) a single-pool system at the prefectural level with significant government subsidies, (2) a dual-pool system at the prefectural level with risk-equalization mechanisms, and (3) a household approach without merging existing pools. Vertical integration to the provincial level is unlikely to happen in the near future. Two caveats are raised to inform this transition towards universal health coverage.
Methodological aspects of fuel performance system analysis at raw hydrocarbon processing plants
NASA Astrophysics Data System (ADS)
Kulbjakina, A. V.; Dolotovskij, I. V.
2018-01-01
The article discusses the methodological aspects of fuel performance system analysis at raw hydrocarbon (RH) processing plants. Modern RH processing facilities are major consumers of energy resources (ER) for their own needs. Reducing ER consumption, including fuel consumption, and developing a rational fuel system structure are complex and relevant scientific tasks that can only be accomplished using system analysis and complex system synthesis. In accordance with the principles of system analysis, the hierarchical structure of the fuel system, the block scheme for the synthesis of the most efficient alternative of the fuel system using mathematical models, and the set of performance criteria have been developed in the main stages of the study. Results from the introduction of specific engineering solutions to develop their own energy supply sources for RH processing facilities are provided.
Differential diagnosis of suspected multiple sclerosis: a consensus approach
Miller, DH; Weinshenker, BG; Filippi, M; Banwell, BL; Cohen, JA; Freedman, MS; Galetta, SL; Hutchinson, M; Johnson, RT; Kappos, L; Kira, J; Lublin, FD; McFarland, HF; Montalban, X; Panitch, H; Richert, JR; Reingold, SC; Polman, CH
2008-01-01
Background and objectives: Diagnosis of multiple sclerosis (MS) requires exclusion of diseases that could better explain the clinical and paraclinical findings. A systematic process for exclusion of alternative diagnoses has not been defined. An International Panel of MS experts developed consensus perspectives on MS differential diagnosis. Methods: Using available literature and consensus, we developed guidelines for MS differential diagnosis, focusing on exclusion of potential MS mimics, diagnosis of common initial isolated clinical syndromes, and differentiating between MS and non-MS idiopathic inflammatory demyelinating diseases. Results: We present recommendations for 1) clinical and paraclinical red flags suggesting alternative diagnoses to MS; 2) more precise definition of "clinically isolated syndromes" (CIS), often the first presentations of MS or its alternatives; 3) algorithms for diagnosis of three common CISs related to MS in the optic nerves, brainstem, and spinal cord; and 4) a classification scheme and diagnosis criteria for idiopathic inflammatory demyelinating disorders of the central nervous system. Conclusions: Differential diagnosis leading to MS or alternatives is complex and a strong evidence base is lacking. Consensus-determined guidelines provide a practical path for diagnosis and will be useful for the non-MS specialist neurologist. Recommendations are made for future research to validate and support these guidelines. Guidance on the differential diagnosis process when MS is under consideration will enhance diagnostic accuracy and precision. PMID:18805839
DOE Office of Scientific and Technical Information (OSTI.GOV)
RT Hallen; SA Bryan; FV Hoopes
A number of Hanford tanks received waste containing organic complexants, which increase the solubility of Sr-90 and transuranic (TRU) elements. Wastes from these tanks require additional pretreatment to remove Sr-90 and TRU for immobilization as low activity waste (Waste Envelope C). The baseline pretreatment process for Sr/TRU removal was isotopic exchange and precipitation with added strontium and iron. However, studies at both Battelle and the Savannah River Technology Center (SRTC) have shown that the Sr/Fe precipitates were very difficult to filter. This was a result of the formation of poor-filtering iron solids. An alternate treatment technology was needed for Sr/TRU removal. Battelle had demonstrated that permanganate treatment was effective for decontaminating waste samples from Hanford Tank SY-101 and proposed that permanganate be examined as an alternative Sr/TRU removal scheme for complexant-containing tank wastes such as AN-107. Battelle conducted preliminary small-scale experiments to determine the effectiveness of permanganate treatment with AN-107 waste samples that had been archived at Battelle from earlier studies. Three series of experiments were performed to evaluate conditions that provided adequate Sr/TRU decontamination using permanganate treatment. The final series included experiments with actual AN-107 diluted feed that had been obtained specifically for BNFL process testing. Conditions that provided adequate Sr/TRU decontamination were identified. A free hydroxide concentration of 0.5M provided adequate decontamination with added Sr of 0.05M and permanganate of 0.03M for archived AN-107. The best results were obtained when reagents were added in the sequence Sr followed by permanganate with the waste at ambient temperature. The reaction conditions for Sr/TRU removal will be further evaluated with a 1-L batch of archived AN-107, which will provide a large enough volume of waste to conduct crossflow filtration studies (Hallen et al. 2000a).
VTOL shipboard letdown guidance system analysis
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Karmali, M. S.
1983-01-01
Alternative letdown guidance strategies are examined for landing of a VTOL aircraft onboard a small aviation ship under adverse environmental conditions. Off-line computer simulation of the shipboard landing task is utilized for assessing the relative merits of the proposed guidance schemes. The touchdown performance of a nominal constant-rate-of-descent (CROD) letdown strategy serves as a benchmark for ranking the performance of the alternative letdown schemes. Analysis of ship motion time histories indicates the existence of an alternating sequence of quiescent and rough motions, called lulls and swells. A real-time algorithm for lull/swell classification based upon ship motion pattern features is developed. The classification algorithm is used to command a go/no-go signal to indicate the initiation and termination of an acceptable landing window. Simulation results show that such a go/no-go pattern-based letdown guidance strategy improves touchdown performance.
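A hedged sketch of the go/no-go idea: a windowed RMS of one ship-motion channel, with hysteresis thresholds, opens a landing window during lulls and closes it when a swell builds. The chosen feature, window length, and thresholds are illustrative stand-ins for the paper's pattern features.

```python
# Sketch: a go/no-go landing window from ship-motion energy. Windowed RMS of
# heave rate with hysteresis thresholds separates lulls from swells; the
# thresholds and window length are illustrative, not the paper's features.
import numpy as np

def go_no_go(heave_rate, fs=10.0, window_s=10.0, go_below=0.3, abort_above=0.5):
    """heave_rate: np.ndarray of samples. Returns a boolean go-signal per sample."""
    w = int(window_s * fs)
    rms = np.sqrt(np.convolve(heave_rate**2, np.ones(w) / w, mode="same"))
    go, signal = False, np.zeros(len(rms), dtype=bool)
    for i, r in enumerate(rms):
        if not go and r < go_below:      # quiescent: open the landing window
            go = True
        elif go and r > abort_above:     # swell onset: close it
            go = False
        signal[i] = go
    return signal
```

The hysteresis gap between the two thresholds keeps the window from chattering open and closed near the lull/swell boundary.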
Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication
NASA Astrophysics Data System (ADS)
Niaz, M. T.; Imdad, F.; Kim, H. S.
2017-01-01
An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed which generate codes where the number of bits representing one is the same as the number of bits representing zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing the flickering in VLC. The performance of FD-OOK is analysed with respect to two parameters: the spectral efficiency and the power requirement.
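As a hedged illustration of the balanced-code property, the sketch below expands each data bit into a complementary chip pair, so every frame is exactly half ones regardless of payload; this Manchester-style mapping shows the fixed 50% dimming idea and is not necessarily the paper's actual encoder.

```python
# Sketch: a balanced OOK line code at a fixed 50% dimming level. Each data bit
# is expanded to a complementary pair, so every frame carries equal ones and
# zeros regardless of the payload. Manchester-style mapping for illustration;
# the FD-OOK encoder in the paper may differ.
def fd_encode(bits):
    out = []
    for b in bits:
        out += [b, 1 - b]          # 1 -> 10, 0 -> 01: always one "on" chip
    return out

def fd_decode(chips):
    return [chips[i] for i in range(0, len(chips), 2)]

frame = fd_encode([1, 0, 0, 1, 1])
assert sum(frame) * 2 == len(frame)   # exactly 50% of chips are ones
assert fd_decode(frame) == [1, 0, 0, 1, 1]
```

The cost of the guaranteed 50% duty cycle is spectral efficiency: two chips are spent per data bit, which is the trade-off the abstract's analysis quantifies.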
31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.
Code of Federal Regulations, 2011 CFR
2011-07-01
Section 592.301, Money and Finance: Treasury. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...
Fused man-machine classification schemes to enhance diagnosis of breast microcalcifications
NASA Astrophysics Data System (ADS)
Andreadis, Ioannis; Sevastianos, Chatzistergos; George, Spyrou; Konstantina, Nikita
2017-11-01
Computer-aided diagnosis (CADx) approaches are developed towards the effective discrimination between benign and malignant clusters of microcalcifications. Different sources of information are exploited, such as features extracted from the image analysis of the region of interest, features related to the location of the cluster inside the breast, age of the patient, and descriptors provided by the radiologists while performing their diagnostic task. A series of different CADx schemes are implemented, each of which uses a different category of features and adopts a variety of machine learning algorithms and alternative image processing techniques. A novel framework is introduced where these independent diagnostic components are properly combined according to features critical to a radiologist, in an attempt to identify the most appropriate CADx schemes for the case under consideration. An open-access database (the Digital Database for Screening Mammography (DDSM)) has been elaborated to construct a large dataset with cases of varying subtlety, in order to ensure the development of schemes with high generalization ability, as well as extensive evaluation of their performance. The obtained results indicate that the proposed framework succeeds in improving the diagnostic procedure, as the achieved overall classification performance outperforms all the independent single diagnostic components, as well as the radiologists that assessed the same cases, in terms of accuracy, sensitivity, specificity, and area under the curve following receiver operating characteristic analysis.
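A minimal sketch of the fusion step under stated assumptions: each diagnostic component emits a malignancy probability and a case-dependent convex weighting combines them; the component list and the weights are hypothetical, not the framework's learned combination rule.

```python
# Sketch: fusing independent CADx components with case-dependent weights.
# Each component emits a malignancy probability; the weighting decides which
# components to trust for this case. Components and weights are hypothetical.
import numpy as np

def fuse(component_probs, weights):
    """Convex combination of component outputs -> fused malignancy score."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(component_probs, w / w.sum()))

# e.g. image-analysis CADx, location-based CADx, descriptor-based CADx
case_probs = [0.72, 0.55, 0.81]
# a case with poor image quality might down-weight the image component:
print(fuse(case_probs, weights=[0.2, 0.3, 0.5]))
```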
The construction of causal schemes: learning mechanisms at the knowledge level.
diSessa, Andrea A
2014-06-01
This work uses microgenetic study of classroom learning to illuminate (1) the role of pre-instructional student knowledge in the construction of normative scientific knowledge, and (2) the learning mechanisms that drive change. Three enactments of an instructional sequence designed to lead to a scientific understanding of thermal equilibration are used as data sources. Only data from a scaffolded student inquiry preceding introduction of a normative model were used. Hence, the study involves nearly autonomous student learning. In two classes, students developed stable and socially shared explanations ("causal schemes") for understanding thermal equilibration. One case resulted in a near-normative understanding, while the other resulted in a non-normative "alternative conception." The near-normative case seems to be a particularly clear example wherein the constructed causal scheme is a composition of previously documented naïve conceptions. Detailed prior description of these naive elements allows a much better than usual view of the corresponding details of change during construction of the new scheme. A list of candidate mechanisms that can account for observed change is presented. The non-normative construction seems also to be a composition, albeit of a different structural form, using a different (although similar) set of naïve elements. This article provides one of very few high-resolution process analyses showing the productive use of naïve knowledge in learning. © 2014 Cognitive Science Society, Inc.
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.
Ferreira, Miguel; Roma, Nuno; Russo, Luis M S
2014-05-30
HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.
Sakai, K; Watanabe, A; Kogi, K
1993-01-01
The improvement of an irregular three-shift system with anti-clockwise rotation of workers at a disabled persons' facility covering 42 h a week was a subject for management-labour debate. Workers were complaining of physical fatigue, a high prevalence of low back pain, sleep shortages associated with short inter-shift intervals, and irregular holidays. With the co-operation of trade union members, an educational and intervention programme was designed to analyse, plan, and implement improved shift rotation schemes. The programme consisted of (a) a group study on the existing system and its effects on health and working life; (b) joint planning of potential schemes; (c) communication and feedback; (d) testing and evaluation; and (e) agreement on an improved system. The group study was undertaken by means of time study, questionnaire and physiological methods, and the results were jointly discussed. This led to the planning of alternative shift schemes incorporating more regular, clockwise rotation. It was agreed to stage a trial period with a view to shorter working hours. This experience indicated the importance of a stepwise intervention strategy with frequent dialogues and a participatory process focusing on the broad range of working life and health issues.
NASA Astrophysics Data System (ADS)
Battisti, F.; Carli, M.; Neri, A.
2011-03-01
The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice proves to be inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform (the Integer Fibonacci-Haar transform), of a secure cryptographic scheme, and of a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image; ii) the encryption of annotated images without causing loss in the embedded information; and iii) due to the complete reversibility of the process, the recovery of the original image after the mark removal. Experimental results show the effectiveness of the proposed scheme.
NASA Astrophysics Data System (ADS)
Montane, F.; Fox, A. M.; Arellano, A. F.; Alexander, M. R.; Moore, D. J.
2016-12-01
Carbon (C) allocation to different plant tissues (leaves, stem and roots) remains a central challenge for understanding the global C cycle, as it determines C residence time. We used a diverse set of observations (AmeriFlux eddy covariance towers, biomass estimates from tree-ring data, and Leaf Area Index measurements) to compare C fluxes, pools, and Leaf Area Index (LAI) data with the Community Land Model (CLM). We ran CLM for seven temperate forests in North America (including evergreen and deciduous sites) between 1980 and 2013 using different C allocation schemes: i) the standard C allocation scheme in CLM, which allocates C to the stem and leaves as a dynamic function of annual net primary productivity (NPP); ii) two fixed C allocation schemes, one representative of evergreen and the other of deciduous forests, based on Luyssaert et al. 2007; iii) an alternative C allocation scheme, which allocated C to stem and leaves, and to stem and coarse roots, as a dynamic function of annual NPP, based on Litton et al. 2007. At our sites CLM usually overestimated gross primary production and ecosystem respiration, and underestimated net ecosystem exchange. Initial aboveground biomass in 1980 was largely overestimated for deciduous forests, whereas aboveground biomass accumulation between 1980 and 2011 was highly underestimated for both evergreen and deciduous sites because the turnover rate at the sites was lower than the one used in the model. CLM overestimated LAI in both evergreen and deciduous sites because the Leaf C-LAI relationship in the model did not match the observed Leaf C-LAI relationship at our sites. Although the different C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, one of the alternative C allocation schemes used (iii) gave more realistic stem C/leaf C ratios and greatly reduced CLM's overestimation of initial aboveground biomass and accumulated aboveground NPP. Our results suggest using different C allocation schemes for evergreen and deciduous forests. It is crucial to improve CLM in the near future to minimize data-model mismatches, and to address some of the current model structural errors and parameter uncertainties.
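To make the contrast between the schemes concrete, the sketch below implements a fixed-fraction split and a dynamic split whose stem share grows with annual NPP; all coefficients are illustrative and are not the CLM, Luyssaert, or Litton parameter values.

```python
# Sketch: the two flavors of allocation compared in the study. Fixed fractions
# split NPP the same way every year; a dynamic scheme makes the stem fraction
# a function of annual NPP. All coefficients here are illustrative only.
def allocate_fixed(npp, f_leaf=0.3, f_stem=0.4, f_root=0.3):
    return {"leaf": f_leaf * npp, "stem": f_stem * npp, "root": f_root * npp}

def allocate_dynamic(npp, npp_ref=600.0):
    """Stem share grows with productivity; leaf and root split the remainder."""
    f_stem = min(0.6, 0.2 + 0.3 * npp / npp_ref)
    rest = 1.0 - f_stem
    return {"leaf": 0.5 * rest * npp, "stem": f_stem * npp, "root": 0.5 * rest * npp}

for npp in (300.0, 600.0, 900.0):      # gC m-2 yr-1
    print(npp, allocate_dynamic(npp))
```

Because the stem pool turns over far more slowly than leaves or fine roots, even modest differences in the stem fraction compound into the large long-term biomass differences the abstract reports.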
γ5 in the four-dimensional helicity scheme
NASA Astrophysics Data System (ADS)
Gnendiger, C.; Signer, A.
2018-05-01
We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (FDH). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (FDF) of the FDH scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in FDH. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.
Combining image-processing and image compression schemes
NASA Technical Reports Server (NTRS)
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.
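The pyramid part of such a combination can be sketched in a few lines, assuming scipy is available; the pyramid depth, smoothing filter, and the detail "gain" standing in for enhancement are illustrative choices, not those of the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def laplacian_pyramid(img, levels=3):
        """Build a Laplacian pyramid; each level stores detail lost by downsampling."""
        pyr, cur = [], img.astype(float)
        for _ in range(levels):
            low = gaussian_filter(cur, sigma=1.0)
            down = low[::2, ::2]
            up = zoom(down, 2, order=1)
            pyr.append(cur - up)          # band-pass detail to be coded
            cur = down
        pyr.append(cur)                   # coarse residual
        return pyr

    def reconstruct(pyr, gain=1.5):
        """Reconstruct while boosting detail bands, a crude stand-in for enhancement."""
        cur = pyr[-1]
        for detail in reversed(pyr[:-1]):
            cur = zoom(cur, 2, order=1) + gain * detail
        return cur

    img = np.random.rand(256, 256)        # placeholder image
    approx = reconstruct(laplacian_pyramid(img))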
What Is the Value of Value-Based Purchasing?
Tanenbaum, Sandra J
2016-10-01
Value-based purchasing (VBP) is a widely favored strategy for improving the US health care system. The meaning of value that predominates in VBP schemes is (1) conformance to selected process and/or outcome metrics, and sometimes (2) such conformance at the lowest possible cost. In other words, VBP schemes choose some number of "quality indicators" and financially incent providers to meet them (and not others). Process measures are usually based on clinical science that cannot determine the effects of a process on individual patients or patients with comorbidities, and do not necessarily measure effects that patients value; additionally, there is no provision for different patients valuing different things. Proximate outcome measures may or may not predict distal ones, and the more distal the outcome, the less reliably it can be attributed to health care. Outcome measures may be quite rudimentary, such as mortality rates, or highly contestable: survival or function after prostate surgery? When cost is an element of value-based purchasing, it is the cost to the value-based payer and not to other payers or patients' families. The greatest value of value-based purchasing may not be to patients or even payers, but to policy makers seeking a morally justifiable alternative to politically contested regulatory policies. Copyright © 2016 by Duke University Press.
A gas kinetic scheme for hybrid simulation of partially rarefied flows
NASA Astrophysics Data System (ADS)
Colonia, S.; Steijl, R.; Barakos, G.
2017-06-01
Approaches to predict flow fields that display rarefaction effects incur a cost in computational time and memory considerably higher than methods commonly employed for continuum flows. For this reason, to simulate flow fields where continuum and rarefied regimes coexist, hybrid techniques have been introduced. In the present work, analytically defined gas-kinetic schemes based on the Shakhov and Rykov models, for monoatomic and diatomic gas flows respectively, are proposed and evaluated for use in the context of hybrid simulations. This should reduce the region where more expensive methods are needed by extending the validity of the continuum formulation. Moreover, since for high-speed rarefied gas flows it is necessary to take into account the nonequilibrium among the internal degrees of freedom, the extension of the approach to diatomic gas models including the rotational relaxation process is a mandatory first step towards realistic simulations. Compared to previous works of Xu and coworkers, the presented scheme is defined directly on the basis of kinetic models which involve a Prandtl number correction. Moreover, the methods are defined fully analytically instead of making use of Taylor expansions for the evaluation of the required derivatives. The scheme has been tested on various test cases and Mach numbers, producing reliable predictions in agreement with other approaches for near-continuum flows. Finally, the performance of the scheme in terms of memory and computational time, compared to discrete velocity methods, makes it a compelling alternative to more complex methods for hybrid simulations of weakly rarefied flows.
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Li, Deng; Lu, Xiaoming; Cheng, Xinyi; Wang, Liwei
2014-10-01
Continuous crystal-based positron emission tomography (PET) detectors could be an ideal alternative to current high-resolution pixelated PET detectors if the issues of high-performance γ-interaction position estimation and its real-time implementation are solved. Unfortunately, existing position estimators are not very feasible for implementation on field-programmable gate arrays (FPGAs). In this paper, we propose a new self-organizing map neural network-based nearest neighbor (SOM-NN) positioning scheme aiming not only at providing high performance, but also at being realistic for FPGA implementation. Benefiting from the SOM feature mapping mechanism, the large set of input reference events at each calibration position is approximated by a small set of prototypes, and the computation of the nearest-neighbor search for unknown events is largely reduced. Using our experimental data, the scheme was evaluated, optimized and compared with the smoothed k-NN method. The spatial resolutions (full-width-at-half-maximum, FWHM) of the two methods, averaged over the center axis of the detector, were 1.87 ±0.17 mm and 1.92 ±0.09 mm, respectively. The test results show that the SOM-NN scheme has positioning performance equivalent to the smoothed k-NN method, but its amount of computation is only about one-tenth that of the smoothed k-NN method. In addition, the algorithm structure of the SOM-NN scheme is more feasible for implementation on FPGA. It has the potential to realize real-time position estimation on an FPGA with a high event-processing throughput.
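The prototype-then-nearest-neighbor idea can be sketched as follows. This is a loose sketch, not the authors' algorithm: a k-means-style reduction stands in for the SOM feature mapping, and all sizes are invented.

    import numpy as np

    def reduce_prototypes(events, k=8, iters=10, rng=np.random):
        """Compress the reference events at one calibration position into k prototypes.
        A k-means-style reduction stands in here for the SOM training stage."""
        protos = events[rng.choice(len(events), k, replace=False)].astype(float)
        for _ in range(iters):
            labels = np.argmin(((events[:, None] - protos[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    protos[j] = events[labels == j].mean(axis=0)
        return protos

    def estimate_position(event, protos_by_pos):
        """Assign an unknown event the calibration position of its nearest prototype;
        the NN search runs over k prototypes per position, not the full reference set."""
        best, best_d = None, np.inf
        for pos, protos in protos_by_pos.items():
            d = ((protos - event) ** 2).sum(axis=1).min()
            if d < best_d:
                best, best_d = pos, d
        return best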
An experiment-based comparative study of fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing
1989-01-01
An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cart-pole balancing problem in real time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
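A minimal sketch of a rule-based approximate-reasoning controller of this kind is given below; the membership functions, rule table, and input scaling are invented for illustration and are not the POLE implementation.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

    def fuzzy_force(theta, theta_dot):
        """Map pole angle and angular rate (pre-scaled to [-1, 1]) to a force."""
        labels = {"N": (-1.0, -0.5, 0.0), "Z": (-0.5, 0.0, 0.5), "P": (0.0, 0.5, 1.0)}
        force_of = {("N", "N"): -10, ("N", "Z"): -5, ("N", "P"): 0,
                    ("Z", "N"): -5,  ("Z", "Z"): 0,  ("Z", "P"): 5,
                    ("P", "N"): 0,   ("P", "Z"): 5,  ("P", "P"): 10}
        num = den = 0.0
        for la, pa in labels.items():
            for lb, pb in labels.items():
                w = min(tri(theta, *pa), tri(theta_dot, *pb))  # rule firing strength
                num += w * force_of[(la, lb)]
                den += w
        return num / den if den else 0.0   # centroid-style defuzzification

    print(fuzzy_force(0.3, -0.1))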
A study of alternative schemes for extrapolation of secular variation at observatories
Alldredge, L.R.
1976-01-01
The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work.
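The kind of comparison being made can be sketched as follows; the synthetic data, the 11-year period, and the single quasiperiodic term are invented for illustration.

    import numpy as np

    t = np.arange(1950.0, 1976.0)                      # observatory years (toy data)
    y = 10.0 + 0.8 * (t - 1950) + 3.0 * np.sin(2 * np.pi * (t - 1950) / 11.0)

    # Linear model: y = a + b*t
    A_lin = np.vstack([np.ones_like(t), t]).T
    coef_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)

    # Linear plus one quasiperiodic term of assumed period T (here 11 yr)
    T = 11.0
    A_per = np.vstack([np.ones_like(t), t,
                       np.sin(2 * np.pi * t / T), np.cos(2 * np.pi * t / T)]).T
    coef_per, *_ = np.linalg.lstsq(A_per, y, rcond=None)

    tf = 1980.0
    lin = coef_lin @ [1.0, tf]
    per = coef_per @ [1.0, tf, np.sin(2 * np.pi * tf / T), np.cos(2 * np.pi * tf / T)]
    print(lin, per)   # the periodic term shifts the extrapolated value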
Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach
NASA Astrophysics Data System (ADS)
Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun
2015-02-01
The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low-rank matrix factorization of the unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme, each subproblem of which is convex and is solved efficiently with the alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
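A much-simplified sketch of the alternating idea is given below, using plain proximal gradient steps with soft-thresholding in place of the paper's ADMM subproblem solves; all sizes, step lengths, and penalties are invented.

    import numpy as np

    def soft(x, lam):
        """Soft-thresholding, the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def semf_like(Y, rank=3, lam=0.1, step=1e-3, iters=500, rng=np.random):
        """Alternately update sparse factors A, B so that Y ~ A @ B (toy version)."""
        m, n = Y.shape
        A, B = rng.rand(m, rank), rng.rand(rank, n)
        for _ in range(iters):
            R = A @ B - Y
            A = soft(A - step * R @ B.T, step * lam)   # gradient step + prox on A
            R = A @ B - Y
            B = soft(B - step * A.T @ R, step * lam)   # gradient step + prox on B
        return A, B

    Y = np.random.rand(32, 20)        # stand-in for projection-domain data
    A, B = semf_like(Y)
    print(np.linalg.norm(A @ B - Y) / np.linalg.norm(Y))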
Kang, Hyunchul
2015-01-01
We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
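The count-first ordering can be sketched with a toy Bloom filter; the hash construction, filter size, and threshold handling below are invented and far simpler than the paper's optimal sequence of fragment semijoins.

    import hashlib

    class Bloom:
        """Toy Bloom filter used as a synopsis of the joining attribute values."""
        def __init__(self, m=1024, k=3):
            self.m, self.k, self.bits = m, k, bytearray(m)
        def _idx(self, v):
            h = hashlib.sha256(str(v).encode()).digest()
            return [int.from_bytes(h[4*i:4*i+4], "big") % self.m for i in range(self.k)]
        def add(self, v):
            for i in self._idx(v):
                self.bits[i] = 1
        def may_contain(self, v):
            return all(self.bits[i] for i in self._idx(v))

    def iceberg_candidates(counts_r, bloom_s, threshold):
        """Cardinality first: only values frequent enough locally can reach the
        iceberg threshold; the Bloom synopsis of S then filters non-joinable ones."""
        return [v for v, c in counts_r.items()
                if c >= threshold and bloom_s.may_contain(v)]

    bloom = Bloom()
    for v in ["a", "b", "c"]:
        bloom.add(v)
    print(iceberg_candidates({"a": 5, "d": 9, "b": 1}, bloom, threshold=3))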
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
Simulations of Merging Helion Bunches on the AGS Injection Porch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, C. J.
During the setup of helions for the FY2014 RHIC run it was discovered that the standard scheme for merging bunches on the AGS injection porch required an injection kicker pulse shorter than what was available. To overcome this difficulty, K. Zeno proposed and developed an interesting and unusual alternative which uses RF harmonic numbers 12, 4, 2 (rather than the standard 8, 4, 2) to merge 8 helion bunches into 2. In this note we carry out simulations that illustrate how the alternative scheme works and how it compares with the standard scheme. This is done in Sections 13 and 14. A scheme in which 6 bunches are merged into 1 is simulated in Section 15. This may be useful if more helions per merged bunch are needed in future runs. General formulae for the simulations are given in Sections 9 through 12. For completeness, Sections 1 through 8 give a derivation of the turn-by-turn equations of longitudinal motion at constant magnetic field. The derivation is based on the work of MacLachlan. The reader may wish to skip over these Sections and start with Section 9.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
Geometric reduction of dynamical nonlocality in nanoscale quantum circuits.
Strambini, E; Makarenko, K S; Abulizi, G; de Jong, M P; van der Wiel, W G
2016-01-06
Nonlocality is a key feature discriminating quantum and classical physics. Quantum-interference phenomena, such as Young's double slit experiment, are one of the clearest manifestations of nonlocality, recently addressed as dynamical to specify its origin in the quantum equations of motion. It is well known that loss of dynamical nonlocality can occur due to (partial) collapse of the wavefunction due to a measurement, such as which-path detection. However, alternative mechanisms affecting dynamical nonlocality have hardly been considered, although of crucial importance in many schemes for quantum information processing. Here, we present a fundamentally different pathway of losing dynamical nonlocality, demonstrating that the detailed geometry of the detection scheme is crucial to preserve nonlocality. By means of a solid-state quantum-interference experiment we quantify this effect in a diffusive system. We show that interference is not only affected by decoherence, but also by a loss of dynamical nonlocality based on a local reduction of the number of quantum conduction channels of the interferometer. With our measurements and theoretical model we demonstrate that this mechanism is an intrinsic property of quantum dynamics. Understanding the geometrical constraints protecting nonlocality is crucial when designing quantum networks for quantum information processing.
Tua, Camilla; Nessi, Simone; Rigamonti, Lucia; Dolci, Giovanni; Grosso, Mario
2017-04-01
In recent years, alternative food supply chains based on short-distance production and delivery have been promoted as more environmentally friendly than the traditional retailing system. An example is the supply of seasonal, and possibly locally grown, fruit and vegetables directly to customers inside a returnable crate (the so-called 'box scheme'). In addition to other claimed environmental and economic advantages, the box scheme is often listed among packaging waste prevention measures. To check whether such a claim is soundly based, a life cycle assessment was carried out to verify the real environmental effectiveness of the box scheme in comparison with the Italian traditional distribution. The study focused on two reference products, carrots and apples, which are available in the crate all year round. A box scheme operated in Italy was compared with traditional scenarios in which the product is distributed loose or packaged at the large-scale retail trade. Packaging waste generation, 13 impact indicators covering environment and human health, and energy consumption were calculated. Results show that the analysed box scheme, as currently managed, cannot be considered a packaging waste prevention measure when compared with the traditional distribution of fruit and vegetables. The weaknesses of the alternative system were identified and some recommendations were given to improve its environmental performance.
Sun, Mei; Shen, Jay J; Li, Chengyue; Cochran, Christopher; Wang, Ying; Chen, Fei; Li, Pingping; Lu, Jun; Chang, Fengshui; Li, Xiaohong; Hao, Mo
2016-08-22
This study aimed to measure the poverty head count ratio and poverty gap of rural Yanbian in order to examine whether China's New Rural Cooperative Medical Scheme has alleviated its medical impoverishment and to compare the results of this alternative approach with those of a World Bank approach. This cross-sectional study was based on a stratified random sample survey of 1,987 households and 6,135 individuals conducted in 2008 across eight counties in Yanbian Korean Autonomous Prefecture, Jilin province, China. A new approach was developed to define and identify medical impoverishment. The poverty head count ratio, relative poverty gap, and average poverty gap were used to measure medical impoverishment. Changes in medical impoverishment after the reimbursement under the New Rural Cooperative Medical Scheme were also examined. The government-run New Rural Cooperative Medical Scheme reduced the number of medically impoverished households by 24.6%, as well as the relative and average gaps by 37.3% and 38.9%, respectively. China's New Rural Cooperative Medical Scheme has certain positive but limited effects on alleviating medical impoverishment in rural Yanbian regardless of how medical impoverishment is defined and measured. More governmental and private-sector efforts should therefore be encouraged to further improve the system in terms of financing, operation, and reimbursement policy.
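The three measures are straightforward to compute. The sketch below uses invented incomes, out-of-pocket spending, a hypothetical poverty line, and an assumed 40% reimbursement rate; none of these figures come from the study.

    import numpy as np

    def poverty_measures(income, line):
        """Head count ratio, relative gap, and average gap for a poverty line."""
        poor = income < line
        head_count = poor.mean()                                   # share of poor
        rel_gap = ((line - income[poor]) / line).mean() if poor.any() else 0.0
        avg_gap = (line - income[poor]).mean() if poor.any() else 0.0
        return head_count, rel_gap, avg_gap

    line = 1200.0                                    # hypothetical poverty line
    net_income = np.array([900., 1500., 1100., 2000., 700.])
    oop_health = np.array([300., 100., 400., 50., 200.])   # out-of-pocket spending
    reimb = 0.4 * oop_health                         # assumed reimbursement rate

    before = poverty_measures(net_income - oop_health, line)
    after = poverty_measures(net_income - oop_health + reimb, line)
    print(before, after)   # the scheme's effect is the change in each measure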
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III
1988-01-01
Model problem development and analysis continue with the Alternate Resonance Tuning (ART) concept. The various topics described are presently at different stages of completion: investigation of the effectiveness of the ART concept under an external propagating pressure field associated with propeller passage by the fuselage; analysis of ART performance with a double panel wall mounted in a flexible frame model; development of a data fitting scheme using a branch analysis with a Newton-Raphson scheme in multiple dimensions to determine values of critical parameters in the actual experimental apparatus; and investigation of the ART effect with real panels as opposed to the spring-mass-damper systems currently used in much of the theory.
Passive and active plasma deceleration for the compact disposal of electron beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonatto, A., E-mail: abonatto@lbl.gov; CAPES Foundation, Ministry of Education of Brazil, Brasília, DF 700040-020; Schroeder, C. B.
2015-08-15
Plasma-based decelerating schemes are investigated as compact alternatives for the disposal of high-energy beams (beam dumps). Analytical solutions for the energy loss of electron beams propagating in passive and active (laser-driven) schemes are derived. These solutions, along with numerical modeling, are used to investigate the evolution of the electron distribution, including energy chirp and total beam energy. In the active beam dump scheme, a laser-driver allows a more homogeneous beam energy extraction and drastically reduces the energy chirp observed in the passive scheme. These concepts could benefit applications requiring overall compactness, such as transportable light sources, or facilities operating at high beam power.
Consistent forcing scheme in the cascaded lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Fei, Linlin; Luo, Kai Hong
2017-11-01
In this paper, we give an alternative derivation for the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is a unit matrix, the CLBM degrades into an MRT LBM. Based on this, a consistent forcing scheme is developed for the CLBM. The consistency of the nonslip rule, the second-order convergence rate in space, and the property of isotropy for the consistent forcing scheme is demonstrated through numerical simulations of several canonical problems. Several existing forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between MRT LBM and CLBM under a general framework.
NASA Astrophysics Data System (ADS)
Noh, S.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-12-01
Applications of sequential data assimilation methods have been increasing in hydrology to reduce uncertainty in model prediction. In a distributed hydrologic model there are many types of state variables, and the variables interact with each other on different time scales. However, a framework to deal with the delayed response that originates from these different time scales of hydrologic processes has not been thoroughly addressed in hydrologic data assimilation. In this study, we propose a lagged filtering scheme to consider the lagged response of internal states in a distributed hydrologic model using two filtering schemes: particle filtering (PF) and ensemble Kalman filtering (EnKF). The EnKF is a widely used sub-optimal filter that computes efficiently with a limited number of ensemble members but still relies on a Gaussian approximation. PF can be an alternative, in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions involved. In the case of PF, an advanced particle regularization scheme is implemented as well to preserve the diversity of the particle system. In the case of EnKF, the ensemble square root filter (EnSRF) is implemented. Each filtering method is parallelized and implemented on a high-performance computing system. A distributed hydrologic model, the water and energy transfer processes (WEP) model, is applied to the Katsura River catchment, Japan, to demonstrate the applicability of the proposed approaches. Forecast results via PF and EnKF are compared and analyzed in terms of prediction accuracy and probabilistic adequacy. Discussion focuses on the prospects and limitations of each data assimilation method.
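One PF analysis step can be sketched as follows (weighting by a Gaussian likelihood, then systematic resampling); the state, observation operator, and noise levels are invented, and the regularization step and the EnSRF variant are omitted.

    import numpy as np

    def pf_step(particles, obs, h, obs_std, rng=np.random):
        """One particle-filter update: weight by likelihood, then resample."""
        w = np.exp(-0.5 * ((obs - h(particles)) / obs_std) ** 2)
        w /= w.sum()
        # systematic resampling keeps particle diversity at low cost
        positions = (rng.rand() + np.arange(len(particles))) / len(particles)
        idx = np.minimum(np.searchsorted(np.cumsum(w), positions),
                         len(particles) - 1)
        return particles[idx]

    # Toy example: state = storage of one model cell, identity observation
    particles = np.random.normal(10.0, 2.0, size=1000)
    particles = pf_step(particles, obs=12.0, h=lambda x: x, obs_std=1.0)
    print(particles.mean(), particles.std())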
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly profound researches on coding theory, the security reduction and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security enhancement Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provably secure.
Lanzas, C; Broderick, G A; Fox, D G
2008-12-01
Adequate predictions of rumen-degradable protein (RDP) and rumen-undegradable protein (RUP) supplies are necessary to optimize performance while minimizing losses of excess nitrogen (N). The objectives of this study were to evaluate the original Cornell Net Carbohydrate Protein System (CNCPS) protein fractionation scheme and to develop and evaluate alternatives designed to improve its adequacy in predicting RDP and RUP. The CNCPS version 5 fractionates CP into 5 fractions based on solubility in protein precipitant agents, buffers, and detergent solutions: A represents the soluble nonprotein N, B1 is the soluble true protein, B2 represents protein with intermediate rates of degradation, B3 is the CP insoluble in neutral detergent solution but soluble in acid detergent solution, and C is the unavailable N. Model predictions were evaluated with studies that measured N flow data at the omasum. The N fractionation scheme in version 5 of the CNCPS explained 78% of the variation in RDP with a root mean square prediction error (RMSPE) of 275 g/d, and 51% of the RUP variation with RMSPE of 248 g/d. Neutral detergent insoluble CP flows were overpredicted with a mean bias of 128 g/d (40% of the observed mean). The greatest improvements in the accuracy of RDP and RUP predictions were obtained with the following 2 alternative schemes. Alternative 1 used the inhibitory in vitro system to measure the fractional rate of degradation for the insoluble protein fraction in which A = nonprotein N, B1 = true soluble protein, B2 = insoluble protein, C = unavailable protein (RDP: R² = 0.84 and RMSPE = 167 g/d; RUP: R² = 0.61 and RMSPE = 209 g/d), whereas alternative 2 redefined A and B1 fractions as the non-amino-N and amino-N in the soluble fraction respectively (RDP: R² = 0.79 with RMSPE = 195 g/d and RUP: R² = 0.54 with RMSPE = 225 g/d). We concluded that implementing alternative 1 or 2 will improve the accuracy of predicting RDP and RUP within the CNCPS framework.
Taylor, A H; Fox, K R; Hillsdon, M; Anokye, N; Campbell, J L; Foster, C; Green, C; Moxham, T; Mutrie, N; Searle, J; Trueman, P; Taylor, R S
2011-01-01
Objective To assess the impact of exercise referral schemes on physical activity and health outcomes. Design Systematic review and meta-analysis. Data sources Medline, Embase, PsycINFO, Cochrane Library, ISI Web of Science, SPORTDiscus, and ongoing trial registries up to October 2009. We also checked study references. Study selection Design: randomised controlled trials or non-randomised controlled (cluster or individual) studies published in peer review journals. Population: sedentary individuals with or without medical diagnosis. Exercise referral schemes defined as: clear referrals by primary care professionals to third party service providers to increase physical activity or exercise, physical activity or exercise programmes tailored to individuals, and initial assessment and monitoring throughout programmes. Comparators: usual care, no intervention, or alternative exercise referral schemes. Results Eight randomised controlled trials met the inclusion criteria, comparing exercise referral schemes with usual care (six trials), alternative physical activity intervention (two), and an exercise referral scheme plus a self determination theory intervention (one). Compared with usual care, follow-up data for exercise referral schemes showed an increased number of participants who achieved 90-150 minutes of physical activity of at least moderate intensity per week (pooled relative risk 1.16, 95% confidence interval 1.03 to 1.30) and a reduced level of depression (pooled standardised mean difference −0.82, −1.28 to −0.35). Evidence of a between group difference in physical activity of moderate or vigorous intensity or in other health outcomes was inconsistent at follow-up. We did not find any difference in outcomes between exercise referral schemes and the other two comparator groups. None of the included trials separately reported outcomes in individuals with specific medical diagnoses. Substantial heterogeneity in the quality and nature of the exercise referral schemes across studies might have contributed to the inconsistency in outcome findings. Conclusions Considerable uncertainty remains as to the effectiveness of exercise referral schemes for increasing physical activity, fitness, or health indicators, or whether they are an efficient use of resources for sedentary people with or without a medical diagnosis. PMID:22058134
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in conventional moment-closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied as a logical alternative, since the number of computer operations increases only linearly with the number of independent variables, compared with the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
Skavdahl, Isaac; Utgikar, Vivek; Christensen, Richard; ...
2016-05-24
We present alternative control schemes for an Advanced High Temperature Reactor system consisting of a reactor, an intermediate heat exchanger, and a secondary heat exchanger (SHX) in this paper. One scheme is designed to control the cold outlet temperature of the SHX (T_co) and the hot outlet temperature of the intermediate heat exchanger (T_ho2) by manipulating the hot-side flow rates of the heat exchangers (F_h/F_h2) in response to flow rate and temperature disturbances. The flow rate disturbances typically require a larger manipulation of the flow rates than temperature disturbances. An alternate strategy examines the control of the cold outlet temperature of the SHX (T_co) only, since this temperature provides the driving force for energy production in the power conversion unit or the process application. The control can be achieved by three options: (1) flow rate manipulation; (2) reactor power manipulation; or (3) a combination of the two. The first option has a quicker response but requires a large flow rate change. The second option is the slowest but does not involve any change in the flow rates of streams. The final option appears preferable as it has an intermediate response time and requires only a minimal flow rate change.
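The flow-rate-manipulation option can be sketched with a discrete PI loop acting on a toy first-order plant; the gains, plant response, and temperatures below are invented and are not the paper's model.

    def simulate_pi(setpoint=700.0, t_co=650.0, dt=1.0, steps=500):
        """Discrete PI controller driving T_co to its setpoint via hot-side flow."""
        kp, ki = 0.01, 0.002              # hypothetical controller gains
        integral = 0.0
        for _ in range(steps):
            err = setpoint - t_co
            integral += err * dt
            flow = max(0.1, 1.0 + kp * err + ki * integral)   # bounded actuator
            # toy first-order response: higher hot-side flow raises T_co
            t_co += dt * 0.05 * (600.0 + 120.0 * flow - t_co)
        return t_co, flow

    print(simulate_pi())    # T_co settles near the setpoint, flow near 0.83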
The neutral emergence of error minimized genetic codes superior to the standard genetic code.
Massey, Steven E
2016-11-07
The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how the searching of alternative codes for an error-minimized code can occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM has arisen via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if during code expansion the most similar amino acid to the parent amino acid, out of the set of unassigned amino acids, is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM has arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life. Copyright © 2016 Elsevier Ltd. All rights reserved.
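The EM score itself is simple to compute. The sketch below does so for a toy two-letter code; the alphabet, the property scale, and the two code assignments are invented for illustration and are far smaller than the real 64-codon code.

    # Toy code: 2-letter alphabet, 2-base codons, amino acids a..d with a
    # hypothetical physicochemical property value (a polarity-like scale).
    prop = {"a": 1.0, "b": 1.5, "c": 3.0, "d": 4.0}

    def em_score(code, alphabet="01"):
        """Mean squared property change over all single-point codon mutations;
        lower means better error minimization."""
        total, count = 0.0, 0
        for codon in code:
            for pos in range(len(codon)):
                for base in alphabet:
                    if base != codon[pos]:
                        mut = codon[:pos] + base + codon[pos + 1:]
                        total += (prop[code[codon]] - prop[code[mut]]) ** 2
                        count += 1
        return total / count

    similar_neighbors = {"00": "a", "01": "b", "10": "c", "11": "d"}
    scrambled = {"00": "a", "01": "d", "10": "c", "11": "b"}
    print(em_score(similar_neighbors), em_score(scrambled))  # 2.875 < 5.375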
NASA Astrophysics Data System (ADS)
Berrada, K.; Eleuch, H.
2017-09-01
Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance measurement precision for a two-level quantum system interacting with a classical electromagnetic field, using ultra-short strong pulses with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and made to decay more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with gradually changing phase good candidates for the implementation of schemes for quantum computation and coherent information processing.
Vibratory tactile display for textures
NASA Technical Reports Server (NTRS)
Ikei, Yasushi; Ikeno, Akihisa; Fukuda, Shuichi
1994-01-01
We have developed a tactile display that produces vibratory stimulus to a fingertip in contact with a vibrating tactor matrix. The display depicts tactile surface textures while the user is exploring a virtual object surface. A piezoelectric actuator drives each tactor in accordance with both the finger movement and the surface texture being traced. Spatiotemporal display control schemes were examined for presenting the fundamental surface texture elements. The temporal duration of the vibratory stimulus was experimentally optimized to simulate the adaptation process of cutaneous sensation. The selected duration time for presenting a single line edge agreed with the time threshold of tactile sensation. Spatial stimulus disposition schemes were then discussed for the representation of other edge shapes. As an alternative means not relying on amplitude control, a method of augmented duration at the edge was investigated. Spatial resolution of the display was measured for lines presented both perpendicular and parallel to the finger axis. Discrimination of texture density was also measured on random dot textures.
NASA Astrophysics Data System (ADS)
Avilova, I. P.; Krutilova, M. O.
2018-01-01
Economic growth is the main determinant of the trend of increasing greenhouse gas (GHG) emissions. The reduction of emissions and the stabilization of GHG levels in the atmosphere have therefore become urgent tasks for avoiding the worst predicted consequences of climate change. GHG emissions from the construction industry account for a significant share of industrial GHG emissions and are expected to increase steadily. The problem could be addressed through both economic and organizational restrictions, based on enhanced algorithms for calculating, and penalizing, environmental harm in the building industry. This study aims to quantify the GHG emissions caused by different constructive schemes of an RC framework during concrete casting. The results show that the proposed methodology enables a comparative analysis of alternative residential housing projects that takes into account the environmental damage caused by the construction process. The study was carried out in the framework of the Program of flagship university development on the base of Belgorod State Technological University named after V.G. Shoukhov.
NASA Astrophysics Data System (ADS)
Locke, Clayton R.; Kobayashi, Tohru; Midorikawa, Katsumi
2017-01-01
Odd-mass-selective ionization of palladium, for purposes of resource recycling and management of long-lived fission products, can be achieved by exploiting transition selection rules in a well-established three-step excitation process. In this conventional scheme, circularly polarized lasers of the same handedness excite isotopes via two intermediate 2D5/2 core states, and a third laser is then used for ionization via autoionizing Rydberg states. We propose an alternative excitation scheme via intermediate 2D3/2 core states before the autoionizing Rydberg state, improving ionization efficiency by over 130 times. We confirm high selectivity and measure odd-mass isotopes at >99.7(3)% of the total ionized product. We have identified and measured the relative ionization efficiency of the series of Rydberg states that converge to the upper ionization limit of the 4d9(2D3/2) level, and identify that the most efficient excitation is via the Rydberg state at 67668.18(10) cm-1.
Thermodynamic analysis of alternate energy carriers, hydrogen and chemical heat pipes
NASA Technical Reports Server (NTRS)
Cox, K. E.; Carty, R. H.; Conger, W. L.; Soliman, M. A.; Funk, J. E.
1976-01-01
The paper discusses the production concept and efficiency of two new energy transmission and storage media intended to overcome the disadvantages of electricity as an overall energy carrier. These media are hydrogen produced by water-splitting and the chemical heat pipe. Hydrogen can be transported or stored, and burned as energy is needed, forming only water and thus obviating pollution problems. The chemical heat pipe envisions a system in which heat is stored as the heat of reaction of chemical species. The thermodynamic analysis of these two methods is discussed in terms of first-law and second-law efficiency. It is concluded that on a first-law basis chemical heat pipes offer large advantages over thermochemical hydrogen generation schemes, except that the thermal energy is degraded in temperature; this nevertheless provides a source of low-temperature (800 K) heat for process heat applications. On a second-law basis, hydrogen schemes are superior in that the amount of available work delivered is greater than with chemical heat pipes.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical properties and requires a similar set of calculations. Generally, fewer arithmetic operations, and hence a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
ERIC Educational Resources Information Center
Geelan, David R.
2000-01-01
Suggests that Kuhn's and Lakatos' schemes for the philosophy of science have been pervasive metaphors for conceptual change approaches to the learning and teaching of science, and have been used both implicitly and explicitly to provide an organizing framework and justification matrix for those perspectives. Describes four alternative perspectives…
Delivering an Alternative Medicine Resource to the User's Desktop via World Wide Web.
ERIC Educational Resources Information Center
Li, Jie; Wu, Gang; Marks, Ellen; Fan, Weiyu
1998-01-01
Discusses the design and implementation of a World Wide Web-based alternative medicine virtual resource. This homepage integrates regional, national, and international resources and delivers library services to the user's desktop. Goals, structure, and organizational schemes of the system are detailed, and design issues for building such a…
Self-adaptive Solution Strategies
NASA Technical Reports Server (NTRS)
Padovan, J.
1984-01-01
The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type was overviewed. Work was introduced on alternative formulations which lead to improved algorithms that avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.
ERIC Educational Resources Information Center
Datt, Gaurav; Ravallion, Martin
"Workfare" schemes that offer poor participants unskilled jobs at low wages have become a popular alternative to cash or in-kind handouts. Yet little is known about a key determinant of the cost effectiveness of such schemes in reducing poverty: the behavioral responses through time allocation of participants and their families. These…
78 FR 40627 - Prohibitions and Conditions on the Importation and Exportation of Rough Diamonds
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-08
... November 5, 2002, the launch of the Kimberley Process Certification Scheme (KPCS) for rough diamonds. Under... implements the Kimberley Process Certification Scheme (KPCS) for rough diamonds. The KPCS is a process, based... not been controlled through the Kimberley Process Certification Scheme. By Executive Order 13312 dated...
NASA Astrophysics Data System (ADS)
Johnson, Stanley
An increasing adoption of digital signal processing (DSP) in optical fiber telecommunication has brought to the fore several interesting DSP enabled modulation formats. One such format is orthogonal frequency division multiplexing (OFDM), which has seen great success in wireless and wired RF applications, and is being actively investigated by several research groups for use in optical fiber telecom. In this dissertation, I present three implementations of OFDM for elastic optical networking and distributed network control. The first is a field programmable gate array (FPGA) based real-time implementation of a version of OFDM conventionally known as intensity modulation and direct detection (IMDD) OFDM. I experimentally demonstrate the ability of this transmission system to dynamically adjust bandwidth and modulation format to meet networking constraints in an automated manner. To the best of my knowledge, this is the first real-time software defined networking (SDN) based control of an OFDM system. In the second OFDM implementation, I experimentally demonstrate a novel OFDM transmission scheme that supports both direct detection and coherent detection receivers simultaneously using the same OFDM transmitter. This interchangeable receiver solution enables a trade-off between bit rate and equipment cost in network deployment and upgrades. I show that the proposed transmission scheme can provide a receiver sensitivity improvement of up to 1.73 dB as compared to IMDD OFDM. I also present two novel polarization analyzer based detection schemes, and study their performance using experiment and simulation. In the third implementation, I present an OFDM pilot-tone based scheme for distributed network control. The first instance of an SDN-based OFDM elastic optical network with pilot-tone assisted distributed control is demonstrated. An improvement in spectral efficiency and a fast reconfiguration time of 30 ms have been achieved in this experiment. Finally, I experimentally demonstrate optical re-timing of a 10.7 Gb/s data stream utilizing the property of bound soliton pairs (or "soliton molecules") to relax to an equilibrium temporal separation after propagation through a nonlinear dispersion alternating fiber span. Pulses offset up to 16 ps from bit center are successfully re-timed. The optical re-timing scheme studied here is a good example of signal processing in the optical domain and such a technique can overcome the bandwidth bottleneck present in DSP. An enhanced version of this re-timing scheme is analyzed using numerical simulations.
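The OFDM modulation/demodulation core common to these systems can be sketched with FFTs; the subcarrier count, cyclic prefix length, QPSK mapping, and two-tap channel below are generic textbook choices, not the dissertation's system parameters.

    import numpy as np

    N, CP = 64, 16                                   # subcarriers, cyclic prefix
    rng = np.random.default_rng(0)

    bits = rng.integers(0, 2, size=(N, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    # OFDM modulation: IFFT across subcarriers, then prepend the cyclic prefix
    time_sym = np.fft.ifft(qpsk)
    tx = np.concatenate([time_sym[-CP:], time_sym])

    # Toy channel: two-tap multipath plus a little noise
    h = np.array([1.0, 0.3 + 0.2j])
    rx = np.convolve(tx, h)[: len(tx)] + 0.01 * (rng.standard_normal(len(tx))
                                                 + 1j * rng.standard_normal(len(tx)))

    # Demodulation: drop the prefix, FFT, one-tap equalization per subcarrier
    H = np.fft.fft(h, N)
    eq = np.fft.fft(rx[CP:]) / H
    bits_hat = np.stack([(eq.real > 0).astype(int), (eq.imag > 0).astype(int)], axis=1)
    print((bits_hat == bits).mean())                 # ~1.0 at this SNR

The cyclic prefix is what turns the multipath channel into a circular convolution, so each subcarrier needs only a single complex division to equalize.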
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER
2014-01-01
Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826
NASA Astrophysics Data System (ADS)
Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.
2017-09-01
How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allocates C to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.-iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m^-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m^-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C-LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic C_stem / C_leaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.
NASA Astrophysics Data System (ADS)
Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.
2016-12-01
Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics associated with natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization methods for training ANNs have already been found to be problematic in some cases: they carry the risk of premature convergence and entrapment in local optima, and the search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based searching schemes, we propose an effective and efficient global searching method, termed the Shuffled Complex Evolutionary Global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connectivity weights. A large number of real-world datasets were tested with the SP-UCI-based ANN, as well as various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling for Trinity Lake in the western U.S. In this case study, multiple climate indices are used as predictors for the SP-UCI-enhanced ANN. The reservoir inflows and hydropower releases are predicted at sub-seasonal to seasonal scales. Results show that the SP-UCI-enhanced ANN is able to achieve better statistics than the other EA-based ANNs, which implies the usefulness and power of the proposed SP-UCI-enhanced ANN for reservoir operation, water resources engineering and management. The SP-UCI-enhanced ANN is universally applicable to many other regression and prediction problems, and it has good potential to be an alternative to the classical BP scheme and gradient-based optimization methods.
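As a rough illustration of the idea above (replacing back-propagation with a population-based global search over the network weights), the following minimal Python sketch trains a tiny one-hidden-layer network with SciPy's differential evolution. SP-UCI itself is not available in common libraries, so differential evolution stands in for the evolutionary search, and the data, network size and bounds are all illustrative assumptions.

    # Minimal sketch: train a tiny one-hidden-layer ANN by global search over
    # its weights instead of back-propagation. SciPy's differential evolution
    # stands in for SP-UCI (not available in common libraries); data, network
    # size and bounds are illustrative assumptions.
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 3))             # predictors (e.g. climate indices)
    y = np.sin(X @ np.array([1.0, -2.0, 0.5]))   # synthetic target (e.g. inflow)

    H = 5                                        # hidden units
    n_w = 3 * H + H + H + 1                      # W1, b1, w2, b2

    def mse(theta):
        W1 = theta[:3 * H].reshape(3, H)
        b1 = theta[3 * H:4 * H]
        w2 = theta[4 * H:5 * H]
        b2 = theta[5 * H]
        pred = np.tanh(X @ W1 + b1) @ w2 + b2    # forward pass
        return np.mean((pred - y) ** 2)          # training loss

    result = differential_evolution(mse, bounds=[(-5, 5)] * n_w,
                                    maxiter=100, seed=0)
    print("training MSE:", result.fun)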
Computing Evans functions numerically via boundary-value problems
NASA Astrophysics Data System (ADS)
Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin
2018-03-01
The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specifically addressed. A high efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
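For readers unfamiliar with the Locally One-Dimensional approach mentioned above, the sketch below shows its core structure for the 3D heat equation: one implicit tridiagonal solve per coordinate direction per timestep. It is a minimal illustration, not the paper's optimized solver; grid size, parameters and the boundary treatment are assumptions.

    # Minimal sketch of one Locally One-Dimensional (LOD) timestep for the 3D
    # heat equation u_t = kappa * laplacian(u): an independent implicit
    # tridiagonal solve along each coordinate direction. Grid size, parameters
    # and the zero-Dirichlet boundary treatment are illustrative assumptions.
    import numpy as np
    from scipy.linalg import solve_banded

    n, kappa, dx, dt = 32, 1.0, 1.0 / 32, 1e-4
    r = kappa * dt / dx ** 2
    u = np.zeros((n, n, n))
    u[n // 2, n // 2, n // 2] = 1.0          # initial heat spike

    # Banded storage of the tridiagonal operator (I - r * D2).
    ab = np.zeros((3, n))
    ab[0, 1:] = -r                           # superdiagonal
    ab[1, :] = 1 + 2 * r                     # main diagonal
    ab[2, :-1] = -r                          # subdiagonal

    def sweep(u, axis):
        """Implicit 1D solve along one axis for every grid line at once."""
        v = np.moveaxis(u, axis, 0).reshape(n, -1)   # columns = grid lines
        v = solve_banded((1, 1), ab, v)
        return np.moveaxis(v.reshape(n, n, n), 0, axis)

    for axis in range(3):                    # x, y, then z sweep = one LOD step
        u = sweep(u, axis)
    print("mass after one LOD step:", u.sum())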
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-21
... has not been controlled through the Kimberley Process Certification Scheme (KPCS). Under Section 3(2) of the Act, ``controlled through the Kimberley Process Certification Scheme'' means an importation... Kimberley Process Certification Scheme. Angola--Ministry of Geology and Mines. Armenia--Ministry of Trade...
The study of PDF turbulence models in combustion
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
In combustion computations, it is known that the predictions of chemical reaction rates are poor if conventional turbulence models are used. The probability density function (pdf) method seems to be the only alternative that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus is the only viable approach for more accurate turbulent combustion calculations. The fact that the pdf equation has a very large dimensionality renders finite difference schemes extremely demanding on computer memory and thus impractical. A logical alternative is the Monte Carlo scheme. Since CFD has a certain maturity as well as acceptance, the use of a combined CFD and Monte Carlo scheme is more beneficial. Therefore, a scheme is chosen that uses a conventional CFD flow solver in calculating the flow field properties such as velocity, pressure, etc., while the chemical reaction part is solved using a Monte Carlo scheme. The discharge of a heated turbulent plane jet into quiescent air was studied. Experimental data for this problem show that when the temperature difference between the jet and the surrounding air is small, the buoyancy effect can be neglected and the temperature can be treated as a passive scalar. The fact that jet flows have a self-similar solution lends convenience to the modeling study. Furthermore, the existence of experimental data for turbulent shear stress and temperature variance makes the case ideal for the testing of pdf models wherein these values can be directly evaluated.
The Emergent Universe scheme and tunneling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labraña, Pedro
We present an alternative scheme for an Emergent Universe scenario, developed previously in Phys. Rev. D 86, 083524 (2012), where the universe is initially in a static state supported by a scalar field located in a false vacuum. The universe begins to evolve when, by quantum tunneling, the scalar field decays into a state of true vacuum. The Emergent Universe models are interesting since they provide specific examples of non-singular inflationary universes.
2017-01-09
2017 Distribution A – Approved for public release; Distribution Unlimited. PA Clearance 17030. Introduction: • Filtering schemes offer a less...dissipative alternative to the standard artificial dissipation operators when applied to high-order spatial/temporal schemes • Limiting Fact: Filters impart...systems require a preconditioned dual-time framework to be solved efficiently • Limiting Fact: Filtering cannot be applied only at the physical-time
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
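The two-step idea above can be illustrated in a few lines: a logistic regression predicts precipitation occurrence, and a separate regression, fitted on wet days only, predicts amounts. The sketch below uses scikit-learn with synthetic predictors; the variable choices are illustrative, not those of the paper.

    # Sketch of the two-step estimation: logistic regression for wet/dry
    # occurrence, then a regression fitted on wet days only for amounts.
    # Predictors and data are synthetic stand-ins, not the paper's inputs.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2))            # e.g. elevation, gauge-based index
    wet = (X[:, 0] + 0.5 * rng.normal(size=500)) > 0
    amount = np.where(wet, np.exp(0.8 * X[:, 1] + 0.3 * rng.normal(size=500)), 0.0)

    occ_model = LogisticRegression().fit(X, wet)                     # step 1: occurrence
    amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))  # step 2: amounts

    def estimate(X_new):
        p_wet = occ_model.predict_proba(X_new)[:, 1]     # probability of rain
        est = np.exp(amt_model.predict(X_new))           # back-transformed amount
        return np.where(p_wet > 0.5, est, 0.0)

    print(estimate(X[:5]))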
Roads towards fault-tolerant universal quantum computation
NASA Astrophysics Data System (ADS)
Campbell, Earl T.; Terhal, Barbara M.; Vuillot, Christophe
2017-09-01
A practical quantum computer must not merely store information, but also process it. To prevent errors introduced by noise from multiplying and spreading, a fault-tolerant computational architecture is required. Current experiments are taking the first steps toward noise-resilient logical qubits. But to convert these quantum devices from memories to processors, it is necessary to specify how a universal set of gates is performed on them. The leading proposals for doing so, such as magic-state distillation and colour-code techniques, have high resource demands. Alternative schemes, such as those that use high-dimensional quantum codes in a modular architecture, have potential benefits, but need to be explored further.
Multichannel blind deconvolution of spatially misaligned images.
Sroubek, Filip; Flusser, Jan
2005-07-01
Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
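The alternating-minimization core of such blind deconvolution schemes can be shown on a 1D toy problem: with the image fixed, the blur estimate is a linear least-squares problem, and vice versa. The sketch below omits the paper's priors and regularization, so it is only a skeleton of the approach, and it inherits the usual scale ambiguity of blind deconvolution; all sizes and data are assumptions.

    # Toy 1D alternating minimization for blind deconvolution: with the image
    # fixed, the blur is a linear least-squares problem, and vice versa. The
    # paper's priors/regularizers are omitted here for brevity.
    import numpy as np

    def conv_matrix(v, m):
        """Matrix C with C @ w == np.convolve(v, w) for any w of length m."""
        C = np.zeros((len(v) + m - 1, m))
        for j in range(m):
            C[j:j + len(v), j] = v
        return C

    rng = np.random.default_rng(2)
    x_true = np.zeros(60); x_true[15:45] = 1.0        # simple "image"
    k_true = np.array([0.25, 0.5, 0.25])              # unknown blur
    y = np.convolve(x_true, k_true) + 0.01 * rng.normal(size=62)

    x = y[:60].copy()                                 # crude initial image
    k = np.ones(3) / 3                                # flat initial blur
    for _ in range(20):                               # alternate the two solves
        k = np.linalg.lstsq(conv_matrix(x, 3), y, rcond=None)[0]
        x = np.linalg.lstsq(conv_matrix(k, 60), y, rcond=None)[0]
    print("kernel estimate:", np.round(k, 3))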
Bian, Tianjian; Gao, Jie; Zhang, Chuang; ...
2017-12-10
In September 2012, Chinese scientists proposed a Circular Electron Positron Collider (CEPC) in China at 240 GeV center-of-mass energy for Higgs studies. The booster provides 120 GeV electron and positron beams to the CEPC collider for top-up injection at 0.1 Hz. The design of the full energy booster ring of the CEPC is a challenge: the ejected beam energy is 120 GeV and the injected beam energy is 6 GeV. In this paper we describe two alternative schemes, the wiggler bend scheme and the normal bend scheme. For the wiggler bend scheme, we propose to operate the booster ring as a large wiggler at low energy and as a normal ring at high energy to avoid the problem of very low dipole magnet fields. For the normal bend scheme, we implement orbit correction to correct for the Earth's field.
A digital memories based user authentication scheme with privacy preservation.
Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang
2017-01-01
The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attacks, key-loggers, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.
Review of the technological approaches for grey water treatment and reuses.
Li, Fangyue; Wichmann, Knut; Otterpohl, Ralf
2009-05-15
Based on a literature review, a non-potable urban grey water reuse standard is proposed and the treatment alternatives and reuse scheme for grey water reuse are evaluated according to grey water characteristics and the proposed standard. The literature review shows that all types of grey water have good biodegradability. The bathroom and the laundry grey water are deficient in both nitrogen and phosphorus. The kitchen grey water has a balanced COD:N:P ratio. The review also reveals that physical processes alone are not sufficient to guarantee an adequate reduction of the organics, nutrients and surfactants. The chemical processes can efficiently remove the suspended solids, organic materials and surfactants in low strength grey water. The combination of an aerobic biological process with physical filtration and disinfection is considered to be the most economical and feasible solution for grey water recycling. The MBR appears to be a very attractive solution in collective urban residential buildings.
Positron-Electron Annihilation Process in (2,2)-Difluoropropane Molecule
NASA Astrophysics Data System (ADS)
Liu, Yang; Ma, Xiao-Guang; Zhu, Ying-Hao
2016-04-01
The positron-electron annihilation process in the (2,2)-difluoropropane molecule and the corresponding gamma-ray spectra are studied by a quantum chemistry method. The positrophilic electrons in the (2,2)-difluoropropane molecule are found for the first time. The theoretical predictions show that the outermost 2s electrons of the fluorine atoms play an important role in the positron-electron annihilation process of (2,2)-difluoropropane. In the present scheme, the correlation coefficient between the theoretical gamma-ray spectra and the experiments can reach 99%. The present study gives an alternative annihilation model for the positron-electron pair in larger molecules. Supported by the National Natural Science Foundation of China under Grant No. 11347011, the Natural Science Foundation Project of Shandong Province under Grant No. ZR2011AM010 and the 2014 Technology Innovation Fund of Ludong University under Grant Nos. 1d151007 and ld15l016
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
77 FR 27831 - List of Participating Countries and Entities Under the Clean Diamond Trade Act of 2003
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Kimberley Process Certification Scheme (KPCS). Under Section 3(2) of the Act, ``controlled through the Kimberley Process Certification Scheme'' means an importation from the territory of a Participant or... Participants in the Kimberley Process Certification Scheme. Angola--Ministry of Geology and Mines. Armenia...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., unless the rough diamond has been controlled through the Kimberley Process Certification Scheme. (b) The... States of any rough diamond not controlled through the Kimberley Process Certification Scheme do not... Process Certification Scheme and thus is not permitted, except in the following circumstance. The...
Quantum attack-resistent certificateless multi-receiver signcryption scheme.
Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong
2013-01-01
The existing certificateless signcryption schemes were designed mainly based on the traditional public key cryptography, in which the security relies on the hard problems, such as factor decomposition and discrete logarithm. However, these problems will be easily solved by the quantum computing. So the existing certificateless signcryption schemes are vulnerable to the quantum attack. Multivariate public key cryptography (MPKC), which can resist the quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we proposed a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC, which can withstand the quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We proved its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards.
Blind compressive sensing dynamic MRI
Lingala, Sajan Goud; Jacob, Mathews
2013-01-01
We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler sub problems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
Pedizzi, C; Noya, I; Sarli, J; González-García, S; Lema, J M; Moreira, M T; Carballa, M
2018-04-20
The application of livestock manure on agricultural land is being restricted due to its significant content of phosphorus (P) and nitrogen (N), leading to eutrophication. At the same time, the growing demand for N and P mineral fertilizers is increasing their production costs and causing the depletion of natural phosphate rock deposits. In the present work, seven technologically feasible treatment schemes for energy (biogas) and nutrient recovery (e.g., struvite precipitation) and/or removal (e.g., partial nitritation/anammox) were evaluated from an environmental perspective. In general, while approaches based solely on energy recovery and use of digestate as fertilizer are commonly limited by community regulations, strategies pursuing the generation of high-quality struvite are not environmentally sound alternatives. In contrast, schemes that include further solid/liquid separation of the digestate improved the environmental profile, and their combination with an additional N-removal stage would lead to the most environmentally friendly framework. However, the preferred scenario was identified to be highly dependent on the particular conditions of each site, integrating environmental, social and economic criteria. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman
2017-07-01
This research paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist for three-dimensional reaction-diffusion phenomena and are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results show that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
Design and Control of Integrated Systems for Hydrogen Production and Power Generation
NASA Astrophysics Data System (ADS)
Georgis, Dimitrios
Growing concerns on CO2 emissions have led to the development of highly efficient power plants. Options for increased energy efficiencies include alternative energy conversion pathways, energy integration and process intensification. Solid oxide fuel cells (SOFC) constitute a promising alternative for power generation since they convert the chemical energy electrochemically directly to electricity. Their high operating temperature shows potential for energy integration with energy intensive units (e.g. steam reforming reactors). Although energy integration is an essential tool for increased efficiencies, it leads to highly complex process schemes with rich dynamic behavior, which are challenging to control. Furthermore, the use of process intensification for increased energy efficiency imposes an additional control challenge. This dissertation identifies and proposes solutions on design, operational and control challenges of integrated systems for hydrogen production and power generation. Initially, a study on energy integrated SOFC systems is presented. Design alternatives are identified, control strategies are proposed for each alternative and their validity is evaluated under different operational scenarios. The operational range of the proposed control strategies is also analyzed. Next, thermal management of water gas shift membrane reactors, which are a typical application of process intensification, is considered. Design and operational objectives are identified and a control strategy is proposed employing advanced control algorithms. The performance of the proposed control strategy is evaluated and compared with classical control strategies. Finally SOFC systems for combined heat and power applications are considered. Multiple recycle loops are placed to increase design flexibility. Different operational objectives are identified and a nonlinear optimization problem is formulated. Optimal designs are obtained and their features are discussed and compared. The results of the dissertation provide a deeper understanding on the design, operational and control challenges of the above systems and can potentially guide further commercialization efforts. In addition to this, the results can be generalized and used for applications from the transportation and residential sector to large--scale power plants.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
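To make the Bayesian ingredient concrete, the toy sketch below uses a random-walk Metropolis sampler to constrain the parameters (a, b) of an assumed power-law process rate against synthetic observations. It illustrates the MCMC-within-Bayesian-inference pattern only; it is not the BOSS code, and the rate form and noise level are assumptions.

    # Toy random-walk Metropolis sampler: constrain parameters (a, b) of an
    # assumed power-law process rate, rate = a * M**b, with noisy synthetic
    # observations. Not the BOSS code itself.
    import numpy as np

    rng = np.random.default_rng(3)
    M = np.linspace(0.1, 2.0, 40)                    # e.g. a drop-size moment
    obs = 2.0 * M ** 1.5 + rng.normal(0, 0.1, M.size)

    def log_post(theta):
        a, b = theta
        if a <= 0:                                   # flat prior with a > 0
            return -np.inf
        resid = obs - a * M ** b
        return -0.5 * np.sum(resid ** 2) / 0.1 ** 2  # Gaussian likelihood

    theta, lp, samples = np.array([1.0, 1.0]), -np.inf, []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.05, 2)        # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    print("posterior mean (a, b):", np.mean(samples[2500:], axis=0))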
Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe
2003-11-06
We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
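A plain, non-entropy version of SVM-based recursive feature elimination with chunked removal can be reproduced with scikit-learn, as sketched below; the fixed step parameter stands in for E-RFE's entropy-driven chunk size, which scikit-learn does not implement.

    # Chunked recursive feature elimination with a linear SVM in scikit-learn.
    # The fixed step parameter removes 50 features per round, standing in for
    # E-RFE's entropy-driven chunk size (not implemented in scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=100, n_features=500,
                               n_informative=10, random_state=0)
    rfe = RFE(estimator=SVC(kernel="linear"),
              n_features_to_select=10, step=50)      # drop 50 "genes" per round
    rfe.fit(X, y)
    print("selected feature indices:", rfe.get_support(indices=True))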
Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua
2013-12-01
This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits the tracking performance. The proposed method tackles these problems with the main novelties on: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios when target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons with eight existing state-of-the-art/most relevant manifold/nonmanifold trackers with evaluations have provided further support to the proposed scheme.
ERIC Educational Resources Information Center
Tregear, Angela
2011-01-01
In the now extensive literature on alternative food networks (AFNs) (e.g. farmers' markets, community supported agriculture, box schemes), a body of work has pointed to socio-economic problems with such systems, which run counter to headline claims in the literature. This paper argues that rather than being a reflection of inherent complexities in…
ERIC Educational Resources Information Center
Clarkson, W. W.; And Others
This module discusses the characteristics of alternate sites and management schemes and attempts to evaluate the efficiency of each alternative in terms of waste treatment. Three types of non-crop land application are discussed: (1) forest lands; (2) park and recreational application; and (3) land reclamation in surface or strip mined areas. (BB)
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
Counterfactual entanglement distribution without transmitting any particles.
Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou
2014-04-21
To date, all schemes for entanglement distribution needed to send entangled particles or a separable mediating particle among distant participants. Here, we propose a counterfactual protocol for entanglement distribution, in contrast to the traditional forms; that is, two distant particles can be entangled with no physical particle traveling between the two remote participants. We also present an alternative scheme for realizing the counterfactual photonic entangled state distribution using a Michelson-type interferometer and a self-assembled GaAs/InAs quantum dot embedded in an optical microcavity. The numerical analysis of the effect of experimental imperfections on the performance of the scheme shows that the entanglement distribution may be implementable with high fidelity.
NASA Astrophysics Data System (ADS)
Quaife, T. L.; Davenport, I. J.; Lines, E.; Styles, J.; Lewis, P.; Gurney, R. J.
2012-12-01
Satellite observations offer a spatially and temporally synoptic data source for constraining models of land surface processes, but exploitation of these data for such purposes has been largely ad-hoc to date. In part this is because traditional land surface models, and hence most land surface data assimilation schemes, have tended to focus on a specific component of the land surface problem; typically either surface fluxes of water and energy or biogeochemical cycles such as carbon and nitrogen. Furthermore the assimilation of satellite data into such models tends to be restricted to a single wavelength domain, for example passive microwave, thermal or optical, depending on the problem at hand. The next generation of land surface schemes, such as the Joint UK Land Environment Simulator (JULES) and the US Community Land Model (CLM), represent a broader range of processes but at the expense of increasing overall model complexity and in some cases reducing the level of detail in specific processes to accommodate this. Typically, the level of physical detail used to represent the interaction of electromagnetic radiation with the surface is not sufficient to enable prediction of intrinsic satellite observations (reflectance, brightness temperature and so on) and consequently these are not assimilated directly into the models. A seemingly attractive alternative is to assimilate high-level products derived from satellite observations but these are often only superficially related to the corresponding variables in land surface models due to conflicting assumptions between the two. This poster describes the water and energy balance modeling components of a project funded by the European Space Agency to develop a data assimilation scheme for the land surface and observation operators to translate between models and the intrinsic observations acquired by satellite missions. The rationale behind the design of the underlying process model is to represent the physics of the water and energy balance in as parsimonious a manner as possible, using a force-restore approach, but describing the physics of electromagnetic radiation scattering at the surface sufficiently well that it is possible to assimilate the intrinsic observations made by remote sensing instruments. In this manner the initial configuration of the resulting scheme will be able to make optimal use of available satellite observations at arbitrary wavelengths and geometries. Model complexity can then be built up from this point whilst ensuring consistency with satellite observations.
NASA Astrophysics Data System (ADS)
Urban, Matthias; Möller, Robert; Fritzsche, Wolfgang
2003-02-01
DNA analytics is a growing field based on the increasing knowledge about the genome with special implications for the understanding of molecular bases for diseases. Driven by the need for cost-effective and high-throughput methods for molecular detection, DNA chips are an interesting alternative to more traditional analytical methods in this field. The standard readout principle for DNA chips is fluorescence based. Fluorescence is highly sensitive and broadly established, but shows limitations regarding quantification (due to signal and/or dye instability) and the need for sophisticated (and therefore high-cost) equipment. This article introduces a readout system for an alternative detection scheme based on electrical detection of nanoparticle-labeled DNA. If labeled DNA is present in the analyte solution, it will bind on complementary capture DNA immobilized in a microelectrode gap. A subsequent metal enhancement step leads to a deposition of conductive material on the nanoparticles, and finally an electrical contact between the electrodes. This detection scheme offers the potential for a simple (low-cost as well as robust) and highly miniaturizable method, which could be well-suited for point-of-care applications in the context of lab-on-a-chip technologies. The demonstrated apparatus allows a parallel readout of an entire array of microstructured measurement sites. The readout is combined with data-processing by an embedded personal computer, resulting in an autonomous instrument that measures and presents the results. The design and realization of such a system is described, and first measurements are presented.
NMRPipe: a multidimensional spectral processing system based on UNIX pipes.
Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A
1995-11-01
The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
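The pipeline idea can be mimicked in miniature with composable generator stages, each applying one component of an overall processing scheme to a stream of vectors, analogous to chaining programs with UNIX pipes. The Python sketch below is an analogy only, not NMRPipe's actual interface.

    # The pipe idea in miniature: generator stages, each applying one
    # component of the overall scheme to a stream of 1D vectors, chained like
    # UNIX pipes. A Python analogy only, not NMRPipe's actual interface.
    import numpy as np

    def source(n_rows, n_points):
        rng = np.random.default_rng(4)
        for _ in range(n_rows):
            yield rng.normal(size=n_points)      # stand-in for FID rows

    def window(rows):
        for r in rows:
            yield r * np.hanning(r.size)         # apodization stage

    def fourier(rows):
        for r in rows:
            yield np.abs(np.fft.rfft(r))         # Fourier transform stage

    # Equivalent of: source | window | fourier
    pipeline = fourier(window(source(n_rows=8, n_points=256)))
    spectrum = np.vstack(list(pipeline))
    print(spectrum.shape)                        # (8, 129)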
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guozhu, E-mail: gzhang6@ncsu.edu
Zebrafish have become a key alternative model for studying health effects of environmental stressors, partly due to their genetic similarity to humans, fast generation time, and the efficiency of generating high-dimensional systematic data. Studies aiming to characterize adverse health effects in zebrafish typically include several phenotypic measurements (endpoints). While there is a solid biomedical basis for capturing a comprehensive set of endpoints, making summary judgments regarding health effects requires thoughtful integration across endpoints. Here, we introduce a Bayesian method to quantify the informativeness of 17 distinct zebrafish endpoints as a data-driven weighting scheme for a multi-endpoint summary measure, called weighted Aggregate Entropy (wAggE). We implement wAggE using high-throughput screening (HTS) data from zebrafish exposed to five concentrations of all 1060 ToxCast chemicals. Our results show that our empirical weighting scheme provides better performance in terms of the Receiver Operating Characteristic (ROC) curve for identifying significant morphological effects and improves robustness over traditional curve-fitting approaches. From a biological perspective, our results suggest that developmental cascade effects triggered by chemical exposure can be recapitulated by analyzing the relationships among endpoints. Thus, wAggE offers a powerful approach for analysis of multivariate phenotypes that can reveal underlying etiological processes. - Highlights: • Introduced a data-driven weighting scheme for multiple phenotypic endpoints. • Weighted Aggregate Entropy (wAggE) implies differential importance of endpoints. • Endpoint relationships reveal developmental cascade effects triggered by exposure. • wAggE is generalizable to multi-endpoint data of different shapes and scales.
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; Chao, Winston C.; Sud, Y. C.; Walker, G. K.
1994-01-01
A generalized form of the second-order van Leer transport scheme is derived. Several constraints to the implied subgrid linear distribution are discussed. A very simple positive-definite scheme can be derived directly from the generalized form. A monotonic version of the scheme is applied to the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) for the moisture transport calculations, replacing the original fourth-order center-differencing scheme. Comparisons with the original scheme are made in idealized tests as well as in a summer climate simulation using the full GLA GCM. A distinct advantage of the monotonic transport scheme is its ability to transport sharp gradients without producing spurious oscillations and unphysical negative mixing ratio. Within the context of low-resolution climate simulations, the aforementioned characteristics are demonstrated to be very beneficial in regions where cumulus convection is active. The model-produced precipitation pattern using the new transport scheme is more coherently organized both in time and in space, and correlates better with observations. The side effect of the filling algorithm used in conjunction with the original scheme is also discussed, in the context of idealized tests. The major weakness of the proposed transport scheme with a local monotonic constraint is its substantial implicit diffusion at low resolution. Alternative constraints are discussed to counter this problem.
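A one-dimensional sketch of a van Leer-type monotonic advection step illustrates why such schemes transport sharp gradients without spurious oscillations or negative mixing ratios: the piecewise-linear slope is limited to zero at extrema. This is an illustrative toy under assumed settings, not the GLA GCM implementation.

    # 1D sketch of a van Leer-type monotonic advection step: piecewise-linear
    # reconstruction with a limited slope (zero at extrema) keeps a sharp
    # pulse inside [0, 1]. Illustrative toy, not the GLA GCM implementation.
    import numpy as np

    def van_leer_step(q, c):
        """One step of q_t + u q_x = 0 on a periodic grid, 0 < c < 1."""
        dqp = np.roll(q, -1) - q                 # forward difference
        dqm = q - np.roll(q, 1)                  # backward difference
        with np.errstate(divide="ignore", invalid="ignore"):
            s = np.where(dqp * dqm > 0,          # van Leer (harmonic) limiter
                         2 * dqp * dqm / (dqp + dqm), 0.0)
        q_face = q + 0.5 * (1 - c) * s           # upwind face value
        flux = c * q_face
        return q - (flux - np.roll(flux, 1))

    q = np.zeros(100); q[40:60] = 1.0            # sharp "mixing ratio" pulse
    for _ in range(200):
        q = van_leer_step(q, c=0.4)
    print("min, max after advection:", q.min(), q.max())   # stays in [0, 1]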
NASA Astrophysics Data System (ADS)
Symon, Keith R.
2005-04-01
In the late 1950's and the 1960's the MURA (Midwestern Universities Research Association) working group developed fixed field alternating gradient (FFAG) particle accelerators. FFAG accelerators are a natural corollary of the invention of alternating gradient focusing. The fixed guide field accommodates all orbits from the injection to the final energy. For this reason, the transverse motion in the guide field is nearly decoupled from the longitudinal acceleration. This allows a wide variety of acceleration schemes, using betatron or rf accelerating fields, beam stacking, bucket lifts, phase displacement, etc. It also simplifies theoretical and experimental studies of accelerators. Theoretical studies included an extensive analysis of rf acceleration processes, nonlinear orbit dynamics, and collective instabilities. Two FFAG designs, radial sector and spiral sector, were invented. The MURA team built small electron models of each type, and used them to study orbit dynamics, acceleration processes, orbit instabilities, and space charge limits. A practical result of these studies was the invention of the spiral sector cyclotron. Another was beam stacking, which led to the first practical way of achieving colliding beams. A 50 MeV two-way radial sector model was built in which it proved possible to stack a beam of over 10 amperes of electrons.
Rotating permanent magnet excitation for blood flow measurement.
Nair, Sarath S; Vinodkumar, V; Sreedevi, V; Nagesh, D S
2015-11-01
A compact, portable and improved blood flow measurement system for an extracorporeal circuit having a rotating permanent magnetic excitation scheme is described in this paper. The system consists of a set of permanent magnets rotating near blood or any conductive fluid to create a high-intensity alternating magnetic field in it, inducing a sinusoidally varying voltage across the column of fluid. The induced voltage signal is acquired, conditioned and processed to determine the flow rate. Performance analysis shows that a sensitivity of more than 250 mV/lpm can be obtained, which is more than five times higher than that of conventional flow measurement systems. The choice of a rotating permanent magnet instead of an electromagnetic core generates an alternating magnetic field of smooth sinusoidal nature, which in turn reduces switching and interference noise. This greatly reduces the complex electronic circuitry required for processing the signal and enables the flow measuring device to be much cheaper, portable and lightweight. The signal remains steady even with changes in environmental conditions and has an accuracy of greater than 95%. This paper also describes the construction details of the prototype, the factors affecting sensitivity and a detailed performance analysis at various operating conditions.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
Three-dimensional simulation of vortex breakdown
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Salas, M. D.
1990-01-01
The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in the conservation form, cast in generalized coordinate system, are solved, numerically, to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
Variable-spot ion beam figuring
NASA Astrophysics Data System (ADS)
Wu, Lixiang; Qiu, Keqiang; Fu, Shaojun
2016-03-01
This paper introduces a new scheme of ion beam figuring (IBF), or rather variable-spot IBF, which is conducted at a constant scanning velocity with variable-spot ion beam collimated by a variable diaphragm. It aims at improving the reachability and adaptation of the figuring process within the limits of machine dynamics by varying the ion beam spot size instead of the scanning velocity. In contrast to the dwell time algorithm in the conventional IBF, the variable-spot IBF adopts a new algorithm, which consists of the scan path programming and the trajectory optimization using pattern search. In this algorithm, instead of the dwell time, a new concept, integral etching time, is proposed to interpret the process of variable-spot IBF. We conducted simulations to verify its feasibility and practicality. The simulation results indicate the variable-spot IBF is a promising alternative to the conventional approach.
Random walk, diffusion and mixing in simulations of scalar transport in fluid flows
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2008-12-01
Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
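The correspondence described above can be sketched directly: particles take Gaussian random-walk steps, representing diffusion in physical space, while their scalar values are mixed pairwise between near neighbours, representing scalar dissipation. The following minimal Python sketch uses an assumed Curl-style pairwise mixing rule purely for illustration.

    # Sketch: scalar transport by randomly walking particles whose scalar
    # values mix pairwise between near neighbours (a Curl-style rule, assumed
    # here purely for illustration). The walk represents diffusion in space;
    # the mixing step represents scalar dissipation.
    import numpy as np

    rng = np.random.default_rng(5)
    D, dt, n_steps, n = 0.5, 0.01, 200, 20_000
    x = rng.uniform(0.0, 1.0, n)                 # particle positions
    phi = (x < 0.5).astype(float)                # step-like initial scalar

    for _ in range(n_steps):
        x = (x + rng.normal(0.0, np.sqrt(2 * D * dt), n)) % 1.0  # random walk
        order = np.argsort(x)                    # pair neighbours in space
        pairs = order.reshape(-1, 2)
        mean = phi[pairs].mean(axis=1)
        phi[pairs[:, 0]] = mean                  # pairwise mixing step
        phi[pairs[:, 1]] = mean

    print("scalar variance decayed from 0.25 to", phi.var())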
Sieve estimation in a Markov illness-death process under dual censoring.
Boruvka, Audrey; Cook, Richard J
2016-04-01
Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Simulating superradiance from higher-order-intensity-correlation measurements: Single atoms
NASA Astrophysics Data System (ADS)
Wiegner, R.; Oppel, S.; Bhatti, D.; von Zanthier, J.; Agarwal, G. S.
2015-09-01
Superradiance typically requires preparation of atoms in highly entangled multiparticle states, the so-called Dicke states. In this paper we discuss an alternative route where we prepare such states from initially uncorrelated atoms by a measurement process. By measuring higher-order intensity-intensity correlations we demonstrate that we can simulate the emission characteristics of Dicke superradiance by starting with atoms in the fully excited state. We describe the essence of the scheme by first investigating two excited atoms. Here we demonstrate how via Hanbury Brown and Twiss type of measurements we can produce Dicke superradiance and subradiance displayed commonly with two atoms in the single excited symmetric and antisymmetric Dicke states, respectively. We thereafter generalize the scheme to arbitrary numbers of atoms and detectors, and explain in detail the mechanism which leads to this result. The approach shows that the Hanbury Brown and Twiss type of intensity interference and the phenomenon of Dicke superradiance can be regarded as two sides of the same coin. We also present a compact result for the characteristic functional which generates all order intensity-intensity correlations.
A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard
NASA Astrophysics Data System (ADS)
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
2005-07-01
The Discrete Wavelet Transform (DWT) is increasingly adopted in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture supporting both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process incoming samples simultaneously with minimal memory requirements per channel. The architecture was implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. It applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a suitable choice for real-time applications such as Digital Cinema.
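For orientation, the reversible 5/3 filter bank reduces to two integer lifting steps (predict and update) per decomposition level. A 1-D software sketch under stated assumptions (even-length input, whole-point symmetric extension); it illustrates the arithmetic only and is unrelated to the paper's VHDL design:

```python
import numpy as np

def dwt53(x):
    """One level of the reversible JPEG2000 5/3 lifting DWT (1-D).

    Assumes even-length input and whole-point symmetric extension.
    Integer arithmetic throughout, so the transform is lossless.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    nxt = np.append(even[1:], even[-1])        # even[i+1], mirrored at the edge
    d = odd - (even + nxt) // 2                # predict step -> high-pass
    prv = np.insert(d[:-1], 0, d[0])           # d[i-1], mirrored at the edge
    s = even + (prv + d + 2) // 4              # update step -> low-pass
    return s, d

def idwt53(s, d):
    """Inverse of dwt53: undo update, then undo predict, then interleave."""
    prv = np.insert(d[:-1], 0, d[0])
    even = s - (prv + d + 2) // 4
    nxt = np.append(even[1:], even[-1])
    odd = d + (even + nxt) // 2
    out = np.empty(len(s) * 2, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.default_rng(1).integers(0, 256, 16)
s, d = dwt53(x)
assert np.array_equal(idwt53(s, d), x)         # lossless round trip
print("low:", s, "\nhigh:", d)
```

Because both lifting steps are in-place additions of integer predictions, a hardware pipeline needs only a few line buffers per channel, which is what makes the two-channel architecture cheap in memory.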
Quantum Attack-Resistant Certificateless Multi-Receiver Signcryption Scheme
Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong
2013-01-01
Existing certificateless signcryption schemes were designed mainly within traditional public key cryptography, in which security relies on hard problems such as integer factorization and the discrete logarithm. However, these problems can be solved efficiently by quantum computing, so existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions for guaranteeing the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can therefore withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed to signcrypt a message for a given number of receivers. Security analysis shows that our scheme is a secure MPKC-based scheme: we prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption, in the random oracle model. The analysis also shows that our scheme provides non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, making it suitable for terminals with low computational capacity such as smart cards.
Resource Management Scheme Based on Ubiquitous Data Analysis
Lee, Heung Ki; Jung, Jaehee
2014-01-01
Resource management of the main memory and process handler is critical to enhancing the performance of a web server. Owing to the transaction delay experienced by incoming requests from web clients, web server systems pre-generate several web processes in anticipation of future requests. This decreases page generation time because enough processes are available to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pre-generated process mechanisms are required to handle clients' requests, yet it is difficult to predict how many requests a web server system will receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, so that the web process management scheme consumes the fewest possible web transaction resources. In experiments, real web trace data were used to demonstrate the improved performance of the proposed scheme.
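The underlying control idea can be sketched as a feedback loop over the request log: estimate the demand for the next interval from recent history and size the process pool to that estimate plus headroom. The sketch below uses a plain moving average as the predictor; the paper's scheme derives its prediction from mined web-log patterns, so every name and parameter here is hypothetical.

```python
from collections import deque

class AdaptiveProcessManager:
    """Sketch of a log-driven web-process pool (illustrative only)."""

    def __init__(self, min_procs=2, max_procs=64, window=10, headroom=1.2):
        self.min_procs, self.max_procs = min_procs, max_procs
        self.history = deque(maxlen=window)   # requests seen per interval
        self.headroom = headroom              # spare-capacity factor
        self.pool_size = min_procs

    def observe(self, requests_in_interval):
        self.history.append(requests_in_interval)

    def resize(self):
        if self.history:
            predicted = sum(self.history) / len(self.history)  # moving average
            target = int(predicted * self.headroom) + 1
            self.pool_size = max(self.min_procs,
                                 min(self.max_procs, target))
        return self.pool_size

mgr = AdaptiveProcessManager()
for load in [5, 8, 40, 60, 55, 10]:          # synthetic per-interval counts
    mgr.observe(load)
    print(load, "->", mgr.resize(), "processes")
```

The trade-off the abstract describes is visible in the two tunables: a larger pool ceiling wastes memory under light load, a smaller one delays requests under heavy load.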
Indirect measurement of three-photon correlation in nonclassical light sources
NASA Astrophysics Data System (ADS)
Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon
2016-06-01
We observe the three-photon correlation in nonclassical light sources by using an indirect measurement scheme based on the dead-time effect of photon-counting detectors. We first develop a general theory that enables us to extract the three-photon correlation from the two-photon correlation of an arbitrary light source measured with detectors with finite dead times. We then confirm the validity of our measurement scheme in experiments with a cavity-QED microlaser operating with a large intracavity mean photon number and exhibiting both sub- and super-Poissonian photon statistics. The experimental results are in good agreement with the theoretical expectation. Our measurement scheme provides an alternative approach for N-photon correlation measurement employing (N-1) detectors, and thus a reduced measurement time for a given signal-to-noise ratio compared with the usual scheme requiring N detectors.
Wang, Mingming; Sweetapple, Chris; Fu, Guangtao; Farmani, Raziyeh; Butler, David
2017-10-01
This paper presents a new framework for decision making in sustainable drainage system (SuDS) scheme design. It integrates resilience, hydraulic performance, pollution control, rainwater usage, energy analysis, greenhouse gas (GHG) emissions and costs, covering 12 indicators. The multi-criteria analysis methods of entropy weighting and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were selected to support SuDS scheme selection. The effectiveness of the framework is demonstrated on a SuDS case in China, with indicators including flood volume, flood duration, a hydraulic performance indicator, cost and resilience. Resilience proves an important design consideration and supports scheme selection in the case study. The proposed framework will help a decision maker choose an appropriate design scheme for implementation without subjectivity.
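Entropy weighting and TOPSIS are both standard, fully specified procedures, so the numerical core of such a framework fits in a few lines. A compact sketch with hypothetical scores (the paper's 12 indicators and case-study data are not reproduced):

```python
import numpy as np

def entropy_topsis(X, benefit):
    """Rank alternatives (rows of X) over criteria (columns).

    X: decision matrix (m alternatives x n criteria), positive values.
    benefit: boolean array, True where larger values are better.
    Returns closeness coefficients; larger = closer to the ideal.
    """
    X = np.asarray(X, float)
    # Entropy weights: criteria with more dispersion get more weight.
    P = X / X.sum(axis=0)
    m = X.shape[0]
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    w = (1 - E) / (1 - E).sum()

    # TOPSIS on the weighted, vector-normalised matrix.
    V = w * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# Three hypothetical SuDS schemes scored on four hypothetical indicators
# (resilience, flood volume, cost, GHG); only the first is benefit-type.
X = [[0.8, 120, 5.0, 300],
     [0.6,  90, 4.2, 260],
     [0.9, 150, 6.1, 340]]
print(entropy_topsis(X, np.array([True, False, False, False])))
```

Because the weights come from the data's own dispersion rather than from a panel, the ranking carries no analyst-assigned preferences, which is the "without subjectivity" point of the framework.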
Evaluation of a Multigrid Scheme for the Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.
2004-01-01
A fast multigrid solver for the steady, incompressible Navier-Stokes equations is presented. The multigrid solver is based upon a factorizable discrete scheme for the velocity-pressure form of the Navier-Stokes equations. This scheme correctly distinguishes between the advection-diffusion and elliptic parts of the operator, allowing efficient smoothers to be constructed. To evaluate the multigrid algorithm, solutions are computed for flow over a flat plate, a parabola, and a Karman-Trefftz airfoil. Both nonlifting and lifting airfoil flows are considered, for Reynolds numbers ranging from 200 to 800. Convergence and accuracy of the algorithm are discussed. Using Gauss-Seidel line relaxation in alternating directions, multigrid convergence behavior approaching that of O(N) methods is achieved. The computational efficiency of the numerical scheme is compared with that of Runge-Kutta and implicit upwind based multigrid methods.
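As a reminder of why multigrid reaches O(N)-like behaviour, a toy 1-D Poisson V-cycle with Gauss-Seidel smoothing is sketched below; the paper's factorizable velocity-pressure scheme and line relaxation are considerably more involved, so this is orientation only:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    """One V-cycle on a grid of 2^k + 1 points."""
    u = gauss_seidel(u, f, h, 3)                       # pre-smooth
    if len(u) <= 3:
        return gauss_seidel(u, f, h, 20)               # coarsest grid: relax out
    r = np.zeros_like(u)                               # residual of -u'' = f
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    e_c = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h)  # coarse A e = r
    e = np.zeros_like(u)                               # prolong: linear interp.
    e[::2] = e_c
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
    return gauss_seidel(u + e, f, h, 3)                # post-smooth

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                       # exact solution sin(pi*x)
u = np.zeros(n)
for cycle in range(6):
    u = v_cycle(u, f, h)
    print(cycle, "max error:", np.abs(u - np.sin(np.pi * x)).max())
```

The error drops by a roughly constant factor per cycle independent of the grid size, which is the hallmark of O(N) convergence the paper pursues for the much harder Navier-Stokes operator.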
Jaffé, Rodolfo; Prous, Xavier; Zampaulo, Robson; Giannini, Tereza C; Imperatriz-Fonseca, Vera L; Maurity, Clóvis; Oliveira, Guilherme; Brandi, Iuri V; Siqueira, José O
2016-01-01
Caves pose significant challenges for mining projects, since they harbor many endemic and threatened species, and must therefore be protected. Recent discussions between academia, environmental protection agencies, and industry partners, have highlighted problems with the current Brazilian legislation for the protection of caves. While the licensing process is long, complex and cumbersome, the criteria used to assign caves into conservation relevance categories are often subjective, with relevance being mainly determined by the presence of obligate cave dwellers (troglobites) and their presumed rarity. However, the rarity of these troglobitic species is questionable, as most remain unidentified to the species level and their habitats and distribution ranges are poorly known. Using data from 844 iron caves retrieved from different speleology reports for the Carajás region (South-Eastern Amazon, Brazil), one of the world's largest deposits of high-grade iron ore, we assess the influence of different cave characteristics on four biodiversity proxies (species richness, presence of troglobites, presence of rare troglobites, and presence of resident bat populations). We then examine how the current relevance classification scheme ranks caves with different biodiversity indicators. Large caves were found to be important reservoirs of biodiversity, so they should be prioritized in conservation programs. Our results also reveal spatial autocorrelation in all the biodiversity proxies assessed, indicating that iron caves should be treated as components of a cave network immersed in the karst landscape. Finally, we show that by prioritizing the conservation of rare troglobites, the current relevance classification scheme is undermining overall cave biodiversity and leaving ecologically important caves unprotected. We argue that conservation efforts should target subterranean habitats as a whole and propose an alternative relevance ranking scheme, which could help simplify the assessment process and channel more resources to the effective protection of overall cave biodiversity.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo
2012-02-01
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). Using one core of an Intel Core i7-2600 processor (8 MB cache, 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with individual timesteps (Makino and Aarseth, 1992) and achieved ~20 GFLOPS (giga floating-point operations per second) in double precision, two times higher than that of a previously developed code implemented with SSE instructions (Nitadori et al., 2006b), and five times higher than that of a code implemented without any explicit use of SIMD instructions on the same core. We parallelized the code using the so-called NINJA scheme (Nitadori et al., 2006a) and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 Sandy Bridge cores. This performance would be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
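The fourth-order Hermite scheme referenced here (Makino and Aarseth, 1992) is a predictor-corrector built on the jerk, the time derivative of the acceleration. A shared-timestep numpy sketch of one step follows; the paper's code adds individual timesteps and hand-tuned AVX kernels, so this shows only the mathematics:

```python
import numpy as np

def acc_jerk(x, v, m, eps2=1e-6):
    """Softened gravitational acceleration and jerk, direct O(N^2) sums."""
    dx = x[None, :, :] - x[:, None, :]            # r_ij = x_j - x_i
    dv = v[None, :, :] - v[:, None, :]
    r2 = (dx ** 2).sum(-1) + eps2
    np.fill_diagonal(r2, 1.0)                     # dummy value on diagonal
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                 # no self-interaction
    rv = (dx * dv).sum(-1)
    a = (m[None, :, None] * dx * inv_r3[:, :, None]).sum(1)
    j = (m[None, :, None] * (dv * inv_r3[:, :, None]
         - 3.0 * (rv / r2)[:, :, None] * dx * inv_r3[:, :, None])).sum(1)
    return a, j

def hermite_step(x, v, m, dt):
    """One shared-timestep 4th-order Hermite predictor-corrector step."""
    a0, j0 = acc_jerk(x, v, m)
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6      # predict
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = acc_jerk(xp, vp, m)                            # re-evaluate force
    v1 = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12    # correct
    x1 = x + (v + v1) * dt / 2 + (a0 - a1) * dt**2 / 12
    return x1, v1

# Two-body test: light particle on a roughly circular orbit (G = 1).
m = np.array([1.0, 1e-3])
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
for _ in range(1000):
    x, v = hermite_step(x, v, m, 1e-3)
print("separation:", np.linalg.norm(x[1] - x[0]))   # stays close to 1
```

The two force evaluations per step are exactly the kernels the paper vectorises with AVX, since they dominate the cost of a collisional simulation.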
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
Kariuki, C M; Brascamp, E W; Komen, H; Kahi, A K; van Arendonk, J A M
2017-03-01
In developing countries, minimal and erratic performance and pedigree recording impedes implementation of large-sized breeding programs. Small-sized nucleus programs offer an alternative but rely on their economic performance for their viability. We investigated the economic performance of 2 alternative small-sized dairy nucleus programs [i.e., progeny testing (PT) and genomic selection (GS)] over a 20-yr investment period. The nucleus comprised 453 male and 360 female animals distributed over 8 non-overlapping age classes. Each year 10 active sires and 100 elite dams were selected. Populations of commercial recorded cows (CRC) of sizes 12,592 and 25,184 were used to produce test daughters in PT or to create a reference population in GS, respectively. Economic performance was defined as gross margin, calculated as discounted revenues minus discounted costs following a single generation of selection. Revenues were calculated as cumulative discounted expressions (CDE, kg) × 0.32 (€/kg of milk) × 100,000 (the size of the commercial population). Genetic superiorities were deterministically simulated using a pseudo-BLUP index, and CDE were determined using gene flow. Costs were for one generation of selection. Results show that GS schemes had higher cumulative genetic gain in the commercial cow population and higher gross margins than PT schemes. Gross margins were between 3.2- and 5.2-fold higher for GS, depending on the size of the CRC population. The increase in gross margin was mostly due to a decreased generation interval and lower running costs in GS schemes. In PT schemes many bulls are culled before selection; we therefore also compared 2 schemes in which semen was stored instead of keeping live bulls. As expected, semen storage increased gross margins in PT schemes, but they remained lower than those of GS schemes. We conclude that implementation of small-sized GS breeding schemes can be economically viable for developing countries.
Environmental economics of lignin derived transport fuels.
Obydenkova, Svetlana V; Kouris, Panos D; Hensen, Emiel J M; Heeres, Hero J; Boot, Michael D
2017-11-01
This paper explores the environmental and economic aspects of fast pyrolytic conversion of lignin, obtained from 2G ethanol plants, to transport fuels for both the marine and automotive markets. Various scenarios are explored, pertaining to aggregation of lignin from several sites, alternative energy carriers to replace lignin, transport modalities, and allocation methodology. The results highlight two critical factors that ultimately determine the economic and/or environmental viability of the fuel. The first factor, the logistics scheme, exhibited the disadvantage of the centralized approach, owing to prohibitively expensive transportation of the low-energy-dense lignin. Life cycle analysis (LCA) revealed the second critical factor, related to the selection of the alternative energy carrier: natural gas (NG) chosen over additional biomass boosts well-to-wheel greenhouse gas emissions (WTW GHG) to a level incompatible with the reduction targets set by the U.S. renewable fuel standard (RFS). Conversely, the process economics revealed higher profits with the fossil energy carrier.
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques; concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound on the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field for estimating the SRT fields. A classification of regional normal and dysfunctional contraction patterns, compared against expert diagnoses, shows that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
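Once the registration has produced a motion field, the SRT itself is the symmetric part of the velocity gradient, S = (∇v + ∇vᵀ)/2. A small numpy sketch of that final step on a synthetic 2-D field (the registration stage is not reproduced):

```python
import numpy as np

def strain_rate_tensor(vx, vy, spacing=1.0):
    """SRT field from a 2-D velocity (or per-frame displacement) field:
    S = 0.5 * (grad v + grad v^T), estimated with central differences."""
    dvx_dy, dvx_dx = np.gradient(vx, spacing)   # gradients along rows, cols
    dvy_dy, dvy_dx = np.gradient(vy, spacing)
    S = np.empty(vx.shape + (2, 2))
    S[..., 0, 0] = dvx_dx
    S[..., 1, 1] = dvy_dy
    S[..., 0, 1] = S[..., 1, 0] = 0.5 * (dvx_dy + dvy_dx)
    return S

# Synthetic uniformly contracting field: v = (-0.1 x, -0.1 y) -> S = -0.1 I,
# a crude stand-in for myocardial contraction.
y, x = np.mgrid[0:32, 0:32].astype(float)
S = strain_rate_tensor(-0.1 * x, -0.1 * y)
print(S[16, 16])
```

Negative eigenvalues of S indicate local contraction, positive ones expansion, which is why eigen-decompositions of this tensor are the natural features for the normal-versus-dysfunctional classification mentioned above.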
Costall, A P
1984-01-01
Representational theories of perception postulate an isolated and autonomous "subject" set apart from its real environment, and then go on to invoke processes of mental representation, construction, or hypothesizing to explain how perception can nevertheless take place. Although James Gibson's most conspicuous contribution has been to challenge representational theory, his ultimate concern was the cognitivism which now prevails in psychology. He was convinced that the so-called cognitive revolution merely perpetuates, and even promotes, many of psychology's oldest mistakes. This review article considers Gibson's final statement of his "ecological" alternative to cognitivism (Gibson, 1979). It is intended not as a complete account of Gibson's alternative, however, but primarily as an appreciation of his critical contribution. Gibson's sustained attempt to counter representational theory served not only to reveal the variety of arguments used in support of this theory, but also to expose the questionable metaphysical assumptions upon which they rest. In concentrating upon Gibson's criticisms of representational theory, therefore, this paper aims to emphasize the point of his alternative scheme and to explain some of the important concerns shared by Gibson's ecological approach and operant psychology.
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulation. The RSOS model corresponds to the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme); the Family model corresponds to the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered: in the first kind, the computing capacities of the PEs do not differ much, whereas in the second kind the capacities are spread over a very wide range. The KPZ scheme on complex networks shows synchronizability and scalability regardless of the kind of PEs. The EW scheme never shows synchronizability for a random configuration of PEs of the first kind; however, by regularizing the arrangement of PEs of the first kind, the EW scheme can be made to show synchronizability. In contrast, the EW scheme never shows synchronizability for any configuration of PEs of the second kind.
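The virtual-time picture behind these growth models can be reproduced in a few lines: each PE advances its local virtual time by a random increment only when it does not lag behind the neighbours it must synchronise with, and the utilisation and the spread of the time surface measure scalability and synchronisability. A 1-D ring sketch with mildly heterogeneous PE speeds (first kind); this illustrates the basic conservative update rule, not the authors' RSOS/Family simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pe, steps = 256, 2000
speed = rng.uniform(0.5, 1.5, n_pe)      # first-kind heterogeneity: mild spread
tau = np.zeros(n_pe)                     # local virtual times (the "surface")

util = 0.0
for _ in range(steps):
    left, right = np.roll(tau, 1), np.roll(tau, -1)
    can_update = (tau <= left) & (tau <= right)   # conservative sync rule
    # Faster PEs draw smaller virtual-time increments on average.
    tau += np.where(can_update, rng.exponential(1.0 / speed), 0.0)
    util += can_update.mean()

print("mean utilisation      :", util / steps)   # scalability measure
print("virtual-time spread w :", tau.std())      # synchronisability measure
```

A scheme is scalable if the utilisation stays finite as n_pe grows, and synchronisable if the virtual-time spread saturates instead of growing without bound, which is exactly the distinction the paper draws between the KPZ and EW cases.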
Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; McLauchlan, Lifford
2010-08-01
In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false-positive detection in the original and enlarged retinal images. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
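Pixel duplication itself is a one-line array operation, which is the point of preferring it to interpolation; a minimal sketch of the enlargement step (any image array would do):

```python
import numpy as np

def duplicate_pixels(img, factor=2):
    """Enlarge by integer pixel duplication (nearest-neighbour replication).
    Unlike interpolation, no new intensity values are invented."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.arange(9).reshape(3, 3)
print(duplicate_pixels(img, 2))
# Each original pixel becomes a 2x2 block, so small structures occupy
# more pixels before the smoothing filters are applied.
```

The error analysis in the paper essentially asks whether the subsequent smoothing filters behave differently on these constant 2x2 blocks than on interpolated (and therefore already smoothed) neighbourhoods.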
Capture approximations beyond a statistical quantum mechanical method for atom-diatom reactions
NASA Astrophysics Data System (ADS)
Barrios, Lizandra; Rubayo-Soneira, Jesús; González-Lezana, Tomás
2016-03-01
Statistical techniques constitute useful approaches to investigate atom-diatom reactions mediated by insertion dynamics, which involve complex-forming mechanisms. Different capture schemes, based on energy considerations regarding the specific diatom rovibrational states, are suggested to evaluate the corresponding probabilities of formation of such collision species between reactants and products, in an attempt to test reliable alternatives to computationally demanding processes. These approximations are tested in combination with a statistical quantum mechanical method for the S + H2(v = 0, j = 1) → SH + H and Si + O2(v = 0, j = 1) → SiO + O reactions, where this dynamical mechanism plays a significant role, in order to probe their validity.
Fabrication of versatile cladding light strippers and fiber end-caps with CO2 laser radiation
NASA Astrophysics Data System (ADS)
Steinke, M.; Theeg, T.; Wysmolek, M.; Ottenhues, C.; Pulzer, T.; Neumann, J.; Kracht, D.
2018-02-01
We report on novel fabrication schemes of versatile cladding light strippers and end-caps via CO2 laser radiation. We integrated cladding light strippers in SMA-like connectors for reliable and stable fiber-coupling of high-power laser diodes. Moreover, the application of cladding light strippers in typical fiber geometries for high-power fiber lasers was evaluated. In addition, we also developed processes to fuse end-caps to fiber end faces via CO2 laser radiation and inscribe the fibers with cladding light strippers near the end-cap. Corresponding results indicate the great potential of such devices as a monolithic and low-cost alternative to SMA connectors.
NASA Astrophysics Data System (ADS)
Astashev, M. E.; Belosludtsev, K. N.; Kharakoz, D. P.
2014-05-01
One of the most accurate methods for measuring the compressibility of liquids is resonance measurement of the sound velocity in a fixed-length interferometer. This method combines high sensitivity, accuracy, and a small required volume of the test liquid. The measuring principle is to study the resonance properties of a composite resonator that contains a sample of the test liquid. Earlier, a phase-locked loop (PLL) scheme was used for this purpose. In this paper, we propose an alternative measurement scheme based on digital analysis of harmonic signals, describe its implementation using commercially available data acquisition modules, and give examples of test measurements with evaluations of the accuracy of the results.
An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]
NASA Technical Reports Server (NTRS)
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
Network coding multiuser scheme for indoor visible light communications
NASA Astrophysics Data System (ADS)
Zhang, Jiankun; Dang, Anhong
2017-12-01
Visible light communication (VLC) is a unique alternative for indoor data transfer and is developing beyond point-to-point links. However, in realizing high-capacity networks, VLC faces challenges including the constrained bandwidth of the optical access point and random occlusion. A network coding scheme for VLC (NC-VLC) is proposed, with increased throughput and system robustness. Based on the Lambertian illumination model, the theoretical decoding failure probability of the multiuser NC-VLC system is derived, and the impact of the system parameters on the performance is analyzed. Experiments successfully demonstrate the proposed scheme in the indoor multiuser scenario. These results indicate that the NC-VLC system performs well under link loss and random occlusion.
Constructive polarization modulation for coherent population trapping clock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Peter, E-mail: enxue.yun@obspm.fr; Danet, Jean-Marie; Holleville, David
2014-12-08
We propose a constructive polarization modulation scheme for atomic clocks based on coherent population trapping (CPT). In this scheme, the polarization of a bichromatic laser beam is modulated between two opposite circular polarizations to avoid trapping the atomic populations in the extreme Zeeman sublevels. We show that if an appropriate phase modulation between the two optical components of the bichromatic laser is applied synchronously, the two CPT dark states which are produced successively by the alternate polarizations add constructively. Measured CPT resonance contrasts up to 20% in one-pulse CPT and 12% in two-pulse Ramsey-CPT experiments are reported, demonstrating the potential of this scheme for applications to high performance atomic clocks.
Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus
NASA Astrophysics Data System (ADS)
Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta
2006-11-01
The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that the predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
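To illustrate the predictor-corrector idea in the simplest setting, the sketch below applies a Heun-type (trapezoidal-drift) corrector, consistent with the Itô interpretation for additive noise, to a toy Ornstein-Uhlenbeck SDE and checks the stationary variance. The paper's DMC schemes drive random walkers with a quantum drift in the same spirit, but this is not their algorithm:

```python
import numpy as np

def predictor_corrector_step(x, drift, sigma, dt, dw):
    """Heun-type predictor-corrector for dX = drift(X) dt + sigma dW
    with constant (additive) noise amplitude sigma."""
    x_pred = x + drift(x) * dt + sigma * dw                  # Euler predictor
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + sigma * dw

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 1.0, 0.05, 200_000
drift = lambda x: -theta * x                 # Ornstein-Uhlenbeck drift

x = np.zeros(n)
for _ in range(400):                         # equilibrate the ensemble
    dw = rng.normal(0.0, np.sqrt(dt), n)
    x = predictor_corrector_step(x, drift, sigma, dt, dw)

print("sampled stationary variance:", x.var())
print("exact sigma^2 / (2 theta)  :", sigma**2 / (2 * theta))
```

With this corrector the stationary variance is right to O(dt^2), whereas a plain Euler-Maruyama walker at the same dt carries an O(dt) bias; reducing exactly this kind of time-step bias is the point of the schemes compared in the paper.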
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although absolute conservation seems impossible for non-uniform flows, the present scheme reduces the error considerably.
Multigrid calculation of three-dimensional turbomachinery flows
NASA Technical Reports Server (NTRS)
Caughey, David A.
1989-01-01
Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover edges and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Explodator: A new skeleton mechanism for the halate driven chemical oscillators
NASA Astrophysics Data System (ADS)
Noszticzius, Z.; Farkas, H.; Schelly, Z. A.
1984-06-01
In the first part of this work, some shortcomings in the present theories of the Belousov-Zhabotinskii oscillating reaction are discussed. In the second part, a new oscillatory scheme, the limited Explodator, is proposed as an alternative skeleton mechanism. This model contains an always-unstable three-variable Lotka-Volterra core (the "Explodator") and a stabilizing limiting reaction. The new scheme exhibits Hopf bifurcation and limit cycle oscillations. Finally, some possibilities and problems of a generalization are mentioned.
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
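The inner PPM mapping named here is simple to state: each group of log2(M) coded bits selects which one of M time slots carries the pulse. A minimal sketch of that mapping (M = 16 chosen for illustration):

```python
import numpy as np

def bits_to_ppm(bits, M=16):
    """Map coded bits to PPM symbols: each log2(M)-bit group selects the
    single slot (out of M) that carries the pulse."""
    k = int(np.log2(M))
    groups = np.asarray(bits).reshape(-1, k)
    idx = groups @ (1 << np.arange(k - 1, -1, -1))   # bit group -> slot index
    symbols = np.zeros((len(idx), M), dtype=int)
    symbols[np.arange(len(idx)), idx] = 1            # one pulse per symbol
    return symbols

coded = [1, 0, 1, 1,  0, 0, 1, 0]                    # two 4-bit groups
print(bits_to_ppm(coded, M=16))                      # pulses in slots 11 and 2
```

Because only one of M slots is lit per symbol, the modulation suits photon-starved links; the LDPC outer code then has to undo the Poisson channel's erasures and false counts, which is where the two schemes compete.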
Sharing Resources In Mobile/Satellite Communications
NASA Technical Reports Server (NTRS)
Yan, Tsun-Yee; Sue, Miles K.
1992-01-01
Report presents preliminary theoretical analysis of several alternative schemes for allocation of satellite resource among terrestrial subscribers of landmobile/satellite communication system. Demand-access and random-access approaches under code-division and frequency-division concepts compared.
Design of an extensive information representation scheme for clinical narratives.
Deléger, Louise; Campillos, Leonardo; Ligozat, Anne-Laure; Névéol, Aurélie
2017-09-11
Knowledge representation frameworks are essential to the understanding of complex biomedical processes and to the analysis of the biomedical texts that describe them. Combined with natural language processing (NLP), they have the potential to contribute to retrospective studies by unlocking important phenotyping information contained in the narrative content of electronic health records (EHRs). This work aims to develop an extensive information representation scheme for clinical information contained in EHR narratives, and to support secondary use of EHR narrative data to answer clinical questions. We review recent work that proposed information representation schemes and applied them to the analysis of clinical narratives. We then propose a unifying scheme that supports the extraction of information to address a large variety of clinical questions. We devised a new information representation scheme for clinical narratives that comprises 13 entities, 11 attributes and 37 relations. The associated annotation guidelines, which can be used to apply the scheme consistently to clinical narratives, are available at https://cabernet.limsi.fr/annotation_guide_for_the_merlot_french_clinical_corpus-Sept2016.pdf . The information scheme includes many elements of the major schemes described in the clinical natural language processing literature, as well as a uniquely detailed set of relations.
Two-stage atlas subset selection in multi-atlas based image segmentation.
Zhao, Tingting; Ruan, Dan
2015-06-01
Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges of heterogeneous atlas quality and computational burden. This work aims to develop a novel two-stage method tailored to the needs of large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration of the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with an alternative computation reduction method, the scheme improves the mean and median Dice similarity coefficient from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy at significantly reduced computational cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
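The two-stage selection reads as a funnel; a schematic sketch with placeholder scoring functions (the registration metrics and the inference model for choosing the augmented subset size are the paper's, not reproduced here):

```python
import random

def two_stage_selection(atlases, target, cheap_score, full_score,
                        augmented_size=20, fusion_size=5):
    """Schematic two-stage atlas subset selection.

    cheap_score: preliminary relevance from a low-cost registration.
    full_score:  refined relevance from full-fledged registration.
    augmented_size must be large enough (per the paper's inference
    model) that the truly relevant atlases survive stage 1.
    """
    # Stage 1: rank the whole database with the cheap metric.
    stage1 = sorted(atlases, key=lambda a: cheap_score(a, target),
                    reverse=True)[:augmented_size]
    # Stage 2: full-fledged registration only on the augmented subset.
    stage2 = sorted(stage1, key=lambda a: full_score(a, target),
                    reverse=True)[:fusion_size]
    return stage2                       # atlases handed to label fusion

# Toy demo: atlases and target are numbers, relevance = -distance,
# and the cheap metric is a noisy version of the refined one.
random.seed(0)
atlases, target = list(range(100)), 42
cheap = lambda a, t: -abs(a - t) + random.uniform(-5.0, 5.0)
full = lambda a, t: -abs(a - t)
print(two_stage_selection(atlases, target, cheap, full))
```

The cost saving is immediate: full registration runs on augmented_size atlases instead of the whole database, and the noisier the cheap metric, the larger augmented_size must be, which is exactly the trade-off the inference model quantifies.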
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
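The flavour of the scheme can be conveyed with a scalar stand-in for the coupled calculation: a noisy fixed-point map plays the Monte Carlo solver, the history budget grows per iteration, and the relaxation factor decays. Illustrative only, with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
solver = lambda s: 0.5 * s + 1.0              # toy coupled map, fixed point s* = 2

s, n0 = 0.0, 1000
for i in range(1, 21):
    n_histories = n0 * i                      # growing history budget
    noise = rng.normal(0.0, 1.0 / np.sqrt(n_histories))  # MC error ~ N^-1/2
    alpha = 1.0 / i                           # decreasing relaxation factor
    s += alpha * (solver(s) + noise - s)      # relaxed (stochastic) update
    print(f"iter {i:2d}: N = {n_histories:6d}, s = {s:.5f}")
print("fixed point: 2.0")
```

Matching a 1/i relaxation factor with a linearly growing history count keeps the statistical error injected per iteration in balance with the damping, which is the intuition behind the near-optimality claim.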
Umar, Nasir; Mohammed, Shafiu
2011-09-05
The need for health care reform and alternative financing mechanisms in many low- and middle-income countries has been advocated. This led to the introduction of the national health insurance scheme (NHIS) in Nigeria, initially with the enrollment of formal-sector employees. A qualitative study was conducted to assess enrollees' perceptions of the quality of health care before and after enrollment. Initial results revealed that respondents (heads of households) generally viewed the NHIS favorably, but consistently expressed dissatisfaction with the terms of coverage; specifically, because NHIS enrollment covers only the primary insured person, their spouse and up to four biological children (a child being defined as <18 years of age), in a setting where extended families are common. Dissatisfaction of enrollees could affect their willingness to participate in the insurance scheme, which may in turn affect the success and future extension of the scheme.
All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh, leading to a significant reduction in computation time.
Fast viscosity solutions for shape from shading under a more realistic imaging model
NASA Astrophysics Data System (ADS)
Wang, Guohui; Han, Jiuqiang; Jia, Honghai; Zhang, Xinman
2009-11-01
Shape from shading (SFS) is a classical and important problem in computer vision. The goal of SFS is to reconstruct the 3-D shape of an object from its 2-D intensity image. To this end, an image irradiance equation describing the relation between the shape of a surface and its corresponding brightness variations is used and derived as an explicit partial differential equation (PDE). Using the nonlinear programming principle, we propose a detailed solution to Prados and Faugeras's implicit scheme for approximating the viscosity solution of the resulting PDE. Furthermore, by combining implicit and semi-implicit schemes, a new approximation scheme is presented. To accelerate convergence, we apply the Gauss-Seidel idea and an alternating sweeping strategy to the approximation schemes. Experiments on both synthetic and real images demonstrate that the proposed methods are fast and accurate.
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
An Australian government dental scheme: Doctor-dentist-patient tensions in the triangle.
Weerakoon, Arosha; Fitzgerald, Lisa; Porter, Suzette
2014-11-30
Autonomy of participants is challenged when legislation to provide a public health service is weakly designed and implemented. Australia's Chronic Disease Dental Scheme was established to provide a government subsidy for private dental treatment for people suffering a chronic illness impacting their oral health, or vice versa. Patients were allocated AUD$4250 towards comprehensive treatment over 2 years, with eligibility determined by their general medical doctor. A qualitative research study was conducted to explore experiences from the perspectives of the patient, the medical practitioner and the dental practitioner. One of the research outcomes identified a frequently reported level of discomfort in the patient/doctor/dentist triangle. Doctors and dentists reported feeling forced by patients into positions that compromised their autonomy in obeying the intent (if not the law) of the scheme. Additionally, dentists felt under pressure from doctors and patients to provide subsidized treatment to those eligible. In turn, patients reported difficulties in gaining access to the scheme and, in some cases, fully or partially unmet oral health needs. REASON FOR CONFLICT: Poor inter-professional communication and lack of understanding about profession-unique, patient-driven pressures ultimately contributed to dissonance. Ill-defined eligibility guidelines made the doctor's gate-keeping role challenging. OUTCOME OF CONFLICT: Inefficient gate-keeping led to an exponential increase in referrals, resulting in unprecedented cost blow-outs. Ensuing government-led audits caused political tensions and contributed to the media-driven vilification of dentists. In December 2013, government financing of dental treatment through the Chronic Disease Dental Scheme was discontinued, leaving many Australians without a viable alternative. There is a need for qualitative research methods to help identify social issues that affect the public health policy process. To succeed, new health policies should respect, consider and attempt to understand the autonomy of key participants, prior to and throughout implementation.
Ecosystem services as a common language for coastal ecosystem-based management.
Granek, Elise F; Polasky, Stephen; Kappel, Carrie V; Reed, Denise J; Stoms, David M; Koch, Evamaria W; Kennedy, Chris J; Cramer, Lori A; Hacker, Sally D; Barbier, Edward B; Aswani, Shankar; Ruckelshaus, Mary; Perillo, Gerardo M E; Silliman, Brian R; Muthiga, Nyawira; Bael, David; Wolanski, Eric
2010-02-01
Ecosystem-based management is logistically and politically challenging because ecosystems are inherently complex and management decisions affect a multitude of groups. Coastal ecosystems, which lie at the interface between marine and terrestrial ecosystems and provide an array of ecosystem services to different groups, aptly illustrate these challenges. Successful ecosystem-based management of coastal ecosystems requires incorporating scientific information and the knowledge and views of interested parties into the decision-making process. Estimating the provision of ecosystem services under alternative management schemes offers a systematic way to incorporate biogeophysical and socioeconomic information and the views of individuals and groups in the policy and management process. Employing ecosystem services as a common language to improve the process of ecosystem-based management presents both benefits and difficulties. Benefits include a transparent method for assessing trade-offs associated with management alternatives, a common set of facts and common currency on which to base negotiations, and improved communication among groups with competing interests or differing worldviews. Yet challenges to this approach remain, including predicting how human interventions will affect ecosystems, how such changes will affect the provision of ecosystem services, and how changes in service provision will affect the welfare of different groups in society. In a case study from Puget Sound, Washington, we illustrate the potential of applying ecosystem services as a common language for ecosystem-based management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nick Degenstein; Minish Shah; Doughlas Louie
2012-05-01
The goal of this project is to develop a near-zero emissions flue gas purification technology for existing PC (pulverized coal) power plants that are retrofitted with oxy-combustion technology. The objective of Task 2 of this project was to evaluate an alternative method of SOx, NOx and Hg removal from flue gas produced by burning high sulfur coal in oxy-combustion power plants. The goal of the program was not only to investigate a new method of flue gas purification but also to produce useful acid byproduct streams as an alternative to using a traditional FGD and SCR for flue gas processing. During the project two main constraints were identified that limit the ability of the process to achieve project goals: 1) due to boiler island corrosion issues, >60% of the sulfur must be removed in the boiler island with the use of an FGD; 2) a suitable method could not be found to remove NOx from the concentrated sulfuric acid product, which limits the saleability of the acid as well as the NOx removal efficiency of the process. Given the complexity and safety issues inherent in the cycle, it is concluded that the acid product would not be directly saleable and, in this case, other flue gas purification schemes are better suited for SOx/NOx/Hg control when burning high sulfur coal, e.g. this project's Task 3 process or a traditional FGD and SCR.
Wang, Qian; Hisatomi, Takashi; Suzuki, Yohichi; Pan, Zhenhua; Seo, Jeongsuk; Katayama, Masao; Minegishi, Tsutomu; Nishiyama, Hiroshi; Takata, Tsuyoshi; Seki, Kazuhiko; Kudo, Akihiko; Yamada, Taro; Domen, Kazunari
2017-02-01
Development of sunlight-driven water splitting systems with high efficiency, scalability, and cost-competitiveness is a central issue for mass production of solar hydrogen as a renewable and storable energy carrier. Photocatalyst sheets comprising a particulate hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) embedded in a conductive thin film can realize efficient and scalable solar hydrogen production using Z-scheme water splitting. However, the use of expensive precious metal thin films that also promote reverse reactions is a major obstacle to developing a cost-effective process at ambient pressure. In this study, we present a standalone particulate photocatalyst sheet based on an earth-abundant, relatively inert, and conductive carbon film for efficient Z-scheme water splitting at ambient pressure. A SrTiO3:La,Rh/C/BiVO4:Mo sheet is shown to achieve unassisted pure-water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency (STH) of 1.2% at 331 K and 10 kPa, while retaining 80% of this efficiency at 91 kPa. The STH value of 1.0% is the highest reported for Z-scheme pure-water splitting operating at ambient pressure. The working mechanism of the photocatalyst sheet is discussed on the basis of band diagram simulation. In addition, the photocatalyst sheet split pure water more efficiently than conventional powder suspension systems and photoelectrochemical parallel cells because H+ and OH- concentration overpotentials and the IR drop between the HEP and OEP were effectively suppressed. The proposed carbon-based photocatalyst sheet, which can be used at ambient pressure, is an important alternative to (photo)electrochemical systems for practical solar hydrogen production.
Corrosion study on high power feeding of telecommunication copper cable in 5 wt.% CaSO4.2H2O solution
NASA Astrophysics Data System (ADS)
Shamsudin, Shaiful Rizam; Hashim, Nabihah; Ibrahim, Mohd Saiful Bahri; Rahman, Muhammad Sayuzi Abdul; Idrus, Muhammad Amin; Hassan, Mohd Rezadzudin; Abdullah, Wan Razli Wan
2016-07-01
This study was carried out to determine the best power feeding scheme over a copper telephone line. It was expected that higher power feeding could increase data transfer rates and meet customer expectations. To assess the feasibility of higher remote power feeding, the potential for corrosion of Cu cables was studied. The natural corrosion behaviour of copper cable in the 0.5% CaSO4.2H2O solution was characterized in terms of the open circuit potential over 30 days. The corrosion behaviour under higher power feeding was studied by immersion and planned interval tests to determine the corrosion rate as well as the effects of voltage magnitude and current scheme, i.e. positive direct current (DC+) and alternating current (AC), at a current density of about 0.40 ± 0.01 mA/cm2. In the immersion test, for both the DC+ and AC schemes, increasing the feeding voltage increased the corrosion rate of the Cu samples from 60 to 100 volts. The rate then decreased at about 100-120 volts, which may be due to passive and transpassive mechanisms, and continued to decrease slowly from 120 to 200 volts. Visually, the positively charged Cu cable appeared susceptible to severe corrosion, while the AC scheme exhibited only a slight corrosion reaction on the surface. However, the planned interval test and XRD results showed that corrosion of the copper cable in the studied solution is a relatively slow process, with a partially protective scale of copper oxide forming on the surface, so the cable can be considered not significantly corroded.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanson, J.L.
1993-09-01
Disposal of high-level tank wastes at the Hanford Site is currently envisioned to divide the waste between two principal waste forms: glass for the high-level waste (HLW) and grout for the low-level waste (LLW). The draft flow diagram shown in Figure 1.1 was developed as part of the current planning process for the Tank Waste Remediation System (TWRS), which is evaluating options for tank cleanup. The TWRS has been established by the US Department of Energy (DOE) to safely manage the Hanford tank wastes. It includes tank safety and waste disposal issues, as well as the waste pretreatment and waste minimization issues that are involved in the "clean option" discussed in this report. This report describes the results of a study led by Pacific Northwest Laboratory to determine if a more aggressive separations scheme could be devised which could mitigate concerns over the quantity of the HLW and the toxicity of the LLW produced by the reference system. This aggressive scheme, which would meet NRC Class A restrictions (10 CFR 61), would fit within the overall concept depicted in Figure 1.1; it would perform additional and/or modified operations in the areas identified as interim storage, pretreatment, and LLW concentration. Additional benefits of this scheme might result from using HLW and LLW disposal forms other than glass and grout, but such departures from the reference case are not included at this time. The evaluation of this aggressive separations scheme addressed institutional issues such as: radioactivity remaining in the Hanford Site LLW grout, volume of HLW glass that must be shipped offsite, and disposition of appropriate waste constituents to nonwaste forms.
NASA Astrophysics Data System (ADS)
Prokudin, Alexei; Sun, Peng; Yuan, Feng
2015-11-01
Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbative calculable function depending on the hard momentum scale. We further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are consistent with each other and with that of the standard CSS formalism.
Modeling, simulation and control of pulsed DE-GMA welding process for joining of aluminum to steel
NASA Astrophysics Data System (ADS)
Zhang, Gang; Shi, Yu; Li, Jie; Huang, Jiankang; Fan, Ding
2014-09-01
Joining of aluminum to steel has attracted significant attention from the welding research community and the automotive and rail transportation industries. Many welding methods have been developed and applied; however, they cannot precisely control the heat input to the work-piece, they suffer from high cost and low efficiency, they require complex welding devices, and the intermetallic compound layer generated at the weld bead interface is relatively thick. A novel pulsed double-electrode gas metal arc welding (pulsed DE-GMAW) method is developed. To achieve a stable welding process for joining aluminum to steel, a mathematical model of the coupled arc is established, and a new control scheme that uses the average feedback arc voltage of the main loop to adjust the wire feed speed, and thereby control the coupled arc length, is proposed and developed. Then, impulse control simulations of coupled arc length, wire feed speed and wire extension are conducted to validate the mathematical model and predict the stability of the welding process under changes in the contact tip to work-piece distance (CTWD). To prove the feasibility of the proposed PSO-based PID control scheme, a rapid prototyping experimental system is set up and bead-on-plate control experiments are conducted to join aluminum to steel. The impulse control simulation shows that the established model can accurately represent the variation of coupled arc length, wire feed speed and average main arc voltage when the welding process is disturbed, and that the developed controller responds and adjusts quickly, settling in about 0.1 s. The captured electrical signals show that the main arc voltage gradually converges to the set arc voltage through adjustment of the wire feed speed within 0.8 s. The typical current waveform obtained demonstrates that the main current can be reduced by controlling the bypass current while maintaining a relatively large total current. The control experiments further confirm the accuracy of the proposed model and the feasibility of the new control scheme. Smooth, well-formed weld beads are also obtained by this method. Pulsed DE-GMAW can thus be considered an alternative method for low-cost, high-efficiency joining of aluminum to steel.
Ecosystem classifications based on summer and winter conditions.
Andrew, Margaret E; Nelson, Trisalyn A; Wulder, Michael A; Hobart, George W; Coops, Nicholas C; Farmer, Carson J Q
2013-04-01
Ecosystem classifications map an area into relatively homogeneous units for environmental research, monitoring, and management. However, their effectiveness is rarely tested. Here, three classifications are (1) defined and characterized for Canada along summertime productivity (moderate-resolution imaging spectroradiometer fraction of absorbed photosynthetically active radiation) and wintertime snow conditions (special sensor microwave/imager snow water equivalent), independently and in combination, and (2) comparatively evaluated to determine the ability of each classification to represent the spatial and environmental patterns of alternative schemes, including the Canadian ecozone framework. All classifications depicted similar patterns across Canada, but detailed class distributions differed. Class spatial characteristics varied with environmental conditions within classifications, but were comparable between classifications. There was moderate correspondence between classifications. The strongest association was between productivity classes and ecozones. The classification along both productivity and snow balanced these two sets of variables, yielding intermediate levels of association in all pairwise comparisons. Despite relatively low spatial agreement between classifications, they successfully captured patterns of the environmental conditions underlying alternate schemes (e.g., snow classes explained variation in productivity and vice versa). The performance of ecosystem classifications and the relevance of their input variables depend on the environmental patterns and processes used for applications and evaluation. Productivity or snow regimes, as constructed here, may be desirable when summarizing patterns controlled by summer- or wintertime conditions, respectively, or of climate change responses. General purpose ecosystem classifications should include both sets of drivers. Classifications should be carefully, quantitatively, and comparatively evaluated relative to a particular application prior to their implementation as monitoring and assessment frameworks.
Cryogenics free production of hyperpolarized 129Xe and 83Kr for biomedical MRI applications
NASA Astrophysics Data System (ADS)
Hughes-Riley, Theodore; Six, Joseph S.; Lilburn, David M. L.; Stupic, Karl F.; Dorkes, Alan C.; Shaw, Dominick E.; Pavlovskaya, Galina E.; Meersmann, Thomas
2013-12-01
As an alternative to cryogenic gas handling, hyperpolarized (hp) gas mixtures were extracted directly from the spin exchange optical pumping (SEOP) process through expansion followed by compression to ambient pressure for biomedical MRI applications. The omission of cryogenic gas separation generally requires the usage of high xenon or krypton concentrations at low SEOP gas pressures to generate hp 129Xe or hp 83Kr with sufficient MR signal intensity for imaging applications. Two different extraction schemes for the hp gases were explored with focus on the preservation of the nuclear spin polarization. It was found that an extraction scheme based on an inflatable, pressure controlled balloon is sufficient for hp 129Xe handling, while 83Kr can efficiently be extracted through a single cycle piston pump. The extraction methods were tested for ex vivo MRI applications with excised rat lungs. Precise mixing of the hp gases with oxygen, which may be of interest for potential in vivo applications, was accomplished during the extraction process using a piston pump. The 83Kr bulk gas phase T1 relaxation in the mixtures containing more than approximately 1% O2 was found to be slower than that of 129Xe in corresponding mixtures. The experimental setup also facilitated 129Xe T1 relaxation measurements as a function of O2 concentration within excised lungs.
Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhiyong; Smith, Pieter E. S.
Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.
Automated Processing of Two-Dimensional Correlation Spectra
Sengstschmid; Sterk; Freeman
1998-04-01
An automated scheme is described which locates the centers of cross peaks in two-dimensional correlation spectra, even under conditions of severe overlap. Double-quantum-filtered correlation (DQ-COSY) spectra have been investigated, but the method is also applicable to TOCSY and NOESY spectra. The search criterion is the intrinsic symmetry (or antisymmetry) of cross-peak multiplets. An initial global search provides the preliminary information to build up a two-dimensional "chemical shift grid." All genuine cross peaks must be centered at intersections of this grid, a fact that reduces the extent of the subsequent search program enormously. The program recognizes cross peaks by examining the symmetry of signals in a test zone centered at a grid intersection. This "symmetry filter" employs a "lowest value algorithm" to discriminate against overlapping responses from adjacent multiplets. A progressive multiplet subtraction scheme provides further suppression of overlap effects. The processed two-dimensional correlation spectrum represents cross peaks as points at the chemical shift coordinates, with some indication of their relative intensities. Alternatively, the information is presented in the form of a correlation table. The authenticity of a given cross peak is judged by a set of "confidence criteria" expressed as numerical parameters. Experimental results are presented for the 400-MHz double-quantum-filtered COSY spectrum of 4-androsten-3,17-dione, a case where there is severe overlap. Copyright 1998 Academic Press.
Quantum metrology and estimation of Unruh effect
Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng
2014-01-01
We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe state preparation process. We show that the probe state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information and correspondingly set the ultimate limit of precision in the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, there is a range of detector energy gaps that provides better precision. Thus we may adjust those parameters to attain a higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772
Biosensors with Built-In Biomolecular Logic Gates for Practical Applications
Lai, Yu-Hsuan; Sun, Sin-Cih; Chuang, Min-Chieh
2014-01-01
Molecular logic gates, designs constructed with biological and chemical molecules, have emerged as an alternative computing approach to silicon-based logic operations. These molecular computers are capable of receiving and integrating multiple stimuli of biochemical significance to generate a definitive output, opening a new research avenue to advanced diagnostics and therapeutics which demand handling of complex factors and precise control. In molecularly gated devices, Boolean logic computations can be activated by specific inputs and accurately processed via bio-recognition, bio-catalysis, and selective chemical reactions. In this review, we survey recent advances of the molecular logic approaches to practical applications of biosensors, including designs constructed with proteins, enzymes, nucleic acids, nanomaterials, and organic compounds, as well as the research avenues for future development of digitally operating “sense and act” schemes that logically process biochemical signals through networked circuits to implement intelligent control systems. PMID:25587423
Fluorescence correlation spectroscopy: novel variations of an established technique.
Haustein, Elke; Schwille, Petra
2007-01-01
Fluorescence correlation spectroscopy (FCS) is one of the major biophysical techniques used for unraveling molecular interactions in vitro and in vivo. It allows minimally invasive study of dynamic processes in biological specimens with extremely high temporal and spatial resolution. By recording and correlating the fluorescence fluctuations of single labeled molecules through the exciting laser beam, FCS gives information on molecular mobility and photophysical and photochemical reactions. By using dual-color fluorescence cross-correlation, highly specific binding studies can be performed. These have been extended to four reaction partners accessible by multicolor applications. Alternative detection schemes shift accessible time frames to slower processes (e.g., scanning FCS) or higher concentrations (e.g., TIR-FCS). Despite its long tradition, FCS is by no means dated. Rather, it has proven to be a highly versatile technique that can easily be adapted to solve specific biological questions, and it continues to find exciting applications in biology and medicine.
Life cycle assessment of mobile phone housing.
Yang, Jian-xin; Wang, Ru-song; Fu, Hao; Liu, Jing-ru
2004-01-01
The life cycle assessment of mobile phone housings at Motorola (China) Electronics Ltd. was carried out, in which material flows and environmental emissions based on a basic production scheme were analyzed and assessed. The manufacturing stage includes such primary processes as polycarbonate molding and surface painting, and different surface finishing technologies such as normal painting, electroplating, IMD and VDM were assessed. The results showed that housing decoration plays a significant role within the housing life cycle. The most significant environmental impact from housing production is the photochemical ozone formation potential. Environmental impacts of the different decoration techniques varied widely; for example, the electroplating technique is more environmentally friendly than VDM, which consumes much more energy and raw material. In addition, the results of two alternative dematerialization scenarios showed that material flow analysis and assessment is very important and valuable in selecting an environmentally friendly process.
Regulatory assembly of the vacuolar proton pump VoV1-ATPase in yeast cells by FLIM-FRET
NASA Astrophysics Data System (ADS)
Ernst, Stefan; Batisse, Claire; Zarrabi, Nawid; Böttcher, Bettina; Börsch, Michael
2010-02-01
We investigate the reversible disassembly of VoV1-ATPase in live yeast cells by time-resolved confocal FRET imaging. VoV1-ATPase in the vacuolar membrane pumps protons from the cytosol into the vacuole. VoV1-ATPase is a rotary biological nanomotor driven by ATP hydrolysis. The resulting proton gradient is used for secondary transport processes as well as for pH and Ca2+ homoeostasis in the cell. The activity of the VoV1-ATPase is regulated through assembly / disassembly processes. During starvation the two parts of VoV1-ATPase start to disassemble. This process is reversed after addition of glucose. The exact mechanisms are unknown. To follow the disassembly / reassembly in vivo we tagged the two subunits C and E with different fluorescent proteins. Cellular distributions of C and E were monitored using a duty cycle-optimized alternating laser excitation scheme (DCO-ALEX) for time-resolved confocal FRET-FLIM measurements.
A Low-Cost Tracking System for Running Race Applications Based on Bluetooth Low Energy Technology.
Perez-Diaz-de-Cerio, David; Hernández-Solana, Ángela; Valdovinos, Antonio; Valenzuela, Jose Luis
2018-03-20
Timing points used in running races and other competition events are generally based on radio-frequency identification (RFID) technology. Athletes' times are calculated via passive RFID tags and reader kits. Specifically, the reader infrastructure needed is complex and requires the deployment of a mat or ramps which hide the receiver antennae under them. Moreover, with the employed tags, it is not possible to transmit additional and dynamic information such as pulse or oximetry monitoring, alarms, etc. In this paper we present a system based on two low-complexity schemes allowed in Bluetooth Low Energy (BLE): the non-connectable undirected advertisement process and a modified version of the scannable undirected advertisement process using the new capabilities present in Bluetooth 5. After fully describing the system architecture, which allows full real-time position monitoring of the runners using mobile phones on the organizer side and BLE sensors on the participants' side, we derive the mobility patterns of runners and capacity requirements, which are decisive for evaluating the performance of the proposed system. They have been obtained from the analysis of the real data measured in the last Barcelona Marathon. By means of simulations, we demonstrate that, even under disadvantageous conditions (50% error ratio), both schemes perform reliably and are able to detect 100% of the participants in all cases. The cell coverage of the system needs to be adjusted when the non-connectable process is considered. Nevertheless, through simulation and experiment, we show that the proposed scheme based on the new events available in Bluetooth 5 is clearly the best implementation alternative in all cases, regardless of coverage area and runner speed. The proposal widely exceeds the detection requirements of the real scenario, surpassing the measured peaks of 20 sensors per second entering the coverage area, moving at speeds that range from 1.5 m/s to 6.25 m/s. The designed real test-bed shows that the scheme is able to detect 72 sensors below 600 ms, comfortably fulfilling the requirements determined for the intended application. The main disadvantage of this system would be that the sensors are active, but we have proved that their consumption can be so low (9.5 µA) that, with a typical button cell, the sensor battery life would be over 10,000 h of use.
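The battery-life claim is easy to sanity-check. The snippet below assumes a nominal 220 mAh CR2032 button-cell capacity (an assumption; only the 9.5 µA average draw comes from the abstract):

```python
# Back-of-the-envelope check of the battery-life claim above.
capacity_mah = 220.0          # assumed capacity of a typical CR2032 cell
avg_current_ma = 9.5e-3       # 9.5 uA average draw, expressed in mA

battery_life_h = capacity_mah / avg_current_ma
print(f"Estimated battery life: {battery_life_h:,.0f} h")  # ~23,000 h, > 10,000 h
```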
SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME
Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
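The multivariate transformation theorem invoked here is the standard change-of-variables rule for densities; for reference (a textbook identity, not notation taken from this paper), for an invertible map y = g(x):

```latex
% Change of variables for a probability density under an invertible map y = g(x):
p_Y(\mathbf{y}) \;=\; p_X\!\bigl(g^{-1}(\mathbf{y})\bigr)\,
\left|\det\frac{\partial\, g^{-1}(\mathbf{y})}{\partial\,\mathbf{y}}\right|
```

A uniform admissible-region density therefore generally becomes non-uniform after the mapping, exactly as the abstract notes, unless the prior is constructed to be invariant under reparameterization.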
A conjugate gradient method for solving the non-LTE line radiation transfer problem
NASA Astrophysics Data System (ADS)
Paletou, F.; Anterrieu, E.
2009-12-01
This study concerns the fast and accurate solution of the line radiation transfer problem, under non-LTE conditions. We propose and evaluate an alternative iterative scheme to the classical ALI-Jacobi method, and to the more recently proposed Gauss-Seidel and successive over-relaxation (GS/SOR) schemes. Our study is indeed based on applying a preconditioned bi-conjugate gradient method (BiCG-P). Standard tests, in 1D plane parallel geometry and in the frame of the two-level atom model with monochromatic scattering are discussed. Rates of convergence between the previously mentioned iterative schemes are compared, as are their respective timing properties. The smoothing capability of the BiCG-P method is also demonstrated.
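As a concrete illustration of the linear-algebra machinery being compared, here is a minimal preconditioned BiCG solve in Python using SciPy; the tridiagonal test matrix and ILU preconditioner are stand-ins, not the paper's actual transfer operators or discretization.

```python
# Minimal sketch of a preconditioned BiCG solve for a sparse,
# nonsymmetric system, loosely analogous to the BiCG-P idea above.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicg, spilu, LinearOperator

n = 200
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU factorization used as the preconditioner.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = bicg(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```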
Absolute frequency of cesium 6S-8S 822 nm two-photon transition by a high-resolution scheme.
Wu, Chien-Ming; Liu, Tze-Wei; Wu, Ming-Hsuan; Lee, Ray-Kuang; Cheng, Wang-Yau
2013-08-15
We present an alternative scheme for determining the frequencies of cesium (Cs) atom 6S-8S Doppler-free transitions. With the use of a single electro-optical crystal, we simultaneously narrow the laser linewidth, lock the laser frequency, and resolve a narrow spectrum point by point. The error budget for this scheme is presented, and we prove that the transition frequency obtained from the Cs cell at room temperature and with one-layer μ-metal shielding is already very near that for the condition of zero collision and zero magnetic field. We point out that a sophisticated linewidth measurement could be a good guidance for choosing a suitable Cs cell for better frequency accuracy.
Efficient bit sifting scheme of post-processing in quantum key distribution
NASA Astrophysics Data System (ADS)
Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong
2015-10-01
Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, of which the core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme is approaching the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means the proposed scheme can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement on the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
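The traffic savings at stake can be made concrete with a toy calculation. The sketch below is illustrative only (it is not the paper's coding algorithm): for detection probability p, losslessly coding the gaps between detected slots needs roughly n·H(p) bits rather than an n-bit detection mask.

```python
# Illustrative sketch: bit sifting tells the sender which time slots
# actually produced detections. For low detection probability p, coding
# the gaps between detections approaches the Shannon limit n*H(p) bits.
import math, random

random.seed(1)
n, p = 100_000, 0.02                      # pulses sent, detection probability
detections = [i for i in range(n) if random.random() < p]

gaps = [detections[0]] + [b - a for a, b in zip(detections, detections[1:])]
# Elias-gamma-style bound: ~2*log2(gap) bits per gap (simple, lossless).
coded_bits = sum(2 * max(1, math.ceil(math.log2(g + 1))) for g in gaps)

h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy H(p)
print(f"raw mask: {n} bits, coded: {coded_bits} bits, "
      f"Shannon limit ~ {n * h:.0f} bits")
```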
Richards, Suzanne H; Coast, Joanna; Gunnell, David J; Peters, Tim J; Pounsford, John; Darlow, Mary-Anne
1998-01-01
Objective: To compare effectiveness and acceptability of early discharge to a hospital at home scheme with that of routine discharge from acute hospital. Design: Pragmatic randomised controlled trial. Setting: Acute hospital wards and community in north of Bristol, with a catchment population of about 224 000 people. Subjects: 241 hospitalised but medically stable elderly patients who fulfilled criteria for early discharge to hospital at home scheme and who consented to participate. Interventions: Patients received hospital at home care or routine hospital care. Main outcome measures: Patients’ quality of life, satisfaction, and physical functioning assessed at 4 weeks and 3 months after randomisation to treatment; length of stay in hospital and in hospital at home scheme after randomisation; mortality at 3 months. Results: There were no significant differences in patient mortality, quality of life, and physical functioning between the two arms of the trial at 4 weeks or 3 months. Only one of 11 measures of patient satisfaction was significantly different: hospital at home patients perceived higher levels of involvement in decisions. Length of stay for those receiving routine hospital care was 62% (95% confidence interval 51% to 75%) of length of stay in hospital at home scheme. Conclusions: The early discharge hospital at home scheme was similar to routine hospital discharge in terms of effectiveness and acceptability. Increased length of stay associated with the scheme must be interpreted with caution because of different organisational characteristics of the services. Key messages: Pressure on hospital beds, the increasing age of the population, and high costs associated with acute hospital care have fuelled the search for alternatives to inpatient hospital care. There were no significant differences between early discharge to hospital at home scheme and routine hospital care in terms of patient quality of life, physical functioning, and most measures of patient satisfaction. Length of stay for hospital patients was significantly shorter than that of hospital at home patients, but, owing to qualitative differences between the two interventions, this does not necessarily mean differences in effectiveness. Early discharge to hospital at home provides an acceptable alternative to routine hospital care in terms of effectiveness and patient acceptability. PMID:9624070
2015-05-26
...protects the bacteria against foreign DNA as described in other systems, or whether it has alternative functions. Here, we report that CRISPR can be used to subtype Salmonella enterica serovariants. N. Shariat, R. E. Timme, J. B. Pettengill, R. Barrangou, E. G. Dudley. Characterization and evolution of Salmonella CRISPR-Cas systems
Scheme, Erik J; Englehart, Kevin B
2013-07-01
When controlling a powered upper limb prosthesis it is important not only to know how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to using virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis classification based scheme. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fittings with high coefficients of determination (R(2) > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework the selective classification based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
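For reference, the quantities behind such a test can be computed as below, using the common Shannon formulation of Fitts' law; the target distance, width, and movement time are illustrative numbers, not the study's data.

```python
# Fitts'-law index of difficulty (ID) and throughput, Shannon formulation.
import math

def index_of_difficulty(distance, width):
    return math.log2(distance / width + 1.0)     # ID in bits

def throughput(distance, width, movement_time_s):
    return index_of_difficulty(distance, width) / movement_time_s  # bits/s

ID = index_of_difficulty(0.20, 0.05)             # 20 cm target, 5 cm width
print(f"ID = {ID:.2f} bits, TP = {throughput(0.20, 0.05, 1.3):.2f} bits/s")
```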
Hagen, Wim J H; Wan, William; Briggs, John A G
2017-02-01
Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
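A minimal sketch of the angle ordering described above; the step size and tilt range are example values, and the exact grouping used by the authors' SerialEM macro may differ.

```python
# One simple realization of a dose-symmetric tilt-scheme: begin at zero
# tilt, then alternate between increasingly positive and negative tilts.
def dose_symmetric_tilts(step=3.0, max_tilt=60.0):
    angles = [0.0]
    tilt = step
    while tilt <= max_tilt:
        angles.extend([tilt, -tilt])   # +3, -3, +6, -6, ...
        tilt += step
    return angles

print(dose_symmetric_tilts()[:7])      # [0.0, 3.0, -3.0, 6.0, -6.0, 9.0, -9.0]
```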
Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit
2013-11-01
The next generation of QbD based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with an efficient advanced model-based feedback control systems, to achieve precise control of process variables, so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables due to sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control that is mandated by regulatory authorities. The process presented herein comprises of coupled dynamics involving slow and fast responses, indicating the requirement of a hybrid control scheme such as a combined MPC-PID control scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid scheme of MPC-PID control. An effective controller parameter tuning strategy involving an ITAE method coupled with an optimization strategy has been used for tuning of both MPC and PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet that was simulated in gPROMS (Process System Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared to only PID or MPC control schemes, illustrating the potential of a hybrid control scheme in improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.
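As a hedged sketch of the tuning idea (the actual flowsheet model runs in gPROMS and is not reproduced here), the snippet below tunes PID gains by minimizing the ITAE criterion on a toy first-order plant; the plant, initial gains, and penalty are illustrative assumptions only.

```python
# ITAE-based PID tuning on a toy first-order process dy/dt = (u - y)/tau.
import numpy as np
from scipy.optimize import minimize

def itae_cost(gains, dt=0.1, t_end=50.0, tau=5.0, setpoint=1.0):
    if min(gains) < 0:
        return 1e9                        # penalize non-physical gains
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (u - y) / tau           # forward-Euler plant update
        cost += (k * dt) * abs(err) * dt  # ITAE = integral of t*|e(t)| dt
    return cost

res = minimize(itae_cost, x0=[1.0, 0.1, 0.0], method="Nelder-Mead")
print("tuned (Kp, Ki, Kd):", res.x)
```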
Zhang, Xudong
2002-10-01
This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.
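Under one plausible reading of the two schemes (not necessarily the authors' exact formulation), the final-frame correction can be distributed along the angle profile in proportion to elapsed time (TP) or to accumulated angular amplitude (AP), as sketched below.

```python
# Hedged sketch: distribute the final-frame angle correction `delta`
# along the trajectory, time-proportionally (TP) or amplitude-
# proportionally (AP). Illustrative reading, not the paper's equations.
import numpy as np

def deform_tp(theta, delta):
    """Time-proportional: weight grows linearly with frame index."""
    w = np.linspace(0.0, 1.0, len(theta))
    return theta + delta * w

def deform_ap(theta, delta):
    """Amplitude-proportional: weight grows with accumulated |d(theta)|."""
    amp = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(theta)))])
    w = amp / amp[-1] if amp[-1] > 0 else amp
    return theta + delta * w

theta = np.sin(np.linspace(0, np.pi / 2, 100))   # toy joint-angle profile
for scheme in (deform_tp, deform_ap):
    out = scheme(theta, delta=0.05)
    print(scheme.__name__, "final-frame shift:", out[-1] - theta[-1])
```

Both variants leave the first frame untouched and apply exactly the required correction at the final frame; they differ only in how the intermediate frames absorb it.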
Alternative Level of Care: Canada's Hospital Beds, the Evidence and Options
Sutherland, Jason M.; Crump, R. Trafford
2013-01-01
Patients designated as alternative level of care (ALC) are an ongoing concern for healthcare policy makers across Canada. These patients occupy valuable hospital beds and limit access to acute care services. The objective of this paper is to present policy alternatives to address underlying factors associated with ALC bed use. Three alternatives, and their respective limitations and structural challenges, are discussed. Potential solutions may require a mix of policy options proposed here. Inadequate policy jeopardizes new acute care activity-based funding schemes in British Columbia and Ontario. Failure to address this issue could exacerbate pressures on the existing bottlenecks in the community care system in these and other provinces. PMID:23968671
A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradient), the radiation through cloud coverage (the vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E; 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].
Temporal Surface Reconstruction
1991-05-03
and the convergence cannot be guaranteed. Maybank [68] investigated alternative incremental schemes for the estimation of feature locations from a... depth from image sequences. International Journal of Computer Vision, 3, 1989. [68] S. J. Maybank. Filter based estimates of depth. In Proceedings of the
Iron Catalyst Chemistry in High Pressure Carbon Monoxide Nanotube Reactor
NASA Technical Reports Server (NTRS)
Scott, Carl D.; Povitsky, Alexander; Dateo, Christopher; Gokcen, Tahir; Smalley, Richard E.
2001-01-01
The high-pressure carbon monoxide (HiPco) technique for producing single-wall carbon nanotubes (SWNTs) is analyzed using a chemical reaction model coupled with properties calculated along streamlines. Streamline properties for mixing jets are calculated by the FLUENT code using the k-epsilon turbulence model for pure carbon monoxide. The HiPco process introduces cold iron pentacarbonyl diluted in CO, or alternatively nitrogen, at high pressure, ca. 30 atmospheres, into a conical mixing zone. Hot CO is also introduced via three jets at angles with respect to the axis of the reactor. Hot CO decomposes the Fe(CO)5 to release atomic Fe. Cluster reaction rates are taken from Krestinin et al., based on shock tube measurements. Another model is from classical cluster theory given by Girshick's team. The calculations are performed on streamlines that assume that a cold mixture of Fe(CO)5 in CO is introduced along the reactor axis. Iron then forms clusters that catalyze the formation of SWNTs from the Boudouard reaction on Fe-containing clusters by reaction with CO. To simulate the chemical process along the streamlines calculated by the fluid dynamics code FLUENT, a time history of temperature and dilution is determined along each streamline. Alternative catalyst injection schemes are also evaluated.
NASA Astrophysics Data System (ADS)
Chen, Y.-M.; Koniges, A. E.; Anderson, D. V.
1989-10-01
The biconjugate gradient method (BCG) provides an attractive alternative to the usual conjugate gradient algorithms for the solution of sparse systems of linear equations with nonsymmetric and indefinite matrix operators. A preconditioned algorithm is given, whose form resembles the incomplete L-U conjugate gradient scheme (ILUCG2) previously presented. Although the BCG scheme requires the storage of two additional vectors, it converges in significantly fewer iterations (often half as many), while the number of calculations per iteration remains essentially the same.
NASA Astrophysics Data System (ADS)
2004-09-01
Meeting: Brecon hosts 'alternative-style' Education Group Conference
Meeting: Schools' Physics Group meeting delivers valuable teaching update
Saturn Mission: PPARC’s Saturn school resource goes online
Funding: Grant scheme supports Einstein Year activities
Meeting: Liverpool Teachers’ Conference revives enthusiasm for physics
Loan Scheme: Moon samples loaned to schools
Awards: Schoolnet rewards good use of ICT in learning
Funding: PPARC provides cash for science projects
Workshop: Experts in physics education research share knowledge at international event
Bulgaria: Transit of Venus comes to town
Conference: CERN weekend provides lessons in particle physics
Summer School: Teachers receive the summer-school treatment
Air cooling of disk of a solid integrally cast turbine rotor for an automotive gas turbine
NASA Technical Reports Server (NTRS)
Gladden, H. J.
1977-01-01
A thermal analysis is made of surface cooling of a solid, integrally cast turbine rotor disk for an automotive gas turbine engine. Air purge and impingement cooling schemes are considered and compared with an uncooled reference case. Substantial reductions in blade temperature are predicted with each of the cooling schemes studied. It is shown that air cooling can result in a substantial gain in the stress-rupture life of the blade. Alternatively, increases in the turbine inlet temperature are possible.
Conditional equivalence testing: An alternative remedy for publication bias
Gustafson, Paul
2018-01-01
We introduce a publication policy that incorporates “conditional equivalence testing” (CET), a two-stage testing scheme in which standard NHST is followed conditionally by testing for equivalence. The idea of CET is carefully considered as it has the potential to address recent concerns about reproducibility and the limited publication of null results. In this paper we detail the implementation of CET, investigate similarities with a Bayesian testing scheme, and outline the basis for how a scientific journal could proceed to reduce publication bias while remaining relevant. PMID:29652891
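Assuming CET is realized as standard NHST followed, on a non-significant result, by a TOST-style equivalence test, a minimal sketch looks like the following; the margin and alpha are illustrative, and the pooled degrees of freedom are a simplification.

```python
# Two-stage conditional equivalence testing (CET) sketch:
# Stage 1: standard NHST; Stage 2 (if non-significant): TOST equivalence.
import numpy as np
from scipy import stats

def conditional_equivalence_test(x, y, margin=0.5, alpha=0.05):
    t, p = stats.ttest_ind(x, y)
    if p < alpha:
        return "positive (difference established)"
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    df = len(x) + len(y) - 2
    p_lo = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
    p_hi = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
    if max(p_lo, p_hi) < alpha:
        return "negative (equivalence established)"
    return "inconclusive"

rng = np.random.default_rng(0)
print(conditional_equivalence_test(rng.normal(0, 1, 80), rng.normal(0.05, 1, 80)))
```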
Challenges of constructing salt cavern gas storage in China
NASA Astrophysics Data System (ADS)
Xia, Yan; Yuan, Guangjie; Ban, Fansheng; Zhuang, Xiaoqian; Li, Jingcui
2017-11-01
After more than ten years of research and engineering practice in salt cavern gas storage, the engineering technologies of geology, drilling, leaching, completion, operation and monitoring have been established. With the rapid growth of domestic natural gas consumption, the requirement for underground gas storage is increasing. Because high-quality rock salt resources at depths of about 1000 m are relatively scarce, future salt cavern gas storages will be built in deep rock salt. Under the current domestic conventional construction scheme, construction in deep salt formations will face many problems caused by depth and complex geological conditions, such as increased circulating pressure, tubing blockage, deformation failure, and higher completion risk. Considering these difficulties, the differences between the current technical scheme and the twin-well and big-hole construction schemes are analyzed. The results show that the twin-well and big-hole schemes have obvious advantages in reducing circulating pressure loss, tubing blockage, and failure risk, and they can serve as alternative schemes to overcome the technical difficulties of constructing salt cavern gas storage in deep rock salt.
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable-rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
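The basic VQ building block underlying all three schemes can be sketched as follows: train a codebook on block vectors from an upper subband and encode each block by the index of its nearest codeword. The quadtree, variable-rate, and finite-state machinery of the paper is omitted, and the data here are synthetic.

```python
# Codebook VQ on 2x2 blocks of a stand-in "upper subband".
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(0)
subband = rng.normal(0, 1, (64, 64))                  # synthetic subband

blocks = (subband.reshape(32, 2, 32, 2)
                 .swapaxes(1, 2)
                 .reshape(-1, 4))                     # 2x2 blocks -> 4-vectors
codebook, _ = kmeans(blocks, 16)                      # 16 codewords = 4 bits/block
indices, dist = vq(blocks, codebook)                  # nearest-codeword encoding

print(f"rate: {np.log2(len(codebook)) / 4:.2f} bit/sample, "
      f"mean distortion: {dist.mean():.3f}")
```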
A triangular thin shell finite element: Nonlinear analysis. [structural analysis
NASA Technical Reports Server (NTRS)
Thomas, G. R.; Gallagher, R. H.
1975-01-01
Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
Spin-orbit torque induced magnetization anisotropy modulation in Pt/(Co/Ni)4/Co/IrMn heterostructure
NASA Astrophysics Data System (ADS)
Engel, Christian; Goolaup, Sarjoosing; Luo, Feilong; Gan, Weiliang; Lew, Wen Siang
2017-04-01
In this work, we show that domain wall (DW) dynamics within a system provide an alternative platform for characterizing spin-orbit torque (SOT) effective fields. In perpendicularly magnetized wires with a Pt/(Co/Ni)4/Co/IrMn stack structure, differential Kerr imaging shows that the magnetization switching process proceeds via the nucleation of an embryo state followed by domain wall propagation. By probing current-induced DW motion in the presence of an in-plane field, the SOT effective fields are obtained using the harmonic Hall voltage scheme. The effective anisotropy field of the structure decreases by 12% due to the SOT effective fields as the in-plane current in the wire is increased.
Generating Alternative Proposals for the Louvre Using Procedural Modeling
NASA Astrophysics Data System (ADS)
Calogero, E.; Arnold, D.
2011-09-01
This paper presents the process of reconstructing two facade designs for the East wing of the Louvre using procedural modeling. The first proposal reconstructed is Louis Le Vau's 1662 scheme and the second is the 1668 design of the "petit conseil" that still stands today. The initial results presented show how such reconstructions may aid general and expert understanding of the two designs. It is claimed that by formalizing the facade description into a shape grammar in CityEngine, a systematized approach to a stylistic analysis is possible. It is also asserted that such an analysis is still best understood in the historical context of what is known about the contemporary design intentions of the building creators and commissioners.
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in the current carbon emissions benchmark setting systems. The primary considerations for industrial carbon emissions standards relate mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions are considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, one of the first carbon emissions trading pilot regions in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method tends to relate emissions directly to each responsibility in a practical way through the measurement of complex production and supply chains, and to reduce carbon emissions from their original sources. This method is expected to be developed under uncertain internal and external contexts and is further expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
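The embodied-intensity computation implied by the input-output approach is the standard Leontief calculation: with direct emission intensities f and technical-coefficient matrix A, total (direct plus indirect) intensities are f(I - A)^{-1}. A three-sector toy example, with made-up numbers:

```python
# Embodied (direct + indirect) emission intensities via the Leontief inverse.
import numpy as np

A = np.array([[0.10, 0.30, 0.05],    # inter-industry input coefficients
              [0.20, 0.10, 0.15],
              [0.05, 0.10, 0.10]])
f = np.array([1.2, 0.4, 0.1])        # direct tCO2 per unit output by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^{-1}
embodied = f @ L                     # total embodied intensity per sector

print("direct:  ", f)
print("embodied:", embodied.round(3))
```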
The health and social system for the aged in Japan.
Matsuda, Shinya
2002-08-01
Japan implemented a new social insurance scheme for the frail and elderly, Long-Term-Care Insurance (LTCI), on 1 April 2000. This was an epoch-making event in the history of Japanese public health policy, because it meant that, in modifying its tradition of family care for the elderly, Japan had moved toward socialization of care. One of the main ideas behind the establishment of LTCI was to "de-medicalize" and rationalize the care of elderly persons with disabilities characteristic of the aging process. Because of the aging of the society, the Japanese social insurance system required a fundamental reform. The implementation of LTCI constitutes the first step in the future health reform in Japan. The LTCI scheme requires each citizen to take more responsibility for finance and decision-making in the social security system. The introduction of LTCI is also bringing fundamental structural changes to the Japanese health system. With the development of the Integrated Delivery System (IDS), alternative care services such as assisted living are expanding. Another important social change is a community movement for healthy longevity. For example, a variety of public health and social programs are organized in order to keep the elderly healthy and active as long as possible. In this article, the author explains the on-going structural changes in the Japanese health system. Analyses are focused on the current debate over the reorganization of the health insurance scheme for the aged in Japan and community public health services for them.
Chen, Qingrong; Zhang, Jingjing; Xu, Xiaodong; Scheepers, Christoph; Yang, Yiming; Tanenhaus, Michael K
2016-09-01
In an ERP study, classic Chinese poems with a well-known rhyme scheme were used to generate an expectation of a rhyme in the absence of an expectation for a specific character. Critical characters were either consistent or inconsistent with the expected rhyme scheme and semantically congruent or incongruent with the content of the poem. These stimuli allowed us to examine whether a top-down rhyme scheme expectation would affect relatively early components of the ERP associated with character-to-sound mapping (P200) and lexically-mediated semantic processing (N400). The ERP data revealed that rhyme scheme congruence, but not semantic congruence modulated the P200: rhyme-incongruent characters elicited a P200 effect across the head demonstrating that top-down expectations influence early phonological coding of the character before lexical-semantic processing. Rhyme scheme incongruence also produced a right-lateralized N400-like effect. Moreover, compared to semantically congruous poems, semantically incongruous poems produced a larger N400 response only when the character was consistent with the expected rhyme scheme. The results suggest that top-down prosodic expectations can modulate early phonological processing in visual word recognition, indicating that prosodic expectations might play an important role in silent reading. They also suggest that semantic processing is influenced by general knowledge of text genre. Copyright © 2016 Elsevier B.V. All rights reserved.
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of synchronization points at which data must be exchanged. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. In evaluations on multi-core CPUs, our scheme consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
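For readers unfamiliar with the baseline being improved on, here is a minimal sketch of one separable lifting pass (the float variant of the CDF 5/3 wavelet, with periodic boundaries assumed for brevity); each predict/update step is a synchronization point in a parallel implementation, which is exactly the cost a rearranged scheme reduces.

    import numpy as np

    def cdf53_forward_1d(x):
        """One level of the CDF 5/3 wavelet via lifting: predict, then update."""
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2].copy(), x[1::2].copy()
        odd -= 0.5 * (even + np.roll(even, -1))   # predict: detail from even neighbours
        even += 0.25 * (odd + np.roll(odd, 1))    # update: smooth the approximation
        return even, odd                          # low-pass and high-pass subbands

    lo, hi = cdf53_forward_1d(np.arange(16.0))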
Convolutional Dictionary Learning: Acceleration and Convergence
NASA Astrophysics Data System (ADS)
Chun, Il Yong; Fessler, Jeffrey A.
2018-04-01
Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or its variant, the alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods show fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To mitigate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful for single-threaded CDL algorithms handling large datasets, due to its lower memory requirement and the absence of polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
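Setting the CDL subproblems aside, the core majorized proximal gradient iteration can be illustrated on a generic sparse least-squares problem; in this hedged sketch the diagonal majorizer, momentum formula and adaptive restart are stand-ins for the designs studied in the paper, not the paper's own choices.

    import numpy as np

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def bpg_m_sketch(A, y, lam, n_iter=200):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 with a majorized prox-gradient step."""
        n = A.shape[1]
        M = np.abs(A.T @ A).sum(axis=1)       # diagonal majorizer: diag(M) >= A^T A
        x, x_old, t = np.zeros(n), np.zeros(n), 1.0
        for _ in range(n_iter):
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x + ((t - 1.0) / t_new) * (x - x_old)   # momentum extrapolation
            grad = A.T @ (A @ z - y)
            x_new = soft(z - grad / M, lam / M)         # prox step scaled by the majorizer
            if (z - x_new) @ (x_new - x) > 0:           # adaptive restart of the momentum
                t_new = 1.0
            x_old, x, t = x, x_new, t_new
        return x

    x_hat = bpg_m_sketch(np.random.randn(30, 60), np.random.randn(30), 0.1)

Note that the majorizer is computed from A itself, mirroring the tuning-free behaviour the abstract reports.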
Study of stability of the difference scheme for the model problem of the gaslift process
NASA Astrophysics Data System (ADS)
Temirbekov, Nurlan; Turarov, Amankeldy
2017-09-01
The paper studies a model of the gaslift process in which the motion in a gas-lift well is described by partial differential equations. The system describing the studied process consists of equations of motion, continuity, the thermodynamic equation of state, and hydraulic resistance. A two-layer finite-difference Lax-Wendroff scheme is constructed for the numerical solution of the problem. The stability of the difference scheme for the model problem is investigated using the method of a priori estimates, the order of approximation is determined, an algorithm for the numerical implementation of the gaslift process model is given, and graphs of the results are presented. The development and investigation of difference schemes for the numerical solution of systems of gas dynamics equations make it possible to obtain solutions that are simultaneously accurate and monotonic.
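As a reminder of the scheme whose stability is being analysed, here is a minimal Lax-Wendroff step for the linear advection equation u_t + a u_x = 0 with periodic boundaries; the gaslift system itself is a coupled nonlinear system, which this sketch does not attempt to model.

    import numpy as np

    def lax_wendroff_step(u, c):
        """One Lax-Wendroff step; c = a*dt/dx is the Courant number (|c| <= 1 for stability)."""
        up, um = np.roll(u, -1), np.roll(u, 1)
        return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-((x - 0.3) ** 2) / 0.002)   # smooth initial pulse
    for _ in range(100):
        u = lax_wendroff_step(u, 0.5)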
Planning and leading of the technological processes by mechanical working with Microsoft Project
NASA Astrophysics Data System (ADS)
Nae, I.; Grigore, N.
2016-08-01
Nowadays, fabrication systems and methods are changing: new processing technologies emerge, process flows are designed with a minimum number of phases, the flexibility of technologies grows, and new methods and instruments for monitoring and controlling processing operations appear. The technological course (route, entry, scheme, guiding), i.e., the sequence of operations, set-ups and execution phases needed to obtain the final product from the blank, is a series of activities carried out in a logical order, on a well-determined schedule, with a determined budget and resources. A project can likewise be defined as a series of specific, methodically structured activities aimed at completing a specific objective within a fixed schedule and budget. Given this correspondence between a project and a technological course, this research presents the definition of the technological course of a mechanical chip-removal process using Microsoft Project. The research highlights the advantages of this method: rapid evaluation of alternative technologies in order to select the optimal process, job scheduling under constraints of any kind, and the standardization of some processing operations.
NASA Astrophysics Data System (ADS)
Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia
2017-12-01
Due to the uncertainty (i.e., fuzziness, stochasticity and imprecision) that exists simultaneously during the groundwater remediation process, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo simulation for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities and can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by the weight cloud module. A case study on remedial alternative selection for a site contaminated by a 1,1,1-trichloroethylene leak in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, ten attributes were used to evaluate each alternative under uncertainty, including daily total pumping rate, total cost and a cloud model based health risk. Results indicated that alternative A14 was the most preferred for the 5-year remediation period, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year period.
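A minimal sketch of the backward cloud generator that performs the aggregation step; the formulas are the standard sample estimators of a normal cloud's digital characteristics, and the input samples are invented.

    import numpy as np

    def backward_cloud(x):
        """Estimate cloud-model characteristics Ex (expectation), En (entropy), He (hyper-entropy)."""
        x = np.asarray(x, dtype=float)
        ex = x.mean()
        en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()  # entropy from mean absolute deviation
        s2 = x.var(ddof=1)
        he = np.sqrt(max(s2 - en ** 2, 0.0))               # hyper-entropy from excess variance
        return ex, en, he

    ex, en, he = backward_cloud(np.random.normal(1.2, 0.3, size=500))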
Natural selection. VII. History and interpretation of kin selection theory.
Frank, S A
2013-06-01
Kin selection theory is a kind of causal analysis. The initial form of kin selection ascribed cause to costs, benefits and genetic relatedness. The theory then slowly developed a deeper and more sophisticated approach to partitioning the causes of social evolution. Controversy followed because causal analysis inevitably attracts opposing views. It is always possible to separate total effects into different component causes. Alternative causal schemes emphasize different aspects of a problem, reflecting the distinct goals, interests and biases of different perspectives. For example, group selection is a particular causal scheme with certain advantages and significant limitations. Ultimately, to use kin selection theory to analyse natural patterns and to understand the history of debates over different approaches, one must follow the underlying history of causal analysis. This article describes the history of kin selection theory, with emphasis on how the causal perspective improved through the study of key patterns of natural history, such as dispersal and sex ratio, and through a unified approach to demographic and social processes. Independent historical developments in the multivariate analysis of quantitative traits merged with the causal analysis of social evolution by kin selection. © 2013 The Author. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.
On improving the efficiency of tensor voting.
Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Pizarro, Luis; Burgeth, Bernhard; Weickert, Joachim
2011-11-01
This paper proposes two alternative formulations to reduce the high computational complexity of tensor voting, a robust perceptual grouping technique used to extract salient information from noisy data. The first scheme consists of numerical approximations of the votes, which have been derived from an in-depth analysis of the plate and ball voting processes. The second scheme simplifies the formulation while keeping the same perceptual meaning of the original tensor voting: the stick tensor voting and the stick component of the plate tensor voting must reinforce surfaceness, the plate components of both the plate and ball tensor voting must boost curveness, whereas junctionness must be strengthened by the ball component of the ball tensor voting. Two new parameters have been proposed for the second formulation in order to control the potentially conflicting influence of the stick component of the plate vote and the ball component of the ball vote. Results show that the proposed formulations can be used in applications where efficiency is an issue, since they have a complexity of order O(1). Moreover, the second proposed formulation has been shown to be more appropriate than the original tensor voting for estimating saliencies, by appropriately setting the two new parameters.
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
NASA Astrophysics Data System (ADS)
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-02-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single-hologram acquisition and using a fast-convergence algorithm for image processing. Altogether, MISHELF (from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel-domain diffraction patterns in a single camera snapshot by illuminating the sample with three coherent lights at once. Previous implementations proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range) phase-retrieved (twin-image eliminated) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus becomes an alternative instrument that improves on some capabilities of existing lensless microscopes.
High accuracy switched-current circuits using an improved dynamic mirror
NASA Technical Reports Server (NTRS)
Zweigle, G.; Fiez, T.
1991-01-01
The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.
NASA Technical Reports Server (NTRS)
Ramamurti, R.; Ghia, U.; Ghia, K. N.
1988-01-01
A semi-elliptic formulation, termed the interacting parabolized Navier-Stokes (IPNS) formulation, is developed for the analysis of a class of subsonic viscous flows for which streamwise diffusion is negligible but which are significantly influenced by upstream interactions. The IPNS equations are obtained from the Navier-Stokes equations by dropping the streamwise viscous-diffusion terms but retaining upstream influence via the streamwise pressure gradient. A two-step alternating-direction-explicit numerical scheme is developed to solve these equations. The quasi-linearization and discretization of the equations are carefully examined so that no artificial viscosity is added externally to the scheme. Also, solutions to compressible as well as nearly incompressible flows are obtained without any modification either in the analysis or in the solution process. The procedure is applied to constricted channels and cascade passages formed by airfoils of various shapes. These geometries are represented using numerically generated curvilinear boundary-oriented coordinates forming an H-grid. A hybrid C-H grid, more appropriate for cascades of airfoils with rounded leading edges, was also developed. Satisfactory results are obtained for flows through cascades of Joukowski airfoils.
NASA Astrophysics Data System (ADS)
Antoine, Xavier; Levitt, Antoine; Tang, Qinglin
2017-08-01
We propose a preconditioned nonlinear conjugate gradient method coupled with a spectral spatial discretization scheme for computing the ground states (GS) of rotating Bose-Einstein condensates (BEC), modeled by the Gross-Pitaevskii Equation (GPE). We start by reviewing the classical gradient flow (also known as imaginary time (IMT)) method, which considers the problem from the PDE standpoint, leading to the numerical solution of a dissipative equation. Based on this IMT equation, we analyze the forward Euler (FE), Crank-Nicolson (CN) and the classical backward Euler (BE) schemes for linear problems and recognize classical power iterations, allowing us to derive convergence rates. By considering the alternative point of view of minimization problems, we propose the preconditioned steepest descent (PSD) and conjugate gradient (PCG) methods for the GS computation of the GPE. We investigate the choice of the preconditioner, which plays a key role in the acceleration of the convergence process. The performance of the new algorithms is tested in 1D, 2D and 3D. We conclude that the PCG method outperforms all the previous methods, most particularly for 2D and 3D fast rotating BECs, while being simple to implement.
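To fix ideas, a minimal 1D sketch of the IMT/gradient-flow baseline the authors start from (a forward Euler step on the GPE energy gradient followed by renormalization, with the rotation term omitted); the trap, interaction strength and step size are illustrative, and the preconditioned conjugate gradient method itself is not reproduced here.

    import numpy as np

    n, L = 256, 16.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    v = 0.5 * x ** 2                        # harmonic trap
    beta = 100.0                            # assumed interaction strength
    psi = np.exp(-x ** 2)
    psi /= np.sqrt((psi ** 2).sum() * dx)   # unit L2 norm
    dt = 1e-3
    for _ in range(5000):
        lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx ** 2
        h_psi = -0.5 * lap + (v + beta * psi ** 2) * psi   # GPE energy gradient
        psi = psi - dt * h_psi                             # dissipative IMT step
        psi /= np.sqrt((psi ** 2).sum() * dx)              # project back onto the sphere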
Thorn, Christine Johanna; Bissinger, Kerstin; Thorn, Simon; Bogner, Franz Xaver
2016-01-01
Successful learning is the integration of new knowledge into existing schemes, leading to an integrated and correct scientific conception. By contrast, the co-existence of scientific and alternative conceptions may indicate a fragmented knowledge profile. Every learner is unique and thus carries an individual set of preconceptions before classroom engagement due to prior experiences. Hence, instructors and teachers have to consider the heterogeneous knowledge profiles of their class when teaching. However, the determinants of fragmented knowledge profiles are not yet well understood, which may hamper the development of adapted teaching schemes. We used a questionnaire-based approach to assess conceptual knowledge of tree assimilation and wood synthesis, surveying 885 students of four educational levels: 6th graders, 10th graders, natural science freshmen and other academic studies freshmen. We analysed the influence of learners' characteristics such as educational level, age and sex on the coexistence of scientific and alternative conceptions. Within all subsamples, well-known alternative conceptions regarding tree assimilation and wood synthesis coexisted with correct scientific ones. For example, students describe trees as living on "soil and sunshine", representing scientific knowledge of photosynthesis mingled with an alternative conception of trees eating like animals. Fragmented knowledge profiles occurred in all subsamples, but our models showed that improved education and age foster knowledge integration. Sex had almost no influence on the existing scientific conceptions and the evolution of knowledge integration. Consequently, complex biological issues such as tree assimilation and wood synthesis need specific support, e.g. through repeated learning units in class- and seminar-rooms, in order to help especially young students to handle and overcome common alternative conceptions and appropriately integrate scientific conceptions into their knowledge profile.
Mapping Mangrove Density from Rapideye Data in Central America
NASA Astrophysics Data System (ADS)
Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru
2017-06-01
Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has persistently ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is therefore important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America from Rapideye data using support vector machines (SVM). The data collected in 2012 were processed for density classification using four different band combination schemes: scheme-1 (bands 1-3, 5, excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5 plus the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5 plus the normalized difference red-edge index, NDRI). We also tested whether the Rapideye red-edge band improves the classification results. Three main data processing steps were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment, evaluating the contribution of the red-edge band through the accuracy of the classification results across the four schemes. The classification maps, compared with the ground reference data, indicated slightly higher accuracy for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
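A hedged scikit-learn sketch of the four-scheme comparison; the pixel features and density labels below are random placeholders for the Rapideye bands and ground reference classes, so the accuracies printed are meaningless except as a template.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    rng = np.random.default_rng(0)
    bands = rng.random((1000, 5))            # stand-in for Rapideye bands 1-5 per pixel
    labels = rng.integers(0, 3, size=1000)   # stand-in density classes
    ndvi = (bands[:, 4] - bands[:, 2]) / (bands[:, 4] + bands[:, 2] + 1e-9)
    ndre = (bands[:, 4] - bands[:, 3]) / (bands[:, 4] + bands[:, 3] + 1e-9)
    schemes = {
        "scheme-1": bands[:, [0, 1, 2, 4]],                           # bands 1-3, 5
        "scheme-2": bands,                                            # bands 1-5
        "scheme-3": np.column_stack([bands[:, [0, 1, 2, 4]], ndvi]),  # plus NDVI
        "scheme-4": np.column_stack([bands[:, [0, 1, 2, 4]], ndre]),  # plus NDRI
    }
    for name, X in schemes.items():
        Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
        pred = SVC(kernel="rbf").fit(Xtr, ytr).predict(Xte)
        print(name, accuracy_score(yte, pred), cohen_kappa_score(yte, pred))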
Lesot, Philippe; Kazimierczuk, Krzysztof; Trébosc, Julien; Amoureux, Jean-Paul; Lafon, Olivier
2015-11-01
Unique information about the atom-level structure and dynamics of solids and mesophases can be obtained by the use of multidimensional nuclear magnetic resonance (NMR) experiments. Nevertheless, these experiments often require long acquisition times. We review here alternative sampling methods that have been proposed to circumvent this issue in the case of solids and mesophases. Compared to the spectra of solutions, those of solids and mesophases present some specificities because they usually display lower signal-to-noise ratios, non-Lorentzian line shapes, lower spectral resolutions and wider spectral widths. We highlight herein the advantages and limitations of these alternative sampling methods. A first route to accelerating the acquisition of multidimensional NMR spectra consists in the use of sparse sampling schemes, such as truncated, radial or random sampling. These sparsely sampled datasets are generally processed by reconstruction methods differing from the Discrete Fourier Transform (DFT). A host of non-DFT methods have been applied for solids and mesophases, including the G-matrix Fourier transform, linear least-squares procedures, the covariance transform, maximum entropy and compressed sensing. A second class of alternative sampling consists in departing from the Jeener paradigm for multidimensional NMR experiments. These non-Jeener methods include Hadamard spectroscopy as well as spatial or orientational encoding of the evolution frequencies. The increasing number of high-field NMR magnets and the development of techniques to enhance NMR sensitivity will contribute to widening the use of these alternative sampling methods for the study of solids and mesophases in the coming years. Copyright © 2015 John Wiley & Sons, Ltd.
A fast CT reconstruction scheme for a general multi-core PC.
Zeng, Kai; Bai, Erwei; Wang, Ge
2007-01-01
Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
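The reported speedup rests on SIMD, geometric symmetry and careful data layout in compiled code; purely to illustrate the thread-level decomposition, here is a sketch that backprojects disjoint chunks of projection angles in parallel and sums the partial images (nearest-neighbour interpolation, no filtering, invented sinogram).

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def backproject(sino_chunk, angles_chunk, size):
        """Accumulate a partial image from one chunk of (already filtered) projections."""
        xs = np.arange(size) - size / 2.0
        X, Y = np.meshgrid(xs, xs)
        img = np.zeros((size, size))
        for proj, theta in zip(sino_chunk, angles_chunk):
            t = X * np.cos(theta) + Y * np.sin(theta)              # detector coordinate
            idx = np.clip((t + size / 2.0).astype(int), 0, size - 1)
            img += proj[idx]                                       # nearest-neighbour lookup
        return img

    size, n_ang = 128, 180
    angles = np.linspace(0.0, np.pi, n_ang, endpoint=False)
    sino = np.random.rand(n_ang, size)                             # stand-in sinogram
    chunks = np.array_split(np.arange(n_ang), 4)                   # one chunk per core
    with ThreadPoolExecutor(max_workers=4) as pool:
        parts = pool.map(lambda c: backproject(sino[c], angles[c], size), chunks)
    image = sum(parts)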
Universal quantum computation using all-optical hybrid encoding
NASA Astrophysics Data System (ADS)
Guo, Qi; Cheng, Liu-Yong; Wang, Hong-Fu; Zhang, Shou
2015-04-01
By employing displacement operations, single-photon subtractions, and weak cross-Kerr nonlinearity, we propose an alternative way of implementing several universal quantum logic gates for all-optical hybrid qubits encoded in both a single-photon polarization state and a coherent state. Since these schemes can be implemented using only local operations, without a teleportation procedure, they require fewer physical resources and simpler operations than existing schemes. With the help of displacement operations, a large phase shift of the coherent state can be obtained via currently available tiny cross-Kerr nonlinearities. Thus, all of these schemes are nearly deterministic and feasible with current technology, which makes them suitable for large-scale quantum computing. Project supported by the National Natural Science Foundation of China (Grant Nos. 61465013, 11465020, and 11264042).
MPDATA: A positive definite solver for geophysical flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smolarkiewicz, P.K.; Margolin, L.G.
1997-12-31
This paper is a review of MPDATA, a class of methods for the numerical simulation of advection based on the sign-preserving properties of upstream differencing. MPDATA was designed originally as an inexpensive alternative to flux-limited schemes for evaluating the transport of nonnegative thermodynamic variables (such as liquid water or water vapor) in atmospheric models. During the last decade, MPDATA has evolved from a simple advection scheme to a general approach for integrating the conservation laws of geophysical fluids on micro-to-planetary scales. The purpose of this paper is to summarize the basic concepts leading to a family of MPDATA schemes, review the existing MPDATA options, as well as to demonstrate the efficacy of the approach using diverse examples of complex geophysical flows.
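A minimal 1D MPDATA sketch under the usual simplifications (constant Courant number C, periodic boundaries, nonnegative field): a donor-cell pass followed by a single antidiffusive corrective pass using the standard pseudo-velocity.

    import numpy as np

    def donor_cell(psi, C):
        """Upwind (donor-cell) update; C holds face Courant numbers at i+1/2."""
        flux = np.maximum(C, 0) * psi + np.minimum(C, 0) * np.roll(psi, -1)
        return psi - (flux - np.roll(flux, 1))

    def mpdata_step(psi, C, eps=1e-15):
        psi1 = donor_cell(psi, C)
        d = np.roll(psi1, -1) - psi1
        s = np.roll(psi1, -1) + psi1
        C_anti = (np.abs(C) - C ** 2) * d / (s + eps)   # antidiffusive pseudo-velocity
        return donor_cell(psi1, C_anti)                 # corrective upwind pass

    psi = np.zeros(100); psi[40:60] = 1.0
    for _ in range(50):
        psi = mpdata_step(psi, 0.4)                     # remains sign-preserving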
Gartner, Coral; Hall, Wayne
2015-06-01
Australia has some of the most restrictive laws concerning the use of nicotine in e-cigarettes. The only way for Australians to legally possess and use nicotine for vaping is with a medical prescription, and domestic supply is limited to compounding pharmacies that prepare medicines for specific patients. An alternative regulatory option that could be implemented under current drugs and poisons regulations is a 'nicotine licensing' scheme utilising the current provisions for 'dangerous poisons'. This commentary discusses how such a scheme could be used to trial access to nicotine solutions for vaping outside a 'medicines framework' in Australia. Copyright © 2015 Elsevier B.V. All rights reserved.
Classification of extraterrestrial civilizations
NASA Astrophysics Data System (ADS)
Tang, Tong B.; Chang, Grace
1991-06-01
A scheme for classifying extraterrestrial intelligence (ETI) communities based on the scope of energy accessible to the civilization in question is proposed as an alternative to the Kardashev (1964) scheme, which distinguishes three types of civilization by their levels of energy expenditure. The proposed scheme includes six classes: (1) a civilization that runs essentially on energy exerted by individual beings or by domesticated lower life forms; (2) harnessing of natural sources on the planetary surface with artificial constructions, such as water wheels and wind sails; (3) energy from fossil fuels and fissionable isotopes mined beneath the planetary surface; (4) exploitation of nuclear fusion on a large scale, whether on the planet, in space, or from primary solar energy; (5) extensive use of antimatter for energy storage; and (6) energy from spacetime, perhaps via the action of naked singularities.
Time cycle analysis and simulation of material flow in MOX process layout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, S.; Saraswat, A.; Danny, K.M.
The (U,Pu)O2 MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO2. The presence of a high percentage of reprocessed PuO2 necessitates the design of an optimized fuel fabrication process line that addresses both production needs and regulatory norms regarding radiological safety. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of software that simulates the material movement through the optimized process layout. Various material processing schemes have been devised, and their validity is tested with the software. Schemes in which production batches meet at any glove-box location are considered invalid. A valid scheme ensures adequate spacing between production batches while meeting the production target. The software can be further improved by accurately calculating material movement time through the glove-box train. One important factor is considering material handling time with automation systems in place.
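A toy sketch of the kind of schedule validation such software performs: push batches through a sequence of glove-box stations and reject any scheme in which two batches occupy the same station at once; the station durations and batch spacings are invented.

    def scheme_is_valid(start_times, durations):
        """True if no two batches ever occupy the same station simultaneously."""
        intervals = {}                           # station index -> list of (entry, exit)
        for start in start_times:
            t = start
            for s, d in enumerate(durations):
                for (a, b) in intervals.get(s, []):
                    if t < b and a < t + d:      # batches meet in a glove box: invalid
                        return False
                intervals.setdefault(s, []).append((t, t + d))
                t += d
        return True

    print(scheme_is_valid([0, 30, 60], [25, 25, 25, 25]))  # adequately spaced: True
    print(scheme_is_valid([0, 20, 40], [25, 25, 25, 25]))  # batches collide: False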
Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization
NASA Astrophysics Data System (ADS)
Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin
2005-02-01
This paper proposes a novel scheme for a 10Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detecting and error-correction bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme adds a code process module. SDH/SONET frames in the transmit direction are handled as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels, all of which are data channels. During this process, parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the code process module. (3) The code process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmission process. By applying this scheme to a 10Gb/s VSR system, the frame size is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is reduced significantly.
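Step (2) is plain round-robin byte striping, sketched below for an arbitrary byte string; the parity/CRC insertion of steps (2)-(3) is omitted.

    def stripe(frame: bytes, n_channels: int = 12):
        """Distribute a frame byte-wise across n_channels parallel lanes."""
        return [frame[c::n_channels] for c in range(n_channels)]

    def unstripe(lanes):
        """Reassemble the original frame at the receive side."""
        frame = bytearray(sum(len(lane) for lane in lanes))
        for c, lane in enumerate(lanes):
            frame[c::len(lanes)] = lane
        return bytes(frame)

    frame = bytes(range(48))
    assert unstripe(stripe(frame)) == frame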
A survey of packages for large linear systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Milne, Brent
2000-02-11
This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages and that the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving, so their user interfaces may change. In general, the packages written in Fortran 77 are more cumbersome to use because the user may need to deal directly with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms, which make it easier to implement a clean and intuitive user interface. In addition to reviewing these portable parallel iterative solver packages, we also provide a more cursory assessment of a range of related packages, from specialized parallel preconditioners to direct methods for sparse linear systems.
Spurious sea ice formation caused by oscillatory ocean tracer advection schemes
NASA Astrophysics Data System (ADS)
Naughten, Kaitlin A.; Galton-Fenzi, Benjamin K.; Meissner, Katrin J.; England, Matthew H.; Brassington, Gary B.; Colberg, Frank; Hattermann, Tore; Debernard, Jens B.
2017-08-01
Tracer advection schemes used by ocean models are susceptible to artificial oscillations: a form of numerical error whereby the advected field alternates between overshooting and undershooting the exact solution, producing false extrema. Here we show that these oscillations have undesirable interactions with a coupled sea ice model. When oscillations cause the near-surface ocean temperature to fall below the freezing point, sea ice forms for no reason other than numerical error. This spurious sea ice formation has significant and wide-ranging impacts on Southern Ocean simulations, including the disappearance of coastal polynyas, stratification of the water column, erosion of Winter Water, and upwelling of warm Circumpolar Deep Water. This significantly limits the model's suitability for coupled ocean-ice and climate studies. Using the terrain-following-coordinate ocean model ROMS (Regional Ocean Modelling System) coupled to the sea ice model CICE (Community Ice CodE) on a circumpolar Antarctic domain, we compare the performance of three different tracer advection schemes, as well as two levels of parameterised diffusion and the addition of flux limiters to prevent numerical oscillations. The upwind third-order advection scheme performs better than the centered fourth-order and Akima fourth-order advection schemes, with far fewer incidents of spurious sea ice formation. The latter two schemes are less problematic with higher parameterised diffusion, although some supercooling artifacts persist. Spurious supercooling was eliminated by adding flux limiters to the upwind third-order scheme. We present this comparison as evidence of the problematic nature of oscillatory advection schemes in sea ice formation regions, and urge other ocean/sea-ice modellers to exercise caution when using such schemes.
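The artifact is easy to reproduce in one dimension: advect a step profile (think near-freezing temperature) with a monotone upwind scheme and with the oscillatory second-order Lax-Wendroff scheme; this sketch is a simplified analogue, not the model's actual third- and fourth-order schemes.

    import numpy as np

    def upwind(u, c):        # monotone but diffusive (assumes c > 0)
        return u - c * (u - np.roll(u, 1))

    def lax_wendroff(u, c):  # higher order but oscillatory near sharp gradients
        up, um = np.roll(u, -1), np.roll(u, 1)
        return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)

    u0 = np.where(np.arange(200) < 100, 1.0, 0.0)
    u_lw, u_up = u0.copy(), u0.copy()
    for _ in range(80):
        u_lw, u_up = lax_wendroff(u_lw, 0.5), upwind(u_up, 0.5)
    print(u_lw.min(), u_lw.max())  # false extrema outside [0, 1]: spurious "supercooling"
    print(u_up.min(), u_up.max())  # remains within [0, 1]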
An analysis of hydrogen production via closed-cycle schemes. [thermochemical processings from water
NASA Technical Reports Server (NTRS)
Chao, R. E.; Cox, K. E.
1975-01-01
A thermodynamic analysis and state-of-the-art review are presented of three basic schemes for the production of hydrogen from water: electrolysis, thermal water-splitting, and multi-step thermochemical closed cycles. Criteria for work-saving thermochemical closed-cycle processes are established, and several schemes are reviewed in light of these criteria. An economic analysis is also presented in the context of energy costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaodong; Xia, Yidong; Luo, Hong
2016-10-05
A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index-2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index-2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Rossow, C.-C.
2008-01-01
A three-stage Runge-Kutta (RK) scheme with multigrid and an implicit preconditioner has been shown to be an effective solver for the fluid dynamic equations. This scheme has been applied to both the compressible and essentially incompressible Reynolds-averaged Navier-Stokes (RANS) equations using the algebraic turbulence model of Baldwin and Lomax (BL). In this paper we focus on the convergence of the RK/implicit scheme when the effects of turbulence are represented by either the Spalart-Allmaras model or the Wilcox k-ω model, which are frequently used models in practical fluid dynamic applications. Convergence behavior of the scheme with these turbulence models and the BL model are directly compared. For this initial investigation we solve the flow equations and the partial differential equations of the turbulence models indirectly coupled. With this approach we examine the convergence behavior of each system. Both point and line symmetric Gauss-Seidel are considered for approximating the inverse of the implicit operator of the flow solver. To solve the turbulence equations we use a diagonally dominant alternating direction implicit (DDADI) scheme. Computational results are presented for three airfoil flow cases and comparisons are made with experimental data. We demonstrate that the two-dimensional RANS equations and transport-type equations for turbulence modeling can be efficiently solved with an indirectly coupled algorithm that uses the RK/implicit scheme for the flow equations.
Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.
Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal
2017-11-08
Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signals can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.
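A small sketch of the Kronecker structure behind KCS: spatial and temporal sensing matrices combine as a Kronecker product, and the same measurements can be computed without ever forming the large product matrix; all dimensions below are illustrative.

    import numpy as np

    n_s, n_t = 20, 15        # sensor nodes and time slots (illustrative)
    m_s, m_t = 8, 6          # spatial and temporal measurement counts
    A = np.random.randn(m_s, n_s)    # spatial sensing (node selection along the walk)
    B = np.random.randn(m_t, n_t)    # temporal compressive measurements
    X = np.random.randn(n_s, n_t)    # space-time sensor readings

    y_big = np.kron(A, B) @ X.reshape(-1)   # explicit KCS measurement
    y_fast = (A @ X @ B.T).reshape(-1)      # identical result via the vec identity
    assert np.allclose(y_big, y_fast)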
Advance commitment: an alternative approach to the family veto problem in organ procurement.
De Wispelaere, Jurgen; Stirton, Lindsay
2010-03-01
This article tackles the current deficit in the supply of cadaveric organs by addressing the family veto in organ donation. The authors believe that the family veto matters, ethically as well as practically, and that policies that completely disregard the views of the family in this decision are likely to be counterproductive. Instead, this paper proposes to engage directly with the most important reasons why families often object to the removal of the organs of a loved one who has signed up to the donor registry, notably a failure to fully understand and deliberate on the information, and a reluctance to deal with this sort of decision at an emotionally distressing time. To accommodate these concerns, it is proposed to radically separate the process of information, deliberation and agreement about the harvesting of a potential donor's organs from the event of death and bereavement, through a scheme of advance commitment. This paper briefly sets out the proposal and discusses its design in some detail, as well as what the authors believe to be its main advantages compared with the leading alternatives.
Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng
2016-01-01
With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation within capacity constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs) equipped with single antennae serve multiple single-antennae users via multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel robust and efficient centralized algorithm based on alternating optimization strategy and perfect mapping is proposed. Simulations show that our novel method can improve the system capacity significantly under the constraint of the backhaul resource compared with the blind alternatives.
SPECT detectors: the Anger Camera and beyond
Peterson, Todd E.; Furenlid, Lars R.
2011-01-01
The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous NaI(Tl) scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic.
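A minimal sketch of the Anger arithmetic that gave the camera its name: the interaction position is estimated as the signal-weighted centroid of the photomultiplier outputs, and the signal sum estimates deposited energy; the PMT layout and signals below are invented.

    import numpy as np

    # PMT centre coordinates on a 3x3 grid (cm) and their measured signals (a.u.)
    pmt_xy = np.array([(x, y) for y in (-4, 0, 4) for x in (-4, 0, 4)], dtype=float)
    signals = np.array([0.1, 0.5, 0.2, 0.4, 3.0, 0.9, 0.2, 0.8, 0.3])

    total = signals.sum()                      # proportional to deposited energy
    x_hat, y_hat = signals @ pmt_xy / total    # Anger-logic centroid estimate
    print(x_hat, y_hat, total)

The advanced estimation schemes mentioned in the review replace this centroid with statistical models of the detector response.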
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, for the design space exploration of an aerospace system of systems, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining techniques of rough sets theory (RST) and self-organized mapping (SOM), the alternatives for the aerospace system of systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
Study for new hardmask process scheme
NASA Astrophysics Data System (ADS)
Lee, Daeyoup; Tatti, Phillip; Lee, Richard; Chang, Jack; Cho, Winston; Bae, Sanggil
2017-03-01
Hardmask processes are a key technique for enabling low-k semiconductors, but they can have an impact on patterning control, influencing defectivity, alignment, and overlay. Specifically, amorphous carbon layer (ACL) hardmask schemes can negatively affect overlay by creating distorted alignment signals. A new scheme needs to be developed that can be inserted where amorphous carbon is used but provides better alignment performance. Typical spin-on carbon (SOC) materials used in other hardmask schemes have issues with DCD-FCD skew. In this paper we evaluate a new spin-on carbon material with a higher carbon content that could be a candidate to replace amorphous carbon.
Implementation of a Cross-Layer Sensing Medium-Access Control Scheme.
Su, Yishan; Fu, Xiaomei; Han, Guangyao; Xu, Naishen; Jin, Zhigang
2017-04-10
In this paper, compressed sensing (CS) theory is utilized in a medium-access control (MAC) scheme for wireless sensor networks (WSNs). We propose a new cross-layer compressed sensing medium-access control (CL CS-MAC) scheme combining the physical layer and the data link layer, where the wireless transmission in the physical layer is treated as a compression of the requested packets in the data link layer according to CS theory. We first introduce the use of compressive complex requests to identify the exact active sensor nodes, which makes the scheme more efficient. Moreover, because the reconstruction process is executed in the complex field of the physical layer, where no bit or frame synchronization is needed, an asynchronous and random request scheme can be implemented without synchronization payload. We set up a testbed based on software-defined radio (SDR) to implement the proposed CL CS-MAC scheme and demonstrate its validity. For large-scale WSNs, the simulation results show that the proposed CL CS-MAC scheme provides higher throughput and robustness than the carrier sense multiple access (CSMA) and compressed sensing medium-access control (CS-MAC) schemes.
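The recovery step can be sketched with a greedy sparse solver: each node owns a signature column, active nodes superpose their signatures, and the sink identifies which nodes transmitted; the random complex signatures here are a stand-in for the physical-layer superposition described in the paper.

    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: recover the support of a k-sparse x from y = Phi x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.conj().T @ residual)))
            support.append(j)
            sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ sol
        return sorted(support)

    rng = np.random.default_rng(1)
    n_nodes, m = 64, 16
    Phi = (rng.standard_normal((m, n_nodes))
           + 1j * rng.standard_normal((m, n_nodes))) / np.sqrt(2.0 * m)
    active = [5, 23, 41]                    # nodes requesting the channel this slot
    y = Phi[:, active].sum(axis=1)          # superposed requests seen at the sink
    print(omp(Phi, y, 3))                   # recovers [5, 23, 41] with high probability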
On the effectiveness of a license scheme for E-waste recycling: The challenge of China and India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinkuma, Takayoshi, E-mail: shinkuma@kansai-u.ac.j; Managi, Shunsuke, E-mail: managi@ynu.ac.j
2010-07-15
It is well known that China and India have been recycling centers of WEEE, especially printed circuit boards, and that serious environmental pollution in these countries has been generated by improper recycling methods. After the governments of China and India banned improper recycling by the informal sector, improper recycling activities spread to other places. Then, these governments changed their policies to one of promoting proper recycling by introducing a scheme under which E-waste recycling requires a license issued by the government. In this paper, the effectiveness of that license scheme is examined by means of an economic model. It can be shown that the license scheme can work effectively only if disposers of E-waste have a responsibility to sell E-waste to license holders. Our results run counter to the idea that international E-waste trade should be banned and provide an alternative solution to the problem.
Lu, Chi-Jie; Chang, Chi-Chang
2014-01-01
Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has thus become an important issue in operating a business. This study proposes a hybrid sales forecasting scheme combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses ICA to extract hidden information from the observed sales data. The extracted features are then passed to the K-means algorithm to cluster the sales data into several disjoint clusters. Finally, SVR forecasting models are applied to each cluster to generate the final forecasting results. Experimental results on information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting.
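A minimal sketch of the three-stage pipeline on synthetic data follows; the component count, cluster count and SVR hyperparameters are illustrative, not the settings tuned in the study.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((300, 8))                        # lagged sales features per period
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, 300)   # next-period sales (synthetic)

ica = FastICA(n_components=4, random_state=0)   # 1) extract hidden components
feats = ica.fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(feats)   # 2) cluster

models = {c: SVR(C=10.0, epsilon=0.01).fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in np.unique(km.labels_)}       # 3) one SVR per disjoint cluster

x_new = rng.random((1, 8))
c = km.predict(ica.transform(x_new))[0]         # route a new sample to its cluster
print("forecast:", models[c].predict(x_new)[0])
```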
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
There has been some recent work to develop two- and three-dimensional alternating direction implicit (ADI) FDTD schemes. These ADI schemes are based upon the original ADI concept developed by Peaceman and Rachford and by Douglas and Gunn, which is a popular solution method in Computational Fluid Dynamics (CFD). These ADI schemes work well, but they require the solution of a tridiagonal system of equations. A new approach proposed in this paper applies an LU/AF approximate factorization technique from CFD to Maxwell's equations in flux-conservative form in one space dimension. The result is a scheme that retains unconditional stability in three space dimensions but does not require the solution of tridiagonal systems. The theory for this new algorithm is outlined in a one-dimensional context for clarity. An extension to two- and three-dimensional cases is discussed. Results of Fourier analysis are discussed for both the stability and the dispersion/damping properties of the algorithm. Results are presented for a one-dimensional model problem, with the explicit FDTD algorithm chosen as a convenient reference for comparison.
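For context, the tridiagonal solves that conventional ADI sweeps require are typically done with the Thomas algorithm; the following generic sketch (not code from the paper) shows the per-sweep solve that the LU/AF scheme is designed to avoid.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 5
a = np.full(n, -1.0); a[0] = 0.0               # a[0] unused
b = np.full(n, 2.0)
c = np.full(n, -1.0); c[-1] = 0.0              # c[-1] unused
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(A @ x, d)                   # verify the solve
```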
Quantum nondemolition measurement of the Werner state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Jiasen; Yu Changshui; Pei Pei
2010-10-15
We propose a theoretical scheme for quantum nondemolition measurement of a two-qubit Werner state. We discuss our scheme with the two qubits restricted to a single location and then extend the scheme to the case in which the two qubits are spatially separated. We also consider the experimental realization of our scheme based on cavity quantum electrodynamics. It is very interesting that our scheme is robust against the dissipative effects introduced by the probe process. Finally, we give a brief interpretation of our scheme.
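For reference, the standard form of the two-qubit Werner state addressed by the scheme is the singlet state mixed with white noise:

```latex
\rho_W \;=\; p\,\lvert \Psi^- \rangle\langle \Psi^- \rvert \;+\; \frac{1-p}{4}\,\mathbb{I}_4 ,
\qquad
\lvert \Psi^- \rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\lvert 01 \rangle - \lvert 10 \rangle\bigr),
\qquad 0 \le p \le 1 .
```

The state is entangled only for p > 1/3, which is the regime where a nondemolition probe of the mixing parameter is most interesting.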
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au
We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
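The discrete stochastic picture underlying such schemes can be illustrated with a continuous-time random walk whose heavy-tailed waiting times produce subdiffusion; the following Monte Carlo sketch is a generic illustration with illustrative parameters, not the paper's master-equation scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, walkers, t_max = 0.7, 2000, 1000.0   # tail index < 1 gives subdiffusion

positions = np.zeros(walkers)
for w in range(walkers):
    t, x = 0.0, 0
    while True:
        t += 1.0 + rng.pareto(alpha)        # heavy-tailed waiting time (Pareto)
        if t > t_max:
            break
        x += rng.choice((-1, 1))            # unbiased unit jump
    positions[w] = x

print("MSD at t_max:", np.mean(positions**2))   # grows like t**alpha, not t
```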
Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...
High Performance Thin Layer Chromatography.
ERIC Educational Resources Information Center
Costanzo, Samuel J.
1984-01-01
Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)
Introduction of the Floquet-Magnus expansion in solid-state nuclear magnetic resonance spectroscopy.
Mananga, Eugène S; Charpentier, Thibault
2011-07-28
In this article, we present an alternative expansion scheme, called the Floquet-Magnus expansion (FME), used to solve a time-dependent linear differential equation, a central problem in quantum physics in general and in solid-state nuclear magnetic resonance (NMR) in particular. The methods commonly used to treat theoretical problems in solid-state NMR are average Hamiltonian theory (AHT) and Floquet theory (FT), which have been successful for designing sophisticated pulse sequences and understanding different experiments. To the best of our knowledge, this is the first report of the FME scheme in the context of solid-state NMR, and we compare this approach with other series expansions. We present a modified FME scheme highlighting the importance of the (time-periodic) boundary conditions. This modified scheme greatly simplifies the calculation of higher-order terms and is shown to be equivalent to Floquet theory (single- or multimode time dependence), but allows one to derive the effective Hamiltonian in the Hilbert space. Basic applications of the FME scheme are described and compared to previous treatments based on AHT, FT, and static perturbation theory. We also discuss the convergence aspects of the three schemes (AHT, FT, and FME) and present the relevant references. © 2011 American Institute of Physics.
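For orientation, the first two terms of the standard average-Hamiltonian (Magnus) expansion over one period T, against which the FME is compared, are

```latex
U(T) = \exp\!\bigl\{-\,i\,T\,(\bar{H}^{(0)} + \bar{H}^{(1)} + \cdots)\bigr\},
\qquad
\bar{H}^{(0)} = \frac{1}{T}\int_0^T H(t)\,dt ,
\qquad
\bar{H}^{(1)} = \frac{-i}{2T}\int_0^T\! dt_2 \int_0^{t_2}\! dt_1\,\bigl[H(t_2),\,H(t_1)\bigr].
```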
On resilience studies of system detection and recovery techniques against stealthy insider attacks
NASA Astrophysics Data System (ADS)
Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.
2016-05-01
With the explosive growth of network technologies, insider attacks have become a major concern for business operations that rely largely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal-based detection scheme using the sequential hypothesis testing technique. Two hypothetical states are considered: the null hypothesis that the collected information comes from benign historical traffic, and the alternative hypothesis that the network is under attack. The objective of such a detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once the attack is detected, a server migration-based system recovery scheme can be triggered to restore the system to its state prior to the attack. To understand mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme perform well, effectively detecting insider attacks and recovering the system under attack.
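A minimal sketch of the sequential test (Wald's SPRT) on a scalar traffic statistic follows, assuming Gaussian models for the two hypotheses; the means, variance and error targets are illustrative assumptions, not the paper's traffic models.

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 0.5, 1.0           # benign vs. attack traffic models
alpha, beta = 0.01, 0.01                  # target false-alarm / miss probabilities
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

rng = np.random.default_rng(4)
llr = 0.0                                  # cumulative log-likelihood ratio
for t in range(1, 10000):
    x = rng.normal(mu1, sigma)             # one traffic sample (attack injected here)
    llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
    if llr >= A:
        print(f"attack declared at sample {t}: trigger server migration")
        break
    if llr <= B:
        print(f"benign traffic declared at sample {t}")
        break
```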
Improving the Slum Planning Through Geospatial Decision Support System
NASA Astrophysics Data System (ADS)
Shekhar, S.
2014-11-01
In India, a number of schemes and programmes have been launched from time to time in order to promote integrated city development and to enable slum dwellers to gain access to basic services. Despite the use of geospatial technologies in planning, the local, state and central governments have been only partially successful in dealing with these problems. The study of existing policies and programmes also showed that when the government is the sole provider or mediator, GIS can become a tool of coercion rather than participatory decision-making. It has also been observed that local-level administrators who have adopted geospatial technology for local planning continue to base decision-making on existing political processes. At this juncture, a geospatial decision support system (GSDSS) can provide a framework for integrating database management systems with analytical models, graphical display, tabular reporting capabilities and the expert knowledge of decision makers. This assists decision-makers in generating and evaluating alternative solutions to spatial problems. During this process, decision-makers undertake a process of decision research - producing a large number of possible decision alternatives - and provide opportunities to involve the community in decision making. The objective is to help decision makers and planners find solutions through a quantitative spatial evaluation and verification process. The study investigates the options for slum development within the formal framework of RAY (Rajiv Awas Yojana), an ambitious programme of the Indian Government for slum development. The software modules for realizing the GSDSS were developed using ArcGIS and CommunityViz software for Gulbarga city.
New, Improved Goddard Bulk-Microphysical Schemes for Studying Precipitation Processes in WRF
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2007-01-01
An improved bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel) and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of the microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The Goddard 3ICE scheme with a cloud ice-snow-hail configuration agreed better with observations in terms of rainfall intensity and a narrow convective line than did simulations with a cloud ice-snow-graupel or cloud ice-snow (i.e., 2ICE) configuration. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) particle with a very fast fall speed (over 10 m/s). For an Atlantic hurricane case, the Goddard microphysical schemes had no significant impact on the track forecast but did affect the intensity slightly. The improved Goddard schemes are also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes resulted in simulated precipitation events that were elongated in the southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE scheme with the hail option and the Thompson scheme agreed better with observations in terms of rainfall intensity, except that the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes. The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, which is an important issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and hurricane cases. Sensitivity tests performed with these two WRF schemes show that snow production could be increased by increasing the snow intercept, turning off the auto-conversion from snow to graupel and reducing the transfer processes from cloud-sized particles to precipitation-sized ice.
A novel color image encryption scheme using alternate chaotic mapping structure
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang
2016-07-01
This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, we use the R, G and B components to form a matrix. Then one-dimensional and two-dimensional logistic maps are used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. In every iteration, an XOR operation is adopted to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. At last, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results have shown that the cryptosystem is secure and practical, and that it is suitable for encrypting color images.
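The XOR step of such schemes can be sketched with a logistic-map keystream as below; the seed, control parameter and stand-in image are illustrative, and the paper's full scheme additionally permutes and diffuses the matrix.

```python
import numpy as np

def logistic_stream(x0, r, n):
    """Iterate the logistic map and quantize the orbit to a byte keystream."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return (out * 256).astype(np.uint8)

rng = np.random.default_rng(9)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)    # stand-in RGB image
key = logistic_stream(x0=0.3671, r=3.9999, n=img.size)
cipher = (img.reshape(-1) ^ key).reshape(img.shape)
plain = (cipher.reshape(-1) ^ key).reshape(img.shape)      # XOR is its own inverse
assert np.array_equal(plain, img)
```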
Crossbar H-mode drift-tube linac design with alternative phase focusing for muon linac
NASA Astrophysics Data System (ADS)
Otani, M.; Futatsukawa, K.; Hasegawa, K.; Kitamura, R.; Kondo, Y.; Kurennoy, S.
2017-07-01
We have developed a Crossbar H-mode (CH) drift-tube linac (DTL) design with an alternative phase focusing (APF) scheme for a muon linac, in order to measure the anomalous magnetic moment and electric dipole moment (EDM) of muons at the Japan Proton Accelerator Research Complex (J-PARC). The CH-DTL accelerates muons from β = v/c = 0.08 to 0.28 at an operational frequency of 324 MHz. The design and results are described in this paper.
de Wolf, Watze; Comber, Mike; Douben, Peter; Gimeno, Sylvia; Holt, Martin; Léonard, Marc; Lillicrap, Adam; Sijm, Dick; van Egmond, Roger; Weisbrod, Anne; Whale, Graham
2007-01-01
When addressing the use of fish for the environmental safety assessment of chemicals and effluents, there are many opportunities for applying the principles of the 3Rs: Reduce, Refine, and Replace. The current environmental regulatory testing strategy for bioconcentration and secondary poisoning has been reviewed, and alternative approaches that provide useful information are described. Several approaches can be used to reduce the number of fish used in the Organization for Economic Cooperation and Development (OECD) Test Guideline 305, including alternative in vivo test methods such as the dietary accumulation test and the static exposure approach. The best replacement approach would seem to use read-across, chemical grouping, and quantitative structure-activity relationships with an assessment of the key processes in bioconcentration: absorption, distribution, metabolism, and excretion. Biomimetic extraction is particularly useful for addressing bioavailable chemicals and is in some circumstances capable of predicting uptake. Use of alternative organisms such as invertebrates should also be considered. A single cut-off value for molecular weight and size beyond which no absorption will take place cannot be identified. Recommendations for their use in bioaccumulative (B) categorization schemes are provided. Assessment of biotransformation with in vitro assays and in silico approaches holds significant promise. Further research is needed to identify their variability and confidence limits and the ways to use this as a basis to estimate bioconcentration factors. A tiered bioconcentration testing strategy has been developed taking account of the alternatives discussed.
Two-stage atlas subset selection in multi-atlas based image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
2015-06-15
Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but it also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs arising from large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of the desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
Ghosh, Dilip; Skinner, Margot; Ferguson, Lynnette R
2006-04-03
Currently, the regulation of complementary and alternative medicines and related health claims in Australia and New Zealand is managed in a number of ways. Complementary medicines, including herbal, mineral, nutritional/dietary supplements, aromatherapy oils and homeopathic medicines, are regulated under therapeutic goods/products legislation. The Therapeutic Goods Administration (TGA), a division of the Commonwealth Department of Health and Ageing, is responsible for administering the provisions of the legislation in Australia. The New Zealand Medicines and Medical Devices Safety Authority (Medsafe) administers the provisions of the legislation in New Zealand. In December 2003 the Australian and New Zealand governments signed a Treaty to establish a single, bi-national agency to regulate therapeutic products, including medical devices and prescription, over-the-counter and complementary medicines. A single agency will replace the Australian TGA and the New Zealand Medsafe. The role of the new agency will be to safeguard public health through regulation of the quality, safety and efficacy or performance of therapeutic products in both Australia and New Zealand. The major activities of the new joint Australia New Zealand therapeutic products agency are in product licensing, specifying labelling standards and setting the advertising scheme, together with determining the risk classes of medicines and creating an expanded list of ingredients permitted in Class I medicines. A new, expanded definition of complementary medicines is proposed, and this definition is currently under consultation. Related Australian and New Zealand legislation is being developed to implement the joint scheme. Once this legislation is passed, the Treaty will come into force and the new joint regulatory scheme will begin. The agency is expected to commence operation no later than 1 July 2006 and will result in a single agency regulating complementary and alternative medicines.
Mechanical Extraction of Power From Ocean Currents and Tides
NASA Technical Reports Server (NTRS)
Jones, Jack; Chao, Yi
2010-01-01
A proposed scheme for generating electric power from rivers and from ocean currents, tides, and waves is intended to offer economic and environmental advantages over prior such schemes, some of which are at various stages of implementation, others of which have not yet advanced beyond the concept stage. This scheme would be less environmentally objectionable than prior schemes that involve the use of dams to block rivers and tidal flows. This scheme would also not entail the high maintenance costs of other proposed schemes that call for submerged electric generators and cables, which would be subject to degradation by marine growth and corrosion. A basic power-generation system according to the scheme now proposed would not include any submerged electrical equipment. The submerged portion of the system would include an all-mechanical turbine/pump unit that would superficially resemble a large land-based wind turbine. The turbine axis would turn slowly as it captured energy from the local river flow, ocean current, tidal flow, or flow from an ocean-wave device. The turbine axis would drive a pump through a gearbox to generate an enclosed flow of water, hydraulic fluid, or other suitable fluid at relatively high pressure [typically about 500 psi (3.4 MPa)]. The pressurized fluid could be piped to an onshore or offshore facility, above the ocean surface, where it would be used to drive a turbine that, in turn, would drive an electric generator. The fluid could be recirculated between the submerged unit and the power-generation facility in a closed flow system; alternatively, if the fluid were seawater, it could be taken in from the ocean at the submerged turbine/pump unit and discharged back into the ocean from the power-generation facility. Another alternative would be to use the pressurized flow to charge an elevated reservoir or other pumped-storage facility, from which fluid could later be released to drive a turbine/generator unit at a time of high power demand. Multiple submerged turbine/pump units could be positioned across a channel to extract more power than could be extracted by a single unit. In that case, the pressurized flows in their output pipes would be combined, via check valves, into a wider pipe that would deliver the combined flow to a power-generating or pumped-storage facility.
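As a quick sanity check on the concept, the hydraulic power carried by the enclosed flow is pressure times volumetric flow rate; the flow rate below is an assumed, illustrative figure:

```latex
P \;=\; \Delta p \, Q \;\approx\; 3.4\ \mathrm{MPa} \times 0.1\ \mathrm{m^3/s} \;=\; 340\ \mathrm{kW}.
```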
Fuel quality/processing study. Volume 3: Fuel upgrading studies
NASA Technical Reports Server (NTRS)
Jones, G. E., Jr.; Bruggink, P.; Sinnett, C.
1981-01-01
The methods used to calculate the refinery selling prices for the low-quality turbine fuels are described. Detailed descriptions and economics of the upgrading schemes are included. These descriptions include flow diagrams showing the interconnections between processes and the stream flows involved. Each scheme is a complete, integrated, stand-alone facility. Except for the purchase of electricity and water, each scheme provides its own fuel and manufactures, when appropriate, its own hydrogen.
NASA Astrophysics Data System (ADS)
Lass, Wiebke; Reusswig, Fritz
2014-05-01
Lost in Translation? Introducing Planetary Boundaries into Social Systems. Identifying and quantifying planetary boundaries by interdisciplinary science efforts is a challenging task - and a risky one, as the 1972 Limits to Growth publication has shown. Even if we may be assured that scientific understanding of the underlying processes of the Earth system has significantly improved since then, the challenge of translating these findings into the social systems of the planet remains crucial for any kind of action, and in many respects far more challenging. We would like to conceptualize what could also be termed a problem of coupling social and natural systems as a nested set of social translation processes, well aware of the limited applicability of the language-related translation metaphor. Societies must, first, perceive these boundaries, and they have to understand their relevance. This includes, among many other things, the organization of transdisciplinary scientific cooperation. They will then have to translate this understood perception into possible actions, i.e. strategies for different local bodies, actors, and institutional settings. This implies many 'internal' translation processes, e.g. from the scientific subsystem to the mass media and the political and economic subsystems. And it implies developing subsystem-specific schemes of evaluation for these alternatives, e.g. convincing narratives, cost-benefit analyses, or ethical legitimacy considerations. And, finally, societies have to translate chosen action alternatives into monitoring and evaluation schemes, e.g. for agricultural production or renewable energies. This process includes the continuation of observing and re-analyzing the planetary boundary concept itself, as a re-adjustment of these boundaries in the light of new scientific insights cannot be excluded. Taken together, societies may well get lost in translation here - and we have not yet mentioned the societal management of other problems, such as wars and civil wars, or 'taming' the global financial markets. After having sketched this conceptual outline in some detail, we would like to focus on three planetary boundaries for illustrative purposes: GHG emissions, nitrogen fertilization, and biodiversity loss, and highlight some similarities as well as dissimilarities in the social translation processes involved. We would limit the range of examples to the EU, USA, and India. In a last step, we would like to illustrate a promising way of translating one specific planetary boundary - anthropogenic climate change - by a case study of how it is translated into urban energy and climate policies, with the example of climate-neutral Berlin 2050.
Kariuki, C M; Komen, H; Kahi, A K; van Arendonk, J A M
2014-12-01
Dairy cattle breeding programs in developing countries are constrained by minimal and erratic pedigree and performance recording on cows on commercial farms. Small-sized nucleus breeding programs offer a viable alternative. Deterministic simulations using selection index theory were performed to determine the optimum design for small-sized nucleus schemes for dairy cattle. The nucleus was made up of 197 bulls and 243 cows distributed in 8 non-overlapping age classes. Each year 10 sires and 100 dams were selected to produce the next generation of male and female selection candidates. Conception rates and sex ratio were fixed at 0.90 and 0.50, respectively, translating to 45 male and 45 female candidates joining the nucleus per year. Commercial recorded dams provided information for genetic evaluation of selection candidates (bulls) in the nucleus. Five strategies were defined: nucleus records only [within-nucleus dam performance (DP)], progeny records in addition to nucleus records [progeny testing (PT)], genomic information only [genomic selection (GS)], dam performance records in addition to genomic information (GS+DP), and progeny records in addition to genomic information (GS+PT). Alternative PT, GS, GS+DP, and GS+PT schemes differed in the number of progeny per sire and size of reference population. The maximum number of progeny records per sire was 30, and the maximum size of the reference population was 5,000. Results show that GS schemes had higher responses and lower accuracies compared with other strategies, with the higher response being due to shorter generation intervals. Compared with similar sized progeny-testing schemes, genomic-selection schemes would have lower accuracies but these are offset by higher responses per year, which might provide additional incentive for farmers to participate in recording. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
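The response-accuracy trade-off reported here follows from the standard annual-response formula of selection index theory, where i is the selection intensity, r the accuracy, sigma_A the additive genetic standard deviation and L the generation interval:

```latex
\Delta G_{\mathrm{year}} \;=\; \frac{i \, r \, \sigma_A}{L}
```

Genomic selection reduces r but shortens L proportionally more, so the response per year increases, consistent with the simulation results above.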
Gas stripping and mixing in galaxy clusters: a numerical comparison study
NASA Astrophysics Data System (ADS)
Heß, Steffen; Springel, Volker
2012-11-01
The ambient hot intrahalo gas in clusters of galaxies is constantly fed and stirred by infalling galaxies, a process that can be studied in detail with cosmological hydrodynamical simulations. However, different numerical methods yield discrepant predictions for crucial hydrodynamical processes, leading for example to different entropy profiles in clusters of galaxies. In particular, the widely used Lagrangian smoothed particle hydrodynamics (SPH) scheme is suspected to strongly damp fluid instabilities and turbulence, which are both crucial for establishing the thermodynamic structure of clusters. In this study, we test to what extent our recently developed Voronoi particle hydrodynamics (VPH) scheme yields different results for the stripping of gas out of infalling galaxies and for the bulk gas properties of clusters. We consider both the evolution of isolated galaxy models that are exposed to a stream of intracluster medium or are dropped into cluster models, as well as non-radiative cosmological simulations of cluster formation. We also compare our particle-based method with results obtained with a fundamentally different discretization approach as implemented in the moving-mesh code AREPO. We find that VPH leads to noticeably faster stripping of gas out of galaxies than SPH, in better agreement with the mesh code than with SPH. We show that although VPH in its present form is not as accurate as the moving-mesh code in the cases investigated, its improved accuracy of gradient estimates makes VPH an attractive alternative to SPH.
2012-01-01
Background: The Danish Multiple Sclerosis Society initiated a large-scale bridge building and integrative treatment project to take place from 2004–2010 at a specialized Multiple Sclerosis (MS) hospital. In this project, a team of five conventional health care practitioners and five alternative practitioners was set up to work together in developing and offering individualized treatments to 200 people with MS. The purpose of this paper is to present results from the six-year treatment collaboration process regarding the development of an integrative treatment model. Discussion: The collaborative work towards an integrative treatment model for people with MS involved six steps: 1) working with an initial model; 2) unfolding the different treatment philosophies; 3) discussing the elements of the Intervention-Mechanism-Context-Outcome scheme (the IMCO scheme); 4) phrasing the common assumptions for an integrative MS program theory; 5) developing the integrative MS program theory; 6) building the integrative MS treatment model. The model includes important elements of the different treatment philosophies represented in the team and thereby describes a common understanding of the complexity of the courses of treatment. Summary: An integrative team of practitioners has developed an integrative model for combined treatments of people with Multiple Sclerosis. The model unites different treatment philosophies and focuses on process-oriented factors and the strengthening of the patients' resources and competences on a physical, an emotional and a cognitive level. PMID:22524586
Modeling for waste management associated with environmental-impact abatement under uncertainty.
Li, P; Li, Y P; Huang, G H; Zhang, J L
2015-04-01
Municipal solid waste (MSW) treatment can generate significant amounts of pollutants and thus poses a risk to human health. Moreover, in MSW management, various uncertainties exist in the related costs, impact factors, and objectives, which can affect the optimization processes and the decision schemes generated. In this study, a life cycle assessment-based interval-parameter programming (LCA-IPP) method is developed for MSW management associated with environmental-impact abatement under uncertainty. The LCA-IPP can effectively examine the environmental consequences based on a number of environmental impact categories (i.e., greenhouse gas equivalent, acid gas emissions, and respiratory inorganics) by analyzing each life cycle stage and/or major contributing process related to various MSW management activities. It can also tackle uncertainties that exist in the related costs, impact factors, and objectives and that are expressed as interval numbers. The LCA-IPP method is then applied to MSW management for the City of Beijing, the capital of China, where energy consumption and six environmental parameters [i.e., CO2, CO, CH4, NOX, SO2, and inhalable particles (PM10)] are used as a systematic tool to quantify environmental releases across the entire life cycle stages of waste collection, transportation, treatment, and disposal. Results associated with system cost, environmental impact, and the related policy implications are generated and analyzed. The results can help identify desired alternatives for managing MSW flows, with the advantage of providing compromise schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty.
Recent developments in the structural design and optimization of ITER neutral beam manifold
NASA Astrophysics Data System (ADS)
Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI
2018-02-01
This paper describes a new design of the neutral beam manifold based on a more optimized support system. An alternative scheme is proposed to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and the feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support scheme were performed. All relevant results of the mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined setting of components in the next phase.
Efficiency of exchange schemes in replica exchange
NASA Astrophysics Data System (ADS)
Lingenheil, Martin; Denschlag, Robert; Mathias, Gerald; Tavan, Paul
2009-08-01
In replica exchange simulations a fast diffusion of the replicas through the temperature space maximizes the efficiency of the statistical sampling. Here, we compare the diffusion speed as measured by the round trip rates for four exchange algorithms. We find different efficiency profiles with optimal average acceptance probabilities ranging from 8% to 41%. The best performance is determined by benchmark simulations for the most widely used algorithm, which alternately tries to exchange all even and all odd replica pairs. By analytical mathematics we show that the excellent performance of this exchange scheme is due to the high diffusivity of the underlying random walk.
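A minimal sketch of the alternating even/odd exchange sweep follows; the replica energies are stand-ins for a real sampler, and the per-replica propagation step is elided.

```python
import numpy as np

rng = np.random.default_rng(5)
betas = 1.0 / np.linspace(1.0, 2.0, 8)     # inverse temperatures of 8 replicas
energies = rng.normal(-100.0, 5.0, 8)      # stand-in replica energies

def exchange_sweep(betas, energies, parity):
    """Attempt swaps on neighbor pairs (i, i+1) with i of the given parity."""
    for i in range(parity, len(betas) - 1, 2):
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if delta >= 0.0 or rng.random() < np.exp(delta):   # Metropolis rule
            energies[i], energies[i + 1] = energies[i + 1], energies[i]

for sweep in range(100):
    # ... propagate each replica by MD/MC here (elided) ...
    exchange_sweep(betas, energies, parity=sweep % 2)      # alternate even/odd
```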
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sampling unit sizes shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
A new approach for cancelable iris recognition
NASA Astrophysics Data System (ADS)
Yang, Kai; Sui, Yan; Zhou, Zhi; Du, Yingzi; Zou, Xukai
2010-04-01
The iris is a stable and reliable biometric for positive human identification. However, the traditional iris recognition scheme raises several privacy concerns. A person's iris pattern is permanently bound to them and cannot be changed. Hence, once it is stolen, this biometric is lost forever, along with all the applications in which it is used. Thus, new methods are desirable to secure the original pattern and to ensure revocability and alternatives when it is compromised. In this paper, we propose a novel scheme which incorporates iris features, a non-invertible transformation and data encryption to achieve "cancelability" and at the same time increase iris recognition accuracy.
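One generic way to build a revocable, non-invertible template (not necessarily the authors' construction) is a keyed random projection followed by binarization, sketched below with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(6)

def cancelable_template(iris_code, user_key, out_dim=256):
    """Keyed random projection + sign binarization: many-to-one, hard to invert."""
    proj = np.random.default_rng(user_key).standard_normal((out_dim, iris_code.size))
    return (proj @ iris_code > 0).astype(np.uint8)

iris_code = rng.standard_normal(2048)             # stand-in iris feature vector
t1 = cancelable_template(iris_code, user_key=42)
t2 = cancelable_template(iris_code, user_key=99)  # re-issue after compromise
print("fraction of bits differing:", np.mean(t1 != t2))   # ~0.5: unlinkable
```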
Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect
NASA Astrophysics Data System (ADS)
Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed
2008-12-01
Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing the mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. The data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of the existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of the modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based services to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.
NASA Technical Reports Server (NTRS)
Tao, W.K.; Shi, J.J.; Braun, S.; Simpson, J.; Chen, S.S.; Lang, S.; Hong, S.Y.; Thompson, G.; Peters-Lidard, C.
2009-01-01
A Goddard bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel) and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of the microphysical schemes on different weather events: a midlatitude linear convective system and an Atlantic hurricane. The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The Goddard 3ICE scheme with the cloud ice-snow-hail configuration agreed better with observations in terms of rainfall intensity and a narrower convective line than did simulations with the cloud ice-snow-graupel and cloud ice-snow (i.e., 2ICE) configurations. This is because the Goddard 3ICE-hail configuration has denser precipitating ice particles (hail) with very fast fall speeds (over 10 m/s). For an Atlantic hurricane case, the Goddard microphysical scheme (with 3ICE-hail, 3ICE-graupel and 2ICE configurations) had no significant impact on the track forecast but did affect the intensity slightly. The Goddard scheme is also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes resulted in simulated precipitation events that were elongated in the southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE-hail and Thompson schemes were closest to the observed rainfall intensities, although the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes. The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, which is an issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and hurricane cases. Sensitivity tests with these two schemes showed that increasing the snow intercept, turning off the auto-conversion from snow to graupel, eliminating dry growth, and reducing the transfer processes from cloud-sized particles to precipitation-sized ice collectively resulted in a net increase in those schemes' snow amounts.
Pricing Models and Payment Schemes for Library Collections.
ERIC Educational Resources Information Center
Stern, David
2002-01-01
Discusses new pricing and payment options for libraries in light of online products. Topics include alternative cost models rather than traditional subscriptions; use-based pricing; changes in scholarly communication due to information technology; methods to determine appropriate charges for different organizations; consortial plans; funding; and…
Double emittance exchanger as a bunch compressor for the MaRIE XFEL electron beam line at 1 GeV
NASA Astrophysics Data System (ADS)
Malyzhenkov, Alexander; Carlsten, Bruce E.; Yampolsky, Nikolai A.
2017-03-01
We demonstrate an alternative realization of a bunch compressor (specifically, the second bunch compressor for the MaRIE XFEL beamline, at 1 GeV electron energy) using a double emittance exchanger (EEX) and a telescope in the transverse phase space. We compare our results with a traditional bunch compressor realized via a chicane, taking into account the nonlinear dynamics, Coherent Synchrotron Radiation (CSR) and Space Charge (SC) effects. In particular, we use the Elegant code for tracking particles through the beamline, and analyze the evolution of the eigen-emittances to separate the influence of the CSR/SC effects from the nonlinear dynamics effects. We optimize the scheme parameters to reach a desirable compression factor and minimize the emittance growth. We observe dominant CSR effects in our scheme, resulting in critical emittance growth, and introduce an alternative version of the emittance exchanger with a reduced number of bending magnets to minimize the impact of CSR effects.
Double Emittance Exchanger as a Bunch Compressor for the MaRIE XFEL electron beam line at 1GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyzhenkov, Alexander; Yampolsky, Nikolai; Carlsten, Bruce Eric
We demonstrate an alternative realization of a bunch compressor (specifically, the second bunch compressor for the MaRIE XFEL beamline, at 1 GeV electron energy) using a double emittance exchanger (EEX) and a telescope in the transverse phase space. We compare our results with a traditional bunch compressor realized via a chicane, taking into account the nonlinear dynamics, Coherent Synchrotron Radiation (CSR) and Space Charge (SC) effects. In particular, we use the Elegant code for tracking particles through the beam line and analyze the evolution of the eigen-emittances to separate the influence of the CSR/SC effects from the nonlinear dynamics effects. We optimize the scheme parameters to reach a desirable compression factor and minimize the emittance growth. We observe dominant CSR effects in our scheme, resulting in critical emittance growth, and introduce an alternative version of the emittance exchanger with a reduced number of bending magnets to minimize the impact of CSR effects.
Dynamics of moment neuronal networks.
Feng, Jianfeng; Deng, Yingchun; Rossoni, Enrico
2006-04-01
A theoretical framework is developed for moment neuronal networks (MNNs). Within this framework, the behavior of a system of spiking neurons is specified in terms of the first- and second-order statistics of their interspike intervals, i.e., the mean, the variance, and the cross-correlations of spike activity. Since neurons emit and receive spike trains which can be described by renewal--but generally non-Poisson--processes, we first derive a suitable diffusion-type approximation of such processes. Two approximation schemes are introduced: the usual approximation scheme (UAS) and the Ornstein-Uhlenbeck scheme. It is found that both schemes approximate well the input-output characteristics of spiking models such as the integrate-and-fire (IF) and Hodgkin-Huxley models. The MNN framework is then developed according to the UAS scheme, and its predictions are tested on a few examples.
Statistical process control based chart for information systems security
NASA Astrophysics Data System (ADS)
Khan, Mansoor S.; Cui, Lirong
2015-07-01
Intrusion detection systems have a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions into information systems. We put forward the concept of statistical process control (SPC) for detecting intrusions into computer networks and information systems. In this article we propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners. We compare the proposed scheme with existing EWMA schemes and the p chart; finally, we provide some recommendations for future work.
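A minimal EWMA monitoring sketch for a network metric follows; the smoothing weight, limit width and in-control parameters are illustrative choices, not the proposed scheme's single-parameter design.

```python
import numpy as np

lam, L = 0.2, 3.0                       # smoothing weight and control-limit width
mu0, sigma = 100.0, 10.0                # in-control mean and SD of the metric

def ewma_monitor(xs):
    z = mu0
    for t, x in enumerate(xs, 1):
        z = lam * x + (1 - lam) * z     # EWMA statistic
        # time-varying control limit for the EWMA statistic
        w = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu0) > w:
            return t                    # signal: possible intrusion
    return None

rng = np.random.default_rng(7)
xs = np.concatenate([rng.normal(100, 10, 50),
                     rng.normal(115, 10, 50)])   # mean shift at t = 50
print("alarm at observation:", ewma_monitor(xs))
```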
Dew, Angela; Barton, Rebecca; Ragen, Jo; Bulkeley, Kim; Iljadica, Alexandra; Chedid, Rebecca; Brentnall, Jennie; Bundy, Anita; Lincoln, Michelle; Gallego, Gisselle; Veitch, Craig
2016-12-01
The Australian National Disability Insurance Scheme (NDIS) will provide people with individual funding with which to purchase services such as therapy from private providers. This study developed a framework to support rural private therapists to meet the anticipated increase in demand. The study consisted of three stages utilizing focus groups, interviews and an online expert panel. Participants included private therapists delivering services in rural New South Wales (n = 28), disability service users (n = 9) and key representatives from a range of relevant consumer and service organizations (n = 16). We conducted a thematic analysis of focus groups and interview data and developed a draft framework which was subsequently refined based on feedback from stakeholders. The framework highlights the need for a 'rural-proofed' policy context in which service users, therapists and communities engage collaboratively in a therapy pathway. This collaborative engagement is supported by enablers, including networks, resources and processes which are influenced by the drivers of time, cost, opportunity and motivation. The framework identifies factors that will facilitate delivery of high-quality, sustainable, individualized private therapy services for people with a disability in rural Australia under the NDIS and emphasizes the need to reconceptualize the nature of private therapy service delivery. Implications for Rehabilitation Rural private therapists need upskilling to work with individuals with disability who have individual funding such as that provided by the Australian National Disability Insurance Scheme. Therapists working in rural communities need to consider alternative ways of delivering therapy to individuals with disability beyond the traditional one-on-one therapy models. Rural private therapists need support to work collaboratively with individuals with disability and the local community. Rural private therapists should harness locally available and broader networks, resources and processes to meet the needs and goals of individuals with disability.
Prous, Xavier; Zampaulo, Robson; Giannini, Tereza C.; Imperatriz-Fonseca, Vera L.; Maurity, Clóvis; Oliveira, Guilherme; Brandi, Iuri V.; Siqueira, José O.
2016-01-01
Caves pose significant challenges for mining projects, since they harbor many endemic and threatened species, and must therefore be protected. Recent discussions between academia, environmental protection agencies, and industry partners, have highlighted problems with the current Brazilian legislation for the protection of caves. While the licensing process is long, complex and cumbersome, the criteria used to assign caves into conservation relevance categories are often subjective, with relevance being mainly determined by the presence of obligate cave dwellers (troglobites) and their presumed rarity. However, the rarity of these troglobitic species is questionable, as most remain unidentified to the species level and their habitats and distribution ranges are poorly known. Using data from 844 iron caves retrieved from different speleology reports for the Carajás region (South-Eastern Amazon, Brazil), one of the world's largest deposits of high-grade iron ore, we assess the influence of different cave characteristics on four biodiversity proxies (species richness, presence of troglobites, presence of rare troglobites, and presence of resident bat populations). We then examine how the current relevance classification scheme ranks caves with different biodiversity indicators. Large caves were found to be important reservoirs of biodiversity, so they should be prioritized in conservation programs. Our results also reveal spatial autocorrelation in all the biodiversity proxies assessed, indicating that iron caves should be treated as components of a cave network immersed in the karst landscape. Finally, we show that by prioritizing the conservation of rare troglobites, the current relevance classification scheme is undermining overall cave biodiversity and leaving ecologically important caves unprotected. We argue that conservation efforts should target subterranean habitats as a whole and propose an alternative relevance ranking scheme, which could help simplify the assessment process and channel more resources to the effective protection of overall cave biodiversity. PMID:27997576
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheibe, Timothy D.; Murphy, Ellyn M.; Chen, Xingyuan
2015-01-01
One of the most significant challenges facing hydrogeologic modelers is the disparity between those spatial and temporal scales at which fundamental flow, transport and reaction processes can best be understood and quantified (e.g., microscopic to pore scales, seconds to days) and those at which practical model predictions are needed (e.g., plume to aquifer scales, years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this paper, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flow chart (Multiscale Analysis Platform or MAP), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and may become a viable alternative to conventional single-scale models in the near future.
Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D
2015-01-01
One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
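Coarse projective integration is straightforward to prototype. The sketch below alternates short microscopic bursts with forward extrapolation of the coarse variable; the toy stochastic birth-death simulator standing in for the cell-level model, and all function names and parameters, are hypothetical illustrations, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def micro_step(n, dt=0.01, r=1.0, K=1000.0):
        # Toy stochastic birth-death simulator (stand-in for a cell-level
        # cellular automaton): logistic growth plus demographic noise.
        births = rng.poisson(r * n * dt)
        deaths = rng.poisson(r * n * n / K * dt)
        return max(n + births - deaths, 0.0)

    def coarse_projective_integration(n0, t_end, burst=20, project=80, dt=0.01):
        # Alternate short bursts of the microscopic simulator with projective
        # (forward Euler) leaps of the coarse variable, here the population n.
        n, t = float(n0), 0.0
        while t < t_end:
            ns = [n]
            for _ in range(burst):                  # 1) short microscopic burst
                ns.append(micro_step(ns[-1], dt))
            dndt = (ns[-1] - ns[0]) / (burst * dt)  # 2) estimate coarse derivative
            n = ns[-1] + dndt * project * dt        # 3) leap over 'project' steps
            t += (burst + project) * dt
        return n

    print(coarse_projective_integration(50, t_end=20.0))  # approaches K = 1000

Increasing the ratio project/burst increases the computational savings, exactly as the abstract describes, at the cost of extrapolation error.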
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
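The LED principle is easy to illustrate in one dimension. Below is a minimal sketch (a toy construction of my own, not Jameson's SLIP or CUSP formulation) of a limited second-order upwind scheme for scalar advection; the minmod limiter switches off the high-order correction at extrema, so no new maxima or minima are created.

    import numpy as np

    def minmod(a, b):
        # Limited slope: zero at extrema, so extrema cannot grow
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def led_advect(u, c, dx, dt, nsteps):
        # MUSCL-type upwind scheme (c > 0) with periodic boundaries via np.roll
        nu = c * dt / dx
        for _ in range(nsteps):
            s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
            face = u + 0.5 * (1.0 - nu) * s          # limited face value at i+1/2
            u = u - nu * (face - np.roll(face, 1))
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square wave
    u = led_advect(u0.copy(), c=1.0, dx=x[1] - x[0], dt=0.002, nsteps=250)
    print(u.min(), u.max())   # stays within [0, 1]: no over- or undershoots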
Optical frequency comb based multi-band microwave frequency conversion for satellite applications.
Yang, Xinwu; Xu, Kun; Yin, Jie; Dai, Yitang; Yin, Feifei; Li, Jianqiang; Lu, Hua; Liu, Tao; Ji, Yuefeng
2014-01-13
Based on optical frequency combs (OFC), we propose an efficient and flexible multi-band frequency conversion scheme for satellite repeater applications. The underlying principle is to mix dual coherent OFCs, one of which carries the input signal. By optically channelizing the mixed OFCs, the converted signal in different bands can be obtained in different channels. Alternatively, the scheme can be configured to generate multi-band local oscillators (LO) for wide distribution. Moreover, the scheme realizes simultaneous inter- and intra-band frequency conversion in a single structure and needs only three frequency-fixed microwave sources. We carry out a proof-of-concept experiment in which multiple LOs at 2 GHz, 10 GHz, 18 GHz, and 26 GHz are generated. A C-band signal of 6.1 GHz input to the proposed scheme is successfully converted to 4.1 GHz (C band), 3.9 GHz (C band) and 11.9 GHz (X band), etc. Compared with the back-to-back (B2B) case measured at 0 dBm input power, the proposed scheme shows a 9.3% error vector magnitude (EVM) degradation at each output channel. Furthermore, all channels satisfy the EVM limit over a very wide input power range.
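The reported output frequencies are simple difference-frequency products of the 6.1 GHz input against the generated LO set, which can be checked directly (all values in GHz, taken from the abstract above):

    # Difference-frequency products |f_LO - f_in| for the quoted LO set
    f_in = 6.1
    for f_lo in (2.0, 10.0, 18.0, 26.0):
        print(f"{f_lo} GHz LO -> {abs(f_lo - f_in):.1f} GHz")
    # 2 -> 4.1 (C band), 10 -> 3.9 (C band), 18 -> 11.9 (X band), 26 -> 19.9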
NASA Astrophysics Data System (ADS)
Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen
2017-10-01
We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter-Gummel scheme to non-Boltzmann (e.g. Fermi-Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem); another modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
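For reference, a minimal sketch of the classical Scharfetter-Gummel flux for Boltzmann statistics (the limit to which all three fluxes reduce; the Fermi-Dirac generalizations discussed above are not reproduced here), with dpsi the potential drop between neighboring collocation points scaled by the thermal voltage:

    import numpy as np

    def bernoulli(x):
        # B(x) = x / (exp(x) - 1), with a series fallback near x = 0
        x = np.asarray(x, dtype=float)
        small = np.abs(x) < 1e-10
        return np.where(small, 1.0 - x / 2.0,
                        x / np.expm1(np.where(small, 1.0, x)))

    def sg_flux(n_L, n_R, dpsi, D=1.0, h=1.0):
        # Classical Scharfetter-Gummel flux between two collocation points;
        # dpsi acts as a Peclet-like drift number.
        return (D / h) * (bernoulli(-dpsi) * n_L - bernoulli(dpsi) * n_R)

    print(sg_flux(2.0, 1.0, 0.0))    # dpsi = 0: pure diffusion, (D/h)*(n_L - n_R) = 1.0
    print(sg_flux(2.0, 1.0, 20.0))   # strong drift: upwinded, ~ (D/h)*20*n_L = 40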
Acceleration of planar foils by the indirect-direct drive scheme
NASA Astrophysics Data System (ADS)
Honrubia, J. J.; Martínez-Val, J. M.; Bocher, J. L.; Faucheux, G.
1996-05-01
We have investigated the hydrodynamic response of plastic and aluminum foils accelerated by a pulse formed by an x-ray prepulse followed by the main laser pulse. This illumination scheme, the so-called indirect-direct drive scheme, has been proposed as an alternative to direct and indirect drive. Its advantages are that it can contribute to solving the uniformity problem of direct drive and, at the same time, can be much more efficient and use simpler targets than indirect drive. Experiments on this hybrid drive scheme have been performed at Limeil with the PHEBUS facility and the standard experimental set-up and diagnostics. The agreement between experiments and simulations is good for quantities such as the laser energy converted into x-rays and the burnthrough time of the converter foil. To simulate the full hydrodynamic evolution of the converter and target foils, separated by a distance of 1 mm, 2-D effects should be taken into account. The basic goals have been to check the simulation codes developed by the Institute of Nuclear Fusion and to determine the hydrodynamic response of the target foil to the hybrid pulse. These goals have been fulfilled.
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
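The central point, that limiters make an otherwise quasi-linear process nonlinear, can be checked with a superposition test on toy 1D schemes (a sketch with schemes of my own choosing, not the GEOS-5 transport code):

    import numpy as np

    def upwind(u, nu):
        # First-order upwind: a linear scheme
        return u - nu * (u - np.roll(u, 1))

    def limited(u, nu):
        # Minmod-limited second-order scheme: nonlinear via the limiter
        d1, d2 = np.roll(u, -1) - u, u - np.roll(u, 1)
        s = np.where(d1 * d2 > 0, np.sign(d1) * np.minimum(abs(d1), abs(d2)), 0.0)
        face = u + 0.5 * (1 - nu) * s
        return u - nu * (face - np.roll(face, 1))

    rng = np.random.default_rng(1)
    q1, q2, nu = rng.random(64), rng.random(64), 0.4
    for name, S in [("upwind ", upwind), ("limited", limited)]:
        # Superposition test: does S(a*q1 + b*q2) equal a*S(q1) + b*S(q2)?
        lhs = S(2.0 * q1 + 3.0 * q2, nu)
        rhs = 2.0 * S(q1, nu) + 3.0 * S(q2, nu)
        print(name, np.max(np.abs(lhs - rhs)))
    # upwind: ~1e-16 (linear to round-off); limited: order 0.1 (nonlinear)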
Top-up injection schemes for future circular lepton collider
NASA Astrophysics Data System (ADS)
Aiba, M.; Goddard, B.; Oide, K.; Papaphilippou, Y.; Saá Hernández, Á.; Shwartz, D.; White, S.; Zimmermann, F.
2018-02-01
Top-up injection is an essential ingredient for the future circular lepton collider (FCC-ee) to maximize the integrated luminosity and it determines the design performance. In ttbar operation mode, with a beam energy of 175 GeV, the design lifetime of ∼1 h is the shortest of the four anticipated operational modes, and the beam lifetime may be even shorter in actual operation. A highly robust top-up injection scheme is consequently imperative. Various top-up methods are investigated and a number of suitable schemes are considered in developing alternative designs for the injection straight section of the collider ring. For the first time, we consider multipole-kicker off-energy injection, for minimizing detector background in top-up operation, and the use of a thin wire septum in a lepton storage ring, for maximizing the luminosity.
Asynchronous Gossip for Averaging and Spectral Ranking
NASA Astrophysics Data System (ADS)
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
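For background, the classical pairwise gossip iteration looks as follows (a minimal sketch with synchronous pair updates; the asynchronous bias and the reinforcement-learning correction analyzed in the paper are not reproduced here). Because each pairwise average preserves the global sum, consensus is reached at the true average:

    import numpy as np

    rng = np.random.default_rng(2)

    def gossip_average(x, n_rounds=5000):
        # Two random nodes repeatedly replace their values with the pair
        # average; the sum is invariant, so consensus = the true mean.
        x = x.astype(float).copy()
        n = len(x)
        for _ in range(n_rounds):
            i, j = rng.choice(n, size=2, replace=False)
            x[i] = x[j] = 0.5 * (x[i] + x[j])
        return x

    x0 = rng.normal(size=20)
    xg = gossip_average(x0)
    print(x0.mean(), xg.mean(), xg.std())   # means agree; spread collapses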
Finite-element lattice Boltzmann simulations of contact line dynamics
NASA Astrophysics Data System (ADS)
Matin, Rastin; Misztal, Marek Krzysztof; Hernández-García, Anier; Mathiesen, Joachim
2018-01-01
The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.
A rapid boundary integral equation technique for protein electrostatics
NASA Astrophysics Data System (ADS)
Grandison, Scott; Penfold, Robert; Vanden-Broeck, Jean-Marc
2007-06-01
A new boundary integral formulation is proposed for the solution of electrostatic field problems involving piecewise uniform dielectric continua. Direct Coulomb contributions to the total potential are treated exactly and Green's theorem is applied only to the residual reaction field generated by surface polarisation charge induced at dielectric boundaries. The implementation shows significantly improved numerical stability over alternative schemes involving the total field or its surface normal derivatives. Although strictly respecting the electrostatic boundary conditions, the partitioned scheme does introduce a jump artefact at the interface. Comparison against analytic results in canonical geometries, however, demonstrates that simple interpolation near the boundary is a cheap and effective way to circumvent this characteristic in typical applications. The new scheme is tested in a naive model to successfully predict the ground state orientation of biomolecular aggregates comprising the soybean storage protein, glycinin.
Advanced control design for hybrid turboelectric vehicle
NASA Technical Reports Server (NTRS)
Abban, Joseph; Norvell, Johnesta; Momoh, James A.
1995-01-01
The new environmental standards are a challenge and an opportunity for the industries and governments that manufacture and operate urban mass transit vehicles. A research investigation to provide a control scheme for efficient power management of the vehicle is in progress. Analyses of different design requirements, using functional analysis and trade studies of alternate power sources and controls, have been performed. The design issues include portability, weight, and emission/fuel efficiency of the induction motor, permanent magnet machine, and battery. A strategic design scheme to manage power requirements using advanced control systems is presented. It exploits fuzzy logic technology and a rule-based decision support scheme. The results of our study will enhance the economic and technical feasibility of low-emission, fuel-efficient urban mass transit buses. The design team includes undergraduate researchers in our department. Sample results using the NASA HTEV simulation tool are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Jianjun; Shen, Dongyi; Feng, Yaming
Negative refraction has attracted much interest for its promising capability in imaging applications. The effect can be implemented with negative-index metamaterials, which, however, are usually accompanied by high loss and demanding fabrication processes. Recently, alternative nonlinear approaches such as phase conjugation and four-wave mixing have shown the advantages of low loss and easy implementation, but associated problems such as narrow acceptance angles can still halt their practical application. Here, we demonstrate theoretically and experimentally a scheme to realize negative refraction by nonlinear difference frequency generation with wide tunability, where a thin beta barium borate slice serves as a negative refraction layer bending the input signal beam to the idler beam at a negative angle. Furthermore, we realize an optical focusing effect using such nonlinear negative refraction, which may enable many potential applications in imaging science.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tondini, S. (Dipartimento di Fisica, Informatica e Matematica, Università di Modena e Reggio Emilia, Via Campi 213/a, 41125 Modena); Pucker, G.
2016-09-07
The role of the inversion layer in injection and recombination phenomena in light emitting diodes (LEDs) is studied here on a multilayer (ML) structure of silicon nanocrystals (Si-NCs) embedded in SiO2. Two Si-NC LEDs, which share the same active material but differ in the fabrication process, elucidate the role of the non-radiative recombination rates at the ML/substrate interface. By studying current- and capacitance-voltage characteristics as well as electroluminescence spectra and time-resolved electroluminescence under pulsed and alternating-bias pumping schemes in both devices, we are able to ascribe the different experimental results to an efficient or inefficient minority carrier (electron) supply by the p-type substrate in the metal-oxide-semiconductor LEDs.
Relaxation time estimation in surface NMR
Grunewald, Elliot D.; Walsh, David O.
2017-03-21
NMR relaxation time estimation methods and corresponding apparatus generate two or more alternating-current transmit pulses with arbitrary amplitudes, time delays, and relative phases; apply a surface NMR acquisition scheme in which initial preparatory pulses, the properties of which may be fixed across a set of multiple acquisition sequences, are transmitted at the start of each acquisition sequence and are followed by one or more depth-sensitive pulses, the pulse moments of which are varied across the set of multiple acquisition sequences; and apply processing techniques in which recorded NMR response data are used to estimate NMR properties and the relaxation times T1 and T2* as a function of position, as well as one-dimensional and two-dimensional distributions of T1 versus T2* as a function of subsurface position.
[The design of a cardiac monitoring and analysing system with low power consumption].
Chen, Zhen-cheng; Ni, Li-li; Zhu, Yan-gao; Wang, Hong-yan; Ma, Yan
2002-07-01
The paper deals with a portable analyzing monitor system with a liquid crystal display (LCD), which is low in power consumption and suitable for China's specific conditions. Apart from the development of the overall scheme of the system, the paper introduces the design of the hardware and the software. The 80196 single-chip microcomputer is used as the central microprocessor to process real-time electrocardiac signal data. The system has the following functions: five types of arrhythmia analysis, alarm, display freeze, and recording with automatic paper feeding. The portable system can be operated on alternating current (AC) or direct current (DC). Its hardware circuit is simplified and its software structure is optimized. Multiple low-power-consumption measures and an LCD unit are adopted in its modular design.
Robust detection-isolation-accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.
1985-01-01
The results are presented of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty; (2) develop a design methodology which utilizes the robust FDI theory; (3) apply the methodology to a sensor FDI problem for the F-100 jet engine; and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.
Cooperative single-photon subradiant states in a three-dimensional atomic array
NASA Astrophysics Data System (ADS)
Jen, H. H.
2016-11-01
We propose complete superradiant and subradiant states that can be manipulated and prepared in a three-dimensional atomic array. These subradiant states can be realized by absorbing a single photon and imprinting spatially-dependent phases on the atomic system. We find that the collective decay rates and associated cooperative Lamb shifts are highly dependent on the phases we manage to imprint, and subradiant states of long lifetime can be found for various lattice spacings and atom numbers. We also investigate both optically thin and thick atomic arrays, which can serve for systematic studies of super- and sub-radiance. Our proposal offers an alternative scheme for quantum memory of light in a three-dimensional array of two-level atoms, which is applicable and potentially advantageous in quantum information processing.
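A standard way to write such a phase-imprinted single-excitation state (the usual timed-Dicke form, consistent with but not quoted from the abstract above) is

$$|\Psi_{\mathbf{k}}\rangle = \frac{1}{\sqrt{N}}\sum_{j=1}^{N} e^{i\mathbf{k}\cdot\mathbf{r}_j}\,|g_1 \cdots e_j \cdots g_N\rangle,$$

where the choice of the imprinted wave vector k relative to the lattice geometry selects superradiant or subradiant collective modes.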
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
A proposed scheme for buffering data on the intensities of picture elements (pixels) of an image increases the rate of processing beyond that attainable when data are read, one pixel at a time, from the main image memory. The scheme would be applied in the design of specialized image-processing circuitry. It is intended to optimize the performance of processors in which the electronic equivalent of an address-lookup table is used to address those pixels in the main image memory required for processing.
NASA Astrophysics Data System (ADS)
Qiu, Kun; Zhang, Chongfu; Ling, Yun; Wang, Yibo
2007-11-01
This paper proposes an all-optical label processing scheme using multiple optical orthogonal code sequences (MOOCS) for optical packet switching (OPS) (MOOCS-OPS) networks, for the first time to the best of our knowledge. In this scheme, multiple optical orthogonal codes (MOOC) from multiple-group optical orthogonal codes (MGOOC) are permuted and combined to obtain the MOOCS used as optical labels, which effectively enlarges the set of optical codes available for labeling. Optical label processing (OLP) schemes are first reviewed and analyzed, and the principles of MOOCS-based optical labels for OPS networks are given. Then the MOOCS-OPS topology and the key units needed to realize MOOCS-based optical label packets are studied in detail. The performance of this novel all-optical label processing technology is analyzed and the corresponding simulations are performed. These analyses and results show that the proposed scheme can overcome the shortage of optical orthogonal code (OOC)-based optical labels caused by the limited number of single OOCs with short code lengths, and indicate that the MOOCS-OPS scheme is feasible.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
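The correlated-process idea can be illustrated generically with a control variate: an auxiliary quantity driven by the same random samples, with a known mean, whose fluctuations are subtracted from the main estimator. This is a sketch of the general principle only, not the paper's Fokker–Planck implementation:

    import numpy as np

    rng = np.random.default_rng(3)

    # Estimate E[f(X)] with X ~ N(0,1). The auxiliary "process" g(X) = X is
    # driven by the same samples (hence strongly correlated with f(X)) and
    # has the known mean E[g] = 0.
    f = lambda x: np.exp(0.5 * x)             # quantity of interest
    x = rng.normal(size=100_000)
    fx, gx = f(x), x

    beta = np.cov(fx, gx)[0, 1] / np.var(gx)  # optimal coupling coefficient
    controlled = fx - beta * (gx - 0.0)       # subtract the correlated noise

    print(fx.std(), controlled.std())         # std drops by roughly a factor 3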
Skjånes, Kari; Lindblad, Peter; Muller, Jiri
2007-10-01
Many areas of algae technology have developed over the last decades, and there is an established market for products derived from algae, dominated by health food and aquaculture. In addition, the interest for active biomolecules from algae is increasing rapidly. The need for CO(2) management, in particular capture and storage is currently an important technological, economical and global political issue and will continue to be so until alternative energy sources and energy carriers diminish the need for fossil fuels. This review summarizes in an integrated manner different technologies for use of algae, demonstrating the possibility of combining different areas of algae technology to capture CO(2) and using the obtained algal biomass for various industrial applications thus bringing added value to the capturing and storage processes. Furthermore, we emphasize the use of algae in a novel biological process which produces H(2) directly from solar energy in contrast to the conventional CO(2) neutral biological methods. This biological process is a part of the proposed integrated CO(2) management scheme.
Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2015-10-01
The land-surface model (LSM) is one of the physics processes in the Weather Research and Forecasting (WRF) model. The LSM takes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core coprocessor design whose merits are efficient parallelization and vectorization. Our results show that MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared with the original code running on one CPU socket (eight cores) and on one CPU core, respectively.
Gutierrez, Hialy; Shewade, Ashwini; Dai, Minghan; Mendoza-Arana, Pedro; Gómez-Dantés, Octavio; Jain, Nishant; Khonelidze, Irma; Nabyonga-Orem, Juliet; Saleh, Karima; Teerawattananon, Yot; Nishtar, Sania; Hornberger, John
2015-08-01
Lessons learned by countries that have successfully implemented coverage schemes for health services may be valuable for other countries, especially low- and middle-income countries (LMICs), which likewise are seeking to provide/expand coverage. The research team surveyed experts in population health management from LMICs for information on characteristics of health care coverage schemes and factors that influenced decision-making processes. The level of coverage provided by the different schemes varied. Nearly all the health care coverage schemes involved various representatives and stakeholders in their decision-making processes. Maternal and child health, cardiovascular diseases, cancer, and HIV were among the highest priorities guiding coverage development decisions. Evidence used to inform coverage decisions included medical literature, regional and global epidemiology, and coverage policies of other coverage schemes. Funding was the most commonly reported reason for restricting coverage. This exploratory study provides an overview of health care coverage schemes from participating LMICs and contributes to the scarce evidence base on coverage decision making. Sharing knowledge and experiences among LMICs can support efforts to establish systems for accessible, affordable, and equitable health care.
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers, so studies on optimizing schemes for the natural-ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural-ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural-ecology planning of urban rivers can be made by the ANP method, which could be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green space planning and design.
NASA Astrophysics Data System (ADS)
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix
Peralta, Victor; Cuesta, Manuel J
2005-11-15
The objective was to ascertain the underlying factor structure of alternative definitions of schizophrenia, and to examine the distribution of schizophrenia-related variables against the resulting factor solution. Twenty-three diagnostic schemes of schizophrenia were applied to 660 patients presenting with psychotic symptoms regardless of the specific diagnosis of psychotic disorder. Factor analysis of the 23 diagnostic schemes yielded three interpretable factors explaining 58% of the variance, the first factor (general schizophrenia factor) accounting for most of the variance (36%). On the basis of the general schizophrenia factor score, the sample was divided in quintile groups representing 5 levels of schizophrenia definition (absent, doubtful, very broad, broad and narrow) and the distribution of a number of schizophrenia-related variables was examined across the groups. This grouping procedure was used for examining the comparative validity of alternative levels of categorically defined schizophrenia and an ordinal (i.e. dimensional) definition. Overall, schizophrenia-related variables displayed a dose-response relationship with level of schizophrenia definition. Logistic regression analyses revealed that the dimensional definition explained more variance in the schizophrenia-related variables than the alternative levels for defining schizophrenia categorically. These results are consistent with a unitary and dimensional construct of schizophrenia with no clear "points of rarity" at its boundaries, thus supporting the continuum hypothesis of the psychotic illness.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shi, J.; Chen, S. S.
2007-01-01
Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly more sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models; over the past decade both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into the state-of-the-art next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) species with a very fast fall speed (over 10 m/s). For an Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).
Kim, Sung-Jin; Wang, Fang; Burns, Mark A; Kurabayashi, Katsuo
2009-06-01
Micromixing is a crucial step for biochemical reactions in microfluidic networks. A critical challenge is that the system containing micromixers needs numerous pumps, chambers, and channels not only for the micromixing but also for the biochemical reactions and detections. Thus, a simple and compatible design of the micromixer element for the system is essential. Here, we propose a simple, yet effective, scheme that enables micromixing and a biochemical reaction in a single microfluidic chamber without using any pumps. We accomplish this process by using natural convection in conjunction with alternating heating of two heaters for efficient micromixing, and by regulating capillarity for sample transport. As a model application, we demonstrate micromixing and subsequent polymerase chain reaction (PCR) for an influenza viral DNA fragment. This process is achieved in a platform of a microfluidic cartridge and a microfabricated heating-instrument with a fast thermal response. Our results will significantly simplify micromixing and a subsequent biochemical reaction that involves reagent heating in microfluidic networks.
Ong, Khai Lun; Kaur, Guneet; Pensupa, Nattha; Uisan, Kristiadi; Lin, Carol Sze Ki
2018-01-01
Staggering amounts of food waste are being generated in Asia through agricultural processing, food transportation and storage, and human food consumption activities. This, along with the recent sustainable development goals of food security, environmental protection, and energy efficiency, is a key driver for food waste valorization. The aim of this review is to provide insight into the latest trends in food waste valorization in Asian countries such as India, Thailand, Singapore, Malaysia and Indonesia. Landfilling, incineration, and composting are the first-generation food waste processing technologies. The advancement of valorization alternatives to tackle the food waste issue is the focus of this review. Furthermore, a series of examples of key food waste valorization schemes in this Asian region is described as case studies to demonstrate the advancement of bioconversions in these countries. Finally, important legislative aspects of food waste disposal in these Asian countries are also reported. Copyright © 2017 Elsevier Ltd. All rights reserved.
Overview on the biotechnological production of L-DOPA.
Min, Kyoungseon; Park, Kyungmoon; Park, Don-Hee; Yoo, Young Je
2015-01-01
L-DOPA (3,4-dihydroxyphenyl-L-alanine) has been widely used as a drug for Parkinson's disease, which is caused by deficiency of the neurotransmitter dopamine. Since Monsanto first developed a commercial process for L-DOPA synthesis, most of the currently supplied L-DOPA has been produced by asymmetric synthesis, especially asymmetric hydrogenation. However, asymmetric synthesis shows critical limitations such as a poor conversion rate and low enantioselectivity. Accordingly, alternative biotechnological approaches have been researched to overcome these shortcomings: microbial fermentation using microorganisms with tyrosinase, tyrosine phenol-lyase, or p-hydroxyphenylacetate 3-hydroxylase activity, and enzymatic conversion by immobilized tyrosinase. Indeed, Ajinomoto Co., Ltd. commercialized Erwinia herbicola fermentation to produce L-DOPA from catechol. In addition, an electroenzymatic conversion system was recently introduced as a newly emerging scheme. In this review, we aim not only to overview the biotechnological L-DOPA production methods, but also to briefly compare and analyze their advantages and drawbacks. Furthermore, we suggest the future potential of biotechnological L-DOPA production as an industrial process.
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long-baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called the "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
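In this variational framework the reconstructed image can be written schematically (the notation here is mine, not the authors') as

$$\hat{x} \;=\; \arg\min_{x \,\ge\, 0}\; J_{\mathrm{data}}(x) \;+\; \mu \sum_j \frac{x_j^2}{w_j},$$

where J_data measures misfit to the interferometric observables and the quadratic penalty is a soft support prior: small weights w_j assigned to pixels far from the presumed object center penalize flux there and thus favor compact objects.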
Multiferroic composites for magnetic data storage beyond the super-paramagnetic limit
NASA Astrophysics Data System (ADS)
Vopson, M. M.; Zemaityte, E.; Spreitzer, M.; Namvar, E.
2014-09-01
Ultra-high-density magnetic data storage requires magnetic grains of <5 nm diameter. Thermal stability of such small magnetic grains demands materials with very large magneto-crystalline anisotropy, which makes the data write process almost impossible, even when Heat Assisted Magnetic Recording (HAMR) technology is deployed. Here, we propose an alternative method of strengthening the thermal stability of the magnetic grains via elasto-mechanical coupling between the magnetic data storage layer and a piezo-ferroelectric substrate. Using the Stoner-Wohlfarth single-domain model, we show that correct tuning of this coupling can increase the effective magneto-crystalline anisotropy of the magnetic grains, making them stable beyond the super-paramagnetic limit. However, the effective magnetic anisotropy can also be lowered or even switched off during the write process by simply altering the voltage applied to the substrate. Based on these effects, we propose two magnetic data storage protocols, one of which could potentially replace HAMR technology, with both schemes promising unprecedented increases in data storage areal density beyond the super-paramagnetic size limit.
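The underlying mechanism is the textbook magnetoelastic contribution to the anisotropy (a generic form; the sign depends on the magnetostriction and stress conventions):

$$K_{\mathrm{eff}} \;=\; K_u \;+\; \tfrac{3}{2}\,\lambda_s\,\sigma,$$

where λ_s is the saturation magnetostriction and σ the voltage-induced stress. Thermal stability of a grain of volume V requires the usual criterion K_eff V / (k_B T) ≳ 40-60, so tuning σ raises the energy barrier for storage and lowers it for writing.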
Pot, politics and the press--reflections on cannabis law reform in Western Australia.
Lenton, Simon
2004-06-01
Windows of opportunity for changing drug laws open infrequently, and they often close without legislative change being effected. In this paper the author, who has been intimately involved in the process, describes how evidence-based recommendations to 'decriminalize' cannabis have recently been progressed through public debate and the political process to become law in Western Australia (WA). The Cannabis Control Bill 2003 passed the WA Parliament on 23 September. The Bill, the legislative backing behind the Cannabis Infringement Notice (CIN) Scheme, came into effect on 22 March 2004. This made WA the fourth Australian jurisdiction, after South Australia, the Australian Capital Territory and the Northern Territory, to adopt a prohibition-with-civil-penalties scheme for minor cannabis offences. This paper describes some of the background to the scheme, the process by which it became law, the main provisions of the scheme and its evaluation. It includes reflections on the role of politics and the press in the process. The processes of implementation and evaluation are outlined by the author, foreshadowing an ongoing opportunity to understand the impact of the change in legislation.
Genetic progress in multistage dairy cattle breeding schemes using genetic markers.
Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P
2005-04-01
The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
Multicriteria methodological approach to manage urban air pollution
NASA Astrophysics Data System (ADS)
Vlachokostas, Ch.; Achillas, Ch.; Moussiopoulos, N.; Banias, G.
2011-08-01
Managing urban air pollution necessitates a feasible and efficient abatement strategy, characterised as a defined set of specific control measures. In practice, hard budget constraints are present in any decision-making process, and therefore available alternatives need to be hierarchised in a fast but still reliable manner. Moreover, realistic strategies require adequate information on the available control measures, taking into account the area's special characteristics. The selection of the most applicable bundle of measures rests on achieving stakeholders' consensus, while taking into consideration mutually conflicting views and criteria. A preliminary qualitative comparison of alternative control measures is most handy for decision-makers, forming the grounds for an in-depth analysis of the most promising ones. This paper presents an easy-to-follow multicriteria methodological approach to include and synthesise multi-disciplinary knowledge from various stakeholders so as to result in a priority list of abatement options, achieve consensus and secure the adoption of the resulting optimal solution. The approach relies on the active involvement of public authorities and local stakeholders in order to incorporate their environmental, economic and social preferences. The methodological scheme is implemented for the case of Thessaloniki, Greece, an area considered among the most polluted cities in Europe, especially with respect to airborne particles. Intense police control, natural gas penetration in buildings and metro construction emerge as the most "promising" alternatives for controlling air pollution in the greater Thessaloniki area. The three optimal alternatives belong to different thematic areas, namely road transport, thermal heating and infrastructure, so efforts should be spread across all thematic areas. Natural gas penetration in industrial units, intense monitoring of environmental standards and regular maintenance of heavy oil burners are ranked as the 4th, 5th and 6th optimal alternatives, respectively.
Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform
NASA Astrophysics Data System (ADS)
Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail
2014-06-01
Distance relays are equipped with an out-of-step tripping scheme to ensure correct distance relay operation during power swings. The out-of-step condition results from an unstable power swing. It requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing unstable swings from stable swings poses a challenging task. This paper presents an intelligent approach to detect power swings based on the S-Transform signal processing tool. The proposed scheme is based on S-Transform features of the active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect and discriminate unstable swings from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out with the IEEE 39-bus system and its performance has been compared with a wavelet transform-based power swing detection scheme.
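For reference, the S-Transform (Stockwell transform) of a signal x(t) is

$$S(\tau, f) \;=\; \int_{-\infty}^{\infty} x(t)\,\frac{|f|}{\sqrt{2\pi}}\, e^{-\frac{(\tau - t)^2 f^2}{2}}\, e^{-i 2\pi f t}\, dt,$$

a short-time Fourier transform with a frequency-dependent Gaussian window; the frequency-localized magnitude of the active power signal is the kind of feature used at the relay measurement point.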
Subtraction with hadronic initial states at NLO: an NNLO-compatible scheme
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2009-05-01
We present an NNLO-compatible subtraction scheme for computing QCD jet cross sections of hadron-initiated processes at NLO accuracy. The scheme is constructed specifically with those complications in mind, that emerge when extending the subtraction algorithm to next-to-next-to-leading order. It is therefore possible to embed the present scheme in a full NNLO computation without any modifications.
Power corrections in the N-jettiness subtraction scheme
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
2017-03-30
We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $$q\bar{q}$$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.
Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons
Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo
2018-01-01
Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work offers an alternative geometric point of view on neuronal dynamics, and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (less than a fraction 1/10^8 of missed spikes) in biologically relevant settings. PMID:29379430
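The missed-spike phenomenon is easy to reproduce for linear subthreshold dynamics. The sketch below uses a toy two-variable neuron with hypothetical parameters and demonstrates the problem by dense retrospective evaluation of the exact propagator, rather than by the paper's algebraic inequality test: the membrane potential crosses threshold inside a grid interval while both checkpoints remain subthreshold.

    import numpy as np
    from scipy.linalg import expm

    tau_m, tau_s, h = 10.0, 2.0, 6.0   # membrane/synaptic time constants, grid step (ms)
    A = np.array([[-1.0 / tau_m, 1.0 / tau_m],
                  [0.0,         -1.0 / tau_s]])   # linear subthreshold dynamics dx/dt = A x
    theta = 1.0                                   # spike threshold on V = x[0]

    P_coarse = expm(A * h)          # exact propagator across one grid interval
    P_fine = expm(A * (h / 100))    # dense retrospective sampling inside it

    x = np.array([0.0, 7.8])        # V = 0, strong synaptic transient
    for step in range(5):
        endpoint_hit = (P_coarse @ x)[0] >= theta
        xs, interior_hit = x.copy(), False
        for _ in range(100):        # retrospective dense check within the interval
            xs = P_fine @ xs
            interior_hit |= xs[0] >= theta
        if interior_hit and not endpoint_hit:
            print(f"interval {step}: threshold crossed inside interval, missed at checkpoint")
        x = P_coarse @ x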
Multicellular Computing Using Conjugation for Wiring
Goñi-Moreno, Angel; Amos, Martyn; de la Cruz, Fernando
2013-01-01
Recent efforts in synthetic biology have focussed on the implementation of logical functions within living cells. One aim is to facilitate both internal "re-programming" and external control of cells, with potential applications in a wide range of domains. However, fundamental limitations on the degree to which single cells may be re-engineered have led to a growth of interest in multicellular systems, in which a "computation" is distributed over a number of different cell types, in a manner analogous to modern computer networks. Within this model, individual cell types perform specific sub-tasks, the results of which are then communicated to other cell types for further processing. The manner in which outputs are communicated is therefore of great significance to the overall success of such a scheme. Previous experiments in distributed cellular computation have used global communication schemes, such as quorum sensing (QS), to implement the "wiring" between cell types. While useful, this method lacks specificity, and limits the amount of information that may be transferred at any one time. We propose an alternative scheme, based on specific cell-cell conjugation. This mechanism allows for the direct transfer of genetic information between bacteria, via circular DNA strands known as plasmids. We design a multi-cellular population that is able to compute, in a distributed fashion, a Boolean XOR function. Through this, we describe a general scheme for distributed logic that works by mixing different strains in a single population; this constitutes an important advantage of our novel approach. Importantly, the amount of genetic information exchanged through conjugation is significantly higher than the amount possible through QS-based communication. We provide full computational modelling and simulation results, using deterministic, stochastic and spatially-explicit methods. These simulations explore the behaviour of one possible conjugation-wired cellular computing system under different conditions, and provide baseline information for future laboratory implementations. PMID:23840385
Driving a car with custom-designed fuzzy inferencing VLSI chips and boards
NASA Technical Reports Server (NTRS)
Pin, Francois G.; Watanabe, Yutaka
1993-01-01
Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions envisionable. We first discuss the use of these boards and an approach using the superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments. We then describe how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and human-like reasoning schemes that may include as few as six elemental behaviors embodied in fourteen qualitative rules.
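The flavor of such qualitative rules can be sketched in a few lines. The membership functions, ranges and rule base below are hypothetical illustrations; the actual ORNL rule base and its VLSI encoding are not reproduced here.

    def near(d):
        # Degree that a sonar range reading (meters, assumed scale) is "near"
        return max(min((2.0 - d) / 2.0, 1.0), 0.0)

    def far(d):
        # Degree that a reading is "far": 0 below 1 m, 1 beyond 4 m
        return max(min((d - 1.0) / 3.0, 1.0), 0.0)

    def drivers_aid(left, front, right):
        # Illustrative rule base: steer away from the more blocked side when
        # the front is obstructed; slow down as the front reading gets closer.
        rules = [
            (min(near(front), near(left)),  +30.0),  # blocked ahead-left -> turn right
            (min(near(front), near(right)), -30.0),  # blocked ahead-right -> turn left
            (far(front),                      0.0),  # clear ahead -> go straight
        ]
        total = sum(w for w, _ in rules) + 1e-9
        steer = sum(w * out for w, out in rules) / total  # weighted-average defuzzification
        speed = 0.3 + 0.7 * far(front)                    # slow near obstacles
        return steer, speed

    print(drivers_aid(left=0.8, front=1.2, right=3.5))  # -> steer right, reduced speed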
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
ARM - Midlatitude Continental Convective Clouds
Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-19
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.
ARM - Midlatitude Continental Convective Clouds (comstock-hvps)
Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-06
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.
A New Proxy Electronic Voting Scheme Achieved by Six-Particle Entangled States
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Ding, Li-Yuan; Jiang, Xiu-Li; Li, Peng-Fei
2018-03-01
In this paper, we use a quantum proxy signature to construct a new secret electronic voting scheme. In our scheme, six-particle entangled states function as quantum channels. The voter Alice, the Vote Management Center Bob, and the scrutineer Charlie perform only two-particle measurements in the Bell basis to realize the electronic voting process, so the scheme reduces the technical difficulty and increases operational efficiency. We use quantum key distribution and the one-time pad to guarantee its unconditional security. The significant advantage of our scheme is that the transmitted information capacity is twice that of other schemes.
Digital signal processing techniques for coherent optical communication
NASA Astrophysics Data System (ADS)
Goldfarb, Gilad
Coherent detection with subsequent digital signal processing (DSP) is developed, analyzed theoretically and numerically and experimentally demonstrated in various fiber-optic transmission scenarios. The use of DSP in conjunction with coherent detection unleashes the benefits of coherent detection, which rely on the preservation of the full information of the incoming field. These benefits include high receiver sensitivity, the ability to achieve high spectral efficiency and the use of advanced modulation formats. With the immense advancements in DSP speeds, many of the problems hindering the use of coherent detection in optical transmission systems have been eliminated. Most notably, DSP alleviates the need for hardware phase-locking and polarization tracking, which can now be achieved in the digital domain. The complexity previously associated with coherent detection is hence significantly diminished and coherent detection is once again considered a feasible detection alternative. In this thesis, several aspects of coherent detection (with or without subsequent DSP) are addressed. Coherent detection is presented as a means to extend the dispersion limit of a duobinary signal using an analog decision-directed phase-lock loop. Analytical bit-error ratio estimation for quadrature phase-shift keying signals is derived. To validate the promise of high spectral efficiency, the orthogonal-wavelength-division multiplexing scheme is suggested. In this scheme the WDM channels are spaced at the symbol rate, thus achieving the spectral efficiency limit. Theory, simulation and experimental results demonstrate the feasibility of this approach. Infinite impulse response filtering is shown to be an efficient alternative to finite impulse response filtering for chromatic dispersion compensation. Theory, design considerations, simulation and experimental results relating to this topic are presented. Interaction between fiber dispersion and nonlinearity remains the last major challenge that deterministic effects pose for long-haul optical data transmission. Experimental results which demonstrate the possibility to digitally mitigate both dispersion and nonlinearity are presented. Impairment compensation is achieved using backward propagation by implementing the split-step method. Efficient realizations of the dispersion compensation operator used in this implementation are considered. Infinite-impulse response and wavelet-based filtering are both investigated as means to reduce the required computational load associated with signal backward-propagation. Possible future research directions conclude this dissertation.
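A minimal sketch of the split-step backward-propagation idea described above, with dispersion applied in the frequency domain and the Kerr phase in the time domain; the receiver inverts the link by propagating the field with negated beta2 and gamma. All fiber and pulse parameters here are illustrative, not the thesis's experimental values.

```python
import numpy as np

# Split-step Fourier sketch of digital backward propagation (illustrative
# fiber values). The symmetric step makes the backward pass an exact
# numerical inverse of the forward pass when the same steps are used.

def ssfm(field, dt, length, beta2, gamma, n_steps=200):
    """Symmetric split-step: dispersion in frequency, Kerr phase in time."""
    omega = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dt)
    dz = length / n_steps
    half_disp = np.exp(1j * beta2 / 2.0 * omega**2 * dz / 2.0)
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * half_disp)
        field = field * np.exp(1j * gamma * np.abs(field)**2 * dz)
        field = np.fft.ifft(np.fft.fft(field) * half_disp)
    return field

if __name__ == "__main__":
    t = np.linspace(-50e-12, 50e-12, 1024)
    dt = t[1] - t[0]
    sent = np.exp(-t**2 / (2.0 * (5e-12)**2)).astype(complex)
    beta2, gamma, L = -2.1e-26, 1.3e-3, 80e3   # s^2/m, 1/(W*m), m
    received = ssfm(sent, dt, L, beta2, gamma)          # "the fiber"
    recovered = ssfm(received, dt, L, -beta2, -gamma)   # backward propagation
    print("max |recovered - sent|:", np.max(np.abs(recovered - sent)))
```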
Mr Cameron's Three Million Apprenticeships
ERIC Educational Resources Information Center
Allen, Martin
2015-01-01
In the 2015 general election campaign David Cameron celebrated the success of apprenticeships during the Coalition and promised another 3 million. This article argues that the "reinvention" of apprenticeships has neither created real skills nor provided real alternatives for young people and that the UK schemes fall far short of those in…
Bovington, Jock; Srinivasan, Sudharsanan; Bowers, John E
2014-08-11
This paper discusses circuit-based and waveguide-based athermalization schemes and provides some design examples of athermalized lasers utilizing fully integrated athermal components as an alternative to power-hungry thermo-electric controllers (TECs), off-chip wavelength lockers, or monitors with lookup tables for tunable lasers. This class of solutions is important for uncooled transmitters on silicon.
Challenging Social Hierarchy: Playing with Oppositional Identities in Family Talk
ERIC Educational Resources Information Center
Bani-Shoraka, Helena
2008-01-01
This study examines how bilingual family members use language choice and language alternation as a local scheme of interpretation to distinguish different and often contesting social identities in interaction. It is argued that the playful creation of oppositional identities in interaction relieves the speakers from responsibility and creates a…
Alternative predictors in chaotic time series
NASA Astrophysics Data System (ADS)
Alves, P. R. L.; Duarte, L. G. S.; da Mota, L. A. C. P.
2017-06-01
In the state-space reconstruction scheme, non-polynomial predictors improve forecasts from chaotic time series. Algebraic manipulation in the Maple environment is the basis for obtaining accurate predictors. Beyond supporting different prediction horizons, the optional arguments of the computational routines optimize the execution and the analysis of global mappings.
Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…
Seven Measures of the Ways That Deciders Frame Their Career Decisions.
ERIC Educational Resources Information Center
Cochran, Larry
1983-01-01
Illustrates seven different measures of the ways people structure a career decision. Given sets of occupational alternatives and considerations, the career grid is a decisional balance sheet that indicates the way each occupation is judged on each consideration. It can be used to correct faulty decision schemes. (JAC)
The postulated scheme for the metabolism of inorganic As involves alternating steps of oxidative methylation and of reduction of As from the pentavalent to the trivalent oxidation state, producing methylated compounds containing AsIII that are highly reactive and toxic. S-adenosy...
The welfare and distributional effects of alternative fuel economy regulations will be compared, including an increase in existing CAFE standards, allowing for tradable credits, and implementing other design options in a trading scheme, such as sliding standards based on ve...
Code of Federal Regulations, 2010 CFR
2010-10-01
46 CFR 47.100 — Purpose (Shipping; Coast Guard, Department of Homeland Security (Continued); Load Lines; Combination Load Lines; General). (a) The purpose of the regulations in this part is to set forth simplified alternative marking schemes for...
NASA Astrophysics Data System (ADS)
Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.
2018-04-01
Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.
Decision Analysis and Policy Formulation for Technology-Specific Renewable Energy Targets
NASA Astrophysics Data System (ADS)
Okioga, Irene Teshamulwa
This study establishes a decision-making procedure using the Analytic Hierarchy Process (AHP) for a U.S. national renewable portfolio standard, and proposes technology-specific targets for renewable electricity generation for the country. The study prioritizes renewable energy alternatives based on a multi-perspective view (from the public's, policy makers', and investors' points of view) and uses multiple criteria for ranking the alternatives to generate a unified prioritization scheme. During this process, it considers a "quadruple bottom-line" (4P) approach, reflecting technical "progress", social "people", economic "profits", and environmental "planet" factors. The AHP results indicated that electricity generation from solar PV ranked highest, and biomass energy ranked lowest. A "Benefits/Cost Incentives/Mandates" (BCIM) model was developed to identify where mandates are needed, and where incentives would instead be required to bring down costs for technologies that have potential for profitable deployment. The BCIM model balances the development of less mature renewable energy technologies against the potential for rising near-term electricity rates for consumers. It also ensures that recommended policies do not lead to growth of just one type of technology, the "highest-benefit, least-cost" technology. The model indicated that mandates would be suited for solar PV, and incentives generally for geothermal and concentrated solar power. Development of biomass energy, as a "low-cost, low-benefits" alternative, was recommended at a local rather than national level, mainly due to its low resource potential values. Further, biomass energy generated from wastewater treatment plants (WWTPs) had the least resource potential compared to other biomass sources. The research developed methodologies and recommendations for biogas electricity targets at WWTPs, to take advantage of waste-to-energy opportunities.
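As a sketch of the AHP machinery used here, priorities can be computed as the normalized principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio checking the judgments. The comparison matrix below is invented for illustration, not the study's elicited judgments.

```python
import numpy as np

# AHP sketch: priority vector from a pairwise comparison matrix via the
# principal eigenvector. The judgments below are invented for illustration.

alternatives = ["solar PV", "wind", "geothermal", "biomass"]
A = np.array([
    [1.0,   3.0,   4.0,   7.0],
    [1/3.0, 1.0,   2.0,   5.0],
    [1/4.0, 1/2.0, 1.0,   3.0],
    [1/7.0, 1/5.0, 1/3.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal (Perron) eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # Saaty consistency index
print(f"consistency ratio: {ci / 0.90:.3f}")   # RI = 0.90 for n = 4
for name, weight in sorted(zip(alternatives, w), key=lambda p: -p[1]):
    print(f"{name:10s} {weight:.3f}")
```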
Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2014-12-01
A double-moment cloud microphysics scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted to deep convection by Song and Zhang (2011) has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site, and its impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
Double-moment Cloud Microphysics Scheme for the Deep Convection Parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2013-12-01
A double-moment cloud microphysics scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted to deep convection by Song and Zhang (2011) is being implemented into the deep convection parameterization of the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. This detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and the roles of these effects in climate change. The scheme is implemented into the single-column version of the GFDL AM3 and evaluated using large-scale forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains and Tropical West Pacific sites. Sensitivity of the scheme to the formulations for autoconversion of cloud water and its accretion by rain, self-collection of rain, self-collection of snow, and heterogeneous ice nucleation is investigated. In the future, tests with the full atmospheric GCM will be conducted.
Dynamically protected cat-qubits: a new paradigm for universal quantum computation
NASA Astrophysics Data System (ADS)
Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V.; Touzard, Steven; Schoelkopf, Robert J.; Jiang, Liang; Devoret, Michel H.
2014-04-01
We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner.
MeMoVolc report on classification and dynamics of volcanic explosive eruptions
NASA Astrophysics Data System (ADS)
Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.
2016-11-01
Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
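A minimal sketch of the inverse-variance-weighted slope method the paper analyzes: for a homogeneous atmosphere the single-scattering lidar equation gives ln(r²P(r)) = ln C − 2αr, so a weighted linear fit of the log range-corrected signal against range retrieves the extinction coefficient α. The noise level and parameter values below are illustrative only.

```python
import numpy as np

# Sketch of the inverse-variance-weighted slope method (illustrative values).
rng = np.random.default_rng(0)

alpha_true, C = 1.0e-4, 1.0e10           # extinction [1/m], system constant
r = np.linspace(500.0, 3000.0, 200)       # range gates [m]
p_clean = C / r**2 * np.exp(-2.0 * alpha_true * r)
sigma = 0.002 * p_clean[0] * np.ones_like(r)   # constant Gaussian noise std
p = p_clean + rng.normal(0.0, sigma)

mask = p > 0                               # log requires positive signal
y = np.log(r[mask]**2 * p[mask])
sigma_y = sigma[mask] / p[mask]            # error propagation: var(y) ~ (s/P)^2

# np.polyfit weights multiply the residuals, so pass 1/sigma_y for an
# inverse-variance-weighted (maximum likelihood) straight-line fit.
slope, intercept = np.polyfit(r[mask], y, 1, w=1.0 / sigma_y)
print(f"retrieved alpha = {-slope / 2:.3e} 1/m (true {alpha_true:.3e} 1/m)")
```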
Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications
NASA Astrophysics Data System (ADS)
Buhagiar, D.; Sant, T.
2014-12-01
This paper deals with the performance modelling and analysis of offshore wind-turbine-driven hydraulic pumps. The concept consists of an open-loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump to supply pressurised sea water. A mathematical model is derived to capture the steady-state behaviour of the entire system. A simplified model for the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in the wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is in particular the case with the new control scheme investigated in this study, which is based on the combined use of pressure and rotor blade pitch control.
Mananga, Eugene S; Reid, Alicia E; Charpentier, Thibault
2012-02-01
This article describes the use of an alternative expansion scheme called the Floquet-Magnus expansion (FME) to study the dynamics of spin systems in solid-state NMR. The main tool used to describe the effect of time-dependent interactions in NMR is average Hamiltonian theory (AHT). However, some NMR experiments, such as sample rotation and pulse crafting, seem to be more conveniently described using Floquet theory (FT). Here, we present the first report highlighting the basics of the Floquet-Magnus expansion scheme and hint at its application to recoupling sequences that excite double-quantum coherences more efficiently, namely the BABA and C7 radiofrequency pulse sequences. The use of the Λ(n)(t) functions, available only in the FME scheme, allows the comparison of the efficiency of the BABA and C7 sequences. Copyright © 2011 Elsevier Inc. All rights reserved.
Reid, Alicia E.; Charpentier, Thibault
2013-01-01
This article describes the use of an alternative expansion scheme called the Floquet-Magnus expansion (FME) to study the dynamics of spin systems in solid-state NMR. The main tool used to describe the effect of time-dependent interactions in NMR is average Hamiltonian theory (AHT). However, some NMR experiments, such as sample rotation and pulse crafting, seem to be more conveniently described using Floquet theory (FT). Here, we present the first report highlighting the basics of the Floquet-Magnus expansion scheme and hint at its application to recoupling sequences that excite double-quantum coherences more efficiently, namely the BABA and C7 radiofrequency pulse sequences. The use of the Λn(t) functions, available only in the FME scheme, allows the comparison of the efficiency of the BABA and C7 sequences. PMID:22197191
Proper time regularization and the QCD chiral phase transition
Cui, Zhu-Fang; Zhang, Jin-Li; Zong, Hong-Shi
2017-01-01
We study the QCD chiral phase transition at finite temperature and finite quark chemical potential within the two-flavor Nambu–Jona-Lasinio (NJL) model, where a generalization of the proper-time regularization scheme is motivated and implemented. We find that in the chiral limit the whole transition line in the phase diagram is of second order, whereas for finite quark masses a crossover is observed. Moreover, if we take into account the influence of the quark condensate on the coupling strength (which also provides a possible way of understanding how the effective coupling varies with temperature and quark chemical potential), it is found that a critical end point (CEP) may appear. These findings differ substantially from other NJL results that use alternative regularization schemes; some explanation and discussion are given at the end. This indicates that the regularization scheme can have a dramatic impact on the study of the QCD phase transition within the NJL model. PMID:28401889
ULTRA-SHARP solution of the Smith-Hutton problem
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1992-01-01
Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.
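To make the "flux-limited third-order upwinding" idea concrete, here is a minimal 1-D sketch for linear advection with positive constant velocity on a periodic domain: face values use the QUICK-type third-order interpolation, constrained by a crude monotonic limiter that simply clips each face value between its neighbouring cell values. This is a simplification for illustration, not the full ULTRA-SHARP universal-limiter constraint set.

```python
import numpy as np

# 1-D advection u_t + a u_x = 0 (a > 0, periodic) with limited third-order
# upwind face values and a conservative flux-difference update.

def step(u, cfl):
    up1 = np.roll(u, -1)   # downstream neighbour
    um1 = np.roll(u, 1)    # upstream neighbour
    # phi_face = (phi_C + phi_D)/2 - (1/8)*(phi_U - 2*phi_C + phi_D)
    face = 0.5 * (u + up1) - 0.125 * (um1 - 2.0 * u + up1)
    # Crude monotonic constraint: clip between adjacent cell values.
    face = np.clip(face, np.minimum(u, up1), np.maximum(u, up1))
    return u - cfl * (face - np.roll(face, 1))   # conservative update

if __name__ == "__main__":
    n, cfl = 200, 0.4
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square wave
    for _ in range(int(n / cfl)):                    # one full revolution
        u = step(u, cfl)
    # The clipped scheme should keep u essentially within [0, 1].
    print("min/max after one period:", u.min(), u.max())
```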
Engineering single-polymer micelle shape using nonuniform spontaneous surface curvature
NASA Astrophysics Data System (ADS)
Moths, Brian; Witten, T. A.
2018-03-01
Conventional micelles, composed of simple amphiphiles, exhibit only a few standard morphologies, each characterized by its mean surface curvature set by the amphiphiles. Here we demonstrate a rational design scheme to construct micelles of more general shape from polymeric amphiphiles. We replace the many amphiphiles of a conventional micelle by a single flexible, linear, block copolymer chain containing two incompatible species arranged in multiple alternating segments. With suitable segment lengths, the chain exhibits a condensed spherical configuration in solution, similar to conventional micelles. Our design scheme posits that further shapes are attained by altering the segment lengths. As a first study of the power of this scheme, we demonstrate the capacity to produce long-lived micelles of horseshoe form using conventional bead-spring simulations in two dimensions. Modest changes in the segment lengths produce smooth changes in the micelle's shape and stability.
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating or unstable process and transforms the original process into a stable first-order plus pure dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers are designed separately in the improved scheme; each controller is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.
NASA Astrophysics Data System (ADS)
Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita
2018-02-01
We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.
Control of final moisture content of food products baked in continuous tunnel ovens
NASA Astrophysics Data System (ADS)
McFarlane, Ian
2006-02-01
There are well-known difficulties in making measurements of the moisture content of baked goods (such as bread, buns, biscuits, crackers and cake) during baking or at the oven exit; in this paper several sensing methods are discussed, but none of them are able to provide direct measurement with sufficient precision. An alternative is to use indirect inferential methods. Some of these methods involve dynamic modelling, with incorporation of thermal properties and using techniques familiar in computational fluid dynamics (CFD); a method of this class that has been used for the modelling of heat and mass transfer in one direction during baking is summarized, which may be extended to model transport of moisture within the product and also within the surrounding atmosphere. The concept of injecting heat during the baking process proportional to the calculated heat load on the oven has been implemented in a control scheme based on heat balance zone by zone through a continuous baking oven, taking advantage of the high latent heat of evaporation of water. Tests on biscuit production ovens are reported, with results that support a claim that the scheme gives more reproducible water distribution in the final product than conventional closed loop control of zone ambient temperatures, thus enabling water content to be held more closely within tolerance.
NASA Astrophysics Data System (ADS)
Laanearu, J.; Borodinecs, A.; Rimeika, M.; Palm, B.
2017-10-01
The thermal-energy potential of urban water sources is largely untapped for meeting the present-day energy demands of buildings in the cities of the Baltic Sea Region. One reason is that natural and excess-heat water sources have a low temperature, so the heat must be upgraded before use. The demand for space cooling is expected to increase in the near future as buildings become better insulated. There are also a number of options to recover heat from wastewater. It is proposed that a network of heat extraction and insertion, including thermal-energy recovery schemes, has the potential to be broadly implemented in a region with seasonally alternating temperatures. The mapping of local conditions is essential for finding suitable regions (hot spots) for future application of heat recovery schemes, by combining information about demands with information about available sources. The low-temperature water in the urban environment is viewed as a potential thermal-energy source. To recover thermal energy efficiently, it is also essential to ensure that it is used locally, and that adverse effects on the environment and on industrial processes are avoided. Some characteristics reflecting energy usage are discussed with respect to possible improvements in energy efficiency.
Mapping edge-based traffic measurements onto the internal links in MPLS network
NASA Astrophysics Data System (ADS)
Zhao, Guofeng; Tang, Hong; Zhang, Yi
2004-09-01
Applying multi-protocol label switching (MPLS) techniques to IP-based backbones for traffic engineering goals has proven advantageous. Obtaining the volume of load on each internal link of the network is crucial for applying traffic engineering. Though per-link collection is possible, for example with the traditional SNMP scheme, this approach may cause a heavy processing load and sharply degrade the throughput of the core routers. Monitoring only at the edge of the network and mapping the measurements onto the core therefore provides a good alternative. In this paper, we explore a scheme for traffic mapping with edge-based measurements in an MPLS network, in which the volume of traffic on each internal link of the domain is inferred from measurements available only at ingress nodes. We apply path-based measurements at ingress nodes without enabling measurements in the core of the network. We propose a method that can infer a path from the ingress to the egress node using the label distribution protocol, without collecting routing data from core routers. Based on flow theory and queuing theory, we prove that our approach is effective and present the algorithm for traffic mapping. We also show performance simulation results that indicate the potential of our approach.
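A minimal sketch of the mapping step itself (the LSP names, link identifiers, and volumes below are hypothetical; in the paper the paths are inferred from LDP label bindings rather than given explicitly): once each label-switched path has been resolved to its sequence of internal links, per-link volumes follow by summing the ingress-measured per-path volumes over the links each path traverses.

```python
from collections import defaultdict

# Map edge-measured per-LSP traffic volumes onto internal links.
lsp_paths = {
    "lsp1": [("A", "B"), ("B", "C"), ("C", "D")],
    "lsp2": [("A", "B"), ("B", "E")],
    "lsp3": [("F", "B"), ("B", "C")],
}
ingress_volume_mbps = {"lsp1": 120.0, "lsp2": 45.0, "lsp3": 80.0}

link_load = defaultdict(float)
for lsp, links in lsp_paths.items():
    for link in links:
        link_load[link] += ingress_volume_mbps[lsp]

for link, load in sorted(link_load.items()):
    print(f"link {link[0]}->{link[1]}: {load:.1f} Mb/s")
```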
Detecting unstable periodic orbits in chaotic time series using synchronization
NASA Astrophysics Data System (ADS)
Olyaei, Ali Azimi; Wu, Christine; Kinsner, Witold
2017-07-01
An alternative approach to detecting unstable periodic orbits in chaotic time series is proposed using synchronization techniques. A master-slave synchronization scheme is developed, in which the chaotic system drives a system of harmonic oscillators through a proper coupling condition. The proposed scheme is designed so that the power of the coupling signal exhibits notches that drop to zero once the system approaches an unstable orbit, yielding an explicit indication of the presence of a periodic motion. The results show that the proposed approach is particularly suitable in practical situations, where the time series is short and noisy, or is obtained from high-dimensional chaotic systems.
NASA Astrophysics Data System (ADS)
Driben, R.; Meier, T.
2014-04-01
Dispersion management of periodically alternating fiber sections with opposite signs of two leading dispersion terms is applied for the regeneration of self-accelerating truncated Airy pulses. It is demonstrated that for such a dispersion management scheme, the direction of the acceleration of the pulse is reversed twice within each period. In this scheme the system features light hot spots in the center of each fiber section, where the energy of the light pulse is tightly focused in a short temporal slot. Comprehensive numerical studies demonstrate a long-lasting propagation also under the influence of a strong fiber Kerr nonlinearity.
Processes of Middle-Class Reproduction in a Graduate Employment Scheme
ERIC Educational Resources Information Center
Smart, Sarah; Hutchings, Merryn; Maylor, Uvanney; Mendick, Heather; Menter, Ian
2009-01-01
Teach First is an educational charity that places graduates to teach in "challenging" schools for two years. It is marketed as an opportunity to develop employability while "making a difference". In this paper, I examine the process of class reproduction occurring in this graduate employment scheme through examining the…
An improved scheme for Flip-OFDM based on Hartley transform in short-range IM/DD systems.
Zhou, Ji; Qiao, Yaojun; Cai, Zhuo; Ji, Yuefeng
2014-08-25
In this paper, an improved Flip-OFDM scheme is proposed for IM/DD optical systems, in which the modulation/demodulation processing takes advantage of the fast Hartley transform (FHT) algorithm. We realize the improved scheme in one symbol period, whereas the conventional Flip-OFDM scheme based on the fast Fourier transform (FFT) requires two consecutive symbol periods. The complexity of many operations in the improved scheme, such as the CP operation, polarity inversion and symbol delay, is therefore half of that in the conventional scheme. Compared to the FFT with complex input constellations, the complexity of the FHT with real input constellations is halved. A transmission experiment over 50-km SSMF has been carried out to verify the feasibility of the improved scheme. In conclusion, the improved scheme has the same BER performance as the conventional scheme but a substantial advantage in complexity.
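For reference, a minimal sketch of the discrete Hartley transform underlying the improved scheme, computed here through its standard relation to the FFT, H(k) = Re X(k) − Im X(k); a dedicated FHT butterfly would be used in practice to realize the stated complexity advantage for real-valued inputs.

```python
import numpy as np

def dht(x: np.ndarray) -> np.ndarray:
    """Discrete Hartley transform via the FFT relation H = Re(X) - Im(X)."""
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(h: np.ndarray) -> np.ndarray:
    """The DHT is (up to a factor 1/N) its own inverse."""
    return dht(h) / h.size

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=64)            # real-valued OFDM-style symbol
    assert np.allclose(idht(dht(x)), x)
    print("DHT round-trip OK; transform is real-to-real:", dht(x).dtype)
```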
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features, and the tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game and, later, using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
NASA Astrophysics Data System (ADS)
Kumar, Vivek; Raghurama Rao, S. V.
2008-04-01
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal of Numerical Analysis 35 (6) (1998) 2250-2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications in Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures the shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods like the Roe scheme and ENO schemes.
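To illustrate the conservation-form point (the property that lets a scheme place discontinuities correctly and handle sonic points), here is a minimal conservative sketch for the inviscid Burgers equation with a Godunov flux. It is not Mickens' NSFDM or the composite relaxation scheme, just the flux-difference template those schemes share.

```python
import numpy as np

# Conservative update for inviscid Burgers u_t + (u^2/2)_x = 0:
#   u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2}).
# Godunov flux for the convex flux f(u) = u^2/2, including the sonic
# (transonic rarefaction) case at u = 0.

def godunov_flux(ul, ur):
    f = lambda u: 0.5 * u**2
    flux = np.where(ul <= ur,
                    np.minimum(f(ul), f(ur)),   # rarefaction
                    np.maximum(f(ul), f(ur)))   # shock
    sonic = (ul < 0.0) & (ur > 0.0)             # rarefaction spanning u = 0
    return np.where(sonic, 0.0, flux)

def step(u, dt, dx):
    ul, ur = u, np.roll(u, -1)                  # periodic domain
    F = godunov_flux(ul, ur)                    # flux at i+1/2
    return u - dt / dx * (F - np.roll(F, 1))

if __name__ == "__main__":
    n = 400
    x = np.linspace(-1.0, 1.0, n, endpoint=False)
    u = np.where(x < 0.0, -0.5, 1.0)            # expansion with a sonic point
    dx = x[1] - x[0]
    dt = 0.4 * dx                               # CFL 0.4 for max|u| = 1
    mass0 = u.sum() * dx
    for _ in range(200):
        u = step(u, dt, dx)
    print("mass drift:", abs(u.sum() * dx - mass0))  # conservative: ~0
```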
Sodzi-Tettey, S; Aikins, M; Awoonor-Williams, J K; Agyepong, I A
2012-12-01
In 2004, Ghana started implementing a National Health Insurance Scheme (NHIS) to remove cost as a barrier to quality healthcare. Providers were initially paid by fee-for-service. In May 2008, this changed to paying providers by a combination of Ghana Diagnostic Related Groupings (G-DRGs) for services and fee-for-service for medicines through the claims process. The study evaluated the claims management processes of two district Mutual Health Insurance Schemes (MHIS) in the Upper East Region of Ghana. A retrospective review of secondary claims data (2008) and a prospective observation of claims management (2009) were undertaken. Qualitative and quantitative approaches were used for primary data collection, using interview guides and checklists. The reimbursement rates and value of rejected claims were calculated and compared for both districts using the z test. The null hypothesis was that no differences existed in the parameters measured. Claims processes in both districts were similar and predominantly manual. There were administrative capacity, technical, human resource and working environment challenges contributing to delays in claims submission by providers and in vetting and payment by schemes. Both schemes rejected less than 1% of all claims submitted. Significant differences were observed between the Total Reimbursement Rates (TRR) and the Total Timely Reimbursement Rates (TTRR) for both schemes. For TRR, 89% and 86% were recorded for the Kassena Nankana and Builsa schemes respectively, while for TTRR, 45% and 28% were recorded respectively. Ghana's NHIS needs to reform its provider payment and claims submission and processing systems to ensure simpler and faster processes. Computerization and investment in administrative capacity for both purchasers and providers will be key in any reform.
Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments
Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel
2011-01-01
There are different observer-based schemes to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI), there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output, but this time there is a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of single faults. This work proposes a new scheme named the Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of single faults in sensors. Because it does not require any input, it simplifies in an important way the diagnosis of faults in processes in which it is difficult to measure all the inputs, as in the case of biologic reactors. PMID:22346593
Performance of ICTP's RegCM4 in Simulating the Rainfall Characteristics over the CORDEX-SEA Domain
NASA Astrophysics Data System (ADS)
Neng Liew, Ju; Tangang, Fredolin; Tieh Ngai, Sheau; Chung, Jing Xiang; Narisma, Gemma; Cruz, Faye Abigail; Phan Tan, Van; Thanh, Ngo-Duc; Santisirisomboon, Jerasron; Milindalekha, Jaruthat; Singhruck, Patama; Gunawan, Dodo; Satyaningsih, Ratna; Aldrian, Edvin
2015-04-01
The performance of the RegCM4 in simulating rainfall variations over the Southeast Asia region was examined. Different combinations of six deep convective parameterization schemes, namely i) the Grell scheme with the Arakawa-Schubert closure assumption, ii) the Grell scheme with the Fritch-Chappel closure assumption, iii) the Emanuel MIT scheme, iv) a mixed scheme with the Emanuel MIT scheme over the ocean and the Grell scheme over the land, v) a mixed scheme with the Grell scheme over the ocean and the Emanuel MIT scheme over the land, and vi) the Kuo scheme, together with three ocean flux treatments, were tested. In order to account for uncertainties among the observation products, four different gridded rainfall products were used for comparison. The simulated climate is generally drier over the equatorial regions and slightly wetter over mainland Indo-China compared to the observations. However, simulations with the MIT cumulus scheme used over the land area consistently produce large positive rainfall biases, although they simulate more realistic annual rainfall variations. The simulations are found to be less sensitive to the treatment of ocean fluxes. Although the simulations reproduced the rainfall climatology well, all of them simulated much stronger interannual variability than observed. Nevertheless, the time evolution of the interannual variations was well reproduced, particularly over the eastern part of the maritime continent. Over mainland Southeast Asia (SEA), unrealistic rainfall anomaly processes were simulated. The lack of summer-season air-sea interaction results in strong oceanic forcing over these regions, leading to positive rainfall anomalies during years with warm ocean temperature anomalies. This imposes much stronger atmospheric forcing on the land surface processes than observed. A score ranking system was designed to rank the simulations according to their performance in reproducing different aspects of the rainfall characteristics. The result suggests that the simulation with the Emanuel MIT convective scheme and the BATS land surface scheme produces the best collective performance compared to the rest of the simulations.
A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory
NASA Astrophysics Data System (ADS)
Stolk, Christiaan C.
2016-06-01
We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
Chung, Yun Won; Hwang, Ho Young
2010-01-01
In sensor networks, energy conservation is one of the most critical issues, since sensor nodes should perform a sensing task for a long time (e.g., lasting a few years) but their batteries cannot be replaced in most practical situations. For this purpose, numerous energy conservation schemes have been proposed, and duty cycling is considered the most suitable power conservation technique, where sensor nodes alternate between states having different levels of power consumption. In order to analyze the energy consumption of an energy conservation scheme based on duty cycling, it is essential to obtain the probability of each state. In this paper, we analytically derive the steady-state probabilities of the sensor node states, i.e., the sleep, listen, and active states, based on traffic characteristics and timer values, i.e., the sleep timer, listen timer, and active timer. The effect of traffic characteristics and timer values on the steady-state probabilities and energy consumption is analyzed in detail. Our work can provide sensor network operators with a guideline for selecting appropriate timer values for efficient energy conservation. The analytical methodology developed in this paper can be extended to other energy conservation schemes based on duty cycling with different sensor node states, without much difficulty. PMID:22219676
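As a simplified illustration of this kind of derivation (a plain cyclic sleep → listen → active renewal model with hypothetical mean dwell times and per-state powers, not the paper's traffic-dependent transition structure), the long-run fraction of time spent in a state equals its mean dwell time normalized over the renewal cycle:

```python
# Steady-state probabilities for a cyclic duty-cycle model (sketch):
# sleep -> listen -> active -> sleep. In a renewal process the long-run
# fraction of time in a state equals its mean dwell time over the cycle.
# Dwell times and per-state powers below are hypothetical.

mean_dwell_s = {"sleep": 0.95, "listen": 0.04, "active": 0.30}
power_mw = {"sleep": 0.02, "listen": 12.0, "active": 24.0}

cycle = sum(mean_dwell_s.values())
steady_state = {s: t / cycle for s, t in mean_dwell_s.items()}
avg_power = sum(steady_state[s] * power_mw[s] for s in steady_state)

for s, p in steady_state.items():
    print(f"P({s}) = {p:.3f}")
print(f"average power = {avg_power:.2f} mW")
```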
NASA Astrophysics Data System (ADS)
Wang, Jun; Zhao, Jianlin; Di, Jianglei; Jiang, Biqiang
2015-04-01
A scheme for recording fast processes at the nanosecond scale by using digital holographic interferometry with a continuous wave (CW) laser is described and demonstrated experimentally. The scheme employs delay-time fibers and an angular multiplexing technique, and can realize variable temporal resolution at the nanosecond scale and different measured depths of the object field at a given temporal resolution. The actual delay time is controlled by two delay fibers with different lengths. The object field information in two different states can be simultaneously recorded in a composite hologram. This scheme is also suitable for recording fast processes at the picosecond scale by using an electro-optic modulator.
Order of accuracy of QUICK and related convection-diffusion schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in single-point.
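For concreteness, the nodal-value face interpolation discussed above can be written out (uniform grid, flow in the positive direction); only the curvature factor distinguishes the two third-order forms the report compares:

```latex
% QUICK-type face interpolation at i+1/2 (uniform grid, flow left to right):
%   curvature factor 1/8 -> third-order finite-volume form (QUICK),
%   curvature factor 1/6 -> third-order single-point form (SPUDS).
\[
  \phi_{i+\frac{1}{2}}
    = \tfrac{1}{2}\left(\phi_i + \phi_{i+1}\right)
    - \mathrm{CF}\,\left(\phi_{i-1} - 2\phi_i + \phi_{i+1}\right),
  \qquad
  \mathrm{CF} =
  \begin{cases}
    \tfrac{1}{8} & \text{finite volume (QUICK)}\\
    \tfrac{1}{6} & \text{single point (SPUDS)}
  \end{cases}
\]
```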
Gamma beams generation with high intensity lasers for two photon Breit-Wheeler pair production
NASA Astrophysics Data System (ADS)
D'Humieres, Emmanuel; Ribeyre, Xavier; Jansen, Oliver; Esnault, Leo; Jequier, Sophie; Dubois, Jean-Luc; Hulin, Sebastien; Tikhonchuk, Vladimir; Arefiev, Alex; Toncian, Toma; Sentoku, Yasuhiko
2017-10-01
Linear Breit-Wheeler pair creation is the lowest-threshold process in photon-photon interaction, controlling the energy release in Gamma Ray Bursts and Active Galactic Nuclei, but it has never been directly observed in the laboratory. Using numerical simulations, we demonstrate the possibility of producing collimated gamma beams with high energy conversion efficiency using high intensity lasers and innovative targets. When two of these beams collide at particular angles, our analytical calculations demonstrate a beaming effect that eases the detection of the pairs in the laboratory. This effect has been confirmed in photon collision simulations using a recently developed innovative algorithm. An alternative scheme using Bremsstrahlung radiation produced by next-generation high repetition rate laser systems is also being explored, and the results of the first optimization campaigns in this regime will be presented.
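The kinematics that make the collision angle matter can be checked in a few lines: two photons can produce a pair only if s = 2*E1*E2*(1 - cos theta) >= (2*m_e*c^2)^2. The beam energies below are illustrative assumptions, not the paper's parameters.

```python
import math

# Kinematic threshold for linear (two-photon) Breit-Wheeler pair creation.
ME_C2 = 0.511  # electron rest energy, MeV

def pair_production_possible(E1_MeV: float, E2_MeV: float,
                             theta_rad: float) -> bool:
    """True if the two-photon invariant mass clears the e+e- threshold."""
    s = 2.0 * E1_MeV * E2_MeV * (1.0 - math.cos(theta_rad))
    return s >= (2.0 * ME_C2) ** 2

# Two 1 MeV gamma beams: a head-on collision clears the threshold,
# while a 30-degree crossing angle does not.
print(pair_production_possible(1.0, 1.0, math.pi))           # True
print(pair_production_possible(1.0, 1.0, math.radians(30)))  # False
```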
Multi-Market Impacts of Market-Based Recycling Initiatives.
Fisher, Linda R
1999-09-01
In 1995 the average tipping fee in the state of New York was $70/ton, with some landfills charging as much as $100/ton. In New Jersey, fees reached as high as $165/ton. With budget crises occurring at all levels of government, economists, environmental scientists, policy-makers, and others are scrambling to find alternatives to waste disposal. Recycling as a solution has risen to the forefront, most likely because it both saves landfill space and may use fewer resources than virgin material processing. At every level of government, policies are being set that encourage recycling. Unfortunately, some of these programs may be resulting in unintended and undesirable side effects. To understand these effects, a broader view of the many factors involved in materials use, waste generation, and disposal is necessary. This paper takes that broader view, including a discussion of the externalities that exist in the markets affected by waste and an analysis of the effects on all alternatives to recycling, including composting and reuse. Through the use of mathematical optimization, this paper shows that a recycling subsidy, or the more complicated tax/subsidy scheme, does not necessarily provide greater environmental benefits compared with disposal taxes.
Detection-enhanced steady state entanglement with ions.
Bentley, C D B; Carvalho, A R R; Kielpinski, D; Hope, J J
2014-07-25
Driven dissipative steady state entanglement schemes take advantage of coupling to the environment to robustly prepare highly entangled states. We present a scheme for two trapped ions to generate a maximally entangled steady state with fidelity above 0.99, appropriate for use in quantum protocols. Furthermore, we extend the scheme by introducing detection of our dissipation process, significantly enhancing the fidelity. Our scheme is robust to anomalous heating and requires no sympathetic cooling.
Li, Y P; Huang, G H
2010-09-15
Considerable public concern has been raised in the past decades, since the large amount of pollutant emissions from municipal solid waste (MSW) disposal processes poses risks to the surrounding environment and human health. Moreover, in MSW management, various uncertainties exist in the related costs, impact factors and objectives, which can affect the optimization processes and the decision schemes generated. In this study, an interval-based possibilistic programming (IBPP) method is developed for planning MSW management with minimized system cost and environmental impact under uncertainty. The developed method can deal with uncertainties expressed as interval values and fuzzy sets in the left- and right-hand sides of the constraints and objective function. An interactive algorithm is provided for solving the IBPP problem, which does not lead to more complicated intermediate submodels and has a relatively low computational requirement. The developed model is applied to a case study of planning a MSW management system, where the mixed integer linear programming (MILP) technique is introduced into the IBPP framework to facilitate dynamic analysis of decisions on timing, sizing and siting of capacity expansion for waste-management facilities. Three cases based on different waste-management policies are examined. The results indicate that inclusion of environmental impacts in the optimization model can change the traditional waste-allocation pattern based merely on the economic-oriented planning approach. The results can help identify desired alternatives for managing MSW, with the advantage of providing compromise schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty. Copyright 2010 Elsevier B.V. All rights reserved.
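To convey the interval idea in miniature (this is not the paper's IBPP model), one can bracket the optimal system cost by solving one submodel with lower-bound costs and one with upper-bound costs; all figures below are assumed.

```python
from scipy.optimize import linprog

# Toy illustration: allocate waste flows x1 (landfill) and x2
# (incinerator) to meet a fixed demand, with unit costs known only as
# intervals.  Solving the optimistic and pessimistic submodels brackets
# the optimal system cost as an interval.
c_lower = [30.0, 55.0]       # assumed $/tonne, lower bounds
c_upper = [45.0, 70.0]       # assumed $/tonne, upper bounds
A_eq = [[1.0, 1.0]]          # all waste must be handled
b_eq = [100.0]               # tonnes/day, assumed
bounds = [(0, 80), (0, 60)]  # facility capacities, assumed

for label, c in (("optimistic", c_lower), ("pessimistic", c_upper)):
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(f"{label}: cost = {res.fun:.0f}, allocation = {res.x.round(1)}")
```

The full method adds fuzzy sets, integer siting/sizing decisions, and an interactive solution algorithm on top of this interval skeleton.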
NASA Astrophysics Data System (ADS)
Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.
2017-12-01
We consider neural network-based schemes for digital signal processing. It is shown that a dynamic neural network-based signal processing scheme improves optical signal transmission quality compared with other methods for nonlinear distortion compensation.
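The abstract does not specify the network architecture; as a hedged illustration, the sketch below uses a static feed-forward network as a stand-in for the dynamic scheme, mapping a sliding window of received samples to the transmitted symbol at the window centre over a toy nonlinear channel.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch only -- architecture, channel and parameters are
# all assumptions, not taken from the paper.
rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])  # toy PAM4 alphabet
tx = rng.choice(levels, size=5000)
# Toy channel: cubic nonlinearity, one-tap memory, additive noise.
rx = tx + 0.05 * tx**3 + 0.2 * np.roll(tx, 1) + 0.1 * rng.normal(size=tx.size)

W = 11  # window length (assumed); odd, so the centre is well defined
X = np.lib.stride_tricks.sliding_window_view(rx, W)
y = tx[W // 2 : tx.size - W // 2]  # transmitted symbol at each window centre

eq = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
eq.fit(X[:4000], y[:4000])

pred = eq.predict(X[4000:])
decided = levels[np.abs(pred[:, None] - levels).argmin(axis=1)]
print(f"symbol error rate: {np.mean(decided != y[4000:]):.4f}")
```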
A Coding Scheme for Analysing Problem-Solving Processes of First-Year Engineering Students
ERIC Educational Resources Information Center
Grigg, Sarah J.; Benson, Lisa C.
2014-01-01
This study describes the development and structure of a coding scheme for analysing solutions to well-structured problems in terms of cognitive processes and problem-solving deficiencies for first-year engineering students. A task analysis approach was used to assess students' problem solutions using the hierarchical structure from a…
Extensive Listening in a Colombian University: Process, Product, and Perceptions
ERIC Educational Resources Information Center
Mayora, Carlos A.
2017-01-01
The current paper reports an experience implementing a small-scale narrow listening scheme (one of the varieties of extensive listening) with intermediate learners of English as a foreign language in a Colombian university. The paper presents (a) how the scheme was designed and implemented, including materials and procedures (the process); (b) how…
NASA Astrophysics Data System (ADS)
Ugon, B.; Nandong, J.; Zang, Z.
2017-06-01
The presence of unstable dead-time systems in process plants often poses a daunting challenge for the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance-robustness. In this paper, we conduct stability analysis on a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the Routh-Hurwitz necessary and sufficient stability criteria, we establish several stability regions enclosing the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.
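A minimal sketch of the Routh-Hurwitz test underlying such stability regions is given below; it checks that the first column of the Routh array is all positive and deliberately skips the marginal zero-pivot cases. The example polynomial is hypothetical, not taken from the paper.

```python
import numpy as np

def routh_hurwitz_stable(coeffs) -> bool:
    """Return True if all roots of the polynomial (coefficients in
    descending order of power) lie in the open left half-plane, by
    building the Routh array and checking its first column.  Simplified
    sketch: zero pivots (marginal cases) are not handled."""
    c = np.asarray(coeffs, dtype=float)
    if c[0] < 0:
        c = -c
    n = len(c)
    rows = [c[0::2].copy(), c[1::2].copy()]
    if len(rows[1]) < len(rows[0]):          # pad second row
        rows[1] = np.append(rows[1], 0.0)
    for _ in range(n - 2):
        r0, r1 = rows[-2], rows[-1]
        if r1[0] == 0:
            return False                     # degenerate case, not handled
        new = (r1[0] * r0[1:] - r0[0] * r1[1:]) / r1[0]
        rows.append(np.append(new, 0.0))
    return all(r[0] > 0 for r in rows)

# Hypothetical closed-loop characteristic polynomial s^3 + 3s^2 + 3s + K;
# Routh-Hurwitz gives stability for 0 < K < 9.
print(routh_hurwitz_stable([1, 3, 3, 1]))   # K = 1: True
print(routh_hurwitz_stable([1, 3, 3, 10]))  # K = 10: False
```

Sweeping a controller gain through such a test is one way to trace out the kind of stability region the paper constructs analytically.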
Concurrent-scene/alternate-pattern analysis for robust video-based docking systems
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol
1991-01-01
A typical docking target employs a three-point design of retroreflective tape, one at each endpoint of the center-line and one on the tip of the central post. Scenes, sensed via laser diode illumination, produce pictures with spots corresponding to the desired reflections from the retroreflectors as well as other reflections. Control corrections for each axis of the vehicle can then be properly applied if the desired spots are accurately tracked. However, initial acquisition of these three spots (the detection and identification problem) is non-trivial in a severe noise environment. Signal-to-noise enhancement, accomplished by subtracting the non-illuminated scene from the target scene illuminated by laser diodes, cannot eliminate every false spot. Hence, minimizing docking failures due to target mistracking suggests the inclusion of additional processing features pertaining to target locations. In this paper, we present a concurrent processing scheme for a modified docking target scene which could lead to a perfect docking system. Since the non-illuminated target scene is already available, adding another feature to the three-point design by marking two non-reflective lines, one between the two end-points and one from the tip of the central post to the center-line, would allow this line feature to be picked up only when capturing the background scene (sensor data without laser illumination). Therefore, instead of performing image subtraction to generate a picture with a high signal-to-noise ratio, a processed line image based on a robust line detection technique (the Hough transform) can be fused with the actively sensed three-point target image to deduce the true locations of the docking target. This dual-channel confirmation scheme is necessary if a fail-safe system is to be realized from both the sensing and processing points of view. Detailed algorithms and preliminary results are presented.
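A sketch of the line-detection channel described above (illustrative only; the input file name and all thresholds are hypothetical, and this is not the paper's flight code) could use OpenCV's Hough transform on the non-illuminated background frame:

```python
import cv2
import numpy as np

# Hypothetical background frame captured without laser illumination.
background = cv2.imread("background_frame.png", cv2.IMREAD_GRAYSCALE)
if background is None:
    raise SystemExit("background_frame.png not found (hypothetical input)")

# Edge detection followed by the standard Hough transform to find the
# two marked non-reflective lines; thresholds are assumed.
edges = cv2.Canny(background, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)

if lines is not None:
    for rho, theta in lines[:, 0]:
        # Each (rho, theta) pair is a line in normal form:
        # x*cos(theta) + y*sin(theta) = rho.  Candidate spots from the
        # laser-illuminated frame that do not lie near the detected
        # centre-line/post-line geometry can be rejected as false
        # reflections.
        print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```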
Enhancing Vocabulary Acquisition through Reading: A Hierarchy of Text-Related Exercise Types.
ERIC Educational Resources Information Center
Wesche, M.; Paribakht, T. Sima
This paper describes a classification scheme developed to examine the effects of extensive reading on primary and second language vocabulary acquisition and reports on an experiment undertaken to test the model scheme. The classification scheme represents a hypothesized hierarchy of the degree and type of mental processing required by various…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-17
... Export Promotion of Capital Goods Scheme (EPCGS) 3. Export Oriented Units (EOU) Reimbursement of Central... Scheme (DEPS) 10. Advance Authorization Program (AAP) 11. Export Processing Zones (Renamed Special Economic Zones) 12. Target Plus Scheme (TPS) 13. Income Tax Exemptions Under Section 10A ...
77 FR 61742 - Certain Lined Paper Products From India: Preliminary Results of Countervailing Duty...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-11
.... Pre- and Post-Shipment Export Financing 2. Export Promotion of Capital Goods Scheme (EPCGS) 3. Export... (80IB Tax Program) 9. Duty Entitlement Passbook Scheme (DEPS) 10. Advance Authorization Program (AAP) 11. Export Processing Zones (Renamed Special Economic Zones) 12. Target Plus Scheme (TPS) B. Programs...
Mentoring for Beginning Principals: Revisiting the Past or Preparing for the Future?
ERIC Educational Resources Information Center
Daresh, John C.
2007-01-01
This paper describes a study of mentoring programs for beginning principals in two different urban school districts. In both settings, the goal of mentoring was said to be support for instructional leadership behaviors by novice principals. This represents an alternative to traditional mentoring schemes designed solely to ensure that first year…
"We're Fighting for the Heart and Soul of Education"
ERIC Educational Resources Information Center
Hunt, Sally
2011-01-01
Reductions in further and higher education spending, combined with cuts to helping-hand schemes such as the Education Maintenance Allowance, present a fundamental threat to everything educators stand for. This author discusses the need to build a credible alternative that puts tertiary education at the heart of a strategy for economic growth and…
WIPO--Advancing Access to Information for Print Disabled People.
ERIC Educational Resources Information Center
Mann, David
This paper examines the role of the World Intellectual Property Organisation (WIPO) as it relates to copyright and to visually impaired people's right to read. The paper starts by summarizing the barriers that can arise both from refusal to grant permission for alternative formats and from the use of rights management schemes incompatible with…
Is "Teach for All" Knocking on Your Door?
ERIC Educational Resources Information Center
Price, Anne; McConney, Andrew
2013-01-01
Over the past few decades there has been a rapid expansion in alternative "fast track" routes for teacher preparation. Among the most aggressive of these are Teach for All (TFA) schemes characterized not only by their ultra fast entry to teaching (6-7 week course) but also by their underlying philosophy that the so called…
Final-Year Education Projects for Undergraduate Chemistry Students
ERIC Educational Resources Information Center
Page, Elizabeth
2011-01-01
The Undergraduate Ambassadors Scheme provides an opportunity for students in their final year of the chemistry degree course at the University of Reading to choose an educational project as an alternative to practical research. The undergraduates work in schools where they can be regarded as role models and offer one way of inspiring pupils to…