Comparison of multiple atmospheric chemistry schemes in C-IFS
NASA Astrophysics Data System (ADS)
Flemming, Johannes; Huijnen, Vincent; Arteta, Joaquim; Stein, Olaf; Inness, Antje; Josse, Beatrice; Schultz, Martin; Peuch, Vincent-Henri
2013-04-01
As part of the MACC-II project (EU FP7), ECMWF's Integrated Forecasting System (IFS) is being extended by modules for chemistry, deposition and emission of reactive gases. This integration of chemistry complements the integration of aerosol processes in IFS (Composition-IFS, C-IFS). C-IFS provides global forecasts and analyses of atmospheric composition. Its main motivation is to utilize the IFS for the assimilation of satellite observations of atmospheric composition. Furthermore, integrating the chemistry packages directly into IFS achieves better consistency in the treatment of physical processes and has the potential for simulating interactions between atmospheric composition and meteorology. Atmospheric chemistry in C-IFS can be represented either by the modified CB05 scheme as implemented in the TM5 model or by the RACMOBUS scheme as implemented in the MOCAGE model; an implementation of the scheme of the MOZART 3.5 model is ongoing. We will present the latest progress in the development and application of C-IFS, focusing on the comparison of the different chemistry schemes in an otherwise identical C-IFS model setup (emissions, meteorology) as well as in their original chemistry and transport model setups.
Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2015-10-01
The land-surface model (LSM) is one of the physics components of the Weather Research and Forecasting (WRF) model. The LSM combines atmospheric information from the surface-layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes with internal information on the land's state variables and land-surface properties, and provides heat and moisture fluxes over land and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM; it features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core coprocessor design well suited to efficient parallelization and vectorization. Our results show that the MIC-based optimization running on a Xeon Phi 7120P coprocessor improves performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.
On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.
Casas, Fernando
2010-10-21
Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.
Fuel quality/processing study. Volume 3: Fuel upgrading studies
NASA Technical Reports Server (NTRS)
Jones, G. E., Jr.; Bruggink, P.; Sinnett, C.
1981-01-01
The methods used to calculate the refinery selling prices of the low-quality turbine fuels are described. Detailed descriptions and economics of the upgrading schemes are included, with flow diagrams showing the interconnections between processes and the stream flows involved. Each scheme is a complete, integrated, stand-alone facility: except for purchased electricity and water, each provides its own fuel and manufactures, when appropriate, its own hydrogen.
Energetic approach of biomass hydrolysis in supercritical water.
Cantero, Danilo A; Vaquerizo, Luis; Mato, Fidel; Bermejo, M Dolores; Cocero, M José
2015-03-01
Cellulose hydrolysis can be performed in supercritical water with high selectivity toward soluble sugars. The process produces high-pressure steam that can be integrated, from an energy point of view, with the whole biomass-treating process. This work investigates the integration of biomass hydrolysis reactors with commercial combined heat and power (CHP) schemes, with special attention to the reactor outlet streams. The innovation developed here allows adequate energy integration for heating and compression by using the high temperature of the flue gases and direct shaft work from the turbine. Integrating biomass hydrolysis with a CHP process allows the selective conversion of biomass into sugars with low heat requirements, and the CHP scheme yield is enhanced by around 10% by injecting water into the gas turbine. Furthermore, the hydrolysis reactor can be held at 400°C and 23 MPa using only the gas turbine outlet streams.
Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit
2013-11-01
The next generation of QbD-based pharmaceutical products will be manufactured through continuous processing. This allows the integration of online/inline monitoring tools, coupled with an efficient advanced model-based feedback control system, to achieve precise control of process variables so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables because of sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control mandated by regulatory authorities. The process comprises coupled dynamics involving slow and fast responses, indicating the need for a hybrid control scheme such as a combined MPC-PID scheme. In this manuscript, an efficient system-wide hybrid MPC-PID control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction is designed. An effective controller tuning strategy, an ITAE method coupled with an optimization strategy, is used to tune both the MPC and PID parameters. The designed hybrid control system is implemented in a first-principles model-based flowsheet simulated in gPROMS (Process Systems Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid scheme compared with PID-only or MPC-only control, illustrating the potential of hybrid control for improving pharmaceutical manufacturing operations.
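The ITAE-based tuning idea above can be sketched numerically. The following is a minimal illustration, not the paper's gPROMS implementation: it tunes a PID controller for a hypothetical first-order-plus-dead-time loop (all plant parameters and gains below are assumptions) by minimizing the ITAE index with a derivative-free optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def itae_cost(params, dt=0.05, t_end=40.0):
    """ITAE index (integral of t*|e(t)| dt) for a PID loop around a
    hypothetical first-order-plus-dead-time plant."""
    kp, ki, kd = params
    K, tau, theta = 1.0, 5.0, 1.0        # assumed plant gain, lag, dead time
    u_buf = [0.0] * int(theta / dt)      # transport-delay buffer
    y, integ, e_prev, itae = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - y                      # unit set-point step
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        u_buf.append(u)
        y += dt * (-y + K * u_buf.pop(0)) / tau   # explicit Euler step
        if not np.isfinite(y) or abs(y) > 1e6:    # penalize unstable trials
            return 1e12
        itae += (k * dt) * abs(e) * dt
    return itae

x0 = np.array([1.0, 0.1, 0.0])           # rough initial PID guess
res = minimize(itae_cost, x0, method="Nelder-Mead")
```

Since Nelder-Mead never returns a point worse than the best it has evaluated, `res.fun` is at most the ITAE of the initial guess; the paper's version applies the same idea to both MPC and PID parameters.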
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Addressing this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative-quantitative methodologies for the progress, quality, and safety indexes; and combines engineering economics, reliability theory, and information entropy theory into a new evaluation method for building construction projects. Combined with a practical case, the paper presents detailed computing processes and steps: selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.
Molecular Symmetry in Ab Initio Calculations
NASA Astrophysics Data System (ADS)
Madhavan, P. V.; Written, J. L.
1987-05-01
A scheme is presented for the construction of the Fock matrix in LCAO-SCF calculations and for the transformation of basis integrals to LCAO-MO integrals that can utilize several symmetry-unique lists of integrals corresponding to different symmetry groups. The algorithm is fully compatible with vector processing machines and is especially well suited to parallel processing machines.
Prokudin, Alexei; Sun, Peng; Yuan, Feng
2015-10-01
Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. We then apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.
Integrated optical 3D digital imaging based on DSP scheme
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.
2008-03-01
We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme uses a parallel hardware structure built around the DSP and a field-programmable gate array (FPGA) to realize 3-D imaging, adopting phase measurement profilometry. To pipeline fringe projection, image acquisition and fringe-pattern analysis, we developed a multi-threaded application under the DSP/BIOS real-time operating system (RTOS), whose preemptive kernel and powerful configuration tool enable real-time scheduling and synchronization. Software optimization is used to accelerate automatic fringe analysis and phase unwrapping. The proposed scheme reaches 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.
Studies in integrated line-and packet-switched computer communication systems
NASA Astrophysics Data System (ADS)
Maglaris, B. S.
1980-06-01
The problem of efficiently allocating the bandwidth of a trunk to both line- and packet-switched traffic is handled for various system and traffic models. A performance analysis is carried out for both variable- and fixed-frame schemes. It is shown that variable-frame schemes, which adjust the frame length according to traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study of the fixed-frame schemes follows: the problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markov decision process. It is shown that the movable-boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near-optimal performance and simplicity of implementation. Finally, the behavior of the movable-boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate sample paths of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of a Wiener process in SDEs causes difficulties in the direct application of any ODE numerical integration technique, including the parareal algorithm. The parallel implementation involves two SDE solvers: a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the initial conditions that start the fine-level integrators. For numerical illustration, a randomly excited Duffing oscillator is investigated to study the performance of the stochastic parallel algorithm over a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
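The parareal predictor-corrector structure described above can be sketched on a deterministic toy problem. This is a minimal, serial illustration under stated assumptions: the drift is a simple linear ODE rather than the paper's stochastic Duffing oscillator, and the "parallel" fine sweeps run in a loop instead of over MPI.

```python
import numpy as np

def f(y):
    return -y                      # toy linear drift (the Duffing drift would go here)

def fine(y0, t0, t1, m=100):
    """Fine propagator F: m classical RK4 substeps."""
    h = (t1 - t0) / m
    y = y0
    for _ in range(m):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y += (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return y

def coarse(y0, t0, t1):
    """Coarse propagator G: a single explicit Euler step."""
    return y0 + (t1 - t0) * f(y0)

def parareal(y0, T=2.0, N=10, n_iter=5):
    t = np.linspace(0.0, T, N + 1)
    U = np.empty(N + 1); U[0] = y0
    for n in range(N):             # serial coarse sweep: initial guess
        U[n+1] = coarse(U[n], t[n], t[n+1])
    for _ in range(n_iter):
        F = [fine(U[n], t[n], t[n+1]) for n in range(N)]    # parallel in an MPI version
        G_old = [coarse(U[n], t[n], t[n+1]) for n in range(N)]
        for n in range(N):         # serial predictor-corrector update
            U[n+1] = coarse(U[n], t[n], t[n+1]) + F[n] - G_old[n]
    return t, U

t, U = parareal(1.0)               # approximates y(t) = exp(-t)
```

For the SDE case of the paper, both propagators would additionally consume the same frozen Wiener increments so that fine and coarse sweeps see a common noise path.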
Ocean Variability Effects on Underwater Acoustic Communications
2011-09-01
schemes for accessing wide frequency bands. Compared with OFDM schemes, the multiband MIMO transmission combined with time reversal processing... systems, or multiple-input/multiple-output (MIMO) systems, decision feedback equalization and interference cancellation schemes have been integrated... The MIMO receiver also iterates channel estimation and symbol demodulation with...
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.
2016-11-01
This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for computational fluid dynamics (CFD) simulations. The algorithm allows a specified order of accuracy for the time integration and other important stability properties to be imposed as nonlinear constraints within the optimization problem, so the coefficients of the IMEX scheme must satisfy a set of constraints simultaneously. At each iteration, the optimization process estimates the location of the optimal coefficients using a set of global surrogates for both the objective and constraint functions, as well as a model of the uncertainty of these surrogates based on Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests using turbulent channel flow simulations validate the theoretical order of accuracy and stability properties of the new scheme.
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
NASA Astrophysics Data System (ADS)
Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane
2011-03-01
As all women over the age of 40 are advised to have mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As tools to improve quality and accelerate analysis, CADe/Dx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/Dx schemes have been developed, and most are restricted to detection rather than diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme developed by our research group and integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on the image acquisition and digitization procedures (FFDM, CR or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme that evaluates the presence of microcalcification clusters as well as possibly malignant masses based on their contours. The aim is to provide not only information on the detected structures but also a pre-report with a BI-RADS classification. The system still lacks an interface integrating all the modules; despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Berry, J. A.; Collatz, G. J.; Field, C. B.; Hall, F. G.
1992-01-01
The theoretical analyses of Sellers (1985, 1987), which linked canopy spectral reflectance properties to (unstressed) photosynthetic rates and conductances, are critically reviewed and significant shortcomings are identified. These are addressed in this article principally through the incorporation of a more sophisticated and realistic treatment of leaf physiological processes within a new canopy integration scheme. The results indicate that area-averaged spectral vegetation indices, as obtained from coarse-resolution satellite sensors, may give good estimates of the area integrals of photosynthesis and conductance even for spatially heterogeneous (though physiologically uniform) vegetation covers.
Control of Vacuum Induction Brazing System for Sealing of Instrumentation Feedthrough
NASA Astrophysics Data System (ADS)
Ahn, Sung Ho; Hong, Jintae; Joung, Chang Young; Heo, Sung Ho
2017-04-01
The integrity of the instrumentation cables is an important performance parameter in the brazing process, along with the sealing performance. In this paper, an accurate control scheme for brazing of the instrumentation feedthrough in a vacuum induction brazing system is developed. The experimental results show that accurate brazing temperature control is achieved by the developed scheme, and the sealing performance of the instrumentation feedthrough and the integrity of the instrumentation cables are demonstrated to be acceptable after brazing.
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
We review an alternative NLO subtraction scheme based on the splitting kernels of an improved parton shower, which promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, recently published as a verification of the scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
We report speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit. The integral-imaging-based method enables exact hologram capture of real three-dimensional objects under ordinary incoherent illumination. In our implementation, we apply a parallel computation scheme on the graphics processing unit, accelerating the processing speed. Using the enhanced hologram capture speed, we also implement a pseudo-real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Design and Application of Integrated Assembly Technology of FRG in Residential Ceiling
NASA Astrophysics Data System (ADS)
Li, Xiuyun; Yu, Changyong
2018-06-01
FRG is a new environmentally friendly indoor decoration material popular in prefabricated construction. This paper introduces the performance and design of the material and applies the FRG integrated assembly process for residential ceilings in a demonstration project, which showed that the whole-template scheme for FRG integrated ceilings in prefabricated modules achieves good artistry and application effect. It also provides a reference for the modular design of integrated ceiling assembly processes in similar indoor decoration.
Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.
Camacho, Oscar; De la Cruz, Francisco
2004-04-01
An approach to controlling integrating processes with elevated dead time using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order-plus-dead-time model are used to synthesize the controller. Since the performance of existing Smith predictor controllers degrades in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations expressed as functions of the characteristic parameters of the model, and it can be implemented on computer-based industrial controllers that execute PID algorithms. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.
A fast CT reconstruction scheme for a general multi-core PC.
Zeng, Kai; Bai, Erwei; Wang, Ge
2007-01-01
Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and inflexible, and the graphics processing unit (GPU) in a current graphics card can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. The proposed scheme integrates several techniques, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. Its merits are demonstrated in numerical experiments against the traditional implementation: our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
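The multithreaded-computation ingredient of such a scheme can be sketched in simplified form. This is not the paper's optimized C++ implementation: it is a minimal NumPy illustration that splits unfiltered parallel-beam backprojection views across worker threads and checks that the partial sums reproduce the serial result.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject(sino, angles, size):
    """Naive pixel-driven parallel-beam backprojection (no filtering),
    vectorized over pixels with NumPy."""
    xs = np.arange(size) - size / 2 + 0.5
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((size, size))
    n_det = sino.shape[1]
    for a, th in enumerate(angles):
        # nearest-neighbour detector index of each pixel for this view
        s = X * np.cos(th) + Y * np.sin(th) + n_det / 2
        idx = np.clip(s.astype(int), 0, n_det - 1)
        img += sino[a][idx]
    return img

def backproject_mt(sino, angles, size, n_workers=4):
    """Split the projection views among worker threads and sum the partials."""
    chunks = np.array_split(np.arange(len(angles)), n_workers)
    with ThreadPoolExecutor(n_workers) as ex:
        parts = ex.map(lambda c: backproject(sino[c], angles[c], size), chunks)
    return sum(parts)

angles = np.linspace(0.0, np.pi, 90, endpoint=False)
sino = np.random.default_rng(0).random((90, 64))
img_serial = backproject(sino, angles, 64)
img_threads = backproject_mt(sino, angles, 64)
```

NumPy releases the GIL inside many of its kernels, so the threads can overlap; the production scheme in the paper instead combines SIMD intrinsics, geometric symmetry and cache-aware data layouts in compiled code.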
NASA Astrophysics Data System (ADS)
Savin, Andrei V.; Smirnov, Petr G.
2018-05-01
Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used to integrate the equations of motion. The finite-difference scheme is found to be non-conservative to a degree that depends on the time step, which is equivalent to a purely numerical energy source appearing during collisions. A method to compensate for this source is proposed and tested.
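The step-size dependence of the numerical energy error in a Verlet-type scheme can be illustrated on a simple test problem. This sketch uses a single harmonic oscillator rather than a colliding ensemble, an assumption made for brevity; the bounded, dt-dependent energy oscillation it exhibits is the generic effect the abstract refers to.

```python
import numpy as np

def velocity_verlet(x0, v0, dt, n, k=1.0, m=1.0):
    """Velocity-Verlet integration of a unit harmonic oscillator."""
    x = np.empty(n + 1); v = np.empty(n + 1)
    x[0], v[0] = x0, v0
    a = -k * x[0] / m
    for i in range(n):
        x[i+1] = x[i] + v[i]*dt + 0.5*a*dt*dt
        a_new = -k * x[i+1] / m                # force at the new position
        v[i+1] = v[i] + 0.5*(a + a_new)*dt     # average of old and new forces
        a = a_new
    return x, v

def energy(x, v, k=1.0, m=1.0):
    return 0.5*m*v**2 + 0.5*k*x**2

x, v = velocity_verlet(1.0, 0.0, dt=0.05, n=4000)
E = energy(x, v)
drift = np.abs(E - E[0]).max()   # bounded energy error, scaling like dt**2
```

Doubling the time step roughly quadruples the energy error, which is the kind of step-dependent artificial energy source the paper's compensation method targets.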
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double-loop control model is proposed for unstable chemical processes. The inner loop stabilizes the integrating or unstable process and transforms the original process into a stable first-order-plus-dead-time process. The outer loop enhances the set-point response, and a disturbance controller is designed to enhance the disturbance response. The improved control system is simple, has a clear physical meaning, and its characteristic equation is easy to stabilize. The three controllers in the improved scheme are designed separately; each is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods.
Finite difference schemes for long-time integration
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1993-01-01
Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem in which the cost function measures the global truncation error while taking the initial data into account. The resulting schemes are applicable for integration times fourfold or more longer than similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
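As a concrete example of the family of schemes discussed, a classical fourth-order compact (Padé) first-derivative scheme on a periodic grid can be written and tested in a few lines. This is the textbook scheme, not one of the optimized schemes derived in the paper.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Classical 4th-order Pade compact scheme on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                  # periodic wrap-around entries
    rhs = (3.0 / (4.0 * h)) * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(A, rhs)              # dense solve; tridiagonal in practice

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
df = compact_first_derivative(np.sin(x), h)
err = np.abs(df - np.cos(x)).max()              # O(h^4) error on a smooth function
```

Compared with the explicit second-order central difference at the same grid spacing, the implicit coupling on the left-hand side buys two extra orders of accuracy with the same three-point stencil width.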
Optical temperature compensation schemes of spectral modulation sensors for aircraft engine control
NASA Astrophysics Data System (ADS)
Berkcan, Ertugrul
1993-02-01
Optical temperature compensation schemes for the ratiometric interrogation of spectral modulation sensors, providing robustness to source temperature, are presented. Using these compensation schemes we have obtained better than a 50-100X decrease in the temperature coefficient of the sensitivity. We have also developed a spectrographic interrogation scheme that provides increased source-temperature robustness; this affords significantly improved accuracy over FADEC temperature ranges as well as a further, substantial reduction in the temperature coefficient of the sensitivity. This latter compensation scheme can be integrated into a small E/O package that includes the detection and the analog and digital signal processing. These interrogation schemes can be used within a spatially multiplexed detector architecture.
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Youcef; Semeraro, David
1991-01-01
An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace, resulting in the computation of a similar exponential matrix-vector product of much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially explicit in nature but have good stability properties.
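The basic kernel exp(A)v via Arnoldi projection can be sketched directly. This is a minimal illustration of the projection idea; the small dense test matrix is an assumption chosen so a dense reference is feasible.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m=30):
    """Approximate exp(A) @ v by Arnoldi projection onto the Krylov
    subspace span{v, Av, ..., A^(m-1) v}:  exp(A)v ~ beta * V_m expm(H_m) e1."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt step
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j+1, j] = np.linalg.norm(w)
        if H[j+1, j] < 1e-12:                # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j+1] = w / H[j+1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    # the small (m x m) exponential replaces the large (n x n) one
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
n = 200                                      # small dense test operator
A = -np.diag(np.linspace(0.1, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
v = rng.standard_normal(n)
approx = krylov_expv(A, v, m=30)
exact = expm(A) @ v                          # dense reference, feasible at this size
```

The time-integration schemes in the paper wrap this kernel so that each step costs only m matrix-vector products, giving explicit-like cost with stability closer to that of exact exponential propagation.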
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope.
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-04-20
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on the vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification, simply by changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modeling and signal processing must first be performed accurately; therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module and charge amplifier is performed by considering the mode shape of a thin hemispherical shell, and the signal processing and control algorithms are then designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme easily rejects the common-mode errors of the x-, y-axis signals and switches to the rate-integrating mode. In the rate gyro mode the controller is composed of Phase-Locked Loop (PLL), amplitude, quadrature and rate control loops, all designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board implementing these algorithms is verified through experiments.
Yang, L M; Shu, C; Wang, Y
2016-03-01
In this work, a discrete gas-kinetic scheme (DGKS) is presented for simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. For the circular function-based GKS, the integrals for conservation forms of moments over the infinite domain of the Maxwellian function-based GKS are simplified to integrals along the circle. As a result, explicit formulations of conservative variables and fluxes are derived. However, these explicit formulations of the circular function-based GKS for viscous flows are still complicated, which may not be easy for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments for the circular function-based GKS can be accurately satisfied by weighted summation of distribution functions at discrete points. In this work, it is shown that integral quadrature by four discrete points on the circle, which forms the D2Q4 discrete velocity model, can exactly match the integrals. Numerical results show that the present scheme can provide accurate results for incompressible and compressible viscous flows with roughly the same computational cost as that needed by the Roe scheme.
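A quick way to see why four points can suffice is to check circle-averaged monomial moments directly. The sketch below is only an illustration of the moment-matching requirement (plain monomial averages, not the scheme's actual conservation moments with their distribution-function prefactors): the equally weighted D2Q4 points reproduce all circle averages up to third degree and lose exactness at fourth degree.

```python
import numpy as np
from itertools import product

# The four discrete velocities of the D2Q4 model: equally spaced points on
# the unit circle, each carrying weight 1/4.
theta4 = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
pts = np.stack([np.cos(theta4), np.sin(theta4)], axis=1)

# Reference circle averages via a fine uniform grid (exact to machine
# precision for trigonometric polynomials of modest degree).
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

for a, b in product(range(4), repeat=2):
    if a + b > 3:
        continue
    exact = np.mean(np.cos(theta) ** a * np.sin(theta) ** b)
    d2q4 = np.mean(pts[:, 0] ** a * pts[:, 1] ** b)
    assert abs(exact - d2q4) < 1e-12, (a, b)   # degrees 0..3 all match

# From fourth degree on the match is lost (1/2 versus the true 3/8 for
# cos^4), so a four-point rule is adequate only because the scheme needs
# a specific finite set of moments.
gap = abs(np.mean(pts[:, 0] ** 4) - np.mean(np.cos(theta) ** 4))
```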
Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng
2013-06-01
The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system helps safeguard patients' electronic patient records (EPRs) and enables health care workers and medical personnel to rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only retains a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems of previous schemes and withstands possible attacks.
Integration of hybrid wireless networks in cloud services oriented enterprise information systems
NASA Astrophysics Data System (ADS)
Li, Shancang; Xu, Lida; Wang, Xinheng; Wang, Jue
2012-05-01
This article presents a hybrid wireless network integration scheme in cloud services-based enterprise information systems (EISs). With the emerging hybrid wireless networks and cloud computing technologies, it is necessary to develop a scheme that can seamlessly integrate these new technologies into existing EISs. By combining hybrid wireless networks and cloud computing in EISs, a new framework is proposed, which includes a front-end layer, a middle layer and a back-end layer connected to IP-based EISs. Based on a collaborative architecture, a cloud services management framework and process diagram are presented. As a key feature, the proposed approach integrates access control functionalities within the hybrid framework, providing users with filtered views of available cloud services based on cloud service access requirements and user security credentials. In future work, we will implement the proposed framework over the SwanMesh platform by integrating the UPnP standard into an enterprise information system.
A joint asymmetric watermarking and image encryption scheme
NASA Astrophysics Data System (ADS)
Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.
2008-02-01
Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing both to cipher watermarked data and to mark encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example where the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.
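The commutativity property can be demonstrated with a toy additive mark and a permutation cipher. This is a hedged stand-in for the paper's linear-algebra construction: the embedding rule, parameters and detection statistic below are illustrative, not the published scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

def embed(x, w, alpha=0.05):
    """Toy additive watermark embedding (stand-in for the paper's
    linear-algebra construction)."""
    return x + alpha * w

def encrypt(x, perm):
    """Cipher by a secret random permutation of sample positions."""
    return x[perm]

n = 1000
x = rng.standard_normal(n)          # host signal
w = rng.choice([-1.0, 1.0], n)      # watermark sequence
perm = rng.permutation(n)           # secret key of the cipher

# Commutativity: marking then ciphering equals ciphering then marking,
# provided the watermark itself is permuted with the same key.
a = encrypt(embed(x, w), perm)
b = embed(encrypt(x, perm), encrypt(w, perm))
assert np.allclose(a, b)

# Correlation-based detection is invariant under the permutation, so the
# mark can be detected in the encrypted domain with a permuted reference.
det_plain = embed(x, w) @ w / n
det_ciphered = a @ encrypt(w, perm) / n
```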
Nagy-Soper subtraction scheme for multiparton final states
NASA Astrophysics Data System (ADS)
Chung, Cheng-Han; Robens, Tania
2013-04-01
In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme has been restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.
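The core mechanism of any subtraction scheme, removing a singular counterterm whose integral is known analytically so that the remainder is numerically integrable, can be shown with a one-dimensional toy. This is illustrative only; the real scheme works with parton-shower splitting kernels and final-state momentum mappings, not a scalar 1/x singularity.

```python
import math

def real_emission(x):
    """Toy 'real emission' integrand, singular like 1/x as x -> 0."""
    return math.exp(x) / x

def counterterm(x):
    """Subtraction term with the same singular behaviour; its integral
    over [eps, 1] is known in closed form: -log(eps)."""
    return 1.0 / x

def midpoint(f, a, b, n=200000):
    """Composite midpoint rule (never evaluates the endpoints)."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The subtracted combination (exp(x) - 1) / x is finite even with the
# regulator removed entirely, so a plain quadrature converges:
finite = midpoint(lambda x: real_emission(x) - counterterm(x), 0.0, 1.0)

# Analytically this equals Ein(1) = sum_{n>=1} 1/(n * n!).
ein1 = sum(1.0 / (n * math.factorial(n)) for n in range(1, 20))
```

The analytically integrated counterterm (here -log(eps)) is what, in the real scheme, cancels the explicit poles of the virtual contribution.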
Train integrity detection risk analysis based on PRISM
NASA Astrophysics Data System (ADS)
Wen, Yuan
2018-04-01
The GNSS-based Train Integrity Monitoring System (TIMS) is an effective and low-cost scheme for train integrity detection. However, as an external auxiliary system of CTCS, GNSS may be influenced by external environments, such as the uncertainty of wireless communication channels, which may lead to failures of communication and positioning. In order to guarantee the reliability and safety of train operation, a risk analysis method for train integrity detection based on PRISM is proposed in this article. First, we analyze the risk factors (in the GNSS communication process and the on-board communication process) and model them. Then, we evaluate the performance of the model in PRISM based on field data. Finally, we discuss how these risk factors influence the train integrity detection process.
NASA Astrophysics Data System (ADS)
Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young
2016-07-01
Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, optical network and processing unit cloud have been decoupled from each other, so that their resources are controlled independently. With the growing number of mobile internet users, traditional architectures cannot implement the resource optimization and scheduling needed for high-level service guarantees, due to the communication obstacles among these domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load scenario is also quantitatively evaluated in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes, to demonstrate the efficiency of the proposed MDRI architecture.
NASA Astrophysics Data System (ADS)
Szelag, Bertrand; Abraham, Alexis; Brision, Stéphane; Gindre, Paul; Blampey, Benjamin; Myko, André; Olivier, Segolene; Kopp, Christophe
2017-05-01
Silicon photonics is becoming a reality for next-generation communication systems, addressing the increasing needs of HPC (High Performance Computing) systems and datacenters. CMOS-compatible photonic platforms integrating passive and active devices are developed in many foundries. The use of existing and qualified microelectronics processes guarantees cost-efficient and mature photonic technologies. Meanwhile, photonic devices have their own fabrication constraints, not similar to those of CMOS devices, which can affect their performance. In this paper, we address the integration of a PN junction Mach-Zehnder modulator in a 200mm CMOS-compatible photonic platform. Implantation-based device characteristics are impacted by many process variations, among them screening layer thickness, dopant diffusion and implantation mask overlay. CMOS devices are generally quite robust with respect to these processes thanks to dedicated design rules. For photonic devices, the situation is different since, most of the time, doped areas must be carefully located within waveguides and CMOS solutions like self-alignment to the gate cannot be applied. In this work, we present different robust integration solutions for junction-based modulators. A simulation setup has been built in order to optimize the process conditions. It consists of a Matlab interface coupling process and device electro-optic simulators in order to run many iterations. Illustrations of modulator characteristic variations with process parameters are given using this simulation setup. Parameters under study are, for instance, X- and Y-direction lithography shifts and screening oxide and slab thicknesses. A robust process and design approach leading to a PN junction Mach-Zehnder modulator insensitive to lithography misalignment is then proposed. Simulation results are compared with experimental data. Indeed, various modulators have been fabricated with different process conditions and integration schemes.
Extensive electro-optic characterization of these components will be presented.
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
Evaluation of a new microphysical aerosol module in the ECMWF Integrated Forecasting System
NASA Astrophysics Data System (ADS)
Woodhouse, Matthew; Mann, Graham; Carslaw, Ken; Morcrette, Jean-Jacques; Schulz, Michael; Kinne, Stefan; Boucher, Olivier
2013-04-01
The Monitoring Atmospheric Composition and Climate II (MACC-II) project will provide a system for monitoring and predicting atmospheric composition. As part of the first phase of MACC, the GLOMAP-mode microphysical aerosol scheme (Mann et al., 2010, GMD) was incorporated within the ECMWF Integrated Forecasting System (IFS). The two-moment modal GLOMAP-mode scheme includes new particle formation, condensation, coagulation, cloud-processing, and wet and dry deposition. GLOMAP-mode is already incorporated as a module within the TOMCAT chemistry transport model and within the UK Met Office HadGEM3 general circulation model. The microphysical, process-based GLOMAP-mode scheme allows an improved representation of aerosol size and composition and can simulate aerosol evolution in the troposphere and stratosphere. The new aerosol forecasting and re-analysis system (known as IFS-GLOMAP) will also provide improved boundary conditions for regional air quality forecasts, and will benefit from assimilation of observed aerosol optical depths in near real time. Presented here is an evaluation of the performance of the IFS-GLOMAP system in comparison to in situ aerosol mass and number measurements, and remotely-sensed aerosol optical depth measurements. Future development will provide a fully-coupled chemistry-aerosol scheme, and the capability to resolve nitrate aerosol.
Lab-on-CMOS Integration of Microfluidics and Electrochemical Sensors
Huang, Yue; Mason, Andrew J.
2013-01-01
This paper introduces a CMOS-microfluidics integration scheme for electrochemical microsystems. A CMOS chip was embedded into a micro-machined silicon carrier. By leveling the CMOS chip and carrier surface to within 100 nm, an expanded obstacle-free surface suitable for photolithography was achieved. Thin film metal planar interconnects were microfabricated to bridge CMOS pads to the perimeter of the carrier, leaving a flat and smooth surface for integrating microfluidic structures. A model device containing SU-8 microfluidic mixers and detection channels crossing over microelectrodes on a CMOS integrated circuit was constructed using the chip-carrier assembly scheme. Functional integrity of microfluidic structures and on-CMOS electrodes was verified by a simultaneous sample dilution and electrochemical detection experiment within multi-channel microfluidics. This lab-on-CMOS integration process is capable of high packing density, is suitable for wafer-level batch production, and opens new opportunities to combine the performance benefits of on-CMOS sensors with lab-on-chip platforms. PMID:23939616
H∞ filtering for stochastic systems driven by Poisson processes
NASA Astrophysics Data System (ADS)
Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya
2015-01-01
This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
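The key transformation named in the abstract, that the expectation of a stochastic integral against a Poisson process equals the expectation of a Lebesgue integral against its intensity (E[∫ f dN] = λ ∫ f dt for a homogeneous process), can be checked by Monte Carlo. All parameters below are illustrative.

```python
import random

random.seed(42)

lam, T, trials = 2.0, 1.0, 40000

def poisson_arrivals(lam, T):
    """Arrival times of a homogeneous Poisson process with rate lam on [0, T]."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(lam)
        if t > T:
            return times
        times.append(t)

# With f(t) = t, the stochastic integral over [0, 1] is the sum of the
# arrival times, and its expectation should equal lam * T^2 / 2 = 1.0.
f = lambda t: t
mc = sum(sum(f(t) for t in poisson_arrivals(lam, T))
         for _ in range(trials)) / trials
exact = lam * T ** 2 / 2.0
```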
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Fulin; Cao, Yang; Zhang, Jun Jason
Ensuring flexible and reliable data routing is indispensable for the integration of Advanced Metering Infrastructure (AMI) networks, so we propose a security-oriented and load-balancing wireless data routing scheme. A novel utility function is designed based on the secure routing scheme. Then, we model the interactive security-oriented routing strategy among meter data concentrators or smart grid meters as a mixed-strategy network formation game. Finally, this problem results in a stable probabilistic routing scheme with the proposed distributed learning algorithm. One contribution is that we study how different types of applications affect the routing selection strategy and the strategy tendency. Another contribution is that the chosen strategy of our mixed routing can adaptively converge to a new mixed-strategy Nash equilibrium (MSNE) during the learning process in the smart grid.
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng
2015-11-01
To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. Thereafter we propose a secure and efficient authentication scheme for the integrated EPR information system based on lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
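A hedged sketch of the general pattern, hash-and-XOR challenge-response without a server-side verifier table, follows. The names, message layout and registration step are hypothetical and deliberately simplified for illustration; this is not the scheme proposed in the paper and must not be treated as a vetted protocol.

```python
import hashlib
import secrets

def h(*parts):
    """One-way hash (SHA-256 here as a stand-in for a lightweight hash)."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def xor(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Registration (hypothetical): the server keeps only a master secret and
# derives the card secret from it, so no verifier table is stored.
server_key = secrets.token_bytes(32)
identity = b"patient42"
card_secret = h(server_key, identity)        # written to the smart card

# Authentication round: the server issues a fresh challenge, the card
# answers with a hash-and-XOR proof of possession of card_secret.
nonce = secrets.token_bytes(32)
proof = xor(h(card_secret, nonce), h(identity, nonce))

# The server recomputes card_secret on the fly from its master key.
expected = xor(h(h(server_key, identity), nonce), h(identity, nonce))
assert proof == expected

# A tampered response is rejected.
bad = bytes([proof[0] ^ 1]) + proof[1:]
```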
A more secure anonymous user authentication scheme for the integrated EPR information system.
Wen, Fengtong
2014-05-01
Secure and efficient user mutual authentication is an essential task for the integrated electronic patient record (EPR) information system. Recently, several authentication schemes have been proposed to meet this requirement. In a recent paper, Lee et al. proposed an efficient and secure password-based authentication scheme using smart cards for the integrated EPR information system. This scheme is believed to resist a range of network attacks; in particular, the authors claimed that it could resist lost smart card attacks. However, we reanalyze the security of Lee et al.'s scheme and show that it fails to resist off-line password guessing attacks if the secret information stored in the smart card is compromised. This also means that their scheme is insecure against user impersonation attacks. Then, we propose a new user authentication scheme for integrated EPR information systems based on quadratic residues. The new scheme not only resists a range of network attacks but also provides user anonymity. We show that our proposed scheme provides stronger security.
NASA Astrophysics Data System (ADS)
Xia, Weiwei; Shen, Lianfeng
We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.
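The idea of prioritizing handoff calls via thresholds can be illustrated with a one-dimensional guard-channel birth-death chain, a much-reduced cousin of the paper's multi-dimensional Markov model. All rates and sizes below are illustrative.

```python
# A cell with C channels where only the first m admit new calls; the last
# C - m channels are reserved ("guarded") for handoff calls.
C, m = 20, 16          # total channels, admission threshold for new calls
lam_new, lam_ho, mu = 8.0, 3.0, 1.0   # arrival and service rates

# Unnormalized stationary probabilities of the birth-death chain:
# p[n+1] / p[n] = arrival_rate(n) / ((n + 1) * mu)
p = [1.0]
for n in range(C):
    arr = lam_new + lam_ho if n < m else lam_ho   # new calls blocked past m
    p.append(p[-1] * arr / ((n + 1) * mu))
Z = sum(p)
p = [x / Z for x in p]

# Blocking probabilities: new calls are blocked whenever n >= m,
# handoff calls only when all C channels are busy.
block_new = sum(p[m:])
block_ho = p[C]
```

By construction the handoff blocking probability is strictly smaller than the new-call blocking probability, which is exactly the prioritization effect the thresholds in the paper are tuned to achieve.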
NASA Technical Reports Server (NTRS)
Allard, R.; Mack, B.; Bayoumi, M. M.
1989-01-01
Most robot systems lack a suitable hardware and software environment for efficient research on new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. In order to reduce this development time, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed-processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) technique is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
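A minimal SRM sketch with FFT acceleration for a scalar (univariate) process is shown below. The spectrum and parameters are illustrative, and the random-function constraints of DR-SRM/DR-POD are not included; this only shows the cosine-sum representation and its FFT evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spectral representation of a zero-mean stationary process:
#   X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k),  phi_k iid U(0, 2 pi)
# The cosine sum over all sample times is evaluated at once via an inverse FFT.
N, w_max = 2048, 4.0 * np.pi
dw = w_max / N
w = (np.arange(N) + 0.5) * dw               # midpoint frequencies
S = 1.0 / (1.0 + w ** 2)                    # illustrative one-sided spectrum
A = np.sqrt(2.0 * S * dw)                   # component amplitudes
phi = rng.uniform(0.0, 2.0 * np.pi, N)

t = np.arange(N) * (2.0 * np.pi / (N * dw)) # natural FFT time grid
B = A * np.exp(1j * phi)
inner = np.fft.ifft(B) * N                  # sum_k B_k exp(i k dw t_j)
x = np.real(np.exp(0.5j * dw * t) * inner)  # half-step phase for w_k offset

# Over one full period the time-average power must match the target
# variance sum_k A_k^2 / 2 (~ the integral of S over [0, w_max]).
target = np.sum(A ** 2) / 2.0
var_x = np.var(x)
```

The FFT turns the O(N^2) double loop over frequencies and times into O(N log N), which is the acceleration role it plays in the paper.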
2012-01-01
Background The Danish Multiple Sclerosis Society initiated a large-scale bridge building and integrative treatment project to take place from 2004–2010 at a specialized Multiple Sclerosis (MS) hospital. In this project, a team of five conventional health care practitioners and five alternative practitioners was set up to work together in developing and offering individualized treatments to 200 people with MS. The purpose of this paper is to present results from the six year treatment collaboration process regarding the development of an integrative treatment model. Discussion The collaborative work towards an integrative treatment model for people with MS, involved six steps: 1) Working with an initial model 2) Unfolding the different treatment philosophies 3) Discussing the elements of the Intervention-Mechanism-Context-Outcome-scheme (the IMCO-scheme) 4) Phrasing the common assumptions for an integrative MS program theory 5) Developing the integrative MS program theory 6) Building the integrative MS treatment model. The model includes important elements of the different treatment philosophies represented in the team and thereby describes a common understanding of the complexity of the courses of treatment. Summary An integrative team of practitioners has developed an integrative model for combined treatments of People with Multiple Sclerosis. The model unites different treatment philosophies and focuses on process-oriented factors and the strengthening of the patients’ resources and competences on a physical, an emotional and a cognitive level. PMID:22524586
Chen, Hung-Ming; Lo, Jung-Wen; Yeh, Chang-Kuo
2012-12-01
The rapidly increased availability of always-on broadband telecommunication environments and lower-cost vital signs monitoring devices brings the advantages of telemedicine directly into the patient's home. Hence, the control of access to remote medical servers' resources has become a crucial challenge. A secure authentication scheme between the medical server and remote users is therefore needed to safeguard data integrity and confidentiality and to ensure availability. Recently, many authentication schemes that use low-cost mobile devices have been proposed to meet these requirements. In contrast to previous schemes, Khan et al. proposed a dynamic ID-based remote user authentication scheme that reduces computational complexity and includes features such as a provision for the revocation of lost or stolen smart cards and a time expiry check for the authentication process. However, Khan et al.'s scheme has some security drawbacks. To remedy these, this study proposes an enhanced authentication scheme that overcomes the weaknesses inherent in Khan et al.'s scheme and demonstrates that it is more secure and robust for use in a telecare medical information system.
Decentralized Adaptive Control For Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Precise knowledge of dynamics not required. Proposed scheme for control of multijointed robotic manipulator calls for independent control subsystem for each joint, consisting of proportional/integral/derivative feedback controller and position/velocity/acceleration feedforward controller, both with adjustable gains. Independent joint controller compensates for unpredictable effects, gravitation, and dynamic coupling between motions of joints, while forcing joints to track reference trajectories. Scheme amenable to parallel processing in distributed computing system wherein each joint controlled by relatively simple algorithm on dedicated microprocessor.
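The joint-level loop can be sketched as follows: PID feedback plus position/velocity/acceleration feedforward on a unit-inertia joint. Gains and the joint model are illustrative, and the unmodelled damping term plays the role of the unpredictable effects that the feedback must compensate.

```python
import math

# One independent joint controller: PID feedback plus feedforward, simulated
# with explicit Euler on a unit-inertia joint (all gains illustrative).
dt, T = 1e-3, 2.0
kp, ki, kd = 400.0, 100.0, 40.0          # PID feedback gains
kf_vel, kf_acc = 0.0, 1.0                # feedforward gains (inertia = 1)

q, qd, integ = 0.0, 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    # reference trajectory and its derivatives
    qr, qr_d, qr_dd = math.sin(t), math.cos(t), -math.sin(t)
    e = qr - q
    integ += e * dt
    u = (kp * e + ki * integ + kd * (qr_d - qd)   # PID feedback
         + kf_vel * qr_d + kf_acc * qr_dd)        # feedforward
    qdd = u - 0.5 * qd              # joint dynamics with damping that the
    qd += qdd * dt                  # controller does not model
    q += qd * dt
err_end = abs(math.sin(T) - q)      # tracking error at the end of the run
```

Because each joint runs this loop with only its own state, the scheme maps naturally onto one dedicated microprocessor per joint, as the abstract notes.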
Integration of object-oriented knowledge representation with the CLIPS rule based system
NASA Technical Reports Server (NTRS)
Logie, David S.; Kamil, Hasan
1990-01-01
The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ and is used to build and modify an object-oriented knowledge base. The ORL was designed to be easily integrated with other representation schemes that can effectively reason with the object base. Specifically, the integration of the ORL with the rule-based C Language Integrated Production System (CLIPS), developed at the NASA Johnson Space Center, is discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are composed of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects, and data is inherited through an object network via the relationship links. Together, the two schemes complement each other: the object-oriented approach efficiently handles problem data, while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; the CLIPS inference engine, however, adds the mechanism to reason directly and automatically with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base, with complete access to all the functionality of the ORL from rules.
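The hybrid object-base/rule-engine pattern can be sketched in a few lines. The following Python toy is purely illustrative (the actual system couples a C++ ORL with the CLIPS engine; all names here are hypothetical): objects inherit properties through relationship links, and a small forward-chaining loop stands in for the inference engine, with rules that query and modify the object base.

```python
class Obj:
    """Minimal object-base node: properties plus an 'is-a' inheritance link."""
    def __init__(self, name, parent=None, **props):
        self.name, self.parent, self.props = name, parent, props

    def get(self, key):
        # Property lookup walks the relationship link, mimicking inheritance
        if key in self.props:
            return self.props[key]
        return self.parent.get(key) if self.parent else None

def run_rules(objects, rules):
    """Tiny forward-chaining loop standing in for the CLIPS inference engine:
    each rule is a (condition, action) pair over the object base."""
    fired = True
    while fired:
        fired = False
        for cond, act in rules:
            for obj in objects:
                if cond(obj):
                    act(obj)
                    fired = True

# Hypothetical engineering object: inherits 'material' from its class object
beam = Obj("beam", parent=Obj("structural_member", material="steel"),
           load=1200.0, capacity=1000.0)
rules = [(lambda o: o.get("load") is not None
              and o.get("load") > o.get("capacity")
              and not o.props.get("flagged"),
          lambda o: o.props.update(flagged=True))]
run_rules([beam], rules)
print(beam.props["flagged"], beam.get("material"))   # prints: True steel
```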
Hollow laser plasma self-confined microjet generation
NASA Astrophysics Data System (ADS)
Sizyuk, Valeryi; Hassanein, Ahmed; Center for Materials Under Extreme Environment Team
2017-10-01
Hollow-beam laser-produced plasma (LPP) devices are used to generate self-confined cumulative microjets. A key requirement in constructing such an LPP device is achieving an annular distribution of the laser beam intensity across the spot. An integrated model is being developed for detailed simulation of plasma generation and evolution inside the laser beam channel. The model describes, in a two-temperature approximation, the hydrodynamic processes in the plasma, laser absorption, heat conduction, and radiation energy transport. Plasma hydrodynamics is described with a total variation diminishing scheme in the Lax-Friedrichs formulation. Laser absorption and radiation transport models based on the Monte Carlo method are being developed, and the heat conduction part is implemented with an implicit scheme using sparse matrices. The developed models are being integrated into the HEIGHTS-LPP computer simulation package. Integrated modeling of hollow-beam laser plasma generation showed self-confinement and acceleration of the plasma microjet inside the laser channel, and revealed a dependence of the microjet parameters, including radiation emission, on the ratio of hole radius to beam radius. This work is supported by the National Science Foundation, PIRE project.
Integrable high order UWB pulse photonic generator based on cross phase modulation in a SOA-MZI.
Moreno, Vanessa; Rius, Manuel; Mora, José; Muriel, Miguel A; Capmany, José
2013-09-23
We propose and experimentally demonstrate a potentially integrable optical scheme to generate high-order UWB pulses. The technique is based on exploiting the cross phase modulation generated in an InGaAsP Mach-Zehnder interferometer containing integrated semiconductor optical amplifiers, and it is also adaptable to different pulse modulation formats through an optical processing unit that allows control of the amplitude, polarity and time delay of the generated taps.
Finite Volume Methods: Foundation and Analysis
NASA Technical Reports Server (NTRS)
Barth, Timothy; Ohlberger, Mario
2003-01-01
Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the attainment of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered, such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
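As a minimal concrete instance of the ingredients discussed (a monotone flux, local conservation, a discrete maximum principle), the sketch below applies a first-order finite volume scheme with a Lax-Friedrichs flux (using a global wave-speed bound) to the inviscid Burgers equation on a periodic 1D mesh; the setup is illustrative, not taken from the article:

```python
import numpy as np

def lax_friedrichs_flux(ul, ur, alpha):
    # Monotone numerical flux for Burgers' f(u) = u^2/2
    f = lambda u: 0.5 * u * u
    return 0.5 * (f(ul) + f(ur)) - 0.5 * alpha * (ur - ul)

def fv_step(u, dx, dt):
    alpha = np.max(np.abs(u))                  # global wave-speed bound
    up = np.roll(u, -1)                        # periodic right neighbours
    flux = lax_friedrichs_flux(u, up, alpha)   # F_{i+1/2}
    # Flux-difference form: guarantees local (and global) conservation
    return u - dt / dx * (flux - np.roll(flux, 1))

n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)        # Riemann data forming a shock
for _ in range(100):
    u = fv_step(u, dx, dt=0.4 * dx)    # CFL 0.4 with max|u| = 1
total = u.sum() * dx
print(round(total, 4))                 # prints 0.5: mass is conserved
```

Because the update is written in flux-difference form, the sum of cell averages is conserved to roundoff, and under the CFL condition the monotone flux introduces no new extrema: the solution stays within the initial range [0, 1].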
ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minesaki, Yukitaka
2013-03-15
We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.
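To illustrate what "merely second-order accurate yet qualitatively faithful" means in practice, the sketch below applies a standard second-order kick-drift-kick leapfrog (not the d'Alembert-type scheme itself, which requires the Levi-Civita machinery) to the figure-eight three-body orbit and checks that the relative energy error stays bounded and small:

```python
import numpy as np

def accel(r, m):
    # Pairwise Newtonian gravitational accelerations (G = 1)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += m[j] * d / np.linalg.norm(d) ** 3
    return a

def leapfrog(r, v, m, dt, steps):
    a = accel(r, m)
    for _ in range(steps):
        v += 0.5 * dt * a          # kick
        r += dt * v                # drift
        a = accel(r, m)
        v += 0.5 * dt * a          # kick
    return r, v

def energy(r, v, m):
    ke = 0.5 * np.sum(m * np.sum(v * v, axis=1))
    pe = sum(-m[i] * m[j] / np.linalg.norm(r[i] - r[j])
             for i in range(3) for j in range(i + 1, 3))
    return ke + pe

m = np.ones(3)
# Figure-eight initial conditions (Chenciner & Montgomery)
r = np.array([[-0.97000436, 0.24308753], [0.97000436, -0.24308753], [0.0, 0.0]])
v = np.array([[0.46620369, 0.43236573], [0.46620369, 0.43236573],
              [-0.93240737, -0.86473146]])
e0 = energy(r, v, m)
r, v = leapfrog(r, v, m, dt=1e-3, steps=6000)   # roughly one period
drift = abs(energy(r, v, m) - e0) / abs(e0)
print(drift)
```

Like the d'Alembert-type scheme, this symplectic integrator keeps the conserved quantity bounded rather than drifting secularly; the proposed scheme goes further by conserving quantities exactly.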
Murphy, Simon; Raisanen, Larry; Moore, Graham; Edwards, Rhiannon Tudor; Linck, Pat; Williams, Nefyn; Ud Din, Nafees; Hale, Janine; Roberts, Chris; McNaish, Elaine; Moore, Laurence
2010-06-18
The benefits to health of a physically active lifestyle are well established and there is evidence that a sedentary lifestyle plays a significant role in the onset and progression of chronic disease. Despite a recognised need for effective public health interventions encouraging sedentary people with a medical condition to become more active, there are few rigorous evaluations of their effectiveness. Following NICE guidance, the Welsh national exercise referral scheme was implemented within the context of a pragmatic randomised controlled trial. The randomised controlled trial, with nested economic and process evaluations, recruited 2,104 inactive men and women aged 16+ with coronary heart disease (CHD) risk factors and/or mild to moderate depression, anxiety or stress. Participants were recruited from 12 local health boards in Wales and referred directly by health professionals working in a range of health care settings. Consenting participants were randomised to either a 16 week tailored exercise programme run by qualified exercise professionals at community sports centres (intervention), or received an information booklet on physical activity (control). A range of validated measures assessing physical activity, mental health, psycho-social processes and health economics were administered at 6 and 12 months, with the primary 12 month outcome measure being 7 day Physical Activity Recall. The process evaluation explored factors determining the effectiveness or otherwise of the scheme, whilst the economic evaluation determined the relative cost-effectiveness of the scheme in terms of public spending. Evaluation of such a large scale national public health intervention presents methodological challenges in terms of trial design and implementation. 
This study was facilitated by early collaboration with social research and policy colleagues to develop a rigorous design which included an innovative approach to patient referral and trial recruitment, a comprehensive process evaluation examining intervention delivery and an integrated economic evaluation. This will allow a unique insight into the feasibility, effectiveness and cost effectiveness of a national exercise referral scheme for participants with CHD risk factors or mild to moderate anxiety, depression, or stress and provides a potential model for future policy evaluations. Current Controlled Trials ISRCTN47680448.
Integrating funds for health and social care: an evidence review.
Mason, Anne; Goddard, Maria; Weatherly, Helen; Chalkley, Martin
2015-07-01
Integrated funds for health and social care are one possible way of improving care for people with complex care requirements. If integrated funds facilitate coordinated care, this could support improvements in patient experience, and health and social care outcomes, reduce avoidable hospital admissions and delayed discharges, and so reduce costs. In this article, we examine whether this potential has been realized in practice. We propose a framework based on agency theory for understanding the role that integrated funding can play in promoting coordinated care, and review the evidence to see whether the expected effects are realized in practice. We searched eight electronic databases and relevant websites, and checked reference lists of reviews and empirical studies. We extracted data on the types of funding integration used by schemes, their benefits and costs (including unintended effects), and the barriers to implementation. We interpreted our findings with reference to our framework. The review included 38 schemes from eight countries. Most of the randomized evidence came from Australia, with nonrandomized comparative evidence available from Australia, Canada, England, Sweden and the US. None of the comparative evidence isolated the effect of integrated funding; instead, studies assessed the effects of 'integrated financing plus integrated care' (i.e. 'integration') relative to usual care. Most schemes (24/38) assessed health outcomes, of which over half found no significant impact on health. The impact of integration on secondary care costs or use was assessed in 34 schemes. In 11 schemes, integration had no significant effect on secondary care costs or utilisation. Only three schemes reported significantly lower secondary care use compared with usual care. In the remaining 19 schemes, the evidence was mixed or unclear. 
Some schemes achieved short-term reductions in delayed discharges, but there was anecdotal evidence of unintended consequences such as premature hospital discharge and heightened risk of readmission. No scheme achieved a sustained reduction in hospital use. The primary barrier was the difficulty of implementing financial integration, despite the existence of statutory and regulatory support. Even where funds were successfully pooled, budget holders' control over access to services remained limited. Barriers in the form of differences in performance frameworks, priorities and governance were prominent amongst the UK schemes, whereas difficulties in linking different information systems were more widespread. Despite these barriers, many schemes - including those that failed to improve health or reduce costs - reported that access to care had improved. Some of these schemes revealed substantial levels of unmet need and so total costs increased. It is often assumed in policy that integrating funding will promote integrated care, and lead to better health outcomes and lower costs. Both our agency theory-based framework and the evidence indicate that the link is likely to be weak. Integrated care may uncover unmet need. Resolving this can benefit both individuals and society, but total care costs are likely to rise. Provided that integration delivers improvements in quality of life, even with additional costs, it may, nonetheless, offer value for money. © The Author(s) 2015.
Development of the Semi-implicit Time Integration in KIM-SH
NASA Astrophysics Data System (ADS)
NAM, H.
2015-12-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. KIM-SH is a KIAPS integrated model with a spectral-element dynamical core based on HOMME, and it currently employs explicit time-integration schemes. Explicit schemes, however, tend to be unstable and require very small time steps, whereas semi-implicit schemes are very stable and permit much larger time steps. We therefore introduce three- and two-time-level semi-implicit schemes into KIM-SH as the time integration. We define the linear terms about reference values and, following the semi-implicit formulation, solve the resulting linear system with GMRES. Numerical results from experiments will be presented together with the current development status of the time integration in KIM-SH. Several numerical examples confirm the efficiency and reliability of the proposed schemes.
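The stability contrast motivating the semi-implicit development can be demonstrated on a toy problem. The sketch below is illustrative only (KIM-SH's actual linear system is solved with GMRES; here a dense direct solve stands in for the Krylov solver): 1D diffusion is time-stepped with forward Euler versus backward Euler at ten times the explicit stability limit.

```python
import numpy as np

n = 50
dx = 1.0 / (n + 1)
nu = 1.0
x = np.linspace(dx, n * dx, n)

# Tridiagonal diffusion operator with homogeneous Dirichlet boundaries
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * nu / dx**2

def step_explicit(u, dt):
    return u + dt * (A @ u)                      # forward Euler

def step_implicit(u, dt):
    # Backward Euler: (I - dt*A) u_new = u.  A large NWP system would use
    # an iterative solver such as GMRES here; a direct solve suffices at n=50.
    return np.linalg.solve(np.eye(n) - dt * A, u)

# Smooth initial state plus a tiny high-frequency perturbation (mode n)
u0 = np.sin(np.pi * x) + 1e-10 * np.sin(n * np.pi * x)
dt_big = 10 * dx**2 / (2 * nu)          # 10x the explicit stability limit

ue, ui = u0.copy(), u0.copy()
for _ in range(200):
    ue = step_explicit(ue, dt_big)
    ui = step_implicit(ui, dt_big)

# Explicit blows up; implicit decays monotonically at the same step size
print(np.max(np.abs(ue)) > 1e3, np.max(np.abs(ui)) <= 1.0)
```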
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-01-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method, and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher order accurate solutions using several mesh sizes; higher order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
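Conroy's number-theoretic point sets are not reproduced here, but the random-versus-systematic contrast the abstract draws can be illustrated with a closely related systematic rule, a Fibonacci lattice, against plain Monte Carlo on a smooth periodic 2D integrand with exact integral 1 (the integrand and point counts are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, y):
    # Smooth periodic test integrand with exact integral 1 over [0,1]^2
    return (1 + 0.5 * np.sin(2 * np.pi * x)) * (1 + 0.5 * np.sin(2 * np.pi * y))

N = 2584          # Fibonacci number F_18
F_prev = 1597     # F_17

# Monte Carlo: sample points distributed randomly over the unit square
xr, yr = rng.random(N), rng.random(N)
mc = f(xr, yr).mean()

# Systematic (Fibonacci lattice) rule: points distributed in a regular,
# closed pattern filling the square, analogous in spirit to Conroy's sets
j = np.arange(N)
xl, yl = j / N, (j * F_prev % N) / N
lattice = f(xl, yl).mean()

print(abs(mc - 1.0), abs(lattice - 1.0))
```

For the same number of points, the systematic rule is dramatically more accurate on this smooth periodic integrand, while the Monte Carlo error decays only like N^(-1/2).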
Zhuang, Yan; Xie, Bangtie; Weng, Shengxin; Xie, Yanming
2011-10-01
To discuss the feasibility and necessity of using HIS data integration to build a large data warehouse system for the re-evaluation of post-marketing traditional Chinese medicine, and to provide the design approach and overall methodology for it. Drawing on domestic and international analyses and comparisons of real-world clinical study designs based on electronic information systems, and on the characteristics of HIS in China, a general framework was designed and discussed, covering the design rationale, design characteristics, existing problems, and solutions. A design scheme of an HIS data warehouse for the re-evaluation of post-marketing traditional Chinese medicine is presented. The design scheme proved to exhibit high cohesion and low coupling, and to be safe, universal, efficient and easy to maintain, effectively solving the problems many hospitals have faced during the process of HIS data integration.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
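A minimal sketch of the implicit midpoint (IM) scheme in a field-line-like setting: for a divergence-free planar field derived from a flux function, IM (solved here by fixed-point iteration) keeps trajectories close to the level sets of the flux function over long times. The field below is an illustrative stand-in, not one of the paper's test cases:

```python
import numpy as np

def f(z):
    # Divergence-free field from the flux function psi(x, y) = y^2/2 - cos(x):
    # f = (dpsi/dy, -dpsi/dx), so exact trajectories follow psi = const.
    x, y = z
    return np.array([y, -np.sin(x)])

def implicit_midpoint_step(z, h, tol=1e-13, max_iter=50):
    # Solve z_new = z + h * f((z + z_new)/2) by fixed-point iteration,
    # starting from an explicit Euler predictor
    z_new = z + h * f(z)
    for _ in range(max_iter):
        z_next = z + h * f(0.5 * (z + z_new))
        if np.max(np.abs(z_next - z_new)) < tol:
            return z_next
        z_new = z_next
    return z_new

def psi(z):
    x, y = z
    return 0.5 * y * y - np.cos(x)

z = np.array([1.0, 0.0])
p0 = psi(z)
for _ in range(2000):
    z = implicit_midpoint_step(z, h=0.05)
print(abs(psi(z) - p0))   # bounded, small deviation from the level set
```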
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay de Rivas, E.
1975-01-01
A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integration (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. Its simplicity of use makes it attractive for solving large systems of nonlinear equations.
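For contrast, the "false transient" baseline the abstract refers to can be sketched in a few lines: an artificial time derivative is added to an elliptic (Poisson) problem, which is then marched forward explicitly until steady state. This illustrates the baseline approach, not the forward-backward scheme itself; the problem is an illustrative choice:

```python
import numpy as np

# False-transient relaxation: march u_t = u_xx - f in artificial time until
# the steady state satisfies the elliptic problem u_xx = f, u(0) = u(1) = 0.
n = 64
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)       # exact steady solution: -sin(pi*x)

u = np.zeros(n + 1)
dt = 0.4 * dx**2                       # under the explicit limit dx^2/2
for _ in range(20000):
    u[1:-1] += dt * ((u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 - f[1:-1])

err = np.max(np.abs(u + np.sin(np.pi * x)))
print(err)   # small: discretization error of order dx^2
```

The forward-only march converges here only because the artificial-time dynamics are stable; the abstract's point is that its forward-backward variant converges even when they are not.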
Improved determination of vector lithospheric magnetic anomalies from MAGSAT data
NASA Technical Reports Server (NTRS)
Ravat, Dhananjay
1993-01-01
Scientific contributions made in developing new methods to isolate and map vector magnetic anomalies from measurements made by Magsat are described. In addition to the objective of the proposal (the isolation and mapping of equatorial vector lithospheric Magsat anomalies), the isolation of polar ionospheric fields during the period was also studied. Significant progress was also made in the isolation of polar delta(Z) component and scalar anomalies, as well as in the integration and synthesis of various techniques for removing equatorial and polar ionospheric effects. The significant contributions of this research are: (1) development of empirical/analytical techniques for modeling ionospheric fields in Magsat data and removing them from uncorrected anomalies to obtain better estimates of lithospheric anomalies (this task was accomplished for equatorial delta(X), delta(Z), and delta(B) component and polar delta(Z) and delta(B) component measurements); (2) integration of important processing techniques developed during the last decade with the newly developed technologies of ionospheric field modeling into an optimum processing scheme; and (3) implementation of the above processing scheme to map the most robust magnetic anomalies of the lithosphere (components as well as scalar).
An advanced teaching scheme for integrating problem-based learning in control education
NASA Astrophysics Data System (ADS)
Juuso, Esko K.
2018-03-01
Engineering education needs to provide both theoretical knowledge and problem-solving skills. Many topics can be presented in lectures and computer exercises are good tools in teaching the skills. Learning by doing is combined with lectures to provide additional material and perspectives. The teaching scheme includes lectures, computer exercises, case studies, seminars and reports organized as a problem-based learning process. In the gradually refining learning material, each teaching method has its own role. The scheme, which has been used in teaching two 4th year courses, is beneficial for overall learning progress, especially in bilingual courses. The students become familiar with new perspectives and are ready to use the course material in application projects.
A CPT for Improving Turbulence and Cloud Processes in the NCEP Global Models
NASA Astrophysics Data System (ADS)
Krueger, S. K.; Moorthi, S.; Randall, D. A.; Pincus, R.; Bogenschutz, P.; Belochitski, A.; Chikira, M.; Dazlich, D. A.; Swales, D. J.; Thakur, P. K.; Yang, F.; Cheng, A.
2016-12-01
Our Climate Process Team (CPT) is based on the premise that the NCEP (National Centers for Environmental Prediction) global models can be improved by installing an integrated, self-consistent description of turbulence, clouds, deep convection, and the interactions between clouds and radiative and microphysical processes. The goal of our CPT is to unify the representation of turbulence and subgrid-scale (SGS) cloud processes and to unify the representation of SGS deep convective precipitation and grid-scale precipitation as the horizontal resolution decreases. We aim to improve the representation of small-scale phenomena by implementing a PDF-based SGS turbulence and cloudiness scheme that replaces the boundary layer turbulence scheme, the shallow convection scheme, and the cloud fraction schemes in the GFS (Global Forecast System) and CFS (Climate Forecast System) global models. We intend to improve the treatment of deep convection by introducing a unified parameterization that scales continuously between the simulation of individual clouds when and where the grid spacing is sufficiently fine and the behavior of a conventional parameterization of deep convection when and where the grid spacing is coarse. We will endeavor to improve the representation of the interactions of clouds, radiation, and microphysics in the GFS/CFS by using the additional information provided by the PDF-based SGS cloud scheme. The team is evaluating the impacts of the model upgrades with metrics used by the NCEP short-range and seasonal forecast operations.
Alternative Packaging for Back-Illuminated Imagers
NASA Technical Reports Server (NTRS)
Pain, Bedabrata
2009-01-01
An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal-oxide-semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The front-side integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitute a solution of the problem.
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves it with a fixed step size. The second scheme uses a dedicated DDEs solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme in regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using first, a simple model comprising 2 regions and second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. 
Fitting to empirical data showed that the models systematically obtained an increased accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
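The delay-handling issue at stake can be illustrated with a toy delayed system. The sketch below is illustrative (not the DCM code): it integrates x'(t) = -x(t - tau) with a simple fixed-step scheme that reads the delayed state from a history buffer, rounding the conduction delay to a whole number of steps; this is the kind of fixed-step delay approximation whose consequences the note investigates against an adaptive DDE solver.

```python
import numpy as np

def integrate_dde(tau, dt, T, x0=1.0):
    """Fixed-step Euler for x'(t) = -x(t - tau).
    The delayed state is read from a history buffer, with the delay
    rounded to a whole number of steps."""
    lag = max(1, round(tau / dt))                    # delay in steps
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x_delayed = x[k - lag] if k >= lag else x0   # constant pre-history
        x[k + 1] = x[k] + dt * (-x_delayed)
    return x

# tau = 0.5 < pi/2, so the true solution decays (with oscillation)
x = integrate_dde(tau=0.5, dt=0.01, T=30.0)
print(abs(x[-1]))
```

With a coarser dt the rounded delay shifts, biasing the estimated dynamics; this mirrors the note's finding that fixed-step integration can misestimate conduction delays even when model selection remains acceptable.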
Spectroscopic techniques to study the immune response in human saliva
NASA Astrophysics Data System (ADS)
Nepomnyashchaya, E.; Savchenko, E.; Velichko, E.; Bogomaz, T.; Aksenov, E.
2018-01-01
Studies of immune response dynamics by means of spectroscopic techniques, namely laser correlation spectroscopy and fluorescence spectroscopy, are described. Laser correlation spectroscopy is aimed at measuring the sizes of particles in biological fluids. Fluorescence spectroscopy allows study of conformational and other structural changes in immune complexes. We have developed a new scheme for a laser correlation spectrometer and an original signal processing algorithm. We have also suggested a new fluorescence detection scheme based on a prism and an integrating pin diode. The developed system based on these spectroscopic techniques allows studies of complex processes in human saliva and opens prospects for individualized treatment of immune diseases.
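The particle-sizing step of laser correlation spectroscopy rests on the decay rate of the scattered-light autocorrelation; a minimal sketch, assuming a single-exponential decay and Stokes-Einstein diffusion (all numerical values are illustrative):

```python
import numpy as np

# Dynamic light scattering sizing sketch: the field autocorrelation decays
# as g1(t) = exp(-Gamma * t) with Gamma = D * q**2; Stokes-Einstein then
# gives the hydrodynamic diameter d = kB*T / (3*pi*eta*D).
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                 # temperature, K
eta = 0.89e-3              # viscosity of water, Pa*s
wavelength = 633e-9        # He-Ne laser, m
n_med = 1.33               # refractive index of the medium
theta = np.deg2rad(90.0)   # scattering angle

# Scattering vector magnitude
q = 4 * np.pi * n_med * np.sin(theta / 2) / wavelength

# Simulate a correlation decay for a 100 nm particle, then recover the size
d_true = 100e-9
D = kB * T / (3 * np.pi * eta * d_true)
t = np.linspace(1e-6, 5e-3, 500)
g1 = np.exp(-D * q**2 * t)

# Fit the decay rate from the log-linear slope and invert Stokes-Einstein
gamma_fit = -np.polyfit(t, np.log(g1), 1)[0]
d_est = kB * T / (3 * np.pi * eta * (gamma_fit / q**2))
print(round(d_est * 1e9))  # → 100
```

Real correlator data would require fitting a noisy g2(t) via the Siegert relation rather than a clean log-linear slope.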
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the functionals of the variational principle for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. Such an approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for solving ODEs; this is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved.
For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of chemical substance concentrations and possess the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of the transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration projects No. 8 and No. 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, pp. 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
Das, Ashok Kumar
2015-03-01
An integrated EPR (Electronic Patient Record) information system provides medical institutions and academia with most of a patient's information in detail, enabling corrective and clinical decisions to maintain and analyze patients' health. In such a system, illegal access must be restricted, and theft of information during transmission over the insecure Internet must be prevented. Lee et al. proposed an efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Their scheme is very efficient because it uses only one-way hash functions and bitwise exclusive-or (XOR) operations. However, in this paper, we show that despite its efficiency, their scheme has three security weaknesses: (1) it has design flaws in the password change phase, (2) it fails to protect against privileged insider attacks, and (3) it lacks formal security verification. We also find that another recently proposed scheme, by Wen, has the same security drawbacks as Lee et al.'s scheme. In order to remedy these security weaknesses, we propose a secure and efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Our scheme is as efficient as Lee et al.'s and Wen's schemes, since it also uses only one-way hash functions and bitwise XOR operations. Through security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks.
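The hash-and-XOR construction that makes such smart-card schemes cheap can be sketched as follows. This is a generic toy flow with assumed messages and stored values, not Lee et al.'s, Wen's, or the proposed scheme:

```python
import hashlib
import hmac
import os

def h(*parts: bytes) -> bytes:
    """One-way hash over concatenated byte strings (SHA-256 here)."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- Registration (hypothetical message flow) ---
server_secret = os.urandom(32)
identity, password = b"patient-42", b"correct horse"
# The smart card stores the server-derived secret masked by the password
# hash, so the card alone reveals nothing useful to a thief.
card_value = xor(h(identity, server_secret), h(password))

# --- Login: card + password recombine; a nonce proves freshness ---
nonce = os.urandom(16)
secret_key = xor(card_value, h(password))   # recovers h(identity, server_secret)
login_proof = h(secret_key, nonce)

# --- Server recomputes the same proof from its own secret ---
expected = h(h(identity, server_secret), nonce)
print(hmac.compare_digest(login_proof, expected))  # → True
```

Note that only hashing and XOR appear on the critical path; that is what makes these schemes attractive for resource-constrained cards, and also what makes careful cryptanalysis of the message flow essential.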
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.
2007-01-01
During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared complexity of the control logic, risk of not reenabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculates a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and checks whether that torque would cause actuator saturation; if so, only the PD torque is used, and if not, the integral torque is added. Finally, the third scheme compares the attitude and rate errors to limits and disables the integral torque if either error exceeds its limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable it and whether or not to reset the integrator once the integral torque was reenabled. Three ways to disable the integral torque were investigated: zero the input into the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis considered complexity of the control logic, slew time plus settling time between each calibration maneuver step, and ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input into the integrator without resetting it. Throughout the analysis, a high fidelity simulation was used to test the various implementation methods.
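The selected combination (gate the integrator on error limits; zero its input so its state is held rather than reset) can be sketched as follows, with made-up gains and limits:

```python
class PIDWithIntegralGate:
    """PID controller that holds (does not reset) its integral state
    whenever attitude or rate error exceeds a limit -- a sketch of the
    third enable/disable scheme above, with hypothetical gains/limits."""

    def __init__(self, kp, ki, kd, att_limit, rate_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.att_limit, self.rate_limit = att_limit, rate_limit
        self.integral = 0.0

    def torque(self, att_err, rate_err, dt):
        if abs(att_err) <= self.att_limit and abs(rate_err) <= self.rate_limit:
            self.integral += att_err * dt   # integrator runs normally
        # else: the integrator *input* is zeroed, so the accumulated value
        # is held constant and is not lost when the gate reopens
        return self.kp * att_err + self.ki * self.integral + self.kd * rate_err

pid = PIDWithIntegralGate(kp=2.0, ki=0.5, kd=1.0, att_limit=0.1, rate_limit=0.05)
pid.torque(0.05, 0.0, dt=0.1)   # small error: integral accumulates
held = pid.integral
pid.torque(0.5, 0.0, dt=0.1)    # large error: integral is held, not reset
print(pid.integral == held)     # → True
```

Holding rather than resetting preserves the steady-state correction built up before a large transient, which is exactly the trade the analysis above favored.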
The UNIQUe Label: Supporting a Culture of Innovation and Quality in Higher Education
NASA Astrophysics Data System (ADS)
Boonen, Annemie; Bijnens, Helena
European higher education institutions will need significant reforms, in order to guarantee their leading role in a globalized knowledge economy. These reforms can be enhanced by improving the way in which traditional universities integrate new technologies both in their educational activities and throughout their strategic and operational processes. The UNIQUe institutional accreditation scheme, analyzed and described in this chapter, intends to support this process of integrating the use of new technologies in higher education. With its specific open approach to quality in e-Learning, UNIQUe emphasizes innovation and creativity in a process that includes self-assessment and constructive dialog with peers and stakeholders involved. UNIQUe intends to use the institutional quality label as a catalyst for continuous improvement and change while setting up collaborative bench learning processes among universities for the adoption and integration of e-Learning.
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, lamination, etc. Existing fixed gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance for changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching of the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with the relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, that they are easy to implement in real time, and that they automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
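Relay-feedback initialization of PI gains is commonly based on the describing-function estimate of the ultimate gain; a minimal sketch, assuming textbook Ziegler-Nichols PI rules rather than the paper's exact update laws, with a hypothetical experiment:

```python
import math

def pi_gains_from_relay(relay_amp, osc_amp, osc_period):
    """Initialize PI gains from a relay-feedback experiment.
    Ultimate gain Ku = 4*d / (pi*a) from the describing-function
    approximation, then Ziegler-Nichols PI rules (Kp = 0.45*Ku,
    Ti = Tu / 1.2). Returns (Kp, Ki) with Ki = Kp / Ti."""
    ku = 4.0 * relay_amp / (math.pi * osc_amp)  # ultimate gain
    kp = 0.45 * ku
    ti = osc_period / 1.2                       # integral time
    return kp, kp / ti

# Hypothetical experiment: a relay of amplitude 5 N in tension actuation
# produced a sustained oscillation of amplitude 2 N and period 1.5 s.
kp, ki = pi_gains_from_relay(relay_amp=5.0, osc_amp=2.0, osc_period=1.5)
print(round(kp, 3), round(ki, 3))  # → 1.432 1.146
```

In an indirect adaptive scheme such gains would only seed the adaptation; the online estimator then refines them as web properties change.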
Jung, Jaewook; Kang, Dongwoo; Lee, Donghoon; Won, Dongho
2017-01-01
Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. To establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks or server spoofing attacks, and cannot preserve user anonymity. Second, there is no password verification process, so an incorrect password is not detected at the beginning of the login phase. Third, the password change mechanism is inefficient, requiring extra communication with the server to change a user's password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency.
NASA Technical Reports Server (NTRS)
Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.
1983-01-01
A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integration of stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta's may lead to strong frame dependence. As a remedy, modified integration schemes are proposed, and the potential of the new schemes for suppressing frame dependence of numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
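The accuracy gap between Euler and higher-order time integration noted above can be seen on a toy spin problem: forward Euler visibly inflates the magnitude of a rotating state over one revolution, while classical RK4 barely does. This is a generic illustration of order of accuracy under rotation, not the paper's hybrid-stress formulation:

```python
import numpy as np

def rotate(x):
    """Right-hand side of a pure spin, dx/dt = W @ x with skew-symmetric W;
    a toy stand-in for a quantity rotating with the material frame."""
    return np.array([-x[1], x[0]])

def euler_step(x, h):
    return x + h * rotate(x)

def rk4_step(x, h):
    k1 = rotate(x)
    k2 = rotate(x + 0.5 * h * k1)
    k3 = rotate(x + 0.5 * h * k2)
    k4 = rotate(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate one full revolution; the exact solution returns to the start
# with unchanged magnitude. Euler inflates the norm, RK4 barely does.
h = 0.01
n = int(round(2 * np.pi / h))
xe = np.array([1.0, 0.0])
xr = np.array([1.0, 0.0])
for _ in range(n):
    xe = euler_step(xe, h)
    xr = rk4_step(xr, h)
print(abs(np.linalg.norm(xe) - 1) > 1e-2,
      abs(np.linalg.norm(xr) - 1) < 1e-6)  # → True True
```

For objective stress rates the analogous drift shows up as spurious, frame-dependent stress under rigid rotation, which is what the modified schemes in the paper are designed to suppress.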
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finn, John M., E-mail: finn@lanl.gov
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
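A minimal sketch of the implicit midpoint rule on a divergence-free planar field shows the kind of long-time invariant preservation at stake. The pendulum flow stands in for a field-line system here, with a fixed step and a fixed-point solve; all of these are simplifying assumptions relative to the paper's adaptive 3D setting:

```python
import numpy as np

def field(z):
    """A divergence-free planar field: the pendulum flow
    (q', p') = (p, -sin q), whose energy H = p^2/2 - cos q is conserved."""
    q, p = z
    return np.array([p, -np.sin(q)])

def implicit_midpoint_step(z, h, iters=8):
    """z_{n+1} = z_n + h * f((z_n + z_{n+1})/2), solved by fixed-point
    iteration (adequate for small h; a Newton solve is the robust choice)."""
    z_new = z + h * field(z)               # explicit Euler predictor
    for _ in range(iters):
        z_new = z + h * field(0.5 * (z + z_new))
    return z_new

H = lambda z: 0.5 * z[1] ** 2 - np.cos(z[0])
z = np.array([1.0, 0.0])
h0 = H(z)
for _ in range(20000):                     # many periods at h = 0.05
    z = implicit_midpoint_step(z, 0.05)
print(abs(H(z) - h0) < 1e-2)               # energy error stays bounded → True
```

A non-geometric integrator of the same order would typically show secular energy drift over this many steps; the bounded error here is the discrete analogue of preserving invariant tori.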
NASA Astrophysics Data System (ADS)
Bruder, Friedrich-Karl; Fäcke, Thomas; Grote, Fabian; Hagen, Rainer; Hönel, Dennis; Koch, Eberhard; Rewitz, Christian; Walze, Günther; Wewer, Brita
2017-05-01
Volume Holographic Optical Elements (vHOEs) have gained wide attention as optical combiners for use in smart glasses and augmented reality (SG and AR, respectively) consumer electronics and in automotive head-up display applications. The unique characteristics of these diffractive grating structures - being lightweight, thin and flat - make them perfectly suitable for use in integrated optical components like spectacle lenses and car windshields. While being transparent in the Off-Bragg condition, they provide full color capability and adjustable diffraction efficiency. The instant-developing photopolymer Bayfol® HX film provides an ideal technology platform to optimize the performance of vHOEs in a wide range of applications. Important for any commercialization are simple and robust mass production schemes. In this paper, we present an efficient and easy-to-control one-beam recording scheme to copy a so-called master vHOE in a step-and-repeat process. In this contact-copy scheme, Bayfol® HX film is laminated to a master stack before being exposed by a scanning laser line. Subsequently, the film is delaminated in a controlled fashion and bleached. We explain the working principles of the one-beam copy concept, discuss the opto-mechanical construction and outline the downstream process of the installed vHOE replication line. Moreover, we focus on aspects like performance optimization of the copy vHOE, the bleaching process and the suitable choice of protective cover film in the re-lamination step, preparing the integration of the vHOE into the final device.
Integral processing in beyond-Hartree-Fock calculations
NASA Technical Reports Server (NTRS)
Taylor, P. R.
1986-01-01
The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration-self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which -- at least for most current generation machines -- disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
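The Coulomb and exchange operators mentioned above are, at heart, density-weighted contractions of the two-electron integrals; a sketch with random symmetric stand-in data (not a real integral evaluation, and with the basis size and density matrix chosen arbitrarily):

```python
import numpy as np

# Coulomb (J) and exchange (K) matrices as density-weighted contractions of
# two-electron integrals (pq|rs) -- the quantities a direct scheme rebuilds
# on the fly instead of reading a disk-based integral file.
rng = np.random.default_rng(0)
n = 6
eri = rng.standard_normal((n, n, n, n))
# Impose the permutational symmetries of real AO integrals:
eri = eri + eri.transpose(1, 0, 2, 3)   # (pq|rs) = (qp|rs)
eri = eri + eri.transpose(0, 1, 3, 2)   # (pq|rs) = (pq|sr)
eri = eri + eri.transpose(2, 3, 0, 1)   # (pq|rs) = (rs|pq)
D = rng.standard_normal((n, n))
D = D + D.T                             # symmetric density matrix

J = np.einsum("pqrs,rs->pq", eri, D)    # J_pq = sum_rs (pq|rs) D_rs
K = np.einsum("prqs,rs->pq", eri, D)    # K_pq = sum_rs (pr|qs) D_rs
print(np.allclose(J, J.T), np.allclose(K, K.T))  # → True True
```

In a direct scheme, the `eri` tensor is never formed in full: integral batches are generated shell by shell and contracted into J and K immediately, which is what makes the approach viable when I/O, not arithmetic, is the bottleneck.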
Net Assessment: Creating an Institutional Capacity and General Process to Perform It
2017-06-01
Lopez Arellano, Humberto Enrique
Naval Postgraduate School, Monterey, CA 93943-5000
... assessment products. Finally, he proposes three different schemes for integrating net assessment capacity into government organizations, public ...
Wu, Yunna; Xu, Chuanbo; Ke, Yiming; Chen, Kaifeng; Xu, Hu
2017-12-15
For tidal range power plants to be sustainable, the environmental impacts caused by the implementation of various tidal barrage schemes must be assessed before construction. However, several problems exist in current research: firstly, the evaluation criteria for tidal barrage scheme environmental impact assessment (EIA) are not adequate; secondly, uncertainty in criteria information fails to be processed properly; thirdly, correlation among criteria is unreasonably measured. Hence the contributions of this paper are as follows: firstly, an evaluation criteria system is established along the three dimensions of hydrodynamic, biological and morphological aspects. Secondly, the cloud model is applied to describe the uncertainty of criteria information. Thirdly, the Choquet integral with respect to a λ-fuzzy measure is introduced to measure the correlation among criteria. On this basis, a multi-criteria decision-making framework for tidal barrage scheme EIA is established to select the optimal scheme. Finally, a case study demonstrates the effectiveness of the proposed framework. Copyright © 2017 Elsevier Ltd. All rights reserved.
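The Choquet aggregation step can be sketched as follows, with hypothetical criterion densities and scheme scores; the λ parameter is fixed by requiring the full criterion set to measure 1:

```python
from functools import reduce

def solve_lambda(g, iters=200):
    """Bisection for the nonzero root of prod(1 + lam*g_i) = 1 + lam,
    which normalizes a lambda-fuzzy measure so the full set measures 1.
    For sum(g) < 1 the root is positive; for sum(g) > 1 it is in (-1, 0)."""
    f = lambda lam: reduce(lambda a, gi: a * (1.0 + lam * gi), g, 1.0) - (1.0 + lam)
    a, b = (1e-9, 1e6) if sum(g) < 1 else (-0.999999, -1e-9)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def choquet(scores, g, lam):
    """Choquet integral w.r.t. the lambda-fuzzy measure: sort criteria by
    score (descending) and weight each score drop by the measure of the
    growing coalition, g(A + {i}) = g(A) + g_i + lam * g(A) * g_i."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    s = [scores[i] for i in order] + [0.0]
    total, g_acc = 0.0, 0.0
    for k, i in enumerate(order):
        g_acc = g_acc + g[i] + lam * g_acc * g[i]
        total += (s[k] - s[k + 1]) * g_acc
    return total

# Hypothetical densities for the hydrodynamic, biological and morphological
# criteria (they sum to 0.9, not 1, so the criteria interact) and one
# scheme's (illustrative) scores on those criteria.
g = [0.3, 0.4, 0.2]
lam = solve_lambda(g)
score = choquet([0.7, 0.5, 0.9], g, lam)
print(0.5 <= score <= 0.9)  # aggregated value lies between min and max → True
```

Because the densities sum to less than 1, λ comes out positive, modeling complementarity among the criteria rather than a simple weighted sum.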
Perception as a closed-loop convergence process.
Ahissar, Ehud; Assa, Eldad
2016-05-09
Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception.
BossPro: a biometrics-based obfuscation scheme for software protection
NASA Astrophysics Data System (ADS)
Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham
2013-05-01
This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect authentication application software from illegitimate use or reverse engineering. This is especially necessary for mCommerce, because application programmes on mobile devices such as Smartphones and Tablet-PCs are typically open to misuse by hackers. Therefore, the scheme proposed in this paper ensures that correct interpretation / execution of the obfuscated program code of the authentication application requires a valid biometrically generated key of the actual person to be authenticated, in real time. Without this key, the real semantics of the program cannot be understood by an attacker even if he/she gains access to the application code. Furthermore, the security provided by this scheme can be a vital aspect in protecting any application running on mobile devices, which are increasingly used to perform business/financial or other security-related tasks but are easily lost or stolen. The scheme starts by creating a personalised copy of an application based on the biometric key generated during an enrolment process with the authenticator, as well as a nonce created at the time of communication between the client and the authenticator. The obfuscated code is then shipped to the client's mobile device and combined with real-time extracted biometric data of the client to form the unlocking key during execution. The novelty of this scheme is achieved by the close binding of the application program to the biometric key of the client, thus making the application unusable for others. Trials and experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based on the Android emulator, prove the concept and novelty of this proposed scheme.
Sixth- and eighth-order Hermite integrator for N-body simulations
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro
2008-10-01
We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of calculating the higher-order derivatives is not very high: even for the eighth-order scheme, the number of floating-point operations for the force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme in most cases; when the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
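For reference, the traditional fourth-order Hermite scheme that these integrators extend pairs a Taylor predictor (using acceleration and jerk) with a two-point Hermite corrector. A minimal single-particle sketch on a fixed unit central mass (not an N-body code, and with an arbitrary test orbit):

```python
import numpy as np

def acc_jerk(x, v):
    """Acceleration and jerk for a unit point mass fixed at the origin:
    a = -x/r^3,  j = da/dt = -v/r^3 + 3 (x.v) x / r^5."""
    r2 = x @ x
    r3 = r2 * np.sqrt(r2)
    a = -x / r3
    j = -v / r3 + 3.0 * (x @ v) / (r2 * r3) * x
    return a, j

def hermite4_step(x, v, h):
    """One fourth-order Hermite predictor-corrector step (the base scheme
    that the sixth- and eighth-order variants extend with snap/crackle)."""
    a0, j0 = acc_jerk(x, v)
    xp = x + h * v + h**2 / 2 * a0 + h**3 / 6 * j0       # predictor
    vp = v + h * a0 + h**2 / 2 * j0
    a1, j1 = acc_jerk(xp, vp)
    vn = v + h / 2 * (a0 + a1) + h**2 / 12 * (j0 - j1)   # corrector
    xn = x + h / 2 * (v + vn) + h**2 / 12 * (a0 - a1)
    return xn, vn

# Circular test orbit: energy should be conserved to ~h^4 over a period.
x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
e0 = 0.5 * (v @ v) - 1.0 / np.linalg.norm(x)
h = 2 * np.pi / 1000
for _ in range(1000):
    x, v = hermite4_step(x, v, h)
e1 = 0.5 * (v @ v) - 1.0 / np.linalg.norm(x)
print(abs(e1 - e0) < 1e-7)  # → True
```

Note that only `a0, j0` from the current step are needed: no history of previous steps is required, which is the implementation simplicity the abstract emphasizes.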
15 CFR 744.21 - Restrictions on certain military end-uses in the People's Republic of China (PRC).
Code of Federal Regulations, 2013 CFR
2013-01-01
... production, such as: design, design research, design analyses, design concepts, assembly and testing of prototypes, pilot production schemes, design data, process of transforming design data into a product, configuration design, integration design, layouts; and “production” means all production stages, such as...
15 CFR 744.21 - Restrictions on certain military end-uses in the People's Republic of China (PRC).
Code of Federal Regulations, 2012 CFR
2012-01-01
... production, such as: design, design research, design analyses, design concepts, assembly and testing of prototypes, pilot production schemes, design data, process of transforming design data into a product, configuration design, integration design, layouts; and “production” means all production stages, such as...
One innovative option for reducing greenhouse gas (GHG) emissions involves pairing carbon capture and storage (CCS) with the production of synthetic fuels and electricity from co-processed coal and biomass. In this scheme, the feedstocks are first converted to syngas, from which ...
Mihailovic, Dragutin T; Alapaty, Kiran; Podrascanin, Zorica
2009-03-01
This work improves the parameterization of processes in the atmospheric boundary layer (ABL) and surface layer in air quality and chemical transport models. To do so, an asymmetrical, convective, non-local scheme with varying upward mixing rates is combined with a non-local turbulent kinetic energy scheme for vertical diffusion (COM). For its design, an empirically derived function depending on the fourth power of the dimensionless height in the ABL is suggested. We also suggest a new method for calculating the in-canopy resistance for dry deposition over a vegetated surface. The upward mixing rate forming the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. The vertical eddy diffusivity is parameterized using the mean turbulent velocity scale obtained by vertical integration within the ABL. The in-canopy resistance is calculated by integrating the inverse turbulent transfer coefficient inside the canopy from the effective ground roughness length to the canopy source height and, further, from there to the canopy height. This combination of schemes provides less rapid mass transport out of the surface layer into other layers, during both convective and non-convective periods, than other local and non-local schemes parameterizing mixing processes in the ABL. The suggested method for calculating the in-canopy resistance for dry deposition over a vegetated surface differs markedly from the commonly used one, particularly over forest vegetation.
In this paper, we studied the performance of a non-local turbulent kinetic energy scheme for vertical diffusion combined with a non-local convective mixing scheme with varying upward mixing in the atmospheric boundary layer (COM), and its impact on the concentrations of pollutants calculated with chemical and air-quality models. This scheme was also compared with a commonly used local eddy-diffusivity scheme. Concentrations of NO2 simulated with the COM scheme and the new parameterization of the in-canopy resistance are closer to the observations than those obtained with the local eddy-diffusivity scheme, and are in general higher (on the order of 15-22%). To examine the performance of the scheme, simulated and measured concentrations of NO2 were compared for the years 1999 and 2002. The comparison was made over the entire domain of simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), into which the schemes were incorporated.
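The in-canopy resistance computation described above is an integral of the inverse turbulent transfer coefficient through the canopy depth; a sketch with an assumed exponential profile (the paper's actual profile, limits, and constants differ):

```python
import math

import numpy as np

def in_canopy_resistance(z0_ground, canopy_height, k_profile, n=400):
    """In-canopy aerodynamic resistance as the integral of the inverse
    turbulent transfer coefficient from the effective ground roughness
    length up to the canopy height (trapezoidal quadrature)."""
    z = np.linspace(z0_ground, canopy_height, n)
    f = 1.0 / k_profile(z)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

# Hypothetical exponentially decaying transfer coefficient inside the
# canopy: K(z) = K_h * exp(-alpha * (1 - z/h_c)); chosen only so the
# integral has a closed form for checking the quadrature.
K_h, alpha, h_c, z0g = 0.2, 3.0, 10.0, 0.01
k_profile = lambda z: K_h * np.exp(-alpha * (1.0 - z / h_c))
r_c = in_canopy_resistance(z0g, h_c, k_profile)

# Closed form of the same integral, for comparison:
r_exact = h_c / (alpha * K_h) * (math.exp(alpha * (1 - z0g / h_c)) - 1.0)
print(abs(r_c - r_exact) / r_exact < 1e-3)  # → True
```

Because transfer is weakest near the ground, most of the resistance accumulates in the lowest part of the canopy, which is why the choice of lower integration limit (the effective ground roughness length) matters.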
Hou, Chieh; Ateshian, Gerard A.
2015-01-01
Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation. PMID:26291492
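The tension-only spherical integration can be sketched with Gauss-Legendre nodes across latitude and the trapezoidal rule across longitude; the paper uses Gauss-Kronrod across latitudes, and the uniaxial test case below (where every direction is in tension) is chosen only because it has a closed-form value:

```python
import numpy as np

def sphere_integral_tension(E, n_lat=32, n_lon=64):
    """Integrate (n.E.n)^2 over the unit sphere, counting only fiber
    directions n in tension (n.E.n > 0). Gauss-Legendre nodes across
    latitude (in mu = cos(theta)), trapezoidal rule across longitude --
    a simplified stand-in for the Gauss-Kronrod/trapezoidal scheme."""
    mu, w = np.polynomial.legendre.leggauss(n_lat)       # mu = cos(theta)
    phi = np.linspace(0.0, 2 * np.pi, n_lon, endpoint=False)
    total = 0.0
    for m, wt in zip(mu, w):
        s = np.sqrt(1.0 - m * m)
        n = np.stack([s * np.cos(phi), s * np.sin(phi), np.full_like(phi, m)])
        en = np.einsum("ik,ij,jk->k", n, E, n)           # n.E.n per direction
        en = np.where(en > 0.0, en, 0.0)                 # tension-only mask
        total += wt * np.sum(en**2) * (2 * np.pi / n_lon)
    return total

# Uniaxial stretch along z: n.E.n = mu^2 >= 0 everywhere, and
# ∫ mu^4 dOmega = 2*pi * ∫_{-1}^{1} mu^4 dmu = 4*pi/5.
E = np.diag([0.0, 0.0, 1.0])
val = sphere_integral_tension(E)
exact = 4 * np.pi / 5
print(abs(val - exact) < 1e-8)  # → True
```

For general strain states the mask zeroes out the compressed region; the paper's refinement is to place the quadrature nodes only inside the tensile region's latitude bounds rather than masking a full-sphere grid.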
NASA Astrophysics Data System (ADS)
Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.
2005-08-01
Head-mounted display (HMD) technologies find a variety of applications in 3D virtual and augmented environments, 3D scientific visualization, and wearable displays. While most current HMDs use head pose to approximate the line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye-tracked HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system, using an eye model along with multiple near-infrared LED (IRLED) illuminators and imaging optics, and show the irradiance variation across the different eye structures. The simulation of dark-pupil effects along with multiple first-order Purkinje images is also presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.
Room-temperature-deposited dielectrics and superconductors for integrated photonics.
Shainline, Jeffrey M; Buckley, Sonia M; Nader, Nima; Gentry, Cale M; Cossel, Kevin C; Cleary, Justin W; Popović, Miloš; Newbury, Nathan R; Nam, Sae Woo; Mirin, Richard P
2017-05-01
We present an approach to fabrication and packaging of integrated photonic devices that utilizes waveguide and detector layers deposited at near-ambient temperature. All lithography is performed with a 365 nm i-line stepper, facilitating low cost and high scalability. We have shown low-loss SiN waveguides, high-Q ring resonators, critically coupled ring resonators, 50/50 beam splitters, Mach-Zehnder interferometers (MZIs) and a process-agnostic fiber packaging scheme. We have further explored the utility of this process for applications in nonlinear optics and quantum photonics. We demonstrate spectral tailoring and octave-spanning supercontinuum generation as well as the integration of superconducting nanowire single photon detectors with MZIs and channel-dropping filters. The packaging approach is suitable for operation up to 160 °C as well as below 1 K. The process is well suited for augmentation of existing foundry capabilities or as a stand-alone process.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
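As a sketch of the idea (our own illustration, not the authors' implementation), the snippet below computes a one-dimensional Gaussian-process posterior with a squared-exponential kernel. The posterior variance vanishes at the sampled grid points and grows between them, which is exactly the spatially varying interpolation uncertainty the proposed similarity measure exploits. The hyperparameters ell, sf and the jitter noise are illustrative:

```python
import numpy as np

def gp_interpolate(xg, yg, xq, ell=1.0, sf=1.0, noise=1e-8):
    """Gaussian-process posterior mean and variance at query points xq,
    given samples yg on grid xg, with a squared-exponential kernel.
    ell, sf and the jitter `noise` are illustrative hyperparameters."""
    k = lambda a, b: sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(xg, xg) + noise * np.eye(len(xg))
    Ks = k(xq, xg)                            # cross-covariance, shape (m, n)
    mean = Ks @ np.linalg.solve(K, yg)        # posterior mean at xq
    # posterior variance: k(x, x) - Ks K^{-1} Ks^T (diagonal only)
    var = sf**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)
```

Querying at the grid points reproduces the samples with near-zero variance; querying between them returns a nonzero variance that a registration similarity can marginalize over.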
NASA Astrophysics Data System (ADS)
Chen, Dong; Shang-Hong, Zhao; MengYi, Deng
2018-03-01
The multiple-crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric down-conversion process, we present a modified MDI-QKD scheme with an MHPS, for which two architectures are proposed: a symmetric scheme and an asymmetric scheme. The symmetric scheme, in which the crystals are linked by photon switches in a log-tree structure, is adopted to overcome the limitation of the current low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue that the symmetric scheme suffers as the number of crystals increases. Numerical simulations show that our modified scheme offers clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source or a traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source for quantum key distribution tasks requiring single photons.
Variational discretization of the nonequilibrium thermodynamics of simple systems
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Yoshimura, Hiroaki
2018-04-01
In this paper, we develop variational integrators for the nonequilibrium thermodynamics of simple closed systems. These integrators are obtained by discretizing the Lagrangian variational formulation of nonequilibrium thermodynamics developed in Gay-Balmaz and Yoshimura (2017a J. Geom. Phys. 111 169-93; 2017b J. Geom. Phys. 111 194-212), and thus extend the variational integrators of Lagrangian mechanics to include irreversible processes. In the continuous setting, we derive the structure-preserving property of the flow of such systems; this property is an extension of the symplectic property of the flow of the Euler-Lagrange equations. In the discrete setting, we show that the discrete flow solution of our numerical scheme verifies a discrete version of this property. We also present the regularity conditions which ensure the existence of the discrete flow. Finally, we illustrate our discrete variational schemes with the implementation of an example of a simple closed system.
NASA Astrophysics Data System (ADS)
Goldberg, Niels; Ospald, Felix; Schneider, Matti
2017-10-01
In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.
Sun, Min-Chul; Kim, Garam; Kim, Sang Wan; Kim, Hyun Woo; Kim, Hyungjin; Lee, Jong-Ho; Shin, Hyungcheol; Park, Byung-Gook
2012-07-01
In order to extend the conventional low power Si CMOS technology beyond the 20-nm node without SOI substrates, we propose a novel co-integration scheme to build horizontal- and vertical-channel MOSFETs together and verify the idea using TCAD simulations. From the fabrication viewpoint, it is highlighted that this scheme provides additional vertical devices with good scalability by adding a few steps to the conventional CMOS process flow for fin formation. In addition, the benefits of the co-integrated vertical devices are investigated using a TCAD device simulation. From this study, it is confirmed that the vertical device shows improved off-current control and a larger drive current when the body dimension is less than 20 nm, due to the electric field coupling effect at the double-gated channel. Finally, the benefits from the circuit design viewpoint, such as the larger midpoint gain and beta and lower power consumption, are confirmed by the mixed-mode circuit simulation study.
Robust Stabilization of T-S Fuzzy Stochastic Descriptor Systems via Integral Sliding Modes.
Li, Jinghao; Zhang, Qingling; Yan, Xing-Gang; Spurgeon, Sarah K
2017-09-19
This paper addresses the robust stabilization problem for T-S fuzzy stochastic descriptor systems using an integral sliding mode control paradigm. A classical integral sliding mode control scheme and a nonparallel distributed compensation (Non-PDC) integral sliding mode control scheme are presented. It is shown that two restrictive assumptions previously adopted when developing sliding mode controllers for Takagi-Sugeno (T-S) fuzzy stochastic systems are not required in the proposed framework. A unified framework for sliding mode control of T-S fuzzy systems is formulated. The proposed Non-PDC integral sliding mode control scheme encompasses existing schemes when the previously imposed assumptions hold. Stability of the sliding motion is analyzed, and the sliding mode controller is parameterized in terms of the solutions of a set of linear matrix inequalities, which facilitates design. The methodology is applied to an inverted pendulum model to validate the effectiveness of the results presented.
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2014-10-01
The Weather Research and Forecasting (WRF) model provides operational services worldwide and is linked to many daily activities, in particular during severe weather events. The Yonsei University (YSU) scheme is one of the planetary boundary layer (PBL) schemes in WRF. The PBL scheme is responsible for vertical sub-grid-scale fluxes due to eddy transports in the whole atmospheric column; it determines the flux profiles within the well-mixed boundary layer and the stable layer, and thus provides atmospheric tendencies of temperature, moisture (including clouds), and horizontal momentum in the entire atmospheric column. The YSU scheme is well suited to massively parallel computation as there are no interactions among horizontal grid points. To accelerate the computation of the YSU scheme, we employ the Intel Many Integrated Core (MIC) architecture, a multiprocessor computer structure with merits of efficient parallelization and vectorization. Our results show that MIC-based optimization improved the performance of the first version of the multi-threaded code on the Xeon Phi 5110P by a factor of 2.4x. Furthermore, the same CPU-based optimizations improved the performance on an Intel Xeon E5-2603 by a factor of 1.6x compared to the first version of the multi-threaded code.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk-explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration-level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
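The risk-return trade-off at the heart of REILP can be illustrated with a deliberately tiny one-variable example (our own sketch, not the paper's model): maximize the return of a single activity whose emission intensity is only known as an interval, and sweep the aspiration (risk) level from pessimistic to optimistic:

```python
def reilp_tradeoff(r, p_lo, p_hi, cap, xmax, levels):
    """Toy risk-explicit interval program: maximize return r*x subject to
    p*x <= cap and 0 <= x <= xmax, where the emission intensity p is only
    known to lie in [p_lo, p_hi]. The risk level lam in [0, 1] relaxes p
    from the pessimistic bound (p_hi) toward the optimistic one (p_lo):
    higher lam buys a higher return at a higher risk of violation."""
    curve = []
    for lam in levels:
        p = p_hi - lam * (p_hi - p_lo)   # effective coefficient at this risk level
        x = min(xmax, cap / p)           # optimal activity under the constraint
        curve.append((lam, r * x))
    return curve
```

Plotting the resulting (risk level, return) pairs gives exactly the kind of trade-off curve from which a "low risk and high return efficiency" window can be read off.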
Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal
2014-01-01
This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938
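A single range-only EKF update of the kind fused by such a filter can be sketched as follows (a generic textbook update for a beacon at a known position, not the authors' PF-EKF code); the state here is just the robot position plus any extra entries:

```python
import numpy as np

def ekf_range_update(x, P, beacon, z, r_var):
    """One EKF update for a single range-only measurement z to a beacon at
    a known position (a generic textbook update, not the authors' PF-EKF).
    x[:2] holds the robot position estimate; P is the state covariance."""
    d = np.linalg.norm(x[:2] - beacon)        # predicted range
    H = np.zeros((1, len(x)))
    H[0, :2] = (x[:2] - beacon) / d           # Jacobian of the range w.r.t. position
    S = (H @ P @ H.T).item() + r_var          # innovation covariance
    K = (P @ H.T) / S                         # Kalman gain, shape (n, 1)
    x_new = x + K[:, 0] * (z - d)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

A measurement matching the predicted range leaves the estimate unchanged but still shrinks the covariance along the line of sight; a longer-than-predicted range pushes the estimate away from the beacon.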
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Yang, Lie
2018-05-01
To achieve accurate and fully autonomous navigation for spacecraft, inertial/celestial integrated navigation has attracted increasing attention. In this study, a missile-borne inertial/stellar-refraction integrated navigation scheme is proposed. Position dilution of precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulations are conducted, in which the results of the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are compared and the factors affecting navigation accuracy are studied. The simulation results indicate that the proposed observation scheme achieves accurate positioning, with EKF and UKF performing similarly.
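The generic position-dilution-of-precision definition underlying such an analysis is compact enough to sketch (this is the standard DOP formula from observation geometry; the paper derives a refraction-specific variant):

```python
import numpy as np

def pdop(los_unit_vectors):
    """Position dilution of precision for a set of unit observation
    directions, one per row of H: PDOP = sqrt(trace((H^T H)^{-1})).
    Smaller values mean the geometry constrains position better; an
    optimized observation scheme seeks directions minimizing this."""
    H = np.asarray(los_unit_vectors, dtype=float)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))
```

Three mutually orthogonal directions give the best-conditioned geometry (PDOP = sqrt(3)); nearly coplanar or nearly parallel directions inflate PDOP sharply.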
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace ironmaking process. The physical and chemical phenomena involved take place across a wide range of length and time scales, so three models are developed to simulate different regions of the blast furnace: the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping of output and input between models and an iterative scheme are developed to establish communication between the models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of the different models provides a way to simulate the blast furnace realistically, improving the modeling resolution of local phenomena and minimizing model assumptions.
Learning target masks in infrared linescan imagery
NASA Astrophysics Data System (ADS)
Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter
1997-04-01
In this paper we propose a neural-network-based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. In the second step, a neural network filter of moderate size is trained to indicate the target location at each of the lowest three levels of the Laplacian pyramid. The last step fuses the outputs of the several neural network filters to obtain the final result. To perform this fusion we use a belief network, which combines the various filter outputs in a statistically meaningful way and also allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
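The decomposition in the first step can be sketched as follows (a generic Laplacian pyramid in plain NumPy, not the authors' code); the collapse function shows that the representation is exactly invertible, so no information is lost before the per-level filters are applied:

```python
import numpy as np

def blur(img):
    """Separable 3x3 binomial ([1, 2, 1]/4) low-pass filter, reflect-padded."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    p = np.pad(img, 1, mode='reflect')
    h = k[0]*p[:, :-2] + k[1]*p[:, 1:-1] + k[2]*p[:, 2:]   # horizontal pass
    return k[0]*h[:-2] + k[1]*h[1:-1] + k[2]*h[2:]         # vertical pass

def build_laplacian_pyramid(img, levels):
    """Each level stores the detail lost by blur-and-subsample; the last
    level stores the remaining low-pass image."""
    pyr, cur = [], img
    for _ in range(levels - 1):
        small = blur(cur)[::2, ::2]
        up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        pyr.append(cur - up[:cur.shape[0], :cur.shape[1]])
        cur = small
    pyr.append(cur)
    return pyr

def collapse(pyr):
    """Invert the decomposition exactly, regardless of the filter used."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        up = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1)
        cur = up[:lap.shape[0], :lap.shape[1]] + lap
    return cur
```

The nearest-neighbor upsampling here is a simplification; any fixed expand operator keeps the collapse exact as long as the same operator is used in both directions.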
NASA Astrophysics Data System (ADS)
Lee, Jung-Youl; Seo, Il-Seok; Ma, Seong-Min; Kim, Hyeon-Soo; Kim, Jin-Woong; Kim, DoOh; Cross, Andrew
2013-03-01
The migration to a 3D implementation of NAND flash devices is seen as the leading contender to replace traditional planar NAND architectures. However, the strategy of replacing shrinking design rules with greater aspect ratios is not without its own set of challenges. The yield-limiting defect challenges for the planar NAND front end were primarily bridges, protrusions and residues at the bottom of the gates, while the primary challenges for front-end 3D NAND are buried particles, voids and bridges at the top, middle and bottom of high-aspect-ratio structures. Of particular interest are the yield challenges in the channel hole process module and developing an understanding of the contribution of litho and etch defectivity for this challenging new integration scheme. The key defectivity and process challenges in this module are missing, misshapen or under-etched channel holes, as well as reducing noise from other non-yield-limiting defect types and noise related to the process integration scheme. These challenges are expected to intensify as the memory density increases. In this study we show that a broadband brightfield approach to defect monitoring can be uniquely effective for the channel hole module. This approach is correlated to the end-of-line (EOL) wafer bin map for verification of capability.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
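The core of any event-based scheme of this kind is a closed-form state advance between events, so no time stepping is needed. As a sketch (our own illustration with illustrative time constants, covering only the single-exponential excitatory current; note the tau_s != tau_m assumption, which is related to the kind of time-constant constraints the letter analyzes):

```python
import math

def lif_state_between_events(v0, i0, dt, tau_m=20.0, tau_s=5.0):
    """Closed-form advance of a leaky integrate-and-fire state between
    events, for dV/dt = -V/tau_m + I and dI/dt = -I/tau_s (tau_s != tau_m).
    The solution is a sum of two exponentials, so arbitrarily long
    inter-event intervals are handled in one algebraic step."""
    a = i0 * tau_m * tau_s / (tau_s - tau_m)     # particular-solution amplitude
    v = (v0 - a) * math.exp(-dt / tau_m) + a * math.exp(-dt / tau_s)
    i = i0 * math.exp(-dt / tau_s)
    return v, i
```

Because this is the exact flow of a linear system, advancing by two half-intervals reproduces a single full-interval advance to machine precision.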
Nonreciprocal frequency conversion in a multimode microwave optomechanical circuit
NASA Astrophysics Data System (ADS)
Feofanov, A. K.; Bernier, N. R.; Toth, L. D.; Koottandavida, A.; Kippenberg, T. J.
Nonreciprocal devices such as isolators, circulators, and directional amplifiers are pivotal to quantum signal processing with superconducting circuits. In the microwave domain, commercially available nonreciprocal devices are based on ferrite materials; they are barely compatible with superconducting quantum circuits, lossy, and cannot be integrated on chip. Significant potential exists for implementing non-magnetic, chip-scale nonreciprocal devices using microwave optomechanical circuits. Here we demonstrate the possibility of nonreciprocal frequency conversion in a multimode microwave optomechanical circuit using solely the optomechanical interaction between modes. The conversion scheme and results reflecting the current progress of its experimental implementation will be presented.
NASA Astrophysics Data System (ADS)
Li, G. Q.; Zhu, Z. H.
2015-12-01
Dynamic modeling of tethered spacecraft that accounts for tether elasticity is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach combining the nodal position finite element method (NPFEM) with the implicit, symplectic, 2-stage, 4th-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed on an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical 4th-order Runge-Kutta scheme and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and that the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting of tethered spacecraft over a long period.
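The flavor of the symplectic family used here can be conveyed with its simplest member, the 1-stage Gauss-Legendre rule (implicit midpoint, 2nd order), applied to an elastic pendulum. This is our own sketch with illustrative parameters, not the paper's 2-stage 4th-order implementation:

```python
import numpy as np

def rhs(z, m=1.0, k=10.0, l0=1.0, g=9.81):
    """Elastic pendulum: z = (qx, qy, px, py); spring of rest length l0."""
    q, p = z[:2], z[2:]
    r = np.linalg.norm(q)
    spring = -k * (r - l0) * q / r            # restoring force along the tether
    return np.concatenate([p / m, spring + np.array([0.0, -m * g])])

def implicit_midpoint_step(z, h, iters=50):
    """One step of the 1-stage Gauss-Legendre (implicit midpoint) rule,
    solved by fixed-point iteration (adequate for this mild stiffness)."""
    z_new = z.copy()
    for _ in range(iters):
        z_new = z + h * rhs(0.5 * (z + z_new))
    return z_new

def energy(z, m=1.0, k=10.0, l0=1.0, g=9.81):
    q, p = z[:2], z[2:]
    return p @ p / (2.0 * m) + 0.5 * k * (np.linalg.norm(q) - l0)**2 + m * g * q[1]
```

Over thousands of steps the energy error stays bounded instead of drifting, which is the symplectic property the paper exploits; the rule is also time-symmetric, so stepping backward recovers the previous state.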
A computerized scheme for lung nodule detection in multiprojection chest radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Wei; Li Qiang; Boyce, Sarah J.
2012-04-15
Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively.
The low performance of the conventional CAD scheme may be attributed to the high noise level in chest radiography, and the small size and low contrast of most nodules. Conclusions: This study indicated that the fusion of correlation information in multiprojection chest radiography can markedly improve the performance of the CAD scheme for lung nodule detection.
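The integration step (retain candidates detected at least twice in the three images, discard lone detections) reduces, after registration, to a simple mutual-corroboration test. A sketch with hypothetical pixel coordinates and tolerance:

```python
import math

def fuse_candidates(views, tol=10.0):
    """Correlation step of the fusion scheme (sketch): after registration,
    a candidate survives only if at least one other projection contains a
    candidate within `tol` pixels; lone detections are discarded as false
    positives."""
    kept = []
    for i, cands in enumerate(views):
        for c in cands:
            corroborated = any(
                math.hypot(c[0] - d[0], c[1] - d[1]) <= tol
                for j, other in enumerate(views) if j != i
                for d in other
            )
            if corroborated:   # i.e. detected in at least two of the three images
                kept.append((i, c))
    return kept
```

A candidate confirmed in a second projection survives; an isolated detection in any single projection is dropped, which is the mechanism behind the large false-positive reduction reported above.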
NASA Technical Reports Server (NTRS)
Temple, Gerald; Siegel, Marc; Amitai, Zwie
1991-01-01
A first-in/first-out (FIFO) memory temporarily stores short surges of data generated at excessively high rates by a data-acquisition system and releases the data at a lower rate suitable for processing by a computer. Its size and complexity are reduced, and its capacity enhanced, by the use of newly developed, sophisticated integrated circuits and by a "byte-folding" scheme that doubles the effective depth and data rate.
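The "byte-folding" idea can be sketched in software terms (a hypothetical illustration of the principle, not the flight hardware): two 8-bit samples are packed into one 16-bit word before queueing, doubling the effective depth of a fixed-word-count FIFO:

```python
from collections import deque

class ByteFoldingFIFO:
    """Sketch of byte folding: pairs of 8-bit samples are packed into one
    16-bit word before entering a fixed-depth word FIFO, so a FIFO of N
    words effectively buffers 2N bytes. maxlen models the finite hardware
    depth (in this sketch, the oldest word is dropped on overflow)."""

    def __init__(self, depth_words):
        self.q = deque(maxlen=depth_words)
        self.pending = None                      # first byte of a half-filled word

    def push_byte(self, b):
        if self.pending is None:
            self.pending = b & 0xFF
        else:
            self.q.append((self.pending << 8) | (b & 0xFF))
            self.pending = None

    def pop_bytes(self):
        w = self.q.popleft()
        return (w >> 8) & 0xFF, w & 0xFF         # unfold back into two bytes
```

Bytes come out in arrival order, two per dequeued word, preserving FIFO semantics while halving the number of stored words.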
The Sky Is the Limit: Reconstructing Physical Geography from an Aerial Perspective
ERIC Educational Resources Information Center
Williams, Richard D.; Tooth, Stephen; Gibson, Morgan
2017-01-01
In an era of rapid geographical data acquisition, interpretations of remote sensing products are an integral part of many undergraduate geography degree schemes but there are fewer opportunities for collection and processing of primary remote sensing data. Unmanned Aerial Vehicles (UAVs) provide a relatively inexpensive opportunity to introduce…
Involution and Difference Schemes for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.
In the present paper we consider the Navier-Stokes equations for two-dimensional viscous incompressible fluid flow and apply to these equations our earlier-designed general algorithmic approach to the generation of finite-difference schemes. In doing so, we first complete the Navier-Stokes equations to involution by computing their Janet basis and discretize this basis by converting it into integral conservation law form. Then we again complete the obtained difference system to involution, eliminating the partial derivatives and extracting the minimal Gröbner basis from the Janet basis. The elements in the obtained difference Gröbner basis that do not contain partial derivatives of the dependent variables compose a conservative difference scheme. By exploiting the arbitrariness in the numerical integration approximation, we derive two finite-difference schemes that are similar to the classical scheme of Harlow and Welch. Each of the two schemes is characterized by a 5×5 stencil on an orthogonal, uniform grid. We also demonstrate how an inconsistent difference scheme with a 3×3 stencil is generated by an inappropriate numerical approximation of the underlying integrals.
The Personal Hearing System—A Software Hearing Aid for a Personal Communication System
NASA Astrophysics Data System (ADS)
Grimm, Giso; Guilmin, Gwénaël; Poppen, Frank; Vlaming, Marcel S. M. G.; Hohmann, Volker
2009-12-01
A concept and architecture of a personal communication system (PCS) is introduced that integrates audio communication and hearing support for the elderly and hearing-impaired through a personal hearing system (PHS). The concept envisions a central processor connected to audio headsets via a wireless body area network (WBAN). To demonstrate the concept, a prototype PCS is presented that is implemented on a netbook computer with a dedicated audio interface in combination with a mobile phone. The prototype can be used for field-testing possible applications and to reveal possibilities and limitations of the concept of integrating hearing support in consumer audio communication devices. It is shown that the prototype PCS can integrate hearing aid functionality, telephony, public announcement systems, and home entertainment. An exemplary binaural speech enhancement scheme that represents a large class of possible PHS processing schemes is shown to be compatible with the general concept. However, an analysis of hardware and software architectures shows that the implementation of a PCS on future advanced cell phone-like devices is challenging. Because of limitations in processing power, recoding of prototype implementations into fixed point arithmetic will be required and WBAN performance is still a limiting factor in terms of data rate and delay.
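The recoding into fixed-point arithmetic mentioned above typically means moving audio samples and filter gains into a format such as Q15 (one sign bit, 15 fractional bits). The sketch below illustrates that idea with purely illustrative values; it is not the PHS code, and the helper names are hypothetical.

```python
Q = 15
SCALE = 1 << Q  # Q15: one sign bit, 15 fractional bits

def to_q15(x):
    """Quantize a sample in [-1, 1) to Q15 with saturation."""
    v = int(round(x * SCALE))
    return max(-SCALE, min(SCALE - 1, v))

def q15_mul(a, b):
    """Fixed-point multiply: the 32-bit product is shifted back to Q15."""
    return (a * b) >> Q

def from_q15(v):
    return v / SCALE

# Apply a -6 dB gain (0.5) to a block of samples entirely in Q15.
gain = to_q15(0.5)
samples = [0.25, -0.5, 0.9]
fixed = [from_q15(q15_mul(to_q15(s), gain)) for s in samples]
```

The quantization error stays below one least-significant bit (about 3 × 10⁻⁵), which is the trade-off such a recoding accepts in exchange for running without floating-point hardware.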
An Exact Integration Scheme for Radiative Cooling in Hydrodynamical Simulations
NASA Astrophysics Data System (ADS)
Townsend, R. H. D.
2009-04-01
A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semidiscrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
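The core idea can be illustrated on a normalized power-law cooling law, where the semidiscrete equation dT/dt = -T^α admits a closed-form update. This toy sketch assumes a single power law (not the paper's general piecewise treatment) and shows why the exact update stays physical at time steps where explicit Euler fails:

```python
def exact_cooling_step(T, dt, alpha):
    """Exact update for dT/dt = -T**alpha (alpha != 1): integrating
    T**(-alpha) dT = -dt gives the new temperature in closed form."""
    return (T ** (1.0 - alpha) + (alpha - 1.0) * dt) ** (1.0 / (1.0 - alpha))

def euler_step(T, dt, alpha):
    """Explicit Euler update of the same equation, for comparison."""
    return T - dt * T ** alpha
```

For alpha = 2 the exact update reduces to T/(1 + T·dt), which stays positive for any dt, whereas the Euler update goes negative (unphysical) once dt exceeds 1/T.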
Cloud microphysics modification with an online coupled COSMO-MUSCAT regional model
NASA Astrophysics Data System (ADS)
Sudhakar, D.; Quaas, J.; Wolke, R.; Stoll, J.; Muehlbauer, A. D.; Tegen, I.
2015-12-01
Abstract: The quantification of clouds, aerosols, and aerosol-cloud interactions in models continues to be a challenge (IPCC, 2013). In this context, a two-moment bulk microphysical scheme is used to understand aerosol-cloud interactions in the regional model COSMO (Consortium for Small-Scale Modeling). The two-moment scheme in COSMO was especially designed to represent aerosol effects on the microphysics of mixed-phase clouds (Seifert et al., 2006). To improve the model's predictive skill, the radiation scheme has been coupled with the two-moment microphysical scheme. Further, the cloud microphysics parameterization has been modified by coupling COSMO with MUSCAT (MultiScale Chemistry Aerosol Transport model; Wolke et al., 2004). In this study, we discuss initial results from the online-coupled COSMO-MUSCAT model system with the modified two-moment parameterization scheme, along with the COSP (CFMIP Observational Simulator Package) satellite simulator. This online-coupled model system aims to improve the representation of sub-grid-scale processes in regional weather prediction. The constant aerosol concentration used in the Seifert and Beheng (2006) parameterization in the COSMO model has been replaced by aerosol concentrations derived from the MUSCAT model. The cloud microphysical processes from the modified two-moment scheme are compared with those of the stand-alone COSMO model. To validate the robustness of the model simulation, the coupled model system is integrated with the COSP satellite simulator (Muhlbauer et al., 2012). Further, the simulations are compared with MODIS (Moderate Resolution Imaging Spectroradiometer) and ISCCP (International Satellite Cloud Climatology Project) satellite products.
Control of vacuum induction brazing system for sealing of instrumentation feed-through
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung Ho Ahn; Jintae Hong; Chang Young Joung
2015-07-01
The integrity of instrumentation cables is an important performance parameter, in addition to the sealing performance, in the brazing process. In this paper, an accurate brazing control scheme was developed for the brazing of the instrumentation feed-through in the vacuum induction brazing system. The experimental results show that accurate brazing temperature control performance is achieved by the developed control scheme. Consequently, the sealing performance of the instrumentation feed-through and the integrity of the instrumentation cables were satisfactory after brazing.
Mars Pathfinder Microrover- Implementing a Low Cost Planetary Mission Experiment
NASA Technical Reports Server (NTRS)
Matijevic, J.
1996-01-01
The Mars Pathfinder Microrover Flight Experiment (MFEX) is a NASA Office of Space Access and Technology (OSAT) flight experiment which has been delivered and integrated with the Mars Pathfinder (MPF) lander and spacecraft system. The total cost of the MFEX mission, including all subsystem design and development, test, integration with the MPF lander, and operations on Mars, has been capped at $25 M. This paper discusses the process and the implementation scheme which have resulted in the development of this first Mars rover.
Two-phase anaerobic digestion within a solid waste/wastewater integrated management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Gioannis, G.; Diaz, L.F.; Muntoni, A.
2008-07-01
A two-phase, wet anaerobic digestion process was tested at laboratory scale using mechanically pre-treated municipal solid waste (MSW) as the substrate. The proposed process scheme differs from others due to the integration of the MSW and wastewater treatment cycles, which makes it possible to avoid the recirculation of process effluent. The results obtained show that supplying facultative biomass, drawn from the wastewater aeration tank, to the solid waste acidogenic reactor improves the performance of the first phase of the process, which is positively reflected on the second one. The proposed process performed successfully, adopting mesophilic conditions and a relatively short hydraulic retention time in the methanogenic reactor, as well as high values of organic loading rate. Significant VS removal efficiency and biogas production were achieved. Moreover, the methanogenic reactor quickly reached optimal conditions for a stable methanogenic phase. Studies conducted elsewhere also confirm the feasibility of integrating the treatment of the organic fraction of MSW with that of wastewater.
NASA Technical Reports Server (NTRS)
Bates, J. R.; Semazzi, F. H. M.; Higgins, R. W.; Barros, Saulo R. M.
1990-01-01
A vector semi-Lagrangian semi-implicit two-time-level finite-difference integration scheme for the shallow water equations on the sphere is presented. A C-grid is used for the spatial differencing. The trajectory-centered discretization of the momentum equation in vector form eliminates pole problems and, at comparable cost, gives greater accuracy than a previous semi-Lagrangian finite-difference scheme which used a rotated spherical coordinate system. In terms of the insensitivity of the results to increasing timestep, the new scheme is as successful as recent spectral semi-Lagrangian schemes. In addition, the use of a multigrid method for solving the elliptic equation for the geopotential allows efficient integration with an operation count which, at high resolution, is of lower order than in the case of the spectral models. The properties of the new scheme should allow finite-difference models to compete with spectral models more effectively than has previously been possible.
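The departure-point idea behind semi-Lagrangian schemes can be illustrated with 1-D constant-speed advection on a periodic grid — a toy sketch, not the vector shallow-water scheme of the abstract. Each grid value at the new time level is found by interpolating at the point the flow came from, which remains stable even at Courant numbers well above one:

```python
import numpy as np

def semi_lagrangian_advect(u, c, dx, dt, nsteps):
    """Advect profile u at constant speed c on a periodic grid by tracing
    each grid point back to its departure point x - c*dt and interpolating
    linearly there; stable for any Courant number c*dt/dx."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    x = np.arange(n) * dx
    L = n * dx
    for _ in range(nsteps):
        x_dep = (x - c * dt) % L                       # departure points
        idx = np.floor(x_dep / dx).astype(int) % n     # left neighbor
        frac = x_dep / dx - np.floor(x_dep / dx)       # interpolation weight
        u = (1.0 - frac) * u[idx] + frac * u[(idx + 1) % n]
    return u
```

With c·dt an integer multiple of dx the interpolation is exact and the result is a pure shift of the profile, even at Courant number 3, where an explicit Eulerian scheme would be unconditionally unstable.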
Development of a plan for automating integrated circuit processing
NASA Technical Reports Server (NTRS)
1971-01-01
The operations analysis and equipment evaluations pertinent to the design of an automated production facility capable of manufacturing beam-lead CMOS integrated circuits are reported. The overall plan shows approximate cost of major equipment, production rate and performance capability, flexibility, and special maintenance requirements. Direct computer control is compared with supervisory-mode operations. The plan is limited to wafer processing operations from the starting wafer to the finished beam-lead die after separation etching. The work already accomplished in implementing various automation schemes, and the type of equipment which can be found for instant automation are described. The plan is general, so that small shops or large production units can perhaps benefit. Examples of major types of automated processing machines are shown to illustrate the general concepts of automated wafer processing.
Foli, Samson; Ros-Tonen, Mirjam A F; Reed, James; Sunderland, Terry
2018-07-01
In recognition of the failures of sectoral approaches to overcome global challenges of biodiversity loss, climate change, food insecurity and poverty, scientific discourse on biodiversity conservation and sustainable development is shifting towards integrated landscape governance arrangements. Current landscape initiatives, however, depend heavily on external actors and funding, raising the question of whether, how, and under what conditions locally embedded resource management schemes can serve as entry points for the implementation of integrated landscape approaches. This paper assesses the entry-point potential of three established natural resource management schemes in West Africa that target landscape degradation with the involvement of local communities: the Chantier d'Aménagement Forestier scheme encompassing forest management sites across Burkina Faso, and the Modified Taungya System and Community Resource Management Area (CREMA) wildlife initiatives in Ghana. Based on a review of the current literature, we analyze the extent to which the design principles that define a landscape approach apply to these schemes. We found that the CREMA meets most of the desired criteria, but that its scale may be too limited to guarantee effective landscape governance, hence requiring upscaling. Conversely, the other two initiatives are strongly lacking in design principles fundamental to integrated approaches, continual learning, and capacity building. Monitoring and evaluation bodies and participatory learning and negotiation platforms could enhance the schemes' alignment with integrated landscape approaches.
Lin, Yuehe; Bennett, Wendy D.; Timchalk, Charles; Thrall, Karla D.
2004-03-02
Microanalytical systems based on a microfluidics/electrochemical detection scheme are described. Individual modules, such as microfabricated piezoelectrically actuated pumps and a microelectrochemical cell, were integrated onto portable platforms. This allowed rapid change-out and repair of individual components by incorporating "plug and play" concepts now standard in PCs. Different integration schemes were used for construction of the microanalytical systems based on microfluidics/electrochemical detection. In one scheme, all individual modules were integrated on the surface of the standard microfluidic platform based on a plug-and-play design. A microelectrochemical flow cell integrating three electrodes in a wall-jet design was fabricated on a polymer substrate. The microelectrochemical flow cell was then plugged directly into the microfluidic platform. Another integration scheme was based on a multilayer lamination method utilizing stacked modules with different functionality to achieve a compact microanalytical device. Application of the microanalytical system for the detection of lead in, for example, river water and saliva samples using stripping voltammetry is described.
Assimilating the Future for Better Forecasts and Earlier Warnings
NASA Astrophysics Data System (ADS)
Du, H.; Wheatcroft, E.; Smith, L. A.
2016-12-01
Multi-model ensembles have become popular tools to account for some of the uncertainty due to model inadequacy in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently or with different primary target variables, each is likely to have different dynamical strengths and weaknesses. With statistical post-processing, such information is carried only within each single-model ensemble: no advantage is taken of it to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of operationally integrating the dynamical information about the future from each individual model. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches were originally designed to improve state estimation from the past up to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.
Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate
NASA Technical Reports Server (NTRS)
Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.
2001-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study and the latter two have been improved significantly as a consequence.
NASA Astrophysics Data System (ADS)
Huijnen, V.; Bouarar, I.; Chabrillat, S. H.; Christophe, Y.; Thierno, D.; Karydis, V.; Marecal, V.; Pozzer, A.; Flemming, J.
2017-12-01
Operational atmospheric composition analyses and forecasts such as those developed in the Copernicus Atmosphere Monitoring Service (CAMS) rely on modules describing emissions, chemical conversion, transport and removal processes, as well as data assimilation methods. The CAMS forecasts can be used to drive regional air quality models across the world. Critical analyses of uncertainties in any of these processes are continuously needed to advance the quality of such systems on a global scale, ranging from the surface up to the stratosphere. With regard to the atmospheric chemistry describing the fate of trace gases, the operational system currently relies on a modified version of the CB05 chemistry scheme for the troposphere combined with the Cariolle scheme to describe stratospheric ozone, as integrated in ECMWF's Integrated Forecasting System (IFS). It is further constrained by assimilation of satellite observations of CO, O3 and NO2. As part of CAMS we have recently developed three fully independent schemes to describe the chemical conversion throughout the atmosphere. These parameterizations originate from the parent model codes of MOZART, MOCAGE and a combination of TM5/BASCOE. In this contribution we evaluate the correspondence and elemental differences in the performance of the three schemes in an otherwise identical model configuration (excluding data assimilation) against a large range of in-situ and satellite-based observations of ozone, CO, VOCs and chlorine-containing trace gases for both the troposphere and the stratosphere. This analysis aims to provide a measure of model uncertainty in the operational system for tracers that are not, or only poorly, constrained by data assimilation. It also aims to provide guidance on directions for further model improvement with regard to the chemical conversion module.
Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes
NASA Astrophysics Data System (ADS)
Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.
2018-04-01
To provide better understanding of pultrusion processes with or without temperature control, and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out using the developed cure sensors, which measure the electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for simulating the pultrusion of a rod profile has been successfully corrected and finally defined.
NASA Astrophysics Data System (ADS)
Gaudreau, Louis; Bogan, Alex; Korkusinski, Marek; Studenikin, Sergei; Austing, D. Guy; Sachrajda, Andrew S.
2017-09-01
Long distance entanglement distribution is an important problem for quantum information technologies to solve. Current optical schemes are known to have fundamental limitations. A coherent photon-to-spin interface built with quantum dots (QDs) in a direct bandgap semiconductor can provide a solution for efficient entanglement distribution. QD circuits offer integrated spin processing for full Bell state measurement (BSM) analysis and spin quantum memory. Crucially the photo-generated spins can be heralded by non-destructive charge detection techniques. We review current schemes to transfer a polarization-encoded state or a time-bin-encoded state of a photon to the state of a spin in a QD. The spin may be that of an electron or that of a hole. We describe adaptations of the original schemes to employ heavy holes which have a number of attractive properties including a g-factor that is tunable to zero for QDs in an appropriately oriented external magnetic field. We also introduce simple throughput scaling models to demonstrate the potential performance advantage of full BSM capability in a QD scheme, even when the quantum memory is imperfect, over optical schemes relying on linear optical elements and ensemble quantum memories.
Self-aligned block technology: a step toward further scaling
NASA Astrophysics Data System (ADS)
Lazzarino, Frédéric; Mohanty, Nihar; Feurprier, Yannick; Huli, Lior; Luong, Vinh; Demand, Marc; Decoster, Stefan; Vega Gonzalez, Victor; Ryckaert, Julien; Kim, Ryan Ryoung Han; Mallik, Arindam; Leray, Philippe; Wilson, Chris; Boemmels, Jürgen; Kumar, Kaushik; Nafus, Kathleen; deVilliers, Anton; Smith, Jeffrey; Fonseca, Carlos; Bannister, Julie; Scheer, Steven; Tokei, Zsolt; Piumi, Daniele; Barla, Kathy
2017-04-01
In this work, we present and compare two integration approaches to enable self-alignment of the block suitable for the 5-nm technology node. The first approach explores the insertion of a spin-on metal-based material to memorize the first block and act as an etch stop layer in the overall integration. The second approach evaluates the self-aligned block technology employing widely used organic materials and well-known processes. The concept and the motivation are discussed considering the effects on design and mask count as well as the impact on process complexity and EPE budget. We show the integration schemes and discuss the requirements to enable self-alignment. We present the details of materials and process selection to allow optimal selective etches, and we demonstrate the proof of concept using a 16-nm half-pitch BEOL vehicle. Finally, a study on technology insertion and cost estimation is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason Maung, K.; Hahn, H. Thomas; Ju, Y.S.
Multifunction integration of solar cells in load-bearing structures can enhance overall system performance by reducing parasitic components and material redundancy. The article describes a manufacturing strategy, named the co-curing scheme, to integrate thin-film silicon solar cells on carbon-fiber-reinforced epoxy composites and eliminate parasitic packaging layers. In this scheme, an assembly of a solar cell and a prepreg is cured to form a multifunctional composite in one processing step. The photovoltaic performance of the manufactured structures is then characterized under controlled cyclic mechanical loading. The study finds that the solar cell performance does not degrade under 0.3%-strain cyclic tension loading up to 100 cycles. Significant degradation, however, is observed when the magnitude of cyclic loading is increased to 1% strain. The present study provides an initial set of data to guide and motivate further studies of multifunctional energy harvesting structures.
An integrated control scheme for space robot after capturing non-cooperative target
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-06-01
How to identify the mass properties and eliminate the unknown angular momentum of a space robotic system after capturing a non-cooperative target is a great challenge. This paper focuses on designing an integrated control framework which includes a detumbling strategy, coordination control and parameter identification. Firstly, inverted and forward chain approaches are synthesized for the space robot to obtain the dynamic equation in operational space. Secondly, a detumbling strategy is introduced using elementary functions with normalized time, while the imposed end-effector constraints are considered. Next, a coordination control scheme for stabilizing both the base and the end-effector based on impedance control is implemented under the target's parameter uncertainty. With the measurements of the forces and torques exerted on the target, its mass properties are estimated during the detumbling process accordingly. Simulation results are presented using a 7-degree-of-freedom kinematically redundant space manipulator, which verify the performance and effectiveness of the proposed method.
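A detumbling reference built from "elementary functions with normalized time" can be sketched as follows; this is only an assumption about the general shape of such a profile (a quintic ramp in τ = t/T with zero slope at both ends), not the authors' actual strategy:

```python
def detumble_profile(omega0, t, T):
    """Angular-rate reference as an elementary function of normalized
    time tau = t/T: a quintic ramp taking the rate from omega0 to 0
    with zero rate-derivative at both endpoints (smooth start/stop)."""
    tau = min(max(t / T, 0.0), 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # rises smoothly 0 -> 1
    return omega0 * (1.0 - s)
```

Tracking such a reference lets the manipulator remove the target's angular momentum gradually, keeping the commanded end-effector torques bounded.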
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
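Time-step subcycling in general terms: within one global step, components that evolve quickly take several smaller substeps while slow components take one. This generic sketch (forward Euler on independent decay rates, not the dislocation-dynamics integrator itself) shows the mechanism:

```python
def subcycled_euler(y, rates, dt_global, dy_max):
    """Advance y_i' = -rates[i] * y_i over one global step dt_global,
    subcycling any component whose predicted change exceeds dy_max."""
    out = []
    for yi, k in zip(y, rates):
        # choose enough substeps that each substep changes yi by <~ dy_max
        nsub = max(1, int(abs(k * yi * dt_global) / dy_max) + 1)
        h = dt_global / nsub
        for _ in range(nsub):
            yi = yi - h * k * yi
        out.append(yi)
    return out
```

A fast component (k = 50 with dt_global = 0.1) would go negative under a single Euler step; with subcycling it decays toward exp(-5) while the slow component still costs only one substep.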
Simulation Based Low-Cost Composite Process Development at the US Air Force Research Laboratory
NASA Technical Reports Server (NTRS)
Rice, Brian P.; Lee, C. William; Curliss, David B.
2003-01-01
Low-cost composite research in the US Air Force Research Laboratory, Materials and Manufacturing Directorate, Organic Matrix Composites Branch has focused on the theme of affordable performance. Practically, this means that we take a very broad view when considering the affordability of composites. Factors such as material costs, labor costs, and recurring and nonrecurring manufacturing costs are balanced against performance to arrive at the relative affordability vs. performance measure of merit. The research efforts discussed here are two projects focused on affordable processing of composites. The first is the use of a neural network scheme to model cure reaction kinetics, then utilize the kinetics, coupled with simple heat transport models, to predict future exotherms in real time and control them. The neural network scheme is demonstrated to be very robust and a much more efficient method than the mechanistic cure modeling approach. This enables very practical low-cost processing of thick composite parts. The second project is liquid composite molding (LCM) process simulation. LCM processing of large 3D integrated composite parts has been demonstrated to be a very cost-effective way to produce large integrated aerospace components; specific examples of LCM processes are resin transfer molding (RTM), vacuum-assisted resin transfer molding (VARTM), and other similar approaches. LCM process simulation is a critical part of developing an LCM process approach. Flow simulation enables the development of the most robust approach to introducing resin into complex preforms. Furthermore, LCM simulation can be used in conjunction with flow-front sensors to control the LCM process in real time to account for preform or resin variability.
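The kind of cure kinetics such a neural network learns can be illustrated with a conventional nth-order Arrhenius model; the parameter values below are purely illustrative assumptions, not the resin data from this program:

```python
import math

def cure_profile(T_kelvin, dt=1.0, t_end=3600.0):
    """Integrate an nth-order cure-kinetics model
        d(alpha)/dt = A * exp(-E / (R * T)) * (1 - alpha)**n
    at constant temperature T; alpha is the degree of cure in [0, 1].
    A, E, and n below are illustrative, not fitted values."""
    A, E, R, n = 1.0e5, 6.0e4, 8.314, 1.5   # assumed kinetics constants
    alpha, t = 0.0, 0.0
    while t < t_end and alpha < 0.99:
        rate = A * math.exp(-E / (R * T_kelvin)) * (1.0 - alpha) ** n
        alpha = min(1.0, alpha + rate * dt)
        t += dt
    return alpha
```

Running the model at two hold temperatures shows the strong temperature sensitivity that makes exotherm prediction in thick parts worthwhile: the hotter cure completes within the hour while the cooler one does not.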
Aravind, Priyadharshini; Subramanyan, Vasudevan; Ferro, Sergio; Gopalakrishnan, Rajagopal
2016-04-15
The present article reports an integrated treatment method, viz. biodegradation followed by photo-assisted electrooxidation, as a new approach for the abatement of textile wastewater. In the first stage of the integrated treatment scheme, the chemical oxygen demand (COD) of the real textile effluent was reduced by a biodegradation process using hydrogels of cellulose-degrading Bacillus cereus. The bio-treated effluent was then subjected to the second stage of the integrated scheme, viz. indirect electrooxidation (InDEO) as well as photo-assisted indirect electrooxidation (P-InDEO), using Ti/IrO2-RuO2-TiO2 and Ti as electrodes and applying a current density of 20 mA cm⁻². The influence of cellulose on InDEO is reported here for the first time. In P-InDEO, UV-visible light (280-800 nm) was irradiated toward the anode/electrolyte interface. The effectiveness of this combined treatment in degrading textile effluent was probed by COD measurements and ¹H nuclear magnetic resonance (NMR) spectroscopy. The results indicate that the biological treatment achieves 93% cellulose degradation and 47% COD removal, increasing the efficiency of the subsequent InDEO by 33%. In silico molecular docking analysis ascertained that cellulose fibers affect the InDEO process by interacting with the dyes that are responsible for the COD. On the other hand, P-InDEO resulted in both 95% decolorization and 68% COD removal, as a result of radical mediators. Free radicals generated during P-InDEO were characterized as oxychloride (OCl) by electron paramagnetic resonance (EPR) spectroscopy. This form of coupled approach is especially suggested for the treatment of textile wastewater containing cellulose.
NASA Astrophysics Data System (ADS)
Kyle, P.; Patel, P.; Calvin, K. V.
2014-12-01
Global integrated assessment models used for understanding the linkages between the future energy, agriculture, and climate systems typically represent between 8 and 30 geopolitical macro-regions, balancing the benefits of geographic resolution with the costs of additional data collection, processing, analysis, and computing resources. As these models are continually being improved and updated in order to address new questions for the research and policy communities, it is worth examining the consequences of the country-to-region mapping schemes used for model results. This study presents an application of a data processing system built for the GCAM integrated assessment model that allows any country-to-region assignments, with a minimum of four geopolitical regions and a maximum of 185. We test ten different mapping schemes, including the specific mappings used in existing major integrated assessment models. We also explore the impacts of clustering nations into regions according to the similarity of the structure of each nation's energy and agricultural sectors, as indicated by multivariate analysis. Scenarios examined include a reference scenario, a low-emissions scenario, and scenarios with agricultural and buildings sector climate change impacts. We find that at the global level, the major output variables (primary energy, agricultural land use) are surprisingly similar regardless of regional assignments, but at finer geographic scales, differences are pronounced. We suggest that enhancing geographic resolution is advantageous for analysis of climate impacts on the buildings and agricultural sectors, due to the spatial heterogeneity of these drivers.
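The mapping-scheme idea reduces to a simple aggregation step: country-level model output is summed into whichever regions the active scheme assigns, and the global total is invariant to the scheme while regional detail changes. The country codes, values, and scheme names below are hypothetical illustrations, not GCAM data:

```python
def aggregate(country_values, mapping):
    """Sum country-level results into regions under a given
    country-to-region mapping scheme."""
    regions = {}
    for country, value in country_values.items():
        region = mapping.get(country, "Rest of World")
        regions[region] = regions.get(region, 0.0) + value
    return regions

# Illustrative country-level primary energy (EJ) and two mapping schemes.
primary_energy = {"USA": 90.0, "CAN": 12.0, "FRA": 10.0, "DEU": 13.0}
scheme_coarse = {"USA": "N. America", "CAN": "N. America",
                 "FRA": "W. Europe", "DEU": "W. Europe"}
scheme_fine = {"USA": "USA", "CAN": "CAN", "FRA": "FRA", "DEU": "DEU"}
```

Swapping `scheme_coarse` for `scheme_fine` leaves the global sum unchanged, mirroring the abstract's finding that global totals are robust to the regional assignment while finer scales diverge.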
Robust Integration Schemes for Generalized Viscoplasticity with Internal-State Variables
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Li, W.; Wilt, Thomas E.
1997-01-01
The scope of the work in this presentation focuses on the development of algorithms for the integration of rate-dependent constitutive equations. In view of their robustness, i.e., their superior stability and convergence properties for isotropic and anisotropic coupled viscoplastic-damage models, implicit integration schemes have been selected. The selected scheme is the simplest in its class and is one of the most widely used implicit integrators at present.
Adaptive independent joint control of manipulators - Theory and experiment
NASA Technical Reports Server (NTRS)
Seraji, H.
1988-01-01
The author presents a simple decentralized adaptive control scheme for multijoint robot manipulators based on the independent joint control concept. The proposed control scheme for each joint consists of a PID (proportional-integral-derivative) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. The static and dynamic couplings that exist between the joint motions are compensated by the adaptive independent joint controllers while trajectory tracking is ensured. The proposed scheme is implemented on a MicroVAX II computer for motion control of the first three joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite the strongly coupled, highly nonlinear joint dynamics. The results confirm that decentralized adaptive control of manipulators is feasible in spite of strong interactions between joint motions. The control scheme is computationally very fast and is amenable to parallel-processing implementation within a distributed computing architecture, where each joint is controlled independently by a simple algorithm on a dedicated microprocessor.
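As an illustration of the per-joint controller structure (not the paper's adaptive law: the gains here are fixed, and the gain values, the unit-inertia joint model, and the sinusoidal reference are all assumptions), a single joint can be simulated as a double integrator driven by PID feedback plus a position-velocity-acceleration feedforward term:

```python
import math

def simulate_joint(t_end=3.0, dt=1e-3, kp=400.0, kd=40.0, ki=100.0):
    """Track q_ref(t) = sin(t) on a unit-inertia joint (qdd = u) using PID
    feedback plus position-velocity-acceleration feedforward.
    Gains are fixed here; the paper adjusts them adaptively."""
    q, qd, integ, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        q_ref, v_ref, a_ref = math.sin(t), math.cos(t), -math.sin(t)
        e, e_dot = q_ref - q, v_ref - qd
        integ += e * dt
        u = a_ref + kp * e + kd * e_dot + ki * integ  # feedforward + PID
        qd += u * dt     # semi-implicit Euler on the joint dynamics
        q += qd * dt
        t += dt
    return abs(math.sin(t) - q)  # final tracking error
```

With an exact feedforward term the tracking-error dynamics of each joint decouple from the reference, which is the intuition behind independent joint control; in a real arm the adaptive gains additionally compensate the coupling torques that this toy model omits.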
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of the postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas achieving comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
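A minimal sketch of the recalibrated second-order scheme (with assumed toy parameters: a leaky integrate-and-fire neuron with constant suprathreshold drive, threshold 1, reset 0) combines RK2 stepping, a linear interpolant for the spike time, and re-integration of the remainder of the step from the interpolated spike time:

```python
def rk2_step(v, t, dt, f):
    """One second-order Runge-Kutta (midpoint) step for dv/dt = f(v, t)."""
    k1 = f(v, t)
    k2 = f(v + 0.5 * dt * k1, t + 0.5 * dt)
    return v + dt * k2

def lif_rk2_with_reset(v0, t_end, dt, f, v_thresh=1.0, v_reset=0.0):
    """Integrate a leaky integrate-and-fire neuron; on a threshold crossing,
    locate the spike time by linear interpolation, reset, and integrate the
    remainder of the step starting from the interpolated spike time."""
    v, t, spikes = v0, 0.0, []
    while t < t_end - 1e-12:
        v_new = rk2_step(v, t, dt, f)
        if v_new >= v_thresh:
            # linear interpolant for the crossing time inside the step
            frac = (v_thresh - v) / (v_new - v)
            t_spike = t + frac * dt
            spikes.append(t_spike)
            # continue from the reset potential for the rest of the step
            v = rk2_step(v_reset, t_spike, (t + dt) - t_spike, f)
        else:
            v = v_new
        t += dt
    return v, spikes

# toy neuron: dv/dt = (-v + I)/tau with constant drive I above threshold;
# the exact first spike time is -tau*ln(1 - v_thresh/I) ~= 0.0219722 s
tau, I = 0.02, 1.5
f = lambda v, t: (-v + I) / tau
v_final, spikes = lif_rk2_with_reset(0.0, 0.1, 1e-4, f)
```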
A multilevel finite element method for Fredholm integral eigenvalue problems
NASA Astrophysics Data System (ADS)
Xie, Hehu; Zhou, Tao
2015-12-01
In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted to performing a series of integral iterations together with an eigenvalue solve on the coarsest mesh. Any existing efficient integration scheme can then be used for the associated integration process. Error estimates are provided, and the computational complexity is analyzed. Notably, the total computational work of our method is comparable to that of a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
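The Karhunen-Loève connection can be illustrated on a single mesh (a plain Nyström discretization, not the paper's multigrid method): the Brownian-motion covariance kernel K(s,t) = min(s,t) is a standard test case whose exact eigenvalues λ_k = 1/((k − 1/2)²π²) are known in closed form.

```python
import numpy as np

# Nystrom discretization of the Fredholm integral eigenproblem
#   int_0^1 K(s, t) phi(t) dt = lambda * phi(s)
# for the Brownian-motion covariance K(s, t) = min(s, t), a standard
# Karhunen-Loeve test case with eigenvalues lambda_k = 1/((k - 1/2)^2 pi^2).
n = 400
t = (np.arange(n) + 0.5) / n            # midpoint quadrature nodes on [0, 1]
w = 1.0 / n                              # uniform midpoint weights
K = np.minimum.outer(t, t)               # kernel matrix K(t_i, t_j)
eigvals = np.sort(np.linalg.eigvalsh(K * w))[::-1]

lam1_exact = 1.0 / (0.5 * np.pi) ** 2    # largest exact eigenvalue, ~0.4053
```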
NASA Astrophysics Data System (ADS)
Parks, Helen Frances
This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long-time simulations, in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated with symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the field of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the modeling of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without the reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods.
Our project builds on previous work to extend POD to structured settings. In particular, we consider systems evolving on Lie groups and make use of canonical coordinates in the reduction process. We see considerable improvement in the accuracy of the reduced model over the usual structure-agnostic POD approach.
Conversion and Extraction of Insoluble Organic Materials in Meteorites
NASA Technical Reports Server (NTRS)
Locke, Darren R.; Burton, Aaron S.; Niles, Paul B.
2016-01-01
We endeavor to develop and implement laboratory methods to convert and extract insoluble organic materials (IOM) from low-carbon-bearing meteorites (such as ordinary chondrites) and Precambrian terrestrial rocks, for the purpose of determining the IOM structure and prebiotic chemistries preserved in these types of samples. The general scheme of converting and extracting IOM in samples is summarized in Figure 1. First, powdered samples are solvent extracted in a micro-Soxhlet apparatus multiple times using solvents ranging from non-polar to polar (hexane, non-polar; dichloromethane, non-polar to polar; methanol, polar protic; and acetonitrile, polar aprotic). Second, the solid residue from the solvent extractions is processed using strong acids, hydrochloric and hydrofluoric, to dissolve minerals and isolate the IOM. Third, the isolated IOM is subjected to both thermal (pyrolysis) and chemical (oxidation) degradation to release compounds from the macromolecular material. Finally, products from oxidation and pyrolysis are analyzed by gas chromatography-mass spectrometry (GC-MS). We are working toward an integrated method and analysis scheme that will allow us to determine prebiotic chemistries in ordinary chondrites and Precambrian terrestrial rocks. Powerful techniques that we are including are stepwise, flash, and gradual pyrolysis and ruthenium tetroxide oxidation. More details of the integrated scheme will be presented.
Dissipative preparation of entanglement in optical cavities.
Kastoryano, M J; Reiter, F; Sørensen, A S
2011-03-04
We propose a novel scheme for the preparation of a maximally entangled state of two atoms in an optical cavity. Starting from an arbitrary initial state, a singlet state is prepared as the unique fixed point of a dissipative quantum dynamical process. In our scheme, cavity decay is no longer undesirable, but plays an integral part in the dynamics. As a result, we get a qualitative improvement in the scaling of the fidelity with the cavity parameters. Our analysis indicates that dissipative state preparation is more than just a new conceptual approach, but can allow for significant improvement as compared to preparation protocols based on coherent unitary dynamics.
Maisotsenko cycle applications for multistage compressors cooling
NASA Astrophysics Data System (ADS)
Levchenko, D.; Yurko, I.; Artyukhov, A.; Baga, V.
2017-08-01
The present study provides an overview of Maisotsenko Cycle (M-Cycle) applications for gas cooling in compressor systems. Various schemes of gas cooling systems are considered with regard to their thermal efficiency and cooling capacity. A preliminary calculation of the M-Cycle HMX has been conducted. It is found that the M-Cycle HMX scheme allows the ambient wet-bulb temperature limit of evaporative cooling to be broken. It is demonstrated that a compact integrated heat and moisture exchange process can cool the product fluid below the ambient wet-bulb temperature, even down to the dew-point temperature of the incoming air, with substantially lower water and energy consumption requirements.
Wu, Zhen-Yu; Tseng, Yi-Ju; Chung, Yufang; Chen, Yee-Chun; Lai, Feipei
2012-08-01
With the rapid development of the Internet, digitization and electronic processing are required in various applications of daily life. For hospital-acquired infection control, a Web-based Hospital-acquired Infection Surveillance System was implemented. Clinical data from different hospitals and systems were collected and analyzed. The hospital-acquired infection screening rules in this system utilized this information to detect different patterns of defined hospital-acquired infection. Moreover, these data were integrated into a single-entry-point user interface to assist physicians and healthcare providers in making decisions. Based on Service-Oriented Architecture, web-service techniques, which are suitable for integrating heterogeneous platforms, protocols, and applications, were used. In summary, this system simplifies the workflow of hospital infection control and improves healthcare quality. However, attackers could intercept the data transmission or gain access to the user interface. To block illegal access and to prevent information from being stolen during transmission over the insecure Internet, a password-based user authentication scheme is proposed to ensure information integrity.
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
The development of 3D boundary-element modeling of dynamic, partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step is applied for calculating the quadrature coefficients, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic console was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by the modified scheme shows that computational efficiency is better when the combined formulas are used.
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
NASA Astrophysics Data System (ADS)
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty severely limits the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of the different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
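The splitting idea can be sketched in a reduced setting (not SPH itself; the pair list, drag model, and parameters are illustrative assumptions): smooth forces advance explicitly, while a stiff pairwise drag standing in for lubrication is updated with a backward-Euler solve, iterated by Gauss-Seidel sweeps over the interacting pairs until convergence.

```python
import numpy as np

def implicit_drag_solve(v, pairs, gamma, dt, m=1.0, tol=1e-12, max_iter=500):
    """Backward-Euler update for stiff pairwise drag F_i = -gamma*sum_j (v_i - v_j):
    solve  v_new[i] = v[i] - (dt*gamma/m) * sum_j (v_new[i] - v_new[j])
    by Gauss-Seidel sweeps over particles until the update stagnates."""
    a = dt * gamma / m
    nbrs = {}
    for i, j in pairs:
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
    v_new = v.copy()
    for _ in range(max_iter):
        delta = 0.0
        for i in range(len(v)):
            ns = nbrs.get(i, [])
            vi = (v[i] + a * sum(v_new[j] for j in ns)) / (1.0 + a * len(ns))
            delta = max(delta, abs(vi - v_new[i]))
            v_new[i] = vi
        if delta < tol:
            break
    return v_new

def split_step(x, v, f_smooth, pairs, gamma, dt, m=1.0):
    """One splitting step: explicit kick from the smooth (hydrodynamic)
    forces, implicit solve for the stiff short-range drag, position drift."""
    v = v + dt * f_smooth(x) / m
    v = implicit_drag_solve(v, pairs, gamma, dt, m)
    x = x + dt * v
    return x, v
```

Because the stiff pairwise forces are handled implicitly, the step remains stable for arbitrarily large gamma*dt, which is the point of the splitting; an explicit update would require dt to shrink with 1/gamma.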
An exponential time-integrator scheme for steady and unsteady inviscid flows
NASA Astrophysics Data System (ADS)
Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili
2018-07-01
An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) scheme and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a much smaller temporal error than the BDF2 scheme while maintaining the expected speed-up. Moreover, the PCEXP scheme achieves computational efficiency comparable to that of the implicit schemes for steady flows.
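The underlying exponential-integrator idea (not the paper's PCEXP scheme itself, and first order rather than second for brevity) can be shown on the scalar model equation u' = λu + N(u): the stiff linear part is integrated exactly by the exponential, so stability does not restrict the step size.

```python
import math

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, with the removable singularity at z = 0 handled."""
    return math.expm1(z) / z if abs(z) > 1e-8 else 1.0 + 0.5 * z

def exp_euler(u0, lam, N, h, steps):
    """Exponential Euler for u' = lam*u + N(u):
    u_{n+1} = e^{lam*h} u_n + h * phi1(lam*h) * N(u_n).
    The linear part is treated exactly, so arbitrarily stiff lam is stable."""
    E, P = math.exp(lam * h), h * phi1(lam * h)
    u = u0
    for _ in range(steps):
        u = E * u + P * N(u)
    return u
```

For constant forcing N ≡ c the scheme reproduces the exact solution at the grid points; by contrast, an explicit Euler step with λh = -5 (amplification factor |1 + λh| = 4) would diverge.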
Islam, S K Hafizul; Khan, Muhammad Khurram; Li, Xiong
2015-01-01
Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of healthcare-system applications. Recently, Wen designed an improved user authentication system over the Lee et al. scheme for an integrated electronic patient record (EPR) information system, which is analyzed in this study. We have found that Wen's scheme still has the following deficiencies: (1) the correctness of the identity and password is not verified during the login and password change phases; (2) it is vulnerable to impersonation and privileged-insider attacks; (3) it is designed without revocation of lost/stolen smart cards; (4) the explicit key confirmation and no-key-control properties are absent; and (5) a user cannot update his/her password without the help of the server and a secure channel. We therefore propose an enhanced two-factor user authentication system based on the intractability of the quadratic residue problem (QRP) in the multiplicative group. Our scheme offers more security features and functionality than other schemes found in the literature.
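The quadratic-residue trapdoor at the heart of such schemes can be sketched as follows (an illustrative toy, not the authors' protocol, and the primes are far too small for real use): squaring modulo n = pq is easy for anyone, while extracting square roots without p and q is as hard as factoring n.

```python
# Toy illustration of the quadratic-residue trapdoor (not the authors'
# protocol): the server keeps p, q secret and publishes n = p*q.
p, q = 10007, 10039           # toy primes, both congruent to 3 (mod 4)
n = p * q

def commit(x):
    """Publicly computable quadratic residue of a secret x."""
    return pow(x, 2, n)

def sqrt_mod_prime(c, r):
    """Square root of a residue c modulo a prime r with r = 3 (mod 4)."""
    return pow(c, (r + 1) // 4, r)

def recover_root(c):
    """Only the holder of p and q can invert the squaring map, by
    CRT-combining per-prime roots (pow(p, -1, q) needs Python >= 3.8)."""
    rp, rq = sqrt_mod_prime(c, p), sqrt_mod_prime(c, q)
    return (rp + p * (((rq - rp) * pow(p, -1, q)) % q)) % n

secret = 123456
challenge = commit(secret)
root = recover_root(challenge)
```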
NASA Astrophysics Data System (ADS)
Nielsen, M.; Elezzabi, A. Y.
2013-03-01
To compete with CMOS electronics for next-generation data processing, signal routing, and computing, nanoplasmonic circuits will require an analogue of electrical vias to enable vertical connections between device layers. Vertically stacked nanoplasmonic nanoring resonators formed of Ag/Si/Ag gap plasmon waveguides were studied as a novel 3-D coupling scheme that could be monolithically integrated on a silicon platform. The vertically coupled ring resonators were evanescently coupled to 100 nm × 100 nm Ag/Si/Ag input and output waveguides, and the whole device was submerged in silicon dioxide. 3-D finite-difference time-domain simulations were used to examine the transmission spectra of the coupling device for varying device sizes and orientations. Because the signal coupling occurs over multiple trips around the resonator, coupling efficiencies as high as 39% at telecommunication wavelengths between adjacent layers were obtained with planar device areas of only 1.00 μm². As the vertical signal transfer was based on coupled ring resonators, it was inherently wavelength dependent. Changing the device size by varying the radii of the nanorings allowed tailoring of the coupled frequency spectra. The plasmonic resonator-based coupling scheme was found to have quality (Q) factors upwards of 30 at telecommunication wavelengths. By allowing different device layers to operate on different wavelengths, this coupling scheme could lead to parallel processing in stacked independent device layers.
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block-diagram-based schemes are proposed in this paper for modeling and simulating fractional-order control systems. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed through examples; the computational results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
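What such a fractional operator block must compute can be sketched with the Grünwald-Letnikov approximation of the fractional integral (an assumed discretization for illustration; the paper's Simulink block may evaluate the operator differently):

```python
import math

def gl_fractional_integral(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the fractional integral of order
    alpha > 0 on a uniform grid with step h:
        I^alpha f(t_i) ~= h^alpha * sum_{k=0}^{i} w_k * f(t_{i-k}),
    where the weights are the series coefficients of (1 - z)^(-alpha),
    generated by w_0 = 1, w_k = w_{k-1} * (1 - (1 - alpha)/k)."""
    n = len(f_vals)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (1.0 - alpha) / k))
    out = []
    for i in range(n):
        acc = sum(w[k] * f_vals[i - k] for k in range(i + 1))
        out.append(h ** alpha * acc)
    return out
```

For f ≡ 1 the exact fractional integral is t^α/Γ(α+1), which gives a convenient accuracy check for the discretization.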
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more and more multifaceted, integrated, and complex, traditional single-objective optimization approaches to optimal design are becoming less and less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a multiobjective hyper-heuristic optimization scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the certainty of reaching a global optimum solution. Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
On-Chip Optical Nonreciprocity Using an Active Microcavity
Jiang, Xiaoshun; Yang, Chao; Wu, Hongya; Hua, Shiyue; Chang, Long; Ding, Yang; Hua, Qian; Xiao, Min
2016-01-01
Optically nonreciprocal devices provide critical functionalities such as light isolation and circulation in integrated photonic circuits for optical communications and information processing, but have been difficult to achieve. By exploiting gain-saturation nonlinearity, we demonstrate on-chip optical nonreciprocity with excellent isolation performance at telecommunication wavelengths using only one toroid microcavity. Compatible with the current complementary metal-oxide-semiconductor process, our compact and simple scheme works for a very wide range of input power levels, from ~10 microwatts down to ~10 nanowatts, and exhibits remarkable one-way light transport with sufficiently low insertion loss. These superior features make our device a promising critical building block for future integrated nanophotonic networks. PMID:27958356
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, from which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability, and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results show that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM, and SVM schemes are 2.49%, 5.83%, and 6.49%, respectively.
Phase sensitive amplification in integrated waveguides (Conference Presentation)
NASA Astrophysics Data System (ADS)
Schroeder, Jochen B.; Zhang, Youngbin; Husko, Chad A.; LeFrancois, Simon; Eggleton, Benjamin J.
2017-02-01
Phase sensitive amplification (PSA) is an attractive technology for integrated all-optical signal processing, due to its potential for noiseless amplification, phase regeneration, and generation of squeezed light. In this talk I will review our results on implementing four-wave-mixing-based PSA inside integrated photonic devices. In particular I will discuss PSA in chalcogenide ridge waveguides and silicon slow-light photonic crystals. We achieve PSA in both pump- and signal-degenerate schemes with maximum extinction ratios of 11 dB (silicon) and 18 dB (chalcogenide). I will further discuss the influence of two-photon absorption and free-carrier effects on the performance of silicon-based PSAs.
Structural dynamics payload loads estimates: User guide
NASA Technical Reports Server (NTRS)
Shanahan, T. G.; Engels, R. C.
1982-01-01
This User Guide provides an overview of an integration scheme for determining the response of a launch vehicle with multiple payloads. Chapter II discusses the software package associated with the integration scheme, together with several sample problems. A short-cut version of the integration technique is also discussed. The Guide concludes with a list of references and listings of the subroutines.
ERIC Educational Resources Information Center
Peterson, Matthew O.
2016-01-01
Science education researchers have turned their attention to the use of images in textbooks, both because pages are heavily illustrated and because visual literacy is an important aptitude for science students. Text-image integration in the textbook is described here as composition schemes in increasing degrees of integration: prose primary (PP),…
Fractional order implementation of Integral Resonant Control - A nanopositioning application.
San-Millan, Andres; Feliu-Batlle, Vicente; Aphale, Sumeet S
2017-10-04
By exploiting the co-located sensor-actuator arrangement in typical flexure-based, piezoelectric-stack-actuated nanopositioners, the pole-zero interlacing exhibited by their axial frequency response can be transformed into a zero-pole interlacing by adding a constant feed-through term. Integral Resonant Control (IRC) utilizes this unique property to add substantial damping to the dominant resonant mode through a simple integrator implemented in closed loop. IRC used in conjunction with an integral tracking scheme effectively reduces positioning errors introduced by modelling inaccuracies or parameter uncertainties. Over the past few years, successful application of the IRC control technique to nanopositioning systems has demonstrated performance robustness, easy tunability, and versatility. The main drawback has been the relatively small achievable positioning bandwidth. This paper proposes a fractional-order implementation of the classical integral tracking scheme employed in tandem with the IRC scheme to deliver damping and tracking. The fractional-order integrator introduces an additional design parameter which allows desired pole placement, resulting in superior closed-loop bandwidth. Simulations and experimental results are presented to validate the theory. A 250% improvement in the achievable positioning bandwidth is observed with the proposed fractional-order scheme. Copyright © 2017. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Chan, William M.
1992-01-01
The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.
Genetic and economic evaluation of Japanese Black (Wagyu) cattle breeding schemes.
Kahi, A K; Hirooka, H
2005-09-01
Deterministic simulation was used to evaluate 10 breeding schemes for genetic gain and profitability in the context of maximizing returns from investment in Japanese Black cattle breeding. A breeding objective that integrated the cow-calf and feedlot segments was considered. Ten breeding schemes that differed in the records available for use as selection criteria were defined. The schemes ranged from one that used carcass traits currently available to Japanese Black cattle breeders (Scheme 1) to one that also included linear measurements and male and female reproduction traits (Scheme 10). The latter scheme represented the highest level of performance recording. In all breeding schemes, sires were chosen from the proportion selected during the first selection stage (performance testing), modeling a two-stage selection process. The effect on genetic gain and profitability of varying test capacity and number of progeny per sire, and of ultrasound scanning of live animals, was examined for all breeding schemes. Breeding schemes that selected young bulls during performance testing based on additional individual traits and information on carcass traits from their relatives generated additional genetic gain and profitability. Increasing test capacity resulted in an increase in genetic gain in all schemes. Profitability was optimal in Schemes 2 (a scheme similar to Scheme 1, but in which selection of young bulls was also based on information on carcass traits from their relatives) through 10 when 900 to 1,000 places were available for performance testing. Similarly, as the number of progeny used in the selection of sires increased, genetic gain first increased sharply and then gradually in all schemes. Profit was optimal across all breeding schemes when sires were selected based on information from 150 to 200 progeny. Additional genetic gain and profitability were generated in each breeding scheme with ultrasound scanning of live animals for carcass traits.
Ultrasound scanning of live animals was more important than the addition of any other traits in the selection criteria. These results may be used to provide guidance to Japanese Black cattle breeders.
Crystallographic data processing for free-electron laser sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Thomas A., E-mail: taw@physics.org; Barty, Anton; Stellato, Francesco
2013-07-01
A processing pipeline for diffraction data acquired using the ‘serial crystallography’ methodology with a free-electron laser source is described with reference to the crystallographic analysis suite CrystFEL and the pre-processing program Cheetah. A detailed analysis of the nature and impact of indexing ambiguities is presented. Simulations of the Monte Carlo integration scheme, which accounts for the partially recorded nature of the diffraction intensities, are presented and show that the integration of partial reflections could be made to converge more quickly if the bandwidth of the X-rays were increased by a small amount or if a slight convergence angle were introduced into the incident beam.
Control of coherent information via on-chip photonic-phononic emitter-receivers.
Shin, Heedeuk; Cox, Jonathan A; Jarecki, Robert; Starbuck, Andrew; Wang, Zheng; Rakich, Peter T
2015-03-05
Rapid progress in integrated photonics has fostered numerous chip-scale sensing, computing and signal processing technologies. However, many crucial filtering and signal delay operations are difficult to perform with all-optical devices. Unlike photons propagating at luminal speeds, GHz acoustic phonons moving at slower velocities allow information to be stored, filtered and delayed over comparatively smaller length-scales with remarkable fidelity. Hence, controllable and efficient coupling between coherent photons and phonons enables new signal processing technologies that greatly enhance the performance and potential impact of integrated photonics. Here we demonstrate a mechanism for coherent information processing based on travelling-wave photon-phonon transduction, which achieves a phonon emit-and-receive process between distinct nanophotonic waveguides. Using this device physics, which supports GHz frequencies, we create wavelength-insensitive radio-frequency photonic filters with high frequency selectivity, narrow linewidth and high power-handling in silicon. More generally, this emit-and-receive concept provides the impetus for new signal processing schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Fang, E-mail: fliu@lsec.cc.ac.cn; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
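The piecewise-polynomial principal-value idea described above can be sketched in a few lines. This is an illustrative piecewise-linear version (the function name and grid choice are mine, not the authors'): the numerator is replaced by its linear interpolant on each subinterval, and each subinterval integral, including the singular one, is evaluated analytically as a principal value. It assumes the pole does not coincide with a grid node.

```python
import numpy as np

def pv_integral(f, grid, x0):
    """Principal-value integral of f(x)/(x - x0) over [grid[0], grid[-1]].

    The numerator f is replaced by its piecewise-linear interpolant on the
    grid, and each subinterval is integrated analytically, so the singular
    subinterval is handled as an exact principal value.  The pole x0 must
    not coincide with a grid node.
    """
    fx = f(grid)
    total = 0.0
    for xl, xr, fl, fr in zip(grid[:-1], grid[1:], fx[:-1], fx[1:]):
        m = (fr - fl) / (xr - xl)      # slope of the local interpolant
        p_x0 = fl + m * (x0 - xl)      # interpolant evaluated at the pole
        # analytic integral of (p_x0 + m*(x - x0)) / (x - x0) over [xl, xr]
        total += m * (xr - xl) + p_x0 * np.log(abs((xr - x0) / (xl - x0)))
    return total
```

For a constant numerator the cell contributions telescope to the exact value ln|(b − x0)/(a − x0)|, and for a linear numerator the rule is exact, which makes the scheme easy to verify against closed forms.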
All-optical liquid crystal spatial light modulators
NASA Astrophysics Data System (ADS)
Tabiryan, Nelson; Grozhik, Vladimir; Khoo, Iam Choon; Nersisyan, Sarik R.; Serak, Svetlana
2003-12-01
Nonlinear optical processes in liquid crystals (LC) can be used for the construction of all-optical spatial light modulators (SLM), in which the photosensitivity and phase-modulating functions are integrated into a single layer of an LC material. Such spatial light integrated modulators (SLIMs) cost only a fraction of conventional LC-SLMs and can be used with high-power laser radiation, owing to the high transparency of LC materials and the absence of light-absorbing electrodes on the substrates of the LC cell constituting the SLIM. The recent development of LC materials whose photosensitivity is comparable to that of semiconductors has led to the use of SLIMs in schemes of optical anti-jamming, sensor protection, and image processing. All-optical processes add remarkable versatility to the operation of SLIMs, harnessing the wealth inherent in light-matter interaction phenomena.
Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen
2009-01-01
Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Positioning System (GPS) and an Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and a KF has been proposed in previous studies to overcome the limitations of the KF and to improve the performance of the INS/GPS integrated system. However, the accuracy requirements of general mobile mapping applications cannot easily be achieved, even with the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that combines an ANN with the conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system in post-mission mode. By combining the MEMS INS/GPS integrated system and the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated.
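The forward-filter-plus-RTS-smoother structure at the heart of the scheme above can be sketched on a toy problem. This is a minimal illustration with a generic 1-D constant-velocity model and assumed noise parameters, not the MEMS INS/GPS system of the paper, and the ANN component is omitted entirely:

```python
import numpy as np

def rts_smooth(zs, dt=1.0, q=1e-3, r=1.0):
    """Forward Kalman filter + Rauch-Tung-Striebel backward pass for a
    1-D constant-velocity model with noisy position measurements.
    Returns (filtered states, smoothed states), each of shape (T, 2)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # position-only measurement
    Q = q * np.eye(2)
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 10.0    # diffuse initial state
    xf, Pf, xp, Pp = [], [], [], []
    for z in zs:
        # predict
        x_pr, P_pr = F @ x, F @ P @ F.T + Q
        # update
        S = H @ P_pr @ H.T + R
        K = P_pr @ H.T @ np.linalg.inv(S)
        x = x_pr + K @ (np.atleast_1d(z) - H @ x_pr)
        P = (np.eye(2) - K @ H) @ P_pr
        xf.append(x); Pf.append(P); xp.append(x_pr); Pp.append(P_pr)
    # backward (RTS) pass: refine each estimate using future data
    xs = [xf[-1]]
    for k in range(len(zs) - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs.insert(0, xf[k] + C @ (xs[0] - xp[k + 1]))
    return np.array(xf), np.array(xs)
```

The smoother coincides with the filter at the final step and, as a post-mission (whole-trajectory) estimator, typically reduces the overall error relative to the causal filter — the property the paper exploits.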
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that: (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in dynamical variable estimation in dynamically active regions; (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and to the instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time of the IAU 50/100 schemes, especially the free model integration, on the one hand allows a better re-establishment of the equilibrium model state and, on the other hand, smooths the strong gradients in dynamically active regions.
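The two probabilistic diagnostics named above have compact sample estimators. The sketch below (function names are mine) uses the standard ensemble form CRPS = E|X − y| − ½ E|X − X′| and the RCRV normalization of the innovation by the total spread; it is a generic illustration, not the paper's exact implementation:

```python
import numpy as np

def crps(ens, y):
    """Sample CRPS of an ensemble forecast `ens` for a scalar observation y:
    CRPS = E|X - y| - 0.5 * E|X - X'|, averaged over all member pairs."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - y))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def rcrv(ens, y, obs_err=0.0):
    """Reduced centred random variable: innovation normalized by total
    spread.  Its mean over many cases measures bias; its std, dispersion."""
    ens = np.asarray(ens, dtype=float)
    return (y - ens.mean()) / np.sqrt(ens.var(ddof=1) + obs_err ** 2)
```

A perfectly reliable, well-dispersed system yields RCRV values with zero mean and unit standard deviation over many verification cases, while lower CRPS indicates a sharper and better-calibrated ensemble.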
2007-04-01
input signal with the conjugate of a delayed copy of itself, i.e., z_n z*_{n-k} = A^2 exp(j k phi), has a phase argument independent of n. As a result, the ... Signal Processing (Elsevier), 2005. S.M. Kay, "A Fast and Accurate Single Frequency Estimator," IEEE Trans. Acoust., Speech, Signal Process., 37(12), 1987
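The delay-and-conjugate product in the fragment above removes the index-dependent phase, so frequency can be read off from the angle of the accumulated lag products. A minimal estimator of this family (a simplified, unweighted relative of Kay's estimator; the function name is mine) is:

```python
import numpy as np

def freq_estimate(z):
    """Estimate the normalized frequency (cycles/sample) of a complex tone
    z_n = A*exp(j*(2*pi*f0*n + phase)) from the angle of the summed
    lag-1 products z_n * conj(z_{n-1}), whose phase is independent of n."""
    return np.angle(np.sum(z[1:] * np.conj(z[:-1]))) / (2.0 * np.pi)
```

For a noiseless tone the estimate is exact (up to rounding) as long as |f0| < 0.5 cycles/sample, the usual aliasing limit; Kay's original estimator additionally applies an optimal window to the phase differences.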
Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks.
Li, Xing; Chen, Dexin; Li, Chunyan; Wang, Liangmin
2015-07-03
With the rapid development of wireless communication technology, sensor technology, and information acquisition and processing technology, sensor networks will ultimately have a deep influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation represents an important method to remove redundancy as well as unnecessary data transmission and hence cut down the energy used in communication. As sensor nodes are deployed in hostile environments, the security of sensitive information, such as its confidentiality and integrity, should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs, which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), this scheme can also verify data integrity during data aggregation and forwarding processes so that false data can be detected as early as possible. Although FHE increases the computation overhead due to its large public key size, simulation results show that it is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme.
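The core property used above — an aggregator can combine ciphertexts without ever decrypting them — can be illustrated with a toy *additively* homomorphic scheme. The sketch below is a textbook Paillier cryptosystem with deliberately tiny, completely insecure parameters, chosen only so the homomorphic-sum property is easy to verify; it is not the FHE construction of the paper:

```python
import math
import random

def keygen(p=17, q=19):
    """Toy Paillier key pair.  The primes are far too small to be secure;
    they only make the homomorphic-aggregation property easy to check."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)           # Carmichael lambda(n)
    n2 = n * n
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(u) = (u - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    """Encrypt a reading m (0 <= m < n) with fresh randomness r."""
    n, g = pk
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n
```

An aggregating node simply multiplies the ciphertexts it receives modulo n²; the sink decrypts the product and obtains the sum of all sensor readings (modulo n), never seeing any individual reading.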
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
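The role of the implicitness parameter can be illustrated on the simplest possible test problem. The sketch below (my own illustration, not the paper's δf weight-equation analysis) computes the amplification factor of a theta scheme applied to the scalar oscillator dw/dt = −iωw; θ = 0 is explicit and unstable, θ = 1/2 is neutrally stable, and θ = 1 is damping:

```python
def amp_factor(omega_dt, theta):
    """Amplification factor g of the theta time scheme applied to the test
    oscillator dw/dt = -i*omega*w:
        w_{n+1} = w_n - i*omega*dt*(theta*w_{n+1} + (1-theta)*w_n),
    so g = (1 - i*omega*dt*(1-theta)) / (1 + i*omega*dt*theta).
    Stability requires |g| <= 1, which holds for theta >= 1/2."""
    x = omega_dt
    return (1 - 1j * x * (1 - theta)) / (1 + 1j * x * theta)
```

A full δf analysis couples such per-mode factors to the discrete weight equations and the spatial grid, which is where the CFL-like condition reported above emerges.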
Enabling an Integrated Rate-temporal Learning Scheme on Memristor
NASA Astrophysics Data System (ADS)
He, Wei; Huang, Kejie; Ning, Ning; Ramanathan, Kiruthika; Li, Guoqi; Jiang, Yu; Sze, Jiayin; Shi, Luping; Zhao, Rong; Pei, Jing
2014-04-01
Learning schemes are the key to the utilization of spike-based computation and the emulation of neural/synaptic behaviors toward the realization of cognition. Biological observations reveal an integrated spike-time- and spike-rate-dependent plasticity as a function of presynaptic firing frequency. However, this integrated rate-temporal learning scheme has not previously been realized on any nano device. In this paper, such a scheme is successfully demonstrated on a memristor. Great robustness against spiking-rate fluctuation is achieved by waveform engineering, with the aid of the good analog properties exhibited by the iron-oxide-based memristor. Spike-timing-dependent plasticity (STDP) occurs at moderate presynaptic firing frequencies, and spike-rate-dependent plasticity (SRDP) dominates the other regions. This demonstration provides a novel approach to neural coding implementation, which facilitates the development of bio-inspired computing systems.
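For readers unfamiliar with STDP, the classical exponential pair-based window can be sketched as below. This is the standard textbook rule with assumed parameters, not the memristor's measured kernel or the paper's integrated rate-temporal scheme:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classical exponential STDP window: synaptic weight change as a
    function of the post-minus-pre spike-time difference dt (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses; the magnitude decays with |dt| on a timescale tau."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau),
                    -a_minus * np.exp(dt_ms / tau))
```

An integrated rate-temporal scheme, as demonstrated in the paper, additionally makes the effective window parameters depend on the presynaptic firing frequency, so that SRDP-like behavior emerges outside the moderate-frequency STDP regime.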
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2015-04-01
The proposed method is illustrated using hydrothermodynamics and atmospheric chemistry models as an example [1,2]. Extending the existing methods for constructing numerical schemes possessing the property of total approximation for the operators of multiscale process models, we have developed a new variational technique which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes one lower than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within the finite volume, analytical solutions of the one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of the adjoint equations serve as integrating factors. The result is a family of hybrid discrete-analytical schemes. They possess stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables.
They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction they have a tridiagonal structure and are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of the derivatives at the boundaries of the finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it does not require artificial monotonization and limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable to problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, (2014) V. 67, Issue 12, P. 2240-2256. 2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220.
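The "sweep method" mentioned above for the tridiagonal systems is the Thomas algorithm: a forward elimination pass followed by back substitution, in O(n) operations. A minimal sketch (the function name and array convention are mine):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by the sweep (Thomas) algorithm.
    a: sub-diagonal (length n, a[0] unused), b: main diagonal (length n),
    c: super-diagonal (length n, c[-1] unused), d: right-hand side.
    Assumes the system is diagonally dominant (no pivoting is done)."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Diagonal dominance, which the monotone convection-diffusion discretizations above provide, guarantees the sweep is stable without pivoting.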
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator windup protection (IWP) for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small-perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method examined is the multi-variable version of the single-input, single-output, high-gain Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector's direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed, and the advantages and disadvantages of both IWP methods are presented.
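The contrast between the two limiting philosophies can be sketched directly. The code below (function names are mine, and it shows only the output-limiting step, not the anti-windup feedback into the controller states) compares per-channel clipping, which can rotate the command vector, against a single scalar scale factor that preserves its direction; the scaling form assumes the actuator limits bracket zero:

```python
import numpy as np

def clip_per_channel(u, lo, hi):
    """Conventional approach: each actuator saturates independently,
    so the direction of the command vector may change."""
    return np.clip(u, lo, hi)

def scale_to_limits(u, lo, hi):
    """Direction-preserving approach: shrink the whole command vector by
    one scalar alpha in (0, 1] so that every component is within limits.
    Assumes lo < 0 < hi for each channel, so scaling toward zero is safe."""
    alpha = 1.0
    for ui, l, h in zip(u, lo, hi):
        if ui > h:
            alpha = min(alpha, h / ui)
        elif ui < l:
            alpha = min(alpha, l / ui)
    return alpha * np.asarray(u, dtype=float)
```

For u = [2, 1] with unit limits, clipping gives [1, 1] (direction rotated toward the diagonal), while scaling gives [1, 0.5] (same direction, reduced magnitude) — the trade-off the paper examines in closed loop.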
Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens
2014-07-07
The centrifugal "lab-on-a-disc" concept has proven to have great potential for the process integration of bioanalytical assays, in particular where ease of use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling in the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the very first time, a liquid handling sequence can be controlled in response to the completion of a preceding liquid transfer event, i.e. completely independently of external stimulus or of changes in the speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps, including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows.
Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from mammalian cell homogenate.
A multi-hop teleportation protocol of arbitrary four-qubit states through intermediate nodes
NASA Astrophysics Data System (ADS)
Choudhury, Binayak S.; Samanta, Soumen
Teleportation processes over long distances are affected by the almost inevitable existence of noise, which interferes with the entangled quantum channels. In view of this, intermediate nodes are introduced into the scheme. These nodes are connected in series through quantum entanglement. In this paper, we present a protocol for transferring an entangled four-particle cluster-type state in an integrated manner through the intermediate nodes. Its efficiency and advantage over the corresponding part-by-part teleportation process are discussed.
NASA Technical Reports Server (NTRS)
Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)
1986-01-01
The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.
Microelectromechanical reprogrammable logic device.
Hafiz, M A A; Kosuru, L; Younis, M I
2016-03-29
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As miniaturization at the component level to enhance computational power rapidly approaches physical limits, alternative computing methods are vigorously pursued. One of the desired aspects of future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator, operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using a complementary metal oxide semiconductor (CMOS)-compatible mass-fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme.
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.
Han, Changcai; Yang, Jinsheng
2017-10-30
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in a wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related to the cooperation models. In the proposed schemes, each source node has quite low complexity, attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes achieve significant bit error rate performance and that the two-round cooperation exhibits better performance than the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes.
Methodology for the assessment of oxygen as an energy carrier
NASA Astrophysics Data System (ADS)
Yang, Ming Wei
Due to the energy intensity of the oxygen generating process, the electric power grid would benefit if the oxygen generating process consumed electric power only during low-demand periods. Thus, the question addressed in this study is whether oxygen production and/or usage can be modified to achieve energy storage and/or transmission objectives at lower cost. The specific benefit to the grid would be a leveling of the demand profile over time, which would require less installed capacity. In order to track the availability of electricity, a compressed air storage unit is installed between the cryogenic distillation section and the main air compressor of the air separation unit. A profit-maximizing scheme for sizing the storage inventory and related equipment is developed. The optimum scheme is capable of market responsiveness. Profits of steel-making, oxy-combustion, and IGCC plants with storage facilities can be higher than those of plants without storage facilities, especially in a high-price market. The price-tracking feature of air storage integration will certainly increase the profit margins of the plants, and the integration may push oxy-combustion and integrated gasification combined cycle processes into economic viability. Since oxygen is used at consumer sites, it may be generated at remote locations and transported to where it is needed. An analysis of the energy losses and costs of oxygen transportation is conducted for various applications. The energy consumption of large-capacity, long-distance GOX and LOX pipelines is lower than that of small-capacity pipelines. However, the transportation losses and costs of GOX and LOX pipelines are still higher than those of electricity transmission.
Lahariya, Chandrakant; Mishra, Ashok; Nandan, Deoki; Gautam, Praveen; Gupta, Sanjay
2011-01-01
Conditional Cash Transfer (CCT) schemes have shown largely favorable changes in health seeking behavior. This evaluation study assesses the process and performance of an Additional Cash Incentive (ACI) scheme within an ongoing CCT scheme in India, and documents lessons. A controlled before-and-during design study was conducted in the Madhya Pradesh state of India, from August 2007 to March 2008, with an increase in institutional deliveries as the primary outcome. In-depth interviews, focus group discussions and household surveys were used for data collection. Lack of awareness of the ACI scheme among the general population and beneficiaries, a cumbersome cash disbursement procedure, intricate eligibility criteria, extensive paperwork, and insufficient focus on community involvement were the major implementation challenges. There were anecdotal reports of political interference and possible scope for corruption. At the end of the implementation period, the overall rate of institutional deliveries had increased in both target and control populations; however, the differences were not statistically significant. No cause-and-effect association could be proven by this study. Poor planning and coordination, and lack of public awareness about the scheme, resulted in low utilization. Thus, proper IEC and training, a detailed implementation plan, orientation training for implementers, sufficient budgetary allocation, and community participation should be integral parts of the successful implementation of any such scheme. The lessons learned from this evaluation study may be useful in any developing-country setting and may be utilized for the planning and implementation of any ACI scheme in the future.
Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks
Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu
2017-01-01
Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments of WBSNs, the limited size, capabilities and data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of WBSN devices, data security is one of the prevailing issues that affects the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users’ medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing the security of data managed in remote servers has necessarily become an integral requirement of WBSN applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users in storing and sharing data, thus exhibiting the potential for WBSN deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models.
Organization of functional interaction of corporate information systems
NASA Astrophysics Data System (ADS)
Safronov, V. V.; Barabanov, V. F.; Podvalniy, S. L.; Nuzhnyy, A. M.
2018-03-01
In this article, methods for the integration of specialized software systems are analyzed and a concept of seamless integration of production solutions is offered. Structural and functional schemes of the specialized software, developed in view of this concept, are shown. The proposed schemes and models are refined for a machine-building enterprise.
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and their comparison with similar results for equations of the second kind, are a novel item. Application of the multigrid (MG) algorithm shows convergence to the level of truncation error of a second-order accurate panel method.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2013-04-01
We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.
Dead pixel replacement in LWIR microgrid polarimeters.
Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P
2007-06-11
LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed, since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
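The modified nearest-neighbor idea described above can be sketched as follows (a generic illustration, not the authors' implementation; the 2x2 microgrid layout, and hence the ±2-pixel offsets to same-polarization neighbors, are assumptions):

```python
import numpy as np

def replace_dead_pixels(img, dead_mask):
    """Modified nearest-neighbor replacement for a 2x2 polarization microgrid:
    same-orientation pixels repeat with period 2 along each axis, so a dead
    pixel is filled with the mean of its valid same-polarization neighbors
    (offsets of +/-2 pixels), never with adjacent pixels of other orientations."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r, c in zip(*np.nonzero(dead_mask)):
        vals = [out[r + dr, c + dc]
                for dr, dc in ((-2, 0), (2, 0), (0, -2), (0, 2))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
                and not dead_mask[r + dr, c + dc]]
        if vals:
            out[r, c] = sum(vals) / len(vals)
    return out
```

The redundancy-based scheme of the paper would instead exploit the constraint among the four polarization measurements of a superpixel, which is not reproduced here.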
Interference Mitigation Schemes for Wireless Body Area Sensor Networks: A Comparative Survey
Le, Thien T.T.; Moh, Sangman
2015-01-01
A wireless body area sensor network (WBASN) consists of a coordinator and multiple sensors to monitor the biological signals and functions of the human body. This exciting area has motivated new research and standardization processes, especially in the area of WBASN performance and reliability. In scenarios of mobility or overlapped WBASNs, system performance will be significantly degraded because of unstable signal integrity. Hence, it is necessary to consider interference mitigation in the design. This survey presents a comparative review of interference mitigation schemes in WBASNs. Further, we show that current solutions are limited in reaching satisfactory performance, and thus, more advanced solutions should be developed in the future. PMID:26110407
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
A progress report on estuary modeling by the finite-element method
Gray, William G.
1978-01-01
Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)
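The appeal of the leapfrog scheme noted above, namely that every step is an explicit update requiring no matrix solution, can be illustrated on a toy problem (a generic sketch for the oscillator x'' = -x, not the surface-water formulation of the report):

```python
def leapfrog(x0, v0, dt, steps):
    """Leapfrog time stepping for the oscillator x'' = -x, with velocity
    staggered half a step from position; every step is explicit, so no
    matrix needs to be solved."""
    x = x0
    v = v0 - 0.5 * dt * x0      # shift velocity back half a step
    for _ in range(steps):
        x += dt * v             # drift: position update with mid-step velocity
        v -= dt * x             # kick: velocity update with new position
    return x
```

The scheme is second-order accurate and, being time-centered, introduces no amplitude damping, which is one reason leapfrog time stepping is attractive for wave propagation problems.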
Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.
He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk
2014-10-01
The radio frequency identification (RFID) technology has been widely adopted and is being deployed as a dominant identification technology in the health care domain, for applications such as medical information authentication, patient tracking, and blood transfusion medicine. As security and privacy requirements for RFID-based authentication become more and more stringent, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet them. However, many recently published ECC based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC based RFID authentication scheme integrated with an ID verifier transfer protocol that overcomes the weaknesses of the existing schemes. A comprehensive security analysis has been conducted to show the strong security properties provided by the proposed authentication scheme. Moreover, the performance of the proposed authentication scheme is analyzed in terms of computational cost, communication cost, and storage requirement.
Webcams for Bird Detection and Monitoring: A Demonstration Study
Verstraeten, Willem W.; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol
2010-01-01
Better insights into bird migration can be a tool for assessing the spread of avian borne infections or ecological/climatologic issues reflected in deviating migration patterns. This paper evaluates whether low budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision and lens distortion were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed and future research should enforce the potential of the processing scheme by exploring and testing alternatives of each individual module or processing step. PMID:22319308
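Motion detection by background subtraction, the first module of the processing scheme above, can be sketched as follows (a minimal illustration with an assumed running-average background model and threshold, not the authors' exact algorithm):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    # exponential running average: slowly absorb scene changes into the model
    return (1.0 - alpha) * background + alpha * frame

def motion_mask(frame, background, threshold=25.0):
    # flag pixels that differ from the background model by more than a threshold
    return np.abs(frame.astype(float) - background) > threshold
```

In a full pipeline such a mask would be cleaned up morphologically and the surviving blobs tracked across frames; the stereo-vision and lens-distortion modules of the paper are not reproduced here.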
A New Turbo-shaft Engine Control Law during Variable Rotor Speed Transient Process
NASA Astrophysics Data System (ADS)
Hua, Wei; Miao, Lizhen; Zhang, Haibo; Huang, Jinquan
2015-12-01
A closed-loop control law employing compressor guide vanes is first investigated to resolve the unacceptable fuel-flow transients that arise under fuel-only control of turbo-shaft engines, especially for rotorcraft during variable rotor speed operation. Based on an Augmented Linear Quadratic Regulator (ALQR) algorithm, a dual-input, single-output robust control scheme is proposed for a turbo-shaft engine, involving closed-loop adjustment not only of fuel flow but also of the compressor guide vanes. Furthermore, digital simulation cases of variable rotor speed using this new scheme have been implemented, in comparison with fuel-only control, on the basis of an integrated helicopter and engine model. The results show that command tracking of the free turbine rotor speed can be asymptotically realized. Moreover, the fuel-flow transient process is significantly improved, and fuel consumption is cut by more than 2% while keeping the helicopter in unchanged level flight.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
NASA Astrophysics Data System (ADS)
Jang, Munseon; Yun, Kwang-Seok
2017-12-01
In this paper, we present a MEMS pressure sensor integrated with a readout circuit on a single chip for on-chip signal processing. The capacitive pressure sensor is formed on a CMOS chip by using a post-CMOS MEMS process. The proposed device consists of a square sensing capacitor, a reference capacitor and readout circuitry based on a switched-capacitor scheme to detect capacitance change at various environmental pressures. The readout circuit was implemented using a commercial 0.35 μm CMOS process with 2 polysilicon and 4 metal layers. The pressure sensor was then formed by wet etching of the metal 2 layer through via-hole structures. Experimental results show that the MEMS pressure sensor has a sensitivity of 11 mV/100 kPa over the pressure range of 100-400 kPa.
NASA Astrophysics Data System (ADS)
Michalak, D. J.; Bruno, A.; Caudillo, R.; Elsherbini, A. A.; Falcon, J. A.; Nam, Y. S.; Poletto, S.; Roberts, J.; Thomas, N. K.; Yoscovits, Z. R.; Dicarlo, L.; Clarke, J. S.
Experimental quantum computing is rapidly approaching the integration of sufficient numbers of quantum bits for interesting applications, but many challenges still remain. These challenges include: realization of an extensible design for large array scale up, sufficient material process control, and discovery of integration schemes compatible with industrial 300 mm fabrication. We present recent developments in extensible circuits with vertical delivery. Toward the goal of developing a high-volume manufacturing process, we will present recent results on a new Josephson junction process that is compatible with current tooling. We will then present the improvements in NbTiN material uniformity that typical 300 mm fabrication tooling can provide. While initial results on few-qubit systems are encouraging, advanced processing control is expected to deliver the improvements in qubit uniformity, coherence time, and control required for larger systems. Research funded by Intel Corporation.
Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems.
Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing
2016-07-26
This paper presents a sensor fusion method based on the combination of the cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF avoids numerical instability in the system model. In navigation integration, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade owing to modeling errors caused by dynamics uncertainties of the vehicle. To avoid selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented, in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on a degree-of-divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement as compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches.
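The FLAS idea of mapping a degree-of-divergence (DOD) statistic to a process-noise adjustment can be sketched as follows (a minimal illustration; the membership functions and output multipliers are assumptions, not the paper's rule base):

```python
def fuzzy_q_scale(dod):
    """Map a degree-of-divergence (DOD) statistic (assumed >= 0) to a
    multiplicative scale factor for the process noise covariance, using
    three triangular fuzzy sets (small / medium / large divergence) and
    weighted-average defuzzification."""
    mu_small = max(0.0, 1.0 - dod)              # peaks at DOD = 0
    mu_medium = max(0.0, 1.0 - abs(dod - 1.0))  # peaks at DOD = 1
    mu_large = max(0.0, min(1.0, dod - 1.0))    # saturates for DOD >= 2
    weights = (mu_small, mu_medium, mu_large)
    scales = (1.0, 2.0, 3.0)                    # assumed Q multipliers per set
    return sum(w * s for w, s in zip(weights, scales)) / sum(weights)
```

In a filter loop, the returned factor would multiply the process noise covariance Q before the prediction step, inflating it whenever the innovation statistics indicate divergence.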
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay-Rivas, E.
1977-01-01
The considered scheme makes it possible to determine an unstable steady state solution in cases in which, because of lack of symmetry, such a solution cannot be obtained analytically, and other time integration or relaxation schemes, because of instability, fail to converge. The iterative solution of a single complex equation is discussed and a nonlinear system of equations is considered. Described applications of the scheme are related to a steady state solution with shear instability, an unstable nonlinear Ekman boundary layer, and the steady state solution of a baroclinic atmosphere with asymmetric forcing. The scheme makes use of forward and backward time integrations of the original spatial differential operators and of an approximation of the adjoint operators. Only two computations of the time derivative per iteration are required.
The hierarchical expert tuning of PID controllers using tools of soft computing.
Karray, F; Gueaieb, W; Al-Sharhan, S
2002-01-01
We present soft computing-based results pertaining to the hierarchical tuning process of PID controllers located within the control loop of a class of nonlinear systems. The results are compared with PID controllers implemented either in a stand-alone scheme or as part of a conventional gain-scheduling structure. This work is motivated by the increasing need in industry to design highly reliable and efficient controllers for dealing with the regulation and tracking capabilities of complex processes characterized by nonlinearities and possibly time-varying parameters. The soft computing-based controllers proposed are hybrid in nature in that they integrate, within a well-defined hierarchical structure, the benefits of hard algorithmic controllers with those having supervisory capabilities. The controllers proposed also have the distinct features of learning and auto-tuning without the need for tedious and computationally expensive online system identification schemes.
Photonic crystal nanocavity assisted rejection ratio tunable notch microwave photonic filter
Long, Yun; Xia, Jinsong; Zhang, Yong; Dong, Jianji; Wang, Jian
2017-01-01
Driven by the increasing demand for handling microwave signals with compact devices, low power consumption, high efficiency and high reliability, it is highly desired to generate, distribute, and process microwave signals using photonic integrated circuits. Silicon photonics offers a promising platform facilitating ultracompact microwave photonic signal processing assisted by silicon nanophotonic devices. In this paper, we propose, theoretically analyze and experimentally demonstrate a simple scheme to realize an ultracompact rejection ratio tunable notch microwave photonic filter (MPF) based on a silicon photonic crystal (PhC) nanocavity with fixed extinction ratio. Using a conventional modulation scheme with only a single phase modulator (PM), the rejection ratio of the presented MPF can be tuned from about 10 dB to beyond 60 dB. Moreover, central frequency tunable operation in the high rejection ratio region is also demonstrated in the experiment. PMID:28067332
PI controller design for indirect vector controlled induction motor: A decoupling approach.
Jain, Jitendra Kr; Ghosh, Sandip; Maity, Somnath; Dworak, Pawel
2017-09-01
Decoupling of the stator currents is important for smoother torque response of indirect vector controlled induction motors. Typically, feedforward decoupling is used to take care of current coupling, which requires exact knowledge of motor parameters, additional circuitry and signal processing. In this paper, a method is proposed to design the regulating proportional-integral gains that minimize coupling without requiring the additional decoupler. The variation of the coupling terms for a change in load torque is considered as the performance measure. An iterative linear matrix inequality based H∞ control design approach is used to obtain the controller gains. A comparison between the feedforward and the proposed decoupling schemes is presented through simulation and experimental results. The results show that the proposed scheme is simple yet effective, even without any additional block or extra burden on signal processing. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Higgs boson decay into b-quarks at NNLO accuracy
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán
2015-04-01
We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude range as well as the total length of the simulation to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s, or the midpoint scheme with a time step of 100 s, for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
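The accuracy comparison between schemes of different order can be illustrated on an idealized wind field with known trajectories (a toy sketch, not the MPTRAC setup; the solid-body-rotation field and the error metric are assumptions):

```python
import math

def wind(x, y):
    # idealized solid-body-rotation wind field; exact trajectories are circles
    return -y, x

def step_midpoint(x, y, dt):
    # second-order midpoint (RK2) step for dx/dt = wind(x)
    kx1, ky1 = wind(x, y)
    kx2, ky2 = wind(x + 0.5 * dt * kx1, y + 0.5 * dt * ky1)
    return x + dt * kx2, y + dt * ky2

def step_rk4(x, y, dt):
    # classical fourth-order Runge-Kutta step
    kx1, ky1 = wind(x, y)
    kx2, ky2 = wind(x + 0.5 * dt * kx1, y + 0.5 * dt * ky1)
    kx3, ky3 = wind(x + 0.5 * dt * kx2, y + 0.5 * dt * ky2)
    kx4, ky4 = wind(x + dt * kx3, y + dt * ky3)
    return (x + dt * (kx1 + 2 * kx2 + 2 * kx3 + kx4) / 6,
            y + dt * (ky1 + 2 * ky2 + 2 * ky3 + ky4) / 6)

def transport_deviation(stepper, dt, t_end=2 * math.pi):
    # distance from the known end point after one full revolution
    x, y = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        x, y = stepper(x, y, dt)
    return math.hypot(x - 1.0, y)
```

As in the study, the higher-order scheme yields a smaller transport deviation at a given time step, while the lower-order scheme trades accuracy for fewer wind-field evaluations per step.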
Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.
De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C
2017-06-21
How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.
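For two Gaussian sources of evidence, the reliability-weighted (Bayesian) update described above has a closed form, sketched below (an illustrative model, not the authors' fitted model; units of value and variance are arbitrary):

```python
def integrate_social_info(mu_prior, var_prior, mu_social, var_social):
    """Precision-weighted combination of an initial value judgment
    (mu_prior, var_prior) with social information (mu_social, var_social):
    the less reliable source (larger variance) shifts the belief less, and
    the posterior variance, the inverse of confidence, always shrinks."""
    precision = 1.0 / var_prior + 1.0 / var_social
    mu_post = (mu_prior / var_prior + mu_social / var_social) / precision
    return mu_post, 1.0 / precision
```

With equally reliable sources the posterior mean is the simple average; as the social variance grows, the posterior mean approaches the initial judgment, mirroring how unreliable reviews should be discounted.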
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.
2014-01-01
All-optical analog-to-digital converters based on the third-order nonlinear effects in silicon waveguide are a promising candidate to overcome the limitation of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon-nanocrystal. Nonlinear coefficient as high as 8708 W−1/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon-nanocrystal, which provides the enhanced nonlinear interaction and accordingly low power threshold. The results show that a required input peak power level less than 0.4 W can be achieved, along with the 1.98-bit effective-number-of-bit and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847
NASA Astrophysics Data System (ADS)
Hagiwara, Yohsuke; Ohta, Takehiro; Tateno, Masaru
2009-02-01
An interface program connecting a quantum mechanics (QM) calculation engine, GAMESS, and a molecular mechanics (MM) calculation engine, AMBER, has been developed for QM/MM hybrid calculations. A protein-DNA complex is used as a test system to investigate the following two types of QM/MM schemes. In a 'subtractive' scheme, electrostatic interactions between QM/MM regions are truncated in QM calculations; in an 'additive' scheme, long-range electrostatic interactions within a cut-off distance from QM regions are introduced into one-electron integration terms of a QM Hamiltonian. In these calculations, 338 atoms are assigned as QM atoms using Hartree-Fock (HF)/density functional theory (DFT) hybrid all-electron calculations. By comparing the results of the additive and subtractive schemes, it is found that electronic structures are perturbed significantly by the introduction of MM partial charges surrounding QM regions, suggesting that biological processes occurring in functional sites are modulated by the surrounding structures. This also indicates that the effects of long-range electrostatic interactions involved in the QM Hamiltonian are crucial for accurate descriptions of electronic structures of biological macromolecules.
Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P K A
2014-11-24
All-optical analog-to-digital converters based on the third-order nonlinear effects in silicon waveguides are promising candidates to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon nanocrystals. A nonlinear coefficient as high as 8708 W−1/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon nanocrystals, which provides enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power level of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems.
Adaptive mesh fluid simulations on GPU
NASA Astrophysics Data System (ADS)
Wang, Peng; Abel, Tom; Kaehler, Ralf
2010-10-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally onto this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. We show this directly by implementing a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
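The numerical ingredients named above (an HLL flux with second-order TVD/SSP Runge-Kutta time stepping) can be sketched for the 1D Burgers equation. This is a serial CPU reference in Python under simple assumptions (periodic boundaries, piecewise-constant data; reconstruction and AMR omitted), not the CUDA implementation:

```python
import numpy as np

def flux(u):                      # Burgers flux f(u) = u^2 / 2
    return 0.5 * u * u

def hll(ul, ur):                  # HLL flux with Davis wave-speed bounds
    sl, sr = np.minimum(ul, ur), np.maximum(ul, ur)
    denom = np.where(sr - sl == 0.0, 1.0, sr - sl)   # guard ul == ur
    fmid = (sr * flux(ul) - sl * flux(ur) + sl * sr * (ur - ul)) / denom
    return np.where(sl >= 0, flux(ul), np.where(sr <= 0, flux(ur), fmid))

def rhs(u, dx):                   # finite-volume update, periodic boundaries
    F = hll(u, np.roll(u, -1))    # F[i] is the flux at interface i + 1/2
    return -(F - np.roll(F, 1)) / dx

def ssp_rk2_step(u, dx, dt):      # second-order TVD (SSP) Runge-Kutta
    u1 = u + dt * rhs(u, dx)
    return 0.5 * (u + u1 + dt * rhs(u1, dx))
```

Because both the flux evaluation and the update are element-wise array operations over all cells, the scheme maps naturally onto one-thread-per-cell GPU kernels, which is the property the paper exploits.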
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time.
Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
NASA Astrophysics Data System (ADS)
Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé
2014-05-01
Design of integrated power converters needs prototype-less approaches. Specific simulations are required for the investigation and validation process. Simulation relies on active and passive device models. Models of planar devices, for instance, are still not available in power simulator tools. There is, thus, a specific limitation in the simulation process of integrated power systems. The paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when the skin, proximity and parasitic capacitance effects are taken into account. A heterogeneous simulation scheme, including circuit and device models, is successfully implemented in the VHDL-AMS language and simulated in the Simplorer platform. The mixed simulation results have been favorably compared with practical measurements. It is found that the multi-domain simulation results and measurement data are in close agreement.
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao
2016-07-15
We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is featured for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies on the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
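Mathematically, Toeplitz-matrix hashing (the post-processing named above) is just a GF(2) matrix-vector product, where the m x n Toeplitz matrix is fixed by n + m - 1 seed bits. The FPGA pipeline evaluates this block-wise with shift-and-XOR logic; the snippet below is a plain reference sketch, not the hardware implementation:

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Extract m nearly uniform bits from n raw bits via Toeplitz hashing:
    output = T @ raw (mod 2), with T constant along each diagonal."""
    raw = np.asarray(raw_bits)
    seed = np.asarray(seed_bits)
    n = len(raw)
    assert len(seed) == n + m - 1
    # T[i, j] = seed[i - j + n - 1], i.e. entries depend only on i - j
    idx = np.arange(m)[:, None] - np.arange(n)[None, :] + n - 1
    T = seed[idx]
    return T.dot(raw) % 2
```

The ratio m/n is set by the estimated min-entropy of the raw phase-fluctuation samples; the extraction throughput, not the entropy source, is the bottleneck the paper's pipeline design addresses.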
Yang, Ting; Dong, Jianji; Lu, Liangjun; Zhou, Linjie; Zheng, Aoling; Zhang, Xinliang; Chen, Jianping
2014-07-04
Photonic integrated circuits for photonic computing open up the possibility for the realization of ultrahigh-speed and ultra wide-band signal processing with compact size and low power consumption. Differential equations model and govern fundamental physical phenomena and engineering systems in virtually any field of science and engineering, such as temperature diffusion processes, physical problems of motion subject to acceleration inputs and frictional forces, and the response of different resistor-capacitor circuits. In this study, we experimentally demonstrate a feasible integrated scheme to solve first-order linear ordinary differential equations with a tunable constant coefficient, based on a single silicon microring resonator. In addition, we analyze the impact of the chirp and pulse-width of input signals on the computing deviation. This device is compatible with standard electronic technology (typically complementary metal-oxide-semiconductor technology), which may motivate the development of integrated photonic circuits for optical computing.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaodong; Xia, Yidong; Luo, Hong
A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Here, numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also require significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.
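The cost advantage of the ROW family comes from its linearly implicit structure: per step, one Jacobian evaluation and linear solves, with no Newton iteration as in DIRK-type schemes. A first-order sketch (the classic Rosenbrock-Euler step, not the paper's third-order scheme) makes that work profile visible:

```python
import numpy as np

def rosenbrock_euler_step(f, jac, y, h):
    """One linearly implicit (Rosenbrock-Euler) step for y' = f(y):
    solve (I - h*J) k = f(y), then y+ = y + h*k.
    Exactly one Jacobian evaluation and one linear solve per step."""
    J = jac(y)
    I = np.eye(len(y))
    k = np.linalg.solve(I - h * J, f(y))
    return y + h * k
```

On a stiff problem such as y' = -1000 y, this step stays stable at time steps where explicit Euler diverges, which mirrors the large allowable time steps reported for the ROW schemes.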
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg
2017-01-21
An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.
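For orientation, the simplest member of this integral family, the overlap of two normalized s-type primitives, reduces to the classic Gaussian product formula; higher angular momenta are built on this base case recursively. The snippet below is a textbook illustration only, not the paper's solid-harmonic scheme:

```python
import numpy as np

def overlap_ss(alpha, A, beta, B):
    """Two-center overlap of normalized s-type primitive Gaussians
    exp(-alpha*|r-A|^2) and exp(-beta*|r-B|^2) (the l = 0 base case)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    p = alpha + beta                       # total exponent
    mu = alpha * beta / p                  # reduced exponent
    prefac = (2.0 * np.sqrt(alpha * beta) / p) ** 1.5
    return prefac * np.exp(-mu * np.dot(A - B, A - B))
```

The exponential decay with center separation is what makes overlap-metric resolution-of-the-identity techniques local, and hence suitable for the linear scaling context mentioned above.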
Analytical and numerical analysis of frictional damage in quasi brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but so far there are no analytical solutions, even for simple loading paths, against which such algorithms can be validated. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.
Recent Developments in Grid Generation and Force Integration Technology for Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; VanDalsem, William R. (Technical Monitor)
1994-01-01
Recent developments in algorithms and software tools for generating overset grids for complex configurations are described. These include the overset surface grid generation code SURGRD and version 2.0 of the hyperbolic volume grid generation code HYPGEN. The SURGRD code is in beta test mode; the new features include the capability to march over a collection of panel networks, a variety of ways to control the side boundaries and the marching step sizes and distance, a more robust projection scheme and an interpolation option. New features in version 2.0 of HYPGEN include a wider range of boundary condition types. The code also allows the user to specify different marching step sizes and distance for each point on the surface grid. A scheme that takes into account the overlapped zones on the body surface for the purpose of forces and moments computation is also briefly described. The process involves the following two software modules: MIXSUR, a composite grid generation module to produce a collection of quadrilaterals and triangles on which pressure and viscous stresses are to be integrated, and OVERINT, a forces and moments integration module.
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
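In its simplest form, the coarse projective integration loop described above is "run short, restrict, extrapolate". A scalar sketch follows; the lifting step back to a consistent microscopic configuration, essential in the full equation-free framework, is omitted here for brevity:

```python
import numpy as np

def coarse_projective_step(micro_step, restrict, u, n_micro, dt_micro, dt_project):
    """Run the microscopic simulator for n_micro short steps, estimate the
    coarse time derivative from the last two restrictions, then extrapolate
    the coarse variables forward over the (much larger) dt_project."""
    for _ in range(n_micro):
        coarse_prev = restrict(u)
        u = micro_step(u, dt_micro)
    coarse = restrict(u)
    slope = (coarse - coarse_prev) / dt_micro   # finite-difference estimate
    return coarse + dt_project * slope          # forward projection
```

The computational savings grow with the ratio dt_project / (n_micro * dt_micro), exactly as stated above: the expensive microscopic simulator runs only during the short bursts.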
Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi
2017-10-11
We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
NASA Astrophysics Data System (ADS)
Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin
2018-09-01
A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels: hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second order of accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gain when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration of the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves.
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
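The timescale used to scale the approximate Jacobian can be sketched directly from its usual definition, tau_i = c_i / |omega_i^-|, taking the minimum over species. Variable names and the floor value are illustrative assumptions; the paper's exact normalization may differ:

```python
import numpy as np

def min_destruction_timescale(concentrations, destruction_rates, eps=1e-30):
    """Minimum species destruction timescale: tau_i = c_i / |omega_i^-|,
    floored by eps to avoid division by zero for absent species."""
    c = np.maximum(np.asarray(concentrations, float), eps)
    d = np.maximum(np.abs(np.asarray(destruction_rates, float)), eps)
    return float(np.min(c / d))
```

Because this single scalar bounds the stiffest chemical mode, a diagonal (scalar-matrix) Jacobian approximation built from it can stabilize the semi-implicit update without assembling the full chemical Jacobian.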
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor)
1988-01-01
A set of addressable test structures, each of which uses addressing schemes to access individual elements of the structure in a matrix, is used to test the quality of a wafer before integrated circuits produced thereon are diced, packaged and subjected to final testing. The electrical characteristic of each element is checked and compared to the electrical characteristic of all other like elements in the matrix. The effectiveness of the addressable test matrix is in readily analyzing the electrical characteristics of the test elements and in providing diagnostic information.
NASA Technical Reports Server (NTRS)
Knox, J. C.; Mulloth, Lila; Frederick, Kenneth; Affleck, Dave
2003-01-01
Accumulation and subsequent compression of carbon dioxide removed from the space cabin are two important processes involved in the closed-loop air revitalization scheme of the International Space Station (ISS). The carbon dioxide removal assembly (CDRA) of the ISS currently operates in an open-loop mode without a compressor. This paper describes the integrated test results of a flight-like CDRA and a temperature-swing adsorption compressor (TSAC) for carbon dioxide removal and compression. The paper provides details of the TSAC operation at various CO2 loadings and the corresponding performance of the CDRA.
Borole, Abhijeet P.
2015-08-25
Conversion of biomass into bioenergy is possible via multiple pathways resulting in production of biofuels, bioproducts and biopower. Efficient and sustainable conversion of biomass, however, requires consideration of many environmental and societal parameters in order to minimize negative impacts. Integration of multiple conversion technologies and inclusion of upcoming alternatives such as bioelectrochemical systems can minimize these impacts and improve conservation of resources such as hydrogen, water and nutrients via recycle and reuse. This report outlines alternate pathways integrating microbial electrolysis in biorefinery schemes to improve energy efficiency while evaluating environmental sustainability parameters.
XML-based approaches for the integration of heterogeneous bio-molecular data.
Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David
2009-10-15
Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, bio-medical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, thus hindering the effective integration of bioinformatics data schemes. The adoption of a few semantically rich standard formats is urgent to achieve a seamless integration of the current biological resources.
Alagarsamy, Sumithra; Rajagopalan, S P
2017-01-01
Certificateless-based signcryption overcomes the inherent shortcomings of traditional Public Key Infrastructure (PKI) and the key escrow problem. It imparts efficient methods to design PKIs with public verifiability and ciphertext authenticity with minimum dependency. As a classic primitive in public key cryptography, signcryption verifies the validity of ciphertext without decryption by combining authentication, confidentiality, public verifiability and ciphertext authenticity much more efficiently than the traditional approach. In this paper, we first define a security model for certificateless-based signcryption called Complex Conjugate Differential Integrated Factor (CC-DIF) by introducing complex conjugates through the security parameter and improving the secured message distribution rate. However, both the partial private key and the secret value change with time. To overcome this weakness, a new certificateless-based signcryption scheme is proposed by setting the private key through a Differential (Diff) Equation using an Integration Factor (DiffEIF), minimizing computational cost and communication overhead. The scheme is therefore proven secure (i.e., it improves the secured message distribution rate) against certificateless access control and signcryption-based attacks. In addition, compared with three other existing schemes, the CC-DIF scheme has the least computational cost and communication overhead for secured message communication in mobile networks.
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Uenal, A.
1981-01-01
A numerical scheme for solving two dimensional Fredholm integral equations of the second kind is developed. The proof of the convergence of the numerical scheme is shown for three cases: the case of periodic kernels, the case of semiperiodic kernels, and the case of nonperiodic kernels. Applications to the incompressible, stationary Navier-Stokes problem are of primary interest.
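A minimal sketch of this class of method in one dimension may help: the Nyström approach replaces the integral with a quadrature sum and solves the resulting linear system. The kernel, right-hand side, and trapezoidal weights below are illustrative choices, not the paper's scheme (which treats two-dimensional, possibly periodic kernels).

```python
import numpy as np

def fredholm2_nystrom(kernel, f, lam, a, b, n):
    """Solve u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt by the
    Nystrom method with trapezoidal quadrature weights."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])       # K[i, j] = K(x_i, t_j)
    A = np.eye(n) - lam * K * w[None, :]     # (I - lam*K*W) u = f
    return x, np.linalg.solve(A, f(x))

# Manufactured problem with exact solution u(x) = x:
# K(x,t) = x*t, lam = 1, so f(x) = (2/3)*x since int_0^1 t*t dt = 1/3.
x, u = fredholm2_nystrom(lambda x, t: x * t,
                         lambda x: (2.0 / 3.0) * x,
                         lam=1.0, a=0.0, b=1.0, n=101)
err = np.max(np.abs(u - x))
```

The trapezoidal rule's O(h^2) error carries through to the solution, so refining the grid improves accuracy at that rate.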
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to keep the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image, free of background illumination, at the full integration time. Experimental results with our own ToF sensor show strong background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
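The sub-integration idea can be sketched numerically. All rates and capacities below are hypothetical, and the background charge is assumed to be known exactly per sub-frame, which glosses over the estimation step in the actual sensor.

```python
# Illustrative numbers (hypothetical), in electrons and milliseconds.
SIGNAL_RATE = 50.0       # modulated (ToF) signal charge rate
BACKGROUND_RATE = 900.0  # ambient background light charge rate
WELL_CAPACITY = 2000.0   # pixel full-well capacity
T_TOTAL = 10.0           # desired total integration time

def capture(t):
    """Charge collected in one sub-integration of length t."""
    return (SIGNAL_RATE + BACKGROUND_RATE) * t

def integrate_adaptive(n_sub):
    """Split T_TOTAL into n_sub sub-integrations; after each one,
    subtract the (assumed known) background charge so the pixel never
    reaches saturation, and accumulate the residuals."""
    t_sub = T_TOTAL / n_sub
    acc = 0.0
    for _ in range(n_sub):
        q = capture(t_sub)
        if q >= WELL_CAPACITY:               # this sub-frame saturates
            return None
        acc += q - BACKGROUND_RATE * t_sub   # charge subtraction step
    return acc
```

A single full-length integration saturates the well, while eight sub-integrations recover the full signal charge at the total integration time.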
Multiple wavelength silicon photonic 200 mm R+D platform for 25Gb/s and above applications
NASA Astrophysics Data System (ADS)
Szelag, B.; Blampey, B.; Ferrotti, T.; Reboud, V.; Hassan, K.; Malhouitre, S.; Grand, G.; Fowler, D.; Brision, S.; Bria, T.; Rabillé, G.; Brianceau, P.; Hartmann, J. M.; Hugues, V.; Myko, A.; Elleboode, F.; Gays, F.; Fédéli, J. M.; Kopp, C.
2016-05-01
A silicon photonics platform built on a CMOS foundry line is described. The fabrication process follows a modular integration scheme, which leads to a flexible platform allowing different device combinations. A complete device library is demonstrated for 1310 nm applications with state-of-the-art performance. A PDK that includes specific photonic features and is compatible with commercial EDA tools has been developed, enabling an MPW shuttle service. Finally, platform evolutions, such as extending the device offering to 1550 nm and introducing new process modules, are presented.
Control of coherent information via on-chip photonic–phononic emitter–receivers
Shin, Heedeuk; Cox, Jonathan A.; Jarecki, Robert; ...
2015-03-05
Rapid progress in integrated photonics has fostered numerous chip-scale sensing, computing and signal processing technologies. However, many crucial filtering and signal delay operations are difficult to perform with all-optical devices. Unlike photons propagating at luminal speeds, GHz-acoustic phonons moving at slower velocities allow information to be stored, filtered and delayed over comparatively smaller length-scales with remarkable fidelity. Hence, controllable and efficient coupling between coherent photons and phonons enables new signal processing technologies that greatly enhance the performance and potential impact of integrated photonics. Here we demonstrate a mechanism for coherent information processing based on travelling-wave photon–phonon transduction, which achieves a phonon emit-and-receive process between distinct nanophotonic waveguides. Using this device physics, which supports GHz frequencies, we create wavelength-insensitive radiofrequency photonic filters with frequency selectivity, narrow linewidth and high power handling in silicon. More generally, this emit-receive concept is the impetus for enabling new signal processing schemes.
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.-L.
2015-10-01
Cumulus parameterization schemes account for the sub-grid-scale effects of convective and/or shallow clouds; they represent vertical fluxes due to unresolved updrafts and downdrafts and the compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one such scheme in the Weather Research and Forecasting (WRF) model, and the National Centers for Environmental Prediction (NCEP) has worked to optimize it for operational application. As there are no interactions among horizontal grid points, this scheme is very well suited to parallel computation, and the efficient parallelization and vectorization capabilities of the Intel Xeon Phi Many Integrated Core (MIC) architecture allow us to optimize it. The MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.4x and 17.0x compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.
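The column independence mentioned above can be illustrated with a toy update: a looped per-column computation and a vectorized one give identical results, which is what makes offloading to many cores straightforward. The `adjust_column` physics below is a hypothetical placeholder, not the BMJ adjustment.

```python
import numpy as np

def adjust_column(t_profile):
    """Hypothetical stand-in for a per-column physics update: relax
    the temperature profile toward its vertical mean. It touches one
    column only, mirroring the no-horizontal-interaction property."""
    return t_profile + 0.1 * (t_profile.mean() - t_profile)

def run_looped(temp):
    out = np.empty_like(temp)
    nz, ny, nx = temp.shape
    for j in range(ny):          # horizontal points are independent,
        for i in range(nx):      # so this loop parallelizes trivially
            out[:, j, i] = adjust_column(temp[:, j, i])
    return out

def run_vectorized(temp):
    # Same update applied to all columns at once (mean over vertical axis).
    return temp + 0.1 * (temp.mean(axis=0, keepdims=True) - temp)

rng = np.random.default_rng(0)
temp = 250.0 + 50.0 * rng.random((30, 8, 8))   # (levels, lat, lon), K
```

Because each column is independent, the loop order is irrelevant and the work can be distributed across MIC cores or vector lanes without synchronization.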
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows, I: Basic Theory
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.
2003-01-01
The objective of this paper is to extend our recently developed, highly parallelizable, nonlinearly stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employ multiresolution wavelets as adaptive numerical dissipation controls to limit the amount of dissipation and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative forms of the MHD equations in curvilinear grids. The four advantages of the present approach over existing MHD schemes reported in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion MHD flows; available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations, which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition, obtained from a minor modification of the eigenvectors of the non-conservative MHD equations, to solve the conservative form of the MHD equations. Third, this approach of using the non-conservative eigensystem when solving the conservative equations also works well in the context of standard shock-capturing schemes for the MHD equations. Fourth, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced. Numerical experiments with typical MHD model problems confirm the applicability of the newly developed schemes for the MHD equations.
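A crude stand-in for such an adaptive dissipation sensor, assuming a simple normalized second-difference indicator rather than the paper's multiresolution wavelet analysis, might look like:

```python
import numpy as np

def shock_sensor(u, threshold=0.1):
    """Flag grid points where the normalized second difference is
    large, so numerical dissipation would be applied only near
    discontinuities. This is a simplistic stand-in for a
    multiresolution wavelet sensor, for illustration only."""
    d2 = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])
    scale = np.abs(u[2:]) + 2.0 * np.abs(u[1:-1]) + np.abs(u[:-2]) + 1e-12
    flag = np.zeros_like(u, dtype=bool)
    flag[1:-1] = d2 / scale > threshold
    return flag

# A step (shock-like jump) riding on a smooth low-amplitude wave:
x = np.linspace(0.0, 1.0, 101)
u = np.where(x < 0.5, 2.0, 1.0) + 0.01 * np.sin(20 * np.pi * x)
flag = shock_sensor(u)
```

The sensor fires only at the jump near x = 0.5 and leaves the smooth oscillation unflagged, which is the qualitative behavior needed to keep the scheme low-dissipative in smooth regions.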
Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks
Li, Xing; Chen, Dexin; Li, Chunyan; Wang, Liangmin
2015-01-01
With the rapid development of wireless communication technology, sensor technology, and information acquisition and processing technology, sensor networks will finally have a deep influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation is an important method of removing redundant and unnecessary data transmission, thereby reducing the energy used in communication. As sensor nodes are deployed in hostile environments, the confidentiality and integrity of sensitive information should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs, which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), this scheme can also verify data integrity during data aggregation and forwarding processes so that false data can be detected as early as possible. Although FHE increases the computational overhead due to its large public key size, simulation results show that it is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme. PMID:26151208
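As a simpler stand-in for the fully homomorphic encryption used in FESA, an additively homomorphic scheme such as Paillier already shows the core aggregation idea: an untrusted aggregator can combine ciphertexts so that the result decrypts to the sum of the readings, without ever seeing them. The tiny fixed primes below are for illustration only and are completely insecure; a real deployment would use a vetted library and large keys.

```python
import math
import secrets

P, Q = 1789, 1867            # toy primes, NOT secure
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)
G = N + 1
MU = pow(LAM, -1, N)         # since g = n+1, L(g^lam mod n^2) = lam mod n

def _rand_coprime():
    while True:
        r = secrets.randbelow(N - 1) + 1
        if math.gcd(r, N) == 1:
            return r

def encrypt(m):
    r = _rand_coprime()
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    l = (pow(c, LAM, N2) - 1) // N       # L(u) = (u - 1) / n
    return (l * MU) % N

def aggregate(ciphertexts):
    """Homomorphic sum: the modular product of Paillier ciphertexts
    decrypts to the sum of the plaintexts."""
    agg = 1
    for c in ciphertexts:
        agg = (agg * c) % N2
    return agg

readings = [21, 35, 18, 40]              # e.g. sensor measurements
cipher = [encrypt(m) for m in readings]
total = decrypt(aggregate(cipher))
```

FESA's FHE additionally supports arbitrary (not just additive) operations and pairs the ciphertexts with MACs for integrity; this sketch covers only the confidentiality-preserving sum.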
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
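The residual-threshold logic for detection can be sketched as follows; the channel names, values, and thresholds are invented for illustration and do not reflect the actual T700 sensor suite or engine model.

```python
def detect_sensor_faults(measured, expected, thresholds):
    """Compare measured outputs with the expected outputs of a
    normally functioning engine model; a residual exceeding its
    threshold flags a fault on that sensor channel."""
    faults = {}
    for name in thresholds:
        residual = abs(measured[name] - expected[name])
        faults[name] = residual > thresholds[name]
    return faults

# Hypothetical channels and values:
expected = {"gas_temp": 850.0, "torque": 310.0, "gas_gen_speed": 44700.0}
thresholds = {"gas_temp": 25.0, "torque": 15.0, "gas_gen_speed": 500.0}
# Induced fault on the torque sensor:
measured = {"gas_temp": 846.0, "torque": 289.0, "gas_gen_speed": 44820.0}
faults = detect_sensor_faults(measured, expected, thresholds)
```

In the full scheme, a flagged channel would then trigger the on-line parameter estimation step to isolate and size the fault; the snippet covers only the detection threshold test.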
From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.
Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry
2015-07-10
Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource-efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate of ∼1.6% in the photons detected in the gates. This scheme uses only three-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure, demonstrating that building a linear-optical quantum computer need be less challenging than previously thought.
Poulsen, Signe; Jørgensen, Michael Søgaard
2011-09-01
The aim of this article is to analyse the social shaping of worksite food interventions at two Danish worksites. The overall aims are to contribute first, to the theoretical frameworks for the planning and analysis of food and health interventions at worksites and second, to a foodscape approach to worksite food interventions. The article is based on a case study of the design of a canteen takeaway (CTA) scheme for employees at two Danish hospitals. This was carried out as part of a project to investigate the shaping and impact of schemes that offer employees meals to buy, to take home or to eat at the worksite during irregular working hours. Data collection was carried out through semi-structured interviews with stakeholders within the two change processes. Two focus group interviews were also carried out at one hospital and results from a user survey carried out by other researchers at the other hospital were included. Theoretically, the study was based on the social constitution approach to change processes at worksites and a co-evolution approach to problem-solution complexes as part of change processes. Both interventions were initiated because of the need to improve the food supply for the evening shift and the work-life balance. The shaping of the schemes at the two hospitals became rather different change processes due to the local organizational processes shaped by previously developed norms and values. At one hospital the change process challenged norms and values about food culture and challenged ideas in the canteen kitchen about working hours. At the other hospital, the change was more of a learning process that aimed at finding the best way to offer a CTA scheme. Worksite health promotion practitioners should be aware that the intervention itself is an object of negotiation between different stakeholders at a worksite based on existing norms and values. 
The social contextual model and the setting approach to worksite health interventions lack reflections about how such norms and values might influence the shaping of the intervention. It is recommended that future planning and analyses of worksite health promotion interventions apply a combination of the social constitution approach to worksites and an integrated food supply and demand perspective based on analyses of the co-evolution of problem-solution complexes.
Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics
NASA Astrophysics Data System (ADS)
d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.
2018-05-01
Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
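A minimal sketch of Landau-Lifshitz time integration, assuming a plain RK4 step followed by renormalization of the magnetization amplitude (a simple projection, not the pseudo-symplectic construction of the paper, which preserves amplitude and energy to higher order without projecting):

```python
import numpy as np

def ll_rhs(m, h, alpha):
    """Landau-Lifshitz right-hand side in dimensionless form:
    dm/dt = -m x h - alpha * m x (m x h)."""
    mxh = np.cross(m, h)
    return -mxh - alpha * np.cross(m, mxh)

def rk4_step(m, h, alpha, dt):
    k1 = ll_rhs(m, h, alpha)
    k2 = ll_rhs(m + 0.5 * dt * k1, h, alpha)
    k3 = ll_rhs(m + 0.5 * dt * k2, h, alpha)
    k4 = ll_rhs(m + dt * k3, h, alpha)
    m_new = m + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return m_new / np.linalg.norm(m_new)   # amplitude projection

# Damped precession about a static field along z: the magnetization
# should relax toward +z while keeping unit amplitude.
h = np.array([0.0, 0.0, 1.0])
m = np.array([1.0, 0.0, 0.1])
m = m / np.linalg.norm(m)
for _ in range(2000):
    m = rk4_step(m, h, alpha=0.1, dt=0.01)
```

With damping alpha = 0.1 and total time 20, the z-component follows m_z(t) = tanh(alpha*t + artanh(m_z(0))), so the final state is close to the field direction.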
Vernier-like super resolution with guided correlated photon pairs.
Nespoli, Matteo; Goan, Hsi-Sheng; Shih, Min-Hsiung
2016-01-11
We describe a dispersion-enabled, ultra-low power realization of super-resolution in an integrated Mach-Zehnder interferometer. Our scheme is based on a Vernier-like effect in the coincident detection of frequency correlated, non-degenerate photon pairs at the sensor output in the presence of group index dispersion. We design and simulate a realistic integrated refractive index sensor in a silicon nitride on silica platform and characterize its performance in the proposed scheme. We present numerical results showing a sensitivity improvement upward of 40 times over a traditional sensing scheme. The device we design is well within the reach of modern semiconductor fabrication technology. We believe this is the first metrology scheme that uses waveguide group index dispersion as a resource to attain super-resolution.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor; Trócsányi, Zoltán
2008-08-01
In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.
NASA Technical Reports Server (NTRS)
Li, Xiao-Fan; Sui, C.-H.; Lau, K.-M.; Tao, W.-K.
2004-01-01
Prognostic cloud schemes are increasingly used in weather and climate models to better treat cloud-radiation processes. Simplifications are often made in such schemes for computational efficiency; for example, the scheme used in the National Centers for Environmental Prediction models excludes some microphysical processes and precipitation-radiation interaction. In this study, sensitivity tests with a 2D cloud resolving model are carried out to examine effects of the excluded microphysical processes and precipitation-radiation interaction on tropical thermodynamics and cloud properties. The model is integrated for 10 days with the imposed vertical velocity derived from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment. The experiment excluding the depositional growth of snow from cloud ice shows anomalous growth of cloud ice and a more than 20% increase of fractional cloud cover, indicating that the lack of depositional snow growth causes an unrealistically large mixing ratio of cloud ice. The experiment excluding the precipitation-radiation interaction displays a significant cooling and drying bias. The analysis of heat and moisture budgets shows that the simulation without the interaction produces a more stable upper troposphere and a more unstable mid and lower troposphere than does the simulation with the interaction. Thus, the suppressed growth of ice clouds in the upper troposphere and stronger radiative cooling in the mid and lower troposphere are responsible for the cooling bias, and less evaporation of rain associated with the large-scale subsidence induces the drying in the mid and lower troposphere.
Engineering integrated photonics for heralded quantum gates
NASA Astrophysics Data System (ADS)
Meany, Thomas; Biggerstaff, Devon N.; Broome, Matthew A.; Fedrizzi, Alessandro; Delanty, Michael; Steel, M. J.; Gilchrist, Alexei; Marshall, Graham D.; White, Andrew G.; Withford, Michael J.
2016-06-01
Scaling up linear-optics quantum computing will require multi-photon gates which are compact, phase-stable, exhibit excellent quantum interference, and have success heralded by the detection of ancillary photons. We investigate the design, fabrication and characterisation of the optimal known gate scheme which meets these requirements: the Knill controlled-Z gate, implemented in integrated laser-written waveguide arrays. We show device performance to be less sensitive to phase variations in the circuit than to small deviations in the coupler reflectivity, which are expected given the tolerance values of the fabrication method. The mode fidelity is also shown to be less sensitive to reflectivity and phase errors than the process fidelity. Our best device achieves a fidelity of 0.931 ± 0.001 with the ideal 4 × 4 unitary circuit and a process fidelity of 0.680 ± 0.005 with the ideal computational-basis process.
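One generic way to quantify closeness to an ideal gate is the standard unitary fidelity F = |Tr(U†V)|²/d²; the sketch below applies it to the ideal controlled-Z unitary and a version with a small, illustrative phase error. The fidelities reported in the abstract use the authors' own definitions and measured process matrices, so this is only the textbook metric, not a reproduction of their analysis.

```python
import numpy as np

def process_fidelity(u_ideal, u_actual):
    """Generic fidelity between two d-dimensional unitaries:
    F = |Tr(U_ideal^dagger U_actual)|^2 / d^2."""
    d = u_ideal.shape[0]
    return abs(np.trace(u_ideal.conj().T @ u_actual)) ** 2 / d ** 2

cz = np.diag([1.0, 1.0, 1.0, -1.0])      # ideal controlled-Z unitary
eps = 0.05                                # illustrative small phase error
perturbed = np.diag([1.0, 1.0, 1.0, -np.exp(1j * eps)])
f = process_fidelity(cz, perturbed)
```

A 0.05 rad phase error on the |11> component costs well under 0.1% in this metric, consistent with the abstract's finding that small reflectivity deviations, not phase variations, dominate the error budget.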
Quality Assurance in Engineering Education: Comparison of Accreditation Schemes and ISO 9001.
ERIC Educational Resources Information Center
Karapetrovic, Stanislav; Rajamani, Divakar; Willborn, Walter
1998-01-01
Outlines quality assurance schemes for distance-education technologies that are based on the ISO 9000 family of international quality-assurance standards. Argues that engineering faculties can establish such systems on the basis of and integrated with accreditation schemes. Contains 34 references. (DDR)
Zhao, Zhenguo; Shi, Wenbo
2014-01-01
Probabilistic signature schemes are widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem. They also extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS scheme and UDVS scheme. Through concrete attacks, we demonstrate that neither of their schemes is unforgeable. The security analysis shows that their schemes are not suitable for practical applications.
Information integration for a sky survey by data warehousing
NASA Astrophysics Data System (ADS)
Luo, A.; Zhang, Y.; Zhao, Y.
The virtualization service of the data system for the LAMOST sky survey is very important for astronomers. The service needs to integrate information from data collections, catalogs and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has been in existence for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Relational database systems such as Oracle now support the warehouse capability, including extensions to the SQL language for OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing aims to effectively provide data and knowledge on-line.
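The kind of OLAP-style aggregation such a warehouse layer runs can be sketched with a plain SQL GROUP BY over an in-memory database; the table and column names below are hypothetical, not the actual LAMOST schema.

```python
import sqlite3

# Minimal star-schema-style fact table of survey observations
# (hypothetical names and values, for illustration only).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE observations (
                 target TEXT, night TEXT, exposure_s REAL)""")
con.executemany("INSERT INTO observations VALUES (?, ?, ?)", [
    ("NGC 2419", "2006-10-01", 600.0),
    ("NGC 2419", "2006-10-02", 900.0),
    ("M31",      "2006-10-01", 300.0),
])
# Roll up per target: observation count and total exposure time.
rows = con.execute("""SELECT target, COUNT(*), SUM(exposure_s)
                      FROM observations
                      GROUP BY target
                      ORDER BY target""").fetchall()
```

A warehouse engine adds pre-built indexes and materialized aggregates so such roll-ups stay fast over catalogs with many millions of rows.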
Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M
2011-01-01
It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. As a widely unattended gap of integration, a unified data concept for CBIR-based CAD results and reporting is lacking. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results in a picture archiving and communication systems environment such as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme is presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.
Optimal placement of fast cut back units based on the theory of cellular automata and agent
NASA Astrophysics Data System (ADS)
Yan, Jun; Yan, Feng
2017-06-01
Thermal power generation units with the fast cut back (FCB) function can serve power to the auxiliary system and maintain island operation after a major blackout, so they are excellent substitutes for traditional black-start power sources. Different placement schemes for FCB units have different influences on the subsequent restoration process. Considering the locality of the emergency dispatching rules, the unpredictability of specific dispatching instructions, and unexpected situations such as failure of transmission line energization, a novel deduction model for network reconfiguration based on the theory of cellular automata and agents is established. Several indexes are then defined for evaluating the placement schemes for FCB units. An attribute-weight determination method based on subjective and objective integration, used in combination with grey relational analysis, determines the optimal placement scheme for FCB units. The effectiveness of the proposed method is validated by test results on the New England 10-unit 39-bus power system.
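A weighting step of the kind used here can be sketched with the standard Analytic Hierarchy Process (AHP) eigenvector procedure: weights are the normalized principal eigenvector of a pairwise-comparison matrix, with Saaty's consistency ratio as a sanity check. The comparison matrix and criteria below are hypothetical, not the paper's actual indexes or subjective/objective integration method.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights as the normalized principal eigenvector of a
    pairwise-comparison matrix (standard AHP); also returns lambda_max."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def consistency_ratio(lmax, n):
    """CR = CI / RI with Saaty's random index; CR < 0.1 is acceptable."""
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]
    return (lmax - n) / (n - 1) / ri

# Hypothetical comparison of three placement criteria for FCB units,
# e.g. restoration time vs. energization risk vs. load importance.
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w, lmax = ahp_weights(A)
cr = consistency_ratio(lmax, 3)
```

For this matrix the first criterion dominates (weight ≈ 0.65) and the judgments are consistent (CR well below 0.1), so the weights could feed directly into a grey relational ranking of candidate schemes.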
NASA Astrophysics Data System (ADS)
Wang, Jun; Chen, J. M.; Li, Manchun; Ju, Weimin
2007-06-01
As the major eligible land use activities in the Clean Development Mechanism (CDM), afforestation and reforestation offer opportunities and potential economic benefits for developing countries to participate in carbon trading in potential international carbon (C) sink markets. However, the design and selection of appropriate afforestation and reforestation locations in the CDM are complex processes that need integrated assessment (IA) of C sequestration (CS) potential, environmental effects, and socio-economic impacts. This paper promotes the consideration of CS benefits in local land use planning and presents a GIS-based integrated assessment and spatial decision support system (IA-SDSS) to support decision-making on 'where' and 'how' to afforest. It integrates the Integrated Terrestrial Ecosystem Carbon Model (InTEC) with a GIS platform for modeling regional long-term CS potential and assessing geo-referenced land use criteria, including CS consequences, and produces a ranking of plantation schemes with different tree species using the Analytic Hierarchy Process (AHP) method. Three land use scenarios are investigated: (i) traditional land use planning criteria without C benefits, (ii) land use for CS with a low C price, and (iii) land use for CS with a high C price. The different scenarios and their consequences influence the weights of tree-species selection in the AHP decision process.
Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme
NASA Astrophysics Data System (ADS)
Mielikainen, J.; Huang, B.; Huang, A. H.-L.
2014-12-01
The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is done in collaboration around the globe, and the model is used by academic atmospheric scientists, weather forecasters at operational centers, and others. WRF contains several physics components, of which the most time-consuming is the microphysics. One microphysics scheme is the Goddard cloud microphysics scheme, a sophisticated scheme in the WRF model that incorporates a large number of improvements over earlier microphysics schemes. The Goddard scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points; we have therefore optimized its code. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as a GPU does, and the coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of the MIC requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the Goddard microphysics scheme on the Xeon Phi 7120P by a factor of 4.7×. In addition, the optimizations reduced the Goddard microphysics scheme's share of the total WRF processing time from 20.0% to 7.5%.
Furthermore, the same optimizations improved performance on Intel Xeon E5-2670 by a factor of 2.8× compared to the original code.
NASA Astrophysics Data System (ADS)
Alapaty, K.; Zhang, G. J.; Song, X.; Kain, J. S.; Herwehe, J. A.
2012-12-01
Short lived pollutants such as aerosols play an important role in modulating not only the radiative balance but also cloud microphysical properties and precipitation rates. In the past, to understand the interactions of aerosols with clouds, several cloud-resolving modeling studies were conducted. These studies indicated that in the presence of anthropogenic aerosols, single-phase deep convection precipitation is reduced or suppressed. On the other hand, anthropogenic aerosol pollution led to enhanced precipitation for mixed-phase deep convective clouds. To date, there have not been many efforts to incorporate such aerosol indirect effects (AIE) in mesoscale models or global models that use parameterization schemes for deep convection. Thus, the objective of this work is to implement a diagnostic cloud microphysical scheme directly into a deep convection parameterization facilitating aerosol indirect effects in the WRF-CMAQ integrated modeling systems. Major research issues addressed in this study are: What is the sensitivity of a deep convection scheme to cloud microphysical processes represented by a bulk double-moment scheme? How close are the simulated cloud water paths as compared to observations? Does increased aerosol pollution lead to increased precipitation for mixed-phase clouds? These research questions are addressed by performing several WRF simulations using the Kain-Fritsch convection parameterization and a diagnostic cloud microphysical scheme. In the first set of simulations (control simulations) the WRF model is used to simulate two scenarios of deep convection over the continental U.S. during two summer periods at 36 km grid resolution. In the second set, these simulations are repeated after incorporating a diagnostic cloud microphysical scheme to study the impacts of inclusion of cloud microphysical processes. 
Finally, in the third set, aerosol concentrations simulated by the CMAQ modeling system are supplied to the embedded cloud microphysical scheme to study impacts of aerosol concentrations on precipitation and radiation fields. Observations available from the ARM microbase data, the SURFRAD network, GOES imagery, and other reanalysis and measurements will be used to analyze the impacts of a cloud microphysical scheme and aerosol concentrations on parameterized convection.
NASA Astrophysics Data System (ADS)
Magda, Danièle; de Sainte Marie, Christine; Plantureux, Sylvain; Agreil, Cyril; Amiaud, Bernard; Mestelan, Philippe; Mihout, Sarah
2015-11-01
Current agri-environmental schemes for reconciling agricultural production with biodiversity conservation are proving ineffective Europe-wide, which has increased interest in results-based schemes (RBSs). We describe here the French "Flowering Meadows" competition, which rewards the "best agroecological balance" in semi-natural grasslands managed by livestock farmers. This competition, entered by about a thousand farmers in 50 regional nature parks between 2007 and 2014, explicitly promotes a new style of agri-environmental scheme focused on the ability to reach the desired outcome rather than on adherence to prescriptive management rules. Building on our experience in the design and monitoring of the competition, we argue that the cornerstone of successful RBSs is a collective learning process in which the reconciliation of agriculture and environment is reconsidered in terms of synergistic relationships between agricultural and ecological functioning. We present the interactive, iterative process by which we defined an original method for assessing species-rich grasslands in agroecological terms. This approach was based on the integration of new criteria, such as flexibility, feeding value, and consistency of use, into the assessment of forage production performance, and on considering biodiversity conservation through its functional role within the grassland ecosystem rather than simply noting the presence or abundance of species. We describe the adaptation of this methodology on the basis of competition feedback, to bring about a significant shift in the conventional working methods of agronomists and conservationists (including researchers). The concluding remarks discuss the potential and efficacy of RBSs for promoting ecologically sound livestock systems, relating them to the ecological intensification debate.
NASA Astrophysics Data System (ADS)
Raley, Angélique; Lee, Joe; Smith, Jeffrey T.; Sun, Xinghua; Farrell, Richard A.; Shearer, Jeffrey; Xu, Yongan; Ko, Akiteru; Metz, Andrew W.; Biolsi, Peter; Devilliers, Anton; Arnold, John; Felix, Nelson
2018-04-01
We report a sub-30nm pitch self-aligned double patterning (SADP) integration scheme with EUV lithography coupled with self-aligned block technology (SAB) targeting the back end of line (BEOL) metal line patterning applications for logic nodes beyond 5nm. The integration demonstration is a validation of the scalability of a previously reported flow, which used 193nm immersion SADP targeting a 40nm pitch with the same material sets (Si3N4 mandrel, SiO2 spacer, spin-on carbon, spin-on glass). The multi-color integration approach is successfully demonstrated and provides a valuable method to address overlay concerns and more generally edge placement error (EPE) as a whole for advanced process nodes. Unbiased LER/LWR analysis comparison between EUV SADP and 193nm immersion SADP shows that both integrations follow the same trend throughout the process steps. While EUV SADP shows increased LER after mandrel pull, metal hardmask open and dielectric etch compared to 193nm immersion SADP, the final process performance is matched in terms of LWR (1.08nm 3 sigma unbiased) and is only 6% higher than 193nm immersion SADP for average unbiased LER. Using EUV SADP enables almost doubling the line density while keeping most of the remaining processes and films unchanged, and provides a compelling alternative to other multipatterning integrations, which present their own sets of challenges.
A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation
Smith, Peter E.
2006-01-01
A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to solve a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
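The double-sweep step described above is, for each system, the classical Thomas algorithm for tridiagonal matrices: a forward elimination sweep followed by a back-substitution sweep. A minimal sketch, not taken from the paper (function name and array layout are illustrative):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by the double-sweep (Thomas) method.
    a: sub-diagonal (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    Returns the solution vector x."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # backward sweep: substitute
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each tridiagonal solve is O(n), which is why many small vertical systems can be handled cheaply before the single large five-diagonal elevation system is passed to the conjugate-gradient solver.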
Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle
NASA Astrophysics Data System (ADS)
Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai
2018-03-01
Deck assembly is a critical quality-control point in the final satellite assembly process; cable extrusion and structure collision problems during assembly directly affect the development quality and schedule of the satellite. To address the problems in the deck assembly process, this paper proposes an assembly scheme for satellite decks based on the robot flexibility (compliance) control principle. The scheme is introduced first; next, the key technologies of end-effector force perception and flexible docking control are studied; the implementation process of the assembly scheme is then described in detail; finally, an actual application case is given. The results show that, compared with the traditional assembly scheme, the proposed scheme has obvious advantages in work efficiency, reliability, and universality.
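The paper does not give its control law; as a generic illustration of how end-effector force perception can drive compliant docking, a common admittance-style rule maps the sensed contact-force error to a small, clamped correction velocity. All names and gains below are hypothetical, not from the paper:

```python
def admittance_velocity(measured_force, target_force, damping, max_speed):
    """Admittance-style compliance law: convert the error between the
    sensed end-effector contact force and the allowed contact force
    into a corrective insertion velocity, clamped so the robot always
    moves gently near the deck. Units and gains are illustrative."""
    v = (measured_force - target_force) / damping  # N / (N*s/m) -> m/s
    return max(-max_speed, min(max_speed, v))
```

Under such a law, a rising contact force (e.g. a cable being squeezed) automatically slows or reverses the insertion motion instead of forcing the deck home.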
2014-01-01
Background Nigeria has included a regulated community-based health insurance (CBHI) model within its National Health Insurance Scheme (NHIS). Uptake to date has been disappointing, however. The aim of this study is to review the present status of CBHI in sub-Saharan Africa (SSA) in general to highlight the issues that affect its successful integration within the NHIS of Nigeria and more widely in developing countries. Methods A literature survey using PubMed and EconLit was carried out to identify and review studies that report factors affecting implementation of CBHI in SSA with a focus on Nigeria. Results CBHI schemes with a variety of designs have been introduced across SSA but with generally disappointing results so far. Two exceptions are Ghana and Rwanda, both of which have introduced schemes with effective government control and support coupled with intensive implementation programmes. Poor support for CBHI is repeatedly linked elsewhere with failure to engage and account for the ‘real world’ needs of beneficiaries, lack of clear legislative and regulatory frameworks, inadequate financial support, and unrealistic enrolment requirements. Nigeria’s CBHI-type schemes for the informal sectors of its NHIS have been set up under an appropriate legislative framework, but work is needed to eliminate regressive financing, to involve scheme members in the setting up and management of programmes, to inform and educate more effectively, to eliminate lack of confidence in the schemes, and to address inequity in provision. Targeted subsidies should also be considered. Conclusions Disappointing uptake of CBHI-type NHIS elements in Nigeria can be addressed through closer integration of informal and formal programmes under the NHIS umbrella, with increasing involvement of beneficiaries in scheme design and management, improved communication and education, and targeted financial assistance. PMID:24559409
Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Budiman, M. A.; Aulya, L.
2018-02-01
Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data types, image data is especially prone to abuses such as duplication and modification. Cryptography is one data security technique that addresses these threats. The security of the Vernam Cipher algorithm depends entirely on the key exchange process: if the key is leaked, the security of the algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol, which enables message delivery without any key exchange, so messages can reach the receiver safely with no fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200, 300×300, 500×500, 800×800, and 1000×1000 pixels. The experiments showed that the Vernam Cipher algorithm in the Three-Pass Protocol scheme could restore the original image.
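The three-pass idea can be sketched with the Vernam (XOR) operation as the commutative cipher. This is an illustrative sketch, not the authors' Java implementation; names are ours:

```python
import os

def xor_bytes(x, y):
    # Vernam (one-time-pad style) operation: bytewise XOR
    return bytes(a ^ b for a, b in zip(x, y))

def three_pass_demo(message):
    """Sketch of the Three-Pass Protocol: each party keeps its own
    secret key, and the key itself never travels on the channel."""
    k_a = os.urandom(len(message))        # sender's secret key
    k_b = os.urandom(len(message))        # receiver's secret key
    pass1 = xor_bytes(message, k_a)       # pass 1: sender -> receiver
    pass2 = xor_bytes(pass1, k_b)         # pass 2: receiver -> sender
    pass3 = xor_bytes(pass2, k_a)         # pass 3: sender removes k_a
    recovered = xor_bytes(pass3, k_b)     # receiver removes k_b
    return recovered
```

Because XOR is its own inverse, the receiver recovers the exact image bytes. Note a known caveat of pure XOR here: an eavesdropper who records all three passes can XOR them together to recover the message, so deployments typically pair the protocol with a commutative cipher that does not cancel this way.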
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2014-03-01
We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics.
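The reduction idea can be illustrated on a simple fast-slow chain A -(kf)-> B -(ks)-> C with kf much larger than ks: the fast step conserves s = a + b, which then becomes the slow variable decaying at ks. The model, rate values, and function names below are our illustration, not the paper's three-variable system:

```python
import math

def full_dynamics(a0, b0, c0, kf, ks, t, dt=1e-4):
    """Finely integrate the fast-slow chain A -kf-> B -ks-> C with
    explicit Euler at a step small enough to resolve the fast rate."""
    a, b, c = a0, b0, c0
    for _ in range(int(round(t / dt))):
        da = -kf * a
        db = kf * a - ks * b
        dc = ks * b
        a, b, c = a + da * dt, b + db * dt, c + dc * dt
    return a, b, c

def reduced_dynamics(a0, b0, c0, ks, t):
    """Coarse-grained model on the slow time scale: the fast step
    A -> B conserves s = a + b, which then decays at rate ks."""
    s = (a0 + b0) * math.exp(-ks * t)
    c = c0 + (a0 + b0) - s
    return s, c
```

For kf/ks = 1000 the coarse-grained solution tracks the finely integrated populations closely, while needing no time step tied to kf.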
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
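A representative member of the splitting family examined here interleaves velocity-Verlet-style half kicks (B) and half drifts (A) with an exact solve of the Ornstein-Uhlenbeck friction-plus-noise part (O). The one-dimensional sketch below is our illustration of such a "BAOAB"-ordered splitting, not the authors' code:

```python
import math
import random

def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
    """One step of a B-A-O-A-B splitting for Langevin dynamics.
    B: half kick from the systematic force, A: half drift,
    O: exact update of the Ornstein-Uhlenbeck (friction + noise) part."""
    v += 0.5 * dt * force(x) / mass            # B: half kick
    x += 0.5 * dt * v                          # A: half drift
    c1 = math.exp(-gamma * dt)                 # O: exact OU solve
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)
    v = c1 * v + c2 * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * v                          # A: half drift
    v += 0.5 * dt * force(x) / mass            # B: half kick
    return x, v
```

Because the O part is solved exactly, the friction and noise remain in fluctuation-dissipation balance at any time step, one of the desiderata the paper enumerates.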
Photomask etch system and process for 10nm technology node and beyond
NASA Astrophysics Data System (ADS)
Chandrachood, Madhavi; Grimbergen, Michael; Yu, Keven; Leung, Toi; Tran, Jeffrey; Chen, Jeff; Bivens, Darin; Yalamanchili, Rao; Wistrom, Richard; Faure, Tom; Bartlau, Peter; Crawford, Shaun; Sakamoto, Yoshifumi
2015-10-01
While the industry is making progress to offer EUV lithography schemes to attain ultimate critical dimensions down to 20 nm half pitch, an interim optical lithography solution to address an immediate need for resolution is offered by various integration schemes using advanced PSM (Phase Shift Mask) materials including thin e-beam resist and hard mask. Using the 193nm wavelength to produce 10nm or 7nm patterns requires a range of optimization techniques, including immersion and multiple patterning, which place a heavy demand on photomask technologies. Mask schemes with a hard mask certainly help attain better selectivity and hence better resolution but pose integration challenges and defectivity issues. This paper presents a new photomask etch solution for attenuated phase shift masks that offers high selectivity (Cr:Resist > 1.5:1), tighter control of CD uniformity with a 3sigma value approaching 1 nm, and controllable CD bias (5-20 nm) with excellent CD linearity performance (<5 nm) down to finer resolutions. The new system has successfully demonstrated capability to meet the 10 nm node photomask CD requirements without the use of more complicated hard mask phase shift blanks. Significant improvement in post wet clean recovery performance was demonstrated by the use of advanced chamber materials. Examples of CD uniformity, linearity, minimum feature size, and etch bias performance on 10 nm test site and production mask designs will be shown.
Selimis, Georgios; Huang, Li; Massé, Fabien; Tsekoura, Ioanna; Ashouei, Maryam; Catthoor, Francky; Huisken, Jos; Stuyt, Jan; Dolmans, Guido; Penders, Julien; De Groot, Harmke
2011-10-01
In order for wireless body area networks (WBANs) to achieve widespread adoption, a number of security implications must be explored to promote and maintain fundamental medical ethical principles and social expectations. As a result, integration of security functionality into sensor nodes is required. Integrating security functionality into a wireless sensor node increases the size of the stored software program in program memory, the time the sensor's microprocessor needs to process the data, and the wireless network traffic exchanged among sensors. This security overhead has a dominant impact on the energy dissipation, which is strongly related to the lifetime of the sensor, a critical aspect in wireless sensor network (WSN) technology. A strict definition of the security functionality, a complete hardware model (microprocessor and radio), the WBAN topology, and the structure of the medium access control (MAC) frame are required for an accurate estimation of the energy that security introduces into the WBAN. In this work, we define a lightweight security scheme for WBANs and estimate the additional energy consumption that the security scheme introduces, based on commercially available off-the-shelf hardware components (microprocessor and radio), the network topology, and the MAC frame. Furthermore, we propose a new microcontroller design in order to reduce the energy consumption of the system. Experimental results and comparisons with other works are given.
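The kind of first-order energy accounting described above can be sketched as the sum of extra MCU processing time and the airtime of the extra MAC-frame bytes. All device numbers in the example below are illustrative placeholders, not measurements from the paper:

```python
def security_energy_overhead(extra_cycles, cpu_freq_hz, cpu_power_w,
                             extra_bytes, radio_rate_bps, radio_power_w):
    """First-order estimate of the per-frame energy a security scheme
    adds to a sensor node: cipher/MAC computation time on the
    microprocessor plus transmission time for the added frame bytes.
    Parameter values used by callers are illustrative."""
    e_cpu = (extra_cycles / cpu_freq_hz) * cpu_power_w        # joules
    e_radio = (extra_bytes * 8.0 / radio_rate_bps) * radio_power_w
    return e_cpu + e_radio
```

Summed over the node's frame rate, such an estimate translates directly into the lifetime reduction the paper is concerned with.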
Provenance based data integrity checking and verification in cloud environments
Imran, Muhammad; Hlavacs, Helmut; Haq, Inam Ul; Jan, Bilal; Khan, Fakhri Alam; Ahmad, Awais
2017-01-01
Cloud computing is a recent tendency in IT that moves computing and data away from desktop and hand-held devices into large scale processing hubs and data centers respectively. It has been proposed as an effective solution for data outsourcing and on demand computing to control the rising cost of IT setups and management in enterprises. However, with Cloud platforms user’s data is moved into remotely located storages such that users lose control over their data. This unique feature of the Cloud is facing many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns that needs to be addressed is to provide the proof of data integrity, i.e., correctness of the user’s data stored in the Cloud storage. The data in Clouds is physically not accessible to the users. Therefore, a mechanism is required where users can check if the integrity of their valuable data is maintained or compromised. For this purpose some methods are proposed like mirroring, checksumming and using third party auditors amongst others. However, these methods use extra storage space by maintaining multiple copies of data or the presence of a third party verifier is required. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track the violation of data integrity if occurred. For this purpose, we utilize a relatively new concept in the Cloud computing called “Data Provenance”. Our scheme is capable to reduce the need of any third party services, additional hardware support and the replication of data items on client side for integrity checking. PMID:28545151
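The provenance idea can be illustrated with a hash chain: each provenance record commits to its predecessor, so tampering with any stored item or record invalidates every later hash and is detectable by the user without a third-party verifier or replicated copies. A minimal sketch (the paper's actual scheme is richer; names here are ours):

```python
import hashlib

def chain_record(prev_hash, operation, data):
    """Append one provenance record: hash the previous record's hash
    together with the operation name and the data it produced."""
    h = hashlib.sha256()
    h.update(prev_hash + operation.encode() + data)
    return h.hexdigest().encode()

def build_provenance(events, genesis=b"genesis"):
    """Build the full hash chain for a list of (operation, data) events."""
    chain = [genesis]
    for op, data in events:
        chain.append(chain_record(chain[-1], op, data))
    return chain

def verify_provenance(chain, events, genesis=b"genesis"):
    """Recompute the chain from the claimed events; any mismatch
    signals an integrity violation somewhere in the history."""
    return chain == build_provenance(events, genesis)
```

Only the chain head needs to be kept trustworthy on the client side, which is how such schemes avoid storing full data replicas for integrity checking.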
NASA Astrophysics Data System (ADS)
Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques
2013-03-01
In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since most classical methods then give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, and especially on the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation, and theoretical results are provided. Then we show, regardless of the interpolation method, the need for a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
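The PPH idea can be sketched on the 4-point midpoint-subdivision rule: the linear rule corrects the two-point average using the arithmetic mean of two second differences, and PPH replaces that with a sign-aware harmonic mean, which vanishes across a sign change and so suppresses the overshoot behind the Gibbs phenomenon. The sketch below is our simplified illustration of the scheme:

```python
def harmonic_mean(x, y):
    """Sign-aware harmonic mean: zero across sign changes, which
    tames the correction term near discontinuities."""
    if x * y <= 0.0:
        return 0.0
    return 2.0 * x * y / (x + y)

def pph_midpoint(f0, f1, f2, f3):
    """PPH prediction of the value halfway between f1 and f2.
    The linear 4-point rule is (f1+f2)/2 - (d1+d2)/16 with second
    differences d1, d2; PPH swaps their arithmetic mean for the
    harmonic mean."""
    d1 = f0 - 2.0 * f1 + f2
    d2 = f1 - 2.0 * f2 + f3
    return 0.5 * (f1 + f2) - 0.125 * harmonic_mean(d1, d2)
```

On smooth data the harmonic and arithmetic means nearly coincide (the rule stays exact for quadratics), while near a jump the correction switches off and the prediction stays bounded by the neighboring values.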
Improved system integration for integrated gasification combined cycle (IGCC) systems.
Frey, H Christopher; Zhu, Yunhua
2006-03-01
Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.
77 FR 27832 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Scheme --Integration of women in the maritime sector --Global maritime training institutions --Impact... financial sustainability of the Organization --Voluntary IMO Member State Audit Scheme --Consideration of...
FD/DAMA Scheme For Mobile/Satellite Communications
NASA Technical Reports Server (NTRS)
Yan, Tsun-Yee; Wang, Charles C.; Cheng, Unjeng; Rafferty, William; Dessouky, Khaled I.
1992-01-01
Integrated-Adaptive Mobile Access Protocol (I-AMAP) proposed to allocate communication channels to subscribers in first-generation MSAT-X mobile/satellite communication network. Based on concept of frequency-division/demand-assigned multiple access (FD/DAMA) where partition of available spectrum adapted to subscribers' demands for service. Requests processed, and competing requests resolved according to channel-access protocol, or free-access tree algorithm described in "Connection Protocol for Mobile/Satellite Communications" (NPO-17735). Assigned spectrum utilized efficiently.
31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...
31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...
31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...
31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...
31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...
Global land-atmosphere coupling associated with cold climate processes
NASA Astrophysics Data System (ADS)
Dutra, Emanuel
This dissertation assesses the role of cold processes, associated with snow cover, in controlling land-atmosphere coupling. The work was based on model simulations, including offline simulations with the land surface model HTESSEL and coupled atmosphere simulations with the EC-EARTH climate model. A revised snow scheme was developed and tested in HTESSEL and EC-EARTH. The snow scheme is currently operational in the European Centre for Medium-Range Weather Forecasts integrated forecast system and in the default configuration of EC-EARTH. The improved representation of snowpack dynamics in HTESSEL resulted in improvements in the near-surface temperature simulations of EC-EARTH. The new snow scheme was complemented with an optional multi-layer version that showed its potential for modeling thick snowpacks. A key process was snow thermal insulation, which led to significant improvements in the surface water and energy balance components. Similar findings were observed when coupling the snow scheme to lake ice, where the simulated lake ice duration was significantly improved. An assessment of snow cover sensitivity to horizontal resolution, parameterizations, and atmospheric forcing within HTESSEL highlighted the dominant roles of atmospheric forcing accuracy and snowpack parameterization, rather than horizontal resolution, over flat regions. A set of experiments with and without free snow evolution was carried out with EC-EARTH to assess the impact of the interannual variability of snow cover on near-surface and soil temperatures. Snow cover interannual variability was found to explain up to 60% of the total interannual variability of near-surface temperature over snow-covered regions. Although these findings are model dependent, the results are consistent with previously published work, and the detailed validation of the snow dynamics simulations in HTESSEL and EC-EARTH guarantees the consistency of the results.
Zhao, Zhenguo; Shi, Wenbo
2014-01-01
Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem, and extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS scheme and UDVS scheme. Through concrete attacks, we demonstrate that neither of their schemes is unforgeable. The security analysis shows that their schemes are not suitable for practical applications. PMID:25025083
A Tightly-Coupled GPS/INS/UWB Cooperative Positioning Sensors System Supported by V2I Communication
Wang, Jian; Gao, Yang; Li, Zengke; Meng, Xiaolin; Hancock, Craig M.
2016-01-01
This paper investigates a tightly-coupled Global Positioning System (GPS)/Ultra-Wideband (UWB)/Inertial Navigation System (INS) cooperative positioning scheme using a Robust Kalman Filter (RKF) supported by V2I communication. The scheme proposes a method that uses range measurements of UWB units transmitted among the terminals as augmentation inputs of the observations. The UWB range inputs are used to reform the GPS observation equations, which consist of pseudo-range and Doppler measurements, and the updated observation equation is processed in a tightly-coupled GPS/UWB/INS integrated positioning equation using an adaptive Robust Kalman Filter. The results of the trial conducted on the roof of the Nottingham Geospatial Institute (NGI) at the University of Nottingham show that the integrated solution provides better accuracy and improves the availability of the system in GPS-denied environments. The RKF can eliminate the effects of gross errors. Additionally, the internal and external reliabilities of the system are enhanced when the UWB observables received from the moving terminals are involved in the positioning algorithm. PMID:27355947
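The robust-filter idea can be sketched in one dimension: the measurement update is down-weighted when the normalized innovation is implausibly large (a possible gross error, e.g. an NLOS UWB range or a multipath-corrupted pseudo-range). This is an illustrative simplification of the general approach, not the paper's full tightly-coupled filter; the threshold and weighting rule are our assumptions:

```python
def robust_kalman_update(x, p, z, r, threshold=2.0):
    """One scalar Kalman measurement update with a robustness weight:
    if the normalized innovation exceeds the threshold, the measurement
    variance r is inflated so the suspected outlier barely moves the
    state estimate x (variance p)."""
    innovation = z - x
    s = p + r                          # innovation variance
    t = abs(innovation) / s ** 0.5     # normalized innovation
    if t > threshold:
        r = r * (t / threshold) ** 2   # inflate r to down-weight outlier
        s = p + r
    k = p / s                          # Kalman gain
    x_new = x + k * innovation
    p_new = (1.0 - k) * p
    return x_new, p_new
```

Consistent measurements are absorbed with the usual gain, while a gross error contributes almost nothing, which is how the RKF "eliminates the effects of gross errors" in spirit.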
Networking and transmission based on the principle of laser multipoint communication
NASA Astrophysics Data System (ADS)
Fu, Qiang; Liu, Xianzhu; Jiang, Huilin; Hu, Yuan; Jiang, Lun
2014-11-01
Space laser communication is an ideal choice for the future integrated space-ground information backbone network. This paper introduces the structure of an integrated space-ground information network: a large-capacity, high-speed broadband network in which a variety of communication platforms, on land, at sea, in the air, and in deep space, are densely interconnected, adopting intelligent high-speed processing, switching, and routing technologies. Following the principle of maximizing the effective use of information resources, information is acquired accurately, processed quickly, and transmitted efficiently through inter-satellite, satellite-to-ground, and ground-station links; the result is an integrated space-based, air-based, and ground-based information network. Starting from trends in laser communication, the current state of laser multipoint communication is reviewed, and transmission schemes for dynamic multipoint wireless laser communication networks are studied in detail, with the characteristics and applicable scope of each scheme described: an optical multiplexer based on a multi-port configuration is suited to relay backbone links; one based on a segmented receiver field of view is suited to small-angle links; one based on a three-concentric-sphere structure is suited to short-distance, mobile scenarios; and a multi-point spliced structure based on a paraboloid of revolution is suited to inter-satellite communication. The multipoint laser communication terminal consists of transmitting and receiving antennas, a relay optical system, a beam-splitting system, and the communication transmitter and receiver. When an optical multiplexer serves four or more targets, it offers clear advantages in the ratio of received power to volume and weight, and it can flexibly track multiple moving targets. This work provides a reference for the construction of integrated space-ground information networks.
Artificial Neuron Based on Integrated Semiconductor Quantum Dot Mode-Locked Lasers
NASA Astrophysics Data System (ADS)
Mesaritakis, Charis; Kapsalis, Alexandros; Bogris, Adonis; Syvridis, Dimitris
2016-12-01
Neuro-inspired implementations have attracted strong interest as a power-efficient and robust alternative to the digital model of computation, with a broad range of applications. In particular, neuro-mimetic systems able to produce and process spike-encoding schemes can offer merits like high noise resiliency and increased computational efficiency. In this direction, integrated photonics can be an auspicious platform due to its multi-GHz bandwidth, its high wall-plug efficiency, and the strong similarity of its dynamics under excitation to those of biological spiking neurons. Here, we propose an integrated all-optical neuron based on an InAs/InGaAs semiconductor quantum-dot passively mode-locked laser. The multi-band emission capabilities of these lasers allow, through waveband switching, the emulation of the excitation and inhibition modes of operation. Frequency-response effects, similar to those of biological neural circuits, are observed just as in a typical two-section excitable laser. The demonstrated optical building block can pave the way for high-speed photonic integrated systems able to address tasks ranging from pattern recognition to cognitive spectrum management and multi-sensory data processing.
Artificial Neuron Based on Integrated Semiconductor Quantum Dot Mode-Locked Lasers
Mesaritakis, Charis; Kapsalis, Alexandros; Bogris, Adonis; Syvridis, Dimitris
2016-01-01
Neuro-inspired implementations have attracted strong interest as a power-efficient and robust alternative to the digital model of computation, with a broad range of applications. In particular, neuro-mimetic systems able to produce and process spike-encoding schemes can offer merits like high noise resiliency and increased computational efficiency. In this direction, integrated photonics can be an auspicious platform due to its multi-GHz bandwidth, its high wall-plug efficiency, and the strong similarity of its dynamics under excitation to those of biological spiking neurons. Here, we propose an integrated all-optical neuron based on an InAs/InGaAs semiconductor quantum-dot passively mode-locked laser. The multi-band emission capabilities of these lasers allow, through waveband switching, the emulation of the excitation and inhibition modes of operation. Frequency-response effects, similar to those of biological neural circuits, are observed just as in a typical two-section excitable laser. The demonstrated optical building block can pave the way for high-speed photonic integrated systems able to address tasks ranging from pattern recognition to cognitive spectrum management and multi-sensory data processing. PMID:27991574
Analysis of adaptive algorithms for an integrated communication network
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim
1985-01-01
Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. An integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.
Lu, Hai-Han; Li, Chung-Yi; Chen, Hwan-Wei; Ho, Chun-Ming; Cheng, Ming-Te; Huang, Sheng-Jhe; Yang, Zih-Yi; Lin, Xin-Yao
2016-07-25
A bidirectional fiber-wireless and fiber-invisible laser light communication (IVLLC) integrated system that employs a polarization-orthogonal modulation scheme for hybrid cable television (CATV)/microwave (MW)/millimeter-wave (MMW)/baseband (BB) signal transmission is proposed and demonstrated. To our knowledge, it is the first to adopt a polarization-orthogonal modulation scheme in a bidirectional fiber-wireless and fiber-IVLLC integrated system with a hybrid CATV/MW/MMW/BB signal. For downlink transmission, carrier-to-noise ratio (CNR), composite second-order (CSO), composite triple-beat (CTB), and bit error rate (BER) perform well over 40-km single-mode fiber (SMF) and 10-m RF/50-m optical wireless transport scenarios. For uplink transmission, good BER performance is obtained over a 40-km SMF and 50-m optical wireless transport scenario. Such a bidirectional fiber-wireless and fiber-IVLLC integrated system for hybrid CATV/MW/MMW/BB signal transmission is an attractive alternative for providing broadband integrated services, including CATV, Internet, and telecommunication services, and represents a promising step toward the convergence of fiber backbone and RF/optical wireless feeder networks.
Cultivation of students' engineering designing ability based on optoelectronic system course project
NASA Astrophysics Data System (ADS)
Cao, Danhua; Wu, Yubin; Li, Jingping
2017-08-01
We describe teaching based on a group of optoelectronics-related courses, aimed at junior students majoring in Optoelectronic Information Science and Engineering. The "Optoelectronic System Course Project" is product-design-oriented and lasts a whole semester. It gives students the chance to experience the whole process of product design and to improve their abilities to search the literature and to propose, evaluate, and implement design schemes. Each project topic is carefully selected and repeatedly refined to guarantee knowledge integrity, engineering relevance, and enjoyment. Moreover, we set up a team of professional and experienced teachers and build a learning community. Communication between students and teachers, as well as interaction among students, is taken seriously in order to improve teamwork and communication skills. Students thus not only review the knowledge hierarchy of optics, electronics, and computer science, but also strengthen their engineering mindset and innovation consciousness.
Design of a superconducting 28 GHz ion source magnet for FRIB using a shell-based support structure
Felice, H.; Rochepault, E.; Hafalia, R.; ...
2014-12-05
The Superconducting Magnet Program at the Lawrence Berkeley National Laboratory (LBNL) is completing the design of a 28 GHz NbTi ion source magnet for the Facility for Rare Isotope Beams (FRIB). The design parameters are based on the parameters of the ECR ion source VENUS, in operation at LBNL since 2002, featuring a sextupole-in-solenoids configuration. Whereas most of the magnet components (such as conductor, magnetic design, and protection scheme) remain very similar to the VENUS magnet components, the support structure of the FRIB ion source uses a different concept. A shell-based support structure using bladders and keys is implemented in the design, allowing fine tuning of the sextupole preload and reversibility of the magnet assembly process. As part of the design work, the conductor insulation scheme, coil fabrication processes, and assembly procedures are also explored to optimize performance. We present the main features of the design, emphasizing the integrated design approach used at LBNL to achieve this result.
NASA Astrophysics Data System (ADS)
Ohta, Ayumi; Kobayashi, Osamu; Danielache, Sebastian O.; Nanbu, Shinkoh
2017-03-01
The ultra-fast photoisomerization reactions between 1,3-cyclohexadiene (CHD) and 1,3,5-cis-hexatriene (HT) in both hexane and ethanol solvents were revealed by nonadiabatic ab initio molecular dynamics (AI-MD) with a particle-mesh Ewald summation method and our Own N-layered Integrated molecular Orbital and molecular Mechanics model (PME-ONIOM) scheme. The Zhu-Nakamura trajectory surface hopping method (ZN-TSH) was employed to treat the ultra-fast nonadiabatic decay process. The results of the hexane and ethanol simulations agree reasonably with experimental data. A high nonpolar-nonpolar affinity between CHD and the solvent was observed in hexane, which clearly affected the excited-state lifetimes, the CHD:HT product branching ratio, and the solute (CHD) dynamics. In ethanol, however, the CHD solute was isomerized within the solvent cage formed by the first solvation shell. The photochemical dynamics in ethanol is similar to that of isolated CHD in vacuo.
NASA Astrophysics Data System (ADS)
Paardekooper, S.-J.
2017-08-01
We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^(1/2))] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
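The Euler-Maruyama versus Milstein improvement can be illustrated on a scalar SDE with a known exact solution. The sketch below uses geometric Brownian motion rather than the Coulomb-collision equations of the paper; in this one-dimensional case no area integrals arise, and the Itô correction term alone supplies the higher strong order. Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def strong_errors(mu, sigma, x0, T, n_steps, n_paths):
    """Strong (pathwise) error at time T of Euler-Maruyama and Milstein for
    geometric Brownian motion dX = mu*X dt + sigma*X dW, measured against
    the exact solution driven by the same Brownian increments."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    x_em = np.full(n_paths, x0)
    x_mil = np.full(n_paths, x0)
    for k in range(n_steps):
        dw = dW[:, k]
        x_em = x_em + mu * x_em * dt + sigma * x_em * dw
        # Milstein adds the Ito correction 0.5*sigma^2*X*(dW^2 - dt)
        x_mil = (x_mil + mu * x_mil * dt + sigma * x_mil * dw
                 + 0.5 * sigma**2 * x_mil * (dw**2 - dt))
    W_T = dW.sum(axis=1)
    x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
    return (np.mean(np.abs(x_em - x_exact)),
            np.mean(np.abs(x_mil - x_exact)))

err_em, err_mil = strong_errors(mu=1.5, sigma=1.0, x0=1.0, T=1.0,
                                n_steps=64, n_paths=2000)
```

At a fixed step size the Milstein error is visibly smaller; halving dt repeatedly would show the O(Δt) versus O(Δt^(1/2)) convergence rates quoted above.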
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping
2017-12-01
In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum and classical communication.
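The sign/value encoding idea can be sketched over an idealized noiseless channel. The modulation variance and the helper functions below are illustrative assumptions, not the authors' system; real continuous-variable links add channel noise, loss, and key distillation on top of this.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(bits):
    """Illustrative encoding: the classical (encrypted) bit rides on the
    sign of the quadrature value, while the magnitude carries a Gaussian
    random number usable for key distillation."""
    mags = np.abs(rng.normal(0.0, 2.0, size=len(bits)))   # Gaussian magnitudes
    return np.where(np.asarray(bits) == 1, mags, -mags)   # sign carries the bit

def decode(symbols):
    bits = (symbols > 0).astype(int)    # sign -> classical bit
    values = np.abs(symbols)            # magnitude -> Gaussian data
    return bits, values

tx_bits = [1, 0, 0, 1, 1, 0, 1, 0]
symbols = encode(tx_bits)
rx_bits, rx_values = decode(symbols)    # noiseless channel assumed
```

Both data streams occupy the same symbols, which is the sense in which quantum and classical data share one physical and logical channel.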
A computationally efficient scheme for the non-linear diffusion equation
NASA Astrophysics Data System (ADS)
Termonia, P.; Van de Vyver, H.
2009-04-01
This Letter proposes a new numerical scheme for integrating the non-linear diffusion equation and shows that it is linearly stable. Tests are presented comparing this scheme to a popular decentered version of the linearized Crank-Nicholson scheme, showing that, although the new scheme is slightly less accurate in treating the highly resolved waves, it (i) better treats highly non-linear systems, (ii) better handles the short waves, (iii) turns out to be three to four times computationally cheaper for a given test bed, and (iv) is easier to implement.
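For reference, here is what a minimal scheme for the non-linear diffusion equation u_t = (D(u) u_x)_x looks like: an explicit, conservative finite-volume step with zero-flux boundaries. This is a generic baseline sketch under illustrative parameters, not the Letter's scheme, which targets much larger time steps.

```python
import numpy as np

def step(u, dx, dt, D):
    """One explicit conservative step for u_t = (D(u) u_x)_x with
    zero-flux boundaries; D is evaluated at the cell interfaces."""
    Dh = 0.5 * (D(u[:-1]) + D(u[1:]))          # interface diffusivities
    F = np.concatenate(([0.0], Dh * (u[1:] - u[:-1]) / dx, [0.0]))
    return u + dt * (F[1:] - F[:-1]) / dx      # flux-difference update

n = 64
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2) + 0.1      # smooth bump on a pedestal
D = lambda v: v                                 # non-linear diffusivity D(u) = u
mass0 = u.sum() * dx
for _ in range(200):                            # dt respects the explicit limit
    u = step(u, dx, dt=2e-5, D=D)
```

The flux form conserves total mass to round-off, and the explicit stability restriction dt < dx^2 / (2 max D) is exactly the cost that implicit-type schemes such as the Letter's are designed to avoid.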
Combining image-processing and image compression schemes
NASA Technical Reports Server (NTRS)
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.
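A minimal pyramid-coding building block can be sketched as an analysis/synthesis pair. Block averaging and pixel replication below stand in for the Gaussian filtering used in real pyramid coders; the point is the structure, coarse residual plus per-level detail bands, that compression (and the enhancement discussed above) operates on.

```python
import numpy as np

def down(img):
    """2x downsample by 2x2 block averaging (crude Gaussian-pyramid stand-in)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    """2x upsample by pixel replication."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        small = down(img)
        pyr.append(img - up(small))    # detail (Laplacian) band
        img = small
    pyr.append(img)                    # coarsest residual
    return pyr

def reconstruct(pyr):
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = up(img) + detail
    return img

rng = np.random.default_rng(2)
img = rng.random((32, 32))
pyr = laplacian_pyramid(img, levels=3)
out = reconstruct(pyr)
```

Reconstruction is exact by construction; a coder quantizes the detail bands, and progressive transmission sends the coarsest residual first.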
Lu, Dan; Joshi, Amita; Wang, Bei; Olsen, Steve; Yi, Joo-Hee; Krop, Ian E; Burris, Howard A; Girish, Sandhya
2013-08-01
Trastuzumab emtansine (T-DM1) is an antibody-drug conjugate recently approved by the US Food and Drug Administration for the treatment of human epidermal growth factor receptor 2 (HER2)-positive metastatic breast cancer previously treated with trastuzumab and taxane chemotherapy. It comprises the microtubule inhibitory cytotoxic agent DM1 conjugated to the HER2-targeted humanized monoclonal antibody trastuzumab via a stable linker. To characterize the pharmacokinetics of T-DM1 in patients with metastatic breast cancer, concentrations of multiple analytes were quantified, including serum concentrations of T-DM1 conjugate and total trastuzumab (the sum of conjugated and unconjugated trastuzumab), as well as plasma concentrations of DM1. The clearance of T-DM1 conjugate is approximately 2 to 3 times faster than its parent antibody, trastuzumab. However, the clearance pathways accounting for this faster clearance rate are unclear. An integrated population pharmacokinetic model that simultaneously fits the pharmacokinetics of T-DM1 conjugate and total trastuzumab can help to elucidate the clearance pathways of T-DM1. The model can also be used to predict total trastuzumab pharmacokinetic profiles based on T-DM1 conjugate pharmacokinetic data and sparse total trastuzumab pharmacokinetic data, thereby reducing the frequency of pharmacokinetic sampling. T-DM1 conjugate and total trastuzumab serum concentration data, including baseline trastuzumab concentrations prior to T-DM1 treatment, from phase I and II studies were used to develop this integrated population pharmacokinetic model. Based on a hypothetical T-DM1 catabolism scheme, two-compartment models for T-DM1 conjugate and trastuzumab were integrated by assuming a one-step deconjugation clearance from T-DM1 conjugate to trastuzumab. 
The ability of the model to predict the total trastuzumab pharmacokinetic profile based on T-DM1 conjugate pharmacokinetics and various sampling schemes of total trastuzumab pharmacokinetics was assessed to evaluate total trastuzumab sampling schemes. The final model reflects a simplified catabolism scheme of T-DM1, suggesting that T-DM1 clearance pathways include both deconjugation and proteolytic degradation. The model fits T-DM1 conjugate and total trastuzumab pharmacokinetic data simultaneously. The deconjugation clearance of T-DM1 was estimated to be ~0.4 L/day. Proteolytic degradation clearances for T-DM1 and trastuzumab were similar (~0.3 L/day). This model accurately predicts total trastuzumab pharmacokinetic profiles based on T-DM1 conjugate pharmacokinetic data and sparse total trastuzumab pharmacokinetic data sampled at preinfusion and end of infusion in cycle 1, and in one additional steady state cycle. This semi-mechanistic integrated model links T-DM1 conjugate and total trastuzumab pharmacokinetic data, and supports the inclusion of both proteolytic degradation and deconjugation as clearance pathways in the hypothetical T-DM1 catabolism scheme. The model attributes a faster T-DM1 conjugate clearance versus that of trastuzumab to the presence of a deconjugation process and suggests a similar proteolytic clearance of T-DM1 and trastuzumab. Based on the model and T-DM1 conjugate pharmacokinetic data, a sparse pharmacokinetic sampling scheme for total trastuzumab provides an entire pharmacokinetic profile with similar predictive accuracy to that of a dense pharmacokinetic sampling scheme.
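The hypothetical catabolism scheme can be caricatured with a drastic simplification: a single compartment per analyte, with the conjugate losing drug by deconjugation (feeding the trastuzumab pool) and by proteolysis, and trastuzumab clearing by proteolysis alone. The clearances (~0.4 and ~0.3 L/day) follow the abstract; the volume, dose, and the collapse of the two-compartment structure are illustrative assumptions.

```python
# Forward-Euler integration of the simplified deconjugation/proteolysis model.
CL_dec, CL_prot_c, CL_prot_t, V = 0.4, 0.3, 0.3, 3.0   # L/day, L/day, L/day, L
dose = 100.0                                            # mg, single IV bolus (assumed)
dt = 0.01                                               # days
a_c, a_t = dose, 0.0                                    # drug amounts in mg
conj, total = [], []
for _ in range(int(21 / dt)):                           # one 21-day cycle
    dec_flow = CL_dec / V * a_c                         # deconjugation flux (mg/day)
    a_c += dt * (-(CL_dec + CL_prot_c) / V * a_c)       # conjugate loses both ways
    a_t += dt * (dec_flow - CL_prot_t / V * a_t)        # trastuzumab gains, then clears
    conj.append(a_c / V)                                # conjugate concentration
    total.append((a_c + a_t) / V)                       # total trastuzumab concentration
```

Even this caricature reproduces the qualitative finding: the conjugate's effective clearance (deconjugation plus proteolysis, 0.7 L/day) is roughly 2.3 times the trastuzumab proteolytic clearance, so the conjugate concentration falls well below total trastuzumab over the cycle.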
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
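The flavor of an implicit, iterative integrator for stiff equations (as opposed to the self-adaptive forward Euler it is compared against) can be shown with backward Euler plus a Newton iteration on a scalar stiff test problem. This is a generic sketch, not the HYPELA algorithm; the test equation and tolerances are illustrative.

```python
import math

def backward_euler(f, dfdy, y0, t0, t1, n):
    """Backward Euler with a scalar Newton iteration per step: solve
    y_{k+1} = y_k + h*f(t_{k+1}, y_{k+1}) implicitly, which remains
    stable for stiff problems even with large time increments."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t += h
        z = y                                   # Newton initial guess
        for _ in range(20):
            g = z - y - h * f(t, z)             # residual of the implicit step
            z -= g / (1.0 - h * dfdy(t, z))     # Newton update
            if abs(g) < 1e-12:
                break
        y = z
    return y

lam = 1e4                                        # stiffness parameter
f = lambda t, y: -lam * (y - math.cos(t))        # solution relaxes onto cos(t)
dfdy = lambda t, y: -lam
y = backward_euler(f, dfdy, y0=0.0, t0=0.0, t1=2.0, n=40)   # h = 0.05 >> 1/lam
```

With h = 0.05 an explicit Euler step (stability limit near 2/lam = 2e-4) would blow up; the implicit scheme tracks the slow solution accurately with 40 steps.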
NASA Technical Reports Server (NTRS)
Chulya, A.; Walker, K. P.
1989-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
Blast furnace supervision and control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remorino, M.; Lingiardi, O.; Zecchi, M.
1997-12-31
In December 1992, a group of companies headed by Techint took over Somisa, the state-owned integrated steel plant located at San Nicolas, Province of Buenos Aires, Argentina, culminating an ambitious government privatization scheme. Blast furnace 2 went into full reconstruction and relining in January 1995. After a 140 MU$ investment, the new blast furnace 2 was started in September 1995. After more than one year of operation of the blast furnace, the system has proven itself useful and reliable. The main reasons for the success of the system are: the same user interface for all blast furnace areas (operation, process, maintenance, and management), giving full horizontal and vertical integration; and full accessibility to all information and process tools, though some restrictions apply to field commands (people empowerment). The paper describes the central system.
NASA Technical Reports Server (NTRS)
Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.
2016-01-01
The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to much denser detector packing and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.
78 FR 32698 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
... --Partnerships for progress --Voluntary IMO Member State Audit Scheme --Integration of women in the maritime... Member State Audit Scheme --Consideration of the report of the Maritime Safety Committee --Consideration...
Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity
NASA Astrophysics Data System (ADS)
Bridges, Thomas J.; Reich, Sebastian
2001-06-01
The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.
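The uni-directional ODE building block referred to above, a symplectic one-step method whose long-run energy error stays bounded rather than drifting, can be demonstrated with Störmer-Verlet on a harmonic oscillator. This is the standard finite-dimensional example, not the multi-symplectic PDE construction itself; step size and run length are illustrative.

```python
import math

def verlet(q, p, h, n, dVdq):
    """Stormer-Verlet (kick-drift-kick), a symplectic one-step method."""
    for _ in range(n):
        p -= 0.5 * h * dVdq(q)   # half kick
        q += h * p               # drift
        p -= 0.5 * h * dVdq(q)   # half kick
    return q, p

dVdq = lambda q: q               # harmonic oscillator, H = (p^2 + q^2)/2
q0, p0 = 1.0, 0.0
h, n = 0.05, 20000               # 1000 time units in 20,000 steps
q, p = verlet(q0, p0, h, n, dVdq)
E0 = 0.5 * (p0**2 + q0**2)
E = 0.5 * (p**2 + q**2)
```

After 20,000 steps the energy error remains at the O(h^2) level instead of growing, which is exactly the conservation behavior the multi-symplectic framework seeks to carry over to Hamiltonian PDEs by concatenating such integrators in space and time.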
Upon Generating (2+1)-dimensional Dynamical Systems
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Bai, Yang; Wu, Lixin
2016-06-01
Under the framework of the Adler-Gel'fand-Dikii (AGD) scheme, we first propose two Hamiltonian operator pairs over a noncommutative ring and construct a new dynamical system in 2+1 dimensions; we then obtain a generalized special Novikov-Veselov (NV) equation via the Manakov triple. With the aid of a special symmetric Lie algebra of a reductive homogeneous group G, we adopt the Tu-Andrushkiw-Huang (TAH) scheme to generate a new integrable (2+1)-dimensional dynamical system and its Hamiltonian structure, which can be reduced to the well-known (2+1)-dimensional Davey-Stewartson (DS) hierarchy. We further extend the binormial residue representation (briefly BRR) scheme to super higher-dimensional integrable hierarchies with the help of a super subalgebra of the super Lie algebra sl(2/1), which is also a symmetric Lie algebra of the reductive homogeneous group G. As applications, we obtain a super (2+1)-dimensional MKdV hierarchy which can be reduced to a super (2+1)-dimensional generalized AKNS equation. Finally, we compare the advantages and shortcomings of the three schemes for generating integrable dynamical systems.
Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.
2006-05-01
In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As illustration of the described methods and algorithms, we construct a number of difference schemes for Burgers and Falkowich-Karman equations and discuss their numerical properties.
Automation of closed environments in space for human comfort and safety
NASA Technical Reports Server (NTRS)
1992-01-01
This report culminates the work accomplished during a three year design project on the automation of an Environmental Control and Life Support System (ECLSS) suitable for space travel and colonization. The system would provide a comfortable living environment in space that is fully functional with limited human supervision. A completely automated ECLSS would increase astronaut productivity while contributing to their safety and comfort. The first section of this report, section 1.0, briefly explains the project, its goals, and the scheduling used by the team in meeting these goals. Section 2.0 presents an in-depth look at each of the component subsystems. Each subsection describes the mathematical modeling and computer simulation used to represent that portion of the system. The individual models have been integrated into a complete computer simulation of the CO2 removal process. In section 3.0, the two simulation control schemes are described. The classical control approach uses traditional methods to control the mechanical equipment. The expert control system uses fuzzy logic and artificial intelligence to control the system. By integrating the two control systems with the mathematical computer simulation, the effectiveness of the two schemes can be compared. The results are then used as proof of concept in considering new control schemes for the entire ECLSS. Section 4.0 covers the results and trends observed when the model was subjected to different test situations. These results provide insight into the operating procedures of the model and the different control schemes. The appendix, section 5.0, contains summaries of lectures presented during the past year, homework assignments, and the completed source code used for the computer simulation and control system.
Integrity Verification for Multiple Data Copies in Cloud Storage Based on Spatiotemporal Chaos
NASA Astrophysics Data System (ADS)
Long, Min; Li, You; Peng, Fei
Aiming to strike a balance among the security, efficiency, and availability of data verification in cloud storage, a novel integrity verification scheme based on spatiotemporal chaos is proposed for multiple data copies. Spatiotemporal chaos is implemented for node calculation of the binary tree, and the location of the data in the cloud is verified. Meanwhile, dynamic operations can be made to the data. Furthermore, blind information is used to prevent a third-party auditor (TPA) from leaking the users' data privacy in the public auditing process. Performance analysis and discussion indicate that the scheme is secure and efficient, and that it supports dynamic operations and the integrity verification of multiple copies of data. It has great potential to be implemented in cloud storage services.
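The binary-tree verification idea can be sketched with a Merkle-style root computation. SHA-256 stands in here for the spatiotemporal-chaos node function the paper proposes; the block names are hypothetical, and the dynamic-operation and blinding machinery is omitted.

```python
import hashlib

def h(data: bytes) -> bytes:
    # The paper computes tree nodes with a spatiotemporal-chaos map;
    # SHA-256 is used here purely as a stand-in node function.
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Binary-tree integrity tag over data blocks, as in tree-based
    cloud-storage verification schemes."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate an odd trailing node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"copy-1-chunk-0", b"copy-1-chunk-1", b"copy-1-chunk-2"]
root = merkle_root(blocks)
tampered = merkle_root([b"copy-1-chunk-0", b"copy-1-chunk-X", b"copy-1-chunk-2"])
```

The verifier stores only the root; any modification to any chunk of any copy changes the recomputed root, which is the basis for challenging the cloud on both content and location.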
Zou, Bin; Jiang, Xiaolu; Duan, Xiaoli; Zhao, Xiuge; Zhang, Jing; Tang, Jingwen; Sun, Guoqing
2017-03-23
Traditional sampling for soil pollution evaluation is cost intensive and has limited representativeness. Developing methods that can accurately and rapidly identify at-risk areas and the contributing pollutants is therefore imperative for soil remediation. In this study, we propose an innovative integrated H-G scheme combining human health risk assessment and geographical detector methods, based on geographical information system technology, and validate its feasibility in a renewable resource industrial park in mainland China. With a discrete site investigation of cadmium (Cd), arsenic (As), copper (Cu), mercury (Hg) and zinc (Zn) concentrations, continuous surfaces of the carcinogenic and non-carcinogenic risks caused by these heavy metals were estimated and mapped. Source apportionment analysis using geographical detector methods further revealed that these risks were primarily attributable to As, according to the power of the determinant and its synergic actions with the other heavy metals. Concentrations of the critical pollutants As and Cd, and the associated carcinogenic risks, are close to safe thresholds after remediation of the risk areas identified by the integrated H-G scheme. The integrated H-G scheme therefore provides an effective approach to support decision-making for regional contaminated-soil remediation at fine spatial resolution with limited sampling data over a large geographical extent.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
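As a concrete illustration of the piecewise definition of divided differences described above, the sketch below evaluates the first divided difference of the exponential, switching to a truncated series in a neighbourhood of the removable singularity at coincident arguments. The tolerance and series length are illustrative choices, not the optimal approximation derived in the paper.

```python
import math

def dd_exp(a, b, tol=1e-3):
    """First divided difference exp[a, b] = (e^b - e^a) / (b - a).

    Near the removable singularity b ~ a the direct formula suffers
    catastrophic cancellation, so a truncated Taylor series of
    e^a * phi1(b - a), with phi1(h) = (e^h - 1)/h, is used instead.
    """
    h = b - a
    if abs(h) > tol:
        return (math.exp(b) - math.exp(a)) / h
    # series: phi1(h) = sum_{k>=0} h^k / (k+1)!
    phi1, term = 0.0, 1.0
    for k in range(20):
        phi1 += term
        term *= h / (k + 2)
    return math.exp(a) * phi1
```

Note that at exactly coincident arguments the series reduces to `exp[a, a] = e^a`, the derivative of the exponential, as it should.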
A simple molecular mechanics integrator in mixed rigid body and dihedral angle space
Vitalis, Andreas; Pappu, Rohit V.
2014-01-01
We propose a numerical scheme to integrate equations of motion in a mixed space of rigid-body and dihedral angle coordinates. The focus of the presentation is biomolecular systems and the framework is applicable to polymers with tree-like topology. By approximating the effective mass matrix as diagonal and lumping all bias torques into the time dependencies of the diagonal elements, we take advantage of the formal decoupling of individual equations of motion. We impose energy conservation independently for every degree of freedom and this is used to derive a numerical integration scheme. The cost of all auxiliary operations is linear in the number of atoms. By coupling the scheme to one of two popular thermostats, we extend the method to sample constant temperature ensembles. We demonstrate that the integrator of choice yields satisfactory stability and is free of mass-metric tensor artifacts, which is expected by construction of the algorithm. Two fundamentally different systems, viz., liquid water and an α-helical peptide in a continuum solvent are used to establish the applicability of our method to a wide range of problems. The resultant constant temperature ensembles are shown to be thermodynamically accurate. The latter relies on detailed, quantitative comparisons to data from reference sampling schemes operating on exactly the same sets of degrees of freedom. PMID:25053299
NASA Astrophysics Data System (ADS)
Wang, Lanjing; Shao, Wenjing; Wang, Zhiyue; Fu, Wenfeng; Zhao, Wensheng
2018-02-01
Taking as an example the MEA chemical absorption carbon capture system, with an 85% carbon capture rate, of a 660 MW ultra-supercritical unit, this paper puts forward a new type of turbine dedicated to supplying steam to the carbon capture system. The thermal systems of the power plant under different steam supply schemes were compared using EBSILON to identify the optimal extraction scheme for the steam extraction system in the carbon capture system. The results show that the cycle heat efficiency of a unit incorporating the carbon capture turbine is higher than that of the usual scheme without it. With the carbon capture turbine introduced, the scheme that extracts steam from the high-pressure cylinder’s steam input point shows the highest cycle thermal efficiency. Its indexes are superior to those of the other schemes, and it is more suitable for existing coal-fired power plants integrated with post-combustion carbon dioxide capture systems.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified image regularization scheme is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
NASA Astrophysics Data System (ADS)
Iakshina, D. F.; Golubeva, E. N.
2017-11-01
The vertical distribution of hydrological characteristics in the upper ocean layer is formed mostly under the influence of turbulent and convective mixing, which are not resolved in the system of equations for the large-scale ocean. It is therefore necessary to include additional parameterizations of these processes in numerical models. In this paper we carry out a comparative analysis of different vertical mixing parameterizations in simulations of the climatic variability of Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed at the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Sciences) and the GOTM package (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used to determine the surface fluxes related to ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: (1) an integration scheme based on the Richardson criterion (RI); (2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); (3) a first-order TKE scheme with Schumann and Gerz coefficients (TKE-1); (4) the KPP scheme (KPP). In addition, we investigated some important characteristics of the Arctic Ocean state, including the intensity of Atlantic water inflow, the ice cover state and the freshwater content of the Beaufort Sea.
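The Richardson-criterion approach (scheme 1 above) can be illustrated with the classic Pacanowski-Philander form, in which eddy viscosity and diffusivity decay with increasing Richardson number. The functional form and default constants below are the commonly quoted ones, not necessarily those used in the ICMMG model.

```python
def pp81_viscosity(Ri, nu0=1e-2, nu_b=1e-4, alpha=5.0, n=2):
    """Richardson-number-dependent vertical eddy viscosity (m^2/s),
    after Pacanowski & Philander (1981): nu = nu0/(1 + alpha*Ri)^n + nu_b."""
    Ri = max(Ri, 0.0)  # treat unstable stratification (Ri < 0) as strong mixing
    return nu0 / (1.0 + alpha * Ri) ** n + nu_b

def pp81_diffusivity(Ri, nu0=1e-2, kappa_b=1e-5, alpha=5.0, n=2):
    """Tracer diffusivity uses one more factor of (1 + alpha*Ri)."""
    Ri = max(Ri, 0.0)
    return nu0 / (1.0 + alpha * Ri) ** (n + 1) + kappa_b
```

Both coefficients collapse to their background values for strongly stable stratification (large Ri), which is the intended suppression of mixing.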
Investigating students’ failure in fractional concept construction
NASA Astrophysics Data System (ADS)
Kurniawan, Henry; Sutawidjaja, Akbar; Rahman As’ari, Abdur; Muksar, Makbul; Setiawan, Iwan
2018-04-01
Failure here means failing to achieve a goal. This failure is related to how a larger scheme integrates the schemes in the mind that are related to the problem at hand: these schemes are integrated so that they interconnect to form new structures, and this new scheme structure is used to interpret the problem at hand. This research is a qualitative study conducted to trace students’ failure in fractional concept construction. The subjects were 2 students selected from 15 on the basis of pre-set criteria, forming two groups that failed to solve the problem: group 1 traces the failure of subject S1 and group 2 traces the failure of subject S2.
Three-dimensional femtosecond laser processing for lab-on-a-chip applications
NASA Astrophysics Data System (ADS)
Sima, Felix; Sugioka, Koji; Vázquez, Rebeca Martínez; Osellame, Roberto; Kelemen, Lóránd; Ormos, Pal
2018-02-01
The extremely high peak intensity associated with the ultrashort pulse width of the femtosecond laser allows us to induce nonlinear interactions, such as multiphoton absorption and tunneling ionization, with materials that are transparent to the laser wavelength. More importantly, focusing the femtosecond laser beam inside transparent materials confines the nonlinear interaction to the focal volume, enabling three-dimensional (3D) micro- and nanofabrication. This 3D capability offers three different schemes, which involve undeformative, subtractive, and additive processing. Undeformative processing performs internal refractive-index modification to construct optical microcomponents, including optical waveguides. Subtractive processing can realize the direct fabrication of 3D microfluidics, micromechanics, microelectronics, and photonic microcomponents in glass. Additive processing, represented by two-photon polymerization, enables the fabrication of 3D polymer micro- and nanostructures for photonic and microfluidic devices. These different schemes can be integrated to realize more functional microdevices, including lab-on-a-chip devices, which are miniaturized laboratories that can perform reaction, detection, analysis, separation, and synthesis of biochemical materials with high efficiency, high speed, high sensitivity, low reagent consumption, and low waste production. This review paper describes the principles and applications of femtosecond laser 3D micro- and nanofabrication for lab-on-a-chip applications. A hybrid technique that promises to enhance the functionality of lab-on-a-chip devices is also introduced.
Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram
2014-06-01
Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) for the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that his scheme is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to erase those drawbacks. We analyze Yan et al.'s scheme and identify that it is vulnerable to an off-line password guessing attack and does not protect anonymity. Moreover, in their scheme the login and password change phases are inefficient at identifying the correctness of input, and the inefficiency of the password change phase can cause a denial of service attack. Further, we design an improved scheme for TMIS with the aim of eliminating the drawbacks of Yan et al.'s scheme.
A co-designed equalization, modulation, and coding scheme
NASA Technical Reports Server (NTRS)
Peile, Robert E.
1992-01-01
The commercial impact and technical success of Trellis Coded Modulation seem to illustrate that, if Shannon's capacity is to be approached, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise and has tried to combine the coding/modulation with adaptive equalization. The motive is to gain similar advances on less perfect or idealized channels.
GOBF-ARMA based model predictive control for an ideal reactive distillation column.
Seban, Lalu; Kirubakaran, V; Roy, B K; Radhakrishnan, T K
2015-11-01
This paper discusses the control of an ideal reactive distillation column (RDC) using model predictive control (MPC) based on a combination of deterministic generalized orthonormal basis filter (GOBF) and stochastic autoregressive moving average (ARMA) models. Reactive distillation (RD) integrates reaction and distillation in a single process, resulting in process and energy integration and promoting green-chemistry principles. Improved selectivity of products, increased conversion, better utilization and control of reaction heat, scope for difficult separations and the avoidance of azeotropes are some of the advantages that reactive distillation offers over the conventional technique of a distillation column placed after a reactor. The introduction of an in situ separation in the reaction zone leads to complex interactions between vapor-liquid equilibrium, mass transfer rates, diffusion and chemical kinetics. RD, with its high-order, nonlinear dynamics and multiple steady states, is a good candidate for testing and verification of new control schemes. Here a combination of GOBF-ARMA models is used to capture and represent the dynamics of the RDC. This GOBF-ARMA model is then used to design an MPC scheme for the control of product purity of the RDC under different operating constraints and conditions. The performance of the proposed modeling and control using GOBF-ARMA based MPC is simulated and analyzed. The proposed controller is found to perform satisfactorily for reference tracking and disturbance rejection in the RDC. Copyright © 2015 Elsevier Inc. All rights reserved.
GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method
NASA Astrophysics Data System (ADS)
Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.
2014-07-01
There has been a significant reduction in processing time and a speedup of performance in computer graphics with the emergence of Graphics Processing Units (GPUs). GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area of computing and research in which the highly parallel GPU is used for non-graphical algorithms. Physical or phenomenological simulations and modelling can be accelerated through General Purpose Graphics Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented by mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rates of change between independent and dependent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme was specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme on GPU devices and compares its results with the Dormand-Prince method. A pseudocode is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation in the GPU, the formation of RKF kernels and the flow of data to/from the GPU and CPU upon RKF kernel evaluation. The pseudocode is then written in the C language and two ODE models are executed to show the achievable speedup as compared to a CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
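A minimal sketch of the embedded-estimate idea: one RKF 4(5) step returns both a fifth-order solution and a local error estimate obtained from the difference between the embedded fourth- and fifth-order results. The scalar form below uses the standard Fehlberg tableau and omits the step-size controller.

```python
def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg 4(5) step for y' = f(t, y), scalar y.
    Returns the 5th-order solution and the embedded error estimate
    |y5 - y4| used for step-size control."""
    k1 = f(t, y)
    k2 = f(t + h/4,     y + h*k1/4)
    k3 = f(t + 3*h/8,   y + h*(3*k1/32 + 9*k2/32))
    k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
    k5 = f(t + h,       y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
    k6 = f(t + h/2,     y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                               + 1859*k4/4104 - 11*k5/40))
    y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
    y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                - 9*k5/50 + 2*k6/55)
    return y5, abs(y5 - y4)
```

On a GPU, many such steps (one per ODE instance or per grid point) would be evaluated in parallel inside a kernel; the step itself is identical.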
Should learners reason one step at a time? A randomised trial of two diagnostic scheme designs.
Blissett, Sarah; Morrison, Deric; McCarty, David; Sibbald, Matthew
2017-04-01
Making a diagnosis can be difficult for learners as they must integrate multiple clinical variables. Diagnostic schemes can help learners with this complex task. A diagnostic scheme is an algorithm that organises possible diagnoses by assigning signs or symptoms (e.g. systolic murmur) to groups of similar diagnoses (e.g. aortic stenosis and aortic sclerosis) and provides distinguishing features to help discriminate between similar diagnoses (e.g. carotid pulse). The current literature does not identify whether scheme layouts should guide learners to reason one step at a time in a terminally branching scheme or to weigh multiple variables simultaneously in a hybrid scheme. We compared diagnostic accuracy, perceptual errors and cognitive load using two scheme layouts for cardiac auscultation. Focused on the task of identifying murmurs on Harvey, a cardiopulmonary simulator, 86 internal medicine residents used two scheme layouts. The terminally branching scheme organised the information into single-variable decisions. The hybrid scheme combined single-variable decisions with a chart integrating multiple distinguishing features. Using a crossover design, participants completed one set of murmurs (diastolic or systolic) with either the terminally branching or the hybrid scheme. The second set of murmurs was completed with the other scheme. A repeated-measures MANOVA was performed to compare diagnostic accuracy, perceptual errors and cognitive load between the scheme layouts. There was a main effect of the scheme layout (Wilks' λ = 0.841, F 3,80 = 5.1, p = 0.003). Use of a terminally branching scheme was associated with increased diagnostic accuracy (65 versus 53%, p = 0.02), fewer perceptual errors (0.61 versus 0.98 errors, p = 0.001) and lower cognitive load (3.1 versus 3.5/7, p = 0.023).
The terminally branching scheme was associated with improved diagnostic accuracy, fewer perceptual errors and lower cognitive load, suggesting that terminally branching schemes are effective for improving diagnostic accuracy. These findings can inform the design of schemes and other clinical decision aids. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Integrated quantum photonic sensor based on Hong-Ou-Mandel interference.
Basiri-Esfahani, Sahar; Myers, Casey R; Armin, Ardalan; Combes, Joshua; Milburn, Gerard J
2015-06-15
Photonic-crystal-based integrated optical systems have been used for a broad range of sensing applications with great success. This has been motivated by several advantages such as high sensitivity, miniaturization, remote sensing, selectivity and stability. Many photonic crystal sensors have been proposed with various fabrication designs that result in improved optical properties. In parallel, integrated optical systems are being pursued as a platform for photonic quantum information processing using linear optics and Fock states. Here we propose a novel integrated Fock state optical sensor architecture that can be used for force, refractive index and possibly local temperature detection. In this scheme, two coupled cavities behave as an "effective beam splitter". The sensor works based on fourth order interference (the Hong-Ou-Mandel effect) and requires a sequence of single photon pulses and consequently has low pulse power. Changes in the parameter to be measured induce variations in the effective beam splitter reflectivity and result in changes to the visibility of interference. We demonstrate this generic scheme in coupled L3 photonic crystal cavities as an example and find that this system, which only relies on photon coincidence detection and does not need any spectral resolution, can estimate forces as small as 10^(-7) N and can measure one part per million change in refractive index using a very low input power of 10^(-10) W. Thus linear optical quantum photonic architectures can achieve comparable sensor performance to semiclassical devices.
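The dependence of coincidences on the effective beam-splitter reflectivity can be sketched with the textbook two-photon interference formula; the mode-overlap parameter V below is an assumption of this toy model, not a quantity from the coupled-cavity analysis.

```python
def hom_coincidence(R, V=1.0):
    """Coincidence probability for two single photons entering opposite
    ports of a beam splitter with reflectivity R and mode overlap V.
    For perfectly indistinguishable photons (V = 1) this reduces to
    (1 - 2R)^2, vanishing at R = 1/2 (the Hong-Ou-Mandel dip)."""
    T = 1.0 - R
    return T**2 + R**2 - 2.0 * R * T * V
```

In the sensing scheme, a small change in the measured parameter shifts the effective R away from 1/2, so the coincidence rate rises from the dip; the sensitivity is set by the slope of this curve near R = 1/2.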
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1976-01-01
This concise paper considers the effect on the autocorrelation function of a pseudonoise (PN) code when the acquisition scheme only integrates coherently over part of the code and then noncoherently combines these results. The peak-to-null ratio of the effective PN autocorrelation function is shown to degrade to the square root of n, where n is the number of PN symbols over which coherent integration takes place.
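The degradation can be illustrated numerically: correlating coherently over blocks of n chips and combining the block magnitudes noncoherently raises the off-peak floor from O(1) to roughly (N/n)·sqrt(n), so the peak-to-null ratio falls to the order of sqrt(n). The random ±1 sequence below stands in for a PN code; it is an illustration, not the paper's analytical derivation.

```python
import random
random.seed(0)

N, n = 1024, 64            # code length, coherent integration length
c = [random.choice((-1, 1)) for _ in range(N)]  # PN-like chips

def effective_corr(shift):
    """Coherent correlation over blocks of n chips, then noncoherent
    (magnitude) combining of the N//n block results."""
    total = 0.0
    for b in range(0, N, n):
        partial = sum(c[i] * c[(i + shift) % N] for i in range(b, b + n))
        total += abs(partial)
    return total

peak = effective_corr(0)                               # = N exactly
null = sum(effective_corr(s) for s in range(1, 33)) / 32
ratio = peak / null                                     # ~ O(sqrt(n))
```

With full coherent integration (n = N) the off-peak floor would stay O(1) and the ratio would be O(N); shortening the coherent window to n chips collapses it toward sqrt(n).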
NASA Technical Reports Server (NTRS)
Gallagher, R. R.
1974-01-01
Exercise subroutine modifications are implemented in an exercise-respiratory system model yielding improvement of system response to exercise forcings. A more physiologically desirable respiratory ventilation rate in addition to an improved regulation of arterial gas tensions and cerebral blood flow is observed. A respiratory frequency expression is proposed which would be appropriate as an interfacing element of the respiratory-pulsatile cardiovascular system. Presentation of a circulatory-respiratory system integration scheme along with its computer program listing is given. The integrated system responds to exercise stimulation for both nonstressed and stressed physiological states. Other integration possibilities are discussed with respect to the respiratory, pulsatile cardiovascular, thermoregulatory, and the long-term circulatory systems.
FORUM: A Suggestion for an Improved Vegetation Scheme for Local and Global Mapping and Monitoring.
ADAMS
1999-01-01
Understanding of global ecological problems is at least partly dependent on clear assessments of vegetation change, and such assessment is always dependent on the use of a vegetation classification scheme. Use of satellite remotely sensed data is the only practical means of carrying out any global-scale vegetation mapping exercise, but if the resulting maps are to be useful to most ecologists and conservationists, they must be closely tied to clearly defined features of vegetation on the ground. Furthermore, much of the mapping that does take place involves more local-scale description of field sites; for purposes of cost and practicality, such studies usually do not involve remote sensing using satellites. There is a need for a single scheme that integrates the smallest to the largest scale in a way that is meaningful to most environmental scientists. Existing schemes are unsatisfactory for this task; they are ambiguous, unnecessarily complex, and their categories do not correspond to common-sense definitions. In response to these problems, a simple structural-physiognomically based scheme with 23 fundamental categories is proposed here for mapping and monitoring on any scale, from local to global. The fundamental categories each subdivide into more specific structural categories for more detailed mapping, but all the categories can be used throughout the world and at any scale, allowing intercomparison between regions. The next stage in the process will be to obtain the views of as many people working in as many different fields as possible, to see whether the proposed scheme suits their needs and how it should be modified. With a few modifications, such a scheme could easily be appended to an existing land cover classification scheme, such as the FAO system, greatly increasing the usefulness and accessibility of the results of the land cover classification. KEY WORDS: Vegetation scheme; Mapping; Monitoring; Land cover
Geometric integration in Born-Oppenheimer molecular dynamics.
Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N
2011-12-14
Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time-reversible Verlet, (2) second-order optimal symplectic, and (3) third-order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite-precision arithmetic in perfectly reversible dynamics. © 2011 American Institute of Physics.
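The energy-conserving property that motivates these geometric schemes can be demonstrated on the simplest possible system: velocity Verlet on a harmonic oscillator keeps the energy error bounded over long runs instead of drifting. This is a generic illustration of symplectic, time-reversible integration, not the extended Lagrangian integrator of the paper.

```python
def verlet_energy_drift(steps=10000, dt=0.05):
    """Velocity Verlet on a unit harmonic oscillator (m = k = 1).
    Returns the maximum deviation of the total energy from its
    initial value; for a symplectic integrator this stays bounded
    at O(dt^2) for arbitrarily many steps."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + x * x)
    max_err = 0.0
    for _ in range(steps):
        v += 0.5 * dt * (-x)      # half kick with force -x
        x += dt * v               # drift
        v += 0.5 * dt * (-x)      # half kick with updated force
        max_err = max(max_err, abs(0.5 * (v * v + x * x) - e0))
    return max_err
```

A non-symplectic method such as forward Euler would show secular energy growth over the same 10,000 steps; here the error remains a small bounded oscillation.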
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with a fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method is observed to outperform them even with fewer samples.
Parallel processing using an optical delay-based reservoir computer
NASA Astrophysics Data System (ADS)
Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy
2016-04-01
Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the reservoir computing systems based on delay dynamics discussed in the literature are built by coupling many stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs). Semiconductor ring lasers are semiconductor lasers whose laser cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even ones with input signals of a different nature, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and nonlinear channel equalization. We take advantage of the two directional modes to process the individual tasks: each directional mode processes one task, mitigating possible crosstalk between the tasks. Our results indicate that prediction/classification with errors comparable to state-of-the-art performance can be obtained even in the presence of noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of the parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
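A generic delay-based reservoir of the kind discussed above can be sketched as a single tanh nonlinearity with delayed feedback, where virtual nodes are obtained by time-multiplexing a masked input. All parameter values and the mask below are illustrative assumptions; the model is a textbook delay-RC sketch, not the semiconductor-ring-laser rate equations.

```python
import math

def delay_reservoir(u, n_virtual=50, theta=0.2, eta=0.5, gamma=0.05):
    """Virtual-node states of a single-node delay reservoir,
    x' = -x + tanh(eta * x(t - tau) + gamma * mask * input),
    integrated with Euler steps of size theta (the node spacing).
    Returns one list of n_virtual states per input sample; these
    transient states would feed a linear readout trained per task."""
    tau_states = [0.0] * n_virtual                # states one delay ago
    mask = [math.sin(1.7 * j) for j in range(n_virtual)]  # fixed input mask
    states = []
    for sample in u:
        new = []
        for j in range(n_virtual):
            drive = eta * tau_states[j] + gamma * mask[j] * sample
            x_prev = new[-1] if new else tau_states[-1]
            x = x_prev + theta * (-x_prev + math.tanh(drive))
            new.append(x)
        tau_states = new
        states.append(new)
    return states
```

In the two-task setting of the abstract, two such reservoirs (one per directional mode) would run in parallel, each with its own trained linear readout.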
NASA Astrophysics Data System (ADS)
Spencer, K. L.; Harvey, G. L.
2012-06-01
Coastal saltmarsh ecosystems occupy only a small percentage of Earth's land surface, yet contribute a wide range of ecosystem services that have significant global economic and societal value. These environments currently face significant challenges associated with climate change, sea level rise, development and water quality deterioration and are consequently the focus of a range of management schemes. Increasingly, soft engineering techniques such as managed realignment (MR) are being employed to restore and recreate these environments, driven primarily by the need for habitat (re)creation and sustainable coastal flood defence. Such restoration schemes also have the potential to provide additional ecosystem services including climate regulation and waste processing. However, these sites have frequently been physically impacted by their previous land use, and there is a lack of understanding of how this 'disturbance' affects the delivery of ecosystem services, or of the complex linkages between ecological, physical and biogeochemical processes in restored systems. Through the exploration of current data, this paper determines that the hydrological, geomorphological and hydrodynamic functioning of restored sites may be significantly impaired with respect to natural 'undisturbed' systems and that the links between morphology, sediment structure, hydrology and solute transfer are poorly understood. This has consequences for the delivery of seeds, the provision of abiotic conditions suitable for plant growth, the development of microhabitats and the cycling of nutrients/contaminants, and may impact the delivery of ecosystem services including biodiversity, climate regulation and waste processing. This calls for a change in our approach to research in these environments, with a need for integrated, interdisciplinary studies over a range of spatial and temporal scales incorporating both intensive and extensive research design.
On-board closed-loop congestion control for satellite based packet switching networks
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Ivancic, William D.; Kim, Heechul
1993-01-01
NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate, and thus it is necessary to integrate a congestion control mechanism as part of the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of ground stations. In this scheme, the satellite continuously broadcasts the status of its output buffer and the ground stations respond by selectively discarding packets or by tagging the excess packets as low-priority. The two schemes are evaluated by theoretical queuing analysis and simulation. The former is used to analyze the simplified model and to determine the basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes closed-loop congestion control less responsive. The broadcast information can only be used to extract statistical information. The discarding scheme needs carefully chosen status information and a carefully chosen reduction function, and normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective since it tolerates more uncertainty and allows a larger margin of error in the status information. It can protect the high-priority packets from excessive loss and fully utilize the downlink bandwidth at the same time.
An integrated CMOS bio-potential amplifier with a feed-forward DC cancellation topology.
Parthasarathy, Jayant; Erdman, Arthur G; Redish, Aaron D; Ziaie, Babak
2006-01-01
This paper describes a novel technique to realize an integrated CMOS bio-potential amplifier with a feedforward DC cancellation topology. The amplifier is designed to provide substantial DC cancellation even while amplifying very low frequency signals. More than 80 dB offset rejection ratio is achieved without any external capacitors. The cancellation scheme is robust against process and temperature variations. The amplifier is fabricated through MOSIS AMI 1.5 microm technology (0.05 mm2 area). Measurement results show a gain of 43.5 dB in the pass band (<1 mHz-5 kHz), an input referred noise of 3.66 microVrms, and a current consumption of 22 microA.
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
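The stability advantage of implicit time integration described above can be seen on the simplest stiff model problem, the decay equation y' = -k*y (a stand-in, not a test case from the Versatile Advection Code): explicit Euler diverges once k*dt > 2, while backward Euler is stable for any positive step:

```python
# Explicit vs. implicit (backward) Euler on y' = -k*y.
# Explicit: y_{n+1} = y_n * (1 - k*dt)  -> blows up when |1 - k*dt| > 1.
# Implicit: y_{n+1} = y_n / (1 + k*dt)  -> contracts for every dt > 0.

def explicit_euler(y, k, dt, steps):
    for _ in range(steps):
        y = y + dt * (-k * y)
    return y

def implicit_euler(y, k, dt, steps):
    for _ in range(steps):
        y = y / (1 + k * dt)   # solves y_new = y + dt * (-k * y_new)
    return y
```

With k = 100 and dt = 0.1 (so k*dt = 10), the explicit solution grows without bound while the implicit one decays, which is exactly why the restrictive explicit time-step constraint can be (partially) eliminated by implicit strategies.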
NASA Astrophysics Data System (ADS)
Liu, Xin; Jiang, Junzhe; Jia, Yushuai; Qiu, Jinmin; Xia, Tonglin; Zhang, Yuhong; Li, Yuqin; Chen, Xiangshu
2017-08-01
The efficient treatment of dye wastewater has been a hot topic in the environmental field. The integration of adsorption and photocatalytic degradation via fabrication of a bi-component heterojunction photocatalyst is considered a facile and effective strategy to enhance dye elimination efficiency. In this report, a Z-scheme heterojunction material, SrTiO3(La,Cr)/WO3, with the dual functions of adsorption and photocatalysis was successfully synthesized for efficient removal of methylene blue (MB) under visible light irradiation. The morphology and microstructure characterization demonstrates that the SrTiO3(La,Cr) nanoparticles are uniformly decorated on the WO3 nanosheets, forming an intimate heterojunction interface. MB degradation results indicate that the removal efficiency by the synergistic adsorption-photocatalysis process is greatly improved compared to pure WO3 and SrTiO3(La,Cr), with the adsorption and photocatalytic activity closely related to the composition of the material. The possible mechanism for the enhanced photocatalytic activity could be ascribed to the formation of a Z-scheme heterojunction system based on active species trapping experiments. Furthermore, the investigations of adsorption kinetics and isotherm show that the adsorption process follows a pseudo-second-order kinetic model and a Langmuir isotherm, respectively. Due to the synergistic advantages of negative zeta potential, large surface area and accelerated separation of photogenerated carriers driven by the Z-scheme heterojunction, SrTiO3(La,Cr)/WO3 exhibits excellent adsorption-photocatalytic performance and stability on MB removal, which could be potentially used for practical wastewater treatment.
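The pseudo-second-order kinetics mentioned above are commonly fitted in linearized form, t/q_t = 1/(k*q_e^2) + t/q_e: a least-squares line through (t, t/q_t) gives q_e from the slope and k from the intercept. The data in the test below are synthetic, for illustration only:

```python
# Linearized pseudo-second-order fit: y = t/q_t is linear in t with
# slope 1/q_e and intercept 1/(k*q_e^2).

def pseudo_second_order_fit(times, qt):
    y = [t / q for t, q in zip(times, qt)]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(y) / n
    slope = (sum((t - mean_t) * (v - mean_y) for t, v in zip(times, y))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_y - slope * mean_t
    q_e = 1.0 / slope
    k = slope ** 2 / intercept      # since intercept = 1/(k * q_e**2)
    return q_e, k
```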
Propulsion system performance resulting from an integrated flight/propulsion control design
NASA Technical Reports Server (NTRS)
Mattern, Duane; Garg, Sanjay
1992-01-01
Propulsion-system-specific results are presented from the application of the integrated methodology for propulsion and airframe control (IMPAC) design approach to integrated flight/propulsion control design for a 'short takeoff and vertical landing' (STOVL) aircraft in transition flight. The IMPAC method is briefly discussed and the propulsion system specifications for the integrated control design are examined. The structure of a linear engine controller that results from partitioning a linear centralized controller is discussed. The details of a nonlinear propulsion control system are presented, including a scheme to protect the engine operational limits: the fan surge margin and the acceleration/deceleration schedule that limits the fuel flow. Also, a simple but effective multivariable integrator windup protection scheme is examined. Nonlinear closed-loop simulation results are presented for two typical pilot commands for transition flight: acceleration while maintaining flightpath angle and a change in flightpath angle while maintaining airspeed. The simulation nonlinearities include the airframe/engine coupling, the actuator and sensor dynamics and limits, the protection scheme for the engine operational limits, and the integrator windup protection. Satisfactory performance of the total airframe plus engine system for transition flight, as defined by the specifications, was maintained during the limit operation of the closed-loop engine subsystem.
Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tian, Rui; Han, Jianrui; Lee, Young
2015-11-30
Data center interconnect with elastic optical network is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks that allows data center services to be accommodated. In view of this, this study extends the approach to consider resource integration by breaking the boundaries of individual network devices, which can enhance resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. The MSRI can accommodate data center services with resource integration when a single function or resource is too scarce to provision the services, and enhances globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy traffic load scenario is also quantitatively evaluated based on the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, compared with other provisioning schemes.
Optimizing Cubature for Efficient Integration of Subspace Deformations
An, Steven S.; Kim, Theodore; James, Doug L.
2009-01-01
We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r^2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation, I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation PMID:19956777
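The cubature idea above reduces a volume integral to a small weighted sum over sample points. The sketch below only shows that evaluation step; the points, weights and force-density function are invented placeholders, whereas the paper optimizes them for a given deformation subspace, material and domain:

```python
# Cubature evaluation of a reduced force:
#   g(q) ≈ Σ_i w_i * density(x_i, q)
# where q is the reduced (subspace) coordinate vector and density(x, q)
# returns the per-point contribution to each reduced component.

def reduced_force(q, points, weights, density):
    return [sum(w * density(x, q)[k] for x, w in zip(points, weights))
            for k in range(len(q))]
```

Because the sum runs over O(r) points and each density evaluation costs O(r), the total cost of one force evaluation is O(r^2), matching the complexity quoted in the abstract.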
NASA Astrophysics Data System (ADS)
Chang, C. L.; Chen, C. Y.; Sung, C. C.; Liou, D. H.; Chang, C. Y.; Cha, H. C.
This work presents a new fuel sensor-less control scheme for liquid feed fuel cells that is able to control the supply to a fuel cell system for operation under dynamic loading conditions. The control scheme uses cell-operating characteristics, such as potential, current, and power, to regulate the fuel concentration of a liquid feed fuel cell without the need for a fuel concentration sensor. A current integral technique has been developed to calculate the quantity of fuel required at each monitoring cycle, which can be combined with the concentration regulating process to control the fuel supply for stable operation. As verified by systematic experiments, this scheme can effectively control the fuel supply of a liquid feed fuel cell with reduced response time, even under conditions where the membrane electrolyte assembly (MEA) deteriorates gradually. This advance will aid the commercialization of liquid feed fuel cells and make them more adaptable for use in portable and automotive power units such as laptops, e-bikes, and handicap cars.
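The current integral technique above can be illustrated with Faraday's law: integrating cell current over a monitoring cycle gives the charge passed, which converts to moles of fuel consumed. The sketch assumes a direct methanol fuel cell (6 electrons per molecule); the paper's actual accounting and cycle logic are not reproduced:

```python
# Hedged sketch of a current-integral fuel estimate.
F = 96485.0                  # Faraday constant, C/mol
ELECTRONS_PER_MOLECULE = 6   # methanol oxidation (illustrative assumption)

def fuel_consumed(currents, dt):
    """Trapezoidal integral of current samples (A) taken every dt seconds,
    converted to moles of fuel oxidized."""
    charge = sum((a + b) / 2.0 * dt for a, b in zip(currents, currents[1:]))
    return charge / (ELECTRONS_PER_MOLECULE * F)
```

A controller could compare this per-cycle estimate with the dosed fuel volume to regulate concentration without a dedicated sensor, which is the spirit of the scheme described above.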
Vehicle Scheduling Schemes for Commercial and Emergency Logistics Integration
Li, Xiaohui; Tan, Qingmei
2013-01-01
In modern logistics operations, large-scale logistics companies, besides active participation in profit-seeking commercial business, also play an essential role during an emergency relief process by dispatching urgently-required materials to disaster-affected areas. Therefore, an issue widely addressed by logistics practitioners, and one attracting increasing attention from researchers, is how logistics companies can achieve maximum commercial profit on condition that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve the problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profits; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously by arriving at the maximum priority in total. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results confirm the feasibility and effectiveness of the proposed models. PMID:24391724
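The priority-directed idea can be caricatured as a greedy assignment: grade every job, then commit the fleet in descending priority order. This is a toy heuristic with invented grades and capacities, not the paper's scheduling model (which optimizes total priority jointly):

```python
# Greedy priority-directed dispatch over a fixed fleet.
# jobs: list of (name, priority_grade, vehicles_needed).

def priority_schedule(jobs, vehicles):
    chosen, remaining = [], vehicles
    for name, prio, need in sorted(jobs, key=lambda j: -j[1]):
        if need <= remaining:        # accept the job if the fleet can cover it
            chosen.append(name)
            remaining -= need
    return chosen
```

Greedy selection is not optimal in general (it can strand capacity), which is one reason a joint scheduling model of the kind proposed above can outperform simple dispatch tactics.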
Hashimoto, Michinao; Langer, Robert; Kohane, Daniel S
2013-01-21
This paper describes a general scheme to fabricate microchannels from curable polymers on a laboratory benchtop. Using the scheme described here, benchtop fabrication of SU-8 microfluidic systems was demonstrated for the first time, and their compatibility with organic solvents was demonstrated. The fabrication process has three major stages: 1) transferring patterns of microchannels to polymer films by molding, 2) releasing the patterned film and creating inlets and outlets for fluids, and 3) sealing two films together to create a closed channel system. Addition of a PDMS slab supporting the polymer film provided structural integrity during and after fabrication, allowing manipulation of the polymer films without fracturing or deformation. SU-8 channels fabricated according to this scheme exhibited solvent compatibility against continuous exposure to acetone and ethyl acetate, which are incompatible with native PDMS. Using the SU-8 channels, continuous generation of droplets of ethyl acetate, and templated synthesis of poly(lactic-co-glycolic acid) (PLGA) microparticles, both with stable size, were demonstrated continuously over 24 h, and at intervals over 75 days.
78 FR 40627 - Prohibitions and Conditions on the Importation and Exportation of Rough Diamonds
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-08
... November 5, 2002, the launch of the Kimberley Process Certification Scheme (KPCS) for rough diamonds. Under... implements the Kimberley Process Certification Scheme (KPCS) for rough diamonds. The KPCS is a process, based... not been controlled through the Kimberley Process Certification Scheme. By Executive Order 13312 dated...
Information flow in an atmospheric model and data assimilation
NASA Astrophysics Data System (ADS)
Yoon, Young-noh
2011-12-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. 
Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.
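The ensemble Kalman analysis step at the heart of the cycle described above can be sketched for a single scalar observation. This is a minimal deterministic variant without observation perturbations or localization; shapes and values are illustrative:

```python
import numpy as np

def enkf_update(ensemble, y_obs, obs_var, H):
    """One EnKF analysis for a scalar observation.
    ensemble: (n_members, n_state); H: (n_state,) picks the observed quantity."""
    X = ensemble - ensemble.mean(axis=0)       # state anomalies
    Hx = ensemble @ H                          # ensemble mapped to obs space
    HX = Hx - Hx.mean()
    n = len(ensemble)
    P_xy = X.T @ HX / (n - 1)                  # state-obs covariance, (n_state,)
    P_yy = HX @ HX / (n - 1) + obs_var         # obs-space variance + noise
    K = P_xy / P_yy                            # Kalman gain
    return ensemble + np.outer(y_obs - Hx, K)  # shift each member
```

Localization, as discussed in the first part of the dissertation, would amount to tapering `P_xy` so that each grid point is updated only by nearby observations, shrinking the ensemble size needed.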
Functional traits, convergent evolution, and periodic tables of niches.
Winemiller, Kirk O; Fitzgerald, Daniel B; Bower, Luke M; Pianka, Eric R
2015-08-01
Ecology is often said to lack general theories sufficiently predictive for applications. Here, we examine the concept of a periodic table of niches and feasibility of niche classification schemes from functional trait and performance data. Niche differences and their influence on ecological patterns and processes could be revealed effectively by first performing data reduction/ordination analyses separately on matrices of trait and performance data compiled according to logical associations with five basic niche 'dimensions', or aspects: habitat, life history, trophic, defence and metabolic. Resultant patterns then are integrated to produce interpretable niche gradients, ordinations and classifications. Degree of scheme periodicity would depend on degrees of niche conservatism and convergence causing species clustering across multiple niche dimensions. We analysed a sample data set containing trait and performance data to contrast two approaches for producing niche schemes: species ordination within niche gradient space, and niche categorisation according to trait-value thresholds. Creation of niche schemes useful for advancing ecological knowledge and its applications will depend on research that produces functional trait and performance datasets directly related to niche dimensions along with criteria for data standardisation and quality. As larger databases are compiled, opportunities will emerge to explore new methods for data reduction, ordination and classification. © 2015 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
Direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1989-01-01
The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
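The per-joint control law described above (a fixed-gain PID auxiliary signal plus a PD term with adjustable gains) can be sketched schematically. The adaptation laws below are simplified stand-ins; the actual laws and gains are those of the paper, not reproduced here:

```python
# Schematic per-joint controller: u = PID(e) + kp_hat*e + kv_hat*de,
# with kp_hat, kv_hat adapted online from the tracking error.

def make_controller(kp, ki, kd, gamma_p, gamma_v, dt):
    state = {"integral": 0.0, "kp_hat": 0.0, "kv_hat": 0.0}
    def control(err, err_dot):
        state["integral"] += err * dt
        aux = kp * err + ki * state["integral"] + kd * err_dot  # fixed-gain PID
        state["kp_hat"] += gamma_p * err * err * dt             # illustrative adaptation
        state["kv_hat"] += gamma_v * err * err_dot * dt
        return aux + state["kp_hat"] * err + state["kv_hat"] * err_dot
    return control
```

Because the adapted gains absorb the effect of inter-joint coupling and payload variation, no dynamic model of the robot is needed, which is the central point of the scheme.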
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in full digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grow dramatically. This paper will discuss the digital driving ability to achieve kilogray-levels for megapixel displays. The optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in the 0.35 μm 3.3 V-6 V dual voltage one polysilicon layer, four metal layers (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed.  The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable by the human eye.
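Digital driving of this kind typically renders gray levels by time division: each bit-plane is displayed for a duration proportional to its binary weight, so 12 subfields yield 4096 gray levels. The sketch below shows that generic binary-weighted idea, not the paper's specific optimal scan ordering:

```python
# Binary-weighted subfield durations: bit b is shown for unit * 2**b,
# so a pixel's total on-time is proportional to its gray value.

def subfield_durations(n_bits, unit):
    return [unit * (1 << b) for b in range(n_bits)]

def on_time(gray, durations):
    return sum(d for b, d in enumerate(durations) if (gray >> b) & 1)
```

The data bottleneck noted in the abstract follows directly: every extra bit of gray depth doubles the frame's total subfield time (or halves the time available to load each bit-plane into a megapixel array).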
Li, Xingang; Li, Jia; Sui, Hong; He, Lin; Cao, Xingtao; Li, Yonghong
2018-07-05
Soil remediation has been considered one of the most difficult pollution treatment tasks due to its high complexity in contaminants, geological conditions, usage, urgency, etc. The diversity in remediation technologies further makes quick selection of suitable remediation schemes much tougher, even after the site investigation has been done. Herein, a sustainable decision support hierarchical model has been developed to select, evaluate and determine preferred soil remediation schemes comprehensively based on a modified analytic hierarchy process (MAHP). This MAHP method combines a competence model and the Grubbs criteria with the conventional AHP. It not only considers the competence differences among experts in group decision, but also adjusts the large deviations caused by different experts' preferences through sample analysis. This conversion makes the final remediation decision more reasonable. In this model, different evaluation criteria, including economic effect, environmental effect and technological effect, are employed to evaluate the integrated performance of remediation schemes, followed by a strict computation using the above MAHP. To confirm the feasibility of this developed model, it was tested on a benzene workshop contaminated site in the Beijing coking plant. Beyond soil remediation, this MAHP model could also be applied in other fields involving multi-criteria group decision making. Copyright © 2018 Elsevier B.V. All rights reserved.
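As background, the conventional AHP priority computation that the modified method builds on can be shown in its simplest form (normalized-column averaging of a pairwise comparison matrix). The expert-competence weighting and Grubbs-based outlier screening of the MAHP are not reproduced here:

```python
# Standard AHP priority vector via normalized-column average:
# normalize each column of the pairwise comparison matrix M, then
# average across each row.

def ahp_weights(M):
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]
```

In a group setting, each expert produces such a priority vector per criterion (economic, environmental, technological effect), and the modified method then aggregates them with competence-based weights after screening out outlying judgments.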
Increasing sensitivity of pulse EPR experiments using echo train detection schemes.
Mentink-Vigier, F; Collauto, A; Feintuch, A; Kaminker, I; Tarle, V; Goldfarb, D
2013-11-01
Modern pulse EPR experiments are routinely used to study the structural features of paramagnetic centers. They are usually performed at low temperatures, where relaxation times are long and polarization is high, to achieve a sufficient Signal/Noise Ratio (SNR). However, when working with samples whose amount and/or concentration are limited, sensitivity becomes an issue and therefore measurements may require a significant accumulation time, up to 12 h or more. As the detection scheme of practically all pulse EPR sequences is based on the integration of a spin echo--either primary, stimulated or refocused--a considerable increase in SNR can be obtained by replacing the single echo detection scheme with a train of echoes. All these echoes, generated by Carr-Purcell type sequences, are integrated and summed together to improve the SNR. This scheme is commonly used in NMR and here we demonstrate its applicability to a number of frequently used pulse EPR experiments: Echo-Detected EPR, Davies and Mims ENDOR (Electron-Nuclear Double Resonance), DEER (Electron-Electron Double Resonance) and EDNMR (Electron-Electron Double Resonance (ELDOR)-Detected NMR), which were combined with a Carr-Purcell-Meiboom-Gill (CPMG) type detection scheme at W-band. By collecting the transient signal and integrating a number of refocused echoes, this detection scheme yielded a 1.6- to 5-fold SNR improvement, depending on the paramagnetic center and the pulse sequence applied. This improvement is achieved while keeping the experimental time constant and it does not introduce signal distortion. Copyright © 2013 Elsevier Inc. All rights reserved.
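A back-of-envelope model shows why summing an echo train helps: echo amplitudes decay with the phase-memory time while noise adds incoherently, so the SNR gain over a single echo is (Σ a_k) / (√N · a_1). The decay model and parameter values below are illustrative, not fitted to the paper's data:

```python
import math

def echo_train_snr_gain(n_echoes, tau, t2):
    """SNR of the summed train relative to the first echo alone,
    assuming exponential echo decay exp(-k*tau/t2) and white noise."""
    amps = [math.exp(-(k + 1) * tau / t2) for k in range(n_echoes)]
    return sum(amps) / (math.sqrt(n_echoes) * amps[0])
```

In the limit of negligible decay the gain approaches √N, while for fast decay the late echoes contribute mostly noise; the 1.6- to 5-fold improvements reported above sit between these extremes, depending on the relaxation properties of each paramagnetic center.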
NASA Technical Reports Server (NTRS)
Knox, James C.; Miller, Lee; Campbell, Melissa; Mulloth, Lila; Varghese, Mini
2006-01-01
Accumulation and subsequent compression of carbon dioxide that is removed from the space cabin are two important processes involved in a closed-loop air revitalization scheme of the International Space Station (ISS). The 4-Bed Molecular Sieve (4BMS) of ISS currently operates in an open loop mode without a compressor. The Sabatier Engineering Development Unit (EDU) processes waste CO2 to provide water to the crew. This paper reports the integrated 4BMS, air-cooled Temperature Swing Adsorption Compressor (TSAC), and Sabatier EDU testing. The TSAC prototype was developed at NASA Ames Research Center (ARC). The 4BMS was modified to a functionally flight-like condition at NASA Marshall Space Flight Center (MSFC). Testing was conducted at MSFC. The paper provides details of the TSAC operation at various CO2 loadings and corresponding performance of the 4BMS and Sabatier.
Second stage gasifier in staged gasification and integrated process
Liu, Guohai; Vimalchand, Pannalal; Peng, Wan Wang
2015-10-06
A second stage gasification unit in a staged gasification integrated process flow scheme and operating methods are disclosed to gasify a wide range of low reactivity fuels. The inclusion of a second stage gasification unit operating at high temperatures closer to ash fusion temperatures in the bed provides sufficient flexibility in unit configurations, operating conditions and methods to achieve an overall carbon conversion of over 95% for low reactivity materials such as bituminous and anthracite coals, petroleum residues and coke. The second stage gasification unit includes a stationary fluidized bed gasifier operating with a sufficiently turbulent bed of predefined inert bed material with lean char carbon content. The second stage gasifier fluidized bed is operated at relatively high temperatures up to 1400 °C. Steam and oxidant mixture can be injected to further increase the freeboard region operating temperature to approximately 50 to 100 °C above the bed temperature.
Integrated neuron circuit for implementing neuromorphic system with synaptic device
NASA Astrophysics Data System (ADS)
Lee, Jeong-Jun; Park, Jungjin; Kwon, Min-Woo; Hwang, Sungmin; Kim, Hyungjin; Park, Byung-Gook
2018-02-01
In this paper, we propose and fabricate an Integrate & Fire neuron circuit for implementing a neuromorphic system. Overall operation of the circuit is verified by measuring discrete devices and the output characteristics of the circuit. Since the neuron circuit shows an asymmetric output characteristic that can drive a synaptic device with Spike-Timing-Dependent-Plasticity (STDP) characteristics, the autonomous weight update process is also verified by connecting the synaptic device and the neuron circuit. The timing difference between the pre-neuron and the post-neuron induces an autonomous weight change of the synaptic device. Unlike 2-terminal devices, which are frequently used to implement neuromorphic systems, the proposed scheme enables autonomous weight update and simple configuration by using a 4-terminal synapse device and an appropriate neuron circuit. The weight update process in the multi-layer neuron-synapse connection ensures implementation of hardware-based artificial intelligence, based on a Spiking Neural Network (SNN).
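The pair-based STDP rule that such a circuit realizes in hardware is commonly written as an exponential function of the pre/post spike timing difference. Time constants and amplitudes below are textbook-style placeholders, not measurements of the fabricated device:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one spike pair: potentiation when the pre-neuron
    fires before the post-neuron, depression otherwise."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # long-term potentiation
    return -a_minus * math.exp(dt / tau)       # long-term depression
```

The "autonomous" aspect described above is that the neuron circuit's asymmetric output waveform, overlapped on the 4-terminal synapse, produces this timing-dependent update without any separate weight-programming controller.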
Three-dimensional simulation of vortex breakdown
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Salas, M. D.
1990-01-01
The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in the conservation form, cast in generalized coordinate system, are solved, numerically, to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
Sillanpää, Mika; Ncibi, Mohamed Chaker; Matilainen, Anu
2018-02-15
Natural organic matter (NOM), a key component in aquatic environments, is a complex matrix of organic substances characterized by its fluctuating amounts in water and variable molecular and chemical properties, leading to various interaction schemes with the biogeosphere and hydrologic cycle. These factors, along with the increasing amounts of NOM in surface and ground waters, make the effort of removing naturally-occurring organics from drinking water supplies, and also from municipal wastewater effluents, a challenging task requiring the development of highly efficient and versatile water treatment technologies. Advanced oxidation processes (AOPs) have received an increasing amount of attention from researchers around the world, especially during the last decade. The related processes were frequently reported to be among the most suitable water treatment technologies to remove NOM from drinking water supplies and mitigate the formation of disinfection by-products (DBPs). Thus, the present work overviews recent research and development studies conducted on the application of AOPs to degrade NOM, including UV and/or ozone-based applications, different Fenton processes and various heterogeneous catalytic and photocatalytic oxidative processes. Other non-conventional AOPs such as ultrasonication, ionizing radiation and plasma technologies were also reported. Furthermore, since AOPs are unlikely to achieve complete oxidation of NOM, integration schemes with other water treatment technologies were presented, including membrane filtration, adsorption and other processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
A simple molecular mechanics integrator in mixed rigid body and dihedral angle space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitalis, Andreas, E-mail: a.vitalis@bioc.uzh.ch; Pappu, Rohit V.
2014-07-21
We propose a numerical scheme to integrate equations of motion in a mixed space of rigid-body and dihedral angle coordinates. The focus of the presentation is biomolecular systems and the framework is applicable to polymers with tree-like topology. By approximating the effective mass matrix as diagonal and lumping all bias torques into the time dependencies of the diagonal elements, we take advantage of the formal decoupling of individual equations of motion. We impose energy conservation independently for every degree of freedom and this is used to derive a numerical integration scheme. The cost of all auxiliary operations is linear in the number of atoms. By coupling the scheme to one of two popular thermostats, we extend the method to sample constant temperature ensembles. We demonstrate that the integrator of choice yields satisfactory stability and is free of mass-metric tensor artifacts, which is expected by construction of the algorithm. Two fundamentally different systems, viz., liquid water and an α-helical peptide in a continuum solvent are used to establish the applicability of our method to a wide range of problems. The resultant constant temperature ensembles are shown to be thermodynamically accurate. The latter relies on detailed, quantitative comparisons to data from reference sampling schemes operating on exactly the same sets of degrees of freedom.
Statistical Mechanics and Dynamics of the Outer Solar System. I. The Jupiter/Saturn Zone
NASA Technical Reports Server (NTRS)
Grazier, K. R.; Newman, W. I.; Kaula, W. M.; Hyman, J. M.
1996-01-01
We report on numerical simulations designed to understand how the solar system evolved through a winnowing of planetesimals accreted from the early solar nebula. This sorting process, driven by energy and angular momentum, continues to the present day. We reconsider the existence and importance of stable niches in the Jupiter/Saturn zone using greatly improved numerical techniques based on high-order optimized multi-step integration schemes coupled to roundoff-error-minimizing methods.
NASA Technical Reports Server (NTRS)
Lee, J.; Kim, K.
1991-01-01
A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to identify a suitable processing element, namely an augmented CORDIC. Two distinct implementations, bit-serial and bit-parallel, are elaborated. The performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator and the number of transistors required.
2012-11-01
axis at a 2-m height above the ground and the observation point is at a 1.7-m height along a radial line at ϕ = 30°. Ground properties: εr′ = 4 … fields of a horizontal electric dipole as a function of range. The dipole is buried in the ground at a 10-cm depth and the observation point is at … would necessitate the evaluation of a triple integral. To expedite the matrix-filling process, different common schemes are available in efficiently
NASA Astrophysics Data System (ADS)
Ushaq, Muhammad; Fang, Jiancheng
2013-10-01
Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overloading and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimates deliver an optimal or suboptimal state estimate according to a chosen information-fusion criterion; the FKF offers enhanced throughput and multiple-level fault detection capability. The standard CKF and FKF require that the system noise and the measurement noise be zero-mean and Gaussian, and that the covariances of the system and measurement noises remain constant. If the theoretical statistics assumed in the Kalman filter are not compatible with the actual ones, the filter does not render satisfactory solutions and divergence problems can occur. To resolve such problems, this paper employs an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) to adapt the statistical models of the contributing sensors online, in light of the real system dynamics and varying measurement noises. Excessive faults are detected and isolated with a chi-square test. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with a Celestial Navigation System (CNS), GPS and Doppler radar using the FKF; collectively, the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme, with significantly enhanced precision, reliability and fault tolerance. The scheme's effectiveness has also been tested against simulated abnormal errors/noises during different time segments of flight.
It is believed that the presented scheme can be applied to the navigation systems of aircraft or unmanned aerial vehicles (UAVs).
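The chi-square test mentioned above for fault detection and isolation admits a compact sketch. This is a minimal illustration, not the authors' implementation; the function name is hypothetical, and the default threshold 9.21 is the 99th-percentile chi-square value for 2 degrees of freedom:

```python
import numpy as np

def innovation_fault_test(z, z_pred, H, P, R, threshold=9.21):
    """Chi-square innovation test: flag a sensor fault when the
    normalized innovation squared exceeds the threshold
    (9.21 = chi^2 99th percentile for 2 degrees of freedom)."""
    nu = z - z_pred                          # innovation vector
    S = H @ P @ H.T + R                      # innovation covariance
    d2 = float(nu @ np.linalg.inv(S) @ nu)   # test statistic
    return d2 > threshold, d2
```

A measurement whose innovation is far outside the covariance predicted by the filter trips the test and can then be isolated from the fusion.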
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ming, Yang; Wu, Zi-jian; Xu, Fei, E-mail: feixu@nju.edu.cn
The nonmaximally entangled state is a special kind of entangled state, which has important applications in quantum information processing. It has been generated in quantum circuits based on bulk optical elements. However, corresponding schemes in integrated quantum circuits have been rarely considered. In this Letter, we propose an effective solution for this problem. An electro-optically tunable nonmaximally mode-entangled photon state is generated in an on-chip domain-engineered lithium niobate (LN) waveguide. Spontaneous parametric down-conversion and electro-optic interaction are effectively combined through suitable domain design to transform the entangled state into our desired formation. Moreover, this is a flexible approach to entanglement architectures. Other kinds of reconfigurable entanglements are also achievable through this method. LN provides a very promising platform for future quantum circuit integration.
Relative Pose Estimation Using Image Feature Triplets
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Rottensteiner, F.; Heipke, C.
2015-03-01
A fully automated reconstruction of the trajectory of image sequences using point correspondences is becoming routine practice. However, there are cases in which point features are hardly detectable or cannot be localized in a stable distribution, leading to insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of feature integration for relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method establishes hypotheses about potential minimal line matches that can be used to determine the parameters of the relative orientation (pose estimation) of two images with respect to the reference one, and then quantifies the agreement using the estimated orientation parameters. Rather than randomly choosing line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. To be able to work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification
Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Varnum, Susan M.; Brown, Joseph N.; Riensche, Roderick M.; Adkins, Joshua N.; Jacobs, Jon M.; Hoidal, John R.; Scholand, Mary Beth; Pounds, Joel G.; Blackburn, Michael R.; Rodland, Karin D.; McDermott, Jason E.
2013-01-01
Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches; a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification. PMID:24223463
Singh, Rajinder P.; Dahe, Ganpat J.; Dudeck, Kevin W.; ...
2014-12-31
Sustainable reliance on hydrocarbon feedstocks for energy generation requires CO₂ separation technology development for energy-efficient carbon capture from industrial mixed gas streams. High-temperature H₂-selective glassy polymer membranes are an attractive option for energy-efficient H₂/CO₂ separations in advanced power production schemes with integrated carbon capture. They enable high overall process efficiencies by providing energy-efficient CO₂ separations at process-relevant operating conditions and, correspondingly, minimized parasitic energy losses. Polybenzimidazole (PBI)-based materials have demonstrated commercially attractive H₂/CO₂ separation characteristics and exceptional tolerance to hydrocarbon-fuel-derived synthesis gas (syngas) operating conditions and chemical environments. To realize a commercially attractive carbon capture technology based on these PBI materials, development of high-performance, robust PBI hollow fiber membranes (HFMs) is required. In this work, we discuss outcomes of our recent efforts to demonstrate and optimize the fabrication and performance of PBI HFMs for use in pre-combustion carbon capture schemes. These efforts have resulted in PBI HFMs with commercially attractive fabrication protocols, defect-minimized structures, and commercially attractive permselectivity characteristics at IGCC syngas process-relevant conditions. The H₂/CO₂ separation performance of the PBI HFMs presented here, at realistic process conditions, exceeds that of any other polymeric system reported to date.
Investigation on navigation patterns of inertial/celestial integrated systems
NASA Astrophysics Data System (ADS)
Luo, Dacheng; Liu, Yan; Liu, Zhiguo; Jiao, Wei; Wang, Qiuyan
2014-11-01
It is known that a Strapdown Inertial Navigation System (SINS), a Global Navigation Satellite System (GNSS) and a Celestial Navigation System (CNS) can complement each other's advantages. The SINS/CNS integrated system, which has the characteristics of strong autonomy, high accuracy and good anti-jamming, is widely used in military and civilian applications. Similar to a SINS/GNSS integrated system, the SINS/CNS integrated system can be divided into three kinds according to the depth of integration, i.e., the loosely coupled, tightly coupled and deeply coupled patterns. In this paper, the principle and characteristics of each SINS/CNS pattern are analyzed. Based on a comparison of these patterns, a novel deeply coupled SINS/CNS integrated navigation scheme is proposed. The innovation of this scheme is a new star-pattern matching method aided by SINS information, which reflects the complementary features of the two subsystems.
From cells to tissue: A continuum model of epithelial mechanics
NASA Astrophysics Data System (ADS)
Ishihara, Shuji; Marcq, Philippe; Sugimura, Kaoru
2017-08-01
A two-dimensional continuum model of epithelial tissue mechanics was formulated using cellular-level mechanical ingredients and cell morphogenetic processes, including cellular shape changes and cellular rearrangements. This model incorporates stress and deformation tensors, which can be compared with experimental data. Focusing on the interplay between cell shape changes and cell rearrangements, we elucidated dynamical behavior underlying passive relaxation, active contraction-elongation, and tissue shear flow, including a mechanism for contraction-elongation, whereby tissue flows perpendicularly to the axis of cell elongation. This study provides an integrated scheme for the understanding of the orchestration of morphogenetic processes in individual cells to achieve epithelial tissue morphogenesis.
Quality-of-care research in mental health: responding to the challenge.
McGlynn, E A; Norquist, G S; Wells, K B; Sullivan, G; Liberman, R P
1988-01-01
Quality-of-care research in mental health is in the developmental stages, which affords an opportunity to take an integrative approach, building on principles from efficacy, effectiveness, quality assessment, and quality assurance research. We propose an analytic strategy for designing research on the quality of mental health services using an adaptation of the structure, process, and outcome classification scheme. As a concrete illustration of our approach, we discuss research on a particular target population-patients with chronic schizophrenia. Future research should focus on developing models of treatment, establishing criteria and standards for outcomes and processes, and gathering data on community practices.
Research on robot mobile obstacle avoidance control based on visual information
NASA Astrophysics Data System (ADS)
Jin, Jiang
2018-03-01
Detecting obstacles and controlling robots to avoid them has long been a key research topic in robot control. In this paper, a scheme for visual information acquisition is proposed: by interpreting the visual information, it is transformed into an information source for path processing. When obstacles are encountered along the established route, the algorithm adjusts the trajectory in real time to achieve intelligent control of the mobile robot. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's motion control are guaranteed.
A fast non-local means algorithm based on integral image and reconstructed similar kernel
NASA Astrophysics Data System (ADS)
Lin, Zheng; Song, Enmin
2018-03-01
Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational time complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similar kernel. First, the integral image is introduced into the traditional NLM algorithm, which eliminates a great deal of repetitive computation in the parallel processing and greatly improves the running speed of the algorithm. Secondly, to amend the error of the integral image, we construct a similar window resembling a Gaussian kernel in a pyramidal stacking pattern. Finally, to eliminate the influence of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme to construct a 3 × 3 similar kernel in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of peak signal-to-noise ratio (PSNR, 2.9% higher on average) and perceptual image quality.
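The integral-image idea behind the speed-up can be sketched briefly: once a summed-area table is built, the sum over any rectangular window (and hence any patch distance for a fixed shift) costs four lookups. A minimal sketch with illustrative names, not code from the paper:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column,
    so window sums need no boundary special-casing."""
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return S

def window_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]
```

In the NLM setting, the table is typically built over squared differences between the image and a shifted copy, so every patch distance for that shift is obtained in constant time regardless of patch size.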
Studies in optical parallel processing. [All optical and electro-optic approaches
NASA Technical Reports Server (NTRS)
Lee, S. H.
1978-01-01
Threshold and A/D devices for converting a gray-scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallelism than is possible with the IOC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erbar, J.H.; Maddox, R.N.
1981-07-06
Expansion processes, using either Joule-Thomson or isentropic principles, play an important role in the processing of natural gas streams for liquid recovery and/or hydrocarbon-dewpoint control. Constant-enthalpy expansion has been an integral part of gas processing schemes for many years. The constant-entropy, or isentropic, process is more recent but has achieved widespread popularity. In typical flow sheets for expansion processes, the expansion device is shown as a valve or choke; it could also be an expansion turbine, indicating an isentropic expansion. The expansion may simply be to lower pressure, or, in the case of turboexpansion, it may recover material or produce work. More frequently, the aim of the expansion is to produce low temperature and enhance liquid recovery.
Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming
2011-11-01
Laser speckle imaging (LSI) is a noninvasive, full-field optical imaging technique that produces two-dimensional blood flow maps of tissues from raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for real-time LSI processing, developed and optimized specifically for implementation in a field-programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI, introducing a pipelined processing scheme and a parallel computing architecture into the design. When the LSI hardware processor is implemented in an FPGA running at its maximum frequency of 130 MHz, up to 85 raw images with a resolution of 640×480 pixels can be processed per second. We also present a system-on-chip (SOC) solution for LSI processing that integrates the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip; this SOC solution can also be used to produce an application-specific integrated circuit for LSI processing.
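The core computation that such an LSI processor pipelines is the spatial speckle contrast, K = σ/μ, evaluated over a small sliding window of the raw speckle image (lower contrast corresponds to more motion blurring, i.e. faster flow). A straightforward, unoptimized software sketch with hypothetical names:

```python
import numpy as np

def speckle_contrast(raw, w=7):
    """Spatial speckle contrast K = sigma/mu in a w x w sliding
    window; reflect-padding keeps the output the same size."""
    pad = w // 2
    I = np.pad(raw.astype(float), pad, mode='reflect')
    K = np.empty(raw.shape, dtype=float)
    rows, cols = raw.shape
    for r in range(rows):
        for c in range(cols):
            win = I[r:r + w, c:c + w]      # window centered on (r, c)
            mu = win.mean()
            K[r, c] = win.std() / mu if mu > 0 else 0.0
    return K
```

A hardware implementation replaces the nested loops with a pipeline of running sums and sums of squares, which is what makes the per-frame cost compatible with real-time throughput.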
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. We discuss how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process, while the logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system; technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
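The equivalence of the AND operation to a bimolecular process can be illustrated with a toy kinetic integration: the product of A + B → C accumulates only when both inputs are present. This is a minimal sketch, not the authors' DNAzyme model; the rate constant, step size and integration length are arbitrary:

```python
def and_gate_kinetics(a0, b0, k=1.0, dt=1e-3, steps=5000):
    """Forward-Euler integration of the bimolecular reaction
    A + B -> C. The product concentration c acts as a
    continuous-valued AND of the two input concentrations."""
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b   # bimolecular rate law: zero if either input is absent
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c
```

With either input at zero the rate term vanishes and no product forms; with both inputs present the output rises toward a high level, mirroring the truth table of AND on continuous concentrations.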
NASA Astrophysics Data System (ADS)
Kim, Tae-Wook; Park, Sang-Gyu; Choi, Byong-Deok
2011-03-01
The previous pixel-level digital-to-analog conversion (DAC) scheme, which implements part of a DAC in the pixel circuit, turned out to be very efficient for reducing the peripheral area of an integrated data driver fabricated with low-temperature polycrystalline silicon thin-film transistors (LTPS TFTs). However, because LTPS TFTs suffer from random variations in their characteristics, an open issue is whether the pixel-level DAC can be made compatible with existing pixel circuits, including the compensation schemes for TFT variations and IR drops on supply rails that are of primary importance for active-matrix organic light-emitting diode (AMOLED) displays. In this paper, we show that the pixel-level DAC scheme can be successfully combined with previous compensation schemes by giving two examples of voltage- and current-programming pixels. The previous pixel-level DAC schemes require an additional two TFTs and one capacitor, but for the newly proposed pixel circuits the overhead is no more than two TFTs, because the already existing capacitor is reused. In addition, a detailed analysis shows that the pixel-level DAC can be extended to 4-bit resolution, or applied together with 1:2 demultiplexed driving, for 6- to 8-in. diagonal XGA AMOLED display panels.
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
Multifunctional millimeter-wave radar system for helicopter safety
NASA Astrophysics Data System (ADS)
Goshi, Darren S.; Case, Timothy J.; McKitterick, John B.; Bui, Long Q.
2012-06-01
A multi-featured sensor solution has been developed that enhances the operational safety and functionality of small airborne platforms, representing an invaluable stride toward enabling higher-risk tactical missions. This paper presents results from a recently developed multi-functional sensor system that integrates a high-performance millimeter-wave radar front end, an evidence-grid-based integration processing scheme, and a 3D Synthetic Vision System (SVS) display. The front-end architecture consists of a W-band real-beam scanning radar that generates a high-resolution real-time radar map and operates with an adaptable antenna architecture, currently configured with an interferometric capability for target height estimation. The raw sensor data are further processed within an evidence-grid-based integration function that yields high-resolution maps of the region surrounding the platform. Lastly, the accumulated radar results are displayed in a fully rendered 3D SVS environment integrated with local database information to provide the best representation of the surrounding environment. The integrated system concept is discussed and initial results from an experimental flight test of this developmental system are presented. Specifically, the forward-looking operation demonstrates the system's ability to produce high-precision terrain mapping with obstacle detection and avoidance capability, showcasing its versatility in a true operational environment.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-21
... has not been controlled through the Kimberley Process Certification Scheme (KPCS). Under Section 3(2) of the Act, ``controlled through the Kimberley Process Certification Scheme'' means an importation... Kimberley Process Certification Scheme. Angola--Ministry of Geology and Mines. Armenia--Ministry of Trade...
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1979-01-01
A three-dimensional finite difference scheme for the solution of the shallow water momentum equations which accounts for the conservation of potential enstrophy in the flow of a homogeneous incompressible shallow atmosphere over steep topography as well as for total energy conservation is presented. The scheme is derived to be consistent with a reasonable scheme for potential vorticity advection in a long-term integration for a general flow with divergent mass flux. Numerical comparisons of the characteristics of the present potential enstrophy-conserving scheme with those of a scheme that conserves potential enstrophy only for purely horizontal nondivergent flow are presented which demonstrate the reduction of computational noise in the wind field with the enstrophy-conserving scheme and its convergence even in relatively coarse grids.
Cubic scaling algorithms for RPA correlation using interpolative separable density fitting
NASA Astrophysics Data System (ADS)
Lu, Jianfeng; Thicke, Kyle
2017-12-01
We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.
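The splitting via Cauchy's integral formula can be made concrete with a generic sketch: a matrix function f(A) evaluated by trapezoidal quadrature on a circular contour enclosing the spectrum, which converges geometrically in the number of quadrature nodes. This illustrates only the quadrature idea, not the authors' RPA algorithm; the matrix, contour, and f below are hypothetical.

```python
import numpy as np

def cauchy_matrix_function(A, f, center=0.0, radius=2.0, n_quad=64):
    """Approximate f(A) via trapezoidal quadrature of Cauchy's integral
    formula on a circle enclosing the spectrum of A (geometric convergence)."""
    n = A.shape[0]
    F = np.zeros((n, n), dtype=complex)
    for k in range(n_quad):
        theta = 2.0 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * theta)
        # Resolvent (zI - A)^{-1}: one linear solve per quadrature node
        R = np.linalg.solve(z * np.eye(n) - A, np.eye(n))
        F += radius * np.exp(1j * theta) * f(z) * R
    return (F / n_quad).real

# Example: exp(A) for a small symmetric matrix whose spectrum lies in |z| < 2
A = np.array([[0.5, 0.2], [0.2, -0.3]])
approx = cauchy_matrix_function(A, np.exp)
w, V = np.linalg.eigh(A)
exact = V @ np.diag(np.exp(w)) @ V.T
print(np.allclose(approx, exact))  # True
```

Because the trapezoid rule on a closed contour is spectrally accurate for analytic integrands, a few dozen nodes already reach machine precision here.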
Chiang, Kai-Wei; Duong, Thanh Trung; Liao, Jhen-Kai
2013-01-01
The integration of an Inertial Navigation System (INS) and the Global Positioning System (GPS) is common in mobile mapping and navigation applications to seamlessly determine the position, velocity, and orientation of the mobile platform. In most INS/GPS integrated architectures, the GPS is considered to be an accurate reference with which to correct for the systematic errors of the inertial sensors, which are composed of biases, scale factors and drift. However, the GPS receiver may produce abnormal pseudo-range errors mainly caused by ionospheric delay, tropospheric delay and the multipath effect. These errors degrade the overall position accuracy of an integrated system that uses conventional INS/GPS integration strategies such as loosely coupled (LC) and tightly coupled (TC) schemes. Conventional tightly coupled INS/GPS integration schemes apply the Klobuchar model and the Hopfield model to reduce pseudo-range delays caused by ionospheric delay and tropospheric delay, respectively, but do not address the multipath problem. However, the multipath effect (from reflected GPS signals) affects the position error far more significantly in a consumer-grade GPS receiver than in an expensive, geodetic-grade GPS receiver. To avoid this problem, a new integrated INS/GPS architecture is proposed. The proposed method is described and applied in a real-time integrated system with two integration strategies, namely loosely coupled and tightly coupled schemes. To verify the effectiveness of the proposed method, field tests with various scenarios are conducted and the results are compared with a reliable reference system. PMID:23955434
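In a loosely coupled scheme of the kind compared above, GPS position fixes correct the INS prediction through a Kalman filter. A minimal 1-D sketch, with all noise values illustrative rather than taken from the paper:

```python
import numpy as np

# Minimal 1-D loosely coupled sketch: a Kalman filter fuses an INS-predicted
# position/velocity with a GPS position fix. All noise values are illustrative.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
Q = np.diag([0.01, 0.1])                # process noise (stand-in for bias/drift)
H = np.array([[1.0, 0.0]])              # GPS observes position only
R = np.array([[4.0]])                   # GPS position noise variance, m^2

x = np.array([0.0, 1.0])                # state: [position, velocity]
P = np.diag([10.0, 1.0])                # prior covariance

# Predict with the INS model, then correct with a GPS fix at 0.6 m
x = F @ x
P = F @ P @ F.T + Q
z = np.array([0.6])
S = H @ P @ H.T + R                     # innovation covariance
K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P
print(P[0, 0] < 10.0)  # True: position uncertainty shrinks after the update
```

In a tightly coupled variant the measurement model H would map the state to pseudo-ranges per satellite instead of to a position fix.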
Emissivity measurements of shocked tin using a multi-wavelength integrating sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifter, A; Holtkamp, D B; Iverson, A J
Pyrometric measurements of radiance to determine temperature have been performed on shock physics experiments for decades. However, multi-wavelength pyrometry schemes sometimes fail to provide credible temperatures in experiments that incur unknown changes in sample emissivity, because an emissivity change also affects the spectral radiance. Hence, for shock physics experiments using pyrometry to measure temperatures, it is essential to determine the dynamic sample emissivity. The most robust way to determine the normal spectral emissivity is to measure the spectral normal-hemispherical reflectance using an integrating sphere. In this paper we describe a multi-wavelength (1.6–5.0 μm) integrating sphere system that utilizes a “reversed” scheme, which we use for shock physics experiments. The sample to be shocked is illuminated uniformly by scattering broadband light from inside a sphere onto the sample. A portion of the light reflected from the sample is detected at a point 12° from normal to the sample surface. For this experiment, we used the system to measure the emissivity of shocked tin at four wavelengths for shock stress values between 17 and 33 GPa. The results indicate a large increase in effective emissivity upon shock release from tin when the shock is above 24–25 GPa, a shock stress that partially melts the sample. We also recorded an IR image of one of the shocked samples through the integrating sphere, and the emissivity inferred from the image agreed well with the integrating-sphere, pyrometer-detector data. Here, we discuss experimental data, uncertainties, and the data analysis process. We also describe unique emissivity-measurement problems arising from shock experiments and methods to overcome such problems.
Whole-system evaluation research of a scheme to support inner city recruitment and retention of GPs.
Bellman, Loretta
2002-12-01
The GP Assistant/Research Associate scheme developed in the Guy's, King's and St Thomas' School of Medicine, London, aims to attract and recruit young GPs (GP Assistants) and develop their commitment to work in local inner city practices. Continuing professional development for both young and established GPs is a key feature of the scheme. The objectives of the whole-system evaluation research were to explore the perspectives of 34 stakeholders in the academic department, the practices and the PCGs, and to investigate the experiences of 19 GP Assistants who have participated in the scheme. Qualitative methods included semi-structured interviews, non-participant observations in the practices, audio-taped meetings and personal journals. Data collection also included reviewing documentation of the scheme, i.e. the previous quantitative evaluation report, publications and e-mails. The multi-method approach enabled individual, group and team perspectives of the scheme and triangulation of the data through comparing dialogue with observations and documentary evidence. Thematic analysis was undertaken to elicit the complex experiences of the GP Assistants. Wide-ranging findings included enthusiastic support for the continuation of the scheme. The GP Assistants' personal and professional development was clearly evident from the themes 'eye opener', new knowledge, managing multiple roles, feeling vulnerable, time constraints and empowering processes. Seven of the GP Assistants have become partners and ten chose to remain working in local practices. Significant challenges for managing and leading the scheme were apparent. Greater co-operation and collaborative working between the academic department and the practices is required. The scheme provides a highly valued visible means of support for GPs and could act as a model for a career pathway aimed at enhancing recruitment and retention of GPs. 
The scheme is also at the forefront of national initiatives aimed at supporting single-handed practices and helping GPs with their continuing professional development. An integrated approach to change, education, research and development is advocated to enable recruitment and retention of GPs, their academic development, and to underpin the evolution of PCTs as learning organizations.
Chip-integrated optical power limiter based on an all-passive micro-ring resonator
NASA Astrophysics Data System (ADS)
Yan, Siqi; Dong, Jianji; Zheng, Aoling; Zhang, Xinliang
2014-10-01
Recent progress in silicon nanophotonics has dramatically advanced the possible realization of large-scale on-chip optical interconnect integration. Adopting photons as information carriers can break the performance bottlenecks of electronic integrated circuits, such as serious thermal losses and poor processing rates. However, in integrated photonic circuits, few reported works can impose an upper limit on optical power and thereby protect an optical device from the harm caused by high power. In this study, we experimentally demonstrate a feasible integrated scheme based on a single all-passive micro-ring resonator to realize optical power limitation, with a function similar to that of a current-limiting circuit in electronics. In addition, we analyze the performance of the optical power limiter at various signal bit rates. The results show that the proposed device can limit the signal power effectively at bit rates up to 20 Gbit/s without deteriorating the signal. Meanwhile, this ultra-compact silicon device is fully compatible with electronic technology (typically complementary metal-oxide-semiconductor technology), which may pave the way for very-large-scale integrated photonic circuits for all-optical information processors and artificial intelligence systems.
Method and apparatus for conversion of carbonaceous materials to liquid fuel
Lux, Kenneth W.; Namazian, Mehdi; Kelly, John T.
2015-12-01
Embodiments of the invention relate to the conversion of hydrocarbon materials, including but not limited to coal and biomass, to a synthetic liquid transportation fuel. The invention includes the integration of a non-catalytic first reaction scheme, which converts carbonaceous materials into a gaseous product and a solid product that includes char and ash; a non-catalytic second reaction scheme, which converts a portion of the gaseous product from the first reaction scheme to light olefins and liquid byproducts; traditional gas-cleanup operations; and a third reaction scheme, which combines the olefins from the second reaction scheme to produce a targeted fuel such as a liquid transportation fuel.
A cache-aided multiprocessor rollback recovery scheme
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent
1989-01-01
This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global checkpointing.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow, and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, achieving good performance required utilizing multiple cores, using wide vector operations, and making efficient use of memory. The results show that the optimizations improved the performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
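Because there are no interactions among horizontal grid points, each (i, j) column can be processed independently, which is what makes the scheme amenable to wide vectors and many cores. A schematic NumPy sketch with a toy saturation-style update (not the Purdue-Lin equations) shows that a whole-grid vectorized update reproduces the per-column loop exactly:

```python
import numpy as np

def adjust_column(qv, qc, q_sat):
    """Toy saturation-style adjustment (NOT the Purdue-Lin equations):
    condense any vapor above a fixed threshold q_sat into cloud water."""
    excess = np.maximum(qv - q_sat, 0.0)
    return qv - excess, qc + excess

rng = np.random.default_rng(0)
nz, ni, nj = 40, 8, 8                        # vertical levels x horizontal grid
qv = rng.uniform(0.0, 0.02, (nz, ni, nj))    # vapor mixing ratio
qc = np.zeros((nz, ni, nj))                  # cloud water
q_sat = 0.012                                # fixed threshold (illustrative)

# Per-column loop, as a scalar code would execute it
qv_loop, qc_loop = qv.copy(), qc.copy()
for i in range(ni):
    for j in range(nj):
        qv_loop[:, i, j], qc_loop[:, i, j] = adjust_column(
            qv[:, i, j], qc[:, i, j], q_sat)

# Whole-grid vectorized update: identical result, no cross-column dependence
qv_vec, qc_vec = adjust_column(qv, qc, q_sat)
print(np.allclose(qv_vec, qv_loop) and np.allclose(qc_vec, qc_loop))  # True
```

On MIC hardware the same independence lets the compiler vectorize the inner loops and OpenMP distribute columns across cores.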
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
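The core of the non-uniform sampling step, allocating more MC particles to high-intensity spots, can be sketched with an intensity-proportional draw (the spot weights here are hypothetical; the actual APS method adapts them during optimization):

```python
import numpy as np

rng = np.random.default_rng(42)
intensities = np.array([1.0, 5.0, 0.5, 20.0, 3.5])  # hypothetical spot weights
p = intensities / intensities.sum()

# Draw a fixed MC particle budget, allocating particles to pencil-beam
# spots proportionally to their current intensity
n_particles = 100000
spot_ids = rng.choice(len(intensities), size=n_particles, p=p)
counts = np.bincount(spot_ids, minlength=len(intensities))

# The highest-intensity spot receives the most particles
print(counts.argmax() == intensities.argmax())  # True
```

Re-drawing with updated intensities at each optimization iteration concentrates the expensive MC work where it most affects the objective.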
Integrated guidance and control for microsatellite real-time automated proximity operations
NASA Astrophysics Data System (ADS)
Chen, Ying; He, Zhen; Zhou, Ding; Yu, Zhenhua; Li, Shunli
2018-07-01
This paper investigates the trajectory planning and control of autonomous spacecraft proximity operations with impulsive dynamics. A new integrated guidance and control scheme is developed to perform automated close-range rendezvous for underactuated microsatellites. To efficiently prevent collisions, a modified RRT* trajectory planning algorithm is proposed in this context. Several engineering constraints, such as collision avoidance, plume impingement, field of view and control feasibility, are considered simultaneously. Then, a feedback controller that employs a turn-burn-turn strategy, combining impulsive orbital control with finite-time attitude control, is designed to ensure the implementation of the planned trajectory. Finally, the performance of the trajectory planner and controller is evaluated through numerical tests. Simulation results indicate the real-time implementability of the proposed integrated guidance and control scheme, with position control error less than 0.5 m and velocity control error less than 0.05 m/s. Consequently, the proposed scheme offers the potential for wide applications, such as on-orbit maintenance, space surveillance and debris removal.
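Close-range rendezvous planners of this kind typically propagate the relative state between impulses with linearized Clohessy-Wiltshire dynamics. A sketch of the in-plane CW state-transition matrix (standard textbook dynamics, not the paper's planner; the mean motion and state below are illustrative):

```python
import numpy as np

def cw_stm(nt_, n):
    """In-plane Clohessy-Wiltshire state-transition matrix for the state
    [x, y, vx, vy] (x radial, y along-track), over phase angle nt_ = n*t."""
    s, c = np.sin(nt_), np.cos(nt_)
    return np.array([
        [4 - 3 * c,       0, s / n,           2 * (1 - c) / n],
        [6 * (s - nt_),   1, 2 * (c - 1) / n, (4 * s - 3 * nt_) / n],
        [3 * n * s,       0, c,               2 * s],
        [6 * n * (c - 1), 0, -2 * s,          4 * c - 3],
    ])

n = 0.0011                                   # mean motion, rad/s (roughly LEO)
x0 = np.array([100.0, -200.0, 0.1, -0.05])   # hypothetical relative state, m, m/s

# Consistency check: propagating 300 s twice equals propagating 600 s once,
# as expected for the exponential of a time-invariant system matrix
one_step = cw_stm(n * 600.0, n) @ x0
two_step = cw_stm(n * 300.0, n) @ (cw_stm(n * 300.0, n) @ x0)
print(np.allclose(one_step, two_step))  # True
```

In an impulsive planner, candidate burns simply add velocity increments to the state between such free-drift propagation segments.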
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^{ikr}/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is O(n log n), nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^{4/3}). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
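The O(n log n) cost of the precorrected-FFT step comes from evaluating a grid convolution with a translation-invariant kernel via FFTs. A 1-D sketch with a smoothed 1/r kernel, zero-padded so the FFT reproduces the direct linear convolution:

```python
import numpy as np

n = 256
rng = np.random.default_rng(1)
q = rng.standard_normal(n)                         # source strengths on a 1-D grid
# Smoothed 1/|r| kernel sampled at offsets -(n-1) .. (n-1)
kernel = 1.0 / np.sqrt(np.arange(-(n - 1), n) ** 2 + 1.0)

# Direct O(n^2) evaluation: phi_i = sum_j K(i - j) q_j
phi_direct = np.array([np.sum(kernel[i - np.arange(n) + n - 1] * q)
                       for i in range(n)])

# O(n log n) evaluation: zero-padded FFTs give the same linear convolution
size = 3 * n - 2                                   # full linear-convolution length
conv = np.fft.irfft(np.fft.rfft(q, size) * np.fft.rfft(kernel, size), size)
phi_fft = conv[n - 1:2 * n - 1]

print(np.allclose(phi_direct, phi_fft))  # True
```

The "precorrection" in the full method additionally repairs near-field interactions that the grid representation gets wrong; this sketch shows only the FFT convolution core.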
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, W.; Yin, J.; Li, C.
This paper presents a novel front-end electronics based on a front-end ASIC with post digital filtering and calibration dedicated to CZT detectors for PET imaging. A cascade amplifier based on a split-leg topology is selected to realize the charge-sensitive amplifier (CSA) for the sake of low-noise performance and a simple power-supply scheme. The output of the CSA is connected to a variable-gain amplifier to generate compatible signals for the A/D conversion. A multi-channel single-slope ADC is designed to sample multiple points for the digital filtering and shaping. The digital signal processing algorithms are implemented by an FPGA. To verify the proposed scheme, a front-end readout prototype ASIC is designed and implemented in a 0.35 μm CMOS process. In a single readout channel, a CSA, a VGA, a 10-bit ADC and registers are integrated. Two dummy channels, bias circuits, and a time controller are also integrated. The die size is 2.0 mm x 2.1 mm. The input range of the ASIC is from 2000 e- to 100000 e-, which is suitable for the detection of X- and gamma rays from 11.2 keV to 550 keV. The nonlinearity of the output voltage is less than 1%. The gain of the readout channel is 40.2 V/pC. The static power dissipation is about 10 mW/channel. The above test results show that the electrical performance of the ASIC satisfies the requirements of PET imaging applications. (authors)
The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Good, J. C.
2017-05-01
The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image; and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
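A generic histogram-equalization stretch, mapping pixel values through their empirical CDF, illustrates the idea behind such adaptive stretches (mViewer's actual algorithm preserves image structure in ways this minimal sketch does not):

```python
import numpy as np

def equalize(img, n_bins=256):
    """Map pixel values through their empirical CDF so the stretched image
    spreads roughly uniformly over [0, 1] (generic sketch, not mViewer's)."""
    hist, edges = np.histogram(img, bins=n_bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                       # normalize CDF to [0, 1]
    return np.interp(img, edges[:-1], cdf)

rng = np.random.default_rng(7)
# Synthetic "sky": faint exponential background dominates the histogram
img = rng.exponential(scale=1.0, size=(64, 64))
out = equalize(img)

print(out.min() >= 0.0 and out.max() <= 1.0)  # True: stretched into [0, 1]
```

Because the mapping is monotone, the relative ordering of pixel brightnesses is preserved while faint background structure gains contrast.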
77 FR 27831 - List of Participating Countries and Entities Under the Clean Diamond Trade Act of 2003
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Kimberley Process Certification Scheme (KPCS). Under Section 3(2) of the Act, ``controlled through the Kimberley Process Certification Scheme'' means an importation from the territory of a Participant or... Participants in the Kimberley Process Certification Scheme. Angola--Ministry of Geology and Mines. Armenia...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., unless the rough diamond has been controlled through the Kimberley Process Certification Scheme. (b) The... States of any rough diamond not controlled through the Kimberley Process Certification Scheme do not... Process Certification Scheme and thus is not permitted, except in the following circumstance. The...
Edgeworth expansions of stochastic trading time
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann
2010-08-01
Under most local and stochastic volatility models, the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This representation nicely relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
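As a baseline for such Monte Carlo checks, an Euler scheme for the CEV forward dF = σ F^β dW reduces to lognormal dynamics at β = 1, where a simulated vanilla call can be validated against Black's closed form (all parameters below are illustrative; this is not the paper's expansion):

```python
import numpy as np
from math import log, sqrt, erf

def black_call(F, K, sigma, T):
    """Black-76 call on a forward (zero rates), via the standard normal CDF."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    return F * N(d1) - K * N(d1 - sigma * sqrt(T))

rng = np.random.default_rng(0)
F0, K, sigma, T, beta = 100.0, 100.0, 0.2, 1.0, 1.0
n_paths, n_steps = 200_000, 100
dt = T / n_steps

# Euler discretization of the CEV forward dF = sigma * F**beta * dW
F = np.full(n_paths, F0)
for _ in range(n_steps):
    dW = rng.standard_normal(n_paths) * sqrt(dt)
    F = np.maximum(F + sigma * F**beta * dW, 1e-12)  # floor at zero

mc_price = np.mean(np.maximum(F - K, 0.0))
print(abs(mc_price - black_call(F0, K, sigma, T)) < 0.2)  # True
```

For β < 1 the same loop gives a CEV benchmark against which asymptotic expansions of the transition density can be compared.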
Symmetric weak ternary quantum homomorphic encryption schemes
NASA Astrophysics Data System (ADS)
Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao
2016-03-01
Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, a two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability pk = 1/3^(3n), so the schemes can better protect the privacy of users’ data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users’ private quantum information can be well protected in a distributed computing environment.
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Tsai, Cheng-Bin
2015-04-01
To respond to agricultural water shortages caused by climate change without affecting future rice yields, the application of water-saving irrigation, such as the SRI methodology, is being considered for rice cultivation in Taiwan. However, flooded paddy fields are considered an important source of groundwater recharge in Central Taiwan. The water-saving benefit of this new methodology and its impact on reduced groundwater recharge should therefore be assessed together in this area. The objective of this study was to evaluate the changes in groundwater recharge and irrigation water use between the SRI and traditional irrigation schemes (continuous irrigation and rotational irrigation). An experimental paddy field located in the proximal area of the Choushui River alluvial fan (the largest groundwater pumping region in Taiwan) was chosen as the study area. The 3-D finite element groundwater model (FEMWATER), with variable boundary condition analog functions, was applied to simulate the groundwater recharge process and amount under the traditional irrigation schemes and the SRI methodology. The use of effective rainfall was either taken into account or not in different simulation scenarios for each irrigation scheme. The simulation results showed that infiltration rates did not vary significantly with the inclusion of effective rainfall, but low soil moisture settings in deep soil layers resulted in higher infiltration rates. Taking the use of effective rainfall into account, the average infiltration rates for continuous irrigation, rotational irrigation, and the SRI methodology in the first crop season of 2013 were 4.04 mm/day, 4.00 mm/day and 3.92 mm/day, respectively. The groundwater recharge amount of the SRI methodology was slightly lower than those of the traditional irrigation schemes, a reduction of 4% and 2% compared with continuous irrigation and rotational irrigation, respectively. 
The field irrigation requirement of the SRI methodology was significantly lower than those of the traditional irrigation schemes, a saving of 35% and 9% compared with continuous irrigation and rotational irrigation, respectively. The water-saving benefit of the SRI methodology thus significantly outweighed the disadvantage of reduced groundwater recharge. The results could be used as a basis for the relevant government agencies to formulate integrated water resource management strategies in this area. Keywords: SRI, Paddy field, Infiltration, Groundwater recharge
NASA Astrophysics Data System (ADS)
Narison, Stephan
2004-05-01
About Stephan Narison; Outline of the book; Preface; Acknowledgements; Part I. General Introduction: 1. A short flash on particle physics; 2. The pre-QCD era; 3. The QCD story; 4. Field theory ingredients; Part II. QCD Gauge Theory: 5. Lagrangian and gauge invariance; 6. Quantization using path integral; 7. QCD and its global invariance; Part III. MS scheme for QCD and QED: Introduction; 8. Dimensional regularization; 9. The MS renormalization scheme; 10. Renormalization of operators using the background field method; 11. The renormalization group; 12. Other renormalization schemes; 13. MS scheme for QED; 14. High-precision low-energy QED tests; Part IV. Deep Inelastic Scattering at Hadron Colliders: 15. OPE for deep inelastic scattering; 16. Unpolarized lepton-hadron scattering; 17. The Altarelli-Parisi equation; 18. More on unpolarized deep inelastic scatterings; 19. Polarized deep-inelastic processes; 20. Drell-Yan process; 21. One 'prompt photon' inclusive production; Part V. Hard Processes in e+e- Collisions: Introduction; 22. One hadron inclusive production; 23. gg scatterings and the 'spin' of the photon; 24. QCD jets; 25. Total inclusive hadron productions; Part VI. Summary of QCD Tests and αs Measurements; Part VII. Power Corrections in QCD: 26. Introduction; 27. The SVZ expansion; 28. Technologies for evaluating Wilson coefficients; 29. Renormalons; 30. Beyond the SVZ expansion; Part VIII. QCD Two-Point Functions: 31. References guide to original works; 32. (Pseudo)scalar correlators; 33. (Axial-)vector two-point functions; 34. Tensor-quark correlator; 35. Baryonic correlators; 36. Four-quark correlators; 37. Gluonia correlators; 38. Hybrid correlators; 39. Correlators in x-space; Part IX. QCD Non-Perturbative Methods: 40. Introduction; 41. Lattice gauge theory; 42. Chiral perturbation theory; 43. Models of the QCD effective action; 44. Heavy quark effective theory; 45. Potential approaches to quarkonia; 46. On monopole and confinement; Part X. 
QCD Spectral Sum Rules: 47. Introduction; 48. Theoretical foundations; 49. Survey of QCD spectral sum rules; 50. Weinberg and DMO sum rules; 51. The QCD coupling αs; 52. The QCD condensates; 53. Light and heavy quark masses, etc.; 54. Hadron spectroscopy; 55. D, B and Bc exclusive weak decays; 56. B0(s)-B0(s) mixing, kaon CP violation; 57. Thermal behaviour of QCD; 58. More on spectral sum rules; Part XI. Appendix A: physical constants and units; Appendix B: weight factors for SU(N)c; Appendix C: coordinates and momenta; Appendix D: Dirac equation and matrices; Appendix E: Feynman rules; Appendix F: Feynman integrals; Appendix G: useful formulae for the sum rules; Bibliography; Index.
NASA Astrophysics Data System (ADS)
Narison, Stephan
2007-07-01
About Stephan Narison; Outline of the book; Preface; Acknowledgements; Part I. General Introduction: 1. A short flash on particle physics; 2. The pre-QCD era; 3. The QCD story; 4. Field theory ingredients; Part II. QCD Gauge Theory: 5. Lagrangian and gauge invariance; 6. Quantization using path integral; 7. QCD and its global invariance; Part III. MS scheme for QCD and QED: Introduction; 8. Dimensional regularization; 9. The MS renormalization scheme; 10. Renormalization of operators using the background field method; 11. The renormalization group; 12. Other renormalization schemes; 13. MS scheme for QED; 14. High-precision low-energy QED tests; Part IV. Deep Inelastic Scattering at Hadron Colliders: 15. OPE for deep inelastic scattering; 16. Unpolarized lepton-hadron scattering; 17. The Altarelli-Parisi equation; 18. More on unpolarized deep inelastic scatterings; 19. Polarized deep-inelastic processes; 20. Drell-Yan process; 21. One 'prompt photon' inclusive production; Part V. Hard Processes in e+e- Collisions: Introduction; 22. One hadron inclusive production; 23. gg scatterings and the 'spin' of the photon; 24. QCD jets; 25. Total inclusive hadron productions; Part VI. Summary of QCD Tests and as Measurements; Part VII. Power Corrections in QCD: 26. Introduction; 27. The SVZ expansion; 28. Technologies for evaluating Wilson coefficients; 29. Renormalons; 30. Beyond the SVZ expansion; Part VIII. QCD Two-Point Functions: 31. References guide to original works; 32. (Pseudo)scalar correlators; 33. (Axial-)vector two-point functions; 34. Tensor-quark correlator; 35. Baryonic correlators; 36. Four-quark correlators; 37. Gluonia correlators; 38. Hybrid correlators; 39. Correlators in x-space; Part IX. QCD Non-Perturbative Methods: 40. Introduction; 41. Lattice gauge theory; 42. Chiral perturbation theory; 43. Models of the QCD effective action; 44. Heavy quark effective theory; 45. Potential approaches to quarkonia; 46. On monopole and confinement; Part X. 
QCD Spectral Sum Rules: 47. Introduction; 48. Theoretical foundations; 49. Survey of QCD spectral sum rules; 50. Weinberg and DMO sum rules; 51. The QCD coupling αs; 52. The QCD condensates; 53. Light and heavy quark masses, etc.; 54. Hadron spectroscopy; 55. D, B and Bc exclusive weak decays; 56. B0(s)-B0(s) mixing, kaon CP violation; 57. Thermal behaviour of QCD; 58. More on spectral sum rules; Part XI. Appendix A: physical constants and units; Appendix B: weight factors for SU(N)c; Appendix C: coordinates and momenta; Appendix D: Dirac equation and matrices; Appendix E: Feynman rules; Appendix F: Feynman integrals; Appendix G: useful formulae for the sum rules; Bibliography; Index.
Double dissociation of value computations in orbitofrontal and anterior cingulate neurons
Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.
2011-01-01
Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging across the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme, and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498
Stygoregions – a promising approach to a bioregional classification of groundwater systems
Stein, Heide; Griebler, Christian; Berkhoff, Sven; Matzke, Dirk; Fuchs, Andreas; Hahn, Hans Jürgen
2012-01-01
Linked to diverse biological processes, groundwater ecosystems deliver essential services to mankind, the most important of which is the provision of drinking water. In contrast to surface waters, ecological aspects of groundwater systems are ignored by the current European Union and national legislation. Groundwater management and protection measures refer exclusively to its good physicochemical and quantitative status. Current initiatives in developing ecologically sound integrative assessment schemes by taking groundwater fauna into account depend on an initial classification of subsurface bioregions. In a large-scale survey, the regional and biogeographical distribution patterns of groundwater-dwelling invertebrates were examined for many parts of Germany. Following an exploratory approach, our results underline that the distribution patterns of invertebrates in groundwater are not in accordance with any existing bioregional classification system established for surface habitats. In consequence, we propose to develop a new classification scheme for groundwater ecosystems based on stygoregions. PMID:22993698
Limited Rank Matrix Learning, discriminative dimension reduction and visualization.
Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael
2012-02-01
We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank corresponding to low-dimensional representations of the data. This allows us to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, limiting the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real-world data sets serve as an illustration and demonstrate the usefulness of the suggested method. Copyright © 2011 Elsevier Ltd. All rights reserved.
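The rank-limited distance at the heart of this extension can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: it only shows how a single matrix Omega of shape (m, d), with m much smaller than d, simultaneously parameterizes the adaptive metric d(x, w) = ||Omega(x - w)||^2 and an m-dimensional discriminative projection. All names and dimensions are chosen for the example.

```python
import numpy as np

def limited_rank_distance(x, w, omega):
    """Adaptive distance d(x, w) = ||Omega (x - w)||^2.

    omega has shape (m, d) with m << d, so the same matrix that
    parameterizes the metric also projects the data to m dimensions.
    """
    diff = omega @ (x - w)
    return float(diff @ diff)

rng = np.random.default_rng(0)
d, m = 10, 2                        # intrinsic dimension assumed to be 2
omega = rng.standard_normal((m, d))  # in GMLVQ this matrix is learned
x = rng.standard_normal(d)
w = rng.standard_normal(d)           # a prototype vector

print(limited_rank_distance(x, w, omega) >= 0.0)   # a squared norm

# The rows of omega give an m-dimensional projection that can be used
# directly for visualization when m is 2 or 3.
proj = omega @ x
print(proj.shape)
```

In the full algorithm, both the prototypes and omega are adapted by gradient steps on a classification cost; the point here is only that the rank limitation makes the number of metric parameters m*d instead of d*d.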
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations, representing ion exchange resin particles that vary in size and age, is coupled through a boundary condition with a macroscopic ordinary differential equation (ODE) representing the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages over all ion exchange particle ages for a given particle size, avoiding the expensive Monte Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on a finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral-equation-based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher-order KDC scheme is more efficient than the traditional finite element solution approach, and that this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
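The two-scale coupling structure described above can be illustrated with a drastically simplified toy problem. The sketch below is not the paper's model: it replaces the population of spherical resin particles of varying size and age with a single 1D slab particle, uses a hypothetical film mass-transfer coefficient for the boundary coupling, and integrates with plain explicit Euler rather than FEM or KDC. It shows only the shape of the coupling: a microscale diffusion equation exchanging mass with a well-mixed reactor ODE.

```python
import numpy as np

# Toy two-scale model: diffusion into a single representative particle
# (1D slab stand-in for a resin bead) coupled to a well-mixed reactor.
D, L, N = 1e-3, 1.0, 40          # diffusivity, particle depth, grid points
dx = L / (N - 1)
dt = 0.25 * dx**2 / D            # below the explicit stability limit
k_film = 0.05                    # hypothetical film mass-transfer coeff.
V_reactor, A_particle = 10.0, 1.0

u = np.zeros(N)                  # concentration profile inside the particle
C = 1.0                          # reactor concentration (well mixed)

for _ in range(2000):
    flux = k_film * (C - u[0])               # boundary-condition coupling
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * D * lap[1:-1]
    u[0] += dt * (D * (u[1] - u[0]) / dx**2 + flux / dx)
    u[-1] = u[-2]                            # zero-flux far boundary
    C += dt * (-A_particle * flux / V_reactor)  # macroscopic reactor ODE

print(0.0 < C < 1.0)     # reactor is depleted as the particle loads up
```

The AAM of the paper replaces an ensemble of such microscale problems (one per particle size and age) with a single age-averaged profile per size class, which is where the computational savings over Monte Carlo sampling come from.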
Introducing the MIT Regional Climate Model (MRCM)
NASA Astrophysics Data System (ADS)
Eltahir, Elfatih A. B.; Winter, Jonathan M.; Marcella, Marc P.; Gianotti, Rebecca L.; Im, Eun-Soon
2013-04-01
During the last decade, researchers at MIT have worked on improving the skill of the Regional Climate Model version 3 (RegCM3) in simulating climate over different regions through the incorporation of new physical schemes or the modification of original schemes. The MIT Regional Climate Model (MRCM) features several modifications over RegCM3, including coupling of the Integrated Biosphere Simulator (IBIS), a new surface albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Here, we introduce the MRCM and briefly describe the major model modifications relative to RegCM3 and their impact on model performance. The most significant difference relative to the original RegCM3 configuration is the coupling of the Integrated Biosphere Simulator (IBIS) land-surface scheme (Winter et al., 2009). Based on simulations using IBIS over North America, the Maritime Continent, Southwest Asia and West Africa, we demonstrate that the use of IBIS as the land surface scheme results in a better representation of surface energy and water budgets in comparison to BATS. Furthermore, the addition of a new irrigation scheme to IBIS makes it possible to investigate the effects of irrigation over any region. Also, a new surface albedo assignment method used together with IBIS brings further improvement in simulations of surface radiation (Marcella and Eltahir, 2013). Another important feature of the MRCM is the introduction of a new convective cloud and rainfall auto-conversion scheme (Gianotti and Eltahir, 2013). This modification brings more physical realism into an important component of the model, and succeeds in simulating convective-radiative feedback, improving model performance across several radiation fields and rainfall characteristics. Other features of the MRCM, such as the modified boundary layer height and cloud scheme and the improvements in the dust emission and transport representations, will also be discussed.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
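The core inferential machinery named here, a Markov chain Monte Carlo sampler constraining a process-rate parameter against observations, can be sketched with a toy example. The sketch below is not BOSS: it infers a single rate coefficient for a hypothetical power-law process q' = -a*q (assuming exponent b = 1, Gaussian observation noise, and a flat prior on log a), using a minimal Metropolis sampler, whereas BOSS jointly samples many rate parameters and structural choices over prognostic moments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process: q' = -a * q, observed with Gaussian noise.
a_true = 0.5
t = np.linspace(0.0, 4.0, 20)
q_obs = np.exp(-a_true * t) + 0.01 * rng.standard_normal(t.size)

def log_post(log_a):
    """Log posterior with a flat prior on log(a), noise sigma = 0.01."""
    a = np.exp(log_a)
    resid = q_obs - np.exp(-a * t)       # analytic model solution
    return -0.5 * np.sum(resid**2) / 0.01**2

# Minimal Metropolis sampler on log(a).
chain = []
x, lp = 0.0, log_post(0.0)
for _ in range(5000):
    prop = x + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain.append(x)

a_est = np.exp(np.median(chain[1000:]))  # posterior median after burn-in
print(round(float(a_est), 2))
```

The posterior concentrates near the true rate coefficient; in the full scheme, the same accept/reject loop runs over the whole vector of process-rate parameters for each candidate structural variant.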
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead-zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of the output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method, which offers the desired tradeoff among several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent integrator wind-up and to compensate for the dead-zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems, focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. The resulting tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
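The back-calculation and tracking anti-windup idea can be shown in a minimal sketch. The example below is illustrative, not the paper's servo model or tuned gains: a PI controller drives a simple first-order plant through a saturating actuator, and the tracking gain k_t feeds the difference between the saturated and unsaturated control signal back into the integrator, so the integral state stops winding up while the actuator is at its limit.

```python
# Minimal PI controller with back-calculation anti-windup on a
# first-order plant y' = -y + u.  All gains are illustrative.
def simulate(k_t, k_p=2.0, k_i=2.0, u_max=1.1, r=1.0, dt=0.01, steps=3000):
    y, integ, y_peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = r - y
        u_unsat = k_p * e + k_i * integ
        u = max(-u_max, min(u_max, u_unsat))     # actuator saturation
        integ += dt * (e + k_t * (u - u_unsat))  # back-calculation term
        y += dt * (-y + u)                       # first-order plant
        y_peak = max(y_peak, y)
    return y, y_peak

y_aw, peak_aw = simulate(k_t=5.0)   # with anti-windup tracking gain
y_no, peak_no = simulate(k_t=0.0)   # plain PI: integrator winds up
print(peak_aw < peak_no)            # less overshoot with anti-windup
```

With k_t = 0 the integrator keeps accumulating error while the actuator is saturated during the setpoint step, producing a visible overshoot; with a positive tracking gain the integrator is bled back toward the achievable control level and the overshoot shrinks, while both runs settle at the setpoint.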
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the MIC requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems
Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon
2017-01-01
This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which builds up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is effectively implemented by proposing two basis readout topologies of amplifier-based and oscillator-based circuits. For noise-immune design against various noises from inherent human-touch operations, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving a 12-bit resolution of successive approximation register (SAR) of analog-to-digital conversion without additional calibrations. A ROIC prototype that includes the whole proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors. PMID:28368355
Description of the Prometheus Program Alternator/Thruster Integration Laboratory (ATIL)
NASA Technical Reports Server (NTRS)
Baez, Anastacio N.; Birchenough, Arthur G.; Lebron-Velilla, Ramon C.; Gonzalez, Marcelo C.
2005-01-01
The Project Prometheus Alternator/Thruster Integration Laboratory (ATIL) has two primary objectives: to obtain test data to influence the power conversion and electric propulsion system designs, and to assist in developing the primary power quality specifications prior to the system Preliminary Design Review (PDR). ATIL is being developed in stages, or configurations of increasing fidelity and complexity, in order to support the various phases of the Prometheus program. ATIL provides timely insight into the electrical interactions between a representative Permanent Magnet Generator, its associated control schemes, realistic electric system loads, and an operating electric propulsion thruster. The ATIL main elements are an electrically driven 100 kWe Alternator Test Unit (ATU), an alternator controller using parasitic loads, and a thruster Power Processing Unit (PPU) breadboard. This paper describes the ATIL components, its development approach, preliminary integration test results, and current status.
Coordinating teams of autonomous vehicles: an architectural perspective
NASA Astrophysics Data System (ADS)
Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo
2005-05-01
In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).
Highly localized distributed Brillouin scattering response in a photonic integrated circuit
NASA Astrophysics Data System (ADS)
Zarifi, Atiyeh; Stiller, Birgit; Merklein, Moritz; Li, Neuton; Vu, Khu; Choi, Duk-Yong; Ma, Pan; Madden, Stephen J.; Eggleton, Benjamin J.
2018-03-01
The interaction of optical and acoustic waves via stimulated Brillouin scattering (SBS) has recently reached on-chip platforms, which has opened new fields of applications ranging from integrated microwave photonics and on-chip narrow-linewidth lasers, to phonon-based optical delay and signal processing schemes. Since SBS is an effect that scales exponentially with interaction length, on-chip implementation on a short length scale is challenging, requiring carefully designed waveguides with optimized opto-acoustic overlap. In this work, we use the principle of Brillouin optical correlation domain analysis to locally measure the SBS spectrum with high spatial resolution of 800 μm and perform a distributed measurement of the Brillouin spectrum along a spiral waveguide in a photonic integrated circuit. This approach gives access to local opto-acoustic properties of the waveguides, including the Brillouin frequency shift and linewidth, essential information for the further development of high quality photonic-phononic waveguides for SBS applications.
NNLO jet cross sections by subtraction
NASA Astrophysics Data System (ADS)
Somogyi, G.; Bolzoni, P.; Trócsányi, Z.
2010-08-01
We report on the computation of a class of integrals that appear when integrating the so-called iterated singly-unresolved approximate cross section of the NNLO subtraction scheme of Refs. [G. Somogyi, Z. Trócsányi, and V. Del Duca, JHEP 06, 024 (2005), arXiv:hep-ph/0502226; G. Somogyi and Z. Trócsányi, (2006), arXiv:hep-ph/0609041; G. Somogyi, Z. Trócsányi, and V. Del Duca, JHEP 01, 070 (2007), arXiv:hep-ph/0609042; G. Somogyi and Z. Trócsányi, JHEP 01, 052 (2007), arXiv:hep-ph/0609043] over the factorised phase space of unresolved partons. The integrated approximate cross section itself can be written as the product of an insertion operator (in colour space) times the Born cross section. We give selected results for the insertion operator for processes with two and three hard partons in the final state.
Crypto-Watermarking of Transmitted Medical Images.
Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'
2017-02-01
Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secure scheme to guarantee confidentiality and to verify the authenticity and integrity of exchanged medical data. In this paper, we describe a region-based crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in the image's region of non-interest using SVD in the DWT domain. Integrity is provided at two levels: strict integrity, implemented by a cryptographic hash watermark, and content-based integrity, implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding the patient's data in the image. The performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
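The "strict integrity" level alone can be sketched in a few lines. The example below is only that one ingredient: a cryptographic hash of the pixel data acts as a fragile tag that breaks under any modification. For simplicity the tag is stored alongside the data, whereas the paper embeds it inside the image's region of non-interest in the DWT domain and layers robust SVD watermarks and tamper localization on top.

```python
import hashlib

def strict_integrity_tag(pixels: bytes) -> str:
    """SHA-256 digest of the raw pixel bytes; any bit flip changes it."""
    return hashlib.sha256(pixels).hexdigest()

image = bytes(range(256)) * 64        # stand-in for medical image pixels
tag = strict_integrity_tag(image)

# Verification: a single-bit change breaks the tag.
tampered = bytearray(image)
tampered[100] ^= 0x01
print(strict_integrity_tag(image) == tag)            # True
print(strict_integrity_tag(bytes(tampered)) == tag)  # False
```

This all-or-nothing behavior is why a hash watermark provides strict integrity but no localization; the paper's separate encryption-based scheme is what identifies where a tamper occurred.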
Lee, H H; Chen, G; Yue, P L
2001-01-01
Theoretical and experimental studies have established that integrated treatment systems (mostly chemical and biological) for various industrial wastewaters can achieve better quality of treatment and can be cost-effective. In the present study, the objective is to minimize the use of process water in the textile industry by an economical recycle and reuse scheme. The textile wastewater was first characterized in terms of COD, BOD5, salinity and color. In order to recycle such wastewater, the contaminants should be mineralized and/or removed according to the reusable textile water quality standards. Typical results show that this is achievable. An economic analysis has been conducted on the proposed integrated system. The economic analysis shows that the integrated system is economically more attractive than any of the single treatment technologies for achieving the same target of treatment. The information presented in this paper provides a feasible option for the reduction of effluent discharges in the textile industry.
Recent advances in computational-analytical integral transforms for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work aims at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Selected examples are then presented that illustrate the improvement achieved by each class of extension, in terms of convergence acceleration and accuracy gain, related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, a multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement, and against commercial or dedicated purely numerical approaches.
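The eigenfunction-expansion idea underlying GITT can be illustrated on its textbook special case: the 1D heat equation u_t = u_xx on (0, pi) with homogeneous Dirichlet ends, whose transformed potentials decay as exp(-n^2 t). The sketch below shows only this classical case; none of the novelties cited above (single-domain reformulation, convective or nonlinear eigenproblems, integral balance) are reproduced.

```python
import numpy as np

def gitt_solution(f, x, t, n_terms):
    """Truncated eigenfunction expansion u = sum c_n e^{-n^2 t} sin(n x).

    The transform integrals c_n are evaluated with a simple quadrature
    rule on a fine grid (integrand vanishes at both endpoints).
    """
    xq = np.linspace(0.0, np.pi, 2001)
    dxq = xq[1] - xq[0]
    u = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        c_n = (2.0 / np.pi) * np.sum(f(xq) * np.sin(n * xq)) * dxq
        u += c_n * np.exp(-n**2 * t) * np.sin(n * x)
    return u

x = np.linspace(0.0, np.pi, 101)
f = lambda x: x * (np.pi - x)                        # initial condition
reference = gitt_solution(f, x, t=0.1, n_terms=50)   # well converged
coarse = gitt_solution(f, x, t=0.1, n_terms=3)

# The exp(-n^2 t) factor makes low truncations accurate for t > 0:
print(float(np.max(np.abs(coarse - reference))) < 1e-2)
```

The rapid decay of the higher modes is exactly what the convergence-enhancement strategies in the paper aim to preserve in harder (convective, nonlinear, multiscale) settings, where a naive basis would converge slowly.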
Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional
NASA Astrophysics Data System (ADS)
Song, Jong-Won; Hirao, Kimihiko
2015-07-01
We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the use of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the computational cost for periodic systems while improving the reproducibility of the bandgaps of semiconductors. Here we present a distance-based screening scheme tailored to the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme helps to save time in the HF exchange integration by efficiently decreasing the number of integrals, specifically in the near-field region, without incurring substantial changes in the total energy. In our assessment on periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme costs 1.56 times as much as a pure functional, while the previous Gau-PBE cost 1.84 times and HSE06 cost 3.34 times as much.
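The payoff of distance-based screening with a Gaussian-attenuated potential can be illustrated with a toy pairwise sum. The sketch below is not the actual integral screening of the paper (which screens two-electron integrals using multipole estimates): it merely shows, for point sites interacting through v(r) = exp(-omega*r^2), that discarding all pairs beyond the radius where the Gaussian falls under a drop tolerance removes most of the work at negligible cost in accuracy. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 20.0, size=(400, 3))   # hypothetical interaction sites
omega, eps = 0.15, 1e-8
r_cut = np.sqrt(np.log(1.0 / eps) / omega)    # where exp(-omega r^2) = eps

full, screened, kept = 0.0, 0.0, 0
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        r2 = float(np.sum((pts[i] - pts[j])**2))
        v = np.exp(-omega * r2)               # Gaussian-attenuated pair term
        full += v
        if r2 <= r_cut**2:                    # distance-based screening
            screened += v
            kept += 1

n_pairs = len(pts) * (len(pts) - 1) // 2
print(kept < n_pairs)                         # many pairs skipped outright
print(abs(full - screened) / full < 1e-5)     # negligible loss of accuracy
```

The rapid decay of the Gaussian, as opposed to the slowly decaying 1/r kernel of conventional exact exchange, is what makes such a sharp cutoff safe and is the root of the reported speedups.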
Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance
NASA Astrophysics Data System (ADS)
Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario
2018-01-01
Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to images with low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, limiting their use to a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results show that, relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images have only a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.
Low-power chip-level optical interconnects based on bulk-silicon single-chip photonic transceivers
NASA Astrophysics Data System (ADS)
Kim, Gyungock; Park, Hyundai; Joo, Jiho; Jang, Ki-Seok; Kwack, Myung-Joon; Kim, Sanghoon; Kim, In Gyoo; Kim, Sun Ae; Oh, Jin Hyuk; Park, Jaegyu; Kim, Sanggi
2016-03-01
We present a new scheme for chip-level photonic I/Os based on monolithically integrated vertical photonic devices on bulk silicon, which raises the integration level of PICs to a complete photonic transceiver (TRx) including a chip-level light source. A prototype of the single-chip photonic TRx based on a bulk silicon substrate demonstrated 20 Gb/s low-power chip-level optical interconnects between fabricated chips, proving that this scheme can offer compact, low-cost chip-level I/O solutions and have a significant impact on practical electronic-photonic integration in high-performance computing (HPC), CPU-memory interfaces, 3D-ICs, and LAN/SAN/data-center and network applications.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problem. The implicit-explicit approach has proven very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
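A minimal sketch of such a mixed implicit-explicit partition, applied to the 1D heat equation u_t = u_xx: the explicit (forward Euler) nodes are updated first from old values, then the implicit (backward Euler) block is solved with the fresh value on its interface. The node partition, mesh and time step below are illustrative, not the paper's finite element formulation, but they show the machinery of integrating two regions with different methods in one step.

```python
import numpy as np

N, L = 41, 1.0
dx = L / (N - 1)
dt = 0.4 * dx**2          # below the explicit stability limit 0.5*dx^2
x = np.linspace(0.0, L, N)
u = np.sin(np.pi * x)     # exact solution: exp(-pi^2 t) * sin(pi x)

imp = np.arange(1, N // 2)        # implicit region (left interior nodes)
exp_ = np.arange(N // 2, N - 1)   # explicit region (right interior nodes)

# Backward-Euler tridiagonal matrix for the implicit block only.
m = len(imp)
A = np.eye(m) * (1 + 2 * dt / dx**2)
for k in range(m - 1):
    A[k, k + 1] = A[k + 1, k] = -dt / dx**2

steps = 500
for _ in range(steps):
    un = u.copy()
    # 1) explicit forward-Euler update from old values
    u[exp_] = un[exp_] + dt / dx**2 * (un[exp_ - 1] - 2 * un[exp_] + un[exp_ + 1])
    # 2) implicit backward-Euler solve; the block's right boundary
    #    uses the freshly updated explicit node
    rhs = un[imp].copy()
    rhs[0] += dt / dx**2 * u[0]             # fixed end, u = 0
    rhs[-1] += dt / dx**2 * u[imp[-1] + 1]  # interface to explicit region
    u[imp] = np.linalg.solve(A, rhs)

t_end = steps * dt
err = float(np.max(np.abs(u - np.exp(-np.pi**2 * t_end) * np.sin(np.pi * x))))
print(err < 0.01)   # tracks the analytic decaying mode closely
```

In practice the implicit partition is chosen where the mesh is fine or the material stiff, so that the explicit stability limit is dictated only by the cheap explicit region; the stability analysis of such partitions is precisely the subject of the paper.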
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1989-01-01
A robust control scheme to accomplish accurate trajectory tracking for an integrated system of manipulator-plus-actuators is proposed. The control scheme comprises a feedforward and a feedback controller. The feedforward controller contains any known part of the manipulator dynamics that can be used for online control. The feedback controller consists of adaptive position and velocity feedback gains and an auxiliary signal generated by a fixed-gain proportional/integral/derivative controller. The feedback controller is updated by very simple adaptation laws that contain both proportional and integral adaptation terms. By introducing a simple sigma-modification into the adaptation laws, robustness is guaranteed in the presence of unmodeled dynamics and disturbances.
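The sigma-modification idea can be sketched on a far simpler problem than the manipulator model. The toy example below (illustrative parameters, not from the paper) regulates an unstable scalar plant with an adaptive feedback gain whose update law carries a -σk leakage term to keep the gain bounded.

```python
# Toy illustration of the sigma-modification idea, not the paper's manipulator
# controller: regulate an unstable scalar plant x' = a*x + u with adaptive
# feedback u = -k*x.  The "-sigma*k" leakage keeps the gain bounded even when
# unmodeled effects would otherwise make it drift.

def simulate(a=1.0, gamma=2.0, sigma=0.01, dt=0.01, steps=5000):
    x, k = 1.0, 0.0                              # plant state and adaptive gain
    for _ in range(steps):
        u = -k * x
        x += dt * (a * x + u)                    # plant dynamics
        k += dt * (gamma * x * x - sigma * k)    # adaptation law with leakage
    return x, k

x_final, k_final = simulate()
```

The gain grows until it dominates the unstable pole, the state is regulated toward zero, and the leakage term slowly bleeds the gain back toward the smallest stabilizing value instead of letting it drift without bound.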
Numerical time-domain electromagnetics based on finite-difference and convolution
NASA Astrophysics Data System (ADS)
Lin, Yuanqu
Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of integral equation methods, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high-order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers.
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.
Nanowire active-matrix circuitry for low-voltage macroscale artificial skin.
Takei, Kuniharu; Takahashi, Toshitake; Ho, Johnny C; Ko, Hyunhyub; Gillies, Andrew G; Leu, Paul W; Fearing, Ronald S; Javey, Ali
2010-10-01
Large-scale integration of high-performance electronic components on mechanically flexible substrates may enable new applications in electronics, sensing and energy. Over the past several years, tremendous progress in the printing and transfer of single-crystalline, inorganic micro- and nanostructures on plastic substrates has been achieved through various process schemes. For instance, contact printing of parallel arrays of semiconductor nanowires (NWs) has been explored as a versatile route to enable fabrication of high-performance, bendable transistors and sensors. However, truly macroscale integration of ordered NW circuitry has not yet been demonstrated, with the largest-scale active systems being of the order of 1 cm² (refs 11,15). This limitation is in part due to assembly- and processing-related obstacles, although larger-scale integration has been demonstrated for randomly oriented NWs (ref. 16). Driven by this challenge, here we demonstrate macroscale (7×7 cm²) integration of parallel NW arrays as the active-matrix backplane of a flexible pressure-sensor array (18×19 pixels). The integrated sensor array effectively functions as an artificial electronic skin, capable of monitoring applied pressure profiles with high spatial resolution. The active-matrix circuitry operates at a low operating voltage of less than 5 V and exhibits superb mechanical robustness and reliability, without performance degradation on bending to small radii of curvature (2.5 mm) for over 2,000 bending cycles. This work presents the largest integration of ordered NW-array active components, and demonstrates a model platform for future integration of nanomaterials for practical applications.
NMRPipe: a multidimensional spectral processing system based on UNIX pipes.
Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A
1995-11-01
The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
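As an analogy only (NMRPipe itself streams spectra through UNIX pipes between separate compiled programs, driven by shell scripts), the same streaming idea can be sketched with Python generators, each stage performing one component of the overall scheme on a stream of data rows:

```python
# Analogy only: NMRPipe streams data through UNIX pipes between programs;
# here the same idea is sketched with Python generators, where each stage
# performs one component of the overall scheme and passes its stream on.

def scale(rows, factor):
    for row in rows:                     # one processing component
        yield [v * factor for v in row]

def subtract_baseline(rows):
    for row in rows:                     # another independent component
        m = sum(row) / len(row)
        yield [v - m for v in row]

# stages compose like `prog1 | prog2` in a shell script; rows flow through
# one at a time, so no stage needs to hold the whole data set in memory
data = iter([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
result = list(subtract_baseline(scale(data, 2.0)))
```

The composition mirrors the asynchronous pipeline described above: each stage is independent, stages can be reordered or swapped freely, and data streams through under the consumer's control.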
Implicit level set algorithms for modelling hydraulic fracture propagation.
Peirce, A
2016-10-13
Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).
A potent approach for the development of FPGA based DAQ system for HEP experiments
NASA Astrophysics Data System (ADS)
Khan, Shuaib Ahmad; Mitra, Jubin; David, Erno; Kiss, Tivadar; Nayak, Tapan Kumar
2017-10-01
With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there is a growing demand for robust Data Acquisition (DAQ) schemes that perform in harsh radiation environments and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades while keeping cost in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on commercially available state-of-the-art FPGAs with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between on-detector front-end electronics situated in the harsh radiation environment and the back-end Data Processing Unit (DPU) placed in a low radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high-speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of measurements of resource utilisation, critical path delays, signal integrity, eye diagrams and Bit Error Rate (BER) are presented, which are the indicators for efficient system performance.
A patient privacy protection scheme for medical information system.
Lu, Chenglang; Wu, Zongda; Liu, Mingyong; Chen, Wei; Guo, Junfang
2013-12-01
In medical information systems, there is a great deal of confidential information concerning patient privacy. How to prevent patients' personal privacy information from being disclosed is therefore an important problem. Although traditional security protection strategies (such as identity authentication and authorization access control) can ensure data integrity, they cannot prevent a system's internal staff (such as administrators) from accessing and disclosing patient privacy information. In this paper, we present an effective scheme to protect patients' personal privacy in a medical information system. In the scheme, privacy data are encrypted using traditional encryption algorithms before being stored in the database of the server of the medical information system, so that even if the data are disclosed, they are difficult to decrypt and understand. However, to execute various kinds of query operations over the encrypted data efficiently, we also augment the encrypted data with an additional index, so as to process as much of each query as possible at the server side without the need to decrypt the data. Thus, in this paper, we mainly explore how the index of privacy data is constructed, and how a query operation over privacy data is translated into a new query over the corresponding index so that it can be executed at the server side immediately. Finally, both theoretical analysis and experimental evaluation validate the practicality and effectiveness of our proposed scheme.
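A minimal sketch of the general idea (bucketization over encrypted records), not the authors' actual index construction: each record stores a ciphertext plus a coarse bucket index, the server answers a translated range query over buckets without decrypting anything, and the client decrypts the candidate set and discards false positives. The XOR "cipher" below is a toy stand-in for a real encryption algorithm.

```python
# Hedged sketch of querying over encrypted data via a bucket index; the XOR
# "cipher" is a toy stand-in for real encryption, and the bucket width is
# illustrative, not the authors' construction.

def toy_encrypt(value, key=42):
    return bytes(b ^ key for b in str(value).encode())

def toy_decrypt(ct, key=42):
    return int(bytes(b ^ key for b in ct).decode())

def bucket(value, width=10):
    return value // width                  # index reveals only a coarse range

# client side: each stored record is (ciphertext, bucket index)
records = [(toy_encrypt(v), bucket(v)) for v in [3, 17, 25, 42, 58]]

# server side: the range query 25 <= v <= 45 is translated into a bucket
# query and answered without decrypting anything
lo, hi = bucket(25), bucket(45)
candidates = [ct for ct, b in records if lo <= b <= hi]

# client side: decrypt candidates, discard false positives from bucket edges
values = [v for v in map(toy_decrypt, candidates) if 25 <= v <= 45]
```

The design choice is the usual trade-off: coarser buckets leak less about the plaintext ordering but force the client to decrypt and filter more false positives.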
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires minimal code intrusion and algorithm alteration for upgrading a legacy solver with GPU computing capability at very little extra programming effort, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention caused by the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
NASA Astrophysics Data System (ADS)
Hunink, Johannes E.; Bryant, Benjamin P.; Vogl, Adrian; Droogers, Peter
2015-04-01
We analyse the multiple impacts of investments in sustainable land use practices on ecosystem services in the Upper Tana basin (Kenya) to support a watershed conservation scheme (a "water fund"). We apply an integrated modelling framework, building on previous field-based and modelling studies in the basin, and link biophysical outputs to economic benefits for the main actors in the basin. The first step in the modelling workflow is the use of a high-resolution spatial prioritization tool (Resource Investment Optimization System -- RIOS) to allocate the type and location of conservation investments in the different subbasins, subject to budget constraints and stakeholder concerns. We then run the Soil and Water Assessment Tool (SWAT) using the RIOS-identified investment scenarios to produce spatially explicit scenarios that simulate changes in water yield and suspended sediment. Finally, in close collaboration with downstream water users (urban water supply and hydropower) we link those biophysical outputs to monetary metrics, including: reduced water treatment costs, increased hydropower production, and crop yield benefits for upstream farmers in the conservation area. We explore how different budgets and different spatial targeting scenarios influence the return on the investments and the effectiveness of the water fund scheme. This study is novel in that it presents an integrated analysis targeting interventions in a decision context that takes into account local environmental and socio-economic conditions, and then relies on detailed, process-based, biophysical models to demonstrate the economic return on those investments. We conclude that the approach allows for an analysis on different spatial and temporal scales, providing conclusive evidence to stakeholders and decision makers on the contribution and benefits of the land-based investments in this basin.
This is serving as foundational work to support the implementation of the Upper Tana-Nairobi Water Fund, a public-private partnership to safeguard ecosystem service provision and food security.
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. 
Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
Air cooling of disk of a solid integrally cast turbine rotor for an automotive gas turbine
NASA Technical Reports Server (NTRS)
Gladden, H. J.
1977-01-01
A thermal analysis is made of surface cooling of a solid, integrally cast turbine rotor disk for an automotive gas turbine engine. Air purge and impingement cooling schemes are considered and compared with an uncooled reference case. Substantial reductions in blade temperature are predicted with each of the cooling schemes studied. It is shown that air cooling can result in a substantial gain in the stress-rupture life of the blade. Alternatively, increases in the turbine inlet temperature are possible.
Verhagen, Evert; Voogt, Nelly; Bruinsma, Anja; Finch, Caroline F
2014-04-01
Evidence of effectiveness does not equal successful implementation. To progress the field, practical tools are needed to bridge the gap between research and practice and to truly unite effectiveness and implementation evidence. This paper describes the Knowledge Transfer Scheme, which integrates existing implementation research frameworks into a tool developed specifically to bridge the gap between knowledge derived from research on the one side and evidence-based, usable information and tools for practice on the other.
High-performance packaging for monolithic microwave and millimeter-wave integrated circuits
NASA Technical Reports Server (NTRS)
Shalkhauser, K. A.; Li, K.; Shih, Y. C.
1992-01-01
Packaging schemes are developed that provide low-loss, hermetic enclosure for enhanced monolithic microwave and millimeter-wave integrated circuits. These package schemes are based on a fused quartz substrate material offering improved RF performance through 44 GHz. The small size and weight of the packages make them useful for a number of applications, including phased array antenna systems. As part of the packaging effort, a test fixture was developed to interface the single chip packages to conventional laboratory instrumentation for characterization of the packaged devices.
An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Saumil S.; Fischer, Paul F.; Min, Misun
In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate steady-state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady-state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady-state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
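A minimal sketch of conventional anti-windup by integrator clamping, on a toy first-order plant with a saturated actuator; all parameter values are illustrative, not those of the STOVL engine controller. With clamping, the integrator is frozen while the actuator sits at its limit, removing most of the overshoot that windup would otherwise cause.

```python
# Minimal sketch of Conventional Anti-Windup (integrator clamping) on a toy
# first-order plant with a saturated actuator.  All values are illustrative,
# not those of the STOVL engine controller.

def run(anti_windup, kp=1.0, ki=1.0, a=0.2, umax=0.3, r=1.0, dt=0.01, steps=6000):
    x = integ = peak = 0.0
    for _ in range(steps):
        e = r - x
        u_unsat = kp * e + ki * integ
        u = max(-umax, min(umax, u_unsat))   # actuator operational limit
        if not (anti_windup and u != u_unsat):
            integ += e * dt                  # clamping: freeze integrator while saturated
        x += dt * (-a * x + u)               # slow first-order plant
        peak = max(peak, x)
    return x, peak

x_aw, peak_aw = run(True)    # with anti-windup
x_no, peak_no = run(False)   # plain PI: integrator winds up during saturation
```

Both runs eventually settle at the setpoint, but the unprotected controller overshoots far more because its integrator keeps accumulating error the whole time the actuator is pinned at the limit.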
Documentation of the Goddard Laboratory for atmospheres fourth-order two-layer shallow water model
NASA Technical Reports Server (NTRS)
Takacs, L. L. (Compiler)
1986-01-01
The theory and numerical treatment used in the 2-level GLA fourth-order shallow water model are described. This model was designed to emulate the horizontal finite differences used by the GLA Fourth-Order General Circulation Model (Kalnay et al., 1983) in addition to its grid structure, form of high-latitude and global filtering, and time-integration schemes. A user's guide is also provided instructing the user on how to create initial conditions, execute the model, and post-process the data history.
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, M V; Garanin, S G; Dolgopolov, Yu V
2014-11-30
A seven-channel fibre laser system operated by the master oscillator – multichannel power amplifier scheme is phase locked using a stochastic parallel gradient algorithm. The phase modulators on lithium niobate crystals are controlled by a multichannel electronic unit with a microcontroller processing signals in real time. Dynamic phase locking of the laser system with a bandwidth of 14 kHz is demonstrated; the phasing time is 3 – 4 ms.
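The stochastic parallel gradient update itself is compact enough to sketch. The toy below phases a seven-channel array by maximizing the combined on-axis intensity; the gain, dither amplitude and iteration count are illustrative, not the settings of the reported hardware.

```python
# Toy stochastic parallel gradient (SPGD) phasing of a seven-channel array;
# gain, dither amplitude and iteration count are illustrative, not the
# settings of the reported hardware.

import cmath, random

def combined_intensity(phases):
    s = sum(cmath.exp(1j * p) for p in phases)
    return abs(s) ** 2                     # metric to maximize (on-axis intensity)

def spgd(n=7, iters=3000, delta=0.1, gain=0.5, seed=1):
    random.seed(seed)
    phases = [random.uniform(-3.0, 3.0) for _ in range(n)]
    for _ in range(iters):
        d = [delta * random.choice((-1.0, 1.0)) for _ in range(n)]
        jp = combined_intensity([p + q for p, q in zip(phases, d)])
        jm = combined_intensity([p - q for p, q in zip(phases, d)])
        # all channels updated in parallel from just two metric measurements
        phases = [p + gain * (jp - jm) * q for p, q in zip(phases, d)]
    return combined_intensity(phases) / n ** 2   # 1.0 = perfectly phase locked

efficiency = spgd()
```

The appeal of SPGD in hardware is visible here: the controller needs only a single scalar metric (one photodetector), not per-channel phase measurements, yet it drives all channels toward alignment simultaneously.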
Two-dimensional thermal modeling of power monolithic microwave integrated circuits (MMIC's)
NASA Technical Reports Server (NTRS)
Fan, Mark S.; Christou, Aris; Pecht, Michael G.
1992-01-01
Numerical simulations of the two-dimensional temperature distributions for a typical GaAs MMIC circuit are conducted, aiming at understanding the heat conduction process of the circuit chip and providing temperature information for device reliability analysis. The method used is to solve the two-dimensional heat conduction equation with a control-volume-based finite difference scheme. In particular, the effects of the power dissipation and the ambient temperature are examined, and the criterion for the worst operating environment is discussed in terms of the allowed highest device junction temperature.
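A generic sketch of the control-volume finite-difference idea on a uniform grid, hedged: the grid size, source strength and boundary temperature below are illustrative placeholders, not the GaAs MMIC geometry from the paper. Each interior node is relaxed to the balance of its four neighbours plus a lumped source term.

```python
# Generic control-volume/finite-difference sketch of 2D steady heat conduction
# with Gauss-Seidel relaxation.  Grid size, source strength and boundary
# temperature are illustrative placeholders, not the GaAs MMIC geometry.

def solve_heat(n=21, ambient=300.0, q=50.0, sweeps=2000):
    T = [[ambient] * n for _ in range(n)]     # boundary rows/cols stay at ambient
    src = n // 2                              # dissipating device at the centre
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                s = q if (i == src and j == src) else 0.0
                # node balance: average of four neighbours plus lumped source
                T[i][j] = 0.25 * (T[i-1][j] + T[i+1][j] + T[i][j-1] + T[i][j+1] + s)
    return T

T = solve_heat()
peak = T[10][10]        # hottest node, i.e. the junction at the heat source
```

The converged field gives exactly the quantity of interest in the abstract: the peak (junction) temperature rise over ambient for a given dissipated power.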
Adaptive vibration control of structures under earthquakes
NASA Astrophysics Data System (ADS)
Lew, Jiann-Shiun; Juang, Jer-Nan; Loh, Chin-Hsiung
2017-04-01
techniques, for structural vibration suppression under earthquakes. Various control strategies have been developed to protect structures from natural hazards and improve the comfort of occupants in buildings. However, there has been little development of adaptive building control with the integration of real-time system identification and control design. Generalized predictive control, which combines the process of real-time system identification and the process of predictive control design, has received widespread acceptance and has been successfully applied to various test-beds. This paper presents a formulation of the predictive control scheme for adaptive vibration control of structures under earthquakes. Comprehensive simulations are performed to demonstrate and validate the proposed adaptive control technique for earthquake-induced vibration of a building.
Chemical vapor deposition fluid flow simulation modelling tool
NASA Technical Reports Server (NTRS)
Bullister, Edward T.
1992-01-01
Accurate numerical simulation of chemical vapor deposition (CVD) processes requires a general-purpose computational fluid dynamics package combined with specialized capabilities for high-temperature chemistry. In this report, we describe the implementation of these specialized capabilities in the spectral element code NEKTON. The thermal expansion of the gases involved is shown to be accurately approximated by the low-Mach-number perturbation expansion of the incompressible Navier-Stokes equations. The radiative heat transfer between multiple interacting radiating surfaces is shown to be tractable using the method of Gebhart. The disparate rates of reaction and diffusion in CVD processes are calculated via a point-implicit time integration scheme. We demonstrate the above capabilities on prototypical CVD applications.
Calculating with light using a chip-scale all-optical abacus.
Feldmann, J; Stegmaier, M; Gruhler, N; Ríos, C; Bhaskaran, H; Wright, C D; Pernice, W H P
2017-11-02
Machines that simultaneously process and store multistate data at one and the same location can provide a new class of fast, powerful and efficient general-purpose computers. We demonstrate the central element of an all-optical calculator, a photonic abacus, which provides multistate compute-and-store operation by integrating functional phase-change materials with nanophotonic chips. With picosecond optical pulses we perform the fundamental arithmetic operations of addition, subtraction, multiplication, and division, including a carryover into multiple cells. This basic processing unit is embedded into a scalable phase-change photonic network and addressed optically through a two-pulse random access scheme. Our framework provides first steps towards light-based non-von Neumann arithmetic.
NASA Astrophysics Data System (ADS)
Diaz, Manuel A.; Solovchuk, Maxim A.; Sheu, Tony W. H.
2018-06-01
A nonlinear system of partial differential equations capable of describing the nonlinear propagation and attenuation of finite amplitude perturbations in thermoviscous media is presented. This system constitutes a full nonlinear wave model that has been formulated in conservation form. Initially, this model is investigated analytically in the inviscid limit, where it is found that the resulting flux function fulfills the Lax-Wendroff theorem and that the scheme can match the solutions of the Westervelt and Burgers equations numerically. Here, high-order numerical descriptions of strongly nonlinear wave propagation become of great interest. To that end, we consider finite difference formulations of the weighted essentially non-oscillatory (WENO) schemes associated with explicit strong stability preserving Runge-Kutta (SSP-RK) time integration methods. Although this strategy is known to be computationally demanding, it is found to be effective when implemented on graphical processing units (GPUs). As we consider wave propagation in unbounded domains, perfectly matched layers (PML) have also been considered in this work. The proposed system model is validated and illustrated using one- and two-dimensional benchmark test cases proposed in the literature for nonlinear acoustic propagation in homogeneous thermoviscous media.
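The WENO reconstruction is too long to reproduce here, but the SSP-RK time integration it is paired with can be sketched. The example below is a much-simplified stand-in for the paper's solver: SSP-RK3 in Shu-Osher form, applied to linear advection with a first-order upwind flux instead of a WENO flux. Writing the stages as convex combinations of forward-Euler steps is what preserves the strong stability of the underlying scheme.

```python
# Much-simplified stand-in for the paper's solver: SSP-RK3 time integration
# (Shu-Osher form) with a first-order upwind flux on linear advection
# u_t + c*u_x = 0 over a periodic domain.  WENO reconstruction is omitted.

import math

def rhs(u, c, dx):
    n = len(u)
    return [-c * (u[i] - u[i - 1]) / dx for i in range(n)]   # upwind, c > 0

def ssp_rk3_step(u, dt, c, dx):
    # convex combinations of forward-Euler stages preserve strong stability
    u1 = [a + dt * r for a, r in zip(u, rhs(u, c, dx))]
    u2 = [0.75 * a + 0.25 * (b + dt * r) for a, b, r in zip(u, u1, rhs(u1, c, dx))]
    return [a / 3 + 2.0 / 3 * (b + dt * r) for a, b, r in zip(u, u2, rhs(u2, c, dx))]

n, c = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                      # CFL number 0.5
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
for _ in range(200):                   # 200 steps of dt = one advection period
    u = ssp_rk3_step(u, dt, c, dx)
```

Because the forward-Euler building block is monotone at this CFL number, the composed RK3 step introduces no new extrema; the diffusive upwind flux merely damps the wave, which is exactly the property a WENO flux would improve upon.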
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2018-03-01
In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We have used an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the scheme are provided, and compare favorably to those of the existing crypto-compression system. The proposed method has been found to be friendly to digital/optical implementation, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
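The confusion step can be sketched generically: a Henon-map orbit, seeded by a key, is ranked to produce a reproducible permutation of rows. The parameters a = 1.4, b = 0.3 are the classical chaotic ones; the key (x0, y0) and the data below are illustrative (in the paper the initial condition is derived from the image itself).

```python
# Hedged sketch of the confusion step: rank a Henon-map orbit to obtain a
# key-dependent permutation.  a = 1.4, b = 0.3 are the classical parameters;
# the key (x0, y0) and the data are illustrative, not the paper's values.

def henon_permutation(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    x, y = x0, y0
    samples = []
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        samples.append((x, i))
    return [i for _, i in sorted(samples)]   # ranking defines the permutation

perm = henon_permutation(8)
rows = list("ABCDEFGH")
shuffled = [rows[p] for p in perm]           # confusion: rows permuted

inv = [0] * len(perm)                        # same key regenerates the inverse
for pos, p in enumerate(perm):
    inv[p] = pos
recovered = [shuffled[inv[i]] for i in range(len(rows))]
```

Because the orbit is deterministic, anyone holding the key can regenerate the same permutation and invert it, while the chaotic sensitivity to (x0, y0) makes the permutation appear random to anyone without it.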
Stabilized linear semi-implicit schemes for the nonlocal Cahn-Hilliard equation
NASA Astrophysics Data System (ADS)
Du, Qiang; Ju, Lili; Li, Xiao; Qiao, Zhonghua
2018-06-01
Compared with the well-known classic Cahn-Hilliard equation, the nonlocal Cahn-Hilliard equation is equipped with a nonlocal diffusion operator and can describe more practical phenomena for modeling phase transitions of microstructures in materials. On the other hand, it evidently brings more computational costs in numerical simulations, thus efficient and accurate time integration schemes are highly desired. In this paper, we propose two energy-stable linear semi-implicit methods with first and second order temporal accuracies respectively for solving the nonlocal Cahn-Hilliard equation. The temporal discretization is done by using the stabilization technique with the nonlocal diffusion term treated implicitly, while the spatial discretization is carried out by the Fourier collocation method with FFT-based fast implementations. The energy stabilities are rigorously established for both methods in the fully discrete sense. Numerical experiments are conducted for a typical case involving Gaussian kernels. We test the temporal convergence rates of the proposed schemes and make a comparison of the nonlocal phase transition process with the corresponding local one. In addition, long-time simulations of the coarsening dynamics are also performed to predict the power law of the energy decay.
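The first-order stabilized semi-implicit step combined with Fourier collocation can be sketched as follows. For brevity this uses the *local* Laplacian as a stand-in for the nonlocal diffusion operator, so it is only an illustration of the construction, not the paper's scheme; the stabilization constant `S` and all parameter values are illustrative.

```python
import numpy as np

def ch_step(u, dt, eps=0.05, S=2.0):
    """First-order stabilized semi-implicit step for the (local)
    Cahn-Hilliard equation u_t = lap(u^3 - u) - eps^2 lap^2 u on a
    periodic interval. The nonlinear term is explicit; the stabilizing
    term S*(u^{n+1} - u^n) and the biharmonic term are implicit, which
    is diagonal in Fourier space."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers
    k2 = k * k
    f_hat = np.fft.fft(u**3 - u)                     # explicit nonlinearity
    u_hat = np.fft.fft(u)
    num = u_hat - dt * k2 * (f_hat - S * u_hat)
    den = 1.0 + dt * S * k2 + dt * eps**2 * k2 * k2
    return np.real(np.fft.ifft(num / den))
```

Because the k = 0 mode is untouched, the step conserves mass exactly, as a Cahn-Hilliard discretization should.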
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ji-Young; Hong, Song-You; Sunny Lim, Kyo-Sun
The sensitivity of a cumulus parameterization scheme (CPS) to a representation of precipitation production is examined. To do this, the parameter that determines the fraction of cloud condensate converted to precipitation in the simplified Arakawa–Schubert (SAS) convection scheme is modified following the results from a cloud-resolving simulation. While the original conversion parameter is assumed to be constant, the revised parameter includes a temperature dependency above the freezing level, which leads to less production of frozen precipitating condensate with height. The revised CPS has been evaluated for a heavy rainfall event over Korea as well as medium-range forecasts using the Global/Regional Integrated Model system (GRIMs). The inefficient conversion of cloud condensate to convective precipitation at colder temperatures generally leads to a decrease in precipitation, especially in the category of heavy rainfall. The resultant increase of detrained moisture induces moistening and cooling at the top of clouds. A statistical evaluation of the medium-range forecasts with the revised precipitation conversion parameter shows an overall improvement of the forecast skill in precipitation and large-scale fields, indicating the importance of a more realistic representation of microphysical processes in CPSs.
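The revised parameter can be pictured as a conversion fraction that is constant at and below the freezing level and decays with colder temperatures above it. The functional form and every constant below are assumptions for illustration only, not the values used in the revised SAS scheme:

```python
def conversion_parameter(T, c0=0.002, beta=0.07, c_min=0.0002, T_freeze=273.15):
    """Illustrative temperature-dependent precipitation conversion
    parameter (units and constants hypothetical): constant below the
    freezing level, decreasing linearly with colder temperature above
    it, and floored at c_min, so less frozen precipitating condensate
    is produced aloft."""
    if T >= T_freeze:
        return c0
    return max(c_min, c0 * (1.0 - beta * (T_freeze - T)))
```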
Skjånes, Kari; Lindblad, Peter; Muller, Jiri
2007-10-01
Many areas of algae technology have developed over the last decades, and there is an established market for products derived from algae, dominated by health food and aquaculture. In addition, interest in active biomolecules from algae is increasing rapidly. The need for CO₂ management, in particular capture and storage, is currently an important technological, economic and global political issue and will continue to be so until alternative energy sources and energy carriers diminish the need for fossil fuels. This review summarizes in an integrated manner different technologies for use of algae, demonstrating the possibility of combining different areas of algae technology to capture CO₂ and using the obtained algal biomass for various industrial applications, thus bringing added value to the capturing and storage processes. Furthermore, we emphasize the use of algae in a novel biological process which produces H₂ directly from solar energy in contrast to the conventional CO₂ neutral biological methods. This biological process is a part of the proposed integrated CO₂ management scheme.
Image processing using Gallium Arsenide (GaAs) technology
NASA Technical Reports Server (NTRS)
Miller, Warner H.
1989-01-01
The need to increase the information return from space-borne imaging systems has increased in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to the increased efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state-of-the-art of onboard processing was recognized and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also a description is given of the GaAs integrated circuit chip set which will demonstrate that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
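DPCM with a non-uniform quantizer works by predicting each sample from the previously *reconstructed* sample and transmitting only the quantized prediction residual, so the decoder can track the encoder exactly. A minimal sketch, assuming a mu-law-style companded quantizer and a previous-sample predictor (the actual quantizer tables of the MLA design are not specified in the abstract):

```python
import numpy as np

def mu_law_quantize(e, mu=255.0, emax=255.0, levels=16):
    # Non-uniform quantization: compress, quantize uniformly, expand.
    s = np.sign(e)
    c = np.log1p(mu * np.abs(e) / emax) / np.log1p(mu)
    q = np.round(c * (levels - 1)) / (levels - 1)
    return s * emax * np.expm1(q * np.log1p(mu)) / mu

def dpcm_encode(x):
    # Predict from the reconstructed signal, not the original, so the
    # decoder (which only sees residuals) stays in lockstep.
    recon = np.empty(len(x))
    residuals = np.empty(len(x))
    pred = 0.0
    for i, s in enumerate(x):
        e = mu_law_quantize(s - pred)
        residuals[i] = e
        recon[i] = pred + e
        pred = recon[i]
    return residuals, recon

def dpcm_decode(residuals):
    out = np.empty(len(residuals))
    pred = 0.0
    for i, e in enumerate(residuals):
        out[i] = pred + e
        pred = out[i]
    return out
```

The key invariant: the decoder output equals the encoder's internal reconstruction, so quantization error never accumulates beyond the per-sample bound.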
Ramos Tercero, E A; Sforza, E; Morandini, M; Bertucco, A
2014-02-01
The capability to grow microalgae in nonsterilized wastewater is essential for an application of this technology in an actual industrial process. Batch experiments were carried out with the species in nonsterilized urban wastewater from local treatment plants to measure both the algal growth and the nutrient consumption. Chlorella protothecoides showed a high specific growth rate (about 1 day(-1)), and no effects of bacterial contamination were observed. Then, this microalga was grown in a continuous photobioreactor with CO₂-air aeration in order to verify the feasibility of an integrated process for nutrient removal from real wastewaters. Different residence times were tested, and biomass productivity and nutrients removal were measured. A maximum of microalgae productivity was found at around 0.8 day of residence time in agreement with theoretical expectation in the case of light-limited cultures. In addition, N-NH₄ and P-PO₄ removal rates were determined in order to model the kinetics of nutrient uptake. Results from batch and continuous experiments were used to propose an integrated process scheme of wastewater treatment at industrial scale including a section with C. protothecoides.
Delpiano, J; Pizarro, L; Peddie, C J; Jones, M L; Griffin, L D; Collinson, L M
2018-04-26
Integrated array tomography combines fluorescence and electron imaging of ultrathin sections in one microscope, and enables accurate high-resolution correlation of fluorescent proteins to cell organelles and membranes. Large numbers of serial sections can be imaged sequentially to produce aligned volumes from both imaging modalities, thus producing enormous amounts of data that must be handled and processed using novel techniques. Here, we present a scheme for automated detection of fluorescent cells within thin resin sections, which could then be used to drive automated electron image acquisition from target regions via 'smart tracking'. The aim of this work is to aid in optimization of the data acquisition process through automation, freeing the operator to work on other tasks and speeding up the process, while reducing data rates by only acquiring images from regions of interest. This new method is shown to be robust against noise and able to deal with regions of low fluorescence. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
Satellite Radiothermovision on Synoptic and Climatically Significant Scales
NASA Astrophysics Data System (ADS)
Ermakov, D. M.; Sharkov, E. A.; Chernushich, A. P.
2017-12-01
This paper is focused on the development of a methodological basis for the authors' approach to the processing of large volumes of satellite radiothermal data, which is known as satellite radiothermovision. A closed scheme for calculating the latent heat flux (and other integral characteristics of the dynamics of geophysical fields) through arbitrary contours (boundaries) has been constructed and mathematically described. The opportunity for working with static, as well as movable and deformable boundaries of arbitrary shape, has been provided. The computational scheme was tested using the example of calculations of the atmospheric advection of the latent heat from the North Atlantic to the Arctic in 2014. Preliminary analysis of the results showed a high potential of the approach when applying it to the study of a wide range of synoptic and climatically significant atmospheric processes of the Earth. Some areas for further development of the satellite radiothermovision approach are briefly discussed. It is noted that expanding the analysis of the available satellite data to as much data as possible is of considerable importance. Among the immediate prospects is the analysis of large arrays of data already accumulated and processed in terms of the satellite radiothermovision ideology, which are partially presented and continuously updated on a specialized geoportal.
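The core numerical primitive of such a scheme, integrating a vector field's flux through an arbitrary closed contour, can be sketched with a midpoint rule over a polygonal boundary. This is only a toy stand-in for the paper's computational scheme (which also handles moving and deformable boundaries):

```python
def flux_through_polygon(vx, vy, verts):
    """Net outward flux of the 2-D field (vx, vy) through a closed
    counter-clockwise polygonal contour, midpoint rule per edge.
    The edge-length factor is absorbed into the unnormalized normal."""
    total = 0.0
    n = len(verts)
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        mx, my = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        # Outward normal of a CCW edge (dx, dy) is (dy, -dx).
        nx, ny = (y1 - y0), -(x1 - x0)
        total += vx(mx, my) * nx + vy(mx, my) * ny
    return total
```

As a check, a uniform field has zero net flux through any closed contour, while the field (x, y) (divergence 2) gives flux 2 × area by the divergence theorem.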
OML: optical maskless lithography for economic design prototyping and small-volume production
NASA Astrophysics Data System (ADS)
Sandstrom, Tor; Bleeker, Arno; Hintersteiner, Jason; Troost, Kars; Freyer, Jorge; van der Mast, Karel
2004-05-01
The business case for Maskless Lithography is more compelling than ever before, due to more critical processes, rising mask costs and shorter product cycles. The economics of Maskless Lithography gives a crossover volume from Maskless to mask-based lithography at surprisingly many wafers per mask for surprisingly few wafers per hour throughput. Also, small-volume production will in many cases be more economical with Maskless Lithography, even when compared to "shuttle" schemes, reticles with multiple layers, etc. The full benefit of Maskless Lithography is only achievable by duplicating processes that are compatible with volume production processes on conventional scanners. This can be accomplished by the integration of pattern generators based on spatial light modulator technology with state-of-the-art optical scanner systems. This paper reports on the system design of an Optical Maskless Scanner in development by ASML and Micronic: small-field optics with high demagnification, variable NA and illumination schemes, spatial light modulators with millions of MEMS mirrors on CMOS drivers, a data path with a sustained data flow of more than 250 GPixels per second, stitching of sub-fields to scanner fields, and rasterization and writing strategies for throughput and good image fidelity. Predicted lithographic performance based on image simulations is also shown.
Boundary integral equation analysis for suspension of spheres in Stokes flow
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Veerapaneni, Shravan
2018-06-01
We show that the standard boundary integral operators, defined on the unit sphere, for the Stokes equations diagonalize on a specific set of vector spherical harmonics and provide formulas for their spectra. We also derive analytical expressions for evaluating the operators away from the boundary. When two particles are located close to each other, we use a truncated series expansion to compute the hydrodynamic interaction. On the other hand, we use the standard spectrally accurate quadrature scheme to evaluate smooth integrals on the far-field, and accelerate the resulting discrete sums using the fast multipole method (FMM). We employ this discretization scheme to analyze several boundary integral formulations of interest including those arising in porous media flow, active matter and magneto-hydrodynamics of rigid particles. We provide numerical results verifying the accuracy and scaling of their evaluation.
NASA Astrophysics Data System (ADS)
Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe
2017-12-01
A Godunov's type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth-order are investigated, with focus on their resolvability in terms of number of mesh points required to resolve a given wavelength accurately. Afterwards, in the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify locally its numerical flux in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to warrant numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. 
In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.
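The local recentering described above amounts to blending an upwind and a centered numerical flux with a smooth switch driven by the local grid Reynolds number: dissipative upwinding where the grid Reynolds number is high, low-dissipation centering in well-resolved vortex-dominated regions. A hedged sketch (the switch shape, threshold `re_c` and exponent `p` are illustrative assumptions, not the paper's analytical switch function):

```python
def blended_flux(f_upwind, f_centered, re_h, re_c=100.0, p=2.0):
    """Convex blend of an upwind and a centered numerical flux.
    theta -> 1 (full upwind) when the local grid Reynolds number re_h
    is large, theta -> 0 (pure centered) when it is small, so the
    recentering stays numerically stable."""
    theta = min(1.0, (re_h / re_c) ** p)
    return theta * f_upwind + (1.0 - theta) * f_centered
```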
NASA Astrophysics Data System (ADS)
Widyaningrum, E.; Gorte, B. G. H.
2017-05-01
LiDAR data acquisition is recognized as one of the fastest solutions to provide basis data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressively advancing technology, Geographic Information Systems (GIS) open possibilities for automatic processing and analysis of geospatial data. Considering further needs of spatial data sharing and integration, the one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for the base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.
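Completeness, correctness and quality are standard per-object measures derived from the confusion-matrix counts of true positives (TP), false positives (FP) and false negatives (FN) in map extraction studies; a minimal sketch:

```python
def map_quality_metrics(tp, fp, fn):
    """Per-object map accuracy measures:
    completeness = TP / (TP + FN)   (producer's accuracy)
    correctness  = TP / (TP + FP)   (user's accuracy)
    quality      = TP / (TP + FP + FN)."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality
```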
NASA Astrophysics Data System (ADS)
Bruder, Friedrich-Karl; Fäcke, Thomas; Grote, Fabian; Hagen, Rainer; Hönel, Dennis; Koch, Eberhard; Rewitz, Christian; Walze, Günther; Wewer, Brita
2017-03-01
Volume Holographic Optical Elements (vHOEs) gained wide attention as optical combiners for the use in augmented and virtual reality (AR and VR, respectively) consumer electronics and automotive head-up display applications. The unique characteristics of these diffractive grating structures - being lightweight, thin and flat - make them perfectly suitable for use in integrated optical components like spectacle lenses and car windshields. While being transparent in Off-Bragg condition, they provide full color capability and adjustable diffraction efficiency. The instant developing photopolymer Bayfol® HX film provides an ideal technology platform to optimize the performance of vHOEs in a wide range of applications. Important for any commercialization are simple and robust mass production schemes. In this paper, we present an efficient and easy to control one-beam recording scheme to copy a so-called master vHOE in a step-and-repeat process. In this contact-copy scheme, Bayfol® HX film is laminated to a master stack before being exposed by a scanning laser line. Subsequently, the film is delaminated in a controlled fashion and bleached. We explain working principles of the one-beam copy concept and discuss the mechanical construction of the installed vHOE replication line. Moreover, we treat aspects like master design, effects of vibration and suppression of noise gratings. Furthermore, digital vHOEs are introduced as master holograms. They enable new ways of optical design and paths to large scale vHOEs.
Resource Management Scheme Based on Ubiquitous Data Analysis
Lee, Heung Ki; Jung, Jaehee
2014-01-01
Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
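The core idea of sizing a pregenerated process pool from predicted load can be sketched with a simple rate predictor and Little's law. The EWMA predictor, its smoothing factor and the safety margin below are illustrative assumptions; the paper derives its prediction from web log mining:

```python
import math

def predict_rate(observed_rates, alpha=0.5):
    # Exponentially weighted moving average over per-interval request
    # counts taken from the web server log.
    est = observed_rates[0]
    for r in observed_rates[1:]:
        est = alpha * r + (1.0 - alpha) * est
    return est

def pool_size(observed_rates, service_time=0.2, safety=1.2, floor=1):
    """Number of web processes to pregenerate: predicted request rate
    times mean service time (Little's law) with a safety margin, so
    the pool tracks demand without wasting memory on idle processes."""
    return max(floor, math.ceil(predict_rate(observed_rates) * service_time * safety))
```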
A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application
Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang
2018-01-01
Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
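The general idea of an adaptive robust Kalman update, down-weighting measurements whose innovations look anomalous, can be sketched in the scalar case. This illustrates the principle only; the ARKF and hybrid-correction scheme in the paper are more elaborate, and the chi-square gate value below is an assumption:

```python
def robust_kf_update(x, P, z, H=1.0, R=1.0, chi2_gate=3.84):
    """Scalar Kalman measurement update with innovation-based
    robustness: if the normalized innovation squared exceeds a
    chi-square gate, the measurement noise R is inflated, shrinking
    the gain so outlying measurements are down-weighted."""
    nu = z - H * x                 # innovation
    S = H * P * H + R              # innovation covariance
    nis = nu * nu / S              # normalized innovation squared
    if nis > chi2_gate:
        R = R * (nis / chi2_gate)  # inflate R for suspect measurements
        S = H * P * H + R
    K = P * H / S                  # Kalman gain
    return x + K * nu, (1.0 - K * H) * P
```

With a plausible measurement the update behaves like a standard KF; with an outlier the state correction is much smaller than the non-robust one.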
A phase-based stereo vision system-on-a-chip.
Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia
2007-02-01
A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase warping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
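Phase-based disparity estimation recovers depth from the phase difference of bandpass (e.g. Gabor) filter responses in the two views: d = Δφ / k for a filter tuned to spatial frequency k. A one-dimensional toy version of the idea (filter parameters are illustrative; the FPGA pipeline above is of course far more involved):

```python
import numpy as np

def phase_disparity(left, right, k, sigma=8.0):
    """Estimate disparity at the signal centre from the phase
    difference of complex Gabor responses tuned to spatial frequency
    k (rad/pixel): d = delta_phi / k."""
    xs = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    gabor = np.exp(-xs**2 / (2 * sigma**2)) * np.exp(1j * k * xs)
    rl = np.convolve(left, gabor, mode="same")
    rr = np.convolve(right, gabor, mode="same")
    c = left.size // 2
    dphi = np.angle(rl[c] * np.conj(rr[c]))  # wrapped phase difference
    return dphi / k

# Recover a known sub-pixel-capable shift between two views.
x = np.arange(256, dtype=float)
k, shift = 0.5, 2.0
left = np.cos(k * x)
right = np.cos(k * (x - shift))   # right view shifted by 2 pixels
d = phase_disparity(left, right, k)
```

Because disparity comes from a continuous phase, the estimate is naturally sub-pixel, which is the property the FPGA system exploits.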
Improving TOGAF ADM 9.1 Migration Planning Phase by ITIL V3 Service Transition
NASA Astrophysics Data System (ADS)
Hanum Harani, Nisa; Akhmad Arman, Arry; Maulana Awangga, Rolly
2018-04-01
Planning a business transformation that involves technology requires a transition and migration planning process, of which planning the system migration activity is the most important part. The migration process includes complex elements such as business re-engineering, transition scheme mapping, data transformation, application development, and individual involvement through computer and trial interaction. TOGAF ADM is a framework and method for enterprise architecture implementation. TOGAF ADM provides a manual for architecture and migration planning. The planning includes an implementation solution, in this case an IT solution, but once the solution moves into IT operational planning, TOGAF cannot handle it. This paper presents a new framework model detailing the transition process by integrating TOGAF and ITIL. We evaluated our model in a field study at a private university.
Development of highly accurate approximate scheme for computing the charge transfer integral
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pershin, Anton; Szalay, Péter G.
The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
NASA Astrophysics Data System (ADS)
Steen, S. E.; McNab, S. J.; Sekaric, L.; Babich, I.; Patel, J.; Bucchignano, J.; Rooks, M.; Fried, D. M.; Topol, A. W.; Brancaccio, J. R.; Yu, R.; Hergenrother, J. M.; Doyle, J. P.; Nunes, R.; Viswanathan, R. G.; Purushothaman, S.; Rothwell, M. B.
2005-05-01
Semiconductor process development teams are faced with increasing process and integration complexity while the time between lithographic capability and volume production has remained more or less constant over the last decade. Lithography tools have often gated the volume checkpoint of a new device node on the ITRS roadmap. The processes have to be redeveloped after the tooling capability for the new groundrule is obtained since straight scaling is no longer sufficient. In certain cases the time window that the process development teams have is actually decreasing. In the extreme, some forecasts are showing that by the time the 45nm technology node is scheduled for volume production, the tooling vendors will just begin shipping the tools required for this technology node. To address this time pressure, IBM has implemented a hybrid-lithography strategy that marries the advantages of optical lithography (high throughput) with electron beam direct write lithography (high resolution and alignment capability). This hybrid-lithography scheme allows for the timely development of semiconductor processes for the 32nm node, and beyond. In this paper we will describe how hybrid lithography has enabled early process integration and device learning and how IBM applied e-beam & optical hybrid lithography to create the world's smallest working SRAM cell.
A rapid boundary integral equation technique for protein electrostatics
NASA Astrophysics Data System (ADS)
Grandison, Scott; Penfold, Robert; Vanden-Broeck, Jean-Marc
2007-06-01
A new boundary integral formulation is proposed for the solution of electrostatic field problems involving piecewise uniform dielectric continua. Direct Coulomb contributions to the total potential are treated exactly and Green's theorem is applied only to the residual reaction field generated by surface polarisation charge induced at dielectric boundaries. The implementation shows significantly improved numerical stability over alternative schemes involving the total field or its surface normal derivatives. Although strictly respecting the electrostatic boundary conditions, the partitioned scheme does introduce a jump artefact at the interface. Comparison against analytic results in canonical geometries, however, demonstrates that simple interpolation near the boundary is a cheap and effective way to circumvent this characteristic in typical applications. The new scheme is tested in a naive model to successfully predict the ground state orientation of biomolecular aggregates comprising the soybean storage protein, glycinin.
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind computing capacities of PEs are not much different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on the complex networks shows the synchronizability and scalability regardless of the kinds of PEs. The EW scheme never shows the synchronizability for the random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme is made to show the synchronizability. In contrast, EW scheme never shows the synchronizability for any configuration of PEs of the second kind.
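The conservative update rule behind the KPZ scheme is simple: a processing element may advance its local virtual time only if it is not ahead of either neighbour. A minimal ring-topology sketch with nonidentical PEs (modeling capacity as the rate of an exponential time increment is an illustrative choice):

```python
import random

def pdes_kpz_sweep(times, rng, capacities):
    """One parallel update sweep of the conservative ('KPZ') PDES
    scheme on a ring: PE i advances only if its local virtual time is
    a local minimum w.r.t. its two neighbours; the increment is an
    exponential variate whose rate is the PE's capacity. Returns the
    fraction of PEs that advanced (the utilization of this sweep)."""
    n = len(times)
    old = times[:]            # snapshot: all PEs decide simultaneously
    advanced = 0
    for i in range(n):
        if old[i] <= old[i - 1] and old[i] <= old[(i + 1) % n]:
            times[i] += rng.expovariate(capacities[i])
            advanced += 1
    return advanced / n
```

At least the globally slowest PE always satisfies the condition, so the utilization never drops to zero and the simulation cannot deadlock.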
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that are then integrated into an image registration framework, is presented. A marked-point-process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aimed at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Maurer, S A; Kussmann, J; Ochsenfeld, C
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by reformulating the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply use GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.
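The cubic scaling rests on the standard Laplace-transform identity that removes the orbital-energy denominator (i, j occupied; a, b virtual orbitals), replacing the integral by a short numerical quadrature with points and weights (t_α, w_α):

```latex
\frac{1}{\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j}
  = \int_0^\infty e^{-(\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j)\,t}\,dt
  \approx \sum_{\alpha=1}^{n_t} w_\alpha\,
    e^{-(\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j)\,t_\alpha}
```

The identity holds because the denominator is positive for a finite HOMO-LUMO gap; the exponentials then factorize over orbital indices, which is what permits the atomic-orbital reformulation and the sparse contractions mentioned above.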
Parareal algorithms with local time-integrators for time fractional differential equations
NASA Astrophysics Data System (ADS)
Wu, Shu-Lin; Zhou, Tao
2018-04-01
Designing parareal algorithms for time-fractional differential equations is challenging because of the history effect of the fractional operator: a direct extension of the classical parareal method to such equations leads to unbalanced computational time across processes. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, auxiliary variables are introduced to localize the fractional operator. To this end, we propose a new strategy for the coarse-grid correction in which the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
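For reference, the classical parareal iteration (for a non-fractional ODE) can be sketched as follows; `f_coarse` and `f_fine` are hypothetical propagators over one time window supplied by the caller, and the paper's fractional-operator localization is not reproduced here.

```python
def parareal(f_coarse, f_fine, u0, n_windows, iters):
    """Classical parareal: a serial coarse prediction followed by iterative
    corrections U[n+1] = G(U[n]) + F(U_old[n]) - G(U_old[n]), where the
    expensive fine propagations F(.) over each window are mutually
    independent and hence parallelisable."""
    U = [u0]
    for _ in range(n_windows):
        U.append(f_coarse(U[-1]))                      # serial predictor
    for _ in range(iters):
        F = [f_fine(U[n]) for n in range(n_windows)]   # parallel in practice
        G_old = [f_coarse(U[n]) for n in range(n_windows)]
        U_new = [u0]
        for n in range(n_windows):
            U_new.append(f_coarse(U_new[-1]) + F[n] - G_old[n])
        U = U_new
    return U
```

After k iterations the first k window boundaries are exact, so with iters = n_windows the scheme reproduces the serial fine solution; the practical speed-up comes from stopping after far fewer iterations.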
Collusion-aware privacy-preserving range query in tiered wireless sensor networks.
Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping
2014-12-11
Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals.
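One common way to hide values while still supporting server-side range matching is bucketization: sensors tag encrypted readings with a coarse bucket index, and the sink translates a range query into a set of bucket tags. This is a generic sketch under assumed names, not the paper's encoding scheme; the keyed hash below merely stands in for a real cipher.

```python
import hashlib

def bucket_id(value, edges):
    """Index of the bucket containing `value` (edges sorted ascending)."""
    for i, e in enumerate(edges):
        if value < e:
            return i
    return len(edges)

def encode_readings(readings, edges, key):
    """Sensor side: each reading is stored as (bucket tag, ciphertext);
    the keyed hash is a placeholder for a real cipher."""
    return [(bucket_id(v, edges),
             hashlib.sha256(f"{key}|{v}".encode()).hexdigest())
            for v in readings]

def query_buckets(lo, hi, edges):
    """Sink side: translate the range [lo, hi] into bucket tags, so the
    storage node can match items without learning plaintext values."""
    return set(range(bucket_id(lo, edges), bucket_id(hi, edges) + 1))
```

The storage node returns every item whose tag falls in the queried tag set; the sink decrypts and discards false positives, trading some communication overhead for privacy.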
Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme
Priya, R. Lakshmi; Sadasivam, V.
2015-01-01
Providing authentication and integrity for medical images is a challenge, and this work proposes a new blind, fragile, region-based lossless reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least-significant-bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of the compressed region of interest (ROI) for recovery of the ROI, as reported in the literature. Experiments were carried out to evaluate the performance of the scheme, and the assessment reveals that the ROI is extracted intact and that the PSNR values obtained indicate that the presented scheme offers strong protection for health imagery. PMID:26649328
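The reversible-LSB idea at the core of such schemes can be sketched minimally: the original least significant bits are saved as part of the (here hypothetical) payload so the cover pixels can be restored bit-exactly. This illustrates only reversible LSB embedding, not the paper's full ROI/hash/signature construction.

```python
def embed_lsb(pixels, bits):
    """Embed watermark bits in the LSBs of `pixels`; also return the
    original LSBs, which are needed for exact (reversible) recovery."""
    saved = [p & 1 for p in pixels[:len(bits)]]
    marked = list(pixels)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b
    return marked, saved

def extract_lsb(marked, n_bits, saved):
    """Recover the watermark bits and restore the original pixels exactly."""
    bits = [marked[i] & 1 for i in range(n_bits)]
    restored = list(marked)
    for i, b in enumerate(saved):
        restored[i] = (restored[i] & ~1) | b
    return bits, restored
```

In a full scheme the saved LSBs would themselves be compressed and embedded alongside a hash of the image, so that tampering is detectable and the cover is recoverable without side information.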
Neufeld, E; Chavannes, N; Samaras, T; Kuster, N
2007-08-07
The modeling of thermal effects, often based on the Pennes bioheat equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated against an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it yields increasingly accurate solutions as the grid resolution is increased.
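The flux-scaling idea can be illustrated with a toy factor derived from the local surface normal; the specific factor below (the magnitude of the unit normal's component along the face axis) is an assumption for illustration, not necessarily the paper's expression.

```python
import math

def conformal_flux_factor(normal, face_axis):
    """Toy conformal correction: scale the flux through an axis-aligned
    grid face (face_axis = 0, 1, or 2) by the magnitude of the matching
    component of the local surface unit normal.  Faces aligned with the
    true boundary orientation keep the full flux; tangential faces are
    suppressed, reducing staircasing error."""
    mag = math.sqrt(sum(c * c for c in normal))
    return abs(normal[face_axis]) / mag if mag else 1.0
```

A face whose axis is parallel to the surface normal gets factor 1, while a face tangential to the surface gets factor 0, smoothly interpolating for oblique boundaries.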
Factorized Runge-Kutta-Chebyshev Methods
NASA Astrophysics Data System (ADS)
O'Sullivan, Stephen
2017-05-01
The second-order extended-stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and are easily parallelised for large-scale problems on distributed architectures. Maintaining seven digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability at acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than that of RKL2 schemes. Extension of FRKC methods to fourth order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
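The "ordered sequence of forward Euler steps with complex stepsizes" can be illustrated on a linear problem: substeps taken with conjugate stepsize pairs compose to a real stability polynomial. The pair tau = (1 ± i)/2 used below reproduces the second-order polynomial 1 + z + z²/2; the actual FRKC2 sequences are much longer and ordered for extended internal stability, so this is only a two-stage sketch.

```python
def euler_complex_sequence(f, u0, h, taus):
    """Advance u' = f(u) by one macro step h as an ordered sequence of
    forward Euler substeps with stepsizes h*tau_j.  When the (generally
    complex) tau_j occur in conjugate pairs, the composite update has
    real coefficients for linear f, even though intermediates are
    complex."""
    u = complex(u0)
    for tau in taus:
        u = u + h * tau * f(u)
    return u
```

For f(u) = -u, h = 0.1 and taus = [(1+1j)/2, (1-1j)/2], the result equals the stability polynomial evaluated at z = -0.1, i.e. 1 - 0.1 + 0.005 = 0.905, with vanishing imaginary part.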
A Secure ECC-based RFID Mutual Authentication Protocol to Enhance Patient Medication Safety.
Jin, Chunhua; Xu, Chunxiang; Zhang, Xiaojun; Li, Fagen
2016-01-01
Patient medication safety is an important issue in patient medication systems. To prevent medication errors, hospitals need to integrate Radio Frequency Identification (RFID) technology into automated patient medication systems. Based on RFID technology, such systems can provide medical evidence for patients' prescriptions, medicine doses, and so on. Because it provides mutual authentication between the medication server and the tag, an RFID authentication scheme is the best choice for automated patient medication systems. In this paper, we present an RFID mutual authentication scheme based on elliptic curve cryptography (ECC) to enhance patient medication safety. Our scheme achieves the required security properties and withstands various attacks that defeat existing schemes. In addition, it offers better performance in terms of computational cost and communication overhead. The proposed scheme is therefore well suited to patient medication systems.
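The elliptic-curve core of such protocols can be sketched with the standard teaching curve y² = x³ + 2x + 2 over F₁₇ (generator G = (5, 1), group order 19): server and tag each combine their secret scalar with the other party's public point and arrive at the same shared point, which can then key the mutual challenge-response messages. This is an insecure, illustrative sketch with assumed names, not the paper's protocol.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17; G = (5, 1) has order 19.
# Teaching parameters only -- far too small for real security.
P_MOD, A = 17, 2
G = (5, 1)

def _inv(x):
    return pow(x % P_MOD, P_MOD - 2, P_MOD)   # Fermat inverse (P_MOD prime)

def ec_add(p, q):
    """Elliptic-curve group law; None is the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * _inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * _inv(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def shared_point(my_secret, their_public):
    """ECDH-style combination: both parties compute the same point,
    which can then key the mutual challenge-response exchange."""
    return ec_mul(my_secret, their_public)
```

The security of the real scheme rests on the elliptic-curve discrete logarithm problem on a standardized curve, plus the freshness of per-session nonces in the challenge-response messages.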
An improved lambda-scheme for one-dimensional flows
NASA Technical Reports Server (NTRS)
Moretti, G.; Dipiano, M. T.
1983-01-01
A code for the calculation of one-dimensional flows is presented, which combines a simple and efficient version of the lambda-scheme with tracking of discontinuities. The latter is needed to identify points where minor departures from the basic integration scheme are applied to prevent the infiltration of numerical errors. Such tracking is obtained via a systematic application of Boolean algebra and is therefore very efficient. Fifteen examples are presented and discussed in detail. The results are exceptionally good: all discontinuities are captured within one mesh interval.
Electromechanical Displacement Detection With an On-Chip High Electron Mobility Transistor Amplifier
NASA Astrophysics Data System (ADS)
Oda, Yasuhiko; Onomitsu, Koji; Kometani, Reo; Warisawa, Shin-ichi; Ishihara, Sunao; Yamaguchi, Hiroshi
2011-06-01
We developed a highly sensitive displacement detection scheme for a GaAs-based electromechanical resonator using an integrated high electron mobility transistor (HEMT). The piezoelectric voltage generated by the vibration of the resonator is applied to the gate of the HEMT, resulting in on-chip amplification of the signal voltage. This detection scheme achieves a displacement sensitivity of ~9 pm·Hz^(-1/2), which is among the highest of on-chip, purely electrical displacement detection schemes at room temperature.
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time-stepping a 2D linear advection problem discretised using a spectral/hp element technique, for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that restricts the maximum time step which can be taken while ensuring numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance, and provide general guidelines on how best to achieve the minimal execution time for a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, than has typically been appreciated in studies of this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
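The cost trade-off (CFL-limited step size versus per-step work) can be sketched with a simple operation count; the Courant limits used in the example below are assumed for illustration, not the paper's measured values.

```python
import math

def advection_cost(a, dx, T, courant_limit, evals_per_step):
    """Number of RHS evaluations needed to integrate 1D advection
    (speed a, spacing dx) to time T, taking the largest stable step
    dt = courant_limit * dx / a.  Multistep schemes are cheap per step
    but CFL-restricted; multi-stage schemes pay more evaluations per
    step but may allow a larger Courant number."""
    dt = courant_limit * dx / a
    return math.ceil(T / dt) * evals_per_step
```

For instance, with dx = 0.01 and unit speed, a one-evaluation scheme limited to Courant 0.5 needs 200 evaluations to reach T = 1, while a four-stage scheme stable at Courant 1.0 needs 400; accuracy requirements, which this count ignores, can reverse the ranking, which is the tension the abstract explores.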
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur. Copyright
NASA Astrophysics Data System (ADS)
Raring, James W.
The proliferation of the internet has fueled the explosive growth of telecommunications over the past three decades. As a result, the demand for communication systems providing increased bandwidth and flexibility at lower cost continues to rise. Lightwave communication systems meet these demands. The integration of multiple optoelectronic components onto a single chip could revolutionize the photonics industry. Photonic integrated circuits (PIC) provide the potential for cost reduction, decreased loss, decreased power consumption, and drastic space savings over conventional fiber optic communication systems comprised of discrete components. For optimal performance, each component within the PIC may require a unique epitaxial layer structure, band-gap energy, and/or waveguide architecture. Conventional integration methods facilitating such flexibility are increasingly complex and often result in decreased device yield, driving fabrication costs upward. It is this trade-off between performance and device yield that has hindered the scaling of photonic circuits. This dissertation presents high-functionality PICs operating at 10 and 40 Gb/s fabricated using novel integration technologies based on a robust quantum-well-intermixing (QWI) method and metal organic chemical vapor deposition (MOCVD) regrowth. We optimize the QWI process for the integration of high-performance quantum well electroabsorption modulators (QW-EAM) with sampled-grating (SG) DBR lasers to demonstrate the first widely-tunable negative chirp 10 and 40 Gb/s EAM based transmitters. Alone, QWI does not afford the integration of high-performance semiconductor optical amplifiers (SOA) and photodetectors with the transmitters. To overcome this limitation, we have developed a novel high-flexibility integration scheme combining MOCVD regrowth with QWI to merge low optical confinement factor SOAs and 40 Gb/s uni-traveling carrier (UTC) photodiodes on the same chip as the QW-EAM based transmitters. 
These high-saturation power receiver structures represent the state-of-the-art technologies for even discrete components. Using the novel integration technology, we present the first widely-tunable single-chip device capable of transmit and receive functionality at 40 Gb/s. This device monolithically integrates tunable lasers, EAMs, SOAs, and photodetectors with performance that rivals optimized discrete components. The high-flexibility integration scheme requires only simple blanket regrowth steps and thus breaks the performance versus yield trade-off plaguing conventional fabrication techniques employed for high-functionality PICs.