Control of Transitional and Turbulent Flows Using Plasma-Based Actuators
2006-06-01
by means of asymmetric dielectric-barrier-discharge (DBD) actuators is presented. The flow fields are simulated employing an extensively validated...effective use of DBD devices. As a consequence, meaningful computations require the use of three-dimensional large-eddy simulation approaches capable of...counter-flow DBD actuator is shown to provide an effective on-demand tripping device. This property is exploited for the suppression of laminar
Nitride-Based Materials for Flexible MEMS Tactile and Flow Sensors in Robotics
Abels, Claudio; Mastronardi, Vincenzo Mariano; Guido, Francesco; Dattoma, Tommaso; Qualtieri, Antonio; Megill, William M.; De Vittorio, Massimo; Rizzi, Francesco
2017-01-01
The response to different force load ranges and actuation at low energies is of considerable interest for applications of compliant and flexible devices undergoing large deformations. We present a review of technological platforms based on nitride materials (aluminum nitride and silicon nitride) for the microfabrication of a class of flexible micro-electro-mechanical systems. The approach exploits the material stress differences among the constituent layers of nitride-based (AlN/Mo, SixNy/Si and AlN/polyimide) mechanical elements in order to create microstructures, such as upwardly-bent cantilever beams and bowed circular membranes. Piezoresistive properties of nichrome strain gauges and direct piezoelectric properties of aluminum nitride can be exploited for mechanical strain/stress detection. Applications in flow and tactile sensing for robotics are described. PMID:28489040
A Bayesian model for highly accelerated phase-contrast MRI.
Rich, Adam; Potter, Lee C; Jin, Ning; Ash, Joshua; Simonetti, Orlando P; Ahmad, Rizwan
2016-08-01
Phase-contrast magnetic resonance imaging is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to four-dimensional flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to phase-contrast magnetic resonance imaging. The proposed approach models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. The proposed approach is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R≤10. For SV, Pearson r≥0.99 for phantom imaging (n = 24) and r≥0.96 for prospectively accelerated in vivo imaging (n = 10) for R≤10. The proposed approach enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to four-dimensional flow imaging, where higher acceleration may be possible due to additional redundancy. Magn Reson Med 76:689-701, 2016. © 2015 Wiley Periodicals, Inc.
Real-time updating of the flood frequency distribution through data assimilation
NASA Astrophysics Data System (ADS)
Aguilar, Cristina; Montanari, Alberto; Polo, María-José
2017-07-01
We explore the memory properties of catchments for predicting the likelihood of floods based on observations of average flows in pre-flood seasons. Our approach assumes that flood formation is driven by the superimposition of short- and long-term perturbations. The former is given by the short-term meteorological forcing leading to infiltration and/or saturation excess, while the latter originates from higher-than-usual storage in the catchment. To exploit the above sensitivity to long-term perturbations, a meta-Gaussian model and a data assimilation approach are implemented for updating the flood frequency distribution a season in advance. Accordingly, the peak flow in the flood season is predicted in probabilistic terms by exploiting its dependence on the average flow in the antecedent seasons. We focus on the Po River at Pontelagoscuro and the Danube River at Bratislava. We found that the shape of the flood frequency distribution is noticeably impacted by higher-than-usual flows occurring up to several months earlier. The proposed technique may allow one to reduce the uncertainty associated with the estimation of flood frequency.
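The seasonal updating described above can be sketched with a meta-Gaussian (normal-scores) dependence model: both flows are mapped to standard normal scores, and the peak flow's distribution is conditioned on the observed pre-season flow. The code below is an illustrative sketch only; the synthetic gamma-distributed flows and the example stage value of 350 are assumptions, not data from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic record: pre-flood-season average flow q and seasonal peak flow p,
# positively correlated (a stand-in for real gauge data).
n = 200
q = rng.gamma(shape=4.0, scale=50.0, size=n)
p = 2.0 * q + rng.gamma(shape=2.0, scale=80.0, size=n)

# Normal-score (meta-Gaussian) transform via empirical ranks.
def normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

zq, zp = normal_scores(q), normal_scores(p)
rho = np.corrcoef(zq, zp)[0, 1]

# Conditional law of the peak's normal score given this year's observed
# pre-season flow: N(rho * zq_obs, 1 - rho^2).
zq_obs = stats.norm.ppf(stats.percentileofscore(q, 350.0) / 100.0)
cond_mean, cond_sd = rho * zq_obs, np.sqrt(1.0 - rho**2)

# Updated probability that the coming peak exceeds its unconditional median
# (normal score z = 0), given the higher-than-usual pre-season flow:
p_exceed_median = 1.0 - stats.norm.cdf((0.0 - cond_mean) / cond_sd)
print(round(rho, 2), round(p_exceed_median, 2))
```

A higher-than-usual pre-season flow shifts the conditional distribution upward, so the exceedance probability of the median rises above 0.5, which is the qualitative effect the abstract reports.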
Exploiting similarity in turbulent shear flows for turbulence modeling
NASA Technical Reports Server (NTRS)
Robinson, David F.; Harris, Julius E.; Hassan, H. A.
1992-01-01
It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.
A Bayesian Model for Highly Accelerated Phase-Contrast MRI
Rich, Adam; Potter, Lee C.; Jin, Ning; Ash, Joshua; Simonetti, Orlando P.; Ahmad, Rizwan
2015-01-01
Purpose: Phase-contrast magnetic resonance imaging (PC-MRI) is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to 4D flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to PC-MRI. Theory and Methods: ReVEAL models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. Results: ReVEAL is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R ≤ 10. For SV, Pearson r ≥ 0.996 for phantom imaging (n = 24) and r ≥ 0.956 for prospectively accelerated in vivo imaging (n = 10) for R ≤ 10. Conclusion: ReVEAL enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to 4D flow imaging, where higher acceleration may be possible due to additional redundancy. PMID:26444911
On the derivation of flow rating curves in data-scarce environments
NASA Astrophysics Data System (ADS)
Manfreda, Salvatore
2018-07-01
River monitoring is a critical issue for hydrological modelling that relies strongly on the use of flow rating curves (FRCs). In most cases, these functions are derived by least-squares fitting, which usually leads to good performance indices even when based on a limited range of data that especially lacks high-flow observations. In this context, cross-section geometry is a controlling factor which is not fully exploited in classical approaches. In fact, river discharge is obtained as the product of two factors: 1) the area of the wetted cross-section and 2) the cross-sectionally averaged velocity. Both factors can be expressed as a function of the river stage, defining a viable alternative in the derivation of FRCs. This makes it possible to exploit information about cross-section geometry, limiting, at least partially, the uncertainty in the extrapolation of discharge at higher flow values. Numerical analyses and field data confirm the reliability of the proposed procedure for the derivation of FRCs.
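The contrast between a classical single power-law rating curve and the geometry-based product of wetted area and mean velocity can be sketched numerically. Everything below (trapezoidal section, width, side slope, velocity law) is a hypothetical illustration, not data from the study:

```python
import numpy as np

# Hypothetical trapezoidal cross-section: bottom width B, side slope s, so the
# wetted area is A(h) = (B + s*h) * h.
B, s = 10.0, 5.0
h = np.linspace(0.5, 3.0, 12)        # gauged stages (m), low flows only

v = 0.8 * h**0.6                     # assumed power-law mean velocity (m/s)
Q = (B + s * h) * h * v              # discharge = wetted area x mean velocity

# Classical FRC: single power law Q = a * h^b fitted in log space.
b_cl, log_a = np.polyfit(np.log(h), np.log(Q), 1)
a_cl = np.exp(log_a)

# Geometry-based alternative: fit only the velocity law v = c * h^m and reuse
# the known cross-section geometry when extrapolating.
m, log_c = np.polyfit(np.log(h), np.log(v), 1)
c = np.exp(log_c)

h_hi = 6.0                           # high stage, outside the gauged range
Q_true = (B + s * h_hi) * h_hi * 0.8 * h_hi**0.6
Q_classical = a_cl * h_hi**b_cl
Q_geometry = (B + s * h_hi) * h_hi * (c * h_hi**m)
print(round(Q_classical, 1), round(Q_geometry, 1), round(Q_true, 1))
```

Because the velocity law is fitted on the same low-flow range as the classical curve, the difference at high stage comes entirely from exploiting the known cross-section geometry, which is the point the abstract makes about extrapolation uncertainty.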
On event-based optical flow detection
Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko
2015-01-01
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high dynamic range, and sparse sensing. This stands in contrast to whole image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection ranging from gradient-based methods over plane-fitting to filter-based methods and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion related activations. PMID:25941470
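The plane-fitting idea mentioned above can be sketched as follows: for a locally translating edge, event timestamps lie on a plane t = a·x + b·y + c, and the flow velocity follows from the plane gradient as v = (a, b)/(a² + b²). The synthetic event cloud and jitter level below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
vx, vy = 2.0, 1.0                      # ground-truth flow (px/ms), assumed
speed2 = vx**2 + vy**2

# Synthetic address-event cloud: an edge with normal along (vx, vy) sweeps the
# sensor; pixel (x, y) fires when the edge passes it, i.e. at
# t = (x*vx + y*vy) / |v|^2, plus timestamp jitter.
x = rng.uniform(0, 64, 300)
y = rng.uniform(0, 64, 300)
t = (x * vx + y * vy) / speed2 + rng.normal(0.0, 0.01, 300)

# Local plane fit t = a*x + b*y + c by least squares.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)

# Velocity from the plane gradient: v = (a, b) / (a^2 + b^2).
g2 = a**2 + b**2
vx_hat, vy_hat = a / g2, b / g2
print(round(vx_hat, 2), round(vy_hat, 2))
```

The fit recovers the imposed velocity because the event cloud of a translating edge really is planar in (x, y, t); the sparsity of AER data is what makes this local geometric structure attractive compared with dense gradient methods.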
A multisyringe flow-based system for kinetic-catalytic determination of cobalt(II).
Chaparro, Laura; Ferrer, Laura; Leal, Luz; Cerdà, Víctor
2015-02-01
A kinetic-catalytic method for cobalt determination based on the catalytic effect of cobalt(II) on the oxidative coupling of 1,2-dihydroxyanthraquinone (alizarin) was automated exploiting multisyringe flow injection analysis (MSFIA). The proposed method was performed at pH 9.2, resulting in a discoloration process in the presence of hydrogen peroxide. The fixed-time approach was employed for analytical signal measurement. Spectrophotometric detection was performed at 465 nm exploiting a liquid waveguide capillary cell (LWCC) with 1 m optical path length. The optimization was carried out by a multivariate approach, reaching critical values of 124 µmol L⁻¹ and 0.22 mol L⁻¹ for alizarin and hydrogen peroxide, respectively, and a reagent temperature of 67 °C. A sample volume of 150 µL was used, allowing a sampling rate of 30 h⁻¹. Under optimal conditions, the calibration curve was linear in the range of 1-200 µg L⁻¹ Co, achieving a DL of 0.3 µg L⁻¹ Co. The repeatability, expressed as relative standard deviation (RSD), was lower than 1%. The proposed analytical procedure was applied to the determination of cobalt in cobalt gluconate and different forms of vitamin B12, cyanocobalamin and hydroxocobalamin, with successful results showing recoveries around 95%. Copyright © 2014 Elsevier B.V. All rights reserved.
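The fixed-time approach reduces to a linear calibration of the absorbance change over a fixed interval against catalyst concentration, with the detection limit taken as 3.3·σ(blank)/slope. The numbers below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical fixed-time calibration: discoloration measured at 465 nm over a
# fixed interval, for Co(II) standards in ug/L (made-up signal values).
conc = np.array([1, 25, 50, 100, 150, 200], dtype=float)
delta_A = np.array([0.004, 0.101, 0.199, 0.402, 0.598, 0.801])

slope, intercept = np.polyfit(conc, delta_A, 1)

# Detection limit from blank noise: DL = 3.3 * sigma_blank / slope.
sigma_blank = 0.0004          # assumed blank standard deviation
DL = 3.3 * sigma_blank / slope

# Quantify an unknown sample from its fixed-time signal.
unknown = (0.30 - intercept) / slope
print(round(DL, 2), round(unknown, 1))
```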
Fast incorporation of optical flow into active polygons.
Unal, Gozde; Krim, Hamid; Yezzi, Anthony
2005-06-01
In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using an optical flow and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need to add ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The technique's greater robustness and speed, due to its reduced number of parameters, are additional appealing features.
Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor
NASA Astrophysics Data System (ADS)
Nagy, J.; Kelly, K.
2013-09-01
Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.
A Variational Approach to Video Registration with Subspace Constraints.
Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes
2013-01-01
This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images, which yields significant improvements in registration. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground truth optical flow for evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state-of-the-art optical flow and dense non-rigid registration algorithms.
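The low-rank trajectory idea can be sketched as a truncated SVD of the stacked 2D trajectory matrix; projecting noisy tracks onto the leading singular vectors plays the role of the trajectory regularizer described above. A synthetic sketch, with the rank and noise level assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
F, P, R = 30, 40, 3    # frames, tracked points, basis rank (assumed)

# Ground-truth trajectories: each point's displacement over time is a linear
# combination of R shared temporal basis functions (the low-rank assumption).
basis = rng.normal(size=(2 * F, R))          # stacked x/y temporal basis
coeff = rng.normal(size=(R, P))              # per-point coefficients
W = basis @ coeff                            # (2F x P) trajectory matrix

W_noisy = W + rng.normal(0.0, 0.3, W.shape)  # noisy raw optical-flow tracks

# Truncated SVD of the noisy tracks gives the motion basis; projecting onto
# it regularizes the trajectories.
U, s_vals, Vt = np.linalg.svd(W_noisy, full_matrices=False)
W_proj = U[:, :R] * s_vals[:R] @ Vt[:R, :]

err_raw = np.linalg.norm(W_noisy - W)
err_proj = np.linalg.norm(W_proj - W)
print(err_proj < err_raw)
```

The projection discards the noise components lying outside the rank-R motion subspace, which is why the subspace constraint yields temporally consistent flow; the paper enforces this as a soft penalty inside a variational energy rather than a hard projection.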
Micromixer based on viscoelastic flow instability at low Reynolds number.
Lam, Y C; Gan, H Y; Nguyen, N T; Lie, H
2009-03-30
We exploited the viscoelasticity of biocompatible dilute polymeric solutions, namely, dilute poly(ethylene oxide) solutions, to significantly enhance mixing in microfluidic devices at a very small Reynolds number, i.e., Re ≈ 0.023, but large Peclet and elasticity numbers. With an abrupt contraction microgeometry (8:1 contraction ratio), two different dilute poly(ethylene oxide) solutions were successfully mixed with a short flow length at a relatively fast mixing time of <10 μs. Microparticle image velocimetry was employed in our investigations to characterize the flow fields. The increase in velocity fluctuation with an increase in flow rate and Deborah number indicates the increase in viscoelastic flow instability. Mixing efficiency was characterized by fluorescent concentration measurements. Our results showed that enhanced mixing can be achieved through viscoelastic flow instability under situations where molecular-diffusion and inertia effects are negligible. This approach bypasses the laminar flow limitation, usually associated with a low Reynolds number, which is not conducive to mixing.
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
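The largest order value (LOV) decoding and the flow-shop makespan recurrence can be sketched compactly; the 3-job, 2-machine instance below is hypothetical:

```python
import numpy as np

def lov_decode(x):
    """Largest order value rule: the job with the largest component is
    scheduled first, and so on (argsort in descending order)."""
    return np.argsort(-np.asarray(x), kind="stable")

def makespan(perm, proc):
    """Permutation flow-shop makespan; proc[j][m] = time of job j on machine m."""
    n_machines = proc.shape[1]
    finish = np.zeros(n_machines)
    for j in perm:
        for m in range(n_machines):
            # A job starts on machine m when both the machine is free and the
            # job has finished on the previous machine.
            start = finish[m] if m == 0 else max(finish[m], finish[m - 1])
            finish[m] = start + proc[j, m]
    return finish[-1]

# Hypothetical 3-job, 2-machine instance.
proc = np.array([[3, 2],
                 [1, 4],
                 [2, 2]])
x = [0.1, 0.9, 0.4]           # a continuous vector sampled from the EDA model
perm = lov_decode(x)          # decoded job permutation
print(perm, makespan(perm, proc))
```

In the full algorithm, vectors like `x` are sampled from the mixed Gaussian/Cauchy model, decoded this way, evaluated by makespan, and refined by the two local search procedures; this sketch covers only the decoding and evaluation steps.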
Tanyimboh, Tiku T; Seyoum, Alemtsehay G
2016-12-01
This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is investigated also. The results reveal the optimization algorithm to be efficient, stable and robust. It found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended-period simulation and multiple demand categories including water loss. The least cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2% based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Reducing the pressure drag of a D-shaped bluff body using linear feedback control
NASA Astrophysics Data System (ADS)
Dalla Longa, L.; Morgans, A. S.; Dahan, J. A.
2017-12-01
The pressure drag of blunt bluff bodies is highly relevant in many practical applications, including to the aerodynamic drag of road vehicles. This paper presents theory revealing that a mean drag reduction can be achieved by manipulating wake flow fluctuations. A linear feedback control strategy then exploits this idea, targeting attenuation of the spatially integrated base (back face) pressure fluctuations. Large-eddy simulations of the flow over a D-shaped blunt bluff body are used as a test-bed for this control strategy. The flow response to synthetic jet actuation is characterised using system identification, and controller design is via shaping of the frequency response to achieve fluctuation attenuation. The designed controller successfully attenuates integrated base pressure fluctuations, increasing the time-averaged pressure on the body base by 38%. The effect on the flow field is to push the roll-up of vortices further downstream and increase the extent of the recirculation bubble. This control approach uses only body-mounted sensing/actuation and input-output model identification, meaning that it could be applied experimentally.
Hydrodynamic lift for single cell manipulation in a femtosecond laser fabricated optofluidic chip
NASA Astrophysics Data System (ADS)
Bragheri, Francesca; Osellame, Roberto
2017-08-01
Single cell sorting based either on fluorescence or on mechanical properties has been exploited in recent years in microfluidic devices. Hydrodynamic focusing increases the efficiency of these devices by improving the matching between the region of optical analysis and that of cell flow. Here we present a very simple solution, fabricated by femtosecond laser micromachining, that exploits flow laminarity in microfluidic channels to easily lift the flowing sample to the channel portion illuminated by the optical waveguides used for single cell trapping and analysis.
Lima, Manoel J A; Reis, Boaventura F
2017-03-01
This paper describes an environmentally friendly procedure for the determination of losartan potassium (Los-K) in pharmaceuticals. The photometric method was based on the light scattering effect due to a particle suspension, formed by the reaction of Los-K with Cu(II) ions. The method was automated employing a multicommuted flow analysis approach, implemented using solenoid mini-pumps for fluid propulsion and a homemade LED-based photometer. Under the optimized experimental conditions, the procedure showed a linear relationship in the concentration range of 23.2-417.6 mg L⁻¹ (r = 0.9997, n = 6), a relative standard deviation of 1.61% (n = 10), a limit of detection (3.3σ) estimated to be 12.1 mg L⁻¹, and a sampling rate of 140 determinations per hour. Each determination consumed 12 µg of copper(II) acetate and generated 0.54 mL of waste. Copyright © 2016 Elsevier B.V. All rights reserved.
Yu, Guihua; Kushwaha, Amit; Lee, Jungkyu K; Shaqfeh, Eric S G; Bao, Zhenan
2011-01-25
DNA has been recently explored as a powerful tool for developing molecular scaffolds for making reproducible and reliable metal contacts to single organic semiconducting molecules. A critical step in the process of exploiting DNA-organic molecule-DNA (DOD) array structures is the controlled tethering and stretching of DNA molecules. Here we report the development of reproducible surface chemistry for tethering DNA molecules at tunable density and demonstrate shear flow processing as a rationally controlled approach for stretching/aligning DNA molecules of various lengths. Through enzymatic cleavage of λ-phage DNA to yield a series of DNA chains of various lengths from 17.3 μm down to 4.2 μm, we have investigated the flow/extension behavior of these tethered DNA molecules under different flow strengths in the flow-gradient plane. We performed Brownian dynamics simulations of tethered λ-DNA in shear and found that our flow-gradient-plane experimental results matched our bead-spring simulations well. The shear flow processing demonstrated in our studies represents a controllable approach for tethering and stretching DNA molecules of various lengths. Together with further metallization of DNA chains within DOD structures, this bottom-up approach can potentially enable efficient and reliable fabrication of large-scale nanoelectronic devices based on single organic molecules, therefore opening opportunities in both fundamental understanding of charge transport at the single molecular level and many exciting applications for ever-shrinking molecular circuits.
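A minimal bead-spring Brownian dynamics step for a tethered chain in shear can be sketched as follows. This is a free-draining, Hookean-spring toy model in nondimensional units (no excluded volume, no wormlike-chain force law), so it only illustrates the simulation structure, not the paper's actual model or parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

N, dt, steps = 10, 1e-3, 5000
k_spring = 1.0      # Hookean spring constant (nondimensional, assumed)
gamma_dot = 5.0     # shear rate: flow velocity ux = gamma_dot * y
kT = 1.0

r = np.zeros((N, 3))
r[:, 0] = np.arange(N) * 0.5          # beads initially along x; bead 0 tethered

for _ in range(steps):
    # Spring forces between neighbouring beads (Hookean, zero rest length).
    f = np.zeros_like(r)
    bond = r[1:] - r[:-1]
    f[:-1] += k_spring * bond
    f[1:] -= k_spring * bond
    # Euler-Maruyama step: shear advection + spring relaxation + Brownian kick.
    drift = f.copy()
    drift[:, 0] += gamma_dot * r[:, 1]
    r += drift * dt + np.sqrt(2 * kT * dt) * rng.normal(size=r.shape)
    r[0] = 0.0                         # re-impose the tether at the origin

extension = r[:, 0].max() - r[:, 0].min()
print(round(float(extension), 2))
```

The qualitative behaviour matches the experiments' logic: thermal kicks in the gradient (y) direction expose beads to faster flow, stretching the tethered chain along x, with mean extension set by the ratio of shear rate to spring relaxation rate.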
An improved optical scheme for self-mixing low-coherence flowmeters
NASA Astrophysics Data System (ADS)
Di Cecilia, Luca; Rovati, Luigi; Cattini, Stefano
2017-02-01
In this paper we present a fiber-based low-coherence self-mixing interferometer exploiting a single-arm approach to measure the flow in a pipe. The main advantages of the proposed system are the flexibility offered by the fiber-connected optical head, a greater ease of alignment, the rejection of "common-mode" vibrations, and greater stability. Thanks to the use of a low-coherence source, the proposed system investigates the velocity of the scattering particles flowing only in a fixed and well-defined region located close to the duct wall itself. The reported experimental results demonstrate that in the laminar flow regime the developed system is able to determine the flow and is quite robust to variations in scatterer concentration: increasing the scatterer concentration by a factor of about 24 reduced the sensitivity S by less than 30%.
Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice
2017-04-07
Optic flow provides visual self-motion information and is shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how effects of ground optic flow direction on posture may be enhanced by an intermittent podal contact with the ground, and reliance on the visual FoR and aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground and a control condition. We calculated center of pressure (COP) translation, and optic flow sensitivity was defined as the ratio of COP translation velocity over absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
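The visual self-motion quotient (VSQ) defined above is simply the ratio of COP translation velocity to absolute optic flow velocity; a toy computation with assumed values (trace shape, sampling rate, and flow speed are all hypothetical):

```python
import numpy as np

# Hypothetical 30 s trial sampled at 100 Hz: anterior-posterior COP trace (m)
# under approaching ground optic flow of 0.4 m/s (assumed values).
fs, T = 100, 30
t = np.arange(fs * T) / fs
cop = 0.002 * t + 0.005 * np.sin(2 * np.pi * 0.3 * t)   # slow drift + sway

optic_flow_speed = 0.4                                   # m/s, absolute
cop_velocity = (cop[-1] - cop[0]) / T                    # net translation rate

vsq = cop_velocity / optic_flow_speed
print(round(float(vsq), 4))
```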
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio
Emerging applications such as data mining, bioinformatics, knowledge discovery, social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructures grids, which generates unpredictable memory accesses. These data structures usually are large, but difficult to partition. These applications mostly are memory bandwidth bounded and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the element they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processorsmore » with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom-hand tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flow, thus also exploiting the coarser grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently from their execution time, to maximize the memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. 
A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, graph Breadth-First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.
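The dynamic port-mapping idea can be illustrated with a small cycle-based toy model (an illustrative assumption of this note, not the authors' hardware design): pending memory requests are assigned to a fixed number of ports round-robin, one request per port per cycle.

```python
# Toy model of a memory interface controller that dynamically assigns
# pending memory requests to multiple ports (round-robin, one request
# per port per cycle). Illustrative only; not the authors' design.
from collections import deque

def schedule_requests(requests, num_ports):
    """Return (cycle count, per-cycle (port, request) assignments)."""
    pending = deque(requests)
    schedule = []
    while pending:
        issued = []
        for port in range(num_ports):
            if not pending:
                break
            issued.append((port, pending.popleft()))
        schedule.append(issued)
    return len(schedule), schedule

# Five concurrent accesses, two memory ports -> three cycles.
cycles, sched = schedule_requests(["ld_a", "ld_b", "st_c", "ld_d", "ld_e"], 2)
```

With a single port the same request stream would need five cycles, which is the bandwidth gap the generated controller tries to close.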
NASA Astrophysics Data System (ADS)
Reznicek, R.
The present conference on flow visualization encompasses methods exploiting tracing particles, surface tracing methods, methods exploiting the effects of streaming fluid on passing radiation/fields, computer-aided flow visualization, and applications to fluid mechanics, aerodynamics, flow devices, shock tubes, and heat/mass transfer. Specific issues include visualizing velocity distributions by stereo photography, dark-field Fourier quasi-interferometry, speckle tomography of an open flame, a fast eye for real-time image analysis, and velocity-field determination based on flow-image analysis. Also addressed are flows around rectangular prisms with oscillating flaps at the leading edges, the tomography of aerodynamic objects, the vapor-screen technique applied to a delta-wing aircraft, flash-lamp planar imaging, IR-thermography applications in convective heat transfer, and the visualization of Marangoni effects in evaporating sessile drops.
NASA Astrophysics Data System (ADS)
Altena, Bas; Kääb, Andreas
2017-06-01
Contemporary optical remote sensing satellites, or constellations of satellites, can acquire imagery at sub-weekly or even daily timescales. These systems thus open the potential for within-season velocity estimation of glacier surfaces. State-of-the-art techniques for displacement estimation are based on matching image pairs and are thus constrained by the need for significant displacement and/or preservation of the surface over time. Consequently, such approaches cannot benefit fully from the increasing satellite revisit rates. Here, we explore an approach that is fundamentally different from image correlation or similar techniques and exploits the concept of optical flow. Our goal is to assess whether this concept could overcome the above limitations of image matching and thus give new insights into glacier flow dynamics. We implement two different methods of optical flow and test them on the SPOT5 Take5 dataset over Kronebreen, Svalbard, and over Kaskawulsh Glacier, Yukon. For Kaskawulsh Glacier we are able to extract seasonal velocity variations that temporally coincide with events of increased air temperatures. Furthermore, even for the cloudy dataset of Kronebreen, we were able to extract spatio-temporal trajectories that correlate well with measured GPS flow paths. Because the underlying concept is simple and computationally efficient due to data reduction, our methodology can easily be used for exploratory regional studies of several glaciers or for estimation of small and slow-flowing glaciers.
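The optical-flow concept can be sketched with a minimal single-window Lucas-Kanade estimator on synthetic imagery (an illustrative stand-in assuming brightness constancy; the two methods actually implemented in the study are not specified here).

```python
# Single-window Lucas-Kanade optical flow on a synthetic image pair with a
# known sub-pixel shift. Illustrative sketch of the optical-flow concept.
import numpy as np

def f(x, y):
    return np.sin(0.3 * x) + np.cos(0.2 * y)

n = 64
X, Y = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
dx_true, dy_true = 0.4, 0.2          # surface displacement between acquisitions
I1 = f(X, Y)
I2 = f(X - dx_true, Y - dy_true)     # pattern moved by (dx, dy)

Iy, Ix = np.gradient(I1)             # np.gradient returns d/drow, d/dcol
It = I2 - I1                         # brightness constancy: It ~ -(Ix*dx + Iy*dy)
A = np.column_stack([Ix.ravel(), Iy.ravel()])
b = -It.ravel()
(dx_est, dy_est), *_ = np.linalg.lstsq(A, b, rcond=None)
```

The least-squares solve recovers the imposed shift to within a few percent; in practice the estimator is run per window to build a velocity field.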
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
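The idea of a causal time-domain filter can be illustrated with a first-order exponential (low-pass) filter applied to a signal carrying resolved and "subgrid" content (a hedged sketch; the paper's actual subgrid-scale filter kernel is not reproduced here).

```python
# Causal exponential time filter d(ubar)/dt = (u - ubar)/tau, discretized
# as a running update -- a minimal stand-in for time-domain filtering.
import numpy as np

def exp_time_filter(u, dt, tau):
    alpha = dt / (tau + dt)
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    for n in range(1, len(u)):
        ubar[n] = alpha * u[n] + (1.0 - alpha) * ubar[n - 1]
    return ubar

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2001)
signal = np.sin(t) + 0.5 * rng.standard_normal(t.size)  # slow wave + fast "subgrid" noise
filtered = exp_time_filter(signal, dt=t[1] - t[0], tau=0.2)
```

The filter damps fluctuations faster than the timescale tau while passing the slow component, which is the separation a time-filtered LES formulation relies on.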
Ribeiro, David S M; Prior, João A V; Taveira, Christian J M; Mendes, José M A F S; Santos, João L M
2011-06-15
In this work, an automatic and fast miniaturized screening flow system was developed for the first time for the toxicological control of glibenclamide in beverages, with application in forensic laboratory investigations, and also for the chemical control of commercially available pharmaceutical formulations. The automatic system exploited the multipumping flow system (MPFS) concept and allowed the implementation of a new glibenclamide determination method based on fluorometric monitoring of the drug in acidic medium (λ(ex)=301 nm; λ(em)=404 nm) in the presence of an anionic surfactant (SDS), which promotes an organized micellar medium that enhances the fluorometric measurements. The developed approach assured good recoveries in the analysis of five spiked alcoholic beverages. Additionally, good agreement was verified when comparing the results obtained in the determination of glibenclamide in five commercial pharmaceutical formulations by the proposed method and by the pharmacopoeia reference procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
A physically based analytical model of flood frequency curves
NASA Astrophysics Data System (ADS)
Basso, S.; Schirmer, M.; Botter, G.
2016-09-01
Predicting magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performances do not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data scarce regions of the world.
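The empirical side of such an analysis, extracting seasonal or annual maxima from daily streamflow and assigning return periods, can be sketched with Weibull plotting positions on synthetic data (illustrative only; the paper's contribution is the analytical, physically based curve, not this empirical step).

```python
# Annual maxima and Weibull plotting-position return periods (years)
# from a synthetic daily discharge record. Illustrative sketch.
import numpy as np

def empirical_return_periods(daily_flows, days_per_year=365):
    n_years = len(daily_flows) // days_per_year
    annual = daily_flows[: n_years * days_per_year].reshape(n_years, days_per_year)
    maxima = annual.max(axis=1)
    order = np.argsort(maxima)[::-1]           # rank 1 = largest flood
    ranks = np.empty(n_years)
    ranks[order] = np.arange(1, n_years + 1)
    return maxima, (n_years + 1) / ranks       # Weibull: T = (n+1)/rank

rng = np.random.default_rng(42)
flows = rng.gamma(shape=2.0, scale=5.0, size=20 * 365)  # synthetic daily discharge
maxima, T = empirical_return_periods(flows)
```

With 20 years of record the largest observed flood gets T = 21 years, which illustrates the sample-size limitation the analytical model is designed to overcome.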
NASA Astrophysics Data System (ADS)
Buongiorno, M. F.; Silvestri, M.; Musacchio, M.
2017-12-01
In this work a complete processing chain, from the detection of the beginning of an eruption to the estimation of lava-flow temperature on active volcanoes using remote sensing data, is presented, showing results for the Mt. Etna eruption of March 2017. Early detection of a new eruption builds on the capability of geostationary, very low spatial resolution satellites (3×3 km at nadir); hot spot/lava flow evolution is derived from Sentinel-2 polar medium/high spatial resolution data (20×20 m), while surface temperature is estimated from polar medium/low spatial resolution sensors such as Landsat 8, ASTER and Sentinel-3 (from 90 m up to 1 km). This approach merges two outcomes: activities performed for monitoring purposes within INGV R&D, and the results obtained by the ESA-funded Geohazards Exploitation Platform (GEP) project, aimed at developing a shared platform providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal, multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel-2 (MSI) data, for which the algorithm used is based on NIR-SWIR bands, we exploit the MIR-TIR channels of Landsat 8, ASTER, Sentinel-3 and SEVIRI to generate the surface thermal state analysis automatically. The developed procedure produces time-series data and allows information to be extracted from each co-registered pixel, highlighting temperature variations within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).
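The NIR-SWIR hot-spot logic can be illustrated with a normalized-index sketch of the kind used for satellite hot-spot detection (the band values and the zero threshold below are illustrative assumptions, not the calibrated values of the described processing chain).

```python
# Normalized hotspot index on NIR/SWIR reflectances: hot surfaces push the
# SWIR signal above the NIR signal, driving the index positive.
# Band values and threshold are illustrative assumptions.
import numpy as np

def normalized_hotspot_index(nir, swir):
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (swir - nir) / (swir + nir)

def hotspot_mask(nir, swir, threshold=0.0):
    # threshold=0.0 is an illustrative choice, not a calibrated value
    return normalized_hotspot_index(nir, swir) > threshold

nir = np.array([[0.10, 0.12], [0.11, 0.05]])
swir = np.array([[0.08, 0.10], [0.09, 0.60]])  # bottom-right pixel: lava-like
mask = hotspot_mask(nir, swir)
```

Applied per co-registered pixel over a time series, such a mask yields the kind of thermal-anomaly maps the procedure extracts automatically.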
NASA Astrophysics Data System (ADS)
Xue, Jie; Gui, Dongwei; Lei, Jiaqiang; Sun, Huaiwei; Zeng, Fanjiang; Feng, Xinlong
2017-12-01
Agriculture and the eco-environment are increasingly competing for water. The extension of intensive farmland to ensure food security has resulted in excessive water exploitation by agriculture, which in turn has led to a lack of water supply in natural ecosystems. This paper proposes a trade-off framework to coordinate the water-use conflict between agriculture and the eco-environment, based on economic compensation for irrigation stakeholders. A hybrid Bayesian network (HBN) is developed to implement the framework, including: (a) agricultural water shortage assessments after meeting environmental flows; (b) water-use trade-off analysis between agricultural irrigation and environmental flows using the HBN; and (c) quantification of the agricultural economic compensation for different irrigation stakeholders. The constructed HBN is computed by dynamic discretization, a more robust and accurate propagation algorithm than general static discretization. A case study of the Qira oasis area in Northwest China demonstrates that the water trade-off based on economic compensation depends on the available water supply and on environmental flows at different levels. Irrigation water for grain crops should be guaranteed preferentially to ensure food security, even though compensating the irrigation of other cash crops costs more. Upgrading water-saving engineering and adopting drip irrigation in agricultural facilities after satisfying environmental flows would greatly relieve agricultural water shortage and reduce the economic compensation owed to the different irrigation stakeholders. The approach in this study can easily be applied in water-stressed areas worldwide to deal with water competition.
An Analysis of a Developing and Non-Developing Disturbance During the Predict Experiment
2015-09-25
convection. As the wave propagates primarily westwards, the flow establishes dynamic flow boundaries (a Kelvin cat’s eye) that effectively trap moist...stability, the navy will need to be effective at anticipating the vast destruction caused by tropical cyclones. A thorough understanding of 6 genesis...the most current and innovative approaches for effective tasking, collection, process- ing, exploitation, and dissemination of tropical cyclone decision
Ecosystem-based fisheries management requires a change to the selective fishing philosophy.
Zhou, Shijie; Smith, Anthony D M; Punt, André E; Richardson, Anthony J; Gibbs, Mark; Fulton, Elizabeth A; Pascoe, Sean; Bulman, Catherine; Bayliss, Peter; Sainsbury, Keith
2010-05-25
Globally, many fish species are overexploited, and many stocks have collapsed. This crisis, along with increasing concerns over flow-on effects on ecosystems, has caused a reevaluation of traditional fisheries management practices, and a new ecosystem-based fisheries management (EBFM) paradigm has emerged. As part of this approach, selective fishing is widely encouraged in the belief that nonselective fishing has many adverse impacts. In particular, incidental bycatch is seen as wasteful and a negative feature of fishing, and methods to reduce bycatch are implemented in many fisheries. However, recent advances in fishery science and ecology suggest that a selective approach may also result in undesirable impacts both to fisheries and marine ecosystems. Selective fishing applies one or more of the "6-S" selections: species, stock, size, sex, season, and space. However, selective fishing alters biodiversity, which in turn changes ecosystem functioning and may affect fisheries production, hindering rather than helping achieve the goals of EBFM. We argue here that a "balanced exploitation" approach might alleviate many of the ecological effects of fishing by avoiding intensive removal of particular components of the ecosystem, while still supporting sustainable fisheries. This concept may require reducing exploitation rates on certain target species or groups to protect vulnerable components of the ecosystem. Benefits to society could be maintained or even increased because a greater proportion of the entire suite of harvested species is used.
Raman spectroscopic instrumentation and plasmonic methods for material characterization
NASA Astrophysics Data System (ADS)
Tanaka, Kazuki
The advent of nanotechnology has led to incredible growth in how we consume, make and approach advanced materials. By exploiting nanoscale material properties, unique control of optical, thermal, mechanical, and electrical characteristics becomes possible. This thesis describes the development of a novel localized surface plasmon resonant (LSPR) color-sensitive photosensor, based on functionalization of gold nanoparticles onto titanium dioxide nanowires and sensing by a metal-semiconducting nanowire-metal photodiode structure. This LSPR photosensor has been integrated into a system that incorporates Raman spectroscopy, microfluidics, optical trapping, and sorting flow cytometry into a unique material characterization system called the microfluidic optical fiber trapping Raman sorting flow cytometer (MOFTRSFC). Raman spectroscopy is utilized as a powerful molecular characterization technique used to analyze biological, mineralogical and nanomaterial samples. To combat the inherently weak Raman signal, plasmonic methods have been applied to exploit surface enhanced Raman scattering (SERS) and localized surface plasmon resonance (LSPR), increasing Raman intensity by up to 5 orders of magnitude. The resultant MOFTRSFC system is a prototype instrument that can effectively trap, analyze, and sort micron-sized dielectric particles and biological cells. Raman spectroscopy has been presented in several modalities, including the development of a portable near-infrared Raman spectrometer and other emerging technologies.
Biomimetics: determining engineering opportunities from nature
NASA Astrophysics Data System (ADS)
Fish, Frank E.
2009-08-01
The biomimetic approach seeks to incorporate designs based on biological organisms into engineered technologies. Biomimetics can be used to engineer machines that emulate the performance of organisms, particularly in instances where the organism's performance exceeds current mechanical technology or provides new directions to solve existing problems. For biologists, an adaptationist program has allowed for the identification of novel features of organisms based on engineering principles; whereas for engineers, identification of such novel features is necessary to exploit them for biomimetic development. Adaptations (leading edge tubercles to passively modify flow and high efficiency oscillatory propulsive systems) from marine animals demonstrate potential utility in the development of biomimetic products. Nature retains a store of untouched knowledge, which would be beneficial in advancing technology.
A theoretical approach for analyzing the restabilization of wakes
NASA Astrophysics Data System (ADS)
Hill, D. C.
1992-04-01
Recently reported experimental results demonstrate that restabilization of the low-Reynolds-number flow past a circular cylinder can be achieved by the placement of a smaller cylinder in the wake of the first at particular locations. Traditional numerical procedures for modeling such phenomena are computationally expensive. An approach is presented here in which the properties of the adjoint solutions to the linearized equations of motion are exploited to map quickly the best positions for the small cylinder's placement. Comparisons with experiment and previous computations are favorable. The approach is shown to be applicable to general flows, illustrating how strongly control mechanisms that involve sources of momentum couple to unstable (or stable) modes of the system.
Cerebral capillary velocimetry based on temporal OCT speckle contrast.
Choi, Woo June; Li, Yuandong; Qin, Wan; Wang, Ruikang K
2016-12-01
We propose a new optical coherence tomography (OCT) based method to measure red blood cell (RBC) velocities of single capillaries in the cortex of rodent brain. This OCT capillary velocimetry exploits quantitative laser speckle contrast analysis to estimate speckle decorrelation rate from the measured temporal OCT speckle signals, which is related to microcirculatory flow velocity. We hypothesize that OCT signal due to sub-surface capillary flow can be treated as the speckle signal in the single scattering regime and thus its time scale of speckle fluctuations can be subjected to single scattering laser speckle contrast analysis to derive characteristic decorrelation time. To validate this hypothesis, OCT measurements are conducted on a single capillary flow phantom operating at preset velocities, in which M-mode B-frames are acquired using a high-speed OCT system. Analysis is then performed on the time-varying OCT signals extracted at the capillary flow, exhibiting a typical inverse relationship between the estimated decorrelation time and absolute RBC velocity, which is then used to deduce the capillary velocities. We apply the method to in vivo measurements of mouse brain, demonstrating that the proposed approach provides additional useful information in the quantitative assessment of capillary hemodynamics, complementary to that of OCT angiography.
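The core step, estimating a decorrelation time from temporal signal fluctuations, can be sketched on synthetic AR(1) time series standing in for temporal OCT speckle signals (an illustrative model, not the authors' processing; faster flow corresponds to faster decorrelation).

```python
# Estimate a speckle decorrelation time as the lag at which the normalized
# autocovariance first drops below 1/e. AR(1) series stand in for temporal
# OCT speckle signals at a capillary (illustrative assumption).
import numpy as np

def decorrelation_time(x, dt=1.0):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt

def ar1(rho, n, seed):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.standard_normal()
    return x

fast = ar1(0.5, 5000, seed=1)    # fast-flowing capillary: quick decorrelation
slow = ar1(0.95, 5000, seed=2)   # slow flow: long decorrelation time
tau_fast = decorrelation_time(fast)
tau_slow = decorrelation_time(slow)
```

The inverse relationship between decorrelation time and RBC velocity is then used to deduce capillary velocities, as described in the abstract.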
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan Hruska
Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this potential of small UAV-acquired still imagery. Initially, a UAV-based still-imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to the use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts' change-detection ability, a UAV-specific, GIS-based change detection system called SADI, or System for Analyzing Differences in Imagery, is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.
Quintana, José Benito; Miró, Manuel; Estela, José Manuel; Cerdà, Víctor
2006-04-15
In this paper, the third generation of flow injection analysis, also named the lab-on-valve (LOV) approach, is proposed for the first time as a front end to high-performance liquid chromatography (HPLC) for on-line solid-phase extraction (SPE) sample processing by exploiting the bead injection (BI) concept. The proposed microanalytical system based on discontinuous programmable flow features automated packing (and withdrawal after single use) of a small amount of sorbent (<5 mg) into the microconduits of the flow network and quantitative elution of sorbed species into a narrow band (150 microL of 95% MeOH). The hyphenation of multisyringe flow injection analysis (MSFIA) with BI-LOV prior to HPLC analysis is utilized for on-line postextraction treatment to ensure chemical compatibility between the eluate medium and the initial HPLC gradient conditions. This circumvents the band-broadening effect commonly observed in conventional on-line SPE-based sample processors due to the low eluting strength of the mobile phase. The potential of the novel MSFI-BI-LOV hyphenation for on-line handling of complex environmental and biological samples prior to reversed-phase chromatographic separations was assessed for the expeditious determination of five acidic pharmaceutical residues (viz., ketoprofen, naproxen, bezafibrate, diclofenac, and ibuprofen) and one metabolite (viz., salicylic acid) in surface water, urban wastewater, and urine. To this end, the copolymeric divinylbenzene-co-n-vinylpyrrolidone beads (Oasis HLB) were utilized as renewable sorptive entities in the micromachined unit. The automated analytical method features relative recovery percentages of >88%, limits of detection within the range 0.02-0.67 ng mL(-1), and coefficients of variation <11% for the column renewable mode and gives rise to a drastic reduction in operation costs ( approximately 25-fold) as compared to on-line column switching systems.
Rational Exploitation and Utilizing of Groundwater in Jiangsu Coastal Area
NASA Astrophysics Data System (ADS)
Kang, B.; Lin, X.
2017-12-01
Jiangsu coastal area is located on the southeast coast of China, where a new industrial base and an important coastal land-resources development zone of China have been established. In areas with intense human exploitation activities, regional groundwater evolution is markedly affected by human activities. To solve, at their root, the environmental geological problems caused by groundwater exploitation, we must identify the conditions forming the regional groundwater hydrodynamic field, and the impact of human activities on the evolution of the hydrodynamic field and on hydrogeochemical evolution. Based on these results, scientific management and reasonable exploitation of the regional groundwater resources can be supported. Taking the coastal area of Jiangsu as the research area, we investigate and analyze the regional hydrogeological conditions. A numerical simulation model of groundwater flow was established using hydraulic, chemical and isotopic methods, accounting for the flow conditions and the influence of the hydrodynamic field on the hydrochemical field. We predict the evolution of regional groundwater dynamics under the influence of human activities and climate change, and evaluate the influence of this evolution on the environmental geological problems caused by groundwater exploitation under various conditions. We reach the following conclusions. Three optimal groundwater exploitation schemes were established, with groundwater salinization taken as the primary control condition. A surrogate model of groundwater exploitation and water-level changes was built by the BP (back-propagation) network method, and a genetic algorithm was then used to obtain the optimization solution. The three schemes were submitted to the local water resource management authority: the first addresses the groundwater salinization problem, the second focuses on dual water supply, and the third concerns emergency water supply. This is the first time an environmental problem has been taken as a water-management objective in this coastal area.
Self-Employment among Italian Female Graduates
ERIC Educational Resources Information Center
Rosti, Luisa; Chelli, Francesco
2009-01-01
Purpose: The purpose of this paper is to investigate the gender impact of tertiary education on the probability of entering and remaining in self-employment. Design/methodology/approach: A data set on labour market flows produced by the Italian National Statistical Office is exploited by interviewing about 62,000 graduate and non-graduate…
Bacterial cell identification in differential interference contrast microscopy images.
Obara, Boguslaw; Roberts, Mark A J; Armitage, Judith P; Grau, Vicente
2013-04-23
Microscopy image segmentation lays the foundation for shape analysis, motion tracking, and classification of biological objects. Despite its importance, automated segmentation remains challenging for several widely used non-fluorescence, interference-based microscopy imaging modalities, for example differential interference contrast (DIC) microscopy, which plays an important role in modern bacterial cell biology. New advances in the field therefore require the development of tools, technologies and work-flows to extract and exploit information from interference-based imaging data so as to achieve new fundamental biological insights and understanding. We have developed and evaluated a high-throughput image analysis and processing approach to detect and characterize bacterial cells and chemotaxis proteins. Its performance was evaluated using differential interference contrast and fluorescence microscopy images of Rhodobacter sphaeroides. Results demonstrate that the proposed approach provides a fast and robust method for detection and analysis of the spatial relationship between bacterial cells and their chemotaxis proteins.
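A minimal sketch of the counting step, thresholding followed by connected-component labeling, is given below (pure NumPy/Python; the actual pipeline for DIC images is considerably more sophisticated than a global threshold).

```python
# Threshold a synthetic image and count 4-connected components via flood
# fill -- a toy stand-in for cell detection in microscopy images.
import numpy as np

def label_cells(binary):
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    rows, cols = binary.shape
    for i in range(rows):
        for j in range(cols):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    r, c = stack.pop()
                    if 0 <= r < rows and 0 <= c < cols and binary[r, c] and labels[r, c] == 0:
                        labels[r, c] = current
                        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return labels, current

img = np.zeros((8, 8))
img[1:3, 1:4] = 1.0   # first "cell"
img[5:7, 4:7] = 1.0   # second "cell"
binary = img > 0.5     # global threshold stands in for the full DIC pipeline
labels, n_cells = label_cells(binary)
```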
A Ffowcs Williams and Hawkings formulation for hydroacoustic analysis of propeller sheet cavitation
NASA Astrophysics Data System (ADS)
Testa, C.; Ianniello, S.; Salvatore, F.
2018-01-01
A novel hydroacoustic formulation for the prediction of tonal noise emitted by marine propellers in the presence of unsteady sheet cavitation is presented. The approach is based on the standard Ffowcs Williams and Hawkings equation and the use of transpiration (velocity and acceleration) terms accounting for the time evolution of the vapour cavity attached to the blade surface. Drawbacks and potentialities of the method are tested on a marine propeller operating in a nonhomogeneous onset flow, by exploiting hydrodynamic data from a potential-based panel method equipped with a sheet cavitation model and comparing the noise predictions with those obtained by an alternative numerical approach documented in the literature. It is shown that the proposed formulation yields a one-to-one correlation between emitted noise and sheet cavitation dynamics, providing accurate predictions in terms of noise magnitude and directivity.
On the statistical mechanics of the 2D stochastic Euler equation
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
The dynamics of vortices and large scale structures is qualitatively very different in two dimensional flows compared to its three dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow such as the formation of large scale coherent structures, the existence of meta-stable states and random abrupt changes in the topology of the flow. In this paper we study stochastic dynamics of the finite dimensional approximation of the 2D Euler flow based on Lie algebra su(N) which preserves all integrals of motion. In particular, we exploit rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures and explore their properties and also study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large scale structures, meta-stability and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.
3D CFD simulation of Multi-phase flow separators
NASA Astrophysics Data System (ADS)
Zhu, Zhiying
2017-10-01
During the exploitation of natural gas, some water and sand are entrained. It is preferable to separate water and sand from the natural gas to ensure favourable transportation and storage. In this study, we use CFD to analyse the performance of a multi-phase flow separator whose detailed geometrical parameters are designed in advance. The VOF model and the DPM are used here. From the CFD results, we conclude that the separation of the multi-phase flow achieves good results: no solids or water are carried out of the gas outlet. CFD simulation provides an economical and efficient approach to shed more light on the details of the flow behaviour.
An effective PSO-based memetic algorithm for flow shop scheduling.
Liu, Bo; Wang, Ling; Jin, Yi-Hui
2007-02-01
This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. 
Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed.
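Two building blocks of the PSOMA, the ranked-order-value (ROV) decoding of a continuous particle position into a job permutation and the makespan evaluation for a permutation flow shop, can be sketched as follows (a minimal illustration on a 2-job, 2-machine instance; the full algorithm adds NEH initialization, local searches, and SA).

```python
# ROV decoding and makespan evaluation for a permutation flow shop.
import numpy as np

def rov_permutation(position):
    """Ranked-order-value rule: smallest position value -> first job."""
    return list(np.argsort(position))

def makespan(perm, proc_times):
    """Completion time of the last job on the last machine.
    proc_times[j][m] = processing time of job j on machine m."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines   # last completion time on each machine
    for job in perm:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], prev) + proc_times[job][m]
    return completion[-1]

proc = [[2, 3], [4, 1]]              # 2 jobs x 2 machines
perm = rov_permutation([0.2, 0.7])   # continuous PSO position -> permutation [0, 1]
cmax = makespan(perm, proc)          # 7, versus 9 for the reverse order
```

The ROV rule lets the standard continuous PSO update run unchanged while the fitness is always evaluated on a valid job permutation.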
Development of upwind schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1987-01-01
Described are many algorithmic and computational aspects of upwind schemes and their second-order accurate formulations based on Total-Variation-Diminishing (TVD) approaches. An operational unification of the underlying first-order scheme is first presented encompassing Godunov's, Roe's, Osher's, and Split-Flux methods. For higher order versions, the preprocessing and postprocessing approaches to constructing TVD discretizations are considered. TVD formulations can be used to construct relaxation methods for unfactored implicit upwind schemes, which in turn can be exploited to construct space-marching procedures for even the unsteady Euler equations. A major part of the report describes time- and space-marching procedures for solving the Euler equations in 2-D, 3-D, Cartesian, and curvilinear coordinates. Along with many illustrative examples, several results of efficient computations on 3-D supersonic flows with subsonic pockets are presented.
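The TVD property of the underlying first-order upwind scheme can be demonstrated numerically for linear advection (a minimal sketch, not the report's higher-order formulations or Euler solvers).

```python
# First-order upwind for u_t + a u_x = 0 (a > 0) on a periodic grid; the
# total variation of the solution never increases (TVD / monotone scheme).
import numpy as np

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()   # periodic TV

def upwind_step(u, c):
    """One step with CFL number c = a*dt/dx (stable for 0 <= c <= 1)."""
    return u - c * (u - np.roll(u, 1))

u = np.where(np.arange(50) < 25, 1.0, 0.0)   # step profile, TV = 2
tv0 = total_variation(u)
for _ in range(40):
    u = upwind_step(u, 0.5)
tv1 = total_variation(u)                     # tv1 <= tv0: no new extrema
```

Higher-order TVD formulations recover sharper profiles while preserving exactly this non-increase of total variation.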
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Qi; Al-Shaer, Ehab; Chatterjee, Samrat
The Infrastructure Distributed Denial of Service (IDDoS) attacks continue to be one of the most devastating challenges facing cyber systems. The new generation of IDDoS attacks exploit the inherent weakness of cyber infrastructure, including the deterministic nature of routes, skew distribution of flows, and Internet ossification, to discover the network critical links and launch highly stealthy flooding attacks that are not observable at the victim end. In this paper, first, we propose a new metric to quantitatively measure the potential susceptibility of any arbitrary target server or domain to stealthy IDDoS attacks, and estimate the impact of such susceptibility on enterprises. Second, we develop a proactive route mutation technique to minimize the susceptibility to these attacks by dynamically changing the flow paths periodically to invalidate the adversary's knowledge about the network and avoid targeted critical links. Our proposed approach actively changes these network paths while satisfying security and quality of service requirements. We present an integrated approach of proactive route mutation that combines both infrastructure-based mutation, which relies on reconfiguration of switches and routers, and a middle-box approach that uses an overlay of end-point proxies to construct a virtual network path free of critical links to reach a destination. We implemented the proactive path mutation technique on a Software Defined Network using the OpenDaylight controller to demonstrate a feasible deployment of this approach. Our evaluation validates the correctness, effectiveness, and scalability of the proposed approaches.
Multi-GPU unsteady 2D flow simulation coupled with a state-to-state chemical kinetics
NASA Astrophysics Data System (ADS)
Tuttafesta, Michele; Pascazio, Giuseppe; Colonna, Gianpiero
2016-10-01
In this work we present a GPU version of a CFD code for high-enthalpy reacting flow, using the state-to-state approach. In supersonic and hypersonic flows, thermal and chemical non-equilibrium is one of the fundamental aspects that must be taken into account for the accurate characterization of the plasma, and state-to-state kinetics is the most accurate approach used for this kind of problem. This model consists of writing a continuity equation for the population of each vibrational level of the molecules in the mixture, determining at the same time the species densities and the distribution of the population in internal levels. An explicit scheme is employed here to integrate the governing equations, so as to exploit the GPU structure and obtain an efficient algorithm. The best performances are obtained for reacting flows in the state-to-state approach, reaching speedups of the order of 100, thanks to the use of an operator-splitting scheme for the kinetics equations.
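The operator-splitting idea can be sketched in one dimension (a hedged toy model: first-order upwind transport plus a single linear decay rate standing in for the full state-to-state kinetics):

```python
import numpy as np

def advect(n, dt, c=1.0, dx=1.0):
    # explicit first-order upwind transport, periodic boundaries
    cfl = c * dt / dx
    return n - cfl * (n - np.roll(n, 1))

def react(n, dt, k=2.0):
    # exact update of the linear kinetics dn/dt = -k*n over dt
    return n * np.exp(-k * dt)

def split_step(n, dt):
    """One Lie (first-order) operator-splitting step: transport, then
    kinetics. Splitting lets the kinetics be integrated cell-locally,
    which is what maps well onto a GPU's per-cell parallelism."""
    return react(advect(n, dt), dt)
```

In the real solver each cell carries one continuity equation per vibrational level, but the split structure is the same: the kinetics update touches only local data.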
Volumetric velocimetry for fluid flows
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Coletti, Filippo
2018-04-01
In recent years, several techniques have been introduced that are capable of extracting 3D three-component velocity fields in fluid flows. Fast-paced developments in both hardware and processing algorithms have generated a diverse set of methods, with a growing range of applications in flow diagnostics. This has been further enriched by the increasingly marked trend of hybridization, in which the differences between techniques are fading. In this review, we carry out a survey of the prominent methods, including optical techniques and approaches based on medical imaging. An overview of each is given with an example of an application from the literature, while focusing on their respective strengths and challenges. A framework for the evaluation of velocimetry performance in terms of dynamic spatial range is discussed, along with technological trends and emerging strategies to exploit 3D data. While critical challenges still exist, these observations highlight how volumetric techniques are transforming experimental fluid mechanics, and that the possibilities they offer have just begun to be explored.
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
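A position-based probabilistic model of the kind used by permutation EDAs might look like the following sketch (illustrative only; the P-EDA's actual model and update rules are not reproduced here):

```python
import numpy as np

def build_model(elite, n_jobs, smoothing=0.1):
    """Estimate P[pos, job]: the probability that `job` occupies position
    `pos`, from a set of elite permutations, with additive smoothing."""
    P = np.full((n_jobs, n_jobs), smoothing)
    for perm in elite:
        for pos, job in enumerate(perm):
            P[pos, job] += 1.0
    return P / P.sum(axis=1, keepdims=True)

def sample_permutation(P, rng):
    # sample jobs position by position, without replacement
    remaining = list(range(P.shape[0]))
    perm = []
    for pos in range(P.shape[0]):
        w = P[pos, remaining]
        job = remaining[int(rng.choice(len(remaining), p=w / w.sum()))]
        perm.append(job)
        remaining.remove(job)
    return perm
```

Sampling without replacement guarantees every draw is a feasible schedule, while the smoothing term keeps low-frequency assignments reachable and helps avoid premature convergence.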
Mori, Masanobu; Nakano, Koji; Sasaki, Masaya; Shinozaki, Haruka; Suzuki, Shiho; Okawara, Chitose; Miró, Manuel; Itabashi, Hideyuki
2016-02-01
A dynamic flow-through microcolumn extraction system based on extractant re-circulation is herein proposed as a novel analytical approach for simplification of bioaccessibility tests of trace elements in sediments. On-line metal leaching is undertaken in the format of all-injection (AI) analysis, a sequel of flow injection analysis that involves extraction under steady-state conditions. The minimum circulation times and flow rates required to determine the maximum bioaccessible pools of target metals (viz., Cu, Zn, Cd, and Pb) from lake and river sediment samples were estimated using Tessier's sequential extraction scheme and an acid single extraction test. The on-line AI method was successfully validated by mass balance studies of CRM and real sediment samples. Tessier's test in the on-line AI format was completed in one third of the extraction time (6 h versus more than 17 h for the conventional method), with better analytical precision (<9.2% versus >15% RSD) and a significant decrease in blank readouts compared with the manual batch counterpart. Copyright © 2015 Elsevier B.V. All rights reserved.
Long Penetration Mode Counterflowing Jets for Supersonic Slender Configurations - A Numerical Study
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Cheng, Gary; Chang, Chau-Layn; Zichettello, Benjamin; Bilyeu, David L.
2013-01-01
A novel approach of using counterflowing jets positioned strategically on the aircraft and exploiting their long penetration mode (LPM) of interaction for sonic-boom mitigation forms the motivation for this study. Given that most previous studies on the counterflowing LPM jet have been on blunt bodies at high supersonic or hypersonic flow conditions, exploring the feasibility of obtaining an LPM jet issuing from a slender body against low supersonic freestream conditions is the main focus of this study. Computational fluid dynamics computations of axisymmetric models (cone-cylinder and quartic geometry), of relevance to NASA's High Speed project, are carried out using the space-time conservation element and solution element (CESE) viscous flow solver with unstructured meshes. A systematic parametric study is conducted to determine the optimum combination of counterflowing jet size, mass flow rate, and nozzle geometry for obtaining LPM jets. Details from these computations will be used to assess the potential of the LPM counterflowing supersonic jet as a means of active flow control for enabling supersonic flight over land and to establish the knowledge base for possible future implementation of such technologies.
Higher Education in Non-Standard Wage Contracts
ERIC Educational Resources Information Center
Rosti, Luisa; Chelli, Francesco
2012-01-01
Purpose: The purpose of this paper is to verify whether higher education increases the likelihood of young Italian workers moving from non-standard to standard wage contracts. Design/methodology/approach: The authors exploit a data set on labour market flows, produced by the Italian National Statistical Office, by interviewing about 85,000…
Simulation of all-scale atmospheric dynamics on unstructured meshes
NASA Astrophysics Data System (ADS)
Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng
2016-10-01
The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.
Al Lawati, Haider A J; Al Mughairy, Baqia; Al Lawati, Iman; Suliman, FakhrEldin O
2018-04-30
A novel mixing approach was utilized with a highly sensitive chemiluminescence (CL) method to determine the total phenolic content (TPC) in honey samples using an acidic potassium permanganate-formaldehyde system. The mixing approach was based on exploiting the mixing efficiency of nanodroplets generated in a microfluidic platform. Careful optimization of the instrument setup and various experimental conditions was employed to obtain excellent sensitivity. The mixing efficiency of the droplets was compared with the CL signal intensity obtained using the common serpentine chip design, with both approaches using a total flow rate of 15 μl min-1; the results showed that the nanodroplets provided 600% higher CL signal intensity at this low flow rate. Using the optimum conditions, calibration equations, limits of detection (LOD) and limits of quantification (LOQ) for gallic acid (GA), caffeic acid (CA), kaempferol (KAM), quercetin (QRC) and catechin (CAT) were obtained. The LOD ranged from 6.2 ppb for CA to 11.0 ppb for QRC. Finally, the method was applied for the determination of TPC in several local and commercial honey samples. Copyright © 2018 John Wiley & Sons, Ltd.
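LOD and LOQ figures of this kind are derived from the linear calibration; a common way to compute them (an assumed ICH-style 3.3σ/10σ estimate; the paper's exact formula is not stated) is:

```python
import numpy as np

def lod_loq(conc, signal):
    """LOD = 3.3*s/b and LOQ = 10*s/b, where b is the calibration slope
    and s the residual standard deviation of the linear fit."""
    conc = np.asarray(conc, float)
    signal = np.asarray(signal, float)
    b, a = np.polyfit(conc, signal, 1)          # slope, intercept
    s = (signal - (a + b * conc)).std(ddof=2)   # residual std dev
    return 3.3 * s / b, 10.0 * s / b
```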
Exploit and ignore the consequences: A mother of planetary issues.
Moustafa, Khaled
2016-07-01
Many environmental and planetary issues are due to an exploitation strategy based on exploit, consume and ignore the consequences. As many natural and environmental resources are limited in time and space, such an exploitation approach causes significant damage on earth, in the sea and perhaps soon in space. To sustain conditions under which humans and other living species can coexist in productive and dynamic harmony with their environments, terrestrial and space exploration programs may need to be based on 'scrutinize the consequences, prepare adequate solutions and then, only then, exploit'. Otherwise, the exploitation of planetary resources may put environmental stability and sustainability at a higher risk than is currently predicted. Copyright © 2016 Elsevier B.V. All rights reserved.
Horstkotte, Burkhard; Alonso, Juan Carlos; Miró, Manuel; Cerdà, Víctor
2010-01-15
An integrated analyzer based on the multisyringe flow injection analysis approach is proposed for the automated determination of dissolved oxygen in seawater. The entire Winkler method, including precipitation of manganese(II) hydroxide, fixation of dissolved oxygen, dissolution of the oxidized manganese hydroxide precipitate, and generation of iodine and tri-iodide ion, is effected in-line within the flow network. Spectrophotometric quantification of iodine and tri-iodide at the isosbestic wavelength of 466 nm renders enhanced method reliability. The calibration function is linear up to 19 mg L-1 dissolved oxygen and an injection frequency of 17 per hour is achieved. The multisyringe system features highly satisfactory signal stability, with a repeatability of 2.2% RSD that makes it suitable for continuous determination of dissolved oxygen in seawater. Compared to the manual starch-end-point titrimetric Winkler method and earlier reported automated systems, concentrations and consumption of reagents and sample are reduced up to a hundredfold. The versatility of the multisyringe assembly was exploited in the implementation of an ancillary automatic batch-wise Winkler titrator using a single syringe of the module for accurate titration of the released iodine/tri-iodide with thiosulfate.
Global Assessment of Exploitable Surface Reservoir Storage under Climate Change
NASA Astrophysics Data System (ADS)
Liu, L.; Parkinson, S.; Gidden, M.; Byers, E.; Satoh, Y.; Riahi, K.
2016-12-01
Surface water reservoirs provide us with reliable water supply systems, hydropower generation, flood control, and recreation services. Reliable reservoirs can be robust measures for water security and can help smooth out challenging seasonal variability of river flows. Yet, reservoirs also cause flow fragmentation in rivers and can lead to flooding of upstream areas, thereby displacing existing land-uses and ecosystems. The anticipated population growth, land use and climate change in many regions globally suggest a critical need to assess the potential for appropriate reservoir capacity that can balance rising demands with long-term water security. In this research, we assessed exploitable reservoir potential under climate change and human development constraints by deriving storage-yield relationships for 235 river basins globally. The storage-yield relationships map the amount of storage capacity required to meet a given water demand based on a 30-year inflow sequence. Runoff data are simulated with an ensemble of Global Hydrological Models (GHMs) for each of five bias-corrected general circulation models (GCMs) under four climate change pathways. These data are used to define future 30-year inflows in each river basin for the period between 2010 and 2080. The calculated capacity is then combined with geographical information on environmental and human development exclusion zones to further limit the storage capacity expansion potential in each basin. We investigated the reliability of reservoir potentials across different climate change scenarios and Shared Socioeconomic Pathways (SSPs) to identify river basins where reservoir expansion will be particularly challenging. Preliminary results suggest large disparities in reservoir potential across basins: some basins have already approached their exploitable reserves, while others display abundant potential.
Exclusion zones have a significant impact on the amount of actually exploitable storage and firm yields worldwide: 30% of reservoir potential would be unavailable because of land occupation by environmental and human development constraints. Results from this study will help decision makers understand the reliability of infrastructure systems particularly sensitive to future water availability.
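A storage-yield relationship of the kind described above can be derived with the classic sequent-peak procedure (shown here as a generic sketch; the study's actual method may differ):

```python
def sequent_peak(inflows, demand):
    """Minimum reservoir storage needed to deliver a constant `demand`
    every period from the given inflow sequence (same volume units):
    track the running cumulative deficit and return its peak."""
    deficit = peak = 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)  # shortfall carried forward
        peak = max(peak, deficit)
    return peak
```

Sweeping `demand` over a range of yields and recording the required storage at each point traces out the storage-yield curve for a basin's 30-year inflow sequence.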
Ruhlandt, A; Töpperwien, M; Krenkel, M; Mokso, R; Salditt, T
2017-07-26
We present an approach towards four dimensional (4d) movies of materials, showing dynamic processes within the entire 3d structure. The method is based on tomographic reconstruction on dynamically curved paths using a motion model estimated by optical flow techniques, considerably reducing the typical motion artefacts of dynamic tomography. At the same time we exploit x-ray phase contrast based on free propagation to enhance the signal from micron scale structure recorded with illumination times down to a millisecond (ms). The concept is demonstrated by observing the burning process of a match stick in 4d, using high speed synchrotron phase contrast x-ray tomography recordings. The resulting movies reveal the structural changes of the wood cells during the combustion.
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
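The codebook-training step underlying any of these VQ schemes can be sketched with a tiny LBG/k-means loop (a generic illustration, not the authors' coder):

```python
import numpy as np

def train_codebook(vectors, k, iters=20):
    """Learn k codewords by alternating nearest-codeword assignment and
    centroid update; encoding a vector = index of its nearest codeword."""
    idx = np.linspace(0, len(vectors) - 1, k).astype(int)  # spread-out init
    code = vectors[idx].astype(float)
    for _ in range(iters):
        d2 = ((vectors[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                code[j] = vectors[assign == j].mean(axis=0)
    return code, assign
```

In a subband coder the training set would be blocks drawn from an upper subband, and the transmitted bitstream carries only the codeword indices plus the codebook.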
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
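A minimal surrogate in the same spirit can be built with Gaussian radial basis functions (an RBF interpolant standing in for the paper's regression-kriging model; all names are illustrative):

```python
import numpy as np

def fit_rbf_surrogate(X, y, eps=1.0):
    """Interpolate y = f(X) with Gaussian RBFs; the returned callable
    predicts f at new query points, replacing expensive simulator runs."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # small jitter on the diagonal keeps the kernel system well-conditioned
    w = np.linalg.solve(np.exp(-eps * d2) + 1e-10 * np.eye(len(X)), y)

    def predict(Xq):
        q2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * q2) @ w

    return predict
```

The optimizer then calls `predict` thousands of times at negligible cost, which is the source of the reported 5.5-hour versus 25-day speedup.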
NASA Astrophysics Data System (ADS)
Garrett, S. J.; Cooper, A. J.; Harris, J. H.; Özkan, M.; Segalini, A.; Thomas, P. J.
2016-01-01
We summarise results of a theoretical study investigating the distinct convective instability properties of steady boundary-layer flow over rough rotating disks. A generic roughness pattern of concentric circles with sinusoidal surface undulations in the radial direction is considered. The goal is to compare, for the first time, predictions obtained by means of two alternative, and fundamentally different, modelling approaches for surface roughness. The motivating rationale is to identify commonalities and isolate results that might potentially represent artefacts associated with the particular methodologies underlying one of the two modelling approaches. The most significant result of practical relevance is that both approaches predict overall stabilising effects on the type I instability mode of rotating-disk flow. This mode leads to transition of the rotating-disk boundary layer and, more generally, the transition of boundary layers with a cross-flow profile. Stabilisation of the type I mode means that it may be possible to exploit surface roughness for laminar-flow control in boundary layers with a cross-flow component. However, we also find differences between the two sets of model predictions, some subtle and some substantial. These will represent criteria for establishing which of the two alternative approaches is more suitable to correctly describe experimental data when these become available.
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
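The finite-difference divergence penalty can be sketched for a 2D field (a simplified analogue of the 3D ℓ1 divergence term used in the FD method; names are illustrative):

```python
import numpy as np

def divergence_2d(u, v, dx=1.0, dy=1.0):
    """Central-difference divergence du/dx + dv/dy of a 2D velocity
    field with periodic boundaries (axis 1 = x, axis 0 = y)."""
    dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2.0 * dx)
    dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2.0 * dy)
    return dudx + dvdy

def div_l1(u, v):
    # the l1 divergence penalty an iterative reconstruction would minimize
    return float(np.abs(divergence_2d(u, v)).sum())
```

In the reconstruction, this penalty is added to the data-fidelity term so that candidate velocity fields that violate incompressibility are discouraged, steering the solution toward physically plausible flow.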
NASA Astrophysics Data System (ADS)
Charogiannis, Alexandros; Denner, Fabian; van Wachem, Berend G. M.; Kalliadasis, Serafim; Markides, Christos N.
2017-12-01
We scrutinize the statistical characteristics of liquid films flowing over an inclined planar surface based on film height and velocity measurements that are recovered simultaneously by application of planar laser-induced fluorescence (PLIF) and particle tracking velocimetry (PTV), respectively. Our experiments are complemented by direct numerical simulations (DNSs) of liquid films simulated for different conditions so as to expand the parameter space of our investigation. Our statistical analysis builds upon a Reynolds-like decomposition of the time-varying flow rate that was presented in our previous research effort on falling films [Charogiannis et al., Phys. Rev. Fluids 2, 014002 (2017), 10.1103/PhysRevFluids.2.014002], and which reveals that the dimensionless ratio of the unsteady term to the mean flow rate increases linearly with the product of the coefficients of variation of the film height and bulk velocity, as well as with the ratio of the Nusselt height to the mean film height, both at the same upstream PLIF/PTV measurement location. Based on relations that are derived to describe these results, a methodology for predicting the mass-transfer capability (through the mean and standard deviation of the bulk flow speed) of these flows is developed in terms of the mean and standard deviation of the film thickness and the mean flow rate, which are considerably easier to obtain experimentally than velocity profiles. The errors associated with these predictions are estimated at ≈1.5% and 8% respectively in the experiments, and at <1% and <2% respectively in the DNSs. Beyond the generation of these relations for the prediction of important film-flow characteristics from simple flow information, the data provided can be used to design improved heat- and mass-transfer equipment, reactors, or other process operation units which exploit film flows, and also to develop and validate multiphase flow models in other physical and technological settings.
Activity-based exploitation of Full Motion Video (FMV)
NASA Astrophysics Data System (ADS)
Kant, Shashi
2012-06-01
Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst's query. Our approach utilizes a novel machine-vision-based method to index FMV, using object recognition and tracking together with event and activity detection. This approach enables FMV exploitation in real time, as well as forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.
Modelling catchment areas for secondary care providers: a case study.
Jones, Simon; Wardlaw, Jessica; Crouch, Susan; Carolan, Michelle
2011-09-01
Hospitals need to understand patient flows in an increasingly competitive health economy. New initiatives like Patient Choice and the Darzi Review further increase this demand. Essential to understanding patient flows are demographic and geographic profiles of health care service providers, known as 'catchment areas' and 'catchment populations'. This information helps Primary Care Trusts (PCTs) to review how their populations are accessing services, measure inequalities and commission services; likewise it assists Secondary Care Providers (SCPs) to measure and assess potential gains in market share, redesign services, evaluate admission thresholds and plan financial budgets. Unlike PCTs, SCPs do not operate within fixed geographic boundaries. Traditionally, SCPs have used administrative boundaries or arbitrary drive times to model catchment areas. Neither approach satisfactorily represents current patient flows. Furthermore, these techniques are time-consuming and can be challenging for healthcare managers to exploit. This paper presents three different approaches to define catchment areas, each more detailed than the previous method. The first approach 'First Past the Post' defines catchment areas by allocating a dominant SCP to each Census Output Area (OA). The SCP with the highest proportion of activity within each OA is considered the dominant SCP. The second approach 'Proportional Flow' allocates activity proportionally to each OA. This approach allows for cross-boundary flows to be captured in a catchment area. The third and final approach uses a gravity model to define a catchment area, which incorporates drive or travel time into the analysis. Comparing approaches helps healthcare providers to understand whether using more traditional and simplistic approaches to define catchment areas and populations achieves the same or similar results as complex mathematical modelling. 
This paper has demonstrated, using a case study of Manchester, that when estimating the catchment area of a planned new hospital, the extra level of detail provided by the gravity model may prove necessary. However, in virtually all other applications, the Proportional Flow method produced the optimal model for catchment populations in Manchester, based on several criteria: it produced the smallest RMS error; it addressed cross-boundary flows; the data used to create the catchment was readily available to SCPs; and it was simpler to reproduce than the gravity model method. Further work is needed to address how the Proportional Flow method can be used to reflect service redesign and handle OAs with zero or low activity. A next step should be the rolling out of the method across England and looking at further drill downs of data such as catchment by Healthcare Resource Group (HRG) rather than specialty level.
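The first two catchment definitions compared above can be sketched directly (illustrative data structures: `activity[oa][scp]` holds episode counts, `population[oa]` holds residents):

```python
def first_past_the_post(activity, population):
    """Allocate each Census Output Area (OA) wholly to its dominant
    Secondary Care Provider (SCP), i.e. the one with the most activity."""
    catchment = {}
    for oa, flows in activity.items():
        winner = max(flows, key=flows.get)
        catchment[winner] = catchment.get(winner, 0.0) + population[oa]
    return catchment

def proportional_flow(activity, population):
    """Share each OA's population among SCPs in proportion to observed
    patient flows, so cross-boundary activity is captured."""
    catchment = {}
    for oa, flows in activity.items():
        total = sum(flows.values())
        for scp, n in flows.items():
            catchment[scp] = catchment.get(scp, 0.0) + population[oa] * n / total
    return catchment
```

Note that Proportional Flow always conserves the total population across providers, whereas First Past the Post can swing an entire OA to one provider on a small margin of activity.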
Magnetic Reduced Graphene Oxide/Nickel/Platinum Nanoparticles Micromotors for Mycotoxin Analysis.
Molinero-Fernández, Águeda; Jodra, Adrián; Moreno-Guzmán, María; López, Miguel Ángel; Escarpa, Alberto
2018-05-17
Magnetic reduced graphene oxide/nickel/platinum nanoparticles (rGO/Ni/PtNPs) micromotors for mycotoxin analysis in food samples were developed for food-safety diagnosis. While the utilization of self-propelled micromotors in bioassays has led to a fundamentally new approach, mainly due to the greatly enhanced target-receptor contacts owing to their continuous movement around the sample and the associated mixing effect, herein the magnetic properties of rGO/Ni/PtNPs micromotors for mycotoxin analysis are additionally explored. The micromotor-based strategy for targeted mycotoxin biosensing focused on the accurate control of micromotor-based operations: 1) on-the-move capture of free aptamers by exploiting the adsorption (outer rGO layer) and catalytic (inner PtNPs layer) properties, and 2) stopping the micromotor flow in just 2 min by exploiting the magnetic properties (intermediate Ni layer). This strategy allowed fumonisin B1 determination with high sensitivity (limit of detection: 0.70 ng mL-1) and excellent accuracy (error: 0.05% in certified reference material and quantitative recoveries of 104±4% in beer) even in the presence of concurrent ochratoxin A (105-108±8% in wines). These results confirm the developed approach as an innovative and reliable analytical tool for food-safety monitoring, and confirm the role of micromotors as a new paradigm in analytical chemistry. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Continuous-flow separation of live and dead yeasts using reservoir-based dielectrophoresis (rDEP)
NASA Astrophysics Data System (ADS)
Patel, Saurin; Showers, Daniel; Vedantam, Pallavi; Tzeng, Tzuen-Rong; Qian, Shizhi; Xuan, Xiangchun
2012-11-01
Separating live and dead cells is critical to the diagnosis of early-stage diseases and to efficacy testing in drug screening. We develop a novel microfluidic approach to the continuous separation of yeast cells by viability inside a reservoir. It exploits the cell dielectrophoresis that is induced by the inherent electric field gradient at the reservoir-microchannel junction to selectively trap dead yeasts and continuously sort them from live ones. We term this approach reservoir-based dielectrophoresis (rDEP). The transport, focusing, and trapping of live and dead yeast cells at the reservoir-microchannel junction are studied separately by varying the DC-biased AC electric fields. These phenomena can all be reasonably predicted by a 2D numerical model. We find that the AC to DC field ratio for live yeast trapping is higher than that for dead cells because the former experience a weaker rDEP force while having a larger electrokinetic mobility. It is this difference in the AC to DC field ratio that enables the viability-based yeast cell separation. The rDEP approach has unique advantages over existing DEP-based techniques, such as occupying zero channel space and eliminating in-channel mechanical or electrical parts. Supported by NSF.
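The sign and strength of the DEP force that performs this sorting are governed by the real part of the Clausius-Mossotti factor, which differs between live and dead cells because cell death alters the effective membrane conductivity. A minimal sketch of this standard textbook factor (not the authors' 2D numerical model; all parameter values in the example are purely illustrative):

```python
def clausius_mossotti_real(eps_p, sigma_p, eps_m, sigma_m, omega):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere:
    f_CM = (e_p* - e_m*) / (e_p* + 2*e_m*), with complex permittivity
    e* = eps - j*sigma/omega.  Positive -> DEP pulls the particle toward
    the high-field region (the junction); negative -> it is repelled."""
    ep = eps_p - 1j * sigma_p / omega   # particle complex permittivity
    em = eps_m - 1j * sigma_m / omega   # medium complex permittivity
    return ((ep - em) / (ep + 2.0 * em)).real
```

At low frequency the factor is dominated by the conductivity contrast, which is the property most changed by loss of viability.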
Co-Flow Hollow Cathode Technology
NASA Technical Reports Server (NTRS)
Hofer, Richard R.; Goebel, Dan M.
2011-01-01
Hall thrusters utilize the same hollow cathode technology as ion thrusters, yet must operate at much higher mass flow rates in order to efficiently couple to the bulk plasma discharge. Higher flow rates are necessary in order to provide enough neutral collisions to transport electrons across magnetic fields so that they can reach the discharge. This higher flow rate, however, has potential life-limiting implications for the operation of the cathode. A solution to the problem involves splitting the mass flow into the hollow cathode into two streams, the internal and external flows. The internal flow is fixed and set such that the neutral pressure in the cathode allows for a high utilization of the emitter surface area. The external flow is variable depending on the flow rate through the anode of the Hall thruster, but also has a minimum in order to suppress high-energy ion generation. In the co-flow hollow cathode, the cathode assembly is mounted on the thruster centerline, inside the inner magnetic core of the thruster. An annular gas plenum is placed at the base of the cathode and propellant is fed through it to produce an azimuthally symmetric flow of gas that evenly expands around the cathode keeper. This configuration maximizes propellant utilization and is not subject to erosion processes. External gas feeds have been considered in the past for ion thruster applications, but usually in the context of eliminating high-energy ion production. This approach is adapted specifically for the Hall thruster and exploits the geometry of a Hall thruster to feed and focus the external flow without introducing significant new complexity to the thruster design.
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.
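For reference, the conventional definition whose inputs the paper's model aims to simplify is C_mu = (m_dot * U_jet) / (q_inf * S_ref), where q_inf is the freestream dynamic pressure. A minimal sketch of that standard definition (the numbers in the example are illustrative, not from the paper):

```python
def momentum_coefficient(m_dot, u_jet, rho_inf, u_inf, s_ref):
    """Conventional input momentum coefficient for AFC jets:
    C_mu = (m_dot * U_jet) / (q_inf * S_ref),
    with q_inf = 0.5 * rho_inf * u_inf**2 the freestream dynamic pressure."""
    q_inf = 0.5 * rho_inf * u_inf ** 2   # [Pa]
    return (m_dot * u_jet) / (q_inf * s_ref)
```

The practical difficulty the paper addresses is that m_dot and especially U_jet are hard to obtain for real actuators, which is what motivates a simplified model.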
Capture of Fluorescence Decay Times by Flow Cytometry
Naivar, Mark A.; Jenkins, Patrick; Freyer, James P.
2012-01-01
In flow cytometry, the fluorescence decay time of an excitable species has been largely underutilized and is not likely found as a standard parameter on any imaging cytometer, sorting, or analyzing system. Most cytometers lack fluorescence lifetime hardware mainly owing to two central issues. Foremost, research and development with lifetime techniques has lacked proper exploitation of modern laser systems, data acquisition boards, and signal processing techniques. Secondly, a lack of enthusiasm for fluorescence lifetime applications in cells and with bead-based assays has persisted among the greater cytometry community. In this unit, we describe new approaches that address these issues and demonstrate the simplicity of digitally acquiring fluorescence relaxation rates in flow. The unit is divided into protocol and commentary sections in order to provide a most comprehensive discourse on acquiring the fluorescence lifetime with frequency-domain methods. The unit covers (i) standard fluorescence lifetime acquisition (protocol-based) with frequency-modulated laser excitation, (ii) digital frequency-domain cytometry analyses, and (iii) interfacing fluorescence lifetime measurements onto sorting systems. Within the unit is also a discussion on how digital methods are used for aliasing in order to harness higher frequency ranges. Also, a final discussion is provided on heterodyning and processing of waveforms for multi-exponential decay extraction. PMID:25419263
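For a single-exponential decay, the frequency-domain relation between the measured phase shift and the lifetime is tan(phi) = omega * tau. A minimal sketch of this standard conversion (the modulation frequency and lifetime in the example are illustrative):

```python
import math

def lifetime_from_phase(phase_shift_rad, mod_freq_hz):
    """Single-exponential frequency-domain lifetime:
    tan(phi) = omega * tau  =>  tau = tan(phi) / (2*pi*f)."""
    return math.tan(phase_shift_rad) / (2.0 * math.pi * mod_freq_hz)
```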
Regional groundwater flow modeling of the Geba basin, northern Ethiopia
NASA Astrophysics Data System (ADS)
Gebreyohannes, Tesfamichael; De Smedt, Florimond; Walraevens, Kristine; Gebresilassie, Solomon; Hussien, Abdelwassie; Hagos, Miruts; Amare, Kassa; Deckers, Jozef; Gebrehiwot, Kindeya
2017-05-01
The Geba basin is one of the most food-insecure areas of the Tigray regional state in northern Ethiopia due to recurrent drought resulting from the erratic distribution of rainfall. Since the beginning of the 1990s, rain-fed agriculture has been supported through small-scale irrigation schemes, mainly by surface-water harvesting, but success has been limited. Hence, the use of groundwater for irrigation purposes has gained considerable attention. The main purpose of this study is to assess groundwater resources in the Geba basin by means of a MODFLOW modeling approach. The model is calibrated using observed groundwater levels, yielding a clear insight into the groundwater flow systems and reserves. Results show that none of the hydrogeological formations can be considered an aquifer suitable for large-scale groundwater exploitation. However, aquitards can be identified that can support small-scale groundwater abstraction for irrigation needs in regions that are either designated as groundwater discharge areas or where groundwater levels are shallow and can be tapped by hand-dug wells or shallow boreholes.
Numerical approach on dynamic self-assembly of colloidal particles
NASA Astrophysics Data System (ADS)
Ibrahimi, Muhamet; Ilday, Serim; Makey, Ghaith; Pavlov, Ihor; Yavuz, Özgàn; Gulseren, Oguz; Ilday, Fatih Omer
Far-from-equilibrium systems of artificial ensembles are crucial for understanding many intelligent features in self-organized natural systems. However, the lack of established theory underlies a need for numerical implementations. Inspired by recent work, we simulate a solution-suspended colloidal system that dynamically self-assembles due to convective forces generated in the solvent when heated by a laser. To incorporate the random fluctuations of the particles and the continuously changing flow, we exploit a random-walk-based Brownian motion model and a real-time fluid dynamics solver developed for games, respectively. Simulation results fit the experiments and show many quantitative features of a non-equilibrium dynamic self-assembly, including phase space compression and an ensemble-energy input feedback loop.
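A random-walk Brownian step of the kind described combines advection by the local flow with an isotropic Gaussian kick whose standard deviation follows from the diffusivity. A hedged sketch (not the authors' code; `diffusivity` and `dt` are free parameters):

```python
import math
import random

def brownian_step(pos, flow_velocity, dt, diffusivity, rng=random):
    """One random-walk step: deterministic advection by the local flow
    velocity plus a Gaussian Brownian kick with per-axis standard
    deviation sqrt(2 * D * dt)."""
    sigma = math.sqrt(2.0 * diffusivity * dt)
    return tuple(x + v * dt + rng.gauss(0.0, sigma)
                 for x, v in zip(pos, flow_velocity))
```

In a full simulation this step would be applied per particle per frame, with `flow_velocity` sampled from the fluid solver at the particle's position.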
Experiments on Adaptive Techniques for Host-Based Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
DRAELOS, TIMOTHY J.; COLLINS, MICHAEL J.; DUGGAN, DAVID P.
2001-09-01
This research explores four experiments of adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.
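Generalized signature-based ID, as recommended here, relaxes exact signature matching to similarity matching so that variants of known exploits can still fire an alert. One way to sketch the idea (the Jaccard similarity and the threshold are our illustrative choices, not the report's algorithm):

```python
def generalized_signature_match(event, signatures, threshold=0.8):
    """Flag an event whose feature overlap with any known exploit
    signature exceeds a similarity threshold -- a middle ground between
    exact signature matching and pure anomaly detection.  Similarity is
    Jaccard overlap of feature sets (an illustrative choice)."""
    event = set(event)
    def jaccard(sig):
        sig = set(sig)
        union = event | sig
        return len(event & sig) / len(union) if union else 1.0
    return any(jaccard(sig) >= threshold for sig in signatures)
```

Lowering the threshold trades toward anomaly-detection behavior (more novel detections, more false positives); raising it trades toward exact signatures.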
Novel approach to the exploitation of the tidal energy. Volume 1: Summary and discussion
NASA Astrophysics Data System (ADS)
Gorlov, A. M.
1981-12-01
The hydropneumatic concept in the approach to harnessing low tidal hydropower is discussed. The energy of water flow is converted into the energy of an air jet by a specialized air chamber which is placed on the ocean floor across a flowing watercourse. Water passes through the chamber, where it works as a natural piston compressing air in the upper part of the closure. The compressed air is then used as the working medium to drive air turbines. The kinetic energy of an air jet provided by the air chamber is sufficient for stable operation of industrial air turbines. It is possible to use light plastic barriers instead of conventional rigid dams (the water sail concept). It is confirmed that the concept can result in a less expensive and more effective tidal power plant project than the conventional hydroturbine approach.
A Height Estimation Approach for Terrain Following Flights from Monocular Vision.
Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz
2016-12-06
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; we therefore chose to exploit this, using presently available hardware to extract height information for performing terrain-following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, together with its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.
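Under the simplest geometry, a downward-facing pinhole camera translating at speed V over flat ground sees ground features flow at u = f*V/h pixels per second, so height follows directly from flow and known motion. A sketch of this relation (a simplification of the paper's method, which also involves feature tracking and the reliability classifier):

```python
def height_from_flow(flow_px_per_s, speed_m_per_s, focal_px):
    """Pinhole model, downward-facing camera over flat ground:
    ground-feature optical flow u = f*V/h, hence h = f*V/u."""
    if flow_px_per_s <= 0.0:
        raise ValueError("optical flow magnitude must be positive")
    return focal_px * speed_m_per_s / flow_px_per_s
```

In practice the flow magnitude is noisy, which is why the paper gates each estimate with a trained decision tree before using it.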
Geothermal down well pumping system
NASA Technical Reports Server (NTRS)
Matthews, H. B.; Mcbee, W. D.
1974-01-01
A key technical problem in the exploitation of hot water geothermal energy resources is down-well pumping to inhibit mineral precipitation, improve thermal efficiency, and enhance flow. A novel approach to this problem involves the use of a small fraction of the thermal energy of the well water to boil and super-heat a clean feedwater flow in a down-hole exchanger adjacent to the pump. This steam powers a high-speed turbine-driven pump. The exhaust steam is brought to the surface through an exhaust pipe, condensed, and recirculated. A small fraction of the high-pressure clean feedwater is diverted to lubricate the turbine pump bearings and prevent leakage of brine into the turbine-pump unit. A project demonstrating the feasibility of this approach by means of both laboratory and down-well tests is discussed.
NASA Astrophysics Data System (ADS)
Allen, Jeffery M.
This research involves a few First-Order System Least Squares (FOSLS) formulations of a nonlinear-Stokes flow model for ice sheets. In Glen's flow law, a commonly used constitutive equation for ice rheology, the viscosity becomes infinite as the velocity gradients approach zero. This typically occurs near the ice surface or where there is basal sliding. The computational difficulties associated with the infinite viscosity are often overcome by an arbitrary modification of Glen's law that bounds the maximum viscosity. The FOSLS formulations developed in this thesis are designed to overcome this difficulty. The first FOSLS formulation is simply the first-order representation of the standard nonlinear full-Stokes equations; it is known as the viscosity formulation and suffers from the problem above. To overcome the problem of infinite viscosity, two new formulations exploit the fact that the deviatoric stress, the product of viscosity and strain rate, approaches zero as the viscosity goes to infinity. Using the deviatoric stress as the basis for a first-order system results in the basic fluidity system. Augmenting the basic fluidity system with a curl-type equation results in the augmented fluidity system, which is more amenable to the iterative solver, Algebraic MultiGrid (AMG). A Nested Iteration (NI) Newton-FOSLS-AMG approach is used to solve the nonlinear-Stokes problems. Several test problems from the ISMIP set of benchmarks are examined to test the effectiveness of the various formulations. These tests show that the viscosity-based method is more expensive and less accurate. The basic fluidity system shows optimal finite-element convergence. However, there is not yet an efficient iterative solver for this type of system, and this is the topic of future research. Alternatively, AMG performs better on the augmented fluidity system when using specific scaling. Unfortunately, this scaling results in reduced finite-element convergence.
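The singularity being worked around can be seen directly in the effective viscosity of Glen's law, eta = (1/2) * A^(-1/n) * strain_rate^((1-n)/n), which diverges as the effective strain rate goes to zero. A small sketch (the rate factor A and exponent n in the example are illustrative):

```python
def glen_viscosity(strain_rate, A=1e-16, n=3.0):
    """Effective viscosity from Glen's flow law:
    eta = 0.5 * A**(-1/n) * strain_rate**((1-n)/n).
    For n > 1 this diverges as strain_rate -> 0, which is what motivates
    the fluidity (deviatoric-stress) reformulations in the thesis."""
    return 0.5 * A ** (-1.0 / n) * strain_rate ** ((1.0 - n) / n)
```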
Flow-Based Network Analysis of the Caenorhabditis elegans Connectome
Bacik, Karol A.; Schaub, Michael T.; Billeh, Yazan N.; Barahona, Mauricio
2016-01-01
We exploit flow propagation on the directed neuronal network of the nematode C. elegans to reveal dynamically relevant features of its connectome. We find flow-based groupings of neurons at different levels of granularity, which we relate to functional and anatomical constituents of its nervous system. A systematic in silico evaluation of the full set of single and double neuron ablations is used to identify deletions that induce the most severe disruptions of the multi-resolution flow structure. Such ablations are linked to functionally relevant neurons, and suggest potential candidates for further in vivo investigation. In addition, we use the directional patterns of incoming and outgoing network flows at all scales to identify flow profiles for the neurons in the connectome, without pre-imposing a priori categories. The four flow roles identified are linked to signal propagation motivated by biological input-response scenarios. PMID:27494178
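The flow-propagation and ablation ideas can be sketched with a much simpler stand-in: PageRank-style steady-state visit probabilities of a random walk on a directed graph, and the total change in those probabilities when one neuron is deleted. This is an illustrative toy, not the multi-resolution flow analysis used in the paper:

```python
def flow_scores(adj, damping=0.85, iters=100):
    """Steady-state visit probabilities of a damped random walk
    (PageRank-style flow propagation) on a directed graph given as
    {node: [successors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            succ = adj[u] or nodes          # dangling nodes spread uniformly
            share = damping * rank[u] / len(succ)
            for v in succ:
                new[v] += share
        rank = new
    return rank

def ablation_impact(adj, node):
    """Disruption caused by deleting one node: total absolute change in
    the flow scores of the remaining nodes."""
    base = flow_scores(adj)
    reduced = {u: [v for v in succ if v != node]
               for u, succ in adj.items() if u != node}
    after = flow_scores(reduced)
    return sum(abs(base[u] - after[u]) for u in reduced)
```

Ranking all single (or paired) deletions by such an impact score is the spirit of the in silico ablation screen described above.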
Road Risk Modeling and Cloud-Aided Safety-Based Route Planning.
Li, Zhaojian; Kolmanovsky, Ilya; Atkins, Ella; Lu, Jianbo; Filev, Dimitar P; Michelini, John
2016-11-01
This paper presents a safety-based route planner that exploits vehicle-to-cloud-to-vehicle (V2C2V) connectivity. Time and road risk index (RRI) are considered as metrics to be balanced based on user preference. To evaluate road segment risk, a road and accident database from the highway safety information system is mined with a hybrid neural network model to predict RRI. Real-time factors such as time of day, day of the week, and weather are included as correction factors to the static RRI prediction. With real-time RRI and expected travel time, route planning is formulated as a multiobjective network flow problem and further reduced to a mixed-integer programming problem. A V2C2V implementation of our safety-based route planning approach is proposed to facilitate access to real-time information and computing resources. A real-world case study, route planning through the city of Columbus, Ohio, is presented. Several scenarios illustrate how the "best" route can be adjusted to favor time versus safety metrics.
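In the simplest single-route case, balancing travel time against road risk by a user preference weight reduces to a shortest-path search over a scalarized edge cost. A hedged sketch (a plain Dijkstra stand-in for the paper's mixed-integer formulation; `alpha` is the user preference):

```python
import heapq

def best_route(graph, start, goal, alpha=0.5):
    """Dijkstra over scalarized cost = alpha*time + (1-alpha)*risk.
    graph: {node: [(neighbor, travel_time, risk_index), ...]}.
    alpha=1 favors pure travel time; alpha=0 favors pure safety."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, t, r in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + alpha * t + (1.0 - alpha) * r,
                                    nbr, path + [nbr]))
    return float("inf"), []
```

Sweeping `alpha` from 0 to 1 traces the same time-versus-safety trade-off illustrated by the Columbus case study.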
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
2006-09-01
exploited and that we get best value for money from our investment. We announced in the Strategy that we had set in place an evidence-based peer review of...currently meets the Department’s needs. The study was also to set a benchmark for future regular reviews of the programme to ensure quality, value for...The level of resources devoted to such research should be seen in the context of the overall value of expenditure flowing from such decisions. The
NASA Astrophysics Data System (ADS)
Scarsoglio, Stefania; Cazzato, Fabio; Ridolfi, Luca
2017-09-01
A network-based approach is presented to investigate the cerebrovascular flow patterns during atrial fibrillation (AF) with respect to normal sinus rhythm (NSR). AF, the most common cardiac arrhythmia with faster and irregular beating, has been recently and independently associated with the increased risk of dementia. However, the underlying hemodynamic mechanisms relating the two pathologies remain mainly undetermined so far; thus, the contribution of modeling and refined statistical tools is valuable. Pressure and flow rate temporal series in NSR and AF are here evaluated along representative cerebral sites (from carotid arteries to capillary brain circulation), exploiting reliable artificially built signals recently obtained from an in silico approach. The complex network analysis evidences, in a synthetic and original way, a dramatic signal variation towards the distal/capillary cerebral regions during AF, which has no counterpart in NSR conditions. At the large artery level, networks obtained from both AF and NSR hemodynamic signals exhibit elongated and chained features, which are typical of pseudo-periodic series. These aspects are almost completely lost towards the microcirculation during AF, where the networks are topologically more circular and present random-like characteristics. As a consequence, all the physiological phenomena at the microcerebral level ruled by periodicity—such as regular perfusion, mean pressure per beat, and average nutrient supply at the cellular level—can be strongly compromised, since the AF hemodynamic signals assume irregular behaviour and random-like features. Through a powerful approach which is complementary to the classical statistical tools, the present findings further strengthen the potential link between AF hemodynamic and cognitive decline.
Roda, Barbara; Mirasoli, Mara; Zattoni, Andrea; Casale, Monica; Oliveri, Paolo; Bigi, Alessandro; Reschiglian, Pierluigi; Simoni, Patrizia; Roda, Aldo
2016-10-01
An integrated sensing system is presented for the first time, where a metal oxide semiconductor sensor-based electronic olfactory system (MOS array), employed for pathogen bacteria identification based on their volatile organic compound (VOC) characterisation, is assisted by a preliminary separative technique based on gravitational field-flow fractionation (GrFFF). In the integrated system, a preliminary step using GrFFF fractionation of a complex sample provided bacteria-enriched fractions readily available for subsequent MOS array analysis. The MOS array signals were then analysed employing a chemometric approach using principal components analysis (PCA) for a first-data exploration, followed by linear discriminant analysis (LDA) as a classification tool, using the PCA scores as input variables. The ability of the GrFFF-MOS system to distinguish between viable and non-viable cells of the same strain was demonstrated for the first time, yielding 100 % ability of correct prediction. The integrated system was also applied as a proof of concept for multianalyte purposes, for the detection of two bacterial strains (Escherichia coli O157:H7 and Yersinia enterocolitica) simultaneously present in artificially contaminated milk samples, obtaining a 100 % ability of correct prediction. Acquired results show that GrFFF band slicing before MOS array analysis can significantly increase reliability and reproducibility of pathogen bacteria identification based on their VOC production, simplifying the analytical procedure and largely eliminating sample matrix effects. The developed GrFFF-MOS integrated system can be considered a simple straightforward approach for pathogen bacteria identification directly from their food matrix. Graphical abstract An integrated sensing system is presented for pathogen bacteria identification in food, in which field-flow fractionation is exploited to prepare enriched cell fractions prior to their analysis by electronic olfactory system analysis.
Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B
2013-02-01
This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10³ neurons and almost 32 million synapses.
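A module-label router in a 2D mesh can be sketched with classic dimension-order (X-then-Y) routing; this is an illustrative scheme, not necessarily the exact source- or destination-label routers analyzed in the paper:

```python
def route_event(src, dst):
    """Dimension-order (X-then-Y) routing in a 2D mesh: the sequence of
    module coordinates an address event visits from src to dst, moving
    one neighbor hop at a time."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:                      # resolve the X coordinate first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                      # then resolve Y
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops
```

Dimension-order routing is deadlock-free on a mesh, which is one reason it is a common baseline for mesh-connected event systems.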
NASA Astrophysics Data System (ADS)
Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann
2018-03-01
Accurate daily river flow forecasts are essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). In fact, one of the potential solutions to the measurement asymmetry issue is data re-sampling: either only the hydrological data, or only the balanced part of the hydro-meteorological data set, is considered during the forecasting process. The main disadvantage, however, is that potentially relevant information in the left-out data may be lost. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction accuracy than a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of river flow daily measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve a higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5% of the actual river flow variation, respectively. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.
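The two-phase idea, fitting a flow-only model on the full long record and then fitting a meteorological correction only on the shorter overlap period, can be sketched with plain linear models standing in for the constructive fuzzy systems (purely illustrative):

```python
def fit_ar1(series):
    """Phase 1: AR(1) flow-only model q[t+1] ~ a*q[t] + b, fitted by
    least squares on the full (long) flow record.  Returns (a, b)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def fit_rain_correction(flow, rain, ar_params):
    """Phase 2: linear rainfall correction fitted only on the (shorter)
    overlap period, applied on top of the phase-1 prediction.
    Returns (c, d) for correction c*rain[t] + d."""
    a, b = ar_params
    resid = [q1 - (a * q0 + b) for q0, q1 in zip(flow[:-1], flow[1:])]
    r = rain[:-1]
    n = len(resid)
    mr, me = sum(r) / n, sum(resid) / n
    denom = sum((ri - mr) ** 2 for ri in r)
    c = (sum((ri - mr) * (ei - me) for ri, ei in zip(r, resid)) / denom
         if denom else 0.0)
    return c, me - c * mr
```

The point of the split is that phase 1 uses all 24 years of flow data, while phase 2 only needs the 4 years where rainfall is also available.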
Fishing and temperature effects on the size structure of exploited fish stocks.
Tu, Chen-Yi; Chen, Kuan-Ting; Hsieh, Chih-Hao
2018-05-08
Size structure of a fish stock plays an important role in maintaining the sustainability of the population. The size distribution of an exploited stock is predicted to shift toward small individuals under size-selective fishing and/or warming; however, their relative contributions remain relatively unexplored. In addition, existing analyses of size structure have focused on univariate size-based indicators (SBIs), such as mean length, evenness of size classes, or the upper 95th percentile of the length frequency distribution; these approaches may not capture the full information of size structure. To bridge the gap, we used the variation partitioning approach to examine how the size structure (composition of size classes) responded to fishing, warming, and their interaction. We analyzed 28 exploited stocks in the western US, Alaska and the North Sea. Our results show that fishing has the most prominent effect on the size structure of the exploited stocks. In addition, fish stocks that experienced higher variability in fishing are more responsive to temperature effects in their size structure, suggesting that fishing may elevate the sensitivity of exploited stocks in responding to environmental effects. The variation partitioning approach provides complementary information to univariate SBIs in analyzing size structure.
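Variation partitioning splits the explained variance of the response into a unique fishing component, a unique temperature component, and a shared component, computed from the R² of three regressions. A sketch with ordinary least squares standing in for the study's actual multivariate size-structure analysis:

```python
def r2(y, X):
    """R^2 of ordinary least squares of y on the columns of X (an
    intercept is added), via normal equations + Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):                          # elimination with pivoting
        p = max(range(c, k), key=lambda i: abs(A[i][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for i in range(c + 1, k):
            f = A[i][c] / A[c][c]
            for j in range(c, k):
                A[i][j] -= f * A[c][j]
            b[i] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):                # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    yhat = [sum(be * ri for be, ri in zip(beta, r)) for r in rows]
    ym = sum(y) / len(y)
    ss_res = sum((yi - hi) ** 2 for yi, hi in zip(y, yhat))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def variation_partition(y, fishing, temp):
    """Split R^2 into unique-fishing, unique-temperature, shared and
    unexplained fractions (classic variation partitioning)."""
    rf = r2(y, [[f] for f in fishing])
    rt = r2(y, [[t] for t in temp])
    rft = r2(y, [[f, t] for f, t in zip(fishing, temp)])
    return {"fishing": rft - rt, "temperature": rft - rf,
            "shared": rf + rt - rft, "unexplained": 1.0 - rft}
```

The four fractions sum to one by construction, which makes the relative contributions of fishing and warming directly comparable.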
Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Curtis, Gary P.; Lane, John W.
2013-01-01
Anomalous solute transport, modeled as rate-limited mass transfer, has an observable geoelectrical signature that can be exploited to infer the controlling parameters. Previous experiments indicate the combination of time-lapse geoelectrical and fluid conductivity measurements collected during ionic tracer experiments provides valuable insight into the exchange of solute between mobile and immobile porosity. Here, we use geoelectrical measurements to monitor tracer experiments at a former uranium mill tailings site in Naturita, Colorado. We use nonlinear regression to calibrate dual-domain mass transfer solute-transport models to field data. This method differs from previous approaches by calibrating the model simultaneously to observed fluid conductivity and geoelectrical tracer signals using two parameter scales: effective parameters for the flow path upgradient of the monitoring point and the parameters local to the monitoring point. We use regression statistics to rigorously evaluate the information content and sensitivity of fluid conductivity and geophysical data, demonstrating multiple scales of mass transfer parameters can simultaneously be estimated. Our results show, for the first time, field-scale spatial variability of mass transfer parameters (i.e., exchange-rate coefficient, porosity) between local and upgradient effective parameters; hence our approach provides insight into spatial variability and scaling behavior. Additional synthetic modeling is used to evaluate the scope of applicability of our approach, indicating greater range than earlier work using temporal moments and a Lagrangian-based Damköhler number. The introduced Eulerian-based Damköhler is useful for estimating tracer injection duration needed to evaluate mass transfer exchange rates that range over several orders of magnitude.
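The dual-domain (mobile-immobile) exchange term being calibrated can be sketched in zero dimensions, omitting advection: solute transfers between the two domains at a first-order rate alpha, weighted by the capacity ratio beta = theta_im / theta_m. An illustrative explicit-Euler sketch (not the calibrated field model):

```python
def simulate_mass_transfer(c_mobile0, c_immobile0, alpha, beta, dt, steps):
    """Zero-dimensional first-order mobile-immobile exchange:
        d(c_im)/dt = alpha * (c_m - c_im)
        d(c_m)/dt  = -beta * alpha * (c_m - c_im)
    with beta = theta_im / theta_m the capacity ratio.  Explicit Euler;
    returns the list of (c_mobile, c_immobile) over time."""
    cm, cim = c_mobile0, c_immobile0
    hist = [(cm, cim)]
    for _ in range(steps):
        ex = alpha * (cm - cim)          # exchange flux this step
        cm, cim = cm - beta * ex * dt, cim + ex * dt
        hist.append((cm, cim))
    return hist
```

Note the invariant c_m + beta*c_im is conserved, and both concentrations relax toward a common equilibrium at rate alpha*(1+beta), which is the behavior the geoelectrical signal makes observable.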
Spectrally-balanced chromatic approach-lighting system
NASA Technical Reports Server (NTRS)
Chase, W. D.
1977-01-01
Approach lighting system employing combinations of red and blue lights reduces problem of color-based optical illusions. System exploits inherent chromatic aberration of eye to create three-dimensional effect, giving pilot visual clues of position.
Majid, Abdul; Ali, Safdar
2015-01-01
We developed a genetic programming (GP)-based evolutionary ensemble system for the early diagnosis, prognosis and prediction of human breast cancer. This system effectively exploits the diversity in feature and decision spaces. First, individual learners are trained in different feature spaces using physicochemical properties of protein amino acids. Their predictions are then stacked to develop the best solution during the GP evolution process. Finally, results for the HBC-Evo system are obtained with an optimal threshold, which is computed using particle swarm optimization. Our novel approach has demonstrated promising results compared to state-of-the-art approaches.
NASA Astrophysics Data System (ADS)
Vibhava, F.; Graham, W. D.; De Rooij, R.; Maxwell, R. M.; Martin, J. B.; Cohen, M. J.
2011-12-01
The Santa Fe River Basin (SFRB) consists of three linked hydrologic units: the upper confined region (UCR), the semi-confined transitional region (Cody Escarpment, CE) and the lower unconfined region (LUR). Contrasting geological characteristics among these units affect streamflow generation processes. In the UCR, surface runoff and surficial stores dominate, whereas in the LUR minimal surface runoff occurs and flow is dominated by groundwater sources and sinks. In the CE region the Santa Fe River (SFR) is captured entirely by a sinkhole into the Floridan aquifer, emerging as a first-magnitude spring 6 km to the south. In light of these contrasting hydrological settings, developing a predictive, basin-scale, physically-based hydrologic simulation model remains a research challenge. This ongoing study aims to assess the ability of a fully-coupled, physically-based, three-dimensional hydrologic model (PARFLOW-CLM) to predict hydrologic conditions in the SFRB. The assessment will include testing the model's ability to adequately represent surface and subsurface flow sources, flow paths, and travel times within the basin, as well as the surface-groundwater exchanges throughout the basin. In addition to simulating water fluxes, we are also collecting high-resolution specific conductivity data at 10 locations along the river. Our objective is to exploit the hypothesized strong end-member separation between riverine source-water geochemistries to further refine the PARFLOW-CLM representation of riverine mixing and delivery dynamics.
Measuring flows in the solar interior: current developments, results, and outstanding problems
NASA Astrophysics Data System (ADS)
Schad, Ariane
2016-10-01
I will present an overview of current developments in determining flows in the solar interior and recent results from helioseismology. I will place special focus on the inference of the deep structure of the meridional flow, which is one of the most challenging problems in helioseismology. Promising approaches to this problem have been developed recently. Time-distance analysis has improved considerably since a systematic effect in the analysis, the origin of which is not yet clear, was recognized and compensated for. In addition, a different approach is now available that directly exploits the distortion of mode eigenfunctions by the meridional flow as well as by rotation. These methods have revealed partly surprising, complex meridional flow patterns which, however, do not yet provide a consistent picture of the flow. Resolving this puzzle is part of current research, since it has important consequences for our understanding of the solar dynamo. Another interesting discrepancy was found in recent studies between the amplitudes of the large- and small-scale dynamics in the convection zone estimated from helioseismology and those predicted by theoretical models. This raises fundamental questions about how the Sun, and in general a star, maintains its heat transport and redistributes its angular momentum, leading, e.g., to the observed differential rotation.
Design of a mesoscale continuous flow route towards lithiated methoxyallene.
Seghers, Sofie; Heugebaert, Thomas S A; Moens, Matthias; Sonck, Jolien; Thybaut, Joris; Stevens, Chris Victor
2018-05-11
The unique nucleophilic properties of lithiated methoxyallene allow for C-C bond formation with a wide variety of electrophiles, thus introducing an allenic group for further functionalization. This approach has yielded a tremendously broad range of (hetero)cyclic scaffolds, including API precursors. To date, however, its valorization at scale is hampered by the batch synthesis protocol which suffers from serious safety issues. Hence, the attractive heat and mass transfer properties of flow technology were exploited to establish a mesoscale continuous flow route towards lithiated methoxyallene. An excellent conversion of 94% was obtained, corresponding to a methoxyallene throughput of 8.2 g/h. The process is characterized by short reaction times, mild reaction conditions and a stoichiometric use of reagents. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Sogachev, Andrey; Kelly, Mark
2016-03-01
Displacement height (d) is an important parameter in the simple modelling of wind speed and vertical fluxes above vegetative canopies, such as forests. Here we show that, aside from implicit definition through a (displaced) logarithmic profile, accepted formulations for d do not consistently predict flow properties above a forest. Turbulent transport can affect the displacement height, and is an integral part of what is called the roughness sublayer. We develop a more general approach for the estimation of d, through production of turbulent kinetic energy and turbulent transport, and show how previous stress-based formulations for displacement height can be seen as simplified cases of a more general definition including turbulent transport. Further, we give a simplified and practical form for d that is in agreement with the general approach, exploiting the concept of the vortex thickness scale from mixing-layer theory. We assess the new and previous displacement height formulations using flow statistics derived from the atmospheric boundary-layer Reynolds-averaged Navier-Stokes model SCADIS as well as from wind-tunnel observations, for different vegetation types and flow regimes in neutral conditions. The new formulations tend to produce smaller d than stress-based forms, falling closer to the classic logarithmically-defined displacement height. The new, more generally defined, displacement height appears to be more compatible with profiles of the components of the turbulent kinetic energy budget, accounting for the combined effects of turbulent transport and shear production. The Coriolis force also plays a role, introducing wind-speed dependence into the behaviour of the roughness sublayer; this affects the turbulent transport, shear production, stress, and wind speed, as well as the displacement height, depending on the character of the forest. We further show how our practical (`mixing-layer') form for d matches the new turbulence-based relation, as well as its correspondence to previous (stress-based) formulations.
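For reference, the "classic logarithmically-defined displacement height" mentioned in this abstract is the one entering the standard neutral surface-layer wind profile (a textbook relation, not a formula taken from the abstract itself):

```latex
u(z) = \frac{u_*}{\kappa}\,\ln\!\left(\frac{z - d}{z_0}\right)
```

where \(u_*\) is the friction velocity, \(\kappa \approx 0.4\) the von Kármán constant, and \(z_0\) the roughness length; d is then the vertical offset that makes the observed wind profile logarithmic above the canopy.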
Compound Capillary Flows in Complex Containers: Drop Tower Test Results
NASA Astrophysics Data System (ADS)
Bolleddula, Daniel A.; Chen, Yongkang; Semerjian, Ben; Tavan, Noël; Weislogel, Mark M.
2010-10-01
Drop towers continue to provide unique capabilities to investigate capillary flow phenomena relevant to terrestrial and space-based capillary fluidics applications. In this study certain `capillary rise' flows and the value of drop tower experimental investigations are briefly reviewed. A new analytic solution for flows along planar interior edges is presented. A selection of test cell geometries is then discussed where compound capillary flows occur spontaneously and simultaneously over local and global length scales. Sample experimental results are provided. Tertiary experiments on a family of asymmetric geometries that isolate the global component of such flows are then presented, along with a qualitative analysis that may be used to either avoid or exploit such flows. The latter may also serve as a design tool with which to assess the impact of inadvertent container asymmetry.
All-Fullerene-Based Cells for Nonaqueous Redox Flow Batteries.
Friedl, Jochen; Lebedeva, Maria A; Porfyrakis, Kyriakos; Stimming, Ulrich; Chamberlain, Thomas W
2018-01-10
Redox flow batteries have the potential to revolutionize our use of intermittent sustainable energy sources such as solar and wind power by storing the energy in liquid electrolytes. Our concept study utilizes a novel electrolyte system, exploiting derivatized fullerenes as both anolyte and catholyte species in a series of battery cells, including a symmetric, single species system which alleviates the common problem of membrane crossover. The prototype multielectron system, utilizing molecular based charge carriers, made from inexpensive, abundant, and sustainable materials, principally, C and Fe, demonstrates remarkable current and energy densities and promising long-term cycling stability.
Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks
Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan
2016-01-01
To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves substantial positioning accuracy improvement over individual positioning approaches including PDR and WiFi positioning. PMID:27608019
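A minimal numerical sketch of the EKF fusion idea described in this abstract: PDR steps drive the prediction, and a WiFi position fix drives the correction. All state choices and noise values here are hypothetical illustrations, not the authors' implementation (which, among other things, estimates the WiFi measurement noise adaptively via kernel density estimation):

```python
import numpy as np

# State x = [px, py]. Each PDR step advances the position along the estimated
# heading; each WiFi fix is a direct (noisy) observation of position (H = I).

def ekf_predict(x, P, step_len, heading, Q):
    """Propagate position by one PDR step (step length + heading)."""
    x = x + step_len * np.array([np.cos(heading), np.sin(heading)])
    P = P + Q  # process noise grows the uncertainty
    return x, P

def ekf_update(x, P, z, R):
    """Correct with a WiFi position fix z."""
    S = P + R                          # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P
    return x, P

if __name__ == "__main__":
    x, P = np.zeros(2), np.eye(2)
    Q = 0.05 * np.eye(2)               # PDR process noise (assumed)
    R = 4.0 * np.eye(2)                # WiFi noise (fixed here for brevity)
    for _ in range(10):                # walk east, 0.7 m per step
        x, P = ekf_predict(x, P, 0.7, 0.0, Q)
    x, P = ekf_update(x, P, np.array([7.5, 0.3]), R)
    print(np.round(x, 2))              # ≈ [7.14 0.08]
```

The update pulls the dead-reckoned estimate part of the way toward the WiFi fix, weighted by the relative uncertainties, and shrinks the covariance.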
A Flexible CUDA LU-based Solver for Small, Batched Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste
This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory, and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster under certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers who need batched linear solvers to choose whichever implementation is most appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that choose the best implementation depending on the input data.
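The numerical core of the chapter's scheme is Gaussian elimination with complete pivoting, i.e. pivoting over both rows and columns of the remaining submatrix. A serial Python analogue is sketched below, where the loop over systems stands in for the "one CUDA thread per system" mapping; this is an illustration of the algorithm, not the OSTI code:

```python
import numpy as np

def solve_complete_pivot(A, b):
    """Solve A x = b via Gaussian elimination with COMPLETE pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    col_perm = np.arange(n)            # tracks column swaps (variable order)
    for k in range(n - 1):
        # pick the largest |entry| anywhere in the remaining submatrix
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i += k; j += k
        A[[k, i]] = A[[i, k]]; b[[k, i]] = b[[i, k]]   # row swap
        A[:, [k, j]] = A[:, [j, k]]                     # column swap
        col_perm[[k, j]] = col_perm[[j, k]]
        for r in range(k + 1, n):                       # eliminate below pivot
            m = A[r, k] / A[k, k]
            A[r, k:] -= m * A[k, k:]
            b[r] -= m * b[k]
    x = np.zeros(n)                                     # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - A[r, r + 1:] @ x[r + 1:]) / A[r, r]
    out = np.zeros(n)
    out[col_perm] = x                                   # undo column swaps
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # "one CUDA thread per system" becomes a plain loop in this serial sketch
    for _ in range(100):
        A = rng.standard_normal((5, 5))
        b = rng.standard_normal(5)
        x = solve_complete_pivot(A, b)
    print(np.max(np.abs(A @ x - b)))   # residual of the last solve
```

Complete pivoting permutes columns as well as rows, so the solution must be unscrambled at the end; this extra bookkeeping is the price of the improved numerical stability the chapter cites.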
Fulian; Gooch; Fisher; Stevens; Compton
2000-08-01
The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.
Distributed learning and multi-objectivity in traffic light control
NASA Astrophysics Data System (ADS)
Brys, Tim; Pham, Tong T.; Taylor, Matthew E.
2014-01-01
Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against standard isolated traffic actuated signals. We analyse how learning and coordination behave under different traffic conditions, and discuss the multi-objective nature of the problem. Finally we evaluate several alternative reward signals in the best performing approach, some of these taking advantage of the correlation between the problem-inherent objectives to improve performance.
Osberg, Brendan
2006-01-01
In this essay I explore two arguments against commercial surrogacy, based on commodification and exploitation respectively. I adopt a consequentialist framework and argue that commodification arguments must be grounded in a resultant harm to either child or surrogate, and that a priori arguments which condemn the practice for puritanical reasons cannot form a basis for public law. Furthermore, there is no overwhelming evidence of harm caused to either party involved in commercial surrogacy, and hence Canadian law (which forbids the practice) must (and can) be justified on grounds of exploitation. Objections raised by Wilkinson based on an 'isolated case' approach are addressed when one takes into account the political implications of public policy. I argue that it is precisely these implications that justify laws forbidding commercial surrogacy on the grounds of preventing systematic exploitation.
Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui
2015-01-01
Introduction The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Methods Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. Results We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. Conclusions We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. PMID:25670753
A Height Estimation Approach for Terrain Following Flights from Monocular Vision
Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz
2016-01-01
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424
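The geometric relation behind this kind of monocular height estimation is simple: for a downward-looking camera over flat ground, a feature directly below a platform moving at speed v appears to move at roughly f·v/h pixels per second, so the flying height can be recovered by inverting that relation. The sketch below uses hypothetical numbers and omits the paper's decision-tree gating of unreliable estimates:

```python
# Toy pinhole-camera relation for nadir-view height estimation over flat
# ground: flow = focal * speed / height, inverted for height. Illustrative
# only; the paper's full method tracks features with optical flow and
# classifies each estimate's trustworthiness with a decision tree.

def height_from_flow(focal_px, speed_mps, flow_px_per_s):
    """Invert flow = focal * speed / height for a downward-looking camera."""
    return focal_px * speed_mps / flow_px_per_s

if __name__ == "__main__":
    f = 800.0      # focal length in pixels (assumed)
    v = 10.0       # UAV ground speed in m/s (from the vehicle's motion data)
    flow = 200.0   # measured feature speed in px/s (assumed)
    print(height_from_flow(f, v, flow))   # 40.0 (metres)
```

Any error in the measured flow maps directly into the height estimate, which is why gating the estimates with a classifier, as the paper does, matters in practice.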
High Temperature Composite Heat Exchangers
NASA Technical Reports Server (NTRS)
Eckel, Andrew J.; Jaskowiak, Martha H.
2002-01-01
High temperature composite heat exchangers are an enabling technology for a number of aeropropulsion applications. They offer the potential for mass reductions of greater than fifty percent over traditional metallic designs and enable new vehicle and engine designs. Since they offer the ability to operate at significantly higher operating temperatures, they facilitate operation at reduced coolant flows and make possible temporary uncooled operation in temperature regimes, such as those experienced during vehicle reentry, where traditional heat exchangers require coolant flow. This reduction in coolant requirements can translate into enhanced range or system payload. A brief review of the approaches and challenges to exploiting this important technology is presented, along with the status of recent government-funded projects.
Application of magnetohydrodynamic actuation to continuous flow chemistry.
West, Jonathan; Karamata, Boris; Lillis, Brian; Gleeson, James P; Alderman, John; Collins, John K; Lane, William; Mathewson, Alan; Berney, Helen
2002-11-01
Continuous flow microreactors with an annular microchannel for cyclical chemical reactions were fabricated by either bulk micromachining in silicon or by rapid prototyping using EPON SU-8. Fluid propulsion in these unusual microchannels was achieved using AC magnetohydrodynamic (MHD) actuation. This integrated micropumping mechanism obviates the use of moving parts by acting locally on the electrolyte, exploiting its inherent conductive nature. Both silicon and SU-8 microreactors were capable of MHD actuation, attaining fluid velocities of the order of 300 microm s(-1) when using a 500 mM KCl electrolyte. The polymerase chain reaction (PCR), a thermocycling process, was chosen as an illustrative example of a cyclical chemistry. Accordingly, temperature zones were provided to enable a thermal cycle during each revolution. With this approach, fluid velocity determines cycle duration. Here, we report device fabrication and performance, a model to accurately describe fluid circulation by MHD actuation, and compatibility issues relating to this approach to chemistry.
Numerical simulation of gas hydrate exploitation from subsea reservoirs in the Black Sea
NASA Astrophysics Data System (ADS)
Janicki, Georg; Schlüter, Stefan; Hennig, Torsten; Deerberg, Görge
2017-04-01
Natural gas (methane) is the most environmentally friendly source of fossil energy. When coal is replaced by natural gas in power production, the emission of carbon dioxide is reduced by 50 %. The vast amount of methane assumed to be stored in gas hydrate deposits can help to overcome a shortage of fossil energy resources in the future. To increase their potential for energy applications, new technological approaches are being discussed and developed worldwide. Besides the technical challenges that have to be overcome, climate and safety issues have to be considered before a commercial exploitation of such unconventional reservoirs. The potential of producing natural gas from subsea gas hydrate deposits by various means (e.g. depressurization and/or carbon dioxide injection) is numerically studied in the frame of the German research project »SUGAR - Submarine Gas Hydrate Reservoirs«. In order to simulate the exploitation of hydrate-bearing sediments in the subsea, an in-house simulation model HyReS, implemented in the general-purpose software COMSOL Multiphysics, is used. This tool turned out to be especially suited for the flexible implementation of non-standard correlations concerning heat transfer, fluid flow, hydrate kinetics, and other relevant model data. Partially based on the simulation results, the development of a technical concept and its evaluation are the subject of ongoing investigations, whereby geological and ecological criteria are to be considered. The results illustrate the processes and effects occurring during gas production from a subsea gas hydrate deposit by depressurization. The simulation results from a case study for a deposit located in the Black Sea reveal that the production of natural gas by simple depressurization is possible, but with quite low rates.
It can be shown that the hydrate decomposition and thus the gas production strongly depend on the geophysical properties of the reservoir, the mass and heat transport within the reservoir, and the model settings. In particular, the permeability and the available heat, which is required to decompose the hydrate, play an important role. The work is focused on the thermodynamic principles and technological approaches for the exploitation.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" drawn from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that, using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
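The POD step described here has a standard linear-algebra core: stack state snapshots as columns of a matrix, take an SVD, and keep the leading left singular vectors as a reduced basis. A generic sketch (not the authors' ice-sheet code; the energy threshold is an assumed parameter):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Return an orthonormal basis capturing `energy` of the snapshot variance.

    snapshots: (n_state, n_snapshots) array, one state vector per column.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)       # cumulative energy fraction
    r = int(np.searchsorted(frac, energy)) + 1  # smallest r reaching `energy`
    return U[:, :r]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic snapshots that lie (up to round-off) in a 3-dimensional subspace
    X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 40))
    V = pod_basis(X)
    print(V.shape)  # basis with at most 3 columns for this rank-3 data
    # projecting onto the basis reconstructs the snapshots almost exactly
    print(np.linalg.norm(X - V @ (V.T @ X)) <= 0.04 * np.linalg.norm(X))
```

A reduced-order forward model then evolves only the r basis coefficients instead of the full state, which is what makes repeated posterior sampling affordable.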
Formation of droplet interface bilayers in a Teflon tube
NASA Astrophysics Data System (ADS)
Walsh, Edmond; Feuerborn, Alexander; Cook, Peter R.
2016-09-01
Droplet-interface bilayers (DIBs) have applications in disciplines ranging from biology to computing. We present a method for forming them manually using a Teflon tube attached to a syringe pump; this method is simple enough that it should be accessible to those without expertise in microfluidics. It exploits the properties of interfaces between three immiscible liquids, and uses fluid flow through the tube to pack together drops coated with lipid monolayers to create bilayers at points of contact. It is used to create functional nanopores in DIBs composed of phosphocholine using the protein α-hemolysin (αHL), to demonstrate osmotically-driven mass transfer of fluid across surfactant-based DIBs, and to create arrays of DIBs. The approach is scalable, and thousands of DIBs can be prepared using a robot in one hour; it is therefore feasible to use it for high-throughput applications.
Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto
2018-02-08
The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid-dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in-house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid-dynamics, which would be unlikely observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ortiz, Marco; Wolff, Matthias
2004-10-01
The sustainability of different integrated management regimes for the mangrove ecosystem of the Caeté Estuary (North Brazil) was assessed using a holistic theoretical framework. As a way to demonstrate that the behaviour and trajectory of complex whole systems are not epiphenomenal to the properties of their small parts, a set of conceptual models ranging from more reductionistic to more holistic was enunciated. These models integrate the scientific information published to date for this mangrove ecosystem. The sustainability of different management scenarios (forestry and fishery) was assessed. Since the exploitation of mangrove trees is not allowed according to Brazilian law, forestry was included only for simulation purposes. The model simulations revealed that sustainability predictions of reductionistic models should not be extrapolated into holistic approaches. Forestry and fishery activities seem to be sustainable only if they are self-damped. The exploitation of the two mangrove species Rhizophora mangle and Avicennia germinans does not appear to be sustainable, thus a rotation harvest is recommended. A similar conclusion holds for the exploitation of invertebrate species. Our results suggest that more studies should be focused on the estimation of maximum sustainable yield based on a multispecies approach. Any reference to holistic sustainability based on reductionistic approaches may distort our understanding of natural complex ecosystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Dan; Wang, Jun; Wang, Limin
An integrated lateral flow test strip with electrochemical sensor (LFTSES) device with a rapid, selective and sensitive response for quantification of exposure to organophosphorus (OP) pesticides and nerve agents has been developed. The principle of this approach is based on parallel measurements of post-exposure and baseline acetylcholinesterase (AChE) enzyme activity, where reactivation of the phosphorylated AChE is exploited to enable measurement of the total amount of AChE (including inhibited and active), which is used as a baseline for calculation of AChE inhibition. Quantitative measurement of the phosphorylated adduct (OP-AChE) was realized by subtracting the active AChE from the total amount of AChE. The proposed LFTSES device integrates immunochromatographic test strip technology with electrochemical measurement using a disposable screen-printed electrode which is located under the test zone. It shows a linear response between AChE enzyme activity and enzyme concentration from 0.05 to 10 nM, with a detection limit of 0.02 nM. Based on this reactivation approach, the LFTSES device has been successfully applied for in vitro red blood cell inhibition studies using chlorpyrifos oxon as a model OP agent. This approach not only eliminates the difficulty in screening of low-dose OP exposure due to individual variation of normal AChE values, but also avoids the problem of overlapping substrate specificity with cholinesterases and potential interference from other electroactive species in biological samples. It is baseline free and thus provides a rapid, sensitive, selective and inexpensive tool for in-field and point-of-care assessment of exposure to OP pesticides and nerve agents.
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming because the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower-frame-rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower-frame-rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
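The temporal-upsampling step can be illustrated with a toy backward-warping sketch: a flow field (assumed here to be already estimated from the fast sensor) is scaled by the fractional time offset and used to resample the slow sensor's frame. Nearest-neighbour sampling and the uniform flow field are simplifications for illustration only:

```python
import numpy as np

def warp(frame, flow, alpha):
    """Backward-warp `frame` by the fraction `alpha` of the flow field.
    flow[y, x] = (dy, dx): displacement over one slow-frame interval."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - alpha * flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - alpha * flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def temporal_upsample(slow_frame, flow, n_intermediate):
    """Intermediate frames between two slow-sensor samples, reusing the
    motion estimated from the high-rate sensor."""
    return [warp(slow_frame, flow, (k + 1) / (n_intermediate + 1))
            for k in range(n_intermediate)]

# Toy scene: one bright pixel moving 2 px to the right per slow-frame interval.
img = np.zeros((8, 8)); img[4, 2] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 1] = 2.0
mid = temporal_upsample(img, flow, 1)[0]   # halfway frame: pixel at column 3
```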
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built such that the different multispectral bands, or the multispectral and panchromatic bands, are not all acquired at exactly the same time. This effect is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects that move during this small time span. In this paper we present a method for the automatic detection and extraction of moving objects, mainly traffic, from single very-high-resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since moving objects are mapped to different positions only in different spectral bands, changes in spectral properties must also be taken into account. In the case where the main distance in the focal plane is between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach of weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on these methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected and how many are missed - and for accuracy - how accurate the derived speed and size of the objects are. Finally, the results are discussed and an outlook on possible improvements towards operational processing is presented.
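The speed estimate underlying such methods follows directly from the band-to-band displacement, the ground sampling distance, and the acquisition time offset; a minimal sketch with illustrative numbers (the threshold-based detection is likewise simplified):

```python
import numpy as np

def detect_candidates(band_a, band_b, thresh):
    """Threshold the difference image of two quasi-identical intermediate
    bands; surviving pixels are candidate moving objects."""
    diff = np.abs(band_a.astype(float) - band_b.astype(float))
    return diff > thresh

def ground_speed(shift_px, gsd_m, dt_s):
    """Speed from the apparent shift between bands: shift (px) times ground
    sampling distance (m/px), divided by the band time offset (s)."""
    return shift_px * gsd_m / dt_s

# Illustrative: a vehicle shifted 3 px between bands acquired 0.2 s apart
# on 0.5 m GSD imagery -> 7.5 m/s (27 km/h).
v = ground_speed(3, 0.5, 0.2)

a = np.zeros((5, 5)); a[2, 2] = 10.0
mask = detect_candidates(a, np.zeros((5, 5)), 5.0)   # one candidate pixel
```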
Bhateria, Manisha; Rachumallu, Ramakrishna; Singh, Rajbir; Bhatta, Rabi Sankar
2014-08-01
Erythrocytes (red blood cells [RBCs]) and artificial or synthetic delivery systems such as liposomes and nanoparticles (NPs) are the most investigated carrier systems. Herein, progress from the conventional approach of using RBCs as delivery systems to the novel approach of designing synthetic delivery systems based on RBC properties is reviewed. We aim to highlight both conventional and novel approaches to using RBCs as potential carrier systems. Conventional approaches include two main strategies: i) directly loading therapeutic moieties into RBCs; and ii) coupling them with RBCs. Novel approaches exploit the structural, mechanical and biological properties of RBCs to design synthetic delivery systems through various engineering strategies. Initial attempts included coupling antibodies to liposomes to specifically target RBCs. Knowledge obtained from several studies led to the development of RBC-membrane-derived liposomes (nanoerythrosomes), inspiring the application of RBCs or their structural features in other attractive delivery systems (hydrogels, filomicelles, microcapsules, micro- and nanoparticles) with even greater potential. In conclusion, this review offers a comparative analysis of the various conventional and novel engineering strategies for developing RBC-based drug delivery systems, diversifying their applications in the arena of drug delivery. Despite the challenges ahead, RBC-based delivery systems offer an exciting approach to exploiting biological entities in a multitude of medical applications.
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or "coupling" among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as a linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
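A nearest-neighbour version of the mutual-prediction idea can be sketched in a few lines: if series x carries information about series y, then the y-values at the time indices of x's nearest neighbours should predict y well. This scalar sketch omits the delay-embedding step that a real attractor reconstruction would use:

```python
import numpy as np

def mutual_prediction_error(x, y, k=3):
    """Mean squared error of predicting y[i] from the y-values at the k
    nearest neighbours of x[i] (neighbours taken in x-space). A small error
    relative to an uncoupled control suggests dynamical interdependence."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    errs = []
    for i in range(len(x)):
        d = np.abs(x - x[i])
        d[i] = np.inf                      # exclude the point itself
        nn = np.argsort(d)[:k]
        errs.append((y[i] - y[nn].mean()) ** 2)
    return float(np.mean(errs))

t = np.linspace(0.0, 10.0, 400)
x = np.sin(t)
coupled = mutual_prediction_error(x, x ** 2)                  # y driven by x
control = mutual_prediction_error(x, np.cos(5.0 * t ** 1.3))  # unrelated series
```

With the coupled pair, neighbours in x-space have nearly identical y-values, so the prediction error is far smaller than for the unrelated control series.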
Accumulation of Colloidal Particles in Flow Junctions Induced by Fluid Flow and Diffusiophoresis
NASA Astrophysics Data System (ADS)
Shin, Sangwoo; Ault, Jesse T.; Warren, Patrick B.; Stone, Howard A.
2017-10-01
The flow of solutions containing solutes and colloidal particles in porous media is widely found in systems including underground aquifers, hydraulic fractures, estuarine or coastal habitats, water filtration systems, etc. In such systems, solute gradients occur when there is a local change in the solute concentration. While the effects of solute gradients have been found to be important for many applications, we observe an unexpected colloidal behavior in porous media driven by the combination of solute gradients and the fluid flow. When two flows with different solute concentrations are in contact near a junction, a sharp solute gradient is formed at the interface, which may allow strong diffusiophoresis of the particles directed against the flow. Consequently, the particles accumulate near the pore entrance, rapidly approaching the packing limit. These colloidal dynamics have important implications for the clogging of a porous medium, where particles that are orders of magnitude smaller than the pore width can accumulate and block the pores within a short period of time. We also show that this effect can be exploited as a useful tool for preconcentrating biomolecules for rapid bioassays.
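The diffusiophoretic drift invoked above is commonly modelled as log-sensing, u = Γp ∇ln c; a one-dimensional numerical sketch (the mobility value is an assumed order-of-magnitude placeholder, not a value from the study):

```python
import numpy as np

def diffusiophoretic_velocity(c, dx, gamma_p=1e-10):
    """Log-sensing diffusiophoresis: u = gamma_p * d(ln c)/dx. Particles
    drift up the solute gradient for gamma_p > 0. gamma_p (m^2/s) is an
    assumed placeholder mobility."""
    return gamma_p * np.gradient(np.log(c), dx)

# An exponential solute profile c(x) = exp(x/L) gives a uniform drift gamma_p/L.
x = np.linspace(0.0, 1e-3, 11)   # 1 mm domain
c = np.exp(x / 1e-3)             # decay length L = 1 mm
u = diffusiophoretic_velocity(c, x[1] - x[0])   # ~1e-7 m/s everywhere
```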
NASA Astrophysics Data System (ADS)
Faybishenko, Boris; Witherspoon, Paul A.; Gale, John
How to characterize fluid flow, heat, and chemical transport in geologic media remains a central challenge for geoscientists and engineers worldwide. Investigations of fluid flow and transport within rock relate to such fundamental and applied problems as environmental remediation; nonaqueous phase liquid (NAPL) transport; exploitation of oil, gas, and geothermal resources; disposal of spent nuclear fuel; and geotechnical engineering. It is widely acknowledged that fractures in unsaturated-saturated rock can play a major role in solute transport from the land surface to underlying aquifers. It is also evident that general issues concerning flow and transport predictions in subsurface fractured zones can be resolved in a practical manner by integrating investigations into the physical nature of flow in fractures, developing relevant mathematical models and modeling approaches, and collecting site characterization data. Because of the complexity of flow and transport processes in most fractured rock flow problems, it is not yet possible to develop models directly from first principles. One reason for this is the presence of episodic, preferential water seepage and solute transport, which usually proceed more rapidly than expected from volume-averaged and time-averaged models. However, the physics of these processes is still not fully understood.
Biomedical device prototype based on small scale hydrodynamic cavitation
NASA Astrophysics Data System (ADS)
Ghorbani, Morteza; Sozer, Canberk; Alcan, Gokhan; Unel, Mustafa; Ekici, Sinan; Uvet, Huseyin; Koşar, Ali
2018-03-01
This study presents a biomedical device prototype based on small scale hydrodynamic cavitation. The application of small scale hydrodynamic cavitation and its integration into a biomedical device prototype is offered as an important alternative to other techniques, such as ultrasound therapy, and thus constitutes a local, cheap, and energy-efficient solution for urinary stone therapy and abnormal tissue ablation (e.g., benign prostatic hyperplasia (BPH)). The destructive nature of bubbly, cavitating flows was exploited, and the potential of the prototype was assessed and characterized. Bubbles generated in a small flow-restrictive element (micro-orifice) based on hydrodynamic cavitation were utilized for this purpose. The small bubbly, cavitating flow generator (micro-orifice) was fitted to a small flexible probe, which was actuated with a micromanipulator using fine control. This probe also houses an imaging device for visualization so that the emerging cavitating flow can be locally targeted to the desired spot. In this study, the feasibility of this alternative treatment method and its integration into a device prototype were successfully demonstrated.
NASA Astrophysics Data System (ADS)
Sant, T.; Buhagiar, D.; Farrugia, R. N.
2014-06-01
A new concept utilising floating wind turbines to exploit the low temperatures of deep sea water for space cooling in buildings is presented. The approach is based on offshore hydraulic wind turbines pumping pressurised deep sea water to a centralised plant consisting of a hydro-electric power system coupled to a large-scale sea-water-cooled air conditioning (AC) unit of an urban district cooling network. In order to investigate the potential advantages of this new concept over conventional technologies, a simplified model for performance simulation of a vapour compression AC unit was applied independently to three different systems, with the AC unit operating with (1) a constant flow of sea surface water, (2) a constant flow of sea water consisting of a mixture of surface sea water and deep sea water delivered by a single offshore hydraulic wind turbine and (3) an intermittent flow of deep sea water pumped by a single offshore hydraulic wind turbine. The analysis was based on one year of wind and ambient temperature data for the Central Mediterranean, which is known for its deep waters, warm climate and relatively low wind speeds. The study confirmed that while the present concept is less efficient than conventional turbines utilising grid-connected electrical generators, a significant portion of the losses associated with hydraulic transmission through the pipeline is offset by the extraction of cool deep sea water, which reduces the electricity consumption of urban air-conditioning units.
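The claimed benefit of cooler condenser water can be illustrated with an idealised vapour-compression COP scaled by a fixed second-law efficiency; the temperatures and the 0.5 efficiency factor are illustrative assumptions, not values from the study:

```python
def cop_estimate(t_evap_c, t_cond_c, eta=0.5):
    """Carnot-based chiller COP, eta * T_evap / (T_cond - T_evap) in kelvin,
    with the condensing temperature tracking the cooling-water temperature."""
    t_evap = t_evap_c + 273.15
    t_cond = t_cond_c + 273.15
    return eta * t_evap / (t_cond - t_evap)

cop_surface = cop_estimate(5.0, 30.0)  # condenser on ~25 C surface water
cop_deep = cop_estimate(5.0, 20.0)     # condenser on ~15 C deep sea water
```

Lowering the condensing temperature by 10 C raises the estimated COP markedly, which is the mechanism by which deep-water cooling offsets the hydraulic transmission losses.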
Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui
2015-05-01
The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Sauchyn, David J.; St-Jacques, Jeannine-Marie; Luckman, Brian H.
2015-01-01
Exploitation of the Alberta oil sands, the world’s third-largest crude oil reserve, requires fresh water from the Athabasca River, an allocation of 4.4% of the mean annual flow. This allocation takes into account seasonal fluctuations but not long-term climatic variability and change. This paper examines the decadal-scale variability in river discharge in the Athabasca River Basin (ARB) with (i) a generalized least-squares (GLS) regression analysis of the trend and variability in gauged flow and (ii) a 900-y tree-ring reconstruction of the water-year flow of the Athabasca River at Athabasca, Alberta. The GLS analysis removes confounding transient trends related to the Pacific Decadal Oscillation (PDO) and Pacific North American mode (PNA). It shows long-term declining flows throughout the ARB. The tree-ring record reveals a larger range of flows and severity of hydrologic deficits than those captured by the instrumental records that are the basis for surface water allocation. It includes periods of sustained low flow of multiple decades in duration, suggesting the influence of the PDO and PNA teleconnections. These results together demonstrate that low-frequency variability must be considered in ARB water allocation, which has not been the case. We show that the current and projected surface water allocations from the Athabasca River for the exploitation of the Alberta oil sands are based on an untenable assumption of the representativeness of the short instrumental record. PMID:26392554
Oweiss, Karim G
2006-07-01
This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by a high-density microelectrode array in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip out redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capability and may be a viable replacement for the on-chip spike detection and sorting currently employed to remedy bandwidth limitations. Temporal processing exploits the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
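The temporal half of the scheme, sparsifying the recording with a wavelet transform and keeping only the large coefficients, can be sketched with a single-level orthonormal Haar transform (the abstract does not name a wavelet; Haar is used here purely for brevity):

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar transform: approximation + detail."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def inv_haar_1d(a, d):
    """Exact inverse of haar_1d."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, keep=0.25):
    """Keep only the largest `keep` fraction of coefficients (by magnitude),
    zeroing the rest -- the sparsity-driven compression step."""
    a, d = haar_1d(x)
    coeffs = np.concatenate([a, d])
    cutoff = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    n = len(a)
    return inv_haar_1d(coeffs[:n], coeffs[n:])

x = np.arange(8.0)
lossless = compress(x, keep=1.0)   # keeping everything reconstructs exactly
```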
Schilde, M; Doerner, K F; Hartl, R F
2014-10-01
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches.
Continuous real-time measurement of aqueous cyanide
Rosentreter, Jeffrey J.; Gering, Kevin L.
2007-03-06
This invention provides a method and system capable of the continuous, real-time measurement of low concentrations of aqueous free cyanide (CN) using an on-line, flow through system. The system is based on the selective reactivity of cyanide anions and the characteristically nonreactive nature of metallic gold films, wherein this selective reactivity is exploited as an indirect measurement for aqueous cyanide. In the present invention the dissolution of gold, due to the solubilization reaction with the analyte cyanide anion, is monitored using a piezoelectric microbalance contained within a flow cell.
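The microbalance readout rests on the Sauerbrey relation between frequency shift and areal mass change; a minimal sketch, where the sensitivity factor is the textbook value for a 5 MHz AT-cut crystal and the 2:1 CN:Au stoichiometry assumes Elsner-type dissolution (both are our assumptions, not values from the patent):

```python
AU_MOLAR_MASS_G_MOL = 196.97

def gold_mass_loss(delta_f_hz, c_f=56.6):
    """Sauerbrey relation: areal mass change (ug/cm^2) from the QCM
    frequency shift (Hz); dissolution removes mass, so delta_f > 0.
    c_f ~ 56.6 Hz*cm^2/ug for a 5 MHz AT-cut crystal (assumed)."""
    return delta_f_hz / c_f

def cyanide_reacted(mass_au_ug):
    """Moles of CN- per cm^2, assuming 2 CN- consumed per dissolved Au."""
    return 2.0 * (mass_au_ug * 1e-6) / AU_MOLAR_MASS_G_MOL

mass = gold_mass_loss(56.6)     # 1.0 ug/cm^2 per 56.6 Hz shift
cn = cyanide_reacted(196.97)    # ~2e-6 mol/cm^2
```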
NASA Astrophysics Data System (ADS)
Juanes, R.; Jha, B.
2014-12-01
The coupling between subsurface flow and geomechanical deformation is critical in the assessment of the environmental impacts of groundwater use, underground liquid waste disposal, geologic storage of carbon dioxide, and exploitation of shale gas reserves. In particular, seismicity induced by fluid injection and withdrawal has emerged as a central element of the scientific discussion around subsurface technologies that tap into water and energy resources. Here we present a new computational approach to model coupled multiphase flow and geomechanics of faulted reservoirs. We represent faults as surfaces embedded in a three-dimensional medium by using zero-thickness interface elements to accurately model fault slip under dynamically evolving fluid pressure and fault strength. We incorporate the effect of fluid pressures from multiphase flow in the mechanical stability of faults and employ a rigorous formulation of nonlinear multiphase geomechanics that is capable of handling strong capillary effects. We develop a numerical simulation tool by coupling a multiphase flow simulator with a mechanics simulator, using the unconditionally stable fixed-stress scheme for the sequential solution of two-way coupling between flow and geomechanics. We validate our modeling approach using several synthetic, but realistic, test cases that illustrate the onset and evolution of earthquakes from fluid injection and withdrawal. We also present the application of the coupled flow-geomechanics simulation technology to the post mortem analysis of the Mw=5.1, May 2011 Lorca earthquake in south-east Spain, and assess the potential that the earthquake was induced by groundwater extraction.
Identifying finite-time coherent sets from limited quantities of Lagrangian data.
Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W
2015-08-01
A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
Antiquity versus modern times in hydraulics - a case study
NASA Astrophysics Data System (ADS)
Stroia, L.; Georgescu, S. C.; Georgescu, A. M.
2010-08-01
Water supply and water management in Antiquity reveal more than the modern world might imagine about how people of that period thought about, and exploited, the resources they had, aiming to develop and improve their society and their own lives. This paper points out examples of how they handled different situations and how they managed to cope with the growing urban population by adapting or improving their water supply systems. The paper emphasizes the engineering contribution of Rome and the Roman Empire, mainly in the capital but also in the provinces, such as the present-day territory of France, by analysing some aqueducts from the point of view of modern hydraulic engineering. A third-order polynomial regression is proposed to compute the water flow rate based on the flow cross-sectional area measured in quinaria. The paper also highlights contradictions between what we thought we knew about Ancient Roman civilization and what can actually be proven, whether by a modern engineering approach, a documentary approach, or by common sense where neither of the above can be used. It is certain that the world we live in is the heritage of Greco-Roman culture, and we therefore owe an acknowledgement of their contribution, especially considering the limited knowledge and resources of that time.
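The proposed third-order fit can be sketched with NumPy; the calibration points below are purely illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical calibration: conduit cross-sectional area (in quinariae)
# against measured discharge (m^3/day). Values are illustrative only.
area_q = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
flow_m3_day = np.array([40.0, 85.0, 180.0, 390.0, 820.0, 1700.0, 3600.0])

coeffs = np.polyfit(area_q, flow_m3_day, 3)   # third-order polynomial fit
predict = np.poly1d(coeffs)
q25 = float(predict(25.0))   # estimated discharge for a 25-quinaria conduit
```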
NASA Astrophysics Data System (ADS)
Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique
2017-05-01
Optimisation in changing environments is a challenging research topic since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches that have addressed dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, due to the difficulties associated with the control and measurement of such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to adapt quickly after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, broaden the search area, and speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
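The two-population alternation can be sketched as follows; the population sizes, mutation scales, and test function are our illustrative choices, not the paper's configuration:

```python
import random

def two_pop_search(fitness, dim=5, pop_size=20, generations=60, seed=1):
    """Two equally sized populations: each generation one explores (large
    random mutations) while the other exploits (small mutations around its
    best member); the roles alternate every generation, and a memory keeps
    the best solution seen so far."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    pops = [[rand_ind() for _ in range(pop_size)] for _ in range(2)]
    memory = min((ind for pop in pops for ind in pop), key=fitness)
    for g in range(generations):
        explorer, exploiter = pops[g % 2], pops[(g + 1) % 2]
        explorer[:] = [[v + rng.gauss(0.0, 1.0) for v in ind]
                       for ind in explorer]
        elite = min(exploiter, key=fitness)
        exploiter[:] = [[v + rng.gauss(0.0, 0.05) for v in elite]
                        for _ in range(pop_size)]
        memory = min(memory, min(explorer + exploiter, key=fitness),
                     key=fitness)
    return memory

sphere = lambda v: sum(x * x for x in v)   # toy minimisation target
best = two_pop_search(sphere)
```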
Improving the global efficiency in small hydropower practice
NASA Astrophysics Data System (ADS)
Razurel, P.; Gorla, L.; Crouzy, B.; Perona, P.
2015-12-01
The global increase in energy production from renewable sources has seen river exploitation for small hydropower plants grow considerably in the last decade. River intakes used to divert water from the main course to the power plant are at the base of this practice. A key issue concerns finding innovative concepts for both the design and management of such structures in order to improve on classic operational rules. Among these, the Minimal Flow Release (MFR) concept has long been used in spite of its environmental inconsistency. In this work, we show that the economic and ecological efficiency of diverting water for energy production in small hydropower plants can be improved towards sustainability by engineering a novel class of flow-redistribution policies. We use the mathematical form of the Fermi-Dirac statistical distribution to define non-proportional dynamic flow-redistribution rules, which broadens the spectrum of dynamic flow releases based on proportional redistribution. The theoretical background as well as the economic interpretation is presented and applied to three case studies in order to systematically test the global performance of such policies. From the numerical simulations, a Pareto frontier emerges in the economic vs. environmental efficiency plot, which shows that non-proportional distribution policies improve both efficiencies with respect to those obtained from traditional MFR and proportional policies. This picture also holds for long-term climatic scenarios affecting water availability and the natural flow regime. In a time of intense and increasing exploitation close to resource saturation, preserving natural river reaches requires abandoning inappropriate static release policies in favour of non-proportional ones towards a sustainable use of the water resource.
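A Fermi-Dirac-shaped redistribution rule can be sketched as below; this is one plausible form consistent with the description, with parameter values chosen for illustration only:

```python
import math

def env_fraction(q, mu, theta):
    """Fermi-Dirac-shaped redistribution rule (one plausible form, assumed
    here): the fraction of inflow q released to the river is ~1 for q << mu
    and decays smoothly for q >> mu; theta sets the transition width."""
    return 1.0 / (1.0 + math.exp((q - mu) / theta))

def split_flow(q, mu=10.0, theta=2.0):
    """Non-proportional split of the inflow q (arbitrary units)."""
    f = env_fraction(q, mu, theta)
    return f * q, (1.0 - f) * q   # (river release, diversion to plant)

river_lo, plant_lo = split_flow(2.0)    # low inflow: river keeps nearly all
river_hi, plant_hi = split_flow(30.0)   # high inflow: most goes to the plant
```

Unlike a fixed MFR or a proportional split, the released fraction here depends non-linearly on the inflow, which is the defining feature of the policy class the paper studies.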
Generalized laws of thermodynamics in the presence of correlations.
Bera, Manabendra N; Riera, Arnau; Lewenstein, Maciej; Winter, Andreas
2017-12-19
The laws of thermodynamics, despite their wide range of applicability, are known to break down when systems are correlated with their environments. Here we generalize thermodynamics to physical scenarios that allow the presence of correlations, including those where strong correlations are present. We exploit the connection between information and physics, and introduce a consistent redefinition of heat dissipation by systematically accounting for the information flow from system to bath in terms of the conditional entropy. As a consequence, the formula for the Helmholtz free energy is accordingly modified. Such a remedy not only fixes the apparent violations of Landauer's erasure principle and the second law due to anomalous heat flows, but also leads to a generally valid reformulation of the laws of thermodynamics. In this information-theoretic approach, correlations between system and environment store work potential. Thus, in this view, the apparent anomalous heat flows are refrigeration processes driven by such potentials.
Solving Partial Differential Equations in a data-driven multiprocessor environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.
1988-12-31
Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research into the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
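The Jacobi method named above updates every grid point from the previous iterate only, which is what makes it natural to express as a data-flow graph; a minimal sequential sketch for Laplace's equation:

```python
import numpy as np

def jacobi_laplace(u0, iters=200):
    """Jacobi relaxation for Laplace's equation on a 2-D grid with fixed
    boundary values: each sweep replaces every interior point by the average
    of its four neighbours, using only the previous iterate (no intra-sweep
    dependencies, unlike Gauss-Seidel)."""
    u = u0.astype(float).copy()
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Unit-temperature left wall, zero elsewhere: the iterate decays smoothly
# across the slab, with values nearest the hot wall largest.
grid = np.zeros((20, 20))
grid[:, 0] = 1.0
sol = jacobi_laplace(grid)
```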
Three-Dimensional Printing Based Hybrid Manufacturing of Microfluidic Devices.
Alapan, Yunus; Hasan, Muhammad Noman; Shen, Richang; Gurkan, Umut A
2015-05-01
Microfluidic platforms offer revolutionary and practical solutions to challenging problems in biology and medicine. Even though traditional micro/nanofabrication technologies expedited the emergence of the microfluidics field, recent advances in additive manufacturing hold significant potential for single-step, stand-alone microfluidic device fabrication. One such technology, which holds significant promise for next generation microsystem fabrication is three-dimensional (3D) printing. Presently, building 3D printed stand-alone microfluidic devices with fully embedded microchannels for applications in biology and medicine has the following challenges: (i) limitations in achievable design complexity, (ii) need for a wider variety of transparent materials, (iii) limited z-resolution, (iv) absence of extremely smooth surface finish, and (v) limitations in precision fabrication of hollow and void sections with extremely high surface area to volume ratio. We developed a new way to fabricate stand-alone microfluidic devices with integrated manifolds and embedded microchannels by utilizing a 3D printing and laser micromachined lamination based hybrid manufacturing approach. In this new fabrication method, we exploit the minimized fabrication steps enabled by 3D printing, and reduced assembly complexities facilitated by laser micromachined lamination method. The new hybrid fabrication method enables key features for advanced microfluidic system architecture: (i) increased design complexity in 3D, (ii) improved control over microflow behavior in all three directions and in multiple layers, (iii) transverse multilayer flow and precisely integrated flow distribution, and (iv) enhanced transparency for high resolution imaging and analysis. Hybrid manufacturing approaches hold great potential in advancing microfluidic device fabrication in terms of standardization, fast production, and user-independent manufacturing.
Three-Dimensional Printing Based Hybrid Manufacturing of Microfluidic Devices
Shen, Richang; Gurkan, Umut A.
2016-01-01
PMID:27512530
NASA Astrophysics Data System (ADS)
Sorrentino, Marco; Pianese, Cesare
The exploitation of an SOFC-system model to define and test control and energy management strategies is presented. This work is motivated by the increasing interest paid to SOFC technology by industries and governments due to its highly appealing potential in terms of energy savings, fuel flexibility, cogeneration, low-pollution and low-noise operation. The core part of the model is the SOFC stack, surrounded by a number of auxiliary devices, i.e. air compressor, regulating pressure valves, heat exchangers, pre-reformer and post-burner. Due to the slow thermal dynamics of SOFCs, a set of three lumped-capacity models describes the dynamic response of the fuel cell and heat exchangers to any operation change. The dynamic model was used to develop low-level control strategies aimed at guaranteeing targeted performance while keeping the stack temperature derivative within safe limits to reduce stack degradation due to thermal stresses. Control strategies for both cold-start and warmed-up operations were implemented by combining feedforward and feedback approaches. In particular, the main cold-start control action relies on the precise regulation of methane flow towards the anode and post-burner via by-pass valves; this strategy is combined with a cathode air-flow adjustment to achieve tight control of both the stack temperature gradient and the warm-up time. Results are presented to show the potential of the proposed model-based approach to: (i) serve as a support to control strategy development and (ii) solve the trade-off between fast SOFC cold-start and avoidance of damage caused by thermal stress.
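The warm-up logic described above, a feedforward command with a feedback trim while capping the stack-temperature derivative, can be sketched in a few lines. This is a minimal illustration, not the authors' model: the first-order thermal lag, the gain, and the 0.5 K/s rate limit are all hypothetical numbers.

```python
def simulate_warmup(t_target=1073.0, t0=293.0, dt=1.0, steps=3600,
                    tau=600.0, max_rate=0.5, kp=0.02):
    """Rate-limited warm-up of a lumped-capacity stack temperature (K).

    Hypothetical model: dT/dt = (u - T)/tau, where u is the commanded
    inlet gas temperature, set by feedforward (the target) plus a
    proportional feedback trim. The rate clamp mimics the thermal-stress
    protection discussed in the abstract.
    """
    T, history = t0, []
    for _ in range(steps):
        err = t_target - T
        u = t_target + kp * err                     # feedforward + P feedback
        rate = (u - T) / tau
        rate = max(-max_rate, min(max_rate, rate))  # cap temperature derivative
        T += rate * dt
        history.append(T)
    return history

hist = simulate_warmup()
```

With these (made-up) parameters, the stack heats at the rate limit early on and then settles toward the target without ever exceeding the prescribed derivative.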
Flow in the Deep Mantle from Seismic Anisotropy: Progress and Prospects
NASA Astrophysics Data System (ADS)
Long, M. D.
2017-12-01
Observations of seismic anisotropy, or the directional dependence of seismic wavespeeds, provide some of the most direct constraints on the pattern of flow in the Earth's mantle. In particular, as our understanding of crystallographic preferred orientation (CPO) of olivine aggregates under a range of deformation conditions has improved, our ability to exploit observations of upper mantle anisotropy has led to fundamental discoveries about the patterns of flow in the upper mantle and the drivers of that flow. It has been a challenge, however, to develop a similar framework for understanding flow in the deep mantle (transition zone, uppermost lower mantle, and lowermost mantle), even though there is convincing observational evidence for seismic anisotropy at these depths. Recent progress on the observational front has allowed for an increasingly detailed view of mid-mantle anisotropy (transition zone and uppermost lower mantle), particularly in subduction systems, which may eventually lead to a better understanding of mid-mantle deformation and the dynamics of slab interaction with the surrounding mid-mantle. New approaches to the observation and modeling of lowermost mantle anisotropy, in combination with constraints from mineral physics, are progressing towards interpretive frameworks that allow for the discrimination of different mantle flow geometries in different regions of D". In particular, observational strategies that involve the use of multiple types of body wave phases sampled over a range of propagation azimuths enable detailed forward modeling approaches that can discriminate between different mechanisms for D" anisotropy (e.g., CPO of post-perovskite, bridgmanite, or ferropericlase, or shape preferred orientation of partial melt) and identify plausible anisotropic orientations. We have recently begun to move towards a full waveform modeling approach in this work, which allows for a more accurate simulation of seismic wave propagation.
Ongoing improvements in seismic observational strategies, experimental and computational mineral physics, and geodynamic modeling approaches are leading to new avenues for understanding flow in the deep mantle through the study of seismic anisotropy.
Tuning Fractures With Dynamic Data
NASA Astrophysics Data System (ADS)
Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao
2018-02-01
Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may help constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus rendering desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized as a structured grid, while the fractures, handled as planes, are inserted into the matrix grid. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (by updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating.
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity is set up to examine the performance of the proposed approach.
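As a sketch of why a Hough-style parameterization makes fracture updating continuous: a 2-D line fracture can be described by the classic (rho, theta) line parameters plus a centre offset along the line and a length, so perturbing any of these numbers moves, rotates, or stretches the segment without touching the grid. The exact parameter set below is an assumption for illustration; the paper's Hough method also carries additional quantities such as per-fracture existence indicators.

```python
import math

def hough_to_segment(rho, theta, s, length):
    """Map continuous Hough-style parameters to a 2-D fracture segment.

    rho, theta define the carrier line x*cos(theta) + y*sin(theta) = rho;
    s slides the segment centre along that line; length sets its extent.
    (Hypothetical parameterization, chosen for illustration only.)
    """
    # foot of the perpendicular from the origin to the carrier line
    fx, fy = rho * math.cos(theta), rho * math.sin(theta)
    # unit direction along the line
    dx, dy = -math.sin(theta), math.cos(theta)
    cx, cy = fx + s * dx, fy + s * dy          # segment centre
    half = 0.5 * length
    return ((cx - half * dx, cy - half * dy),
            (cx + half * dx, cy + half * dy))

# a vertical fracture on the line x = 2, centred at (2, 1), of length 4
p1, p2 = hough_to_segment(rho=2.0, theta=0.0, s=1.0, length=4.0)
```

Because all four parameters are continuous, an iterative inverse method can nudge them freely while the forward EDFM simply re-inserts the resulting planes into the fixed matrix grid.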
Pedagogical Approaches for Technology-Integrated Science Teaching
ERIC Educational Resources Information Center
Hennessy, Sara; Wishart, Jocelyn; Whitelock, Denise; Deaney, Rosemary; Brawn, Richard; la Velle, Linda; McFarlane, Angela; Ruthven, Kenneth; Winterbottom, Mark
2007-01-01
The two separate projects described have examined how teachers exploit computer-based technologies in supporting learning of science at secondary level. This paper examines how pedagogical approaches associated with these technological tools are adapted to both the cognitive and structuring resources available in the classroom setting. Four…
McGivern, Jered V; Ebert, Allison D
2014-04-01
In order for the pharmaceutical industry to maintain a constant flow of novel drugs and therapeutics into the clinic, compounds must be thoroughly validated for safety and efficacy in multiple biological and biochemical systems. Pluripotent stem cells, because of their ability to develop into any cell type in the body and recapitulate human disease, may be an important cellular system to add to the drug development repertoire. This review will discuss some of the benefits of using pluripotent stem cells for drug discovery and safety studies as well as some of the recent applications of stem cells in drug screening studies. We will also address some of the hurdles that need to be overcome in order to make stem cell-based approaches an efficient and effective tool in the quest to produce clinically successful drug compounds. Copyright © 2013 Elsevier B.V. All rights reserved.
An aerodynamic assessment of various supersonic fighter airplanes based on Soviet design concepts
NASA Technical Reports Server (NTRS)
Spearman, M. L.
1983-01-01
The aerodynamic, stability, and control characteristics of several supersonic fighter airplane concepts were assessed. The configurations include fixed-wing airplanes having delta wings, swept wings, and trapezoidal wings, and variable wing-sweep airplanes. Each concept employs aft tail controls. The concepts vary from lightweight, single engine, air superiority, point interceptor, or ground attack types to larger twin-engine interceptor and reconnaissance designs. Results indicate that careful application of the transonic or supersonic area rule can provide nearly optimum shaping for minimum drag for a specified Mach number requirement. Through the proper location of components and the exploitation of interference flow fields, the concepts provide linear pitching moment characteristics, high control effectiveness, and reasonably small variations in aerodynamic center location with a resulting high potential for maneuvering capability. By careful attention to component shaping and location and through the exploitation of local flow fields, favorable roll-to-yaw ratios may result and a high degree of directional stability can be achieved.
Advances in multiphase flow measurements using magnetic resonance relaxometry
NASA Astrophysics Data System (ADS)
Kantzas, Apostolos; Kryuchkov, Sergey; Chandrasekaran, Blake
2009-02-01
When it comes to the measurement of bitumen and water content as they are produced from thermally exploited reservoirs (cyclic steam stimulation or steam-assisted gravity drainage), most of the current tools available in the market fail. This was demonstrated previously when our group introduced the first concept of a magnetic-resonance-based water-cut meter. The use of magnetic resonance as a potential tool for fluid-cut metering from thermally produced heavy oil and bitumen reservoirs is revisited. First, a review of the work to date is presented; our recent approach to tackling this problem follows. A patented process is coupled with a patented pipe design that can be used inside a magnetic field and can capture fluids up to 260°C and 4.2 MPa. The paper describes the technical advances towards this goal and offers a first glimpse of field data from an actual thermal facility for bitumen production. The paper also addresses an approach for converting the current discrete measurement device into a continuous measurement system. Preliminary results for this new concept are also presented.
Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R.; Han, InTaek; Yun, Dong-Jin
2015-01-01
A simple, scalable, non-lithographic, technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is regarded as adverse to roll-coating processes for fabricating uniform films, we especially use the effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag reduction effects of the fabricated films through dynamic flow measurements. PMID:26490133
Supplier behaviour and public contracting in the English agency nursing market.
Lonsdale, Chris; Kirkpatrick, Ian; Hoque, Kim; de Ruyter, Alex
2010-01-01
The worldwide expansion in the use of private firms to deliver public services and infrastructure has prompted a substantial literature on public sector contract and relationship management. This literature is currently dominated by the notion that supplier relationships should be based upon trust. Less prominent are more sceptical approaches that emphasize the need to assiduously manage potential supplier exploitation and opportunism. This article addresses this imbalance by focusing upon the recent experience of the English National Health Service (NHS) in its dealings with its nursing agencies. Between 1997 and 2001, the NHS was subjected to considerable exploitation and opportunism. This forced managers to adopt a supply strategy based upon an assiduous use of e-auctions, framework agreements and quality audits. The article assesses the effectiveness of this strategy and reflects upon whether a more defensive approach to contract and relationship management offers a viable alternative to one based upon trust.
Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic
2015-04-01
Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality and classifying catchments, among other purposes. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions are methods that regress each quantile separately on catchment attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and it is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemes, 1986) to a set of 600 French catchments. Half of the catchments are considered as gauged and used to calibrate the regression and compute its residuals. The QS approach consists in a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters and finally exploits the spatial correlation of the residuals. The innovation is the exploitation of parameter continuity across the quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24
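A minimal illustration of the two ingredients the abstract refers to: an empirical FDC computed from a streamflow record, and the consistency requirement that reconstructed quantiles be non-decreasing. The post-hoc monotonicity repair shown here is only for illustration; the QS approach enforces consistency by construction through parameter continuity across quantiles.

```python
import numpy as np

def empirical_fdc(flows, probs):
    """Quantiles of a streamflow record at given non-exceedance probabilities."""
    return np.quantile(np.asarray(flows, dtype=float), probs)

def enforce_increasing(quantiles):
    """Repair reconstructed quantiles so the FDC is non-decreasing
    (a crude stand-in for the consistency the QS scheme guarantees)."""
    return np.maximum.accumulate(np.asarray(quantiles, dtype=float))

flows = [0.2, 1.5, 0.8, 3.1, 2.4, 0.5, 1.1]     # toy daily flows, m3/s
fdc = empirical_fdc(flows, [0.1, 0.5, 0.9])
repaired = enforce_increasing([1.0, 0.9, 1.4, 1.3])  # artifact-laden reconstruction
```

The second call shows the artifact the paper describes: a regression fitted per quantile can return 0.9 after 1.0, which no valid FDC can do.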
Ecosystem-based fisheries management requires a change to the selective fishing philosophy
Zhou, Shijie; Smith, Anthony D. M.; Punt, André E.; Richardson, Anthony J.; Gibbs, Mark; Fulton, Elizabeth A.; Pascoe, Sean; Bulman, Catherine; Bayliss, Peter; Sainsbury, Keith
2010-01-01
Globally, many fish species are overexploited, and many stocks have collapsed. This crisis, along with increasing concerns over flow-on effects on ecosystems, has caused a reevaluation of traditional fisheries management practices, and a new ecosystem-based fisheries management (EBFM) paradigm has emerged. As part of this approach, selective fishing is widely encouraged in the belief that nonselective fishing has many adverse impacts. In particular, incidental bycatch is seen as wasteful and a negative feature of fishing, and methods to reduce bycatch are implemented in many fisheries. However, recent advances in fishery science and ecology suggest that a selective approach may also result in undesirable impacts both to fisheries and marine ecosystems. Selective fishing applies one or more of the “6-S” selections: species, stock, size, sex, season, and space. However, selective fishing alters biodiversity, which in turn changes ecosystem functioning and may affect fisheries production, hindering rather than helping achieve the goals of EBFM. We argue here that a “balanced exploitation” approach might alleviate many of the ecological effects of fishing by avoiding intensive removal of particular components of the ecosystem, while still supporting sustainable fisheries. This concept may require reducing exploitation rates on certain target species or groups to protect vulnerable components of the ecosystem. Benefits to society could be maintained or even increased because a greater proportion of the entire suite of harvested species is used. PMID:20435916
NASA Astrophysics Data System (ADS)
Carrière, Simon D.; Chalikakis, Konstantinos; Danquigny, Charles; Davi, Hendrik; Mazzilli, Naomi; Ollivier, Chloé; Emblanch, Christophe
2016-11-01
Some portions of the porous rock matrix in the karst unsaturated zone (UZ) can contain large volumes of water and play a major role in water flow regulation. The essential results of a local-scale study conducted in 2011 and 2012 above the Low Noise Underground Laboratory (LSBB - Laboratoire Souterrain à Bas Bruit) at Rustrel, southeastern France, are presented. Previous research revealed the geological structure and water-related features of the study site and illustrated the feasibility of specific hydrogeophysical measurements. In this study, the focus is on hydrodynamics at the seasonal and event timescales. Magnetic resonance sounding (MRS) measured a high water content (more than 10 %) in a large volume of rock. Such a large volume of water cannot be stored in fractures and conduits within the UZ. MRS was also used to measure the seasonal variation of water stored in the karst UZ. A process-based model was developed to simulate the effect of vegetation on groundwater recharge dynamics. In addition, electrical resistivity tomography (ERT) monitoring was used to assess preferential water pathways during a rain event. This study demonstrates the major influence of water flow within the porous rock matrix on the UZ hydrogeological functioning at both the local (LSBB) and regional (Fontaine de Vaucluse) scales. By taking into account the role of the porous matrix in water flow regulation, these findings may significantly improve karst groundwater hydrodynamic modelling, exploitation, and sustainable management.
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.
1993-01-01
The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.
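As a toy example of exploiting two-dimensional structural links losslessly (a generic predictive-coding sketch, not the authors' scheme): predict each pixel from its left and upper neighbours and keep only the residuals, which an entropy coder can then compress compactly; the original image is recovered exactly from the residuals.

```python
import numpy as np

def residuals_2d(img):
    """Residuals after predicting each pixel from its left and upper
    neighbours (mean predictor) -- a simple use of 2-D structural links.
    The residual array is what an entropy coder would then compress."""
    img = np.asarray(img, dtype=int)
    pred = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            left = img[i, j - 1] if j > 0 else 0
            up = img[i - 1, j] if i > 0 else 0
            pred[i, j] = (left + up) // 2
    return img - pred

def reconstruct(res):
    """Invert residuals_2d exactly -- the scheme is lossless, since each
    pixel's predictor depends only on already-reconstructed neighbours."""
    res = np.asarray(res, dtype=int)
    img = np.zeros_like(res)
    h, w = res.shape
    for i in range(h):
        for j in range(w):
            left = img[i, j - 1] if j > 0 else 0
            up = img[i - 1, j] if i > 0 else 0
            img[i, j] = res[i, j] + (left + up) // 2
    return img

img = np.arange(16).reshape(4, 4)
res = residuals_2d(img)
```

On smooth image data the residuals cluster near zero, which is exactly what makes them cheaper to entropy-code than the raw pixels.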
Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.
Singh, Anurag; Dandapat, Samarendra
2017-04-01
In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploiting both types of correlations is very important for good performance in CS-based ECG telemonitoring systems. However, most of the existing CS-based works exploit only one of the correlations, which results in suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.
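A toy numpy sketch of block-sparse recovery, much simpler than the sparse Bayesian learning the paper uses: with a single active coefficient block, testing each candidate block support by least squares identifies the true block, since only it can explain the compressed measurements exactly. All sizes and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, bsize = 20, 10, 4                  # signal length, measurements, block size
x = np.zeros(n)
x[8:12] = [1.0, -2.0, 0.5, 3.0]          # one active block (block-sparse signal)
Phi = rng.standard_normal((m, n))        # random sensing matrix
y = Phi @ x                              # compressed measurements (m < n)

best = None
for b in range(0, n, bsize):             # try each candidate block support
    sub = Phi[:, b:b + bsize]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    resid = np.linalg.norm(y - sub @ coef)
    if best is None or resid < best[0]:
        best = (resid, b, coef)

x_hat = np.zeros(n)
x_hat[best[1]:best[1] + bsize] = best[2]  # recovered block-sparse signal
```

This brute-force support search only scales to a handful of blocks; Bayesian and greedy methods exist precisely to avoid it, but the block-sparsity prior being exploited is the same.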
Formation of Nanoparticle Stripe Patterns via Flexible-Blade Flow Coating
NASA Astrophysics Data System (ADS)
Lee, Dong Yun; Kim, Hyun Suk; Parkos, Cassandra; Lee, Cheol Hee; Emrick, Todd; Crosby, Alfred
2011-03-01
We present the controlled formation of nanostripe patterns of nanoparticles on underlying substrates by flexible-blade flow coating. This technique exploits the combination of convective flow of confined nanoparticle solutions and programmed translation of a substrate to fabricate nanoparticle-polymer line assemblies with width below 300 nm, thickness of a single nanoparticle, and lengths exceeding 10 cm. We demonstrate how the incorporation of a flexible blade into this technique allows capillary forces to self-regulate the uniformity of convective flow processes across large lateral lengths. Furthermore, we exploit solvent mixture dynamics to enhance intra-assembly particle packing and dimensional range. This facile technique opens up a new paradigm for integration of nanoscale patterns over large areas for various applications.
Evaluation of hydrochemical changes due to intensive aquifer exploitation: case studies from Mexico.
Esteller, M V; Rodríguez, R; Cardona, A; Padilla-Sánchez, L
2012-09-01
The impact of intensive aquifer exploitation has been observed in numerous places around the world. Mexico is a representative example of this problem. In 2010, 101 out of the 653 aquifers recognized in the country, showed negative social, economic, and environmental effects related to intensive exploitation. The environmental effects include, among others, groundwater level decline, subsidence, attenuation, and drying up of springs, decreased river flow, and deterioration of water quality. This study aimed at determining the hydrochemical changes produced by intensive aquifer exploitation and highlighting water quality modifications, taking as example the Valle de Toluca, Salamanca, and San Luis Potosi aquifers in Mexico's highlands. There, elements such as fluoride, arsenic, iron, and manganese have been detected, resulting from the introduction of older groundwater with longer residence times and distinctive chemical composition (regional flows). High concentrations of other elements such as chloride, sulfate, nitrate, and vanadium, as well as pathogens, all related to anthropogenic pollution sources (wastewater infiltration, irrigation return flow, and atmospheric pollutants, among others) were also observed. Some of these elements (nitrate, fluoride, arsenic, iron, and manganese) have shown concentrations above Mexican and World Health Organization drinking water standards.
Adaptive zooming in X-ray computed tomography.
Dabravolski, Andrei; Batenburg, Kees Joost; Sijbers, Jan
2014-01-01
In computed tomography (CT), the source-detector system commonly rotates around the object in a circular trajectory. Such a trajectory does not allow the detector to be fully exploited when scanning elongated objects. The aim is to increase the spatial resolution of the reconstructed image by optimal zooming during scanning. A new approach is proposed, in which the full width of the detector is exploited for every projection angle. This approach is based on the use of prior information about the object's convex hull to move the source as close as possible to the object, while avoiding truncation of the projections. Experiments show that the proposed approach can significantly improve reconstruction quality, producing reconstructions with smaller errors and revealing more details in the object. The proposed approach can lead to more accurate reconstructions and increased spatial resolution in the object compared to the conventional circular trajectory.
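The geometric core of the zooming idea can be sketched: for a fan-beam geometry with a flat detector, the closest source position that avoids truncation follows from requiring the tangent lines to the object's bounding circle to stay inside the fan. The flat-detector geometry and the circle summary of the convex hull are simplifying assumptions, not the paper's exact per-angle formulation.

```python
import math

def closest_source_distance(detector_width, sdd, object_radius):
    """Closest source-to-centre distance that keeps the object's bounding
    circle inside the fan beam, i.e. avoids projection truncation.

    Assumed geometry (for illustration): flat detector of the given width
    at source-detector distance `sdd`; the object's convex hull at this
    view is summarized by a circle of `object_radius`.
    """
    half_fan = math.atan(0.5 * detector_width / sdd)   # fan half-angle
    # tangent condition: asin(r / d) <= half_fan  =>  d >= r / sin(half_fan)
    return object_radius / math.sin(half_fan)

# toy numbers: 400 mm detector, 1000 mm source-detector distance, 50 mm object
d = closest_source_distance(detector_width=400.0, sdd=1000.0, object_radius=50.0)
```

For an elongated object the effective radius varies with the view angle, which is why moving the source per projection angle (as the abstract describes) yields extra magnification over a fixed circular orbit.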
Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie
2018-05-01
In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models, and then generate a "volume-phase" curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than conventional approaches that calculate metrics by area, and allows for quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into the individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential avenue for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
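The "volume-phase" metrics can be illustrated directly: given one LAA volume per cardiac phase, the ejection fraction follows from the curve's extremes and the filling/emptying fluxes from its per-phase increments. The exact definitions below are plausible assumptions for illustration, not necessarily the paper's formulas.

```python
def laa_metrics(volumes):
    """Dynamic metrics from a 'volume-phase' curve (one volume per phase).

    Assumed definitions: ejection fraction = (Vmax - Vmin) / Vmax;
    filling flux = sum of positive per-phase volume changes over the
    (cyclic) curve; emptying flux = magnitude of the negative ones.
    """
    vmax, vmin = max(volumes), min(volumes)
    ejection_fraction = (vmax - vmin) / vmax
    # per-phase volume changes, wrapping around the cardiac cycle
    deltas = [volumes[(i + 1) % len(volumes)] - v for i, v in enumerate(volumes)]
    filling_flux = sum(d for d in deltas if d > 0)
    emptying_flux = -sum(d for d in deltas if d < 0)
    return ejection_fraction, filling_flux, emptying_flux

# hypothetical 5-phase LAA volume curve, in mL
ef, fill, empty = laa_metrics([8.0, 10.0, 9.0, 6.0, 7.0])
```

Over a closed cycle the filling and emptying fluxes necessarily balance, which is a useful sanity check on a segmented volume curve.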
Energy harvesting by means of flow-induced vibrations on aerospace vehicles
NASA Astrophysics Data System (ADS)
Li, Daochun; Wu, Yining; Da Ronch, Andrea; Xiang, Jinwu
2016-10-01
This paper reviews the design, implementation, and demonstration of energy harvesting devices that exploit flow-induced vibrations as the main source of energy. Starting with a presentation of various concepts of energy harvesters that are designed to benefit from a general class of flow-induced vibrations, specific attention is then given to those technologies that may offer, today or in the near future, a potential benefit to extend the operational capabilities and to monitor critical parameters of unmanned aerial vehicles. Various phenomena characterized by flow-induced vibrations are discussed, including limit cycle oscillations of plates and wing sections, vortex-induced and galloping oscillations of bluff bodies, vortex-induced vibrations of downstream structures, and atmospheric turbulence and gusts. It was found that linear or linearized modeling approaches are commonly employed to support the design phase of energy harvesters. As a result, highly nonlinear and coupled phenomena that characterize flow-induced vibrations are neglected in the design process. The authors encourage a shift in the current design paradigm: treating coupled nonlinear phenomena, and the modeling tools adequate to analyze them, as a design opportunity rather than a design limitation. Special emphasis is placed on identifying designs and implementations applicable to aircraft configurations. Application fields of flow-induced-vibration-based energy harvesters are discussed, including power supply for wireless sensor networks and simultaneous energy harvesting and control. A large body of work on energy harvesters is included in this review. Whereas most of the references claim direct applications to unmanned aerial vehicles, it is apparent that, in most of the cases presented, the working principles and characteristics of the energy harvesters are incompatible with any aerospace application.
Finally, the challenges that hold back the integration of energy harvesting technologies in the aerospace field are discussed.
Viscid-inviscid interaction associated with incompressible flow past wedges at high Reynolds number
NASA Technical Reports Server (NTRS)
Warpinski, N. R.; Chow, W. L.
1977-01-01
An analytical method is suggested for the study of the viscid-inviscid interaction associated with incompressible flow past wedges with arbitrary angles. It is shown that the determination of the nearly constant pressure (base pressure) prevailing within the near wake is really the heart of the problem, and this pressure can only be established from these interactive considerations. The basic free streamline flow field is established through two discrete parameters which adequately describe the inviscid flow around the body and the wake. The viscous flow processes, such as the boundary layer buildup, turbulent jet mixing, and recompression, are individually analyzed and attached to the inviscid flow in the sense of the boundary layer concept. The interaction between the viscous and inviscid streams is properly displayed by the fact that the aforementioned discrete parameters needed for the inviscid flow are determined by the viscous flow condition at the point of reattachment. It is found that the reattachment point behaves as a saddle point singularity for the system of equations describing the recompressive viscous flow processes, and this behavior is exploited for the establishment of the overall flow field. Detailed results such as the base pressure, pressure distributions on the wedge, and the geometry of the wake are determined as functions of the wedge angle.
Implementation of a 3d numerical model of a folded multilayer carbonate aquifer
NASA Astrophysics Data System (ADS)
Di Salvo, Cristina; Guyennon, Nicolas; Romano, Emanuele; Bruna Petrangeli, Anna; Preziosi, Elisabetta
2016-04-01
The main objective of this research is to present a case study of the numerical model implementation of a complex carbonate, structurally folded aquifer, with a finite difference, porous equivalent model. The case study aquifer (which extends over 235 km2 in the Apennine chain, Central Italy) provides a long term average of 3.5 m3/s of good quality groundwater to the surface river network, sustaining the minimum vital flow, and it is planned to be exploited in the next years for public water supply. In the downstream part of the river in the study area, a "Site of Community Importance" includes the Nera River for its valuable aquatic fauna. However, the possible negative effects of the foreseen exploitation on groundwater-dependent ecosystems are a great concern, and model-grounded scenarios are needed. This multilayer aquifer was conceptualized as five hydrostratigraphic units: three main aquifers (the uppermost unconfined, the central and the deepest partly confined) are separated by two locally discontinuous aquitards. The Nera River cuts through the two upper aquifers and acts as the main natural sink for groundwater. An equivalent porous medium approach was chosen. The complex tectonic structure of the aquifer requires several steps in defining the conceptual model; the presence of strongly dipping layers with very heterogeneous hydraulic conductivity results in different thicknesses of the saturated portions. Aquifers can have both unconfined and confined zones; drying and rewetting must be allowed when considering recharge/discharge cycles. All these characteristics can be included in the conceptual and numerical model; however, since flow and head targets are scarce, over-parametrization of the model must be avoided.
Following the principle of parsimony, three steady state numerical models were developed, starting from a simple model and then adding complexity: 2D (single layer), QUASI-3D (with a leakage term simulating flow through the aquitards) and FULL-3D (with the aquitards simulated explicitly and transient flow represented by the 3D governing equations). At first, steady state simulations were run under average seasonal recharge. To overcome dry-cell problems in the FULL-3D model, the Newton-Raphson formulation for MODFLOW-2005 was invoked. Steady state calibration was achieved mainly using annual average flow along four Nera River streambed springs and average water level data available in only two observation wells. Results show that a FULL-3D zoned model was required to match the observed distribution of river base flow. The FULL-3D model was then run in transient conditions (1990-2013) using monthly spatially distributed recharge estimated with the Thornthwaite-Mather method based on 60 years of climate data. The monitored flow of one spring, used for public water supply, served as proxy data to reconstruct the Nera River hydrograph; the proxy-based hydrograph was used for calibration of storage coefficients and further adjustment of model parameters. Once calibrated, the model was run under different aquifer management scenarios (i.e., pumping wells planned to be active for water supply); the related risk of depletion of spring discharge and of impacts on groundwater-surface water interaction was evaluated.
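The Thornthwaite-Mather recharge estimation mentioned above amounts to a monthly soil-water bookkeeping. The following is a minimal sketch of that bookkeeping; the soil water-holding capacity, the depletion rule, and the monthly totals are illustrative assumptions, not the study's actual parameterization:

```python
import math

# Minimal Thornthwaite-Mather style monthly soil-water balance (a sketch,
# assuming a simple bucket with exponential depletion under deficit).

def monthly_recharge(precip, pet, swc=100.0, s0=None):
    """Return monthly recharge (mm) from monthly precipitation and potential ET.

    precip, pet : sequences of monthly totals (mm)
    swc         : soil water-holding capacity (mm), an assumed parameter
    s0          : initial soil storage (defaults to a full bucket)
    """
    s = swc if s0 is None else s0
    recharge = []
    for p, e in zip(precip, pet):
        surplus = p - e
        if surplus >= 0:
            s += surplus
            r = max(0.0, s - swc)          # storage overflow becomes recharge
            s = min(s, swc)
        else:
            s *= math.exp(surplus / swc)   # exponential depletion under deficit
            r = 0.0
        recharge.append(r)
    return recharge

# Toy monthly precipitation and PET values (mm)
rech = monthly_recharge([120, 30, 10, 90], [40, 60, 80, 30])
```

With a full initial bucket, the first month's 80 mm surplus overflows entirely to recharge, while deficit months deplete storage and yield none.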
Firdaus, Ahmad; Anuar, Nor Badrul; Razak, Mohd Faizal Ab; Hashem, Ibrahim Abaker Targio; Bachok, Syafiq; Sangaiah, Arun Kumar
2018-05-04
The increasing demand for Android mobile devices and blockchain has motivated malware creators to develop mobile malware to compromise the blockchain. Although the blockchain is secure, attackers have managed to gain access to the blockchain as legal users, thereby compromising important and crucial information. Examples of mobile malware include root exploit, botnets, and Trojans, among which root exploit is one of the most dangerous. It compromises the operating system kernel in order to gain root privileges, which are then used by attackers to bypass security mechanisms, to gain complete control of the operating system, to install other possible types of malware on the devices, and finally, to steal victims' private keys linked to the blockchain. For the purpose of maximizing the security of blockchain-based medical data management (BMDM), it is crucial to investigate the novel features and approaches contained in root exploit malware. This study proposes to use the bio-inspired method of particle swarm optimization (PSO), which automatically selects the exclusive features, including those containing the novel Android Debug Bridge (ADB) commands. This study also adopts boosting (AdaBoost, RealAdaBoost, LogitBoost, and MultiBoost) to enhance the machine learning prediction that detects unknown root exploits, and scrutinizes three categories of features: (1) system commands, (2) directory paths and (3) code-based features. The evaluation suggests a marked accuracy value of 93% with LogitBoost in the simulation. LogitBoost also predicted all the root exploit samples in our developed system, the root exploit detection system (RODS).
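As a rough illustration of the boosting stage, a minimal AdaBoost over decision stumps on binary features is sketched below. The abstract's system uses LogitBoost and related variants; AdaBoost is substituted here as the simplest boosting scheme, and the toy feature matrix merely stands in for the system-command, directory-path and code-based feature categories:

```python
import numpy as np

# Minimal AdaBoost with decision stumps over binary features (a sketch;
# the toy data and the ADB-flag interpretation are assumptions).

def adaboost_fit(X, y, rounds=10):
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # uniform sample weights
    model = []
    for _ in range(rounds):
        best = None
        # exhaustive stump search: one binary feature, two polarities
        for j in range(d):
            for pol in (1, -1):
                pred = np.where(X[:, j] == 1, pol, -pol)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, pol, pred)
        err, j, pol, pred = best
        err = max(err, 1e-12)        # avoid log(0) for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                 # re-weight toward misclassified samples
        model.append((alpha, j, pol))
    return model

def adaboost_predict(model, X):
    score = np.zeros(X.shape[0])
    for alpha, j, pol in model:
        score += alpha * np.where(X[:, j] == 1, pol, -pol)
    return np.where(score >= 0, 1, -1)

# Toy data: column 0 mimics an ADB-related command flag separating the classes.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y = np.array([1, 1, -1, -1])
model = adaboost_fit(X, y, rounds=5)
```

The weighted stump votes reproduce the labels on this separable toy set; real feature selection (the PSO step) is outside this sketch.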
Contactless Inductive Bubble Detection in a Liquid Metal Flow
Gundrum, Thomas; Büttner, Philipp; Dekdouk, Bachir; Peyton, Anthony; Wondrak, Thomas; Galindo, Vladimir; Eckert, Sven
2016-01-01
The detection of bubbles in liquid metals is important for many technical applications. The opaqueness and the high temperature of liquid metals set high demands on the measurement system. The high electrical conductivity of the liquid metal can be exploited for contactless methods based on electromagnetic induction. We will present a measurement system which consists of one excitation coil and a pickup coil system on the opposite sides of the pipe. With this sensor we were able to detect bubbles in a sodium flow inside a stainless steel pipe and bubbles in a column filled with a liquid Gallium alloy. PMID:26751444
Reactive transport modeling in the subsurface environment with OGS-IPhreeqc
NASA Astrophysics Data System (ADS)
He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf
2015-04-01
Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. 
Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
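The sequential coupling of flow/transport and geochemistry that OGS-IPhreeqc implements can be caricatured by a 1-D operator-splitting step, sketched below. Here first-order decay stands in for the PHREEQC geochemistry call, and the grid, velocity, and rate constant are assumed toy values:

```python
import numpy as np

# Operator-splitting sketch for 1-D reactive transport: an explicit upwind
# advection sub-step followed by a chemistry sub-step (first-order decay
# standing in for the geochemical solver call).

def step(c, v, dx, dt, k):
    # transport sub-step: first-order upwind scheme (valid for v > 0)
    cn = c.copy()
    cn[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
    # chemistry sub-step: exact solution of dc/dt = -k c over dt
    return cn * np.exp(-k * dt)

c = np.zeros(50)
c[0] = 1.0                       # fixed-concentration inflow cell
for _ in range(100):
    c = step(c, v=0.5, dx=1.0, dt=1.0, k=0.01)   # CFL = 0.5
    c[0] = 1.0                   # re-impose the inflow boundary
```

The concentration decays along the flow direction as the solute reacts while being advected, mimicking in miniature the coupled behavior the two codes resolve at field scale.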
Reactive transport modeling in variably saturated porous media with OGS-IPhreeqc
NASA Astrophysics Data System (ADS)
He, W.; Beyer, C.; Fleckenstein, J. H.; Jang, E.; Kalbacher, T.; Shao, H.; Wang, W.; Kolditz, O.
2014-12-01
Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. 
Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]).
References: [1] I. Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
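The Tensor-Train representation referred to above can be computed by the TT-SVD procedure: sequential truncated SVDs peel off one core per tensor dimension. Below is a minimal sketch on a toy rank-one tensor (sizes and data are illustrative, so the TT ranks collapse to one):

```python
import numpy as np

# TT-SVD sketch: factor a full tensor into Tensor-Train cores via
# sequential truncated SVDs (toy sizes; not the project's flow data).

def tt_svd(tensor, eps=1e-10):
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    mat = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))   # truncate tiny singular values
        cores.append(u[:, :rank].reshape(r, shape[k], rank))
        mat = (np.diag(s[:rank]) @ vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    # contract the cores back into the full tensor (for verification)
    full = cores[0]
    for c in cores[1:]:
        full = np.tensordot(full, c, axes=([full.ndim - 1], [0]))
    return full.squeeze(axis=(0, full.ndim - 1))

rng = np.random.default_rng(0)
# rank-one test tensor: an outer product of three vectors
t = np.einsum('i,j,k->ijk', rng.random(4), rng.random(5), rng.random(6))
cores = tt_svd(t)
```

For genuinely high-dimensional data the gain is that storage scales with the (small) TT ranks rather than exponentially with the number of dimensions.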
ESPA-Based Multiple Satellite Architecture for Mars Science and Exploration
NASA Astrophysics Data System (ADS)
Lo, A. S.; Griffin, K.; Hanson, M.; Lee, G.
2012-06-01
We propose an LCROSS-based approach, enabled by its innovative use of the ESPA ring. Exploiting this architecture for Mars missions can leverage the upcoming Mars launch opportunities to inject multiple satellites that support the wide range of NASA's goals.
Street Viewer: An Autonomous Vision Based Traffic Tracking System.
Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano
2016-06-03
The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on the one hand, allows one to exploit multi-threading intensively and, on the other, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input for the following layers. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and by running the system for long periods of time.
Schilde, M.; Doerner, K.F.; Hartl, R.F.
2014-01-01
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches. PMID:25844013
Metric Documentation of Cultural Heritage: Research Directions from the Italian Gamher Project
NASA Astrophysics Data System (ADS)
Bitelli, G.; Balletti, C.; Brumana, R.; Barazzetti, L.; D'Urso, M. G.; Rinaudo, F.; Tucci, G.
2017-08-01
GAMHer is a collaborative project that aims at exploiting and validating Geomatics algorithms, methodologies and procedures in the framework of new European regulations, which require a more extensive and productive use of digital information, as requested by the Digital Agenda for Europe as one of the seven pillars of the Europe 2020 Strategy. To this aim, GAMHer focuses on the need of a certified accuracy for surveying and monitoring projects with photogrammetry and laser scanning technologies, especially when used in a multiscale approach for landscape and built heritage documentation, conservation, and management. The approach used follows a multi-LoD (level of detail) transition that exploits GIS systems at the landscape scale, BIM technology and "point cloud based" 3d modelling for the scale of the building, and an innovative BIM/GIS integrated approach to foster innovation, promote users' collaboration and encourage communication between users. The outcomes of GAMHer are not intended to be used only by a community of Geomatics specialists, but also by a heterogeneous user community that exploit images and laser scans in their professional activities.
Dynamics of Fluids and Transport in Fractured Rock
NASA Astrophysics Data System (ADS)
Faybishenko, Boris; Witherspoon, Paul A.; Gale, John
How to characterize fluid flow, heat, and chemical transport in geologic media remains a central challenge for geo-scientists and engineers worldwide. Investigations of fluid flow and transport within rock relate to such fundamental and applied problems as environmental remediation; nonaqueous phase liquid (NAPL) transport; exploitation of oil, gas, and geothermal resources; disposal of spent nuclear fuel; and geotechnical engineering. It is widely acknowledged that fractures in unsaturated-saturated rock can play a major role in solute transport from the land surface to underlying aquifers. It is also evident that general issues concerning flow and transport predictions in subsurface fractured zones can be resolved in a practical manner by integrating investigations into the physical nature of flow in fractures, developing relevant mathematical models and modeling approaches, and collecting site characterization data. Because of the complexity of flow and transport processes in most fractured rock flow problems, it is not yet possible to develop models directly from first principles. One reason for this is the presence of episodic, preferential water seepage and solute transport, which usually proceed more rapidly than expected from volume-averaged and time-averaged models. However, the physics of these processes is still not fully understood.
Accumulation of Colloidal Particles in Flow Junctions Induced by Fluid Flow and Diffusiophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Sangwoo; Ault, Jesse T.; Warren, Patrick B.
The flow of solutions containing solutes and colloidal particles in porous media is widely found in systems including underground aquifers, hydraulic fractures, estuarine or coastal habitats, water filtration systems, etc. In such systems, solute gradients occur when there is a local change in the solute concentration. While the effects of solute gradients have been found to be important for many applications, we observe an unexpected colloidal behavior in porous media driven by the combination of solute gradients and the fluid flow. When two flows with different solute concentrations are in contact near a junction, a sharp solute gradient is formed at the interface, which may allow strong diffusiophoresis of the particles directed against the flow. Consequently, the particles accumulate near the pore entrance, rapidly approaching the packing limit. These colloidal dynamics have important implications for the clogging of a porous medium, where particles that are orders of magnitude smaller than the pore width can accumulate and block the pores within a short period of time. As a result, we also show that this effect can be exploited as a useful tool for preconcentrating biomolecules for rapid bioassays.
Accumulation of Colloidal Particles in Flow Junctions Induced by Fluid Flow and Diffusiophoresis
Shin, Sangwoo; Ault, Jesse T.; Warren, Patrick B.; ...
2017-11-16
The flow of solutions containing solutes and colloidal particles in porous media is widely found in systems including underground aquifers, hydraulic fractures, estuarine or coastal habitats, water filtration systems, etc. In such systems, solute gradients occur when there is a local change in the solute concentration. While the effects of solute gradients have been found to be important for many applications, we observe an unexpected colloidal behavior in porous media driven by the combination of solute gradients and the fluid flow. When two flows with different solute concentrations are in contact near a junction, a sharp solute gradient is formed at the interface, which may allow strong diffusiophoresis of the particles directed against the flow. Consequently, the particles accumulate near the pore entrance, rapidly approaching the packing limit. These colloidal dynamics have important implications for the clogging of a porous medium, where particles that are orders of magnitude smaller than the pore width can accumulate and block the pores within a short period of time. As a result, we also show that this effect can be exploited as a useful tool for preconcentrating biomolecules for rapid bioassays.
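The competition described in the abstract can be sketched with a back-of-envelope estimate: the diffusiophoretic velocity u_dp = Γ_p · d(ln c)/dx is compared against the background pore flow. All numerical values below (mobility, concentrations, interface width, flow speed) are illustrative assumptions, not values from the paper:

```python
import math

# Order-of-magnitude sketch: diffusiophoretic drift across a solute interface
# versus the pore flow speed (all parameter values are assumptions).

def diffusiophoretic_velocity(gamma_p, c_high, c_low, width):
    # approximate d(ln c)/dx across an interface of the given thickness
    return gamma_p * math.log(c_high / c_low) / width

u_dp = diffusiophoretic_velocity(
    gamma_p=5e-10,   # m^2/s, typical order of the diffusiophoretic mobility (assumed)
    c_high=100e-3,   # mol/L, high-concentration side
    c_low=1e-3,      # mol/L, low-concentration side
    width=10e-6,     # m, interface thickness (assumed)
)
u_flow = 1e-4        # m/s, assumed pore flow speed

# particles can pile up at the junction when diffusiophoresis opposes the flow
accumulates = u_dp > u_flow
```

The two orders of magnitude between the concentrations make the log-gradient, and hence the drift, large enough to beat the assumed flow, which is the regime where the abstract reports accumulation.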
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the existing two approaches. This approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
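The two-step scheme in the abstract can be sketched as follows: a hardening law for each phase, then a linear rule of mixtures weighted by the martensite fraction. A Hollomon-type power law stands in here for the dislocation-based description, and all material parameters are illustrative assumptions:

```python
import numpy as np

# Two-step dual-phase flow curve sketch: per-phase hardening laws combined
# by a linear rule of mixtures (parameters are illustrative assumptions).

def hollomon(strain, k, n):
    # Hollomon hardening law sigma = K * eps^n (stress in MPa)
    return k * strain ** n

def dual_phase_flow(strain, f_martensite,
                    ferrite=(700.0, 0.25),      # (K, n) for the soft phase
                    martensite=(2200.0, 0.10)): # (K, n) for the hard phase
    s_f = hollomon(strain, *ferrite)
    s_m = hollomon(strain, *martensite)
    # linear rule of mixtures weighted by phase volume fractions
    return (1 - f_martensite) * s_f + f_martensite * s_m

eps = np.linspace(0.01, 0.10, 10)               # true plastic strain range
sigma = dual_phase_flow(eps, f_martensite=0.3)
```

By construction the composite curve is monotonically hardening and lies between the ferrite and martensite curves, which is the qualitative behavior the model is meant to reproduce.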
Biomass transformation webs provide a unified approach to consumer–resource modelling
Getz, Wayne M.
2011-01-01
An approach to modelling food web biomass flows among live and dead compartments within and among species is formulated using metaphysiological principles that characterise population growth in terms of basal metabolism, feeding, senescence and exploitation. This leads to a unified approach to modelling interactions among plants, herbivores, carnivores, scavengers, parasites and their resources. Also, dichotomising sessile miners from mobile gatherers of resources, with relevance to feeding and starvation time scales, suggests a new classification scheme involving 10 primary categories of consumer types. These types, in various combinations, rigorously distinguish scavenger from parasite, herbivory from phytophagy and detritivore from decomposer. Application of the approach to particular consumer–resource interactions is demonstrated, culminating in the construction of an anthrax-centred food web model, with parameters applicable to Etosha National Park, Namibia, where deaths of elephants and zebra from the bacterial pathogen, Bacillus anthracis, provide significant subsidies to jackals, vultures and other scavengers. PMID:21199247
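A minimal numerical sketch of the metaphysiological bookkeeping described above: per-capita biomass change as feeding gains minus basal metabolism and senescence. The saturating functional response and every parameter value are assumptions for illustration, not taken from the paper or the Etosha model:

```python
# Metaphysiological consumer-resource sketch: biomass grows when conversion
# of intake exceeds metabolic and senescence losses (toy parameters).

def consumer_step(b, resource, dt=0.01, kappa=0.5, beta=1.0, h=2.0,
                  mu=0.1, s=0.02):
    intake = beta * resource / (h + resource)  # saturating feeding response
    growth = kappa * intake - mu - s           # conversion minus losses
    return b * (1.0 + growth * dt)             # forward-Euler biomass update

b = 1.0
for _ in range(1000):
    b = consumer_step(b, resource=10.0)        # abundant resource: growth
```

With the resource set to zero the same update yields decline, the starvation regime whose time scale the paper uses to separate consumer types.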
Multiphasic modelling of bone-cement injection into vertebral cancellous bone.
Bleiler, Christian; Wagner, Arndt; Stadelmann, Vincent A; Windolf, Markus; Köstler, Harald; Boger, Andreas; Gueorguiev-Rüegg, Boyko; Ehlers, Wolfgang; Röhrle, Oliver
2015-01-01
Percutaneous vertebroplasty represents a current procedure to effectively reinforce osteoporotic bone via the injection of bone cement. This contribution considers a continuum-mechanically based modelling approach and simulation techniques to predict the cement distributions within a vertebra during injection. To do so, experimental investigations, imaging data and image processing techniques are combined and exploited to extract necessary data from high-resolution μCT image data. The multiphasic model is based on the Theory of Porous Media, providing the theoretical basis to describe within one set of coupled equations the interaction of an elastically deformable solid skeleton, of liquid bone cement and the displacement of liquid bone marrow. The simulation results are validated against an experiment, in which bone cement was injected into a human vertebra under realistic conditions. The major advantage of this comprehensive modelling approach is the fact that one can not only predict the complex cement flow within an entire vertebra but is also capable of taking into account solid deformations in a fully coupled manner. The presented work is the first step towards the ultimate and future goal of extending this framework to a clinical tool allowing for pre-operative cement distribution predictions by means of numerical simulations. Copyright © 2015 John Wiley & Sons, Ltd.
Tensor-based tracking of the aorta in phase-contrast MR images
NASA Astrophysics Data System (ADS)
Azad, Yoo-Jin; Malsam, Anton; Ley, Sebastian; Rengier, Fabian; Dillmann, Rüdiger; Unterhinninghofen, Roland
2014-03-01
The velocity-encoded magnetic resonance imaging (PC-MRI) is a valuable technique to measure the blood flow velocity in terms of time-resolved 3D vector fields. For diagnosis, presurgical planning and therapy control monitoring the patient's hemodynamic situation is crucial. Hence, an accurate and robust segmentation of the diseased vessel is the basis for further methods like the computation of the blood pressure. In the literature, there exist some approaches to transfer the methods of processing DT-MR images to PC-MR data, but the potential of this approach is not fully exploited yet. In this paper, we present a method to extract the centerline of the aorta in PC-MR images by applying methods from the DT-MRI. On account of this, in the first step the velocity vector fields are converted into tensor fields. In the next step tensor-based features are derived and by applying a modified tensorline algorithm the tracking of the vessel course is accomplished. The method only uses features derived from the tensor imaging without the use of additional morphology information. For evaluation purposes we applied our method to 4 volunteer as well as 26 clinical patient datasets with good results. In 29 of 30 cases our algorithm accomplished to extract the vessel centerline.
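The core conversion in the abstract, turning each PC-MRI velocity vector into a tensor so DTI-style tools apply, can be sketched with the simplest choice, the rank-one outer product v vᵀ, whose principal eigenvector recovers the local flow direction. The toy voxel velocity below is an assumption, not patient data:

```python
import numpy as np

# Sketch: velocity vector -> rank-one tensor -> principal direction,
# mimicking the DTI-style processing of PC-MRI data (toy velocity value).

def velocity_to_tensor(v):
    v = np.asarray(v, dtype=float)
    return np.outer(v, v)            # rank-one symmetric tensor v v^T

def principal_direction(tensor):
    w, vecs = np.linalg.eigh(tensor)
    return vecs[:, np.argmax(w)]     # eigenvector of the largest eigenvalue

v = [0.8, 0.1, 0.0]                  # toy voxel velocity (e.g. m/s)
d = principal_direction(velocity_to_tensor(v))
```

The recovered direction is the normalized velocity up to sign, which is exactly the directional feature a tensorline tracker follows from voxel to voxel.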
Granular Flow Graph, Adaptive Rule Generation and Tracking.
Pal, Sankar Kumar; Chakraborty, Debarati Bhunia
2017-12-01
A new method of adaptive rule generation in a granular computing framework is described, based on a rough rule base and a granular flow graph, and applied to video tracking. In the process, several new concepts and operations are introduced, and methodologies are formulated with superior performance. The flow graph enables the definition of an intelligent technique for rule base adaptation, where its characteristics in mapping the relevance of attributes and rules in a decision-making system are exploited. Two new features, namely, the expected flow graph and the mutual dependency between flow graphs, are defined to make the flow graph applicable to both training and validation tasks. All these techniques operate at the neighborhood granular level. A way of forming spatio-temporal 3-D granules of arbitrary shape and size is introduced. The rough flow graph-based adaptive granular rule-based system, thus produced for unsupervised video tracking, is capable of handling uncertainties and incompleteness in frames, able to overcome the incompleteness of information that arises in the absence of initial manual interaction, and provides superior performance with gains in computation time. The cases of partial overlapping and of detecting unpredictable changes are handled efficiently. It is shown that neighborhood granulation provides a balanced tradeoff between speed and accuracy as compared to pixel-level computation. The quantitative indices used for evaluating tracking performance do not require any information on ground truth, unlike other methods.
Constructing compact and effective graphs for recommender systems via node and edge aggregations
Lee, Sangkeun; Kahng, Minsuk; Lee, Sang-goo
2014-12-10
Exploiting graphs for recommender systems has great potential to flexibly incorporate heterogeneous information for producing better recommendation results. As our baseline approach, we first introduce a naïve graph-based recommendation method, which operates on a heterogeneous log-metadata graph constructed from user log and content metadata databases. Although the naïve graph-based recommendation method is simple, it allows us to take advantage of heterogeneous information and shows promising flexibility and recommendation accuracy. However, it often leads to extensive processing time due to the sheer size of the graphs constructed from entire user log and content metadata databases. In this paper, we propose node and edge aggregation approaches to constructing compact and effective graphs, called Factor-Item bipartite graphs, by aggregating the nodes and edges of a log-metadata graph. Furthermore, experimental results using real-world datasets indicate that our approach can significantly reduce the size of the graphs exploited for recommender systems without sacrificing recommendation quality.
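Recommendation over a factor-item bipartite graph can be sketched as two-hop scoring: items reachable from a user's aggregated factor nodes are ranked by the summed path weights. The toy graph, factor names, and weights below are illustrative assumptions, not the paper's datasets or exact scoring rule:

```python
from collections import defaultdict

# Two-hop scoring over a toy Factor-Item bipartite graph (a sketch; the
# factors and weights are assumed, standing in for aggregated log-metadata).

def recommend(user_factors, factor_items, seen, top_k=2):
    scores = defaultdict(float)
    for factor, w in user_factors.items():
        for item, v in factor_items.get(factor, {}).items():
            if item not in seen:
                scores[item] += w * v       # accumulate two-hop path weight
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# factor nodes -> item edges with weights (aggregated from logs/metadata)
factor_items = {
    'rock':  {'song_a': 1.0, 'song_b': 0.5},
    'indie': {'song_b': 1.0, 'song_c': 0.8},
}
user = {'rock': 1.0, 'indie': 0.5}          # a user's aggregated factor profile
recs = recommend(user, factor_items, seen={'song_a'})
```

Because scoring touches only the user's factor neighborhood, the cost depends on the aggregated graph's size rather than on the full log database, which is the efficiency argument of the paper.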
Kale, Akshay; Song, Le; Lu, Xinyu; Yu, Liandong; Hu, Guoqing; Xuan, Xiangchun
2018-03-01
Insulator-based dielectrophoresis (iDEP) exploits in-channel hurdles, posts, and similar structures to create electric field gradients for various particle manipulations. However, the presence of such insulating structures also amplifies Joule heating in the surrounding fluid, leading to both temperature gradients and electrothermal flow. These Joule heating effects have previously been demonstrated to weaken the dielectrophoretic focusing and trapping of microscale and nanoscale particles. We find that the electrothermal flow vortices are able to entrain submicron particles for a localized enrichment near the insulating tips of a ratchet microchannel. This increase in particle concentration is reasonably predicted by a full-scale numerical simulation of the mass transport along with the coupled charge, heat and fluid transport. Our model also predicts the electric current and flow pattern in the fluid in good agreement with the experimental observations. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.
2012-03-01
Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.
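The fuzzy-logic assignment in stage (4) can be illustrated with a toy sketch: each target class gets a membership function over a pattern feature, and the class with the highest membership wins. The trapezoidal shape and the Hounsfield-unit ranges below are hypothetical stand-ins, not the paper's six calibrated membership functions:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c],
    linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# hypothetical per-class membership supports over a CT density feature
# (Hounsfield units); NOT the paper's calibrated functions:
CLASSES = {
    "emphysema":    (-1100, -1000, -950, -860),
    "ground_glass": (-800, -750, -600, -500),
}

def classify(hu):
    """Assign the class with the highest fuzzy membership (argmax)."""
    scores = {name: trapezoid(hu, *sup) for name, sup in CLASSES.items()}
    return max(scores, key=scores.get)
```

In the paper, six membership functions per pattern feed a decision step over four classes (N, EM, FHC, GDG); the sketch keeps only the argmax-over-memberships mechanics.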
Spatial and temporal control of thermal waves by using DMDs for interference based crack detection
NASA Astrophysics Data System (ADS)
Thiel, Erik; Kreutzbruck, Marc; Ziegler, Mathias
2016-02-01
Active thermography is a well-established non-destructive testing method used to detect cracks, voids, or material inhomogeneities. It is based on applying thermal energy to a sample's surface, where inner defects alter the nonstationary heat flow. Conventional excitation of a sample is done spatially, either planar (e.g., using a lamp) or local (e.g., using a focused laser), and temporally, either pulsed or periodic. In this work we combine a high-power laser with a Digital Micromirror Device (DMD), allowing us to merge all degrees of freedom into a spatially and temporally controlled heat source. This enables us to exploit the possibilities of coherent thermal wave shaping. Exciting periodically while controlling the phase and amplitude of the illumination source induces, via absorption at the sample's surface, a defined thermal wave propagation through the sample. That means thermal waves can be controlled almost like acoustic or optical waves. However, in contrast to optical or acoustic waves, thermal waves are highly damped due to the diffusive character of heat flow and are therefore limited in penetration depth relative to the achievable resolution. Nevertheless, the coherence length of thermal waves can be chosen in the mm range for modulation frequencies below 10 Hz, which is perfectly met by DMD technology. This approach gives us the opportunity to transfer known wave-shaping technologies to thermography methods. We present experiments on spatial and temporal wave shaping, demonstrating interference-based crack detection.
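The mm-range coherence-length claim follows from the thermal diffusion length of a periodically excited sample, which falls off as the inverse square root of the modulation frequency. A minimal sketch; the diffusivity value is an illustrative order-of-magnitude assumption, not a value from the paper:

```python
import math

def thermal_diffusion_length(alpha, f):
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)): the depth
    over which a thermal wave of modulation frequency f (Hz) decays by
    1/e in a material of thermal diffusivity alpha (m^2/s)."""
    return math.sqrt(alpha / (math.pi * f))

# illustrative diffusivity of 4e-6 m^2/s (order of magnitude for steel):
mu_1hz = thermal_diffusion_length(4e-6, 1.0)     # ~1.1 mm
mu_10hz = thermal_diffusion_length(4e-6, 10.0)   # ~0.36 mm
```

Frequencies of a few Hz and below thus keep the diffusion length in the mm range, comfortably inside the kHz-scale switching rates of DMDs.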
NASA Astrophysics Data System (ADS)
Re, B.; Dobrzynski, C.; Guardone, A.
2017-07-01
A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes, and they are taken into account by adding fictitious numerical fluxes to the governing equations. This interpretation makes it possible to avoid any explicit interpolation of the solution between different grids and to compute grid velocities such that the Geometric Conservation Law is automatically fulfilled also for connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and point relocation, and it is exploited both to enhance grid quality after the boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around the translating infinite- and finite-span NACA 0012 wing moving through the domain at the flight speed. The proposed adaptive scheme is also applied to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
A Potential Approach for Low Flow Selection in Water Resource Supply and Management
Ying Ouyang
2012-01-01
Low flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in minimum flows and/or levels program in northeast Florida, USA. This FL approach was...
NASA Technical Reports Server (NTRS)
Wallis, Graham B.
1989-01-01
Some features of two recent approaches to two-phase potential flow are presented. The first approach is based on a set of progressive examples that can be analyzed using common techniques, such as conservation laws, and taken together appear to lead in the direction of a general theory. The second approach is based on variational methods, a classical approach to conservative mechanical systems that has a respectable history of application to single phase flows. This latter approach, exemplified by several recent papers by Geurst, appears generally to be consistent with the former approach, at least in those cases for which it is possible to obtain comparable results. Each approach has a justifiable theoretical base and is self-consistent. Moreover, both approaches appear to give the right prediction for several well-defined situations.
A second-order shock-adaptive Godunov scheme based on the generalized Lagrangian formulation
NASA Astrophysics Data System (ADS)
Lepage, Claude
Application of the Godunov scheme to the Euler equations of gas dynamics, based on the Eulerian formulation of flow, smears discontinuities (especially sliplines) over several computational cells, while the accuracy in the smooth flow regions is limited by the cell width. Based on the generalized Lagrangian formulation (GLF), the Godunov scheme yields far superior results. By the use of coordinate streamlines in the GLF, the slipline (itself a streamline) is resolved crisply. Infinite shock resolution is achieved through the splitting of shock cells, while the accuracy in the smooth flow regions is improved using a nonconservative formulation of the governing equations coupled to a second-order extension of the Godunov scheme. Furthermore, the GLF requires no grid generation for boundary value problems, and the simple structure of the solution to the Riemann problem in the GLF is exploited in the numerical implementation of the shock-adaptive scheme. Numerical experiments reveal high efficiency and unprecedented resolution of shock and slipline discontinuities.
Lundquist, J. K.; Churchfield, M. J.; Lee, S.; ...
2015-02-23
Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk.
Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1), and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake to within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8%, while vertical velocity estimates remain compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
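The DBS retrieval described above can be sketched as follows: four LOS velocities on a scan cone are inverted for (u, v, w) under the homogeneity assumption, which is exactly what a wake violates. The 28° cone half-angle is a typical commercial-lidar value, not taken from the paper:

```python
import math

def los(u, v, w, az_deg, theta_deg):
    """Project a wind vector onto a beam at azimuth az (deg from north),
    tilted theta (deg) from vertical."""
    t, a = math.radians(theta_deg), math.radians(az_deg)
    return u * math.sin(a) * math.sin(t) + v * math.cos(a) * math.sin(t) \
        + w * math.cos(t)

def dbs_retrieve(vr_n, vr_e, vr_s, vr_w, theta_deg):
    """Invert four LOS velocities (north, east, south, west beams) for
    (u, v, w). This assumes the SAME wind at all four beam locations:
    the horizontal-homogeneity assumption that wakes violate."""
    t = math.radians(theta_deg)
    u = (vr_e - vr_w) / (2 * math.sin(t))
    v = (vr_n - vr_s) / (2 * math.sin(t))
    w = (vr_n + vr_e + vr_s + vr_w) / (4 * math.cos(t))
    return u, v, w

# homogeneous 6.5 m/s wind at hub height: the retrieval is exact
u0, v0, w0 = 6.5, 0.0, 0.0
vr = [los(u0, v0, w0, az, 28.0) for az in (0, 90, 180, 270)]
u, v, w = dbs_retrieve(*vr, 28.0)
```

Feeding the same inversion beam-location-dependent winds (as in a wake) produces the stream-wise, cross-stream and vertical errors the study quantifies.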
NASA Astrophysics Data System (ADS)
Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.
2015-02-01
Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. 
By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
Flow cytometric HyPer-based assay for hydrogen peroxide.
Lyublinskaya, O G; Antonov, S A; Gorokhovtsev, S G; Pugovkina, N A; Kornienko, Ju S; Ivanova, Ju S; Shatrova, A N; Aksenov, N D; Zenin, V V; Nikolsky, N N
2018-05-30
HyPer is a genetically encoded fluorogenic sensor for hydrogen peroxide which is generally used for the ratiometric imaging of H2O2 fluxes in living cells. Here, we demonstrate the advantages of a HyPer-based ratiometric flow cytometry assay for H2O2, using K562 and human mesenchymal stem cell lines expressing HyPer. We show that flow cytometry analysis is suitable to detect the HyPer response to submicromolar concentrations of extracellularly added H2O2, much lower than the concentrations addressed previously in other HyPer-based assays (such as cell imaging or fluorimetry). The suggested technique is also much more sensitive to hydrogen peroxide than the widespread flow cytometry assay exploiting the H2O2-reactive dye H2DCFDA and, contrary to the H2DCFDA-based assay, can be employed for kinetic studies of H2O2 utilization by cells, including measurements of the rate constants of H2O2 removal. In addition, flow cytometry multi-parameter ratiometric measurements enable rapid and high-throughput detection of endogenously generated H2O2 in different subpopulations of HyPer-expressing cells. To sum up, HyPer can be used in multi-parameter flow cytometry studies as a highly sensitive indicator of intracellular H2O2. Copyright © 2018. Published by Elsevier Inc.
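The kinetic use of the assay (measuring rate constants of H2O2 removal) amounts to a first-order decay fit. A minimal sketch on a synthetic, already background-corrected signal; calibration from the HyPer ratio to concentration is omitted:

```python
import math

def rate_constant(times, signal):
    """Least-squares slope of ln(signal) vs time, returned as k such
    that signal ~ s0 * exp(-k * t) (first-order H2O2 removal)."""
    n = len(times)
    ys = [math.log(s) for s in signal]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
    den = sum((t - tbar) ** 2 for t in times)
    return -num / den

# synthetic trace with known k = 0.05 s^-1
ts = [0.0, 10.0, 20.0, 30.0, 40.0]
sig = [math.exp(-0.05 * t) for t in ts]
k = rate_constant(ts, sig)   # recovers 0.05
```

With real cytometry data, the per-cell HyPer ratio would first be converted to an H2O2-proportional signal and baseline-subtracted before the fit.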
Exploiting Software Tool Towards Easier Use And Higher Efficiency
NASA Astrophysics Data System (ADS)
Lin, G. H.; Su, J. T.; Deng, Y. Y.
2006-08-01
In developing countries, making maximum use of data from domestically built instruments is very important. This relates not only to maximizing the science return on earlier investment, but also to science output itself. With this in mind, we are developing a software tool called THDP (Tool of Huairou Data Processing), used to handle a series of issues that arise in data processing. This paper discusses its purpose, functions, methods, and special features. The primary vehicle for general data interpretation is various techniques of data visualization and interaction. In the software we employ an object-oriented approach, which is well suited to this vehicle; it is imperative that the approach provide not only the required functions, but do so in as convenient a fashion as possible. As a result, the software makes data processing easier to learn for beginners, more convenient to extend for experienced users, and considerably more efficient in every phase, including analysis, parameter adjustment, and result display. Within the virtual observatory framework, developing countries should study more new related technologies that can advance the capability and efficiency of scientific research, like the software we are developing.
NASA Astrophysics Data System (ADS)
Pétré, Marie-Amélie; Rivera, Alfonso; Lefebvre, René
2016-04-01
The Milk River transboundary aquifer straddles southern Alberta (Canada) and northern Montana (United States), a semi-arid and water-short region. The extensive use of this regional sandstone aquifer over the 20th century has led to a major drop in water levels locally, and concerns about the durability of the resources have been raised since the mid-1950s. Even though the Milk River Aquifer (MRA) has been studied for decades, most of the previous studies were limited by the international border, preventing a sound understanding of the aquifer dynamics. Yet, a complete portrait of the aquifer is required for proper management of this shared resource. The transboundary study of the MRA aims to overcome transboundary limitations by providing a comprehensive characterization of the groundwater resource at the aquifer scale, following a three-stage approach: 1) The development of a 3D unified geological model of the MRA (50,000 km2). The stratigraphic framework on both sides of the border was harmonized and various sources of geological data were unified to build the transboundary geological model. The delineation of the aquifer and the geometry and thicknesses of the geological units were defined continuously across the border. 2) Elaboration of a conceptual hydrogeological model by linking hydrogeological and geochemical data with the 3D unified geological model. This stage is based on a thorough literature review and focused complementary field work on both sides of the border. The conceptual model includes the determination of the groundwater flow pattern, the spatial distribution of hydraulic properties, a groundwater budget and the definition of the groundwater types. Isotopes (3H, 14C, 36Cl) were used to delineate the recharge area as well as the active and low-flow areas. 3) The building of a 3D numerical groundwater flow model of the MRA (26,000 km2). This model is a transposition of the geological and hydrogeological conceptual models.
A pre-exploitation steady-state model and a subsequent transient numerical model with several exploitation scenarios were developed. The numerical model aims to test the conceptual model and to provide a basis to assess the best possible uses of this valuable resource that is shared by Canada and the United States of America. This study provides a unique approach with scientific tools for proper aquifer assessment and groundwater management at the aquifer scale, not interrupted by a jurisdictional boundary. These tools are combined and integrated into three models, which together will form the basis of reliable sustainable groundwater and aquifer management in cooperation, thus facilitating the creation of a system of transboundary water governance based on scientific knowledge.
CNT based thermal Brownian motor to pump water in nanodevices
NASA Astrophysics Data System (ADS)
Oyarzua, Elton; Zambrano, Harvey; Walther, J. H.
2016-11-01
Brownian molecular motors are nanoscale machines that exploit thermal fluctuations for directional motion by employing mechanisms such as the Feynman-Smoluchowski ratchet. In this study, using Non Equilibrium Molecular Dynamics, we propose a novel thermal Brownian motor for pumping water through Carbon Nanotubes (CNTs). To achieve this we impose a thermal gradient along the axis of a CNT filled with water and impose, in addition, a spatial asymmetry by fixing specific zones on the CNT in order to modify the vibrational modes of the CNT. We find that the temperature gradient and imposed spatial asymmetry drive the water flow in a preferential direction. We systematically modified the magnitude of the applied thermal gradient and the axial position of the fixed points. The analysis involves measurement of the vibrational modes in the CNTs using a Fast Fourier Transform (FFT) algorithm. We observed water flow in CNTs of 0.94, 1.4 and 2.0 nm in diameter, reaching a maximum velocity of 5 m/s for a thermal gradient of 3.3 K/nm. The proposed thermal motor is capable of delivering a continuous flow throughout a CNT, providing a useful tool for driving liquids in nanofluidic devices by exploiting thermal gradients. We acknowledge partial support from Fondecyt project 11130559.
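The FFT-based measurement of vibrational modes can be sketched as a peak search in the spectrum of a displacement trace; the sampling step and mode frequency below are illustrative, not values from the study:

```python
import numpy as np

def dominant_mode(signal, dt):
    """Return the dominant vibrational frequency of a real-valued
    displacement trace via an FFT peak search, as one would use to
    check how clamped zones shift a CNT's vibrational modes."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs[np.argmax(spec)]

# synthetic 50 GHz oscillation sampled every 1 ps (illustrative numbers)
dt = 1e-12
t = np.arange(4096) * dt
x = np.sin(2 * np.pi * 5.0e10 * t)
f = dominant_mode(x, dt)   # ~5e10 Hz, within one FFT bin
```

In a molecular dynamics workflow the trace would come from atom positions sampled along the trajectory, one spectrum per axial segment of the tube.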
ERIC Educational Resources Information Center
Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa
2009-01-01
In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…
Linear control of oscillator and amplifier flows*
NASA Astrophysics Data System (ADS)
Schmid, Peter J.; Sipp, Denis
2016-08-01
Linear control applied to fluid systems near an equilibrium point has important applications for many flows of industrial or fundamental interest. In this article we give an exposition of tools and approaches for the design of control strategies for globally stable or unstable flows. For unstable oscillator flows a feedback configuration and a model-based approach is proposed, while for stable noise-amplifier flows a feedforward setup and an approach based on system identification is advocated. Model reduction and robustness issues are addressed for the oscillator case; statistical learning techniques are emphasized for the amplifier case. Effective suppression of global and convective instabilities could be demonstrated for either case, even though the system-identification approach results in a superior robustness to off-design conditions.
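The system-identification route advocated for amplifier flows can be illustrated by its simplest ingredient: a least-squares linear fit between an upstream and a downstream sensor signal, which a feedforward controller then uses to cancel the predicted disturbance. The FIR model and synthetic data below are a sketch; the article's identification and control machinery is considerably richer:

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            a[r] = [x - f * y for x, y in zip(a[r], a[i])]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def fit_fir(s, y, ntaps):
    """Least-squares FIR model y[t] ~ sum_k h[k] * s[t-k], solved via
    the normal equations A^T A h = A^T y."""
    rows = range(ntaps - 1, len(s))
    ata = [[sum(s[t - i] * s[t - j] for t in rows) for j in range(ntaps)]
           for i in range(ntaps)]
    aty = [sum(s[t - i] * y[t] for t in rows) for i in range(ntaps)]
    return solve(ata, aty)

# synthetic upstream/downstream sensor pair with known kernel [0.5, -0.2]
s = [math.sin(0.7 * t) + 0.5 * math.sin(1.9 * t) for t in range(200)]
y = [0.0] + [0.5 * s[t] - 0.2 * s[t - 1] for t in range(1, 200)]
h = fit_fir(s, y, 2)   # recovers the kernel
```

With the kernel in hand, a feedforward actuator signal can be shaped to destructively interfere with the disturbance predicted at the downstream location.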
Evaluation of particle-based flow characteristics using novel Eulerian indices
NASA Astrophysics Data System (ADS)
Cho, Youngmoon; Kang, Seongwon
2017-11-01
The main objective of this study is to evaluate flow characteristics in complex particle-laden flows efficiently using novel Eulerian indices. For flows with a large number of particles, a Lagrangian approach leads to accurate yet inefficient prediction in many engineering problems. We propose a technique based on Eulerian transport equation and ensemble-averaged particle properties, which enables efficient evaluation of various particle-based flow characteristics such as the residence time, accumulated travel distance, mean radial force, etc. As a verification study, we compare the developed Eulerian indices with those using Lagrangian approaches for laminar flows with and without a swirling motion and density ratio. The results show satisfactory agreement between two approaches. The accumulated travel distance is modified to analyze flow motions inside IC engines and, when applied to flow bench cases, it can predict swirling and tumbling motions successfully. For flows inside a cyclone separator, the mean radial force is applied to predict the separation of particles and is shown to have a high correlation to the separation efficiency for various working conditions. In conclusion, the proposed Eulerian indices are shown to be useful tools to analyze complex particle-based flow characteristics.
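The construction of such an Eulerian index can be sketched in 1-D: alongside the ensemble-averaged particle concentration c, a companion field q = c·τ is advected with a source term equal to c, so that τ = q/c accumulates residence time wherever particles are present. The upwind discretization below is a minimal illustration under these assumptions, not the study's actual scheme:

```python
def step(c, q, u, dx, dt):
    """One first-order upwind step (u > 0, periodic via Python's c[-1]).
    c : particle concentration; q = c * tau, where tau is the mean
    residence time. q advects with c and gains a source term c, so tau
    grows by dt per step wherever particles are present."""
    n = len(c)
    c_new = [c[i] - u * dt / dx * (c[i] - c[i - 1]) for i in range(n)]
    q_new = [q[i] - u * dt / dx * (q[i] - q[i - 1]) + dt * c[i]
             for i in range(n)]
    return c_new, q_new

# uniform seeding: after t = 1 s, every cell reports tau = 1 s
c = [1.0] * 8
q = [0.0] * 8
for _ in range(10):
    c, q = step(c, q, u=1.0, dx=1.0, dt=0.1)
tau = [qi / ci for qi, ci in zip(q, c)]
```

Other indices mentioned in the abstract (accumulated travel distance, mean radial force) follow the same pattern with a different source term.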
Effect of inlet modelling on surface drainage in coupled urban flood simulation
NASA Astrophysics Data System (ADS)
Jang, Jiun-Huei; Chang, Tien-Hao; Chen, Wei-Bo
2018-07-01
For a highly developed urban area with complete drainage systems, flood simulation is necessary for describing the flow dynamics from rainfall, to surface runoff, and to sewer flow. In this study, a coupled flood model based on diffusion wave equations was proposed to simulate one-dimensional sewer flow and two-dimensional overland flow simultaneously. The overland flow model provides details on the rainfall-runoff process to estimate the excess runoff that enters the sewer system through street inlets for sewer flow routing. Three types of inlet modelling are considered in this study, including the manhole-based approach that ignores the street inlets by draining surface water directly into manholes, the inlet-manhole approach that drains surface water into manholes that are each connected to multiple inlets, and the inlet-node approach that drains surface water into sewer nodes that are connected to individual inlets. The simulation results were compared with a high-intensity rainstorm event that occurred in 2015 in Taipei City. In the verification of the maximum flood extent, the two approaches that considered street inlets performed considerably better than that without street inlets. When considering the aforementioned models in terms of temporal flood variation, using manholes as receivers leads to an overall inefficient draining of the surface water either by the manhole-based approach or by the inlet-manhole approach. Using the inlet-node approach is more reasonable than using the inlet-manhole approach because the inlet-node approach greatly reduces the fluctuation of the sewer water level. The inlet-node approach is more efficient in draining surface water by reducing flood volume by 13% compared with the inlet-manhole approach and by 41% compared with the manhole-based approach. The results show that inlet modeling has a strong influence on drainage efficiency in coupled flood simulation.
The Potential in Bioethanol Production From Waste Fiber Sludges in Pulp Mill-Based Biorefineries
NASA Astrophysics Data System (ADS)
Sjöde, Anders; Alriksson, Björn; Jönsson, Leif J.; Nilvebrant, Nils-Olof
Industrial production of bioethanol from fibers that are unusable for pulp production in pulp mills offers an approach to product diversification and more efficient exploitation of the raw material. In an attempt to utilize fibers flowing to the biological waste treatment, selected fiber sludges from three different pulp mills were collected, chemically analyzed, enzymatically hydrolyzed, and fermented for bioethanol production. Another aim was to produce solid residues with higher heat values than those of the original fiber sludges to gain a better fuel for combustion. The glucan content ranged between 32 and 66% of the dry matter. The lignin content varied considerably (1-25%), as did the content of wood extractives (0.2-5.8%). Hydrolysates obtained using enzymatic hydrolysis were found to be readily fermentable using Saccharomyces cerevisiae. Hydrolysis resulted in improved heat values compared with corresponding untreated fiber sludges. Oligomeric xylan fragments in the solid residue obtained after enzymatic hydrolysis were identified using matrix-assisted laser desorption ionization-time of flight and their potential as a new product of a pulp mill-based biorefinery is discussed.
The potential in bioethanol production from waste fiber sludges in pulp mill-based biorefineries.
Sjöde, Anders; Alriksson, Björn; Jönsson, Leif J; Nilvebrant, Nils-Olof
2007-04-01
Industrial production of bioethanol from fibers that are unusable for pulp production in pulp mills offers an approach to product diversification and more efficient exploitation of the raw material. In an attempt to utilize fibers flowing to the biological waste treatment, selected fiber sludges from three different pulp mills were collected, chemically analyzed, enzymatically hydrolyzed, and fermented for bioethanol production. Another aim was to produce solid residues with higher heat values than those of the original fiber sludges to gain a better fuel for combustion. The glucan content ranged between 32 and 66% of the dry matter. The lignin content varied considerably (1-25%), as did the content of wood extractives (0.2-5.8%). Hydrolysates obtained using enzymatic hydrolysis were found to be readily fermentable using Saccharomyces cerevisiae. Hydrolysis resulted in improved heat values compared with corresponding untreated fiber sludges. Oligomeric xylan fragments in the solid residue obtained after enzymatic hydrolysis were identified using matrix-assisted laser desorption ionization-time of flight and their potential as a new product of a pulp mill-based biorefinery is discussed.
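An upper bound on the ethanol obtainable from the reported glucan contents follows from simple stoichiometry: hydrolysis adds water (180/162 mass gain from glucan to glucose), and fermentation yields at most 0.511 g ethanol per g glucose. Enzymatic and fermentation losses are ignored in this sketch:

```python
def max_ethanol_yield(dry_sludge_kg, glucan_frac):
    """Stoichiometric upper bound on ethanol (kg) from the glucan
    fraction of fiber sludge: glucan -> glucose (x 180/162), then
    glucose -> 2 ethanol + 2 CO2 (x 0.511 by mass)."""
    glucose = dry_sludge_kg * glucan_frac * (180.0 / 162.0)
    return glucose * 0.511

# fiber sludges in the study spanned 32-66% glucan (dry matter):
lo = max_ethanol_yield(1000.0, 0.32)   # ~182 kg per tonne dry sludge
hi = max_ethanol_yield(1000.0, 0.66)   # ~375 kg per tonne dry sludge
```

Actual yields would be lower by the enzymatic hydrolysis and fermentation efficiencies, which the study measures rather than assumes.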
Structure and stability of the finite-area von Kármán street
NASA Astrophysics Data System (ADS)
Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.
2012-06-01
By using a recently developed numerical method, we explore in detail the possible inviscid equilibrium flows for a Kármán street comprising uniform, large-area vortices. In order to determine stability, we make use of an energy-based stability argument (originally proposed by Lord Kelvin), whose previous implementation had been unsuccessful in determining stability for the Kármán street [P. G. Saffman and J. C. Schatzman, "Stability of a vortex street of finite vortices," J. Fluid Mech. 117, 171-186 (1982), 10.1017/S0022112082001578]. We discuss in detail the issues affecting this interpretation of Kelvin's ideas, and show that this energy-based argument cannot detect subharmonic instabilities. To find superharmonic instabilities, we employ a recently introduced approach, which constitutes a reliable implementation of Kelvin's stability ideas [P. Luzzatto-Fegiz and C. H. K. Williamson, "Stability of conservative flows and new steady fluid solutions from bifurcation diagrams exploiting a variational argument," Phys. Rev. Lett. 104, 044504 (2010), 10.1103/PhysRevLett.104.044504]. For periodic flows, this leads us to organize solutions into families with fixed impulse I, and to construct diagrams involving the flow energy E and horizontal spacing (i.e., wavelength) L. Families of large-I vortex streets exhibit a turning point in L, and terminate with "cat's eyes" vortices (as also suggested by previous investigators). However, for low-I streets, the solution families display a multitude of turning points (leading to multiple possible streets, for given L), and terminate with teardrop-shaped vortices. This is radically different from previous suggestions in the literature. These two qualitatively different limiting states are connected by a special street, whereby vortices from opposite rows touch, such that each vortex boundary exhibits three corners. 
Furthermore, by following the family of I = 0 streets to small L, we gain access to a large, hitherto unexplored flow regime, involving streets with L significantly smaller than previously believed possible. To elucidate in detail the possible solution regimes, we introduce a map of spacing L, versus impulse I, which we construct by numerically computing a large number of steady vortex configurations. For each constant-impulse family of steady vortices, our stability approach also reveals a single superharmonic bifurcation, leading to new families of vortex streets, which exhibit lower symmetry.
Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.
Huang, Hong; Zhang, Baifa; Lu, Jun
2014-01-01
We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loading from direct runoff and base flow sources can be estimated by inversion. As a case study, this approach was adopted to identify the TN source contributions in Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 tons, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The novelty of the approach is that nitrogen from direct runoff and base flow sources can be quantified separately. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
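The recursive digital filter step can be sketched in a few lines. The abstract does not give the filter's exact form or parameter, so the one-parameter Lyne-Hollick filter with alpha = 0.925 below is an illustrative assumption, not the authors' implementation:

```python
# Sketch of a one-parameter recursive digital baseflow filter
# (Lyne-Hollick form); alpha = 0.925 is a commonly used value for
# daily data and is assumed here, not taken from the paper.

def separate_baseflow(q, alpha=0.925):
    """Split a daily streamflow series q into (direct_runoff, baseflow)."""
    quick = [0.0] * len(q)                     # filtered quickflow (direct runoff)
    for i in range(1, len(q)):
        f = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        quick[i] = min(max(f, 0.0), q[i])      # constrain 0 <= quickflow <= q
    base = [qt - ft for qt, ft in zip(q, quick)]
    return quick, base

# Example: a storm hydrograph superposed on a steady base flow
flow = [5, 5, 40, 30, 20, 12, 8, 6, 5, 5]
runoff, baseflow = separate_baseflow([float(v) for v in flow])
```

Summing the separated components over a year, and regressing TN load on them, would then give the source apportionment described above.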
Optical Flow Estimation for Flame Detection in Videos
Mueller, Martin; Karasev, Peter; Kolesov, Ivan; Tannenbaum, Allen
2014-01-01
Computational vision-based flame detection has drawn significant attention in the past decade with camera surveillance systems becoming ubiquitous. Whereas many discriminating features, such as color, shape, texture, etc., have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea consists of exploiting the difference between the turbulent, fast, fire motion, and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Then, characteristic features related to the flow magnitudes and directions are computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method is proposed by fire simulations that allow for a controlled environment to analyze parameter influences, such as flame saturation, spatial resolution, frame rate, and random noise. PMID:23613042
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-12-17
Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
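The sensor-fusion idea reduces, in its simplest scalar form, to a Kalman measurement update; the sketch below fuses a predicted heading with a DGPS/Vision-style heading measurement. The numbers and variances are illustrative, not taken from the paper:

```python
# Minimal scalar Kalman measurement update, a stand-in for the paper's
# Extended Kalman Filter fusing the DGPS/Vision pseudo-measurement with
# inertial predictions. All values below are hypothetical.

def kalman_update(x_pred, p_pred, z, r):
    """Fuse prediction x_pred (variance p_pred) with measurement z (variance r)."""
    k = p_pred / (p_pred + r)        # Kalman gain
    x = x_pred + k * (z - x_pred)    # corrected state
    p = (1.0 - k) * p_pred           # reduced posterior variance
    return x, p

# Predicted heading 92 deg (variance 9), external heading 88 deg (variance 1)
x_new, p_new = kalman_update(92.0, 9.0, 88.0, 1.0)
```

The posterior variance is smaller than either input variance, which is the mechanism by which the virtual DGPS/Vision sensor improves attitude accuracy.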
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Chandan, Vikas
District heating and cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as it has several components, such as buildings, pipes, valves, and the heating source, interacting with each other. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models, where the coefficients of the equations from physics-based models are learned using data. This approach allows us to capture the dynamics of the buildings more effectively than a purely data-driven approach. Additionally, it results in simpler models than pure physics-based models. We first develop the individual components of the building model, such as the temperature evolution and the flow controller. These individual models are then integrated into the complete gray-box model for the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
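The paper does not reproduce its model equations, but a minimal gray-box sketch is a 1R1C (one-resistance, one-capacitance) thermal model whose discretized coefficients are learned from data by least squares. The model structure, symbols, and synthetic data below are assumptions for illustration:

```python
import math
import random

# Gray-box sketch: a 1R1C building model
#   T[k+1] = T[k] + a*(Tout[k] - T[k]) + b*Q[k]
# where a = dt/(R*C) and b = dt/C are learned from data by least squares.
# The paper's actual model structure is not given; this is illustrative.

def simulate(a, b, t0, tout, q):
    """Roll the discrete thermal model forward from initial temperature t0."""
    t = [t0]
    for k in range(len(q)):
        t.append(t[-1] + a * (tout[k] - t[-1]) + b * q[k])
    return t

def fit(t, tout, q):
    """Learn (a, b) by solving the 2x2 normal equations of the regression."""
    sxx = sxy = syy = sxz = syz = 0.0
    for k in range(len(q)):
        x = tout[k] - t[k]           # driving temperature difference
        y = q[k]                     # heat input
        z = t[k + 1] - t[k]          # observed temperature change
        sxx += x * x; sxy += x * y; syy += y * y
        sxz += x * z; syz += y * z
    det = sxx * syy - sxy * sxy
    return (sxz * syy - syz * sxy) / det, (syz * sxx - sxz * sxy) / det

random.seed(0)
n = 200
tout = [5 + 5 * math.sin(2 * math.pi * k / 48) for k in range(n)]  # outdoor temp
q = [random.uniform(0, 10) for _ in range(n)]                      # heating power
temps = simulate(0.1, 0.05, 20.0, tout, q)
a_hat, b_hat = fit(temps, tout, q)
```

With noiseless synthetic data the true coefficients (a = 0.1, b = 0.05) are recovered exactly, illustrating how physics supplies the structure and data supply the coefficients.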
3D tracking of laparoscopic instruments using statistical and geometric modeling.
Wolf, Rémi; Duchateau, Josselin; Cinquin, Philippe; Voros, Sandrine
2011-01-01
During a laparoscopic surgery, the endoscope can be manipulated by an assistant or a robot. Several teams have worked on the tracking of surgical instruments, based on methods ranging from the development of specific devices to image processing methods. We propose to exploit the instruments' insertion points, which are fixed on the patient's abdominal cavity, as a geometric constraint for the localization of the instruments. A simple geometric model of a laparoscopic instrument is described, as well as a parametrization that exploits a spherical geometric grid, which offers attractive homogeneity and isotropy properties. The general architecture of our proposed approach is based on the probabilistic Condensation algorithm.
NASA Astrophysics Data System (ADS)
Rau, Gabriel C.; Halloran, Landon J. S.; Cuthbert, Mark O.; Andersen, Martin S.; Acworth, R. Ian; Tellam, John H.
2017-09-01
Ephemeral and intermittent flow in dryland stream channels infiltrates into sediments, replenishes groundwater resources and underpins riparian ecosystems. However, the spatiotemporal complexity of the transitory flow processes that occur beneath such stream channels is poorly observed and understood. We develop a new approach to characterise the dynamics of surface water-groundwater interactions in dryland streams using pairs of temperature records measured at different depths within the streambed. The approach exploits the fact that the downward propagation of the diel temperature fluctuation from the surface depends on the sediment thermal diffusivity. This is controlled by time-varying fractions of air and water contained in streambed sediments, which cause a contrast in thermal properties. We demonstrate the usefulness of this method with multi-level temperature and pressure records of a flow event acquired using 12 streambed arrays deployed along a ∼ 12 km dryland channel section. Thermal signatures clearly indicate the presence of water and characterise the vertical flow component as well as the occurrence of horizontal hyporheic flow. We jointly interpret thermal signatures as well as surface and groundwater levels to distinguish four different hydrological regimes: [A] dry channel, [B] surface run-off, [C] pool-riffle sequence, and [D] isolated pools. The occurrence and duration of the regimes depend on the rate at which the infiltrated water redistributes in the subsurface which, in turn, is controlled by the hydraulic properties of the variably saturated sediment. Our results have significant implications for understanding how transitory flows recharge alluvial sediments, influence water quality and underpin dryland ecosystems.
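The core of such paired-sensor methods is the conduction solution for a periodic surface signal: the diel amplitude decays exponentially with depth at a rate set by the thermal diffusivity. Assuming pure conduction (the full method must also account for advection by vertical flow), the amplitude ratio between two depths inverts directly for diffusivity:

```python
import math

# Amplitude-ratio sketch for thermal diffusivity from a sensor pair,
# assuming conduction only. For a surface oscillation of period P the
# amplitude decays with depth as A(z) = A0 * exp(-z * sqrt(pi/(P*D))),
# so sensors separated by dz with amplitude ratio Ar = A_deep/A_shallow
# give D = pi * dz**2 / (P * (ln Ar)**2).

def diffusivity_from_amplitudes(a_shallow, a_deep, dz, period):
    ar = a_deep / a_shallow
    return math.pi * dz ** 2 / (period * math.log(ar) ** 2)

# Synthetic check: amplitudes of a damped diel signal at two depths
P = 86400.0                    # diel period [s]
D_true = 5e-7                  # thermal diffusivity [m^2/s], illustrative
k = math.sqrt(math.pi / (P * D_true))
z1, z2 = 0.05, 0.20            # sensor depths [m]
amp = lambda z: 5.0 * math.exp(-k * z)
D_est = diffusivity_from_amplitudes(amp(z1), amp(z2), z2 - z1, P)
```

Because the fraction of water in the pores changes the effective diffusivity, tracking D through time is what lets the thermal signatures above distinguish wet from dry regimes.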
Geometric saliency to characterize radar exploitation performance
NASA Astrophysics Data System (ADS)
Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve
2014-06-01
Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.
ERIC Educational Resources Information Center
Mbaziira, Alex Vincent
2017-01-01
Cybercriminals are increasingly using Internet-based text messaging applications to exploit their victims. Incidents of deceptive cybercrime in text-based communication are increasing and include fraud, scams, as well as favorable and unfavorable fake reviews. In this work, we use a text-based deception detection approach to train models for…
Landers, Monica; McGrath, Kimberly; Johnson, Melissa H; Armstrong, Mary I; Dollard, Norin
2017-01-01
Commercial sexual exploitation of children has emerged as a critical issue within child welfare, but little is currently known about this population or effective treatment approaches to address their unique needs. Children in foster care and runaways are reported to be vulnerable to exploitation because they frequently have unmet needs for family relationships, and they have had inadequate supervision and histories of trauma of which traffickers take advantage. The current article presents data on the demographic characteristics, trauma history, mental and behavioral health needs, physical health needs, and strengths collected on a sample of 87 commercially sexually exploited youth. These youth were served in a specialized treatment program in Miami-Dade County, Florida, for exploited youth involved with the child welfare system. Findings revealed that the youth in this study have high rates of previous sexual abuse (86% of the youth) and other traumatic experiences prior to their exploitation. Youth also exhibited considerable mental and behavioral health needs. Given that few programs emphasize the unique needs of children who have been sexually exploited, recommendations are offered for providing a continuum of specialized housing and treatment services to meet the needs of sexually exploited youth, based on the authors' experiences working with this population.
Assessing the Effectiveness of Web-Based Tutorials Using Pre-and Post-Test Measurements
ERIC Educational Resources Information Center
Guy, Retta Sweat; Lownes-Jackson, Millicent
2012-01-01
Computer technology in general and the Internet in particular have facilitated as well as motivated the development of Web-based tutorials (MacKinnon & Williams, 2006). The current research study describes a pedagogical approach that exploits the use of self-paced, Web-based tutorials for assisting students with reviewing grammar and mechanics…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
In this paper, a short-term load forecasting approach for network reconfiguration is proposed in a parallel manner. Specifically, a support vector regression (SVR)-based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
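The ADMM machinery the paper applies to the SOCP-relaxed power flow can be illustrated on a toy separable problem: consensus ADMM minimizing sum_i 0.5*(x - a_i)^2, whose solution is the average of the a_i. The local updates are independent and hence parallelizable, which is the property exploited above; the problem and numbers are illustrative stand-ins, not the paper's OPF:

```python
# Consensus ADMM on a toy separable objective
#   minimize sum_i 0.5 * (x - a_i)^2
# whose optimum is the average of the a_i. Each "agent" i updates its
# local copy x_i independently (parallelizable step); a coordinator
# averages and updates the scaled duals u_i.

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n        # local variables
    u = [0.0] * n        # scaled dual variables
    z = 0.0              # consensus variable
    for _ in range(iters):
        # local updates: argmin of 0.5*(x-a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n
        u = [u[i] + x[i] - z for i in range(n)]
    return z

loads = [3.0, 5.0, 10.0, 2.0]     # hypothetical per-agent data
z_star = consensus_admm(loads)
```

The same update pattern (parallel local solves, then a cheap coordination step) is what makes distributed optimal power flow tractable across feeders.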
Oh, Hyuntaek; Yaraghi, Nicholas; Raghavan, Srinivasa R
2015-05-19
Molecular organogelators convert oils into gels by forming self-assembled fibrous networks. Here, we demonstrate that such gelation can be activated by contacting the oil with an immiscible solvent (water). Our gelator is dibenzylidene sorbitol (DBS), which forms a low-viscosity sol when added to toluene containing a small amount of dimethyl sulfoxide (DMSO). Upon contact with water, DMSO partitions into the water, activating gelation of DBS in the toluene. The gel grows from the oil/water interface and slowly envelops the oil phase. We have exploited this effect for the self-repair of oil leaks from underwater tubes. When a DBS/toluene/DMSO solution flows through the tube, it forms a gel selectively at the leak point, thereby plugging the leak and restoring flow. Our approach is reminiscent of wound-sealing via blood-clotting: there also, inactive gelators in blood are activated at the wound site into a fibrous network, thereby plugging the wound and restoring blood flow.
Combination of Kinematics with Flow Visualization to Compute Total Circulation
NASA Technical Reports Server (NTRS)
Brasseur, J. G.; Chang, I-Dee
1981-01-01
A method is described in which kinematics is exploited to compute the total circulation of a vortex from relatively simple flow visualization experiments. The technique has several advantages, including the newly acquired ability to calculate the changes in strength of a single vortex as it evolves. The main concepts and methodology are discussed in a general way for application to vortices which carry along with them definable regions of essentially irrotational fluid; however, the approach might be generalized to other flows which contain regions of concentrated vorticity. As an illustrative example, an application to the study of the transient changes in total circulation of individual vortex rings as they travel up a tube is described, taking into account the effect of the tube boundary. The accuracy of the method, assessed in part by a direct comparison with a laser Doppler measurement, is felt to be well within experimental precision for vortex rings over a wide range of Reynolds numbers.
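The kinematic principle is that circulation is the line integral of velocity around a closed contour, Gamma = ∮ u · dl, independent of the contour so long as it encloses the vorticity. A sketch on an ideal line vortex (an assumed flow for illustration, not the vortex-ring experiment above):

```python
import math

# Circulation as a discrete contour integral, Gamma = \oint u . dl,
# evaluated for an ideal line vortex of known strength at the origin.
# Any circle enclosing the vortex must return the same circulation.

GAMMA = 2.5   # vortex strength [m^2/s], illustrative

def velocity(x, y):
    """Velocity field of an ideal line vortex at the origin."""
    r2 = x * x + y * y
    return (-GAMMA * y / (2 * math.pi * r2), GAMMA * x / (2 * math.pi * r2))

def circulation(radius, n=1000):
    """Discrete line integral of u . dl around a circle of given radius."""
    total, dth = 0.0, 2 * math.pi / n
    for i in range(n):
        th = 2 * math.pi * (i + 0.5) / n
        x, y = radius * math.cos(th), radius * math.sin(th)
        u, v = velocity(x, y)
        dx, dy = -radius * math.sin(th) * dth, radius * math.cos(th) * dth
        total += u * dx + v * dy
    return total

gamma_est = circulation(0.7)
```

The contour-independence of the result is what allows a contour drawn from flow-visualization images (rather than detailed velocimetry) to recover the total circulation.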
Concurrency-based approaches to parallel programming
NASA Technical Reports Server (NTRS)
Kale, L.V.; Chrisochoides, N.; Kohl, J.; Yelick, K.
1995-01-01
The inevitable transition to parallel programming can be facilitated by appropriate tools, including languages and libraries. After describing the needs of applications developers, this paper presents three specific approaches aimed at the development of efficient and reusable parallel software for irregular and dynamic-structured problems. A salient feature of all three approaches is their exploitation of concurrency within a processor. Benefits of individual approaches such as these can be leveraged by an interoperability environment which permits modules written using different approaches to co-exist in single applications.
Chambon, Stanislas; Galtier, Mathieu N; Arnal, Pierrick J; Wainrib, Gilles; Gramfort, Alexandre
2018-04-01
Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of signal, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG, 2 EOG (left and right), and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
Game theoretic approach for cooperative feature extraction in camera networks
NASA Astrophysics Data System (ADS)
Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco
2016-07-01
Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
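The Nash bargaining solution maximizes the product of utility gains over the disagreement point. As a hypothetical two-camera sketch (the paper's actual utilities involve energy and accuracy terms not reproduced here), splitting a divisible saving S between cameras with disagreement utilities d1 and d2 gives each camera its disagreement value plus half the surplus:

```python
# Toy Nash bargaining solution (NBS): two cameras split a divisible
# energy saving S from sharing the analysis of an overlapping view.
# The NBS maximizes (u1 - d1)*(u2 - d2) subject to u1 + u2 = S, which
# awards each camera its disagreement utility plus half the surplus.
# All quantities below are hypothetical.

def nash_bargaining_split(total, d1, d2, steps=10000):
    """Grid-search the NBS allocation (u1, u2) with u1 + u2 = total."""
    best, best_val = None, -float("inf")
    for i in range(steps + 1):
        u1 = total * i / steps
        u2 = total - u1
        val = (u1 - d1) * (u2 - d2)
        if val > best_val:
            best_val, best = val, (u1, u2)
    return best

# Disagreement point: camera 1 saves 2.0, camera 2 saves 4.0 working alone
u1, u2 = nash_bargaining_split(10.0, d1=2.0, d2=4.0)
```

Here the surplus 10 - (2 + 4) = 4 is split equally, so the NBS is (4, 6); in the paper the same fairness criterion trades consumed energy against analysis accuracy across cameras.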
From the speed of sound to the speed of light: Ultrasonic Cherenkov refractometry
NASA Astrophysics Data System (ADS)
Hallewell, G. D.
2017-12-01
Despite its success in the SLD CRID at the SLAC Linear Collider, ultrasonic measurement of Cherenkov radiator refractive index has been less fully exploited in more recent Cherenkov detectors employing gaseous radiators. This is surprising, since it is ideally suited to monitoring hydrostatic variations in refractive index as well as its evolution during the replacement of a light radiator passivation gas (e.g. N2, CO2) with a heavier fluorocarbon (e.g. C4F10[CF4]; mol. wt. 188[88]). The technique exploits the dependence of sound velocity on the molar concentrations of the two components at known temperature and pressure. The SLD barrel CRID used an 87%C5F12/13%N2 blend, mixed before injection into the radiator vessel: blend control based on ultrasonic mixture analysis maintained the β=1 Cherenkov ring angle to a long-term variation better than ±0.3%, with refractivity monitored ultrasonically at multiple points within the radiator vessel. Recent advances using microcontroller-based electronics have led to ultrasonic instruments capable of simultaneously measuring gas flow and binary mixture composition in the fluorocarbon evaporative cooling systems of the ATLAS Inner Detector. Sound transit times are measured with multi-MHz transit time clocks in opposite directions in flowing gas for simultaneous measurement of flow rate and sound velocity. Gas composition is evaluated in real time by comparison with a sound velocity/composition database. Such instruments could be incorporated into new and upgraded gas Cherenkov detectors for radiator gas mixture (and corresponding refractive index) measurement to a precision better than 10^-3. They have other applications in binary gas analysis, including xenon-based anaesthesia. These possibilities are discussed.
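Under ideal-gas assumptions the dependence of sound velocity on binary composition can be inverted directly, since c = sqrt(γRT/M) with the molar mass and heat capacities mixing linearly. The gas property values below are illustrative stand-ins (real instruments, as described above, compare against a measured velocity/composition database):

```python
import math

R = 8.314462618   # universal gas constant [J/(mol K)]

# Ideal-gas sketch of sound-velocity-based binary gas analysis:
# c = sqrt(gamma * R * T / M), with M and Cp mixing linearly in the
# molar fraction x, makes c a monotonic function of x that bisection
# can invert. Gas properties below are illustrative assumptions.

def sound_speed(x, m1, m2, cp1, cp2, temp):
    """Sound speed for molar fraction x of component 1."""
    m = x * m1 + (1.0 - x) * m2            # mixture molar mass [kg/mol]
    cp = x * cp1 + (1.0 - x) * cp2         # mixture molar heat capacity
    gamma = cp / (cp - R)
    return math.sqrt(gamma * R * temp / m)

def composition_from_speed(c, m1, m2, cp1, cp2, temp):
    """Invert the monotonic speed-composition map by bisection."""
    rising = sound_speed(1.0, m1, m2, cp1, cp2, temp) > sound_speed(0.0, m1, m2, cp1, cp2, temp)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (sound_speed(mid, m1, m2, cp1, cp2, temp) < c) == rising:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: heavy fluorocarbon (M ~ 0.288 kg/mol, Cp ~ 250 J/mol/K,
# both assumed values) blended with N2 at 300 K
ARGS = (0.288, 0.028, 250.0, 29.1, 300.0)
c_meas = sound_speed(0.87, *ARGS)
x_est = composition_from_speed(c_meas, *ARGS)
```

Because the heavy component slows the sound dramatically, a small composition change produces a large, easily resolved velocity change, which is why the technique reaches sub-10^-3 precision.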
Unsupervised chunking based on graph propagation from bilingual corpus.
Zhu, Ling; Wong, Derek F; Chao, Lidia S
2014-01-01
This paper presents a novel approach for an unsupervised shallow parsing model trained on the unannotated Chinese text of a parallel Chinese-English corpus. In this approach, no annotated data on the Chinese side is used. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with the use of the projected labels as features in the unsupervised model, contributes to a better performance. Experimental comparisons with state-of-the-art algorithms show that the proposed approach achieves higher accuracy in terms of F-score.
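Generic graph-based label propagation, the machinery used above for bilingual knowledge transfer, can be sketched as follows: labeled seed nodes are clamped, and every other node repeatedly averages its neighbors' label distributions. The toy graph and tag set are hypothetical:

```python
# Sketch of graph-based label propagation. Seed nodes keep their label
# distributions fixed; unlabeled nodes iteratively average their
# neighbors' distributions until convergence. The graph and tag set
# below are hypothetical, not the paper's bilingual graph.

def propagate(edges, seeds, labels, iters=200):
    """edges: node -> list of neighbors; seeds: node -> label index."""
    n_labels = len(labels)
    dist = {v: [1.0 / n_labels] * n_labels for v in edges}
    for v, lab in seeds.items():
        dist[v] = [1.0 if i == lab else 0.0 for i in range(n_labels)]
    for _ in range(iters):
        new = {}
        for v, nbrs in edges.items():
            if v in seeds:
                new[v] = dist[v]                  # clamp seed nodes
            else:
                acc = [0.0] * n_labels
                for u in nbrs:
                    for i in range(n_labels):
                        acc[i] += dist[u][i]
                s = sum(acc)
                new[v] = [a / s for a in acc]     # renormalize
        dist = new
    return {v: labels[max(range(n_labels), key=d.__getitem__)] for v, d in dist.items()}

# A chain a - b - c - d with labeled endpoints
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
tags = propagate(graph, seeds={"a": 0, "d": 1}, labels=["NP", "VP"])
```

At convergence each unlabeled node's distribution is the harmonic average of its neighbors, so nodes inherit the label of the nearer seed; in the paper the seeds come from English annotations projected across word alignments.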
Action change detection in video using a bilateral spatial-temporal constraint
NASA Astrophysics Data System (ADS)
Tian, Jing; Chen, Li
2016-08-01
Action change detection aims to detect discontinuities in the actions performed in a video. Silhouette-based features are desirable for action change detection. This paper studies the problem of silhouette-quality assessment. A no-reference approach, requiring no ground truth, is proposed to evaluate the quality of silhouettes by exploiting both the boundary contrast of the silhouettes in the spatial domain and the consistency of the silhouettes in the temporal domain. This contrasts with conventional approaches, which exploit either only spatial or only temporal information about the silhouettes. Experiments are conducted using artificially generated degraded silhouettes to show that the proposed approach outperforms conventional approaches, achieving more accurate quality assessment. Furthermore, experiments show that the proposed approach improves the accuracy of conventional action change detection approaches on two human action video data sets. The average runtime of the proposed approach on the Weizmann action video data set is 0.08 s per frame in Matlab; it is computationally efficient and amenable to real-time implementation.
Mancier, Valérie; Leclercq, Didier
2007-02-01
Two new methods for determining the power dissipated in an aqueous medium by an ultrasound generator were developed. Both are based on a heat flow sensor inserted between a tank and a heat sink, which allows the power passing through the sensor to be measured directly. To be exploitable, the first method requires waiting for the heat flow to become stationary. The second, extrapolated from the first, makes it possible to determine the dissipated power in only five minutes. Finally, the results obtained with the flowmetric method are compared to the classical calorimetric ones.
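The abstract does not give the extrapolation formula, but if the sensor's response to a constant dissipated power is assumed to be first-order (an exponential approach to steady state), three equally spaced readings suffice to extrapolate the steady-state power without waiting for it, which is plausibly how a five-minute determination can work:

```python
import math

# Hypothetical extrapolation sketch: assuming the heat-flow reading
# follows a first-order response f(t) = P * (1 - exp(-t/tau)) toward
# the dissipated power P, three readings at equal spacing dt satisfy
#   (f3 - f2) / (f2 - f1) = exp(-dt/tau)
# which lets P be recovered before steady state is reached.

def extrapolate_power(f1, f2, f3):
    """Three equally spaced heat-flux readings -> steady-state power."""
    r = (f3 - f2) / (f2 - f1)     # equals exp(-dt/tau)
    return f1 / (1.0 - r)

# Synthetic first-order response approaching P = 40 W (tau = 10 min)
P, tau = 40.0, 600.0
reading = lambda t: P * (1.0 - math.exp(-t / tau))
P_est = extrapolate_power(reading(60), reading(120), reading(180))
```

Three minutes of data recover the 40 W steady-state value exactly for a first-order response; the published method's actual extrapolation may differ.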
Supersampling and Network Reconstruction of Urban Mobility.
Sagarra, Oleguer; Szell, Michael; Santi, Paolo; Díaz-Guilera, Albert; Ratti, Carlo
2015-01-01
Understanding human mobility is of vital importance for urban planning, epidemiology, and many other fields that draw policies from the activities of humans in space. Despite the recent availability of large-scale data sets of GPS traces or mobile phone records capturing human mobility, typically only a subsample of the population of interest is represented, giving a possibly incomplete picture of the entire system under study. Methods to reliably extract mobility information from such reduced data and to assess their sampling biases are lacking. To that end, we analyzed a data set of millions of taxi movements in New York City. We first show that, once they are appropriately transformed, mobility patterns are highly stable over long time scales. Based on this observation, we develop a supersampling methodology to reliably extrapolate mobility records from a reduced sample based on an entropy maximization procedure, and we propose a number of network-based metrics to assess the accuracy of the predicted vehicle flows. Our approach provides a well founded way to exploit temporal patterns to save effort in recording mobility data, and opens the possibility to scale up data from limited records when information on the full system is required.
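For the simplest case, with only the total trip count constrained, maximum-entropy supersampling reduces to proportional scaling of the sampled origin-destination flows; a Sørensen-style similarity is one common network-based way to score the reconstruction. The OD counts below are hypothetical, and this sketch omits the richer entropy-maximization machinery the paper uses:

```python
# Toy supersampling sketch: with only the full-population trip total
# constrained, the maximum-entropy extrapolation of a sampled
# origin-destination (OD) matrix is proportional scaling. A Sorensen
# similarity then scores predicted against true flows. All counts
# below are hypothetical.

def supersample(sampled_flows, total_trips):
    """Scale sampled OD counts so they sum to the full-population total."""
    s = sum(sampled_flows.values())
    return {od: total_trips * c / s for od, c in sampled_flows.items()}

def sorensen_similarity(pred, true):
    """2*sum(min)/(sum(pred)+sum(true)) over OD pairs; 1.0 is perfect."""
    keys = set(pred) | set(true)
    common = sum(min(pred.get(k, 0.0), true.get(k, 0.0)) for k in keys)
    return 2.0 * common / (sum(pred.values()) + sum(true.values()))

sample = {("A", "B"): 30, ("B", "C"): 50, ("A", "C"): 20}   # a 10% sample
pred = supersample(sample, total_trips=1000)
truth = {("A", "B"): 320, ("B", "C"): 480, ("A", "C"): 200}
score = sorensen_similarity(pred, truth)
```

The temporal stability of mobility patterns reported above is what justifies reusing sampled proportions across time; mismatch between predicted and true flows shows up as a similarity below 1.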
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Antonacci, M.; Bagnasco, S.; Boccali, T.; Bucchi, R.; Caballer, M.; Costantini, A.; Donvito, G.; Gaido, L.; Italiano, A.; Michelotto, D.; Panella, M.; Salomoni, D.; Vallero, S.
2017-10-01
One of the challenges a scientific computing center has to face is to keep delivering well-consolidated computational frameworks (i.e. the batch computing farm), while conforming to modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. Within the INDIGO-DataCloud project, we adopt two different approaches to implement a PaaS-level, on-demand Batch Farm Service based on HTCondor and Mesos. In the first approach, described in this paper, the various HTCondor daemons are packaged inside pre-configured Docker images and deployed as Long Running Services through Marathon, profiting from its health checks and failover capabilities. In the second approach, we are going to implement an ad-hoc HTCondor framework for Mesos. Container-to-container communication and isolation have been addressed by exploring a solution based on overlay networks (based on the Calico Project). Finally, we have studied the possibility to deploy an HTCondor cluster that spans different sites, exploiting the Condor Connection Broker component, which allows communication across a private network boundary or firewall, as in the case of multi-site deployments. In this paper, we describe and motivate our implementation choices and show the results of the first tests performed.
Contemporary management issues confronting fisheries science
NASA Astrophysics Data System (ADS)
Frank, Kenneth T.; Brickman, David
2001-06-01
Stock collapses have occurred worldwide. The most frequently cited cause is over-fishing, suggesting that fisheries management has been ineffective in controlling exploitation rates. The progression of a fishery from an over-exploited to a collapsed state involves impairment of the reproductive capacity of the target species, i.e. recruitment over-fishing. In many cases, this occurs by reduction of the spawning stock biomass (SSB) through the systematic elimination of spawning components within a stock complex. While operational definitions of minimum levels of SSB have been developed, they have seldom been applied and never adopted in a Canadian groundfish management context. The answer to the question of how much is enough to perpetuate a stock under exploitation has been elusive. Serebryakov [J. Cons. Int. Explor. Mer, 47 (1990) 267] has advocated definition of critical levels of SSB based on survival rates (R/SSB). We review his method and discuss the utility of the approach. An alternative approach to the problem of estimating minimum SSB is through a fundamental revision of the traditional stock and recruitment relationship. Defining explicit theoretical SSB thresholds, below which reproduction/recruitment is severely impaired through density-dependent mating success (or Allee effects), is considered a superior approach to the question of how much is enough because of its ecological grounding. However, the successful application of this approach will require re-definition of the space/time scales of the management unit. Finally, support is growing for the establishment of closed areas or "no-take zones" as an alternative approach to managing the problems of fishing a stock complex by enabling sub-populations to escape fishing. While the expected benefits of areas protected from fishing are numerous, clear demonstrations of benefits of such areas in marine temperate ecosystems are lacking. In fact, unintended negative consequences may result from such actions.
Flow Physics and Control for Internal and External Aerodynamics
NASA Technical Reports Server (NTRS)
Wygnanski, I.
2010-01-01
Exploiting instabilities rather than forcing the flow is advantageous. Simple 2D concepts may not always work. Nonlinear effects may result in first order effect. Interaction between spanwise and streamwise vortices may have a paramount effect on the mean flow, but this interaction may not always be beneficial.
NASA Astrophysics Data System (ADS)
Erler, Engin
Tip clearance flow is the flow through the clearance between the rotor blade tip and the shroud of a turbomachine, such as compressors and turbines. This flow is driven by the pressure difference across the blade (aerodynamic loading) in the tip region and is a major source of loss in performance and aerodynamic stability in axial compressors of modern aircraft engines. An increase in tip clearance, either temporary due to differential radial expansion between the blade and the shroud during transient operation or permanent due to engine wear or manufacturing tolerances on small blades, increases tip clearance flow and results in higher fuel consumption and higher risk of engine surge. A compressor design that can reduce the sensitivity of its performance and aerodynamic stability to tip clearance increase would have a major impact on short and long-term engine performance and operating envelope. While much research has been carried out on improving nominal compressor performance, little has been done on desensitization to tip clearance increase beyond isolated observations that certain blade designs, such as those with forward chordwise sweep, seem to be less sensitive to tip clearance size increase. The current project aims to identify through a computational study the flow features and associated mechanisms that reduce the sensitivity of axial compressor rotors to tip clearance size, and to propose blade design strategies that can exploit these results. The methodology starts with the design of a reference conventional axial compressor rotor, followed by a parametric study with variations of this reference design through modification of the camber line and of the stacking line of blade profiles along the span. It is noted that a simple desensitization method would be to reduce the aerodynamic loading of the blade tip, which would reduce the tip clearance flow and its proportional contribution to performance loss.
However, since the larger part of the work on the flow is done in this region, this approach would entail a nominal performance penalty. Therefore, the chosen rotor design philosophy aims to keep the spanwise loading constant to avoid trading performance for desensitization. The rotor designs that resulted from this exercise are simulated in ANSYS CFX at different tip clearance sizes. The change in their performance with respect to tip clearance size (sensitivity) is compared both on an integral level, in terms of pressure ratio and adiabatic efficiency, and on a detailed level, in terms of aerodynamic losses and blockage associated with tip clearance flow. The sensitivity of aerodynamic stability is evaluated either directly, through simulations of the rotor characteristics up to the stall point (expensive in time and resources) for a few designs, or indirectly, through the position of the interface between the incoming and tip clearance flows with respect to the rotor leading edge plane. The latter approach is based on a generally observed stall criterion in modern axial compressors. The rotor designs are then assessed according to their sensitivity, in comparison to that of the reference rotor design, to detect features that can explain the trend in sensitivity to tip clearance size. These features can then be validated, and the associated flow mechanisms explained, through numerical simulations and modelling. Analysis of the database from the rotor parametric study shows that the observed trend in sensitivity cannot be explained by the shifting of the aerodynamic loading along the blade chord, as initially hypothesized based on the literature review. Instead, two flow features are found to reduce the sensitivity of performance and stability to tip clearance, namely an increase in incoming meridional momentum in the tip region and a reduction or elimination of double leakage flow. 
Double leakage flow is the flow that exits the tip clearance of one blade and proceeds into the clearance of the adjacent blade rather than convecting downstream out of the local blade passage. These flow features are isolated and validated on the reference rotor design through changes in the inlet total pressure condition, to alter the incoming flow momentum, and in the blade count, to change the double leakage rate. In terms of flow mechanism, double leakage is shown to be detrimental to performance and stability, and its proportional increase with tip clearance size explains the sensitivity increase in the presence of double leakage and, conversely, the desensitization effect of reducing or eliminating it. The increase in incoming meridional momentum in the tip region reduces sensitivity to tip clearance through its reduction of double leakage as well as through improved mixing with the tip clearance flow, as demonstrated by an analytical model without double leakage flow. These results imply that any blade design strategy that exploits the two desensitizing flow features will reduce the sensitivity of performance and stability to tip clearance size. The increase in incoming meridional momentum can be achieved through forward chordwise sweep of the blade. The reduction of double leakage without changing blade pitch can be obtained by decreasing the blade stagger angle in the tip region. Examples of blade designs based on these strategies are shown through CFX simulations to be successful in reducing sensitivity to tip clearance size.
NASA Astrophysics Data System (ADS)
Gawior, D.; Rutkiewicz, P.; Malik, I.; Wistuba, M.
2017-11-01
LiDAR data provide new insights into the historical development of the mining industry recorded in topography and landscape. In a study of lead ore mining in the 13th-17th centuries, we identified remnants of mining activity in the relief that are normally obscured by dense vegetation. The industry on the Tarnowice Plateau was based on exploitation of galena from the bedrock. New technologies, including DEMs derived from airborne LiDAR, show that the present landscape and relief of the post-mining area under study developed during several subsequent phases of exploitation, in which different extraction techniques were used and probably different types of ores were exploited. The study conducted on the Tarnowice Plateau proved that combining GIS visualization techniques with historical maps, above all geological maps, is a promising approach for reconstructing the development of anthropogenic relief and landscape.
Heterogeneous immunoassays using magnetic beads on a digital microfluidic platform.
Sista, Ramakrishna S; Eckhardt, Allen E; Srinivasan, Vijay; Pollack, Michael G; Palanki, Srinivas; Pamula, Vamsee K
2008-12-01
A digital microfluidic platform for performing heterogeneous sandwich immunoassays based on efficient handling of magnetic beads is presented in this paper. This approach is based on manipulation of discrete droplets of samples and reagents using electrowetting without the need for channels where the droplets are free to move laterally. Droplet-based manipulation of magnetic beads therefore does not suffer from clogging of channels. Immunoassays on a digital microfluidic platform require the following basic operations: bead attraction, bead washing, bead retention, and bead resuspension. Several parameters such as magnetic field strength, pull force, position, and buffer composition were studied for effective bead operations. Dilution-based washing of magnetic beads was demonstrated by immobilizing the magnetic beads using a permanent magnet and splitting the excess supernatant using electrowetting. Almost 100% bead retention was achieved after 7776-fold dilution-based washing of the supernatant. Efficient resuspension of magnetic beads was achieved by transporting a droplet with magnetic beads across five electrodes on the platform and exploiting the flow patterns within the droplet to resuspend the beads. All the magnetic-bead droplet operations were integrated together to generate standard curves for sandwich heterogeneous immunoassays on human insulin and interleukin-6 (IL-6) with a total time to result of 7 min for each assay.
Engineering Knowledge for Assistive Living
NASA Astrophysics Data System (ADS)
Chen, Liming; Nugent, Chris
This paper introduces a knowledge based approach to assistive living in smart homes. It proposes a system architecture that makes use of knowledge in the lifecycle of assistive living. The paper describes ontology based knowledge engineering practices and discusses mechanisms for exploiting knowledge for activity recognition and assistance. It presents system implementation and experiments, and discusses initial results.
Flow and Noise Control: Toward a Closer Linkage
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Choudhari, Meelan M.; Joslin, Ronald D.
2002-01-01
Motivated by growing demands for aircraft noise reduction and for revolutionary new aerovehicle concepts, the late twentieth century witnessed the beginning of a shift from single-discipline research toward an increased emphasis on harnessing the potential of flow and noise control as implemented in a more fully integrated, multidisciplinary framework. At the same time, technologies for developing radically new aerovehicles, which promise quantum-leap benefits in cost, safety and performance along with environmental friendliness, have appeared on the horizon. Transitioning new technologies to commercial applications will also require coupling further advances in traditional areas of aeronautics with intelligent exploitation of nontraditional and interdisciplinary technologies. Physics-based modeling and simulation are crucial enabling capabilities for synergistic linkage of flow and noise control. In these very fundamental ways, flow and noise control are being driven to be more closely linked during the early design phases of a vehicle concept for optimal and mutual noise and performance benefits.
NASA Astrophysics Data System (ADS)
Seraphin, Pierre; Vallet-Coulomb, Christine; Gonçalvès, Julio
2016-04-01
Traditional flood irrigation has been used since the 16th century in the Crau plain (Southern France) for hay production. To supply this high-consuming irrigation practice, water is diverted from the Durance River, originating in the Alps, and the large amount of irrigation return flow constitutes the main recharge of the Crau aquifer, which is in turn heavily exploited for domestic, industrial and agricultural water use. A possible reduction of irrigation fluxes, due to a need for water saving or to a future land-use change, could endanger the groundwater resource. A robust quantification of the groundwater mass balance is thus required to ensure sustainable water management in the region. The high isotopic contrast between these exogenous irrigation waters and local precipitation allows the use of stable isotopes of water as conservative tracers to deduce their contributions to the surface recharge. An extensive groundwater sampling campaign was performed to obtain δ18O and δ2H over the whole aquifer. Based on a new piezometric contour map, combined with a re-estimate of the aquifer geometry, the isotopic data are implemented in a geostatistical approach to produce a conceptual equivalent homogeneous reservoir, in order to apply a simple water and isotope mass balance mixing model. The isotopic composition of the two end-members is assessed, and the quantification of groundwater flows is then used to calculate the two recharge fluxes. Under near steady-state conditions, the set of isotopic data treated by geostatistics leads to a recharge by irrigation of 5.20 ± 0.93 m3 s-1, i.e. 1173 ± 210 mm yr-1, and a natural recharge of 2.26 ± 0.91 m3 s-1, i.e. 132 ± 53 mm yr-1. Thus, 70 ± 9% of the effective surface recharge comes from irrigation return flow, consistent with the literature (between 67% and 78%). 
This study constitutes a straightforward and independent approach to assess groundwater surface recharges with uncertainties and will help to constrain a future transient groundwater flow model of the Crau aquifer.
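The two-end-member mass balance underlying this partitioning is compact enough to sketch in code. The δ18O values below are hypothetical placeholders, not the study's measured compositions; only the conservative-tracer mixing relation itself is standard.

```python
# Two-end-member isotope mass-balance mixing, as used to partition
# aquifer recharge between irrigation return flow and local precipitation.
# All delta values below are illustrative, not the paper's data.

def mixing_fraction(d_gw, d_irr, d_precip):
    """Fraction of recharge supplied by the irrigation end-member,
    from a conservative-tracer (d18O) mass balance."""
    return (d_gw - d_precip) / (d_irr - d_precip)

# Hypothetical d18O values (permil vs VSMOW)
d18O_groundwater = -8.5   # equivalent homogeneous reservoir
d18O_irrigation = -9.5    # Alpine Durance River water
d18O_precip = -6.0        # local precipitation

f_irr = mixing_fraction(d18O_groundwater, d18O_irrigation, d18O_precip)
print(f"irrigation share of recharge: {f_irr:.2f}")
```

Multiplying such a fraction by the total recharge flux then yields the two recharge components separately.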
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1999-01-01
Compressible and incompressible versions of a three-dimensional unstructured-mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparison with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. None of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. 
An efficient surface parameterization based on a free-form deformation technique has been utilized, and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations are shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing. For large-scale optimization to become routine, the benefits of parallel architectures should be exploited. Although the flow solver has been parallelized using compiler directives, the parallel efficiency is under 50 percent. Clearly, parallel versions of the codes will have an immediate impact on the ability to design realistic configurations on fine meshes, and this effort is currently underway.
Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán
2017-04-24
We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type, as opposed to single-type, descriptors, we obtain more relevant features for machine learning. Following the principle of the "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels, i.e. more than one kernel function for a set of input descriptors, MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r² = 0.92 on a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
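The kernel combination described in this abstract can be illustrated with a minimal kernel ridge regression sketch: a Tanimoto kernel over binary fingerprints plus a linear kernel over continuous descriptors. The equal weighting, regularization value, and data are assumptions for illustration, not the paper's actual training setup.

```python
import numpy as np

def tanimoto_kernel(A, B):
    """Tanimoto similarity between rows of binary fingerprint matrices."""
    inter = A @ B.T
    norm = A.sum(1)[:, None] + B.sum(1)[None, :] - inter
    return inter / np.maximum(norm, 1e-12)

def linear_kernel(X, Y):
    return X @ Y.T

def multidk_fit_predict(Fp, Xc, y, Fp_test, Xc_test, w=0.5, lam=1e-3):
    """Kernel ridge regression with a weighted sum of a Tanimoto kernel
    (binary descriptors Fp) and a linear kernel (non-binary descriptors Xc).
    Weight w and ridge lam are illustrative choices."""
    K = w * tanimoto_kernel(Fp, Fp) + (1 - w) * linear_kernel(Xc, Xc)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    K_test = w * tanimoto_kernel(Fp_test, Fp) + (1 - w) * linear_kernel(Xc_test, Xc)
    return K_test @ alpha
```

In practice the kernel weights and regularization would be tuned by cross-validation rather than fixed as here.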
Clarke, David J; Stokes, Adam A; Langridge-Smith, Pat; Mackay, C Logan
2010-03-01
We have developed an automated quench-flow microreactor which interfaces directly to an electrospray ionization (ESI) mass spectrometer. We have used this device in conjunction with ESI Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) to demonstrate the potential of this approach for studying the mechanistic details of enzyme reactions. For the model system chosen to test this device, namely, the pre-steady-state hydrolysis of p-nitrophenyl acetate by the enzyme chymotrypsin, the kinetic parameters obtained are in good agreement with those in the literature. To our knowledge, this is the first reported use of online quench-flow coupled with FTICR MS. Furthermore, we have exploited the power of FTICR MS to interrogate the quenched covalently bound enzyme intermediate using top-down fragmentation. The accurate mass capabilities of FTICR MS permitted the nature of the intermediate to be assigned with high confidence. Electron capture dissociation (ECD) fragmentation allowed us to locate the intermediate to a five amino acid section of the protein--which includes the known catalytic residue, Ser(195). This experimental approach, which uniquely can provide both kinetic and chemical details of enzyme mechanisms, is a potentially powerful tool for studies of enzyme catalysis.
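The pre-steady-state hydrolysis of p-nitrophenyl acetate by chymotrypsin is classically described by a "burst" model: fast product release during acyl-enzyme formation followed by linear steady-state turnover. A sketch of that textbook model (the parameter values in any call are illustrative, not the kinetic constants obtained in this work):

```python
import math

def burst_kinetics(t, amplitude, k_burst, v_ss):
    """Textbook pre-steady-state 'burst' model: product concentration
    rises as a fast exponential (acyl-enzyme formation) superposed on
    a linear steady-state rate. Illustrative sketch only."""
    return amplitude * (1.0 - math.exp(-k_burst * t)) + v_ss * t
```

Fitting quenched time points to this form yields the burst amplitude and rate constants that the quench-flow experiments compare against literature values.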
NASA Astrophysics Data System (ADS)
Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico
2017-07-01
Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping debris flow susceptibility is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based spatially distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite slope stability model is combined with the FLO-2D model for simulation of the related debris flow propagation and deposition. An empirical instability-to-debris-flow triggering threshold, calibrated on the basis of observed events, is applied to link the two models and to determine the amount of unstable mass that develops into a debris flow. Calibration of the proposed methodology is carried out using real data from the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver operating characteristic (ROC) indexes, shows fairly good reproduction of the observed event. Comparison with the performance of the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped at selected points on top of the streams, is also performed, in order to quantitatively assess the limitations of such a commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris flow phenomena, in terms of spatial patterns of erosion and deposition as well as of the quantification of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of maximum inundation flow depths.
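The slope stability half of such a model chain rests on the infinite-slope factor of safety. A sketch of the standard form, with a simple saturation ratio m and illustrative unit weights; TRIGRS's actual formulation additionally handles transient infiltration-driven pore pressures, which are not modeled here:

```python
import math

def factor_of_safety(c_eff, phi_deg, slope_deg, z, m,
                     gamma_s=19e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety.
    c_eff: effective cohesion (Pa); phi_deg: friction angle;
    slope_deg: slope angle; z: failure depth (m);
    m: saturation ratio h_w/z in [0, 1].
    Unit weights (N/m^3) are illustrative defaults."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c_eff + (gamma_s - m * gamma_w) * z * math.cos(beta)**2 * math.tan(phi)
    driving = gamma_s * z * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

Cells where the factor of safety drops below 1 during a rainfall scenario are the candidates that a triggering threshold would then pass to the propagation model as debris flow source volume.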
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression, based on cubic spline approximation, is taken and extended by investigating the additional compressibility achievable through exploitation of curve-to-curve structure. One of the models under investigation is reported on.
Generalized serial search code acquisition - The equivalent circular state diagram approach
NASA Technical Reports Server (NTRS)
Polydoros, A.; Simon, M. K.
1984-01-01
A transform-domain method for deriving the generating function of the acquisition process resulting from an arbitrary serial search strategy is presented. The method relies on equivalent circular state diagrams, uses Mason's formula from flow-graph theory, and employs a minimum number of required parameters. The transform-domain approach is briefly described and the concept of equivalent circular state diagrams is introduced and exploited to derive the generating function and resulting mean acquisition time for three particular cases of interest, the continuous/center Z search, the broken/center Z search, and the expanding window search. An optimization of the latter technique is performed whereby the number of partial windows which minimizes the mean acquisition time is determined. The numerical results satisfy certain intuitive predictions and provide useful design guidelines for such systems.
Convective heat transfer and infrared thermography.
Carlomagno, Giovanni M; Astarita, Tommaso; Cardone, Gennaro
2002-10-01
Infrared (IR) thermography, because of its two-dimensional and non-intrusive nature, can be exploited in industrial applications as well as in research. This paper deals with measurement of convective heat transfer coefficients (h) in three complex fluid flow configurations that concern the main aspects of both internal and external cooling of turbine engine components: (1) flow in ribbed, or smooth, channels connected by a 180 degrees sharp turn, (2) a jet in cross-flow, and (3) a jet impinging on a wall. The aim of this study was to acquire detailed measurements of h distribution in complex flow configurations related to both internal and external cooling of turbine components. The heated thin foil technique, which involves the detection of surface temperature by means of an IR scanning radiometer, was exploited to measure h. Particle image velocimetry was also used in one of the configurations to precisely determine the velocity field.
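The heated-thin-foil data reduction amounts to dividing the net wall heat flux by a wall-to-adiabatic-wall temperature difference. A minimal sketch; the loss terms and values here are placeholders, not the calibration used in the study:

```python
def heat_transfer_coefficient(q_joule, q_rad, q_cond, T_wall, T_aw):
    """Heated-thin-foil reduction of the convective coefficient h:
    Joule heating of the foil minus radiative and conductive losses
    (all W/m^2), divided by the wall-to-adiabatic-wall temperature
    difference (K). Loss estimates are assumed inputs."""
    return (q_joule - q_rad - q_cond) / (T_wall - T_aw)

# Illustrative values: 1 kW/m^2 Joule heating, small losses, 30 K difference
h = heat_transfer_coefficient(1000.0, 50.0, 30.0, 330.0, 300.0)
```

Applied pixel by pixel to the IR temperature map, this yields the two-dimensional h distribution the paper reports.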
NASA Astrophysics Data System (ADS)
Schelenz, Sophie; Dietrich, Peter; Vienken, Thomas
2016-04-01
A sustainable thermal exploitation of the shallow subsurface requires a precise understanding of all relevant heat transport processes. Currently, planning practice of shallow geothermal systems (especially for systems < 30 kW) focuses on conductive heat transport as the main energy source while the impact of groundwater flow as the driver for advective heat transport is neglected or strongly simplified. The presented study proves that those simplifications of complex geological and hydrogeological subsurface characteristics are insufficient for a precise evaluation of site-specific energy extraction rates. Based on synthetic model scenarios with varying subsurface conditions (groundwater flow velocity and aquifer thickness) the impact of advection on induced long term temperature changes in 5 and 10 m distance of the borehole heat exchanger is presented. Extending known investigations, this study enhances the evaluation of shallow geothermal energy extraction rates by considering conductive and advective heat transport under varying aquifer thicknesses. Further, it evaluates the impact of advection on installation lengths of the borehole heat exchanger to optimize the initial financial investment. Finally, an evaluation approach is presented that classifies relevant heat transport processes according to their Péclet number to enable a first quantitative assessment of the subsurface energy regime and recommend further investigation and planning procedures.
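A first-pass classification of the kind recommended can be sketched as a thermal Péclet number calculation. The abstract does not give the study's exact definition or threshold, so a conventional form is assumed here:

```python
def peclet_number(q_darcy, L, lambda_eff, rho_w_cw=4.19e6):
    """Thermal Peclet number Pe = (rho_w * c_w) * q * L / lambda_eff.
    q_darcy: Darcy flux (m/s); L: characteristic length (m);
    lambda_eff: effective thermal conductivity (W/m/K);
    rho_w_cw: volumetric heat capacity of water (J/m^3/K).
    Definition is a common convention, not quoted from the study."""
    return rho_w_cw * q_darcy * L / lambda_eff

def transport_regime(pe, threshold=1.0):
    """Classify the dominant heat transport process."""
    return "advection-dominated" if pe > threshold else "conduction-dominated"
```

Sites classified as advection-dominated would warrant hydrogeological characterization of groundwater flow before sizing the borehole heat exchanger.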
NASA Astrophysics Data System (ADS)
Wolfs, Vincent; Willems, Patrick
2013-10-01
Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
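The embankment overtopping that feeds the floodplain is modelled as flow over a weir. A minimal sketch using the standard rectangular free-overflow formula, with a typical discharge coefficient rather than the value calibrated in the paper:

```python
import math

def weir_overflow(h_river, h_crest, width, cd=0.42, g=9.81):
    """Free overflow over an embankment treated as a rectangular weir:
    Q = cd * width * sqrt(2g) * head^(3/2), head = water level above crest.
    cd is a typical value, not the paper's calibrated coefficient."""
    head = max(h_river - h_crest, 0.0)
    return cd * width * math.sqrt(2.0 * g) * head ** 1.5
```

In the emulation model, this physically based exchange flow is combined with the data-driven ANFIS components that track the varying river and floodplain levels.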
Lagrangian based methods for coherent structure detection
NASA Astrophysics Data System (ADS)
Allshouse, Michael R.; Peacock, Thomas
2015-09-01
There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
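The canonical double-gyre flow used as the common test case has a standard closed-form velocity field, sketched below with the conventional parameter values (A = 0.1, eps = 0.25, period T = 10):

```python
import math

def double_gyre_velocity(x, y, t, A=0.1, eps=0.25, omega=2.0 * math.pi / 10.0):
    """Time-periodic double-gyre velocity field on [0,2] x [0,1],
    the standard benchmark for Lagrangian coherent-structure methods."""
    a = eps * math.sin(omega * t)
    b = 1.0 - 2.0 * eps * math.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -math.pi * A * math.sin(math.pi * f) * math.cos(math.pi * y)
    v = math.pi * A * math.cos(math.pi * f) * math.sin(math.pi * y) * dfdx
    return u, v
```

Each of the four detection approaches operates on trajectories obtained by integrating tracer particles through this field over the time interval of interest.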
Harvey, Judson W.; Wagner, Brian J.; Bencala, Kenneth E.
1996-01-01
Stream water was locally recharged into shallow groundwater flow paths that returned to the stream (hyporheic exchange) in St. Kevin Gulch, a Rocky Mountain stream in Colorado contaminated by acid mine drainage. Two approaches were used to characterize hyporheic exchange: sub-reach-scale measurement of hydraulic heads and hydraulic conductivity to compute streambed fluxes (hydrometric approach), and reach-scale modeling of in-stream solute tracer injections to determine characteristic length and timescales of exchange with storage zones (stream tracer approach). Subsurface data were the standard of comparison used to evaluate the reliability of the stream tracer approach to characterize hyporheic exchange. The reach-averaged hyporheic exchange flux (1.5 mL s⁻¹ m⁻¹), determined by hydrometric methods, was largest when stream base flow was low (10 L s⁻¹); hyporheic exchange persisted when base flow was 10-fold higher, decreasing by approximately 30%. Reliability of the stream tracer approach to detect hyporheic exchange was assessed using first-order uncertainty analysis that considered model parameter sensitivity. The stream tracer approach did not reliably characterize hyporheic exchange at high base flow: the model was apparently more sensitive to exchange with surface water storage zones than with the hyporheic zone. At low base flow the stream tracer approach reliably characterized exchange between the stream and gravel streambed (timescale of hours) but was relatively insensitive to slower exchange with deeper alluvium (timescale of tens of hours) that was detected by subsurface measurements. The stream tracer approach was therefore not equally sensitive to all timescales of hyporheic exchange. 
We conclude that while the stream tracer approach is an efficient means to characterize surface-subsurface exchange, future studies will need to more routinely consider decreasing sensitivities of tracer methods at higher base flow and a potential bias toward characterizing only a fast component of hyporheic exchange. Stream tracer models with multiple rate constants to consider both fast exchange with streambed gravel and slower exchange with deeper alluvium appear to be warranted.
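At each measurement point, the hydrometric approach reduces to a Darcy flux estimate from head and conductivity data. A sketch with illustrative values, not the St. Kevin Gulch measurements:

```python
def darcy_flux(K, dh, dl):
    """Darcy flux across the streambed: q = K * (dh / dl).
    K: hydraulic conductivity (m/s); dh: head difference (m)
    over flow-path length dl (m). Values below are illustrative."""
    return K * dh / dl

q = darcy_flux(K=1e-4, dh=0.02, dl=0.5)  # m/s, per unit bed area
```

Integrating such point fluxes over the wetted streambed gives the reach-averaged exchange flux against which the tracer model was compared.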
Discrete microfluidics: Reorganizing droplet arrays at a bend
NASA Astrophysics Data System (ADS)
Surenjav, Enkhtuul; Herminghaus, Stephan; Priest, Craig; Seemann, Ralf
2009-10-01
Microfluidic manipulation of densely packed droplet arrangements (i.e., gel emulsions) using sharp microchannel bends was studied as a function of bend angle, droplet volume fraction, droplet size, and flow velocity. Emulsion reorganization was found to be specifically dependent on the pathlength that the droplets are forced to travel as they navigate the bend under spatial confinement. We describe how bend-induced droplet displacements might be exploited in complex, droplet-based microfluidics.
Template-based protein-protein docking exploiting pairwise interfacial residue restraints.
Xue, Li C; Rodrigues, João P G L M; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J
2017-05-01
Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, provided a reliable template is available. However, there are many different ways to exploit template information in the modeling process. Here, we systematically evaluate and benchmark a TBM method that uses conserved interfacial residue pairs as docking distance restraints [referred to as alpha carbon-alpha carbon (CA-CA)-guided docking]. We compare it with two other template-based protein-protein modeling approaches, including a conserved non-pairwise interfacial residue restrained docking approach [referred to as the ambiguous interaction restraint (AIR)-guided docking] and a simple superposition-based modeling approach. Our results show that, for most cases, the CA-CA-guided docking method outperforms both superposition with refinement and the AIR-guided docking method. We emphasize the superiority of the CA-CA-guided docking in cases with medium to large conformational changes, and interactions mediated through loops, tails or disordered regions. Our results also underscore the importance of a proper refinement of superimposition models to reduce steric clashes. In summary, we provide a benchmarked TBM protocol that uses conserved pairwise interface distances as restraints in generating realistic 3D protein-protein interaction models, when reliable templates are available. The described CA-CA-guided docking protocol is based on the HADDOCK platform, which allows users to incorporate additional prior knowledge of the target system to further improve the quality of the resulting models. © The Author 2016. Published by Oxford University Press.
A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks
NASA Astrophysics Data System (ADS)
Mohan, Arvind; Gaitonde, Datta
2017-11-01
Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps by training on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow field is to this approach. Finally, the potential and limitations of deep-learning-based ROM approaches are elucidated and further developments discussed.
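The POD step that produces the time coefficients on which the LSTM is trained can be sketched as an SVD of the mean-subtracted snapshot matrix. This is a generic implementation, not the authors' code:

```python
import numpy as np

def pod_time_coefficients(snapshots, r):
    """POD via SVD of a snapshot matrix (n_points x n_times).
    Returns the leading r spatial modes, their time coefficients
    (r x n_times), and the mean field. A sequence model such as an
    LSTM would then be trained to forecast the coefficient series."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :r]
    coeffs = S[:r, None] * Vt[:r, :]
    return modes, coeffs, mean
```

Forecast coefficients are mapped back to a flow field via `modes @ coeffs + mean`, so the LSTM only ever operates on an r-dimensional time series rather than the full state.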
Accelerating Subsurface Transport Simulation on Heterogeneous Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino
Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reaction for each location through the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then iteratively solving the linearized system by performing Gaussian elimination and LU decomposition until convergence. STOMP, a well-known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, LU factorization with full pivoting instead of the faster partial pivoting. Modern high performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all their computational power, applications must use both types of processing elements. For the case of subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for enabling load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node.
We show how these approaches, integrated into the full application, provide speedups of 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272 CPUs and a Tesla M2090 GPU per node.
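The full-pivoting requirement mentioned above can be illustrated with a toy dense solver. This is a plain NumPy sketch of Gaussian elimination with complete pivoting (not the paper's batched GPU implementation), checked against a library solve at the 100x100 size quoted in the abstract:

```python
import numpy as np

def solve_full_pivot(A, b):
    """Solve A x = b by Gaussian elimination with full (complete) pivoting,
    the more numerically robust variant the abstract says STOMP requires."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    col_perm = np.arange(n)
    for k in range(n):
        # Pick the largest entry of the remaining submatrix as the pivot.
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i += k; j += k
        A[[k, i]] = A[[i, k]]; b[[k, i]] = b[[i, k]]   # row swap
        A[:, [k, j]] = A[:, [j, k]]                    # column swap
        col_perm[[k, j]] = col_perm[[j, k]]
        factors = A[k + 1:, k] / A[k, k]
        A[k + 1:, k:] -= np.outer(factors, A[k, k:])
        b[k + 1:] -= factors * b[k]
    # Back substitution, then undo the column permutation.
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    out = np.empty(n)
    out[col_perm] = x
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))   # matrix size quoted in the abstract
b = rng.standard_normal(100)
print(np.allclose(solve_full_pivot(A, b), np.linalg.solve(A, b)))
```

The extra column search is what makes full pivoting more expensive than partial pivoting, and why a GPU batch of many such small solves is nontrivial to implement efficiently.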
Selective flow-induced vesicle rupture to sort by membrane mechanical properties
NASA Astrophysics Data System (ADS)
Pommella, Angelo; Brooks, Nicholas J.; Seddon, John M.; Garbin, Valeria
2015-08-01
Vesicle and cell rupture caused by large viscous stresses in ultrasonication is central to biomedical and bioprocessing applications. The flow-induced opening of lipid membranes can be exploited to deliver drugs into cells, or to recover products from cells, provided that it can be obtained in a controlled fashion. Here we demonstrate that differences in lipid membrane and vesicle properties can enable selective flow-induced vesicle break-up. We obtained vesicle populations with different membrane properties by using different lipids (SOPC, DOPC, or POPC) and lipid:cholesterol mixtures (SOPC:chol and DOPC:chol). We subjected vesicles to large deformations in the acoustic microstreaming flow generated by ultrasound-driven microbubbles. By simultaneously deforming vesicles with different properties in the same flow, we determined the conditions in which rupture is selective with respect to the membrane stretching elasticity. We also investigated the effect of vesicle radius and excess area on the threshold for rupture, and identified conditions for robust selectivity based solely on the mechanical properties of the membrane. Our work should enable new sorting mechanisms based on the difference in membrane composition and mechanical properties between different vesicles, capsules, or cells.
An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts
Yoo, Sang Wook; Guevara, Pamela; Jeong, Yong; Yoo, Kwangsun; Shin, Joseph S.; Mangin, Jean-Francois; Seong, Joon-Kyung
2015-01-01
We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively. PMID:26225419
Feature selection with harmony search.
Diao, Ren; Shen, Qiang
2012-12-01
Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
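A minimal harmony-search feature-selection sketch follows. The parameter values (harmony memory size 10, HMCR 0.9, PAR 0.3) and the least-squares subset evaluator are illustrative assumptions; the paper is a general framework that can plug in many different subset evaluation techniques:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X[:, 0] + 2 * X[:, 1] + 0.1 * rng.standard_normal(200)  # only features 0 and 1 matter

def fitness(mask):
    """Toy subset evaluator: R^2 of a least-squares fit on the chosen
    features, minus a small size penalty to favor compact subsets."""
    if not mask.any():
        return -1.0
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return 1 - resid.var() / y.var() - 0.01 * mask.sum()

# Harmony search over binary feature masks.
hms, hmcr, par, iters = 10, 0.9, 0.3, 300
memory = rng.random((hms, X.shape[1])) < 0.5            # harmony memory
scores = np.array([fitness(m) for m in memory])
for _ in range(iters):
    new = np.empty(X.shape[1], dtype=bool)
    for j in range(X.shape[1]):
        if rng.random() < hmcr:                          # memory consideration
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                       # pitch adjustment = bit flip
                new[j] = ~new[j]
        else:                                            # random consideration
            new[j] = rng.random() < 0.5
    s = fitness(new)
    worst = scores.argmin()
    if s > scores[worst]:                                # replace the worst harmony
        memory[worst], scores[worst] = new, s
best = memory[scores.argmax()]
print("selected features:", np.flatnonzero(best))
```

The stochastic improvisation step is what lets the search escape local optima and return multiple good subsets from the final memory.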
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-01-01
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
NASA Astrophysics Data System (ADS)
Matoušek, Václav; Kesely, Mikoláš; Vlasák, Pavel
2018-06-01
The deposition velocity is an important operation parameter in hydraulic transport of solid particles in pipelines. It represents the flow velocity at which transported particles start to settle out at the bottom of the pipe and are no longer transported. A number of predictive models have been developed to determine this threshold velocity for slurry flows of different solids fractions (fractions of different grain size and density). Most of the models consider flow in a horizontal pipe only; modelling approaches for inclined flows are extremely scarce, partially due to a lack of experimental information about the effect of pipe inclination on the slurry flow pattern and behaviour. We survey different approaches to modelling of particle deposition in flowing slurry and discuss the mechanisms on which deposition-limit models are based. Furthermore, we analyse possibilities to incorporate the effect of flow inclination into the predictive models and select the most appropriate ones based on their ability to modify the modelled deposition mechanisms to conditions associated with the flow inclination. The usefulness of the selected modelling approaches and their modifications is demonstrated by comparing model predictions with experimental results for inclined slurry flows from our own laboratory and from the literature.
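For context, one textbook member of the model families such surveys cover is the Durand-type deposition-limit formula for horizontal pipes. The empirical factor F_L and the example values below are illustrative, not taken from the paper:

```python
import math

def durand_deposit_velocity(F_L, D, s, g=9.81):
    """Classic Durand-type deposition-limit velocity for horizontal slurry
    pipe flow: V_dl = F_L * sqrt(2 g D (s - 1)), with F_L an empirical
    factor (roughly 0.4-1.5), D the pipe diameter [m], and s the
    solid-to-liquid density ratio."""
    return F_L * math.sqrt(2 * g * D * (s - 1))

# Sand (s = 2.65) in a 0.15 m pipe with an assumed F_L = 1.1:
print(f"{durand_deposit_velocity(1.1, 0.15, 2.65):.2f} m/s")
```

Inclined-flow extensions, the subject of the survey, modify or replace terms in relations of this kind to account for the slope-dependent deposition mechanism.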
Pyne, Matthew I.; Carlisle, Daren M.; Konrad, Christopher P.; Stein, Eric D.
2017-01-01
Regional classification of streams is an early step in the Ecological Limits of Hydrologic Alteration framework. Many stream classifications are based on an inductive approach using hydrologic data from minimally disturbed basins, but this approach may underrepresent streams from heavily disturbed basins or sparsely gaged arid regions. An alternative is a deductive approach, using watershed climate, land use, and geomorphology to classify streams, but this approach may miss important hydrological characteristics of streams. We classified all stream reaches in California using both approaches. First, we used Bayesian and hierarchical clustering to classify reaches according to watershed characteristics. Streams were clustered into seven classes according to elevation, sedimentary rock, and winter precipitation. Permutation-based analysis of variance and random forest analyses were used to determine which hydrologic variables best separate streams into their respective classes. Stream typology (i.e., the class that a stream reach is assigned to) is shaped mainly by patterns of high and mean flow behavior within the stream's landscape context. Additionally, random forest was used to determine which hydrologic variables best separate minimally disturbed reference streams from non-reference streams in each of the seven classes. In contrast to stream typology, deviation from reference conditions is more difficult to detect and is largely defined by changes in low-flow variables, average daily flow, and duration of flow. Our combined deductive/inductive approach allows us to estimate flow under minimally disturbed conditions based on the deductive analysis and compare to measured flow based on the inductive analysis in order to estimate hydrologic change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Pretto, Lucas R., E-mail: lucas.de.pretto@usp.br; Nogueira, Gesse E. C.; Freitas, Anderson Z.
2016-04-28
Functional modalities of Optical Coherence Tomography (OCT) based on speckle analysis are emerging in the literature. We propose a simple approach to the autocorrelation of the OCT signal to enable volumetric flow rate differentiation, based on decorrelation time. Our results show that this technique could distinguish flows separated by 3 μl/min, limited by the acquisition speed of the system. We further perform a B-scan of gradient flow inside a microchannel, enabling the visualization of the drag effect on the walls.
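The decorrelation-time idea can be sketched as follows, using synthetic exponentially correlated noise as a stand-in for OCT speckle signals; faster flow corresponds to a shorter correlation time, which the estimator recovers:

```python
import numpy as np

def decorrelation_time(sig, dt):
    """Lag at which the normalized autocorrelation first falls below 1/e."""
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    ac /= ac[0]
    below = np.flatnonzero(ac < 1 / np.e)
    return below[0] * dt

def correlated_noise(tau, n=20000, dt=1e-3, seed=0):
    """AR(1) noise with correlation time tau -- a crude stand-in for the
    speckle fluctuations of an OCT signal in moving fluid."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = a * x[i - 1] + np.sqrt(1 - a * a) * rng.standard_normal()
    return x

# A "slow flow" signal decorrelates more slowly than a "fast flow" signal.
slow = decorrelation_time(correlated_noise(tau=0.05), dt=1e-3)
fast = decorrelation_time(correlated_noise(tau=0.01), dt=1e-3)
print(fast < slow)
```

Mapping decorrelation time to volumetric flow rate then only requires a calibration, which is what limits the resolvable flow-rate difference to the system's acquisition speed.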
Pressure gradients fail to predict diffusio-osmosis
NASA Astrophysics Data System (ADS)
Liu, Yawei; Ganti, Raman; Frenkel, Daan
2018-05-01
We present numerical simulations of diffusio-osmotic flow, i.e. the fluid flow generated by a concentration gradient along a solid-fluid interface. In our study, we compare a number of distinct approaches that have been proposed for computing such flows and compare them with a reference calculation based on direct, non-equilibrium molecular dynamics simulations. As alternatives, we consider schemes that compute diffusio-osmotic flow from the gradient of the chemical potentials of the constituent species and from the gradient of the component of the pressure tensor parallel to the interface. We find that the approach based on treating chemical potential gradients as external forces acting on various species agrees with the direct simulations, thereby supporting the approach of Marbach et al (2017 J. Chem. Phys. 146 194701). In contrast, an approach based on computing the gradients of the microscopic pressure tensor does not reproduce the direct non-equilibrium results.
Chen, Qiu Lan; Liu, Zhou; Shum, Ho Cheung
2014-11-01
In this work, we demonstrate the use of stereolithographic 3D printing to fabricate millifluidic devices, which are used to engineer particles with multiple compartments. As the 3D design is directly transferred to the actual prototype, this method accommodates 3D millimeter-scaled features that are difficult to achieve by either lithographic-based microfabrication or traditional macrofabrication techniques. We exploit this approach to produce millifluidic networks to deliver multiple fluidic components. By taking advantage of the laminar flow, the fluidic components can form liquid jets with distinct patterns, and each pattern has clear boundaries between the liquid phases. Afterwards, droplets with controlled size are fabricated by spraying the liquid jet in an electric field, and subsequently converted to particles after a solidification step. As a demonstration, we fabricate calcium alginate particles with structures of (1) slice-by-slice multiple lamellae, (2) concentric core-shells, and (3) petals surrounding the particle centers. Furthermore, distinct hybrid particles combining two or more of the above structures are also obtained. These compartmentalized particles impart spatially dependent functionalities and properties. To show their applicability, various ingredients, including fruit juices, drugs, and magnetic nanoparticles are encapsulated in the different compartments as proofs of concept for applications, including food, drug delivery, and bioassays. Our 3D printed electro-millifluidic approach represents a convenient and robust method to extend the range of structures of functional particles.
Microfluidics for producing poly (lactic-co-glycolic acid)-based pharmaceutical nanoparticles.
Li, Xuanyu; Jiang, Xingyu
2017-12-24
Microfluidic chips allow the rapid production of a library of nanoparticles (NPs) with distinct properties by changing the precursors and the flow rates, significantly decreasing the time for screening optimal formulations as carriers for drug delivery compared to conventional methods. Batch-to-batch reproducibility, which is essential for clinical translation, is achieved by precisely controlling the precursors and the flow rate, regardless of operators. Poly (lactic-co-glycolic acid) (PLGA) is one of the most widely used Food and Drug Administration (FDA)-approved biodegradable polymers. Researchers often combine PLGA with lipids or amphiphilic molecules to assemble into a core/shell structure to exploit the potential of PLGA-based NPs as powerful carriers for cancer-related drug delivery. In this review, we discuss the advantages associated with microfluidic chips for producing PLGA-based functional nanocomplexes for drug delivery. These laboratory-based methods can readily scale up to provide sufficient amounts of PLGA-based NPs in microfluidic chips for clinical studies and industrial-scale production. Copyright © 2017. Published by Elsevier B.V.
Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs
NASA Astrophysics Data System (ADS)
Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes
We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.
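On instances small enough for brute force, the orientation objective can be checked directly. The 4-vertex network and sender-receiver pairs below are hypothetical; the paper's parameterized algorithms target much larger inputs, where exhaustive enumeration over all 2^m orientations is infeasible (the problem is NP-hard in general):

```python
from itertools import product
from collections import deque

# Tiny instance: a 4-cycle plus one chord, and three sender->receiver pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
pairs = [(0, 2), (2, 0), (1, 3)]

def reachable(adj, s, t):
    """Directed BFS reachability check."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

# Enumerate every orientation (one bit per edge) and count satisfied pairs.
best = 0
for dirs in product([0, 1], repeat=len(edges)):
    adj = {}
    for (u, v), d in zip(edges, dirs):
        a, b = (u, v) if d == 0 else (v, u)
        adj.setdefault(a, []).append(b)
    best = max(best, sum(reachable(adj, s, t) for s, t in pairs))
print("satisfied pairs:", best)  # orienting the cycle 0->1->2->3->0 satisfies all 3
```

Here orienting the cycle consistently routes all three signal flows, which is the kind of optimum the fixed-parameter algorithms find without enumerating orientations.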
Tapered Microfluidic for Continuous Micro-Object Separation Based on Hydrodynamic Principle.
Ahmad, Ida Laila; Ahmad, Mohd Ridzuan; Takeuchi, Masaru; Nakajima, Masahiro; Hasegawa, Yasuhisa
2017-12-01
Recent advances in microfluidic technologies have created a demand for simple and efficient separation intended for various applications such as food industries, biological preparation, and medical diagnostics. In this paper, we report a tapered microfluidic device for passive continuous separation of microparticles by using hydrodynamic separation. By exploiting the hydrodynamic properties of the fluid flow and the physical characteristics of microparticles, effective size-based separation is demonstrated. The tapered microfluidic device has widening geometries with respect to a specific taper angle, which amplify the sedimentation effect experienced by particles of different sizes. A mixture of 3-μm and 10-μm polystyrene microbeads is successfully separated using 20° and 25° taper angles. The results obtained are in agreement with three-dimensional finite element simulation conducted using Abaqus 6.12. Moreover, the feasibility of this mechanism for biological separation is demonstrated by using polydisperse samples consisting of 3-μm polystyrene microbeads and human epithelial cervical carcinoma (HeLa) cells. 98% sample purity is recovered at outlets 1 and 3 at flow rates of 0.5-3.0 μl/min. Notably, despite adopting a purely passive separation approach, this method enables straightforward, label-free, and continuous separation of multiple particle types in a stand-alone device without the need for bulky apparatus. Therefore, this device may become an enabling technology for point-of-care diagnostic tools and may hold potential for micro total analysis system applications.
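The size dependence that the tapered geometry amplifies follows from Stokes settling, sketched here with assumed polystyrene and water properties (the paper's device and flow conditions are more involved than this single-particle estimate):

```python
def stokes_settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity [m/s] for a sphere of diameter d [m] in a
    fluid of density rho_f [kg/m^3] and viscosity mu [Pa s] -- the
    sedimentation effect that the widening channel amplifies."""
    return (rho_p - rho_f) * g * d**2 / (18 * mu)

# Polystyrene beads (assumed rho ~ 1050 kg/m^3) of 3 um and 10 um, as in the paper:
v3 = stokes_settling_velocity(3e-6, 1050.0)
v10 = stokes_settling_velocity(10e-6, 1050.0)
print(f"velocity ratio (10 um / 3 um): {v10 / v3:.1f}")
```

Because settling velocity scales with the square of diameter, the 10-μm beads settle roughly an order of magnitude faster than the 3-μm beads, which is what makes size-based sorting in the slowing, widening flow possible.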
Exploiting Non-Markovianity for Quantum Control.
Reich, Daniel M; Katz, Nadav; Koch, Christiane P
2015-07-22
Quantum technology, exploiting entanglement and the wave nature of matter, relies on the ability to accurately control quantum systems. Quantum control is often compromised by the interaction of the system with its environment since this causes loss of amplitude and phase. However, when the dynamics of the open quantum system is non-Markovian, amplitude and phase flow not only from the system into the environment but also back. Interaction with the environment is then not necessarily detrimental. We show that the back-flow of amplitude and phase can be exploited to carry out quantum control tasks that could not be realized if the system were isolated. The control is facilitated by a few strongly coupled, sufficiently isolated environmental modes. Our paradigmatic example considers a weakly anharmonic ladder with resonant amplitude control only, restricting realizable operations to SO(N). The coupling to the environment, when harnessed with optimization techniques, allows for full SU(N) controllability.
Integrated Photoelectrochemical Solar Energy Conversion and Organic Redox Flow Battery Devices.
Li, Wenjie; Fu, Hui-Chun; Li, Linsen; Cabán-Acevedo, Miguel; He, Jr-Hau; Jin, Song
2016-10-10
Building on regenerative photoelectrochemical solar cells and emerging electrochemical redox flow batteries (RFBs), more efficient, scalable, compact, and cost-effective hybrid energy conversion and storage devices could be realized. An integrated photoelectrochemical solar energy conversion and electrochemical storage device is developed by integrating regenerative silicon solar cells and 9,10-anthraquinone-2,7-disulfonic acid (AQDS)/1,2-benzoquinone-3,5-disulfonic acid (BQDS) RFBs. The device can be directly charged by solar light without external bias, and discharged like normal RFBs with an energy storage density of 1.15 Wh L-1 and a solar-to-output electricity efficiency (SOEE) of 1.7 % over many cycles. The concept exploits a previously undeveloped design connecting two major energy technologies and promises a general approach for storing solar energy electrochemically with high theoretical storage capacity and efficiency. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High-resolution wavefront reconstruction using the frozen flow hypothesis
NASA Astrophysics Data System (ADS)
Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping
2017-10-01
This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the FFH assumption, slope data from the WFS can be connected to a finer, composite slope grid using translation and downsampling, with the elements of the transformation matrices determined by wind information. Frames of slopes are then combined, and slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least squares problem. By using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust even with detector noise and inaccurate wind information, and that under bad seeing conditions, high-frequency information in wavefronts can be recovered more accurately than when correlations between WFS frames are ignored.
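A 1-D toy of the FFH least-squares setup: each WFS frame is modeled as a shifted, downsampled view of one "frozen" fine-grid slope profile, and stacking several frames yields a solvable system for the fine grid. The integer shifts and the small problem size are simplifying assumptions; the real problem is 2-D, sparse, and ill-posed:

```python
import numpy as np

n_fine, stride = 64, 4
x_true = np.sin(np.linspace(0, 4 * np.pi, n_fine))       # fine-grid slopes

def frame_operator(shift):
    """Sampling matrix for one frame: translate the frozen profile by
    `shift` fine cells (wind-driven motion), then downsample by `stride`."""
    A = np.zeros((n_fine // stride, n_fine))
    for r in range(A.shape[0]):
        A[r, (r * stride + shift) % n_fine] = 1.0
    return A

shifts = [0, 1, 2, 3]                                    # assumed known from wind data
A = np.vstack([frame_operator(s) for s in shifts])       # stacked multi-frame system
b = A @ x_true + 0.01 * np.random.default_rng(2).standard_normal(A.shape[0])

# Least-squares recovery of the fine-grid slopes from the coarse frames.
x_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

With four unit-shifted frames the stacked operator covers every fine cell, so the fine-grid slopes are recovered up to the measurement noise; a single frame alone would leave the system underdetermined.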
Acoustically Generated Flows in Flexural Plate Wave Sensors: a Multifield Analysis
NASA Astrophysics Data System (ADS)
Sayar, Ersin; Farouk, Bakhtier
2011-11-01
Acoustically excited flows in a microchannel flexural plate wave device are explored numerically with a coupled solid-fluid mechanics model. The device can be exploited to integrate micropumps with microfluidic chips. A comprehensive understanding of the device requires the development of coupled two- or three-dimensional fluid-structure interaction (FSI) models. The channel walls are composed of layers of ZnO, Si3N4 and Al. An isothermal equation of state for the fluid (water) is employed. The flexural motions of the channel walls and the resulting flow fields are solved simultaneously. A parametric analysis is performed by varying the values of the driving frequency, the voltage of the electrical signal and the channel height. The time averaged axial velocity is found to be proportional to the square of the wave amplitude. The present approach is superior to the method of successive approximations, where the solid-liquid coupling is weak.
Li, Jing Xin; Yang, Li; Yang, Lei; Zhang, Chao; Huo, Zhao Min; Chen, Min Hao; Luan, Xiao Feng
2018-03-01
Quantitative evaluation of ecosystem services is a primary premise for rational resource exploitation and sustainable development. Examining ecosystem service flows provides a scientific method to quantify ecosystem services. We built an assessment indicator system based on land cover/land use under the framework of four types of ecosystem services. The types of ecosystem service flows were reclassified. Using entropy theory, the degree of disorder and the developing trend of the indicators and of the urban ecosystem were quantitatively assessed. Beijing was chosen as the study area, and twenty-four indicators were selected for evaluation. The results showed that the entropy value of the Beijing urban ecosystem during 2004 to 2015 was 0.794 and the entropy flow was -0.024, suggesting a high degree of disorder and a system on the verge of poor health. The system reached maximum values three times, while the mean annual variation of the system entropy value increased gradually over three periods, indicating that human activities had negative effects on the urban ecosystem. Entropy flow reached its minimum value in 2007, implying that environmental quality was best in 2007. The coefficient of determination for the fitted relationship between the total permanent population of Beijing and urban ecosystem entropy flow was 0.921, indicating that urban ecosystem health was highly correlated with total permanent population.
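A generic normalized Shannon-entropy calculation of the kind such indicator systems use is sketched below. The paper's exact indicator weighting scheme is not specified here, so the 24 weights are purely illustrative:

```python
import numpy as np

def normalized_entropy(weights):
    """Shannon entropy of a set of indicator weights, normalized to [0, 1]
    by the maximum log(n); 1 corresponds to maximal disorder (uniform weights)."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    h = -np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p)))
    return h / np.log(len(p))

# 24 hypothetical indicator weights, mirroring the paper's 24-indicator system.
rng = np.random.default_rng(3)
w = rng.random(24)
print(round(normalized_entropy(w), 3))
```

An entropy value near 1, like the 0.794 reported for Beijing, indicates a highly disordered system; tracking its year-to-year change gives the entropy-flow trend the study analyzes.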
DOT National Transportation Integrated Search
2015-11-01
One of the most efficient ways to solve the damage detection problem using the statistical pattern recognition : approach is that of exploiting the methods of outlier analysis. Cast within the pattern recognition framework, : damage detection assesse...
Communicating and Interacting: An Exploration of the Changing Roles of Media in CALL/CMC
ERIC Educational Resources Information Center
Hoven, Debra
2006-01-01
The sites of learning and teaching using CALL are shifting from CD-based, LAN-based, or stand-alone programs to the Internet. As this change occurs, pedagogical approaches to using CALL are also shifting to forms which better exploit the communication, collaboration, and negotiation aspects of the Internet. Numerous teachers and designers have…
Towards a Viscous Wall Model for Immersed Boundary Methods
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is the inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. This inefficiency is associated with the use of constant-aspect-ratio Cartesian grid cells. Conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow field away from the walls. Different wall-modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually utilize only local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall-modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.
NASA Astrophysics Data System (ADS)
Bode, F.; Nowak, W.; Reed, P. M.; Reuschen, S.
2016-12-01
Drinking-water well catchments need effective early-warning monitoring networks. Groundwater supply wells in complex urban environments are in close proximity to a myriad of potential industrial pollutant sources that could irreversibly damage their source aquifers. These urban environments pose fiscal and physical challenges to designing monitoring networks. Ideal early-warning monitoring networks would satisfy three objectives: to detect (1) all potential contaminations within the catchment, (2) as early as possible before they reach the pumping wells, (3) while minimizing costs. Obviously, the ideal case is nonexistent, so we search for tradeoffs using multiobjective optimization. The challenge of this optimization problem is the high number of potential monitoring-well positions (the search space) and the non-linearity of the underlying groundwater flow-and-transport problem. This study evaluates (1) different ways to restrict the search space efficiently, with and without expert knowledge, (2) different methods to represent the search space during the optimization, and (3) the influence of incremental increases in uncertainty in the system. Conductivity, regional flow direction and potential source locations are explored as key uncertainties. We show the need for and the benefit of our methods by comparing optimized monitoring networks for different uncertainty levels with networks that seek to effectively exploit expert knowledge. The study's main contributions are the different approaches to restricting and representing the search space. The restriction algorithms are based on a point-wise comparison of decision elements of the search space. The representation of the search space can be either binary or continuous. For both cases, the search space must be adjusted properly.
Our results show the benefits and drawbacks of binary versus continuous search space representations and the high potential of automated search space restriction algorithms for high-dimensional, highly non-linear optimization problems.
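The three-objective trade-off described above can be sketched with a simple Pareto-dominance filter; the candidate networks and objective values below are purely illustrative:

```python
# Each candidate monitoring network scores on three minimization objectives:
# (fraction of undetected contaminations, warning delay in days, cost in wells).
# A network is kept only if no other network is at least as good in every
# objective and strictly better in one (Pareto non-dominance).

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

networks = [
    (0.10, 30.0, 5),   # few wells: cheap, but slow warning
    (0.05, 20.0, 8),
    (0.05, 25.0, 9),   # dominated by the previous network
    (0.20, 10.0, 3),
]
front = pareto_front(networks)
print(front)
```

Real applications replace this brute-force filter with evolutionary multiobjective search, since the combinatorial search space of well positions is far too large to enumerate.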
Risk assessment by dynamic representation of vulnerability, exploitation, and impact
NASA Astrophysics Data System (ADS)
Cam, Hasan
2015-05-01
Assessing and quantifying cyber risk accurately in real time is essential to providing security and mission assurance in any system and network. This paper presents a modeling and dynamic analysis approach to assessing the cyber risk of a network in real time by representing its vulnerabilities, exploitations, and impact dynamically, using integrated Bayesian network and Markov models. Given the set of vulnerabilities detected by a vulnerability scanner in a network, this paper addresses how its risk can be assessed by estimating in real time the exploit likelihood and impact of vulnerability exploitation on the network, based on real-time observations and measurements over the network. The dynamic representation of the network in terms of its vulnerabilities, sensor measurements, and observations is constructed using the integrated Bayesian network and Markov models. The transition rates of outgoing and incoming links of states in hidden Markov models are used to determine the exploit likelihood and impact of attacks, whereas emission rates help quantify the attack states of vulnerabilities. Simulation results show the quantification and evolving risk scores over time for individual and aggregated vulnerabilities of a network.
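A heavily simplified stand-in for the Markov side of the model (the states and transition probabilities below are assumptions, not the paper's calibrated values) shows how a risk score can evolve over time as probability mass drains into an absorbing impact state:

```python
import numpy as np

# Three-state discrete-time Markov chain: secure -> exploited -> impacted.
# The impact state is absorbing, so its probability is a monotonically
# growing risk score for the vulnerability being tracked.

P = np.array([
    [0.90, 0.10, 0.00],   # secure    -> secure / exploited / impacted
    [0.05, 0.70, 0.25],   # exploited -> may be remediated, persist, or succeed
    [0.00, 0.00, 1.00],   # impacted  (absorbing)
])

p = np.array([1.0, 0.0, 0.0])   # start fully secure
risk_over_time = []
for t in range(10):
    p = p @ P                    # propagate the state distribution one step
    risk_over_time.append(p[2])  # probability the impact state has been reached

print([round(r, 3) for r in risk_over_time])
```

In the paper's setting the transition and emission rates would instead be driven by live sensor observations, and the Bayesian network would aggregate per-vulnerability scores into a network-level risk.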
The use of Natural Flood Management to mitigate local flooding in the rural landscape
NASA Astrophysics Data System (ADS)
Wilkinson, Mark; Quinn, Paul; Ghimire, Sohan; Nicholson, Alex; Addy, Steve
2014-05-01
The past decade has seen increases in the occurrence of flood events across Europe, putting a growing number of settlements of varying sizes at risk. The issue of flooding in smaller villages is usually not well publicised. In these small communities, the cost of constructing and maintaining traditional flood defences often outweighs the potential benefits, which has led to a growing quest for more cost-effective and sustainable approaches. Here we aim to provide such an approach that, alongside flood risk reduction, also has the multipurpose benefits of sediment control, water quality amelioration, and habitat creation. Natural flood management (NFM) aims to reduce flooding by working with natural features and characteristics to slow down or temporarily store flood waters. NFM measures include dynamic water storage ponds and wetlands, interception bunds, channel restoration and instream wood placement, and increasing soil infiltration through soil management and tree planting. Based on integrated monitoring and modelling studies, we demonstrate the potential to manage runoff locally using NFM in rural systems by effectively managing flow pathways (hill slopes and small channels) and by exploiting floodplains and buffer strips. Case studies from across the UK show that temporary storage ponds (ranging from 300 to 3000 m3) and other NFM measures can reduce peak flows in small catchments (5 to 10 km2) by 15 to 30 percent. In addition, increasing the overall effective storage capacity through a network of NFM measures was found to be most effective for total reduction of local flood peaks. Hydraulic modelling has shown that the positioning of such features within the catchment, and how they are connected to the main channel, may also affect their effectiveness. Field evidence has shown that these ponds can collect significant accumulations of fine sediment during flood events.
On the other hand, measures such as wetlands could also play an important role during low flow conditions by sustaining base flows during droughts. Ongoing research using hydrological datasets aims to assess how these features function during low flow conditions and how storage ponds could be used as irrigation ponds in arable areas. To allow for effective implementation and upkeep of NFM measures on the ground, demonstration sites have been developed through a process of iterative stakeholder engagement. Coupled with the use of novel visualisation techniques, results are currently being communicated to a wider community of local landowners and catchment managers. The approach of using networks of interception bunds and offline storage areas in the rural landscape could potentially provide a cost-effective means to reduce flood risk in small responsive catchments across Europe. As such, it could provide an alternative or addition to traditional engineering techniques, while also effectively managing catchments to achieve multiple environmental objectives.
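The peak-attenuation mechanism of a storage pond can be sketched with level-pool routing through a linear reservoir; all numbers below are illustrative, not taken from the case studies:

```python
import numpy as np

# Level-pool routing: a pond with storage S and a linear outlet Q = k*S
# integrates dS/dt = I(t) - Q(t). Because outflow lags and spreads the
# inflow, the flood peak is attenuated.

dt = 60.0                                   # time step, s
t = np.arange(0, 6*3600, dt)
inflow = np.interp(t, [0, 3600, 7200], [0.2, 2.0, 0.2])  # triangular flood wave, m^3/s

k = 1.0/1800.0                              # outlet coefficient, 1/s (Q = k*S)
storage, outflow = 0.0, []
for q_in in inflow:
    q_out = k*storage
    storage += (q_in - q_out)*dt            # explicit Euler mass balance
    outflow.append(q_out)

reduction = 1 - max(outflow)/inflow.max()
print(f"peak inflow {inflow.max():.2f} m3/s -> peak outflow {max(outflow):.2f} m3/s "
      f"({reduction:.0%} reduction)")
```

With these illustrative parameters the routed peak falls in the same 15-30% reduction band reported for the UK case studies, though a real pond has a nonlinear stage-storage-discharge relationship.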
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gondouin, M.
1991-10-31
The West Sak (Upper Cretaceous) sands, overlying the Kuparuk field, would rank among the largest known oil fields in the US, but technical difficulties have so far prevented their commercial exploitation. Steam injection is the most successful and most commonly used method of heavy oil recovery, but its application to the West Sak presents major problems. Such difficulties may be overcome by using a novel approach, in which steam is generated downhole in a catalytic Methanator, from Syngas made at the surface from endothermic reactions (Table 1). The Methanator effluent, containing steam and soluble gases resulting from exothermic reactions (Table 1), is cyclically injected into the reservoir by means of a horizontal drainhole while hot produced fluids flow from a second drainhole into a central production tubing. The downhole reactor feed and BFW flow downward through two concentric tubings. The large-diameter casing required to house the downhole reactor assembly is filled above it with Arctic Pack mud, or crude oil, to further reduce heat leaks. A quantitative analysis of this production scheme for the West Sak required a preliminary engineering of the downhole and surface facilities and a tentative forecast of well production rates. The results, based on published information on the West Sak, have been used to estimate the cost of these facilities, per daily barrel of oil produced. A preliminary economic analysis and conclusions are presented together with an outline of future work. Economic and regulatory conditions which would make this approach viable are discussed. 28 figs.
The ATLAS Event Service: A new approach to event processing
NASA Astrophysics Data System (ADS)
Calafiura, P.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.
2015-12-01
The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Wang, Yanying; Liu, Yaqin; Ding, Fang; Zhu, Xiaoyan; Yang, Li; Zou, Ping; Rao, Hanbing; Zhao, Qingbiao; Wang, Xianxiang
2018-06-07
In this study, we developed a simple colorimetric approach to detect glutathione (GSH). The proposed approach is based on the ability of a CuS-PDA-Au composite material to catalytically oxidize 3,3',5,5'-tetramethylbenzidine (TMB) to ox-TMB, inducing a blue color with an absorption peak centered at 652 nm. The introduction of GSH, however, reduces the amount of oxidized TMB and also binds to the Au nanoparticles (Au NPs) on the surface of the CuS-PDA-Au composite material. Both effects fade the blue color and reduce the absorbance at 652 nm. Based on the above, we propose a technique to detect GSH quantitatively by UV-Vis spectroscopy and qualitatively by the naked eye. The approach achieves a low detection limit of 0.42 μM with a broad detection range of 5 × 10^-7 to 1 × 10^-4 M with the assistance of UV-Vis spectroscopy. More importantly, this approach is convenient and rapid. The method was successfully applied to GSH detection in human serum and cell lines. Graphical abstract: A colorimetric approach has been developed by exploiting the peroxidase-like activity of a CuS-polydopamine-Au composite for sensitive glutathione detection.
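A detection limit like the 0.42 μM quoted above is conventionally estimated as LOD = 3σ(blank)/slope from a linear calibration; the sketch below uses fabricated calibration data and an assumed blank noise level, for illustration only:

```python
import numpy as np

# Linear calibration of the absorbance decrease at 652 nm vs GSH concentration,
# followed by the standard 3-sigma detection-limit estimate. All numbers here
# are invented for illustration; they are not the paper's measurements.

conc = np.array([0.5, 1, 5, 10, 50, 100])                    # GSH, uM
dA652 = np.array([0.004, 0.008, 0.041, 0.079, 0.40, 0.81])   # absorbance decrease, AU

slope, intercept = np.polyfit(conc, dA652, 1)   # least-squares linear fit
sigma_blank = 0.0011                            # std. dev. of blank absorbance (assumed)
lod = 3*sigma_blank/slope                       # LOD = 3*sigma/slope, in uM
print(f"slope {slope:.4f} AU/uM, LOD {lod:.2f} uM")
```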
NASA Technical Reports Server (NTRS)
Palazzolo, Alan; Bhattacharya, Avijit; Athavale, Mahesh; Venkataraman, Balaji; Ryan, Steve; Funston, Kerry
1997-01-01
This paper highlights bulk-flow and CFD-based models developed to calculate force and leakage properties for seals and shrouded-impeller leakage paths. The bulk-flow approach uses a Hirs-based friction model, while the CFD approach solves the Navier-Stokes (NS) equations with a finite whirl orbit or via analytical perturbation. The results show good agreement in most instances with available benchmarks.
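A rough sense of a bulk-flow leakage estimate of this kind can be given by iterating a Blasius-like (Hirs-type) friction factor against the pressure drop across an annular seal; the loss model and all dimensions below are illustrative assumptions, not the paper's configuration:

```python
import math

# Bulk-flow leakage sketch for a plain annular seal: the pressure drop is
# balanced by kinetic + entrance losses plus wall friction, with a
# Blasius-like friction factor f = 0.079*Re^-0.25 (in the spirit of Hirs'
# bulk-flow friction modelling). Fixed-point iterate for the bulk velocity.

rho, mu = 1000.0, 1.0e-3          # water density (kg/m^3), viscosity (Pa*s)
c, L, D = 0.2e-3, 20e-3, 80e-3    # radial clearance, seal length, shaft diameter (m)
dp = 1.0e6                        # pressure drop across the seal, Pa
k_entrance = 0.5                  # assumed entrance-loss coefficient

Dh = 2.0*c                        # hydraulic diameter of a thin annulus
V = 10.0                          # initial guess of axial bulk velocity, m/s
for _ in range(50):
    Re = rho*V*Dh/mu
    f = 0.079*Re**-0.25                      # Blasius-like friction factor
    coeff = 1.0 + k_entrance + 4.0*f*L/Dh    # kinetic + entrance + friction losses
    V_new = math.sqrt(2.0*dp/(rho*coeff))
    if abs(V_new - V) < 1e-9:
        V = V_new
        break
    V = V_new

leakage = rho*V*math.pi*D*c       # mass leakage rate through the clearance, kg/s
print(f"bulk axial velocity {V:.1f} m/s, leakage {leakage:.3f} kg/s")
```

A full bulk-flow rotordynamic model would additionally perturb this base flow about a whirl orbit to obtain stiffness and damping coefficients, which is where the force predictions referenced above come from.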
Optical microwave filter based on spectral slicing by use of arrayed waveguide gratings.
Pastor, Daniel; Ortega, Beatriz; Capmany, José; Sales, Salvador; Martinez, Alfonso; Muñoz, Pascual
2003-10-01
We have experimentally demonstrated a new optical signal processor based on the use of arrayed waveguide gratings. The structure exploits the concept of spectral slicing combined with the use of an optical dispersive medium. The approach offers increased flexibility over previous slicing-based structures in terms of tunability, reconfiguration, and apodization of the samples or coefficients of the transversal optical filter.
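A spectrum-sliced processor of this kind behaves as a transversal (FIR) microwave filter whose response is H(f) = Σ_k a_k e^{-j2πfkT}; the tap weights and inter-tap delay below are illustrative, not the demonstrated device's parameters:

```python
import numpy as np

# Transversal (FIR) microwave photonic filter: each spectral slice contributes
# a delayed, weighted copy of the RF signal. The magnitude response is
# periodic with free spectral range FSR = 1/T, and apodizing the tap weights
# (here a Hamming window) suppresses the sidelobes.

def transversal_response(taps, T, f):
    k = np.arange(len(taps))
    return np.array([np.sum(taps*np.exp(-2j*np.pi*fi*k*T)) for fi in f])

taps = np.hamming(8)           # apodized coefficients, one per spectral slice
T = 100e-12                    # inter-tap delay, s  ->  FSR = 1/T = 10 GHz
f = np.linspace(0, 20e9, 2001)
H = np.abs(transversal_response(taps, T, f))

print(f"passbands repeat every {1.0/T/1e9:.0f} GHz; peak |H| = {H.max():.2f}")
```

Tuning T (via the dispersive medium) shifts the FSR, and reprogramming the slice weights reconfigures the passband shape, which is the flexibility the abstract refers to.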
How transfer flights shape the structure of the airline network.
Ryczkowski, Tomasz; Fronczak, Agata; Fronczak, Piotr
2017-07-17
In this paper, we analyse the gravity model in the global passenger air-transport network. We show that in the standard form, the model is inadequate for correctly describing the relationship between passenger flows and typical geo-economic variables that characterize connected countries. We propose a model for transfer flights that allows exploitation of these discrepancies in order to discover hidden subflows in the network. We illustrate its usefulness by retrieving the distance coefficient in the gravity model, which is one of the determinants of the globalization process. Finally, we discuss the correctness of the presented approach by comparing the distance coefficient to several well-known economic events.
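The distance coefficient enters the gravity model as F_ij = G·M_i·M_j/d_ij^β, so it can be recovered by a log-linear least-squares fit; the sketch below does this on synthetic flows (all values fabricated, not the study's data):

```python
import numpy as np

# Gravity model: F_ij = G * M_i * M_j / d_ij**beta. Taking logs gives a
# linear model, log F = log G + log M_i + log M_j - beta*log d, so the
# distance coefficient beta falls out of an ordinary least-squares fit.

rng = np.random.default_rng(0)
n = 200
m_i = rng.uniform(1, 100, n)            # "mass" of origin country (e.g. GDP)
m_j = rng.uniform(1, 100, n)            # "mass" of destination country
d = rng.uniform(100, 10000, n)          # separation distance, km
beta_true = 1.5
flow = 2.0*m_i*m_j/d**beta_true * rng.lognormal(0, 0.1, n)  # noisy flows

X = np.column_stack([np.ones(n), np.log(m_i), np.log(m_j), -np.log(d)])
coef, *_ = np.linalg.lstsq(X, np.log(flow), rcond=None)
print(f"estimated distance coefficient beta = {coef[3]:.2f}")
```

The paper's point is that on real passenger data this direct fit misbehaves unless hidden transfer subflows are first separated out; the synthetic example only shows the estimation mechanics.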
Spencer-Hughes, Victoria; Syred, Jonathan; Allison, Alison; Holdsworth, Gillian; Baraitser, Paula
2017-02-14
Sexual health services routinely screen for child sexual exploitation (CSE). Although sexual health services are increasingly provided online, there has been no research on the translation of this safeguarding function to online services. We therefore documented expert practitioner views on safeguarding in the context of an online sexual health service. We conducted semistructured interviews with lead professionals purposively sampled from local, regional, or national organizations with a direct influence over CSE protocols, child protection policies, and sexual health services. Interviews were analyzed by three researchers using a matrix-based analytic method. Our respondents described two different approaches to safeguarding. The "information-providing" approach considers that young people experiencing CSE will ask for help when they are ready from someone they trust. The primary function of the service is to provide information, provoke reflection, generate trust, and respond reliably to disclosure. This approach values online services as an anonymous space to test out disclosure without commitment. The "information-gathering" approach considers that young people may withhold information about exploitation; services should therefore seek out information to assess risk and initiate disclosure. This approach values face-to-face opportunities for individualized questioning and immediate referral. The information-providing approach is associated with confidential telephone support lines and the information-gathering approach with clinical services. The approach adopted online will depend on ethos and the range of services provided. Effective transition from online to clinic services after disclosure is an essential element of this process, and further research is needed to understand and support this transition.
©Victoria Spencer-Hughes, Jonathan Syred, Alison Allison, Gillian Holdsworth, Paula Baraitser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 14.02.2017.
Evaluation methodology for query-based scene understanding systems
NASA Astrophysics Data System (ADS)
Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.
2015-05-01
In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization is to discover some dominant object classes and localize all of object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue-some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links are exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined as that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific , which allows to more efficiently exploit discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. 
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1992-01-01
The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and the sector-based approach is presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce it.
A Risk-Based Ecohydrological Approach to Assessing Environmental Flow Regimes
NASA Astrophysics Data System (ADS)
Mcgregor, Glenn B.; Marshall, Jonathan C.; Lobegeiger, Jaye S.; Holloway, Dean; Menke, Norbert; Coysh, Julie
2018-03-01
For several decades there has been recognition that water resource development alters river flow regimes and impacts ecosystem values. Determining strategies to protect or restore flow regimes to achieve ecological outcomes is a focus of water policy and legislation in many parts of the world. However, consideration of existing environmental flow assessment approaches for application in Queensland identified deficiencies precluding their adoption. First, in managing flows and using ecosystem condition as an indicator of effectiveness, many approaches ignore the fact that river ecosystems are subject to threatening processes other than flow regime alteration. Second, many focus on providing flows for responses without considering how often those flows are necessary to sustain ecological values in the long term. Finally, few consider requirements at spatial scales relevant to the desired outcomes, frequently focusing on individual places rather than the regions supporting sustainability. Consequently, we developed a risk-based ecohydrological approach that identifies ecosystem values linked to desired ecological outcomes, is sensitive to flow alteration, and uses indicators of broader ecosystem requirements. Monitoring and research are undertaken to quantify flow dependencies, and ecological modelling is used to quantify flow-related ecological responses over an historical flow period. The relative risk from different flow management scenarios can then be evaluated at relevant spatial scales. This overcomes the deficiencies identified above and provides a robust and useful foundation upon which to build the information needed to support water planning decisions. Application of the risk assessment approach is illustrated here by two case studies.
A metal-free organic-inorganic aqueous flow battery.
Huskinson, Brian; Marshak, Michael P; Suh, Changwon; Er, Süleyman; Gerhardt, Michael R; Galvin, Cooper J; Chen, Xudong; Aspuru-Guzik, Alán; Gordon, Roy G; Aziz, Michael J
2014-01-09
As the fraction of electricity generation from intermittent renewable sources--such as solar or wind--grows, the ability to store large amounts of electrical energy is of increasing importance. Solid-electrode batteries maintain discharge at peak power for far too short a time to fully regulate wind or solar power output. In contrast, flow batteries can independently scale the power (electrode area) and energy (arbitrarily large storage volume) components of the system by maintaining all of the electro-active species in fluid form. Wide-scale utilization of flow batteries is, however, limited by the abundance and cost of these materials, particularly those using redox-active metals and precious-metal electrocatalysts. Here we describe a class of energy storage materials that exploits the favourable chemical and electrochemical properties of a family of molecules known as quinones. The example we demonstrate is a metal-free flow battery based on the redox chemistry of 9,10-anthraquinone-2,7-disulphonic acid (AQDS). AQDS undergoes extremely rapid and reversible two-electron two-proton reduction on a glassy carbon electrode in sulphuric acid. An aqueous flow battery with inexpensive carbon electrodes, combining the quinone/hydroquinone couple with the Br2/Br(-) redox couple, yields a peak galvanic power density exceeding 0.6 W cm(-2) at 1.3 A cm(-2). Cycling of this quinone-bromide flow battery showed >99 per cent storage capacity retention per cycle. The organic anthraquinone species can be synthesized from inexpensive commodity chemicals. This organic approach permits tuning of important properties such as the reduction potential and solubility by adding functional groups: for example, we demonstrate that the addition of two hydroxy groups to AQDS increases the open circuit potential of the cell by 11% and we describe a pathway for further increases in cell voltage. 
The use of π-aromatic redox-active organic molecules instead of redox-active metals represents a new and promising direction for realizing massive electrical energy storage at greatly reduced cost.
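Under an assumed linear polarization model (illustrative numbers, not the reported cell's parameters), a peak galvanic power density like the one quoted above corresponds to a simple optimum: p(j) = (E0 − ASR·j)·j peaks at j = E0/(2·ASR), so power scales with the square of the open-circuit voltage, which is why raising the OCV with functional groups pays off:

```python
# Back-of-envelope flow-cell polarization sketch. With open-circuit voltage E0
# and area-specific resistance ASR (both assumed values, not the AQDS/Br cell's),
# the power density p(j) = (E0 - ASR*j)*j is maximized at j = E0/(2*ASR),
# giving p_max = E0**2/(4*ASR).

E0 = 0.80     # open-circuit voltage, V (assumed)
ASR = 0.25    # area-specific resistance, ohm*cm^2 (assumed)

j_peak = E0/(2*ASR)        # current density at peak power, A/cm^2
p_max = E0**2/(4*ASR)      # peak power density, W/cm^2
print(f"peak power {p_max:.2f} W/cm2 at {j_peak:.2f} A/cm2")

# Because p_max ~ E0**2, an 11% OCV increase (as reported for the
# dihydroxy-substituted quinone) would raise peak power by ~23%.
gain = 1.11**2 - 1
print(f"power gain from +11% OCV: {gain:.0%}")
```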
Discharge data assimilation in a distributed hydrologic model for flood forecasting purposes
NASA Astrophysics Data System (ADS)
Ercolani, G.; Castelli, F.
2017-12-01
Flood early warning systems benefit from accurate river flow forecasts, and data assimilation may improve their reliability. However, the actual enhancement that can be obtained in operational practice should be investigated in detail and quantified. In this work we assess the benefits that the simultaneous assimilation of discharge observations at multiple locations can bring to flow forecasting through a distributed hydrologic model. The distributed model, MOBIDIC, is part of the operational flood forecasting chain of Tuscany Region in Central Italy. The assimilation system adopts a mixed variational-Monte Carlo approach to efficiently update initial river flow, soil moisture, and a parameter related to runoff production. The evaluation of the system is based on numerous hindcast experiments of real events. The events are characterized by significant rainfall that resulted in both high and relatively low flow in the river network. The area of study is the main basin of Tuscany Region, i.e. the Arno river basin, which extends over about 8300 km2 and whose mean annual precipitation is around 800 mm. The Arno's mainstream, with its nearly 240 km length, passes through major Tuscan cities, such as Florence and Pisa, that are vulnerable to floods (e.g. the flood of November 1966). The assimilation tests follow the usage of the model in the forecasting chain, employing the operational resolution in both space and time (500 m and 15 minutes respectively) and releasing new flow forecasts every 6 hours. The assimilation strategy is evaluated with respect to open-loop simulations, i.e. runs that do not exploit discharge observations through data assimilation. We compare hydrographs in their entirety, as well as classical performance indexes, such as error on peak flow and Nash-Sutcliffe efficiency. The dependence of performance on lead time and location is assessed.
Results indicate that the operational forecasting chain can benefit from the developed assimilation system, although with a significant variability due to the specific characteristics of any single event, and with downstream locations more sensitive to observations than upstream sites.
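The Nash-Sutcliffe efficiency used to score such hindcasts is NSE = 1 − Σ(obs−sim)²/Σ(obs−mean(obs))²; a minimal sketch on made-up hydrographs (not the Arno data):

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1 is a perfect forecast, 0 means the forecast is
# no better than predicting the mean observed flow, and negative values are
# worse than the mean. Hydrograph values below are illustrative only.

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2)/np.sum((obs - obs.mean())**2)

obs = [10, 40, 160, 320, 210, 90, 40]    # observed discharge, m^3/s
sim = [12, 55, 150, 290, 230, 95, 35]    # simulated discharge, m^3/s
score = nse(obs, sim)
print(f"NSE = {score:.3f}")
```

Comparing this score between assimilated and open-loop runs, as a function of lead time and gauge location, is exactly the kind of evaluation the abstract describes.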
Huang, Song-Bin; Wu, Min-Hsien; Lin, Yen-Heng; Hsieh, Chia-Hsun; Yang, Chih-Liang; Lin, Hung-Chih; Tseng, Ching-Ping; Lee, Gwo-Bin
2013-04-07
Negative selection-based circulating tumor cell (CTC) isolation is believed to be valuable because it harvests more native CTCs, and in particular all possible CTCs, without bias related to the properties of the surface antigens on the CTCs. Under such a cell isolation strategy, however, CTC purity is normally compromised. To address this issue, this study reports the integration of optically-induced-dielectrophoretic (ODEP) force-based cell manipulation and a laminar flow regime in a microfluidic platform for the isolation of untreated, and highly pure, CTCs after conventional negative selection-based CTC isolation. In the design, six sections of moving light-bar screens were continuously and simultaneously exerted in two parallel laminar flows to concurrently separate the cancer cells from the leukocytes based on their size difference and electric properties. The separated cell populations were further partitioned, delivered, and collected through the two flows. With this approach, the cancer cells can be isolated in a continuous, effective, and efficient manner. In this study, the operating conditions of ODEP for the manipulation of prostate cancer (PC-3) and human oral cancer (OEC-M1) cells and leukocytes with minimal cell aggregation were first characterized. Moreover, the performance of the proposed method for the isolation of cancer cells was experimentally investigated. The results showed that the presented CTC isolation scheme was able to isolate PC-3 or OEC-M1 cells from a leukocyte background with a high recovery rate (PC-3 cells: 76-83%; OEC-M1 cells: 61-68%) and high purity (PC-3 cells: 74-82%; OEC-M1 cells: 64-66%) (set flow rate: 0.1 μl min(-1), sample volume: 1 μl). The latter is beyond what is currently possible with conventional CTC isolation. Moreover, the viability of the isolated cancer cells was evaluated to be as high as 94 ± 2% and 95 ± 3% for the PC-3 and OEC-M1 cells, respectively.
Furthermore, the isolated cancer cells were also shown to preserve their proliferative capability. As a whole, this study has presented an ODEP-based microfluidic platform that is capable of isolating CTCs in a continuous, label-free, cell-friendly, and particularly highly pure manner. All these traits are found particularly meaningful for exploiting the harvested CTCs for the subsequent cell-based, or biochemical assays.
NASA Astrophysics Data System (ADS)
Cauchie, Léna; Lengliné, Olivier; Schmittbuhl, Jean
2017-04-01
Abundant seismicity is generally observed during the exploitation of geothermal reservoirs, especially during phases of hydraulic stimulation. At the Enhanced Geothermal System of Soultz-Sous-Forêts in France, the induced seismicity has been thoroughly studied over the years of exploitation, and the mechanism at its origin has been related to both fluid pressure increase during stimulation and aseismic creeping movements. The fluid-induced seismic events often exhibit a high degree of similarity, and the mechanism at the origin of these repeated events is thought to be associated with a slow slip process in which asperities on the rupture zone act several times. In order to improve our knowledge of the mechanisms associated with such events and of the damaged zones involved during the hydraulic stimulations, we investigate the behaviour of the multiplets and their persistence, if it prevails, over several water injection intervals. For this purpose, we analysed large datasets recorded by a downhole seismic network for several water injection periods (1993, 2000, …). For each stimulation interval, thousands of events are recorded at depth. We detected the events using the continuous kurtosis-based migration method and classified them into families of comparable waveforms using an approach based on cross-correlation analysis. We obtain precise relative locations of the multiplets using differential arrival times obtained through cross-correlation of similar waveforms. Finally, the properties of the similar fluid-induced seismic events are derived (magnitude, spectral content) and examined over the several hydraulic tests. These steps should lead to a better understanding of the repetitive nature of these events, and the investigation of their persistence will outline the heterogeneities of the structures (temperature anomalies, regional stress perturbations, fluid flow channelling) regularly involved during the different stimulations.
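The cross-correlation grouping step can be sketched as follows (synthetic waveforms and an assumed 0.9 similarity threshold, not the study's actual parameters): events whose normalized peak cross-correlation with a reference event exceeds the threshold join the same multiplet family.

```python
import numpy as np

# Normalized cross-correlation classification: two noisy, time-shifted copies
# of the same synthetic "source" waveform end up in one family, while an
# unrelated noise trace does not. Waveforms and threshold are illustrative.

def max_norm_xcorr(a, b):
    """Peak of the normalized cross-correlation over all lags (in [-1, 1])."""
    a = (a - a.mean())/np.linalg.norm(a - a.mean())
    b = (b - b.mean())/np.linalg.norm(b - b.mean())
    return np.max(np.correlate(a, b, mode="full"))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
template = np.exp(-20*t)*np.sin(40*t)            # synthetic source waveform
events = [
    template + 0.02*rng.standard_normal(t.size),             # noisy copy
    np.roll(template, 5) + 0.02*rng.standard_normal(t.size), # shifted noisy copy
    rng.standard_normal(t.size),                             # unrelated event
]

sims = [max_norm_xcorr(events[0], e) for e in events]
family = [i for i, s in enumerate(sims) if s > 0.9]
print("correlations:", [round(s, 2) for s in sims], "-> family:", family)
```

In the real workflow the differential lag at the correlation peak also supplies the precise relative arrival times used for multiplet relocation.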
Local and System Level Considerations for Plasma-Based Techniques in Hypersonic Flight
NASA Astrophysics Data System (ADS)
Suchomel, Charles; Gaitonde, Datta
2007-01-01
The harsh environment encountered in hypersonic flight, particularly when air-breathing propulsion devices are utilized, poses daunting challenges to the successful maturation of suitable technologies. This has spurred the quest for revolutionary solutions, particularly those exploiting the fact that air under these conditions can become electrically conducting, either naturally or through artificial enhancement. Optimized development of such concepts must emphasize not only the detailed physics by which the fluid interacts with the imposed electromagnetic fields, but must also simultaneously identify the system-level integration issues and efficiencies that provide the greatest leverage. This paper presents some recent advances at both levels. At the system level, an analysis is summarized that incorporates the interdependencies occurring between weight, power, and flow-field performance improvements. Cruise performance comparisons highlight how one drag reduction device interacts with the vehicle to improve range. Quantified parameter interactions allow specification of system requirements and energy-consuming technologies that affect overall flight vehicle performance. Results based on the fundamental physics are presented by distilling numerous computational studies into a few guiding principles. These highlight the complex, non-intuitive relationships between the various fluid and electromagnetic fields, together with thermodynamic considerations. Generally, energy extraction is an efficient process, while the reverse is accompanied by significant dissipative heating and inefficiency. Velocity distortions can be detrimental to plasma operation, but can be exploited to tailor flows through innovative electromagnetic configurations.
From Signature-Based Towards Behaviour-Based Anomaly Detection (Extended Abstract)
2010-11-01
data acquisition can serve as sensors. The de-facto standard for IP flow monitoring is the NetFlow format. Although NetFlow was originally developed by Cisco...packets with some common properties that pass through a network device. These collected flows are exported to an external device, the NetFlow ...Thanks to the network-based approach using NetFlow data, the detection algorithm is host independent and highly scalable. Deep Packet Inspection
NASA Astrophysics Data System (ADS)
Fuse, Shinichiro; Mifune, Yuto; Nakamura, Hiroyuki; Tanaka, Hiroshi
2016-11-01
Feglymycin is a naturally occurring, anti-HIV and antimicrobial 13-mer peptide that includes highly racemizable 3,5-dihydroxyphenylglycines (Dpgs). Here we describe the total synthesis of feglymycin based on a linear/convergent hybrid approach. Our originally developed micro-flow amide bond formation enabled highly racemizable peptide chain elongation based on a linear approach that was previously considered impossible. Our developed approach will enable the practical preparation of biologically active oligopeptides that contain highly racemizable amino acids, which are attractive drug candidates.
Nonlinear relaxation algorithms for circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, R.A.
Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer-run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
Finite element techniques in computational time series analysis of turbulent flows
NASA Astrophysics Data System (ADS)
Horenko, I.
2009-04-01
In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions, while other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example and the results will be compared with those obtained by standard approaches.
The importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.
NASA Astrophysics Data System (ADS)
Phipps, Marja; Lewis, Gina
2012-06-01
Over the last decade, intelligence capabilities within the Department of Defense/Intelligence Community (DoD/IC) have evolved from ad hoc, single source, just-in-time, analog processing; to multi source, digitally integrated, real-time analytics; to multi-INT, predictive Processing, Exploitation and Dissemination (PED). Full Motion Video (FMV) technology and motion imagery tradecraft advancements have greatly contributed to Intelligence, Surveillance and Reconnaissance (ISR) capabilities during this timeframe. Imagery analysts have exploited events, missions and high value targets, generating and disseminating critical intelligence reports within seconds of occurrence across operationally significant PED cells. Now, we go beyond FMV, enabling All-Source Analysts to effectively deliver ISR information in a multi-INT sensor rich environment. In this paper, we explore the operational benefits and technical challenges of an Activity Based Intelligence (ABI) approach to FMV PED. Existing and emerging ABI features within FMV PED frameworks are discussed, to include refined motion imagery tools, additional intelligence sources, activity relevant content management techniques and automated analytics.
NASA Astrophysics Data System (ADS)
Srinivasa, K. G.; Shree Devi, B. N.
2017-10-01
String searching in documents has become a tedious task with the evolution of Big Data. The generation of large data sets demands a high performance search algorithm in areas such as text mining, information retrieval and many others. The popularity of GPUs for general purpose computing has been increasing for various applications. It is therefore of great interest to exploit the thread feature of a GPU to provide a high performance search algorithm. This paper proposes an optimized new approach to the N-gram model for string search in a number of lengthy documents and its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents, employing character level N-gram matching with a parallel Score Table approach and search using the CUDA API. The new approach of the Score Table, used for frequency storage of N-grams in a document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive thread feature of a GPU has been exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search in a huge number of documents, thus speeding up the whole search process even for a large pattern size. Experiments were carried out on many documents of varied length and search strings from the standard Lorem Ipsum text on NVIDIA's GeForce GT 540M GPU with 96 cores. Results show that the parallel approach to Score Table creation and searching gives a good speed-up over the same approach executed serially.
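A serial, pure-Python analogue of the Score Table idea — n-gram frequencies indexed per document, so that a pattern lookup costs time proportional to the pattern length rather than the document length — might look like this (function names are illustrative; the paper's implementation is CUDA-based):

```python
from collections import Counter

def score_table(doc, n=3):
    """Frequency table of character n-grams in one document
    (the 'Score Table'), built in a single pass."""
    return Counter(doc[i:i + n] for i in range(len(doc) - n + 1))

def ngram_score(pattern, table, n=3):
    """Fraction of the pattern's n-grams present in the document's table.
    Each lookup is O(1), so scoring depends only on the pattern length."""
    grams = [pattern[i:i + n] for i in range(len(pattern) - n + 1)]
    if not grams:
        return 0.0
    hits = sum(1 for g in grams if table[g] > 0)
    return hits / len(grams)
```

In the GPU version described in the abstract, table construction and per-document scoring are the naturally parallel steps: one thread block per document, one thread per trigram.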
Along-the-net reconstruction of hydropower potential with consideration of anthropic alterations
NASA Astrophysics Data System (ADS)
Masoero, A.; Claps, P.; Gallo, E.; Ganora, D.; Laio, F.
2014-09-01
Even in regions with mature hydropower development, requirements for stable renewable power sources suggest revising plans for the exploitation of water resources while respecting environmental regulations. Mean Annual Flow (MAF) is a key parameter when trying to represent water availability for hydropower purposes. MAF is usually determined in ungauged basins by means of regional statistical analysis. For this study, a regional estimation method consistent along the river network has been developed for MAF estimation; the method uses a multi-regressive approach based on geomorphoclimatic descriptors, and it is applied to 100 gauged basins located in NW Italy. The method has been designed to keep the estimates of mean annual flow congruent at confluences by considering only raster-summable explanatory variables. Also, the influence of human alterations on the regional analysis of MAF has been studied: the impact due to the presence of existing hydropower plants has been taken into account, restoring the "natural" value of runoff through analytical corrections. To exemplify the representation of the assessment of residual hydropower potential, the model has been applied extensively to two specific mountain watersheds by mapping the estimated mean flow for the basins draining into each pixel of the DEM-derived river network. Spatial algorithms were developed using the open source software GRASS GIS and PostgreSQL/PostGIS. Spatial representation of the hydropower potential was obtained using different mean flow vs hydraulic-head relations for each pixel. Final potential indices have been represented and mapped through the Google Earth platform, providing a complete and interactive picture of the available potential, useful for planning and regulation purposes.
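The congruence-at-confluences property can be illustrated with a minimal sketch: if the regression uses only raster-summable descriptors (say, basin area and basin-total precipitation volume — hypothetical choices, not necessarily the study's descriptors) and no intercept, the fitted MAF is additive by construction:

```python
import numpy as np

def fit_maf(X, maf):
    """No-intercept least squares: MAF ≈ X @ beta.  With raster-summable
    descriptors, a linear model with no intercept is additive, so the
    downstream estimate equals the sum of the upstream contributions."""
    beta, *_ = np.linalg.lstsq(X, maf, rcond=None)
    return beta

def predict_maf(x, beta):
    """MAF estimate for one basin from its descriptor vector."""
    return x @ beta
```

At a confluence, the joined basin's descriptors are the sums of the tributaries' descriptors, so the predicted MAF is automatically the sum of the tributary predictions.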
Multidomain approach for calculating compressible flows
NASA Technical Reports Server (NTRS)
Cambier, L.; Chazzi, W.; Veuillot, J. P.; Viviand, H.
1982-01-01
A multidomain approach for calculating compressible flows by using unsteady or pseudo-unsteady methods is presented. This approach is based on a general technique of connecting together two domains in which hyperbolic systems (that may differ) are solved with the aid of compatibility relations associated with these systems. Some examples of this approach's application to calculating transonic flows in ideal fluids are shown, particularly the adjustment of shock waves. The approach is then applied to treating a shock/boundary layer interaction problem in a transonic channel.
Lagrangian based methods for coherent structure detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allshouse, Michael R., E-mail: mallshouse@chaos.utexas.edu; Peacock, Thomas, E-mail: tomp@mit.edu
There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause and effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.
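The causal-asymmetry idea behind such pairwise tests can be illustrated with a much-simplified sketch (ordinary regression plus a crude dependence score — not the kernel-embedding ENCI estimator): regress each way and prefer the direction whose residual looks independent of its regressor.

```python
import numpy as np

def dependence(u, v):
    """Crude dependence score: |corr(u, v)| + |corr(u^2, v^2)|.
    Both terms are near zero when u and v are independent."""
    def c(a, b):
        return abs(float(np.corrcoef(a, b)[0, 1]))
    return c(u, v) + c(u ** 2, v ** 2)

def causal_direction(x, y):
    """Pairwise direction test in the LiNGAM spirit: fit y = b*x and
    x = b*y (through the origin, zero-mean data assumed) and keep the
    direction whose residual is closer to independent of its regressor."""
    b_xy = np.dot(x, y) / np.dot(x, x)
    b_yx = np.dot(y, x) / np.dot(y, y)
    r_xy = y - b_xy * x        # residual under x -> y
    r_yx = x - b_yx * y        # residual under y -> x
    return "x->y" if dependence(x, r_xy) < dependence(y, r_yx) else "y->x"
```

For non-Gaussian inputs, the residual of the wrong-direction regression stays dependent on its regressor, which is the asymmetry the abstract refers to.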
Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just
2018-04-03
Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events. PMID:29614060
Low energy consumption vortex wave flow membrane bioreactor.
Wang, Zhiqiang; Dong, Weilong; Hu, Xiaohong; Sun, Tianyu; Wang, Tao; Sun, Youshan
2017-11-01
In order to reduce the energy consumption and membrane fouling of the conventional membrane bioreactor (MBR), a kind of low energy consumption vortex wave flow MBR was developed based on the combination of the biofilm process and the membrane filtration process, as well as the vortex wave flow technique. The experimental results showed that the vortex wave flow state in the membrane module could be formed when the Reynolds number (Re) of the liquid was adjusted between 450 and 1,050, and the membrane flux declined more slowly in the vortex wave flow state than in the laminar and turbulent flow states. The MBR system was used to treat domestic wastewater under vortex wave flow conditions for 30 days. The results showed that the removal efficiency for CODcr and NH3-N was 82% and 98% respectively, and the permeate quality met the requirement of the 'Water quality standard for urban miscellaneous water consumption (GB/T 18920-2002)'. Analysis of the energy consumption of the MBR showed that the average energy consumption was 1.90 ± 0.55 kWh/m3 (permeate), which was only two thirds of the energy consumption of a conventional MBR.
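The reported operating window can be checked with a trivial sketch; the pipe diameter, velocity, and viscosity values below are illustrative assumptions, not measurements from the study:

```python
def reynolds(velocity, diameter, kinematic_viscosity=1.0e-6):
    """Pipe-flow Reynolds number Re = v*D/nu.
    Default nu is roughly that of water at ~20 °C (m^2/s)."""
    return velocity * diameter / kinematic_viscosity

def in_vortex_wave_window(re, lo=450.0, hi=1050.0):
    """Operating window for the vortex wave flow state reported above."""
    return lo <= re <= hi
```

For example, a hypothetical 0.03 m/s flow in a 0.02 m module gives Re = 600, inside the 450-1,050 window, while 0.2 m/s gives Re = 4,000, well into turbulence.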
Reducing current reversal time in electric motor control
Bredemann, Michael V
2014-11-04
The time required to reverse current flow in an electric motor is reduced by exploiting inductive current that persists in the motor when power is temporarily removed. Energy associated with this inductive current is used to initiate reverse current flow in the motor.
Progress in chemical luminescence-based biosensors: A critical review.
Roda, Aldo; Mirasoli, Mara; Michelini, Elisa; Di Fusco, Massimo; Zangheri, Martina; Cevenini, Luca; Roda, Barbara; Simoni, Patrizia
2016-02-15
Biosensors are a very active research field. They have the potential to lead to low-cost, rapid, sensitive, reproducible, and miniaturized bioanalytical devices, which exploit the high binding avidity and selectivity of biospecific binding molecules together with highly sensitive detection principles. Of the optical biosensors, those based on chemical luminescence detection (including chemiluminescence, bioluminescence, electrogenerated chemiluminescence, and thermochemiluminescence) are particularly attractive, due to their high signal-to-noise ratio and the simplicity of the required measurement equipment. Several biosensors based on chemical luminescence have been described for quantitative, and in some cases multiplex, analysis of organic molecules (such as hormones, drugs, pollutants), proteins, and nucleic acids. These exploit a variety of miniaturized analytical formats, such as microfluidics, microarrays, paper-based analytical devices, and whole-cell biosensors. Nevertheless, despite the high analytical performances described in the literature, the field of chemical luminescence biosensors has yet to demonstrate commercial success. This review presents the main recent advances in the field and discusses the approaches, challenges, and open issues, with the aim of stimulating a broader interest in developing chemical luminescence biosensors and improving their commercial exploitation. Copyright © 2015 Elsevier B.V. All rights reserved.
An Integrated Processing Strategy for Mountain Glacier Motion Monitoring Based on SAR Images
NASA Astrophysics Data System (ADS)
Ruan, Z.; Yan, S.; Liu, G.; LV, M.
2017-12-01
Mountain glacier dynamic variables are important parameters in studies of environment and climate change in High Mountain Asia. Due to the increasing number of abnormal glacier-related hazards, monitoring glacier movements has attracted growing interest in recent years. Glacier velocities are sensitive and change rapidly under the complex conditions of high mountain regions, which implies that analysis of glacier dynamic changes requires comprehensive and frequent observations with relatively high accuracy. Synthetic aperture radar (SAR) has been successfully exploited to detect glacier motion in a number of previous studies, usually with pixel-tracking and interferometry methods. However, the traditional algorithms applied to mountain glacier regions are constrained by the complex terrain and diverse glacial motion types. Interferometry techniques are prone to fail on mountain glaciers because of their narrow extent and the steep terrain, while the pixel-tracking algorithm, which is more robust in high mountain areas, is subject to accuracy loss. In order to derive glacier velocities continually and efficiently, we propose a modified strategy to exploit SAR data for mountain glaciers. In our approach, we integrate a set of algorithms for compensating non-glacial-motion-related signals which exist in the offset values retrieved by sub-pixel cross-correlation of SAR image pairs. We exploit a modified elastic deformation model to remove the offsets associated with orbit and sensor attitude, and for the topographic residual offset we utilize a set of operations including a DEM-assisted compensation algorithm and a wavelet-based algorithm. At the last step of the flow, an integrated algorithm combining phase and intensity information of SAR images is used to recover regional motion results where cross-correlation-based processing fails.
The proposed strategy is applied to the West Kunlun Mountain and Muztagh Ata regions in western China using ALOS/PALSAR data. The results show that the strategy can effectively improve the accuracy of velocity estimation, reducing the mean and standard deviation of the residuals from 0.32 m and 0.4 m to 0.16 m. It proves to be highly appropriate for monitoring glacier motion over a widely varying range of ice velocities with relatively high accuracy.
Information dynamics of brain-heart physiological networks during sleep
NASA Astrophysics Data System (ADS)
Faes, L.; Nollo, G.; Jurysta, F.; Marinazzo, D.
2014-10-01
This study proposes an integrated approach, framed in the emerging fields of network physiology and information dynamics, for the quantitative analysis of brain-heart interaction networks during sleep. With this approach, the time series of cardiac vagal autonomic activity and brain wave activities measured respectively as the normalized high frequency component of heart rate variability and the EEG power in the δ, θ, α, σ, and β bands, are considered as realizations of the stochastic processes describing the dynamics of the heart system and of different brain sub-systems. Entropy-based measures are exploited to quantify the predictive information carried by each (sub)system, and to dissect this information into a part actively stored in the system and a part transferred to it from the other connected systems. The application of this approach to polysomnographic recordings of ten healthy subjects led us to identify a structured network of sleep brain-brain and brain-heart interactions, with the node described by the β EEG power acting as a hub which conveys the largest amount of information flowing between the heart and brain nodes. This network was found to be sustained mostly by the transitions across different sleep stages, as the information transfer was weaker during specific stages than during the whole night, and vanished progressively when moving from light sleep to deep sleep and to REM sleep.
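As a much-simplified stand-in for the entropy-based measures used in the study, a linear-Gaussian transfer entropy (the information a source series adds about a target beyond the target's own past) can be estimated from regression residual variances; the order-1 embedding here is an illustrative simplification:

```python
import numpy as np

def gaussian_te(x, y, lag=1):
    """Linear-Gaussian transfer entropy from x to y (order-1 embedding):
    half the log ratio of the residual variance of y given its own past
    to the residual variance given its own past plus the past of x."""
    yt, yp, xp = y[lag:], y[:-lag], x[:-lag]

    def resid_var(target, *preds):
        A = np.column_stack([np.ones_like(target), *preds])
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return float(np.var(target - A @ coef))

    return 0.5 * np.log(resid_var(yt, yp) / resid_var(yt, yp, xp))
```

For a coupled pair where x drives y but not vice versa, the estimate is clearly positive in the driving direction and near zero in the other, which is the kind of asymmetric transfer the brain-heart network analysis quantifies.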
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution
NASA Astrophysics Data System (ADS)
Lakhotia, Kiran; Tillmann, Nikolai; Harman, Mark; de Halleux, Jonathan
Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). The two approaches have complementary strengths and weaknesses, making it a natural choice to explore the degree to which the strengths of one can be exploited to offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handling floating point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open source programs.
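The search-based ingredient can be sketched for a single floating point variable: minimize the fitness |f(x)| with accelerating probes and step halving (a simplified pattern search in the spirit of the alternating variable method; this is not the FloPSy implementation):

```python
def avm_solve(f, x0=0.0, max_iter=1000):
    """Search-based solver for a floating point input with f(x) == 0.0.
    Fitness is |f(x)|; moves are exponentially accelerating probes in
    each direction, and the base step is halved when no probe improves."""
    x = x0
    best = abs(f(x))
    step = 1.0
    for _ in range(max_iter):
        if best == 0.0 or step == 0.0:
            break
        improved = False
        for direction in (1.0, -1.0):
            delta = direction * step
            while abs(f(x + delta)) < best:   # accelerate while improving
                x += delta
                best = abs(f(x))
                delta *= 2.0
                improved = True
        if not improved:
            step /= 2.0
    return x, best
```

In a DSE setting, f would encode the branch distance of a floating point path condition that the constraint solver could not handle symbolically.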
Effluent characterization and different modes of reuse in agriculture-a model case study.
Das, Madhumita; Kumar, Ashwani
2009-06-01
High-quality water supplies are steadily shrinking worldwide. Discharge of industrial effluent into the environment further degrades soil and water quality. On the other hand, effluent reuse in agriculture could be a means to conserve natural resources by providing an assured water supply for growing crops. But industrial effluents are highly variable in nature, containing a variety of substances, and not all are favorable for farming. Appraising effluents and developing modes of their reuse is therefore a prerequisite to enable proper use in agriculture. In this study, effluents of various industries were assessed and approaches for their use in farming were developed for a particular region. Subject to effluent availability, the same approaches could be implemented in other water-scarce areas. Effluents of 20 different industrial units were characterized by 24 attributes. Comparing these with the corresponding irrigation water quality standards, the probability of reuse was interpreted in the first approach. On the basis of relevant properties of the major soil types dominating a particular region, the soil-based usability of effluent was worked out in the second approach. By emphasizing the limitation of groundwater development where it went beyond the 50% exploitation level, the land form and major soil type were then identified by applying a soil-based effluent reuse approach; the area-specific suitability of effluent use was perceived in the third approach. On the basis of irrigation water quality standards, the irrigation potential of paper mill, fermentation (breweries and distilleries), and sugar factory effluents was recognized. In the soil-based approach, the compatibility of effluent with soil type was marked with A (preferred) and B (moderately preferred) classes and, compiling their recurring presence, a unanimous preference was noted for paper mill effluent, followed by rubber goods manufacturing industries/marine shrimp processing units, fermentation, and sugar mills.
Usability of these was also evident from the groundwater exploitation status-based approach. The approaches for assessing industrial effluents differing in composition systematically reflected the suitability and applicability of certain effluents in agriculture. The context-specific assessment of effluent offers options to compare effluents from a range of viewpoints and strengthens the rationale for their use in growing crops. Chemical characterization of the various industrial effluents first disclosed their potential for reuse. The soil-properties-based compatibility assessment highlighted their prospects of use, and the groundwater-exploitation-status-based approach portrayed their area of use in a specific region. Assessment of effluent through these steps enhances the reliability and appropriateness of its reuse in agriculture. Options for prospective industrial effluent reuse in agriculture provide ways to combat the freshwater crisis without degrading environmental quality. The approach may be applied for assessing effluent before its reuse in several water-starved countries.
NASA Astrophysics Data System (ADS)
Wu, Fu-Chun; Chang, Ching-Fu; Shiau, Jenq-Tzong
2015-05-01
The full range of natural flow regime is essential for sustaining the riverine ecosystems and biodiversity, yet there are still limited tools available for assessment of flow regime alterations over a spectrum of temporal scales. Wavelet analysis has proven useful for detecting hydrologic alterations at multiple scales via the wavelet power spectrum (WPS) series. The existing approach based on the global WPS (GWPS) ratio tends to be dominated by the rare high-power flows so that alterations of the more frequent low-power flows are often underrepresented. We devise a new approach based on individual deviations between WPS (DWPS) that are root-mean-squared to yield the global DWPS (GDWPS). We test these two approaches on the three reaches of the Feitsui Reservoir system (Taiwan) that are subjected to different classes of anthropogenic interventions. The GDWPS reveal unique features that are not detected with the GWPS ratios. We also segregate the effects of individual subflow components on the overall flow regime alterations using the subflow GDWPS. The results show that the daily hydropeaking waves below the reservoir not only intensified the flow oscillations at daily scale but most significantly eliminated subweekly flow variability. Alterations of flow regime were most severe below the diversion weir, where the residual hydropeaking resulted in a maximum impact at daily scale while the postdiversion null flows led to large hydrologic alterations over submonthly scales. The smallest impacts below the confluence reveal that the hydrologic alterations at scales longer than 2 days were substantially mitigated with the joining of the unregulated tributary flows, whereas the daily-scale hydrologic alteration was retained because of the hydropeaking inherited from the reservoir releases. 
The proposed DWPS approach unravels for the first time the details of flow regime alterations at these intermediate scales that are overridden by the low-frequency high-power flows when the long-term averaged GWPS are used.
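The difference between the two diagnostics can be sketched directly; assuming a WPS array laid out as scales × time (an illustrative convention), the GWPS is a time average while the GDWPS is a root-mean-square of instantaneous deviations:

```python
import numpy as np

def gwps(wps):
    """Global wavelet power spectrum: time average of the WPS series,
    one value per scale (wps has shape scales x time)."""
    return wps.mean(axis=1)

def gdwps(wps_pre, wps_post):
    """Global deviation WPS: per-scale root-mean-square of the
    instantaneous differences between two WPS series."""
    return np.sqrt(np.mean((wps_post - wps_pre) ** 2, axis=1))
```

Because the GDWPS squares each instantaneous deviation before averaging, a persistent low-power alteration registers at every time step instead of being averaged away against rare high-power events, which is the motivation given above for preferring it over the GWPS ratio.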
NASA Astrophysics Data System (ADS)
Kiryukhin, A. V.
2012-12-01
A TOUGH2-EOS1 3D rectangular numerical model of the Mutnovsky geothermal field (Kiryukhin, 1996) was re-calibrated using natural state and exploitation history data for the period 1984-2006. Recalibration using iTOUGH2-EOS1+tracer inversion modeling capabilities was useful to remove outliers from calibration data, identify sets of estimated parameters of the model, and perform estimations. Chloride ion was used as a "tracer" in this modeling. The thermal-hydrodynamic observational data used for model recalibration are as follows: 37 temperature and 1 pressure calibration points for the natural state; 13 production wells with monthly averaged enthalpies (650 values during 1983-1987 and 2000-2006) and 1 transient pressure monitoring well (57 values during 2003-2006) for the exploitation history match. Chemical observational data include transient chloride mass concentrations from 10 production wells and chloride hot spring sampling data (149 values during 1999-2006). The following features of the Mutnovsky geothermal reservoir were estimated and better understood based on integrated inverse modeling analysis of natural state and exploitation data: 1. Reservoir permeability was found to be one order of magnitude higher compared to the 1996 model, especially in the lower part coinciding with the intrusion contact zone (600-800 mD at -750 to -1250 masl); 2. A local meteoric inflow in the central part of the field accounting for 45-80 kg/s since 2002; 3. Reinjection rates were estimated to be significantly lower than the officially reported 100% of total fluid withdrawal; 4. Upflow fluids were estimated to be hotter (314 °C) and their rates larger (+50%) than assumed before; 5. Global double porosity parameter estimates are: fracture spacing 5-10 m, void fraction ~10^-3; 6. The main upflow zone chloride mass concentration is estimated at 150 ppm.
Conversion of the calibrated TOUGH2-EOS1+tracer model into an electrical resistivity model using TOUGH2-EOS9 (L. Magnusdottir, 2012) may significantly improve the efficiency of Electrical Resistivity Tomography (ERT) applications in detecting spatial features of infiltration downflows and chloride-enriched reinjected flows during reservoir exploitation.
Disaster debris estimation using high-resolution polarimetric stereo-SAR
NASA Astrophysics Data System (ADS)
Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki
2016-10-01
This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.
Towards a semantics-based approach in the development of geographic portals
NASA Astrophysics Data System (ADS)
Athanasis, Nikolaos; Kalabokidis, Kostas; Vaitis, Michail; Soulakellis, Nikolaos
2009-02-01
As the demand for geospatial data increases, the lack of efficient ways to find suitable information becomes critical. In this paper, a new methodology for knowledge discovery in geographic portals is presented. Based on the Semantic Web, our approach exploits the Resource Description Framework (RDF) in order to describe the geoportal's information with ontology-based metadata. When users traverse from page to page in the portal, they take advantage of the metadata infrastructure to navigate easily through data of interest. New metadata descriptions are published in the geoportal according to the RDF schemas.
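The metadata-driven navigation described above can be sketched with a tiny in-memory triple store; the resource names and predicates below are illustrative inventions, not the geoportal's actual RDF vocabulary.

```python
# Minimal in-memory RDF-style triple store illustrating ontology-based
# metadata navigation in a geoportal (all resource names are hypothetical).
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, like an unbound variable in SPARQL.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("map:FireRisk2008", "rdf:type", "geo:HazardMap")
store.add("map:FireRisk2008", "geo:coversRegion", "region:Lesvos")
store.add("map:LandUse2007", "rdf:type", "geo:ThematicMap")
store.add("map:LandUse2007", "geo:coversRegion", "region:Lesvos")

# Navigate from a region page to all datasets covering that region.
related = [s for s, p, o in store.query(p="geo:coversRegion", o="region:Lesvos")]
```

A real geoportal would replace this toy store with an RDF library and SPARQL queries, but the navigation pattern (match on predicate and object, collect subjects) is the same.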
Environmental flow assessments for transformed estuaries
NASA Astrophysics Data System (ADS)
Sun, Tao; Zhang, Heyue; Yang, Zhifeng; Yang, Wei
2015-01-01
Here, we propose an approach to environmental flow assessment that considers spatial pattern variations in potential habitats affected by river discharges and tidal currents in estuaries. The approach comprises four steps: identifying and simulating the distributions of critical environmental factors for habitats of typical species in an estuary; mapping of suitable habitats based on spatial distributions of the Habitat Suitability Index (HSI) and adopting the habitat aggregation index to understand fragmentation of potential suitable habitats; defining variations in water requirements for a certain species using trade-off analysis for different protection objectives; and recommending environmental flows in the estuary considering the compatibility and conflict of freshwater requirements for different species. This approach was tested using a case study in the Yellow River Estuary. Recommended environmental flows were determined by incorporating the requirements of four types of species into the assessments. Greater variability in freshwater inflows could be incorporated into the recommended environmental flows considering the adaptation of potential suitable habitats with variations in the flow regime. Environmental flow allocations should be conducted in conjunction with land use conflict management in estuaries. Based on the results presented here, the proposed approach offers flexible assessment of environmental flow for aquatic ecosystems that may be subject to future change.
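As a hedged illustration of the HSI-mapping step: a composite Habitat Suitability Index is often computed as the geometric mean of per-factor suitability indices, with cells above a threshold counted as suitable habitat. The grid values, factors, and threshold below are invented for the sketch and are not from the Yellow River Estuary case study.

```python
import numpy as np

# Per-factor suitability indices on a small grid, each in [0, 1]
# (illustrative values only).
si_salinity = np.array([[0.9, 0.8], [0.2, 0.1]])
si_depth    = np.array([[0.7, 0.9], [0.3, 0.4]])
si_velocity = np.array([[0.8, 0.6], [0.5, 0.2]])

# Composite HSI as the geometric mean of the factor indices.
hsi = (si_salinity * si_depth * si_velocity) ** (1.0 / 3.0)

cell_area_km2 = 0.25               # area represented by one grid cell
suitable = hsi >= 0.5              # assumed suitability threshold
suitable_area = suitable.sum() * cell_area_km2
```

From maps like `suitable` one can then compute aggregation/fragmentation indices and compare habitat extent across candidate flow scenarios.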
Exploiting vibrational resonance in weak-signal detection
NASA Astrophysics Data System (ADS)
Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek
2017-08-01
In this paper, we investigate the first exploitation of the vibrational resonance (VR) effect to detect weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach via adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.
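A minimal sketch of the idea, under assumed details (a sign nonlinearity, Laplacian noise, and one injected interference amplitude per run rather than a full series): the detection probability of a correlation detector is estimated by Monte Carlo for several interference amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 256, 500
t = np.arange(n)
s = 0.1 * np.sin(2 * np.pi * 0.02 * t)          # weak signal to detect
interference = np.sin(2 * np.pi * 0.23 * t)     # high-frequency injection

def detection_statistic(x, b):
    # Correlation detector with a sign nonlinearity; b scales the
    # injected sinusoidal interference (the "vibration").
    return np.sum(np.sign(x + b * interference) * s)

def detection_probability(b, threshold=0.0):
    hits = 0
    for _ in range(trials):
        noise = rng.laplace(scale=1.0, size=n)  # heavy-tailed background
        if detection_statistic(s + noise, b) > threshold:
            hits += 1
    return hits / trials

p_curve = {b: detection_probability(b) for b in (0.0, 1.0, 2.0)}
```

Scanning `b` over a finer grid would trace out the resonance-like curve the paper describes, with a maximum at an intermediate interference amplitude.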
Site-directed nucleases: a paradigm shift in predictable, knowledge-based plant breeding.
Podevin, Nancy; Davies, Howard V; Hartung, Frank; Nogué, Fabien; Casacuberta, Josep M
2013-06-01
Conventional plant breeding exploits existing genetic variability and introduces new variability by mutagenesis. This has proven highly successful in securing food supplies for an ever-growing human population. The use of genetically modified plants is a complementary approach but all plant breeding techniques have limitations. Here, we discuss how the recent evolution of targeted mutagenesis and DNA insertion techniques based on tailor-made site-directed nucleases (SDNs) provides opportunities to overcome such limitations. Plant breeding companies are exploiting SDNs to develop a new generation of crops with new and improved traits. Nevertheless, some technical limitations as well as significant uncertainties on the regulatory status of SDNs may challenge their use for commercial plant breeding. Copyright © 2013 Elsevier Ltd. All rights reserved.
SATware: A Semantic Approach for Building Sentient Spaces
NASA Astrophysics Data System (ADS)
Massaguer, Daniel; Mehrotra, Sharad; Vaisenberg, Ronen; Venkatasubramanian, Nalini
This chapter describes the architecture of a semantic-based middleware environment for building sensor-driven sentient spaces. The proposed middleware explicitly models sentient space semantics (i.e., entities, spaces, activities) and supports mechanisms to map sensor observations to the state of the sentient space. We argue that such a semantic approach provides a powerful programming environment for building sentient spaces. In addition, the approach provides natural ways to exploit semantics for a variety of purposes, including scheduling under resource constraints and sensor recalibration.
RNA interference in the clinic: challenges and future directions
Pecot, Chad V.; Calin, George A.; Coleman, Robert L.; Lopez-Berestein, Gabriel; Sood, Anil K.
2011-01-01
Inherent difficulties with blocking many desirable targets using conventional approaches have prompted many to consider using RNA interference (RNAi) as a therapeutic approach. Although exploitation of RNAi has immense potential as a cancer therapeutic, many physiological obstacles stand in the way of successful and efficient delivery. This Review explores current challenges to the development of synthetic RNAi-based therapies and considers new approaches to circumvent biological barriers, to avoid intolerable side effects and to achieve controlled and sustained release. PMID:21160526
NASA Astrophysics Data System (ADS)
Stollsteiner, P.; Bessiere, H.; Nicolas, J.; Allier, D.; Berthet, O.
2015-04-01
This article is based on a BRGM study of piezometric indicators and threshold values of discharge and groundwater levels for assessing the potentially exploitable water resources of chalk watersheds. A method for estimating low-water levels from groundwater levels is presented through three examples representing chalk aquifers with different cycles: annual, combined and interannual. The first is located in Picardy and the two others in the Champagne-Ardennes region. The piezometers with annual cycles used in these examples are assumed to be representative of the aquifer hydrodynamics. Except for multi-annual systems, the analysis of discharge measurements at a hydrometric station against groundwater levels measured at a piezometer representative of the main aquifer leads to relatively precise and satisfactory relationships within a chalk context. These relationships may be useful for monitoring, validating, extending or reconstructing low-water flow data. On the one hand, they allow definition of the piezometric levels corresponding to the different alert thresholds of river discharges. On the other hand, they clarify the proportions of low surface-water flow originating from runoff or from drainage of the aquifer. Finally, these correlations give an assessment of the minimum flow for the coming weeks. However, they cannot be used to optimize the value of the exploitable water resource, because it seems difficult to integrate the effective rainfall that could occur during the draining period. Moreover, in the case of multi-annual systems, the solution is to attempt comprehensive system modelling and, if it is satisfactory, to use the simulated values to filter out noise or to run the model for forecasting purposes.
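The discharge-groundwater relationships described above can be sketched as a simple regression that is then inverted at a discharge alert threshold; the levels and discharges below are synthetic, not BRGM data.

```python
import numpy as np

# Synthetic paired observations: groundwater level H (m asl) at a
# representative piezometer and concurrent low-water discharge Q (m3/s).
H = np.array([62.0, 63.5, 65.0, 66.2, 68.0])
Q = np.array([1.1, 1.8, 2.6, 3.1, 4.0])

# Fit a linear level-discharge relationship.
slope, intercept = np.polyfit(H, Q, 1)

def discharge_from_level(h):
    return slope * h + intercept

def level_for_alert(q_alert):
    # Piezometric level equivalent to a river-discharge alert threshold.
    return (q_alert - intercept) / slope

h_alert = level_for_alert(1.5)
```

Inverting the fitted relationship is what allows each discharge alert threshold to be re-expressed as a groundwater-level threshold, as the abstract describes.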
Hessenauer, Jan-Michael; Vokoun, Jason C.; Suski, Cory D.; Davis, Justin; Jacobs, Robert; O’Donnell, Eileen
2015-01-01
Non-random mortality associated with commercial and recreational fisheries has the potential to cause evolutionary changes in fish populations. Inland recreational fisheries offer unique opportunities for the study of fisheries-induced evolution due to the ability to replicate study systems, limited gene flow among populations, and the existence of unexploited reference populations. Experimental research has demonstrated that angling vulnerability is heritable in Largemouth Bass Micropterus salmoides, and is correlated with elevated resting metabolic rates (RMR) and higher fitness. However, whether such differences are present in wild populations is unclear. This study sought to quantify differences in RMR among replicated exploited and unexploited populations of Largemouth Bass. We collected age-0 Largemouth Bass from two Connecticut drinking water reservoirs unexploited by anglers for almost a century, and two exploited lakes, then transported and reared them in the same pond. Field RMR of individuals from each population was quantified using intermittent-flow respirometry. Individuals from unexploited reservoirs had a significantly higher mean RMR (6%) than individuals from exploited populations. These findings are consistent with expectations derived from artificial selection by angling on Largemouth Bass, suggesting that recreational angling may act as an evolutionary force influencing the metabolic rates of fishes in the wild. Reduced RMR as a result of fisheries-induced evolution may have ecosystem-level effects on energy demand, and may be common in exploited recreational populations globally. PMID:26039091
Chen, He; Ma, Lekuan; Guo, Wei; Yang, Ying; Guo, Tong; Feng, Cheng
2013-01-01
Most rivers worldwide are highly regulated by anthropogenic activities through flow regulation and water pollution. Environmental flow regulation is used to reduce the effects of anthropogenic activities on aquatic ecosystems. Formulating flow alteration-ecological response relationships is a key factor in environmental flow assessment. Traditional environmental flow models are characterized by natural relationships between flow regimes and ecosystem factors. However, food webs are often altered from natural states, which disturbs environmental flow assessment in such ecosystems. In ecosystems deteriorated by heavy anthropogenic activities, the effects of environmental flow regulation on species are difficult to assess with current modeling approaches. Environmental flow management compels the development of tools that link flow regimes and food webs in an ecosystem. Food web approaches are more suitable for the task because they better accommodate the disordered multi-species structure of food webs degraded by anthropogenic activities. This paper presents a global method of environmental flow assessment in deteriorated aquatic ecosystems. Linkages between flow regimes and food web dynamics are modeled by incorporating multiple species into an ecosystem to explore ecosystem-based environmental flow management. The approach allows scientists and water resources managers to analyze environmental flows in deteriorated ecosystems in an ecosystem-based way.
Huang, Lanying
2017-11-09
Prior to the passing of the 2009 Human Trafficking Prevention Act (HTPA), human trafficking was underestimated in Taiwan. In the past, domestic trafficking in women and girls often targeted vulnerable groups such as young girls from poor families or minority groups. Since the 1990s, an increasing flow of immigrant women into Taiwan, mainly from Vietnam and Indonesia and some from China, has created a new group of trafficking victims. The current study intends to identify, describe, and categorize human trafficking cases involving women and girls reported and prosecuted under the HTPA in Taiwan. Using the court proceedings of prosecuted cases of trafficking in women and girls under Taiwan's HTPA from all 21 districts in Taiwan from 2009 to 2012, retrieved under the title keyword 'Human Trafficking', this study aims to categorize the different patterns of trafficking in women and girls in Taiwan. The analysis is based on 37 court cases, involving 195 victimized women and girls and 118 perpetrators. The study identifies six forms of human trafficking victimization according to the victims' country of origin, vulnerability status, and means of transport. It found that women and girls suffer both labor and sexual exploitation, mainly at the hands of domestic male perpetrators. While sexual exploitation is more evenly distributed between citizens and immigrants and affects both adults and minors, labor exploitation appears in the data to be an exclusive phenomenon among women immigrant workers. Human trafficking cases in Taiwan share many similarities with human trafficking in other regions, being highly associated with gender inequality and gender-based vulnerability.
A data-driven decomposition approach to model aerodynamic forces on flapping airfoils
NASA Astrophysics Data System (ADS)
Raiola, Marco; Discetti, Stefano; Ianiro, Andrea
2017-11-01
In this work, we exploit a data-driven decomposition of experimental data from a flapping airfoil experiment with the aim of isolating the main contributions to the aerodynamic force and obtaining a phenomenological model. Experiments are carried out on a NACA 0012 airfoil in forward flight with both heaving and pitching motion. Velocity measurements of the near field are carried out with Planar PIV while force measurements are performed with a load cell. The phase-averaged velocity fields are transformed into the wing-fixed reference frame, allowing for a description of the field in a domain with fixed boundaries. The decomposition of the flow field is performed by means of the POD applied on the velocity fluctuations and then extended to the phase-averaged force data by means of the Extended POD approach. This choice is justified by the simple consideration that aerodynamic forces determine the largest contributions to the energetic balance in the flow field. Only the first 6 modes have a relevant contribution to the force. A clear relationship can be drawn between the force and the flow field modes. Moreover, the force modes are closely related (yet slightly different) to the contributions of the classic potential models in literature, allowing for their correction. This work has been supported by the Spanish MINECO under Grant TRA2013-41103-P.
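A compact sketch of the POD/Extended-POD machinery on a synthetic snapshot matrix (the real inputs would be PIV velocity fields and load-cell force data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 400, 64
U = rng.standard_normal((n_points, n_snapshots))   # velocity fluctuations
f = rng.standard_normal(n_snapshots)               # phase-averaged force

# POD via SVD: spatial modes Phi, temporal coefficients A.
Phi, sigma, Vt = np.linalg.svd(U, full_matrices=False)
A = np.diag(sigma) @ Vt                            # temporal coefficients

# Extended POD: project the force onto the temporal coefficients
# (rows of Vt are orthonormal over snapshots) to get each flow mode's
# contribution to the force.
force_modes = Vt @ f
reconstructed_force = Vt.T @ force_modes           # force captured by the modes
```

Ranking `force_modes` by magnitude is the analogue of the paper's observation that only the first few modes contribute appreciably to the aerodynamic force.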
Continuous-Flow Electrophoresis of DNA and Proteins in a Two-Dimensional Capillary-Well Sieve.
Duan, Lian; Cao, Zhen; Yobas, Levent
2017-09-19
Continuous-flow electrophoresis of macromolecules is demonstrated using an integrated capillary-well sieve arranged into a two-dimensional anisotropic array on silicon. The periodic array features thousands of entropic barriers, each resulting from an abrupt interface between a 2 μm deep well (channel) and a 70 nm capillary. These entropic barriers owing to two-dimensional confinement within the capillaries are vastly steep in relation to those arising from slits featuring one-dimensional confinement. Thus, the sieving mechanisms can sustain relatively large electric field strengths over a relatively small array area. The sieve rapidly sorts anionic macromolecules, including DNA chains and proteins in native or denatured states, into distinct trajectories according to size or charge under electric field vectors orthogonally applied. The baseline separation is achieved in less than 1 min within a horizontal migration length of ∼1.5 mm. The capillaries are self-enclosed conduits in cylindrical profile featuring a uniform diameter and realized through an approach that avoids advanced patterning techniques. The approach exploits a thermal reflow of a layer of doped glass for shape transformation into cylindrical capillaries and for controllably shrinking the capillary diameter. Lastly, atomic layer deposition of alumina is introduced for the first time to fine-tune the capillary diameter as well as to neutralize the surface charge, thereby suppressing undesired electroosmotic flows.
Waveform design for detection of weapons based on signature exploitation
NASA Astrophysics Data System (ADS)
Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian
2010-04-01
We present waveform design based on signature-exploitation techniques for improved detection of weapons in urban sensing applications. A single-antenna monostatic radar system is considered. Under the assumption of exact knowledge of the target orientation and, hence, a known impulse response, a matched illumination approach is used for optimal target detection. For the case of unknown target orientation, we analyze the target signatures as random processes and perform signal-to-noise-ratio-based waveform optimization. Numerical electromagnetic modeling is used to provide the impulse responses of an AK-47 assault rifle for various target aspect angles relative to the radar. Simulation results show an improvement in the signal-to-noise ratio at the output of the matched-filter receiver for both matched-illumination and stochastic waveforms as compared with a chirp waveform of the same duration and energy.
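The matched-illumination step can be sketched as follows: for a known impulse response, the unit-energy transmit waveform maximizing output SNR is the dominant right-singular vector of the target's convolution matrix. The toy impulse response below is a stand-in for the numerically modeled rifle response.

```python
import numpy as np

h = np.array([0.2, -0.5, 1.0, 0.3, -0.1])   # toy target impulse response
n = 32                                       # waveform length

# Convolution matrix: y = H @ w is the target echo for waveform w.
H = np.zeros((n + len(h) - 1, n))
for i in range(n):
    H[i:i + len(h), i] = h

_, sing, Vt = np.linalg.svd(H, full_matrices=False)
w_opt = Vt[0]                                # unit-energy matched waveform

def output_energy(w):
    w = w / np.linalg.norm(w)                # normalize to unit energy
    return np.sum((H @ w) ** 2)              # echo energy, proportional to SNR

# Chirp of the same length and energy for comparison.
k = np.arange(n)
chirp = np.sin(2 * np.pi * (0.05 + 0.2 * k / n) * k)
gain_db = 10 * np.log10(output_energy(w_opt) / output_energy(chirp))
```

Because the dominant singular vector maximizes the Rayleigh quotient over unit-energy waveforms, the matched waveform can never do worse than the chirp in this noise model.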
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failures in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as the basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, i.e., has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
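A minimal sketch of the underlying MUSIC spectrum computation (noise-free covariance for determinism; the estimator bank's pseudorandom resampling and beamspace root steps are omitted):

```python
import numpy as np

n_sensors, d = 8, 0.5                     # ULA, half-wavelength spacing
true_doas = np.deg2rad([-20.0, 15.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
R = A @ A.conj().T                        # ideal covariance, unit-power sources

# Noise subspace = eigenvectors of the smallest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, : n_sensors - len(true_doas)]

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.1))
spectrum = np.array(
    [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
)

# Pick the two strongest local maxima as DOA estimates.
peaks = [i for i in range(1, len(grid) - 1)
         if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
top = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:2]
est = sorted(np.rad2deg(grid[i]) for i in top)
```

The estimator bank would perturb this spectrum pseudorandomly many times and keep only the estimators whose peaks fall inside the preliminary localization sectors.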
Artuñedo, Antonio; del Toro, Raúl M.; Haber, Rodolfo E.
2017-01-01
Nowadays many studies are being conducted to develop solutions for improving the performance of urban traffic networks. One of the main challenges is the necessary cooperation among different entities such as vehicles or infrastructure systems and how to exploit the information available through networks of sensors deployed as infrastructures for smart cities. In this work an algorithm for cooperative control of urban subsystems is proposed to provide a solution for mobility problems in cities. The interconnected traffic lights controller (TLC) network adapts traffic lights cycles, based on traffic and air pollution sensory information, in order to improve the performance of urban traffic networks. The presence of air pollution in cities is not only caused by road traffic but there are other pollution sources that contribute to increase or decrease the pollution level. Due to the distributed and heterogeneous nature of the different components involved, a system of systems engineering approach is applied to design a consensus-based control algorithm. The designed control strategy contains a consensus-based component that uses the information shared in the network for reaching a consensus in the state of TLC network components. Discrete event systems specification is applied for modelling and simulation. The proposed solution is assessed by simulation studies with very promising results to deal with simultaneous responses to both pollution levels and traffic flows in urban traffic networks. PMID:28445398
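The consensus-based component can be illustrated with a standard average-consensus iteration; this is an assumption about the protocol for the sake of the sketch, not the paper's exact algorithm.

```python
import numpy as np

# Ring of 4 intersections; each controller's state is (say) the
# green-phase share of its cycle. W is doubly stochastic, so repeated
# neighbor averaging converges to the network-wide average.
W = np.array([[0.5,  0.25, 0.0,  0.25],
              [0.25, 0.5,  0.25, 0.0 ],
              [0.0,  0.25, 0.5,  0.25],
              [0.25, 0.0,  0.25, 0.5 ]])

x = np.array([0.30, 0.60, 0.45, 0.65])   # initial green shares
target = x.mean()
for _ in range(100):
    x = W @ x                            # one consensus update per cycle
```

In the paper's setting the state being agreed upon would be driven by traffic and pollution measurements rather than fixed initial values, but the convergence mechanism is the same.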
Rotated waveplates in integrated waveguide optics.
Corrielli, Giacomo; Crespi, Andrea; Geremia, Riccardo; Ramponi, Roberta; Sansoni, Linda; Santinelli, Andrea; Mataloni, Paolo; Sciarrino, Fabio; Osellame, Roberto
2014-06-25
Controlling and manipulating the polarization state of a light beam is crucial in applications ranging from optical sensing to optical communications, both in the classical and quantum regime, and ultimately whenever interference phenomena are to be exploited. In addition, many of these applications present severe requirements of phase stability and greatly benefit from a monolithic integrated-optics approach. However, integrated devices that allow arbitrary transformations of the polarization state are very difficult to produce with conventional lithographic technologies. Here we demonstrate waveguide-based optical waveplates, with arbitrarily rotated birefringence axis, fabricated by femtosecond laser pulses. To validate our approach, we exploit this component to realize a compact device for the quantum state tomography of two polarization-entangled photons. This work opens perspectives for integrated manipulation of polarization-encoded information with relevant applications ranging from integrated polarimetric sensing to quantum key distribution.
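The action of a rotated waveplate is captured by the standard Jones-calculus formula J = R(-theta) diag(1, exp(i*delta)) R(theta), which the femtosecond-written devices implement in waveguide form. A quick numerical check:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(delta, theta):
    # Retardance delta, fast axis rotated by theta.
    retarder = np.diag([1.0, np.exp(1j * delta)])
    return rotation(-theta) @ retarder @ rotation(theta)

# A half-wave plate (delta = pi) at 45 degrees maps horizontal
# polarization (1, 0) to vertical polarization, up to a global phase.
H_pol = np.array([1.0, 0.0])
out = waveplate(np.pi, np.pi / 4) @ H_pol
```

Cascades of such matrices with arbitrary theta are exactly the polarization transformations the integrated devices make available on chip.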
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
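The core observation can be sketched by hashing fixed-size pages of a memory snapshot to find identical content; this is our illustration of the idea, not the paper's runtime.

```python
import hashlib

PAGE_SIZE = 4096

def page_hashes(memory: bytes):
    # Group page offsets by the hash of their content; groups with more
    # than one offset are duplicated pages that a resilience runtime
    # could use as cheap redundant copies.
    hashes = {}
    for off in range(0, len(memory), PAGE_SIZE):
        digest = hashlib.sha256(memory[off:off + PAGE_SIZE]).hexdigest()
        hashes.setdefault(digest, []).append(off)
    return hashes

# Synthetic snapshot: pages 0 and 2 have identical (all-zero) content.
mem = bytes(PAGE_SIZE) + b"\x01" * PAGE_SIZE + bytes(PAGE_SIZE)
groups = page_hashes(mem)
redundant = [offs for offs in groups.values() if len(offs) > 1]
```

The paper's measurement study amounts to running this kind of scan over real HPC application and OS snapshots and quantifying how much content is duplicated.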
Gravity Effects in Microgap Flow Boiling
NASA Technical Reports Server (NTRS)
Robinson, Franklin; Bar-Cohen, Avram
2017-01-01
Increasing integration density of electronic components has exacerbated the thermal management challenges facing electronic system developers. The high power, heat flux, and volumetric heat generation of emerging devices are driving the transition from remote cooling, which relies on conduction and spreading, to embedded cooling, which facilitates direct contact between the heat-generating device and coolant flow. Microgap coolers employ the forced flow of dielectric fluids undergoing phase change in a heated channel between devices. While two-phase microcoolers are used routinely in ground-based systems, the lack of acceptable models and correlations for microgravity operation has limited their use for spacecraft thermal management. Previous research has revealed that gravitational acceleration plays a diminishing role as the channel diameter shrinks, but there is considerable variation among the proposed gravity-insensitive channel dimensions and minimal research on rectangular ducts. Reliable criteria for achieving gravity-insensitive flow boiling performance would enable spaceflight systems to exploit this powerful thermal management technique and reduce development time and costs through reliance on ground-based testing. In the present effort, the authors have studied the effect of evaporator orientation on flow boiling performance of HFE7100 in a 218 μm tall by 13.0 mm wide microgap cooler. Similar heat transfer coefficients and critical heat flux were achieved across five evaporator orientations, indicating that the effect of gravity was negligible.
Label-free in-flow detection of single DNA molecules using glass nanopipettes.
Gong, Xiuqing; Patil, Amol V; Ivanov, Aleksandar P; Kong, Qingyuan; Gibb, Thomas; Dogan, Fatma; deMello, Andrew J; Edel, Joshua B
2014-01-07
With the view of enhancing the functionality of label-free single molecule nanopore-based detection, we have designed and developed a highly robust, mechanically stable, integrated nanopipette-microfluidic device which combines the recognized advantages of microfluidic systems and the unique properties/advantages of nanopipettes. Unlike more typical planar solid-state nanopores, which have inherent geometrical constraints, nanopipettes can be easily positioned at any point within a microfluidic channel. This is highly advantageous, especially when taking into account fluid flow properties. We show that we are able to detect and discriminate between DNA molecules of varying lengths when motivated through a microfluidic channel, upon the application of appropriate voltage bias across the nanopipette. The effects of applied voltage and volumetric flow rates have been studied to ascertain translocation event frequency and capture rate. Additionally, by exploiting the advantages associated with microfluidic systems (such as flow control and concomitant control over analyte concentration/presence), we show that the technology offers a new opportunity for single molecule detection and recognition in microfluidic devices.
Superfluid high REynolds von Kármán experiment.
Rousset, B; Bonnay, P; Diribarne, P; Girard, A; Poncet, J M; Herbert, E; Salort, J; Baudet, C; Castaing, B; Chevillard, L; Daviaud, F; Dubrulle, B; Gagne, Y; Gibert, M; Hébral, B; Lehner, Th; Roche, P-E; Saint-Michel, B; Bon Mardion, M
2014-10-01
The Superfluid High REynolds von Kármán experiment facility exploits the capacities of a high cooling power refrigerator (400 W at 1.8 K) for a large dimension von Kármán flow (inner diameter 0.78 m), which can work with gaseous or subcooled liquid (He-I or He-II) from room temperature down to 1.6 K. The flow is produced between two counter-rotating or co-rotating disks. The large size of the experiment allows exploration of ultra high Reynolds numbers based on Taylor microscale and rms velocity [S. B. Pope, Turbulent Flows (Cambridge University Press, 2000)] (Rλ > 10000) or resolution of the dissipative scale for lower Re. This article presents the design and first performance of this apparatus. Measurements carried out in the first runs of the facility address the global flow behavior: calorimetric measurement of the dissipation, torque and velocity measurements on the two turbines. Moreover first local measurements (micro-Pitot, hot wire,…) have been installed and are presented.
A Novel Approach to Adaptive Flow Separation Control
2016-09-03
In particular, it considers control of flow separation over a NACA-0025 airfoil using microjet actuators and develops Adaptive Sampling Based Model Predictive Control (Adaptive SBMPC), a novel approach to Nonlinear Model Predictive Control that applies the Minimal Resource Allocation Network… [Final report covering the period 1 May 2013 to 30 April 2016; distribution unlimited.]
Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.
The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.
NASA Astrophysics Data System (ADS)
Liu, Shun; Xu, Jinglei; Yu, Kaikai
2017-06-01
This paper proposes an improved approach for the extraction of pressure fields from velocity data, such as those obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The approach is derived from the Navier-Stokes equations, assuming adiabatic conditions and neglecting viscosity on the flow-field boundaries measured by PIV. The computing method is based on MacCormack's technique from computational fluid dynamics; this approach is therefore called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, spatial integration and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantage becomes more remarkable as the shock strengthens. Furthermore, the performance of the MacCormack method is validated using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is relevant to aerospace engineering, especially to the external flow fields of supersonic aircraft and the internal flow fields of ramjets.
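As a much-simplified illustration of the general idea (pressure recovered from the momentum balance of a steady, inviscid, *incompressible* flow, then integrated spatially — not the paper's compressible MacCormack scheme), the pressure gradient can be evaluated from gridded velocity data and marched from a boundary; all names and values here are ours:

```python
import numpy as np

def pressure_gradient(u, v, rho, dx, dy):
    """dp/dx, dp/dy from the steady, inviscid, incompressible momentum
    equation grad(p) = -rho * (u . grad) u, on a [y, x]-indexed grid."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    dpdx = -rho * (u * du_dx + v * du_dy)
    dpdy = -rho * (u * dv_dx + v * dv_dy)
    return dpdx, dpdy

def integrate_along_x(dpdx, dx, p0=0.0):
    """Crude row-wise rectangle-rule integration of dp/dx from the left edge."""
    return p0 + np.cumsum(dpdx, axis=1) * dx
```

Real schemes (spatial integration, Poisson, MacCormack) differ mainly in how they combine and stabilize this integration in the presence of measurement noise and shocks.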
Virtual Reconstruction of Lost Architectures: from the Tls Survey to AR Visualization
NASA Astrophysics Data System (ADS)
Quattrini, R.; Pierdicca, R.; Frontoni, E.; Barcaglioni, R.
2016-06-01
The exploitation of high-quality 3D models for the dissemination of archaeological heritage is a currently investigated topic, although Mobile Augmented Reality platforms for historical architecture are not yet available that would allow low-cost pipelines for effective content to be developed. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and its AR/VR exploitation, allowing superimposition with the archaeological evidence. The major contributions are the presentation and discussion of a pipeline running from the virtual model to its simplification, showing several outcomes and comparing the supported data qualities and the advantages/disadvantages arising from MAR and VR limitations.
NASA Astrophysics Data System (ADS)
Courdent, Vianney; Grum, Morten; Munk-Nielsen, Thomas; Mikkelsen, Peter S.
2017-05-01
Precipitation is the cause of major perturbations to the flow in urban drainage and wastewater systems. Flow forecasts, generated by coupling rainfall predictions with a hydrologic runoff model, can potentially be used to optimize the operation of integrated urban drainage-wastewater systems (IUDWSs) during both wet and dry weather periods. Numerical weather prediction (NWP) models have significantly improved in recent years, having increased their spatial and temporal resolution. Finer-resolution NWP is suitable for urban-catchment-scale applications, providing longer lead times than radar extrapolation. However, forecasts are inevitably uncertain, and fine resolution is especially challenging for NWP. This uncertainty is commonly addressed in meteorology with ensemble prediction systems (EPSs). Handling uncertainty is challenging for decision makers, and hence tools are necessary to provide insight into ensemble forecast usage and to support the rationality of decisions (i.e. forecasts are uncertain and therefore errors will be made; decision makers need tools to justify their choices, demonstrating that these choices are beneficial in the long run). This study presents an economic framework to support the decision-making process by providing information on when acting on the forecast is beneficial and how to handle the EPS. The relative economic value (REV) approach associates economic values with the potential outcomes and determines the preferential use of the EPS forecast. The envelope curve of the REV diagram combines the results from each probability forecast to provide the highest relative economic value for a given gain-loss ratio. This approach is traditionally used at larger scales to assess mitigation measures for adverse events (i.e. actions are taken when events are forecast). The specificity of this study is to optimize the energy consumption of an IUDWS during low-flow periods by exploiting the electrical smart grid market (i.e. actions are taken when no events are forecast). Furthermore, the results demonstrate the benefit of NWP neighbourhood post-processing methods for enhancing the forecast skill and increasing the range of beneficial uses.
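The relative economic value used in such studies follows the standard cost-loss framework for binary forecasts; a minimal sketch (the function, variable names, and numbers are ours, not from the paper):

```python
def relative_economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss_ratio):
    """Relative economic value of a binary forecast in the cost-loss model:
    1 for a perfect forecast, 0 for the best purely climatological strategy."""
    H, F, s, a = hit_rate, false_alarm_rate, base_rate, cost_loss_ratio
    expense_climate = min(a, s)             # cheaper of always-act vs never-act
    expense_perfect = s * a                 # act only when the event occurs
    expense_forecast = F * (1 - s) * a + H * s * a + (1 - H) * s
    return (expense_climate - expense_forecast) / (expense_climate - expense_perfect)

# A perfect forecast (H=1, F=0) attains the maximum value of 1:
print(relative_economic_value(1.0, 0.0, 0.3, 0.1))  # -> 1.0
```

Sweeping the cost-loss ratio and taking the best-performing probability threshold at each value is what produces the envelope curve of the REV diagram mentioned above.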
Loren, Bradley P; Wleklinski, Michael; Koswara, Andy; Yammine, Kathryn; Hu, Yanyang; Nagy, Zoltan K; Thompson, David H; Cooks, R Graham
2017-06-01
A highly integrated approach to the development of a process for the continuous synthesis and purification of diphenhydramine is reported. Mass spectrometry (MS) is utilized throughout the system for on-line reaction monitoring, off-line yield quantitation, and as a reaction screening module that exploits reaction acceleration in charged microdroplets for high-throughput route screening. This effort has enabled the discovery and optimization of multiple routes to diphenhydramine in glass microreactors using MS as a process analytical tool (PAT). The ability to rapidly screen conditions in charged microdroplets was used to guide optimization of the process in a microfluidic reactor. A quantitative MS method was developed and used to measure the reaction kinetics. Integration of the continuous-flow reactor/on-line MS methodology with a miniaturized crystallization platform for continuous reaction monitoring and controlled crystallization of diphenhydramine was also achieved. Our findings suggest a robust approach for the continuous manufacture of pharmaceutical drug products, exemplified in the particular case of diphenhydramine, optimized for efficiency and crystal size, and guided by real-time analytics to produce the agent in a form that is readily adapted to continuous synthesis.
Drag reduction in a turbulent channel flow using a passivity-based approach
NASA Astrophysics Data System (ADS)
Heins, Peter; Jones, Bryn; Sharma, Atul
2013-11-01
A new active feedback control strategy for attenuating perturbation energy in a turbulent channel flow is presented. Using a passivity-based approach, a controller synthesis procedure has been devised which is capable of making the linear dynamics of a channel flow as close to passive as is possible given the limitations on sensing and actuation. A controller that is capable of making the linearized flow passive is guaranteed to globally stabilize the true flow. The resulting controller is capable of greatly restricting the amount of turbulent energy that the nonlinearity can feed back into the flow. DNS testing of a controller using wall-sensing of streamwise and spanwise shear stress and actuation via wall transpiration acting upon channel flows with Reτ = 100 - 250 showed significant reductions in skin-friction drag.
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Who Makes the Most? Measuring the "Urban Environmental Virtuosity"
ERIC Educational Resources Information Center
Romano, Oriana; Ercolano, Salvatore
2013-01-01
This paper advances a composite indicator called the urban environmental virtuosity index (UEVI), in order to measure the efforts made by local public bodies in applying an ecosystem approach to urban management. The UEVI employs the less exploited process-based selection criteria for representing the original concept of virtuosity, providing a…
Self-gated golden-angle spiral 4D flow MRI.
Bastkowski, Rene; Weiss, Kilian; Maintz, David; Giese, Daniel
2018-01-17
The acquisition of 4D flow magnetic resonance imaging (MRI) in cardiovascular applications has recently made large progress toward clinical feasibility. The need for simultaneous compensation of cardiac and breathing motion still poses a challenge for widespread clinical use. In particular, breathing motion, addressed by gating approaches, can lead to unpredictable and long scan times. The current work proposes a time-efficient self-gated 4D flow sequence that exploits up to 100% of the acquired data and operates at a predictable scan time. A self-gated golden-angle spiral 4D flow sequence was implemented and tested in 10 volunteers. Data were retrospectively binned into respiratory and cardiac states and reconstructed using a conjugate-gradient sensitivity encoding reconstruction. Net flow curves, stroke volumes, and peak flow in the aorta were evaluated and compared to a conventional Cartesian 4D flow sequence. Additionally, flow quantities reconstructed from 50% to 100% of the self-gated 4D flow data were compared. Self-gating signals for respiratory and cardiac motion were extracted for all volunteers. Flow quantities were in agreement with the standard Cartesian scan. Mean differences in stroke volume and peak flow of 7.6 ± 11.5 mL and 4.0 ± 79.9 mL/s were obtained, respectively. By retrospectively increasing breathing navigator efficiency while decreasing acquisition times (15:06-07:33 minutes), 50% of the acquired data were sufficient to measure stroke volumes with errors under 9.6 mL. The feasibility of acquiring respiratory and cardiac self-gated 4D flow data at a predictable scan time was demonstrated. Magn Reson Med, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Partitioned coupling of advection-diffusion-reaction systems and Brinkman flows
NASA Astrophysics Data System (ADS)
Lenarda, Pietro; Paggi, Marco; Ruiz Baier, Ricardo
2017-09-01
We present a partitioned algorithm aimed at extending the capabilities of existing solvers for the simulation of coupled advection-diffusion-reaction systems and incompressible, viscous flow. The space discretisation of the governing equations is based on mixed finite element methods defined on unstructured meshes, whereas the time integration hinges on an operator splitting strategy that exploits the differences in scales between the reaction, advection, and diffusion processes, considering the global system as a number of sequentially linked sets of partial differential, and algebraic equations. The flow solver presents the advantage that all unknowns in the system (here vorticity, velocity, and pressure) can be fully decoupled and thus turn the overall scheme very attractive from the computational perspective. The robustness of the proposed method is illustrated with a series of numerical tests in 2D and 3D, relevant in the modelling of bacterial bioconvection and Boussinesq systems.
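The operator-splitting idea can be illustrated in its simplest form (Lie splitting, one spatial dimension, periodic boundaries) — a toy finite-difference sketch of our own, far from the paper's mixed-finite-element setting:

```python
import numpy as np

def split_step(c, dt, dx, vel, D, k):
    """One Lie-splitting step for c_t + vel*c_x = D*c_xx - k*c with
    periodic boundaries: advect, then diffuse, then react."""
    # 1) advection: first-order upwind (assumes vel >= 0)
    c = c - vel * dt / dx * (c - np.roll(c, 1))
    # 2) diffusion: explicit central differences (stable for D*dt/dx**2 <= 0.5)
    c = c + D * dt / dx**2 * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1))
    # 3) reaction: linear decay, integrated exactly over the sub-step
    return c * np.exp(-k * dt)
```

Because the advection and diffusion sub-steps each conserve total mass on a periodic grid, the discrete mass of this toy model decays exactly as exp(-k t), which makes the splitting easy to sanity-check.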
Travagliati, Marco; Girardo, Salvatore; Pisignano, Dario; Beltram, Fabio; Cecchini, Marco
2013-09-03
Spatiotemporal image correlation spectroscopy (STICS) is a simple and powerful technique, well established as a tool to probe protein dynamics in cells. Recently, its potential as a tool to map velocity fields in lab-on-a-chip systems was discussed. However, the lack of studies on its performance has prevented its use for microfluidics applications. Here, we systematically and quantitatively explore STICS microvelocimetry in microfluidic devices. We exploit a simple experimental setup, based on a standard bright-field inverted microscope (no fluorescence required) and a high-fps camera, and apply STICS to map liquid flow in polydimethylsiloxane (PDMS) microchannels. Our data demonstrates optimal 2D velocimetry up to 10 mm/s flow and spatial resolution down to 5 μm.
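As a simplified building block of correlation-based velocimetry (a generic sketch, not the authors' STICS implementation), the frame-to-frame displacement can be read off the peak of an FFT cross-correlation:

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Integer-pixel shift of frame_b relative to frame_a, located at the
    peak of the circular cross-correlation computed via FFT."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold indices past the midpoint back to negative shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```

Dividing the shift by the inter-frame time gives a local velocity estimate; STICS generalizes this by correlating whole image time series in both space and time.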
Baxendale, Ian R; Braatz, Richard D; Hodnett, Benjamin K; Jensen, Klavs F; Johnson, Martin D; Sharratt, Paul; Sherlock, Jon-Paul; Florence, Alastair J
2015-03-01
This whitepaper highlights current challenges and opportunities associated with continuous synthesis, workup, and crystallization of active pharmaceutical ingredients (drug substances). We describe the technologies and requirements at each stage and emphasize the different considerations for developing continuous processes compared with batch. In addition to the specific sequence of operations required to deliver the necessary chemical and physical transformations for continuous drug substance manufacture, consideration is also given to how the adoption of continuous technologies may impact different manufacturing stages in development from discovery, process development, through scale-up and into full-scale production. The impact of continuous manufacture on drug substance quality and the associated challenges for control and for process safety are also emphasized. In addition to the technology and operational considerations necessary for the adoption of continuous manufacturing (CM), this whitepaper also addresses the cultural, as well as skills and training, challenges that will need to be met, with support from organizations, in order to accommodate the new work flows. Specific action items for industry leaders are: develop flow chemistry toolboxes, exploiting the advantages of flow processing and including highly selective chemistries that allow the use of simple and effective continuous workup technologies; make available modular or plug-and-play-type equipment, especially for workup, to assist straightforward deployment in the laboratory (as with learning from other industries, standardization is highly desirable and will require cooperation across industry and academia to develop and implement); implement and exploit process analytical technologies (PAT) for real-time dynamic control of continuous processes; develop modeling and simulation techniques to support continuous process development and control (progress is required in multiphase systems such as crystallization); involve all parts of the organization, from discovery, research and development, to manufacturing, in the implementation of CM; engage with academia to develop the training provision to support the skills base for CM, particularly in flow chemistry, physical chemistry, and chemical engineering skills at the chemistry-process interface; promote and encourage publication and dissemination of examples of CM across the sector to demonstrate capability, engage with regulatory comment, establish benchmarks for performance, and highlight challenges; and develop the economic case for CM of drug substance. This will involve various stakeholders at the project and business level; however, establishing the critical economic drivers is essential to driving the transformation in manufacturing. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Final Report, “Exploiting Global View for Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, Andrew
2017-03-29
Final technical report for the "Exploiting Global View for Resilience" project. The GVR project aims to create a new approach to portable, resilient applications. The GVR approach builds on a global view data model, adding versioning (multi-version), user control of timing and rate (multi-stream), and flexible cross-layer error signalling and recovery. With a versioned array as a portable abstraction, GVR enables application programmers to exploit deep scientific and application code insights to manage resilience (and its overhead) in a flexible, portable fashion.
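The versioned-array abstraction can be caricatured in a few lines (a toy sketch only; GVR's actual API, which targets distributed global arrays at scale, is not shown here):

```python
class VersionedArray:
    """Toy multi-version array: snapshots are immutable restore points,
    loosely mimicking a GVR-style versioned global array."""

    def __init__(self, data):
        self.data = list(data)
        self._versions = []

    def snapshot(self):
        """Persist the current state; returns its version number."""
        self._versions.append(tuple(self.data))
        return len(self._versions) - 1

    def restore(self, version):
        """Roll the live data back to an earlier version (error recovery)."""
        self.data = list(self._versions[version])
```

The point of the abstraction is that the application, not the runtime, chooses when to snapshot and which version to restore after an error is signalled.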
Label-free high-throughput imaging flow cytometry
NASA Astrophysics Data System (ADS)
Mahjoubfar, A.; Chen, C.; Niazi, K. R.; Rabizadeh, S.; Jalali, B.
2014-03-01
Flow cytometry is an optical method for studying cells based on their individual physical and chemical characteristics. It is widely used in clinical diagnosis, medical research, and biotechnology for analysis of blood cells and other cells in suspension. Conventional flow cytometers aim a laser beam at a stream of cells and measure the elastic scattering of light at forward and side angles. They also perform single-point measurements of fluorescent emissions from labeled cells. However, many reagents used in cell labeling reduce cellular viability or change the behavior of the target cells through the activation of undesired cellular processes or inhibition of normal cellular activity. Therefore, labeled cells are not completely representative of their unaltered form nor are they fully reliable for downstream studies. To remove the requirement of cell labeling in flow cytometry, while still meeting the classification sensitivity and specificity goals, measurement of additional biophysical parameters is essential. Here, we introduce an interferometric imaging flow cytometer based on the world's fastest continuous-time camera. Our system simultaneously measures cellular size, scattering, and protein concentration as supplementary biophysical parameters for label-free cell classification. It exploits the wide bandwidth of ultrafast laser pulses to perform blur-free quantitative phase and intensity imaging at flow speeds as high as 10 meters per second and achieves nanometer-scale optical path length resolution for precise measurements of cellular protein concentration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang Sai, E-mail: liangsai09@gmail.com; Zhang Tianzhu, E-mail: zhangtz@mail.tsinghua.edu.cn
Highlights: Impacts of solid waste recycling on Suzhou's urban metabolism in 2015 are analyzed. Sludge recycling for biogas is regarded as an accepted method. Technical levels of reusing scrap tires and food wastes should be improved. Other fly ash utilization methods should be exploited. Secondary wastes from reusing food wastes and sludge deserve attention. Abstract: Investigating the impacts of urban solid waste recycling on urban metabolism contributes to sustainable urban solid waste management and urban sustainability. Using a physical input-output model and scenario analysis, the urban metabolism of Suzhou in 2015 is predicted and the impacts of four categories of solid waste recycling on urban metabolism are illustrated: scrap tire recycling, food waste recycling, fly ash recycling and sludge recycling. Sludge recycling has positive effects on reducing all material flows; thus, sludge recycling for biogas is regarded as an accepted method. Moreover, the technical levels of scrap tire recycling and food waste recycling should be improved to produce positive effects on reducing more material flows. Fly ash recycling for cement production has negative effects on reducing all material flows except solid wastes; thus, other fly ash utilization methods should be exploited. In addition, the utilization and treatment of secondary wastes from food waste recycling and sludge recycling deserve attention.
Nolan, John P.; Mandy, Francis
2008-01-01
While the term flow cytometry refers to the measurement of cells, the approach of making sensitive multiparameter optical measurements in a flowing sample stream is a very general analytical one. The past few years have seen an explosion in the application of flow cytometry technology to molecular analysis and measurements using micro-particles as solid supports. While microsphere-based molecular analyses using flow cytometry date back three decades, the need for highly parallel quantitative molecular measurements that has arisen from various genomic and proteomic advances has driven developments in particle encoding technology to enable highly multiplexed assays. Multiplexed particle-based immunoassays are now commonplace, and new assays are emerging to study genes, protein function, and molecular assembly. Numerous efforts are underway to extend the multiplexing capabilities of microparticle-based assays through new approaches to particle encoding and analyte reporting. The impact of these developments will be seen in basic research and clinical laboratories, as well as in drug development. PMID:16604537
CATS - A process-based model for turbulent turbidite systems at the reservoir scale
NASA Astrophysics Data System (ADS)
Teles, Vanessa; Chauveau, Benoît; Joseph, Philippe; Weill, Pierre; Maktouf, Fakher
2016-09-01
The Cellular Automata for Turbidite systems (CATS) model is intended to simulate the fine architecture and facies distribution of turbidite reservoirs with a multi-event and process-based approach. The main processes of low-density turbulent turbidity flow are modeled: downslope sediment-laden flow, entrainment of ambient water, erosion and deposition of several distinct lithologies. This numerical model, derived from (Salles, 2006; Salles et al., 2007), proposes a new approach based on the Rouse concentration profile to consider the flow capacity to carry the sediment load in suspension. In CATS, the flow distribution on a given topography is modeled with local rules between neighboring cells (cellular automata) based on potential and kinetic energy balance and diffusion concepts. Input parameters are the initial flow parameters and a 3D topography at depositional time. An overview of CATS capabilities in different contexts is presented and discussed.
A knowledge-based approach to automated flow-field zoning for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1989-01-01
An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.
Lima, Manoel J A; Fernandes, Ridvan N; Tanaka, Auro A; Reis, Boaventura F
2016-02-01
This paper describes a new technique for the determination of captopril in pharmaceutical formulations, implemented by employing multicommuted flow analysis. The analytical procedure was based on the reaction between hypochlorite and captopril. The remaining hypochlorite oxidized luminol that generated electromagnetic radiation detected using a homemade luminometer. To the best of our knowledge, this is the first time that this reaction has been exploited for the determination of captopril in pharmaceutical products, offering a clean analytical procedure with minimal reagent usage. The effectiveness of the proposed procedure was confirmed by analyzing a set of pharmaceutical formulations. Application of the paired t-test showed that there was no significant difference between the data sets at a 95% confidence level. The useful features of the new analytical procedure included a linear response for captopril concentrations in the range 20.0-150.0 µmol/L (r = 0.997), a limit of detection (3σ) of 2.0 µmol/L, a sample throughput of 164 determinations per hour, reagent consumption of 9 µg luminol and 42 µg hypochlorite per determination and generation of 0.63 mL of waste. A relative standard deviation of 1% (n = 6) for a standard solution containing 80 µmol/L captopril was also obtained. Copyright © 2015 John Wiley & Sons, Ltd.
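The figures of merit quoted above (linear range, 3σ detection limit) follow from a standard calibration computation; a generic sketch with made-up numbers, not the paper's data:

```python
import numpy as np

def fit_calibration(conc, signal):
    """Least-squares calibration line; returns (slope, intercept)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    return slope, intercept

def detection_limit(blank_sd, slope):
    """3-sigma limit of detection: LOD = 3 * s_blank / slope."""
    return 3.0 * blank_sd / slope

# Hypothetical calibration points (umol/L vs instrument counts):
slope, intercept = fit_calibration([20, 50, 100, 150], [45, 105, 205, 305])
print(detection_limit(1.0, slope))
```

The blank standard deviation here stands in for the noise of repeated blank measurements; the steeper the calibration slope, the lower the concentration that can be distinguished from the blank.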
NASA Astrophysics Data System (ADS)
Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias
2018-04-01
A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This allows to formulate the projections between the Lagrangian particle space and the Eulerian finite element space in terms of local (i.e. cellwise) ℓ2-projections efficiently. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which excellently approach the incompressibility constraint in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward facing step and for the flow around a cylinder.
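The Lagrangian advection sub-step at the heart of such particle-mesh schemes can be sketched with an explicit midpoint rule (a generic illustration of our own, not the paper's HDG machinery):

```python
import numpy as np

def advect(positions, dt, velocity):
    """One explicit-midpoint (RK2) step moving particles through a
    velocity field given as a function: positions -> velocities."""
    midpoint = positions + 0.5 * dt * velocity(positions)
    return positions + dt * velocity(midpoint)
```

For a divergence-free field such as rigid rotation, particles stay on their streamlines to second order in the step size, which is why such schemes keep the particle distribution well behaved over many steps.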
DNA Detection by Flow Cytometry using PNA-Modified Metal-Organic Framework Particles.
Mejia-Ariza, Raquel; Rosselli, Jessica; Breukers, Christian; Manicardi, Alex; Terstappen, Leon W M M; Corradini, Roberto; Huskens, Jurriaan
2017-03-23
A DNA-sensing platform is developed by exploiting the easy surface functionalization of metal-organic framework (MOF) particles and their highly parallelized fluorescence detection by flow cytometry. Two strategies were employed to functionalize the surface of MIL-88A, using either covalent or non-covalent interactions, resulting in alkyne-modified and biotin-modified MIL-88A, respectively. Covalent surface coupling of an azide-dye and the alkyne-MIL-88A was achieved by means of a click reaction. Non-covalent streptavidin-biotin interactions were employed to link biotin-PNA to biotin-MIL-88A particles mediated by streptavidin. Characterization by confocal imaging and flow cytometry demonstrated that DNA can be bound selectively to the MOF surface. Flow cytometry provided quantitative data of the interaction with DNA. Making use of the large numbers of particles that can be simultaneously processed by flow cytometry, this MOF platform was able to discriminate between fully complementary, single-base mismatched, and randomized DNA targets. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Fully Exploiting The Potential Of The Periodic Table Through Pattern Recognition.
ERIC Educational Resources Information Center
Schultz, Emeric
2005-01-01
An approach to learning chemical facts that starts with the periodic table and depends primarily on recognizing and completing patterns and following a few simple rules is described. This approach exploits the exceptions that arise and uses them as opportunities for further concept development.
NASA Astrophysics Data System (ADS)
Santos, Ana Clara; Schaefli, Bettina; Manso, Pedro; Schleiss, Anton; Portela, Maria Manuela; Rinaldo, Andrea
2015-04-01
In its Energy Strategy 2050, Switzerland is revising its energy perspectives with a strong focus on renewable sources of energy and in particular hydropower. In this context, the Swiss Government funded a number of competence centers for energy research (SCCERs), including one on the Supply of Energy (SCCER-SoE), which develops fundamental research and innovative solutions in geoenergies and hydropower . Hydropower is already the major energy source in Switzerland, corresponding to approximately 55% of the total national electricity production (which was 69 TWh in 2014). The Energy Strategy 2050 foresees at least a net increase by 1.53 TWh/year in average hydrological conditions, in a context were almost all major river systems are already exploited and a straightforward application of recent environmental laws will impact (reduce) current hydropower production. In this contribution, we present the roadmap of the SCCER-SoE and an overview of our strategy to unravel currently non-exploited hydropower potential, in particular in river systems that are already used for hydropower production. The aim is hereby to quantify non-exploited natural flows, unnecessary water spills or storage volume deficits, whilst considering non-conventional approaches to water resources valuation and management. Such a better understanding of the current potential is paramount to justify future scenarios of adaptation of the existing hydropower infrastructure combining the increase of storage capacity with new connections between existing reservoirs, heightening or strengthening existing dams, increasing the operational volume of natural lakes (including new glacier lakes), or by building new dams. Tapping hidden potential shall also require operational changes to benefit from new flow patterns emerging under an evolving climate and in particular in the context of the ongoing glacier retreat. 
The paper presents a broad view of these issues and first conclusions from ongoing research at the country scale.
Frohnauer, N.K.; Pierce, C.L.; Kallemeyn, L.W.
2007-01-01
The genetically unique population of muskellunge Esox masquinongy inhabiting Shoepack Lake in Voyageurs National Park, Minnesota, is potentially at risk for loss of genetic variability and long-term viability. Shoepack Lake has been subject to dramatic surface area changes from the construction of an outlet dam by beavers Castor canadensis and its subsequent failure. We simulated the long-term dynamics of this population in response to recruitment variation, increased exploitation, and reduced habitat area. We then estimated the effective population size of the simulated population and evaluated potential threats to long-term viability, based on which we recommend management actions to help preserve the long-term viability of the population. Simulations based on the population size and habitat area at the beginning of a companion study resulted in an effective population size that was generally above the threshold level for risk of loss of genetic variability, except when fishing mortality was increased. Simulations based on the reduced habitat area after the beaver dam failure and our assumption of a proportional reduction in population size resulted in an effective population size that was generally below the threshold level for risk of loss of genetic variability. Our results identified two potential threats to the long-term viability of the Shoepack Lake muskellunge population: reduction in habitat area and exploitation. Increased exploitation can be prevented through traditional fishery management approaches such as the adoption of no-kill, barbless hook, and limited entry regulations. Maintenance of the greatest possible habitat area and prevention of future habitat area reductions will require maintenance of the outlet dam built by beavers. Our study should enhance the long-term viability of the Shoepack Lake muskellunge population and illustrates a useful approach for other unique populations. © Copyright by the American Fisheries Society 2007.
Computed Flow Through An Artificial Heart And Valve
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Kwak, Dochan; Kiris, Cetin; Chang, I-Dee
1994-01-01
This NASA technical memorandum discusses computations of the flow of blood through an artificial heart and through a tilting-disk artificial heart valve. It represents further progress in the research described in "Numerical Simulation of Flow Through an Artificial Heart" (ARC-12478). One purpose of the research is to exploit advanced techniques of computational fluid dynamics and the capabilities of supercomputers to gain understanding of the complicated internal flows of viscous, essentially incompressible fluids like blood. Another is to use this understanding to design better artificial hearts and valves.
Inertioelastic Flow Instability at a Stagnation Point
NASA Astrophysics Data System (ADS)
Burshtein, Noa; Zografos, Konstantinos; Shen, Amy Q.; Poole, Robert J.; Haward, Simon J.
2017-10-01
A number of important industrial applications exploit the ability of small quantities of high molecular weight polymer to suppress instabilities that arise in the equivalent flow of Newtonian fluids, a particular example being turbulent drag reduction. However, it can be extremely difficult to probe exactly how the polymer acts to, e.g., modify the streamwise near-wall eddies in a fully turbulent flow. Using a novel cross-slot flow configuration, we exploit a flow instability in order to create and study a single steady-state streamwise vortex. By quantitative experiment, we show how the addition of small quantities (parts per million) of a flexible polymer to a Newtonian solvent dramatically affects both the onset conditions for this instability and the subsequent growth of the axial vorticity. Complementary numerical simulations with a finitely extensible nonlinear elastic dumbbell model show that these modifications are due to the growth of polymeric stress within specific regions of the flow domain. Our data fill a significant gap in the literature between the previously reported purely inertial and purely elastic flow regimes and provide a link between the two by showing how the instability mode is transformed as the fluid elasticity is varied. Our results and novel methods are relevant to understanding the mechanisms underlying industrial uses of weakly elastic fluids and also to understanding inertioelastic instabilities in more confined flows through channels with intersections and stagnation points.
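The finitely extensible nonlinear elastic (FENE) dumbbell model invoked in the simulations caps polymer extension with a spring force that diverges at a maximum length. A minimal sketch of that spring law, with illustrative coefficients `H` and `L_max` not taken from the paper:

```python
def fene_spring_force(r, H=1.0, L_max=10.0):
    """FENE spring force magnitude for extension r: F = H*r / (1 - (r/L_max)**2).

    The force diverges as r approaches L_max, enforcing finite extensibility
    of the polymer dumbbell (a Hookean spring, by contrast, extends without
    bound). Coefficients here are illustrative only.
    """
    if r >= L_max:
        raise ValueError("extension exceeds maximum contour length")
    return H * r / (1.0 - (r / L_max) ** 2)

# Near zero extension the force reduces to Hooke's law F ~ H*r;
# near L_max it stiffens sharply, which is what generates the large
# polymeric stresses in localized flow regions.
```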
Landers, Mark N.; Ankcorn, Paul D.
2008-01-01
The influence of onsite septic wastewater-treatment systems (OWTS) on base-flow quantity needs to be understood to evaluate consumptive use of surface-water resources by OWTS. If the influence of OWTS on stream base flow can be measured and if the inflow to OWTS is known from water-use data, then water-budget approaches can be used to evaluate consumptive use. This report presents a method to evaluate the influence of OWTS on ground-water recharge and base-flow quantity. Base flow was measured in Gwinnett County, Georgia, during an extreme drought in October 2007 in 12 watersheds that have low densities of OWTS (22 to 96 per square mile) and 12 watersheds that have high densities (229 to 965 per square mile) of OWTS. Mean base-flow yield in the high-density OWTS watersheds is 90 percent greater than in the low-density OWTS watersheds. The density of OWTS is statistically significant (p-value less than 0.01) in relation to base-flow yield as well as specific conductance. Specific conductance of base flow increases with OWTS density, which may indicate influence from treated wastewater. The study results indicate considerable unexplained variation in measured base-flow yield for reasons that may include: unmeasured processes, a limited dataset, and measurement errors. Ground-water recharge from a high density of OWTS is assumed to be steady state from year to year so that the annual amount of increase in base flow from OWTS is expected to be constant. In dry years, however, OWTS contributions represent a larger percentage of natural base flow than in wet years. The approach of this study could be combined with water-use data and analyses to estimate consumptive use of OWTS.
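The reported comparison of base-flow yield between high- and low-density OWTS watersheds can be sketched with a two-sample statistic; the yield values below are hypothetical placeholders, not the study's measurements:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical base-flow yields, (ft3/s)/mi2 -- NOT the study's data:
high_density = [0.42, 0.51, 0.38, 0.47, 0.55, 0.44]
low_density = [0.20, 0.28, 0.25, 0.22, 0.31, 0.24]
t = welch_t(high_density, low_density)  # large positive t: higher mean yield
```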
Boundary layer separation and reattachment detection on airfoils by thermal flow sensors.
Sturm, Hannes; Dumstorff, Gerrit; Busche, Peter; Westermann, Dieter; Lang, Walter
2012-10-24
A sensor concept for the detection of boundary layer separation (flow separation, stall) and reattachment on airfoils is introduced in this paper. Boundary layer separation and reattachment are fluid-mechanical phenomena characterized by the extinction and even inversion of the flow velocity on an overflowed surface. The flow sensor used in this work measures the flow velocity in terms of direction and magnitude at the sensor's position and is expected to detect these specific flow conditions. Therefore, an array of thermal flow sensors has been integrated (flush-mounted) on an airfoil and placed in a wind tunnel for measurement. Sensor signals were recorded at different wind speeds and angles of attack for different positions on the airfoil. The sensors used here are based on the change of the temperature distribution on a membrane (calorimetric principle). Thermopiles are used as temperature sensors in this approach, offering a baseline-free sensor signal, which is favorable for measurements at zero flow. Measurement results show clear separation points (zero flow) and even negative flow values (back flow) for all sensor positions. In addition to standard silicon-based flow sensors, a polymer-based flexible approach has been tested, showing similar results.
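The calorimetric read-out principle can be sketched as follows: a heater sits between two thermopiles, flow skews the membrane temperature field, and the signed thermopile differential encodes direction and magnitude. The calibration constant `k` below is made up for illustration:

```python
def flow_from_thermopiles(v_up, v_down, k=2.5):
    """Toy calorimetric read-out for a membrane flow sensor.

    Flow skews the temperature distribution so the downstream thermopile
    reads warmer than the upstream one. The differential is baseline-free
    (zero at zero flow), its sign gives the flow direction (negative = back
    flow), and a hypothetical calibration constant k maps it to velocity.
    A zero differential marks the separation point described in the paper.
    """
    differential = v_down - v_up
    return k * differential  # signed velocity estimate

assert flow_from_thermopiles(1.0, 1.0) == 0.0  # separation point: zero flow
assert flow_from_thermopiles(1.2, 1.0) < 0     # reversed (back) flow
```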
Digital Analysis and Sorting of Fluorescence Lifetime by Flow Cytometry
Houston, Jessica P.; Naivar, Mark A.; Freyer, James P.
2010-01-01
Frequency-domain flow cytometry techniques are combined with modifications to the digital signal processing capabilities of the Open Reconfigurable Cytometric Acquisition System (ORCAS) to analyze fluorescence decay lifetimes and control sorting. Real-time fluorescence lifetime analysis is accomplished by rapidly digitizing correlated, radiofrequency modulated detector signals, implementing Fourier analysis programming with ORCAS’ digital signal processor (DSP) and converting the processed data into standard cytometric list mode data. To systematically test the capabilities of the ORCAS 50 MS/sec analog-to-digital converter (ADC) and our DSP programming, an error analysis was performed using simulated light scatter and fluorescence waveforms (0.5–25 ns simulated lifetime), pulse widths ranging from 2 to 15 µs, and modulation frequencies from 2.5 to 16.667 MHz. The standard deviations of digitally acquired lifetime values ranged from 0.112 to >2 ns, corresponding to errors in actual phase shifts from 0.0142° to 1.6°. The lowest coefficients of variation (<1%) were found for 10-MHz modulated waveforms having pulse widths of 6 µs and simulated lifetimes of 4 ns. Direct comparison of the digital analysis system to a previous analog phase-sensitive flow cytometer demonstrated similar precision and accuracy on measurements of a range of fluorescent microspheres, unstained cells and cells stained with three common fluorophores. Sorting based on fluorescence lifetime was accomplished by adding analog outputs to ORCAS and interfacing with a commercial cell sorter with a radiofrequency modulated solid-state laser. Two populations of fluorescent microspheres with overlapping fluorescence intensities but different lifetimes (2 and 7 ns) were separated to ~98% purity. Overall, the digital signal acquisition and processing methods we introduce present a simple yet robust approach to phase-sensitive measurements in flow cytometry. 
The ability to simply and inexpensively implement this system on a commercial flow sorter will both allow better dissemination of this technology and better exploit the traditionally underutilized parameter of fluorescence lifetime. PMID:20662090
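The frequency-domain relation underlying such lifetime measurements is tau = tan(phi) / (2*pi*f_mod). A sketch using the 50 MS/s sampling rate, 10 MHz modulation and 4 ns lifetime quoted in the abstract; the single-bin DFT here is illustrative, not the ORCAS DSP code:

```python
import numpy as np

def phase_at(signal, fs, f_mod):
    """Phase of a digitized waveform at the modulation frequency via a
    single-bin DFT (illustrative; not the ORCAS implementation)."""
    t = np.arange(len(signal)) / fs
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * f_mod * t)))

def lifetime_from_phase(phase_shift_rad, f_mod):
    """Single-exponential lifetime from the frequency-domain phase shift:
    tau = tan(phi) / (2*pi*f_mod)."""
    return np.tan(phase_shift_rad) / (2 * np.pi * f_mod)

fs, f_mod, tau = 50e6, 10e6, 4e-9       # rates/values quoted in the abstract
n = 300                                 # an exact number of modulation periods
t = np.arange(n) / fs
excitation = np.cos(2 * np.pi * f_mod * t)
phi_true = np.arctan(2 * np.pi * f_mod * tau)
emission = np.cos(2 * np.pi * f_mod * t - phi_true)  # emission lags excitation
shift = phase_at(excitation, fs, f_mod) - phase_at(emission, fs, f_mod)
# lifetime_from_phase(shift, f_mod) recovers the simulated ~4 ns lifetime
```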
An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows
NASA Astrophysics Data System (ADS)
Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard
2018-06-01
In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative testcases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.
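The structural advantage described above can be sketched as solving a block-diagonal system block by block: one small dense solve for the coupled momentum/total-energy sub-system plus repeated scalar solves for the decoupled mass-species and electronic-vibrational energy equations. Block sizes below are illustrative, not taken from the thermophysical model:

```python
import numpy as np

def solve_block_diagonal(blocks, segments):
    """Solve a block-diagonal linear system one block at a time.

    Stands in for the decoupling exploited by the transformed advective
    Jacobian: a dense scheme over the full state is replaced by a small
    dense solve plus scalar solves. Sizes here are illustrative.
    """
    return [np.linalg.solve(A, b) for A, b in zip(blocks, segments)]

rng = np.random.default_rng(0)
momentum_energy = rng.normal(size=(4, 4)) + 4 * np.eye(4)  # coupled sub-system
blocks = [momentum_energy] + [np.array([[2.0]])] * 3       # 3 decoupled scalars
segments = [rng.normal(size=4)] + [np.array([4.0])] * 3
x = solve_block_diagonal(blocks, segments)
```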
Fifth Graders' Flow Experience in a Digital Game-Based Science Learning Environment
ERIC Educational Resources Information Center
Zheng, Meixun; Spires, Hiller A.
2014-01-01
This mixed methods study examined 73 5th graders' flow experience in a game-based science learning environment using two gameplay approaches (solo and collaborative gameplay). Both survey and focus group interview findings revealed that students had high flow experience; however, there were no flow experience differences that were contingent upon…
NASA Astrophysics Data System (ADS)
Clementina Caputo, Maria; Masciale, Rita; Masciopinto, Costantino; De Carlo, Lorenzo
2016-04-01
The high cost and scarcity of fossil fuels have promoted the increased use of natural heat for a number of direct applications. Just as for fossil fuels, the exploitation of geothermal energy should consider its environmental impact and sustainability. Particular attention is deserved by the so-called open-loop geothermal groundwater heat pump (GWHP) system, which uses groundwater as the geothermal fluid. From an economic point of view, the implementation of this kind of geothermal system is particularly attractive in coastal areas, which generally have shallow aquifers. However, the potential problem of seawater intrusion has led to laws that restrict the use of groundwater, and the resulting scarcity of freshwater could be a major impediment to the utilization of geothermal resources. In this study, a new methodology is proposed, based on an experimental approach to characterize a coastal area in order to exploit the low-enthalpy geothermal resource. The coastal karst and fractured aquifer near Bari, in Southern Italy, was selected for this purpose. To investigate the influence of an open-loop GWHP system on seawater intrusion, a long-term pumping test was performed. The test simulated the effects of a prolonged withdrawal on the chemical-physical groundwater characteristics of the studied aquifer portion. The test was programmed for a duration of 16 days and was performed at a constant pumping flow rate of 50 m3/h. The extracted water was discharged into an adjacent artificial channel by means of a piping system. Water depth, temperature and electrical conductivity of the pumped water were monitored for 37 days, including some days before and after the pumping period.
The monitored parameters, collected in the pumping well and in five observation wells placed 160 m down-gradient with respect to the groundwater flow direction, were used to estimate different scenarios of the impact of the GWHP system on seawater intrusion by means of a numerical model. Model flow simulations were carried out under transient flow conditions in order to determine perturbations of the saline front in the Bari fractured aquifer caused by the long-term pumping at 50 m3/h.
Rapid Prototyping of High Performance Signal Processing Applications
NASA Astrophysics Data System (ADS)
Sane, Nimish
Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. 
We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). 
We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
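The Sherman-Morrison-Woodbury identity exploited here lets one apply the inverse of a matrix plus a low-rank correction using only solves with the easy part and a small k x k system. A minimal dense sketch of the identity itself (the paper's distributed and Lanczos machinery is not reproduced):

```python
import numpy as np

def smw_solve(A, U, V, b):
    """Apply (A + U V^T)^{-1} b via Sherman-Morrison-Woodbury:

        (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}

    Only solves with A and one small k x k "capacitance" system are needed,
    which is what makes low-rank corrections cheap to apply.
    """
    Ainv_b = np.linalg.solve(A, b)
    Ainv_U = np.linalg.solve(A, U)
    k = U.shape[1]
    small = np.eye(k) + V.T @ Ainv_U
    return Ainv_b - Ainv_U @ np.linalg.solve(small, V.T @ Ainv_b)

rng = np.random.default_rng(1)
n, k = 8, 2
A = np.diag(rng.uniform(1, 2, n))      # easy-to-invert part (toy example)
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))
b = rng.normal(size=n)
x = smw_solve(A, U, V, b)              # x solves (A + U V^T) x = b
```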
Exploiting CRISPR/Cas: Interference Mechanisms and Applications
Richter, Hagen; Randau, Lennart; Plagens, André
2013-01-01
The discovery of biological concepts can often provide a framework for the development of novel molecular tools, which can help us to further understand and manipulate life. One recent example is the elucidation of the prokaryotic adaptive immune system, clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) that protects bacteria and archaea against viruses or conjugative plasmids. The immunity is based on small RNA molecules that are incorporated into versatile multi-domain proteins or protein complexes and specifically target viral nucleic acids via base complementarity. CRISPR/Cas interference machines are utilized to develop novel genome editing tools for different organisms. Here, we will review the latest progress in the elucidation and application of prokaryotic CRISPR/Cas systems and discuss possible future approaches to exploit the potential of these interference machineries. PMID:23857052
Exploiting Motion Capture to Enhance Avoidance Behaviour in Games
NASA Astrophysics Data System (ADS)
van Basten, Ben J. H.; Jansen, Sander E. M.; Karamouzas, Ioannis
Realistic simulation of interacting virtual characters is essential in computer games, training and simulation applications. The problem is very challenging since people are accustomed to real-world situations and thus, they can easily detect inconsistencies and artifacts in the simulations. Over the past twenty years several models have been proposed for simulating individuals, groups and crowds of characters. However, little effort has been made to actually understand how humans solve interactions and avoid inter-collisions in real-life. In this paper, we exploit motion capture data to gain more insights into human-human interactions. We propose four measures to describe the collision-avoidance behavior. Based on these measures, we extract simple rules that can be applied on top of existing agent and force based approaches, increasing the realism of the resulting simulations.
NASA Astrophysics Data System (ADS)
Tain, Rong-Wen; Alperin, Noam
2008-03-01
Intracranial compliance (ICC) determines the ability of the intracranial space to accommodate an increase in volume (e.g., brain swelling) without a large increase in intracranial pressure (ICP). Therefore, measurement of ICC is potentially important for diagnosis and guiding treatment of related neurological problems. A modeling-based approach uses an assumed lumped-parameter model of the craniospinal system (CSS) (e.g., an RLC circuit), with either the arterial or the net transcranial blood flow (arterial inflow minus venous outflow) as input and the cranio-spinal cerebrospinal fluid (CSF) flow as output. The phase difference between the output and input is then often used as a measure of ICC. However, it is not clear whether there is a predetermined relationship between ICC and the phase difference between these waveforms. A different approach for estimation of ICC has recently been proposed. This approach estimates ICC from the ratio of the intracranial volume and pressure changes that occur naturally with each heartbeat. The current study evaluates the sensitivity of the phase-based and the direct approach to changes in ICC. An RLC circuit model of the cranio-spinal system is used to simulate the cranio-spinal CSF flow for 3 different ICC states using the transcranial blood flows measured by MRI phase contrast from healthy human subjects. The effect of the increase in ICC on the magnitude and phase response is calculated from the system's transfer function. We observed that within the heart rate frequency range, changes in ICC predominantly affected the amplitude of CSF pulsation and less so the phases. The compliance is then obtained for the different ICC states using the direct approach. The measures of compliance calculated using the direct approach demonstrated the highest sensitivity to changes in ICC. This work explains why the phase-shift-based measure of ICC is less sensitive than amplitude-based measures such as the direct approach.
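The kind of lumped-parameter analysis described above evaluates the magnitude and phase response of an RLC model across frequencies. A minimal sketch, with a series RLC impedance standing in for the craniospinal transfer function and the capacitance playing the role of compliance; all component values are hypothetical, not fitted to MRI data:

```python
import numpy as np

def rlc_impedance(f, R, L, C):
    """Impedance of a series RLC circuit at frequency f (Hz):
    Z = R + j*w*L + 1/(j*w*C). C stands in for intracranial compliance;
    the values used below are hypothetical."""
    w = 2 * np.pi * f
    return R + 1j * w * L + 1.0 / (1j * w * C)

f_heart = 1.2          # ~72 beats/min expressed in Hz
R, L = 1.0, 0.05
# Amplitude and phase response at the heart-rate frequency for three
# compliance states -- the kind of comparison the study performs:
responses = {C: (abs(rlc_impedance(f_heart, R, L, C)),
                 np.angle(rlc_impedance(f_heart, R, L, C)))
             for C in (0.5, 1.0, 2.0)}
```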
Biomedical wellness monitoring system based upon molecular markers
NASA Astrophysics Data System (ADS)
Ingram, Whitney
2012-06-01
We wish to assist caretakers with a sensor monitoring system for tracking the physiological changes of home-alone patients. One goal is to exploit biomarkers and modern imaging sensors such as stochastic optical reconstruction microscopy (STORM), which has achieved visible-light imaging at the nanoscale. Imaging techniques like STORM can be combined with a fluorescent functional marker in a system to capture the early signs of transformation from wellness to illness. By exploiting both microscopic knowledge of genetic predisposition and the macroscopic influence of epigenetic factors, we hope to target these changes remotely. We adopt dual-spectral infrared imaging for blind source separation (BSS) to detect angiogenesis changes and use laser speckle imaging for hypertension blood-flow monitoring. Our design hypothesis for the monitoring system is guided by the user-friendly, veteran-preferred "4-Non" principles (non-invasive, non-contact, non-tethered, non-stop-to-measure) and by the NIH's "4P" initiatives (predictive, personalized, preemptive, and participatory). We augment the potential storage system with recent know-how in video Compressive Sampling (CSp) from surveillance cameras. In CSp only major changes are saved, which reduces the manpower cost of caretakers and medical analysts. This CSp algorithm is based on smart associative memory (AM) matrix storage: change features and detailed scenes are written by the outer product and read by the inner product, without the usual hash index for image searching. From this approach, we attempt to design an effective household monitoring approach to save healthcare costs and maintain the quality of life of seniors.
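The associative-memory storage scheme mentioned above (write by outer product, read by inner product) can be sketched in a few lines; the keys and values below are toy data, and exact recall holds only when the stored keys are orthonormal:

```python
import numpy as np

def am_write(keys, values):
    """Write key->value pairs into one associative-memory matrix as a sum
    of outer products, the storage scheme sketched in the abstract."""
    M = np.zeros((values.shape[1], keys.shape[1]))
    for k, v in zip(keys, values):
        M += np.outer(v, k)
    return M

def am_read(M, key):
    """Recall by inner product: M @ key. Exact when stored keys are
    orthonormal; otherwise recall is approximate (crosstalk)."""
    return M @ key

keys = np.eye(3)                                   # orthonormal "change feature" keys (toy)
values = np.array([[1., 0.], [0., 1.], [1., 1.]])  # associated scene codes (toy)
M = am_write(keys, values)
recalled = am_read(M, keys[2])                     # recovers values[2]
```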
Learning of Rule Ensembles for Multiple Attribute Ranking Problems
NASA Astrophysics Data System (ADS)
Dembczyński, Krzysztof; Kotłowski, Wojciech; Słowiński, Roman; Szeląg, Marcin
In this paper, we consider the multiple attribute ranking problem from a Machine Learning perspective. We propose two approaches to statistical learning of an ensemble of decision rules from decision examples provided by the Decision Maker in terms of pairwise comparisons of some objects. The first approach consists in learning a preference function defining a binary preference relation for a pair of objects. The result of application of this function on all pairs of objects to be ranked is then exploited using the Net Flow Score procedure, giving a linear ranking of objects. The second approach consists in learning a utility function for single objects. The utility function also gives a linear ranking of objects. In both approaches, the learning is based on the boosting technique. The presented approaches to Preference Learning share good properties of the decision rule preference model and have good performance in the massive-data learning problems. As Preference Learning and Multiple Attribute Decision Aiding share many concepts and methodological issues, in the introduction, we review some aspects bridging these two fields. To illustrate the two approaches proposed in this paper, we solve with them a toy example concerning the ranking of a set of cars evaluated by multiple attributes. Then, we perform a large data experiment on real data sets. The first data set concerns credit rating. Since recent research in the field of Preference Learning is motivated by the increasing role of modeling preferences in recommender systems and information retrieval, we chose two other massive data sets from this area - one comes from movie recommender system MovieLens, and the other concerns ranking of text documents from 20 Newsgroups data set.
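The Net Flow Score procedure used in the first approach can be sketched directly: each object scores the sum, over all other objects, of how strongly it is preferred minus how strongly it is dispreferred, and objects are ranked by descending score. The preference function below is a toy stand-in for the learned rule ensemble:

```python
def net_flow_scores(objects, pref):
    """Rank objects by Net Flow Score: for each a, sum over b != a of
    pref(a, b) - pref(b, a), where pref is a (learned) preference function
    giving the strength of 'a preferred over b'."""
    scores = {}
    for a in objects:
        scores[a] = sum(pref(a, b) - pref(b, a) for b in objects if b != a)
    return sorted(objects, key=lambda o: scores[o], reverse=True)

# Toy preference function (hypothetical): larger numbers are preferred.
pref = lambda a, b: 1.0 if a > b else 0.0
ranking = net_flow_scores([3, 1, 2], pref)  # linear ranking of the objects
```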
Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data
NASA Astrophysics Data System (ADS)
Kim, Hea-Suk; Moon, Yang-Sae
Privacy-preserving clustering (PPC in short) is important in publishing sensitive time-series data. Previous PPC solutions, however, have a problem of not preserving distance orders or incurring privacy breach. To solve this problem, we propose a new PPC approach that exploits Fourier magnitudes of time-series. Our magnitude-based method does not cause privacy breach even though its techniques or related parameters are publicly revealed. Using magnitudes only, however, incurs the distance order problem, and we thus present magnitude selection strategies to preserve as many Euclidean distance orders as possible. Through extensive experiments, we showcase the superiority of our magnitude-based approach.
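The core publishing step can be sketched as follows: release only the magnitudes of selected DFT coefficients, so the raw series cannot be reconstructed (phases are withheld), and compute distances on the published magnitudes. This sketch omits the paper's magnitude selection strategies; the series below are synthetic:

```python
import numpy as np

def fourier_magnitudes(series, n_coeffs):
    """Publish only the magnitudes of the first n_coeffs DFT coefficients.
    Dropping the phases prevents reconstruction of the raw series, at the
    cost of distorting some Euclidean distance orders -- the trade-off the
    paper's selection strategies address."""
    return np.abs(np.fft.rfft(series))[:n_coeffs]

rng = np.random.default_rng(2)
x = rng.normal(size=64)
y = rng.normal(size=64)
dx = np.linalg.norm(fourier_magnitudes(x, 8) - fourier_magnitudes(y, 8))
# dx is a distance computed purely from the published magnitudes
```

Note that magnitude spectra are invariant under circular shifts of the series, one concrete sense in which phase information (and hence the raw data) is hidden.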
Robust Inference of Risks of Large Portfolios
Fan, Jianqing; Han, Fang; Liu, Han; Vickers, Byron
2016-01-01
We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB procedure (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data, which are stylized features in financial returns. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over H-CLUB. We further provide thorough numerical results to back up the developed theory, and also apply the proposed method to analyze a stock market dataset. PMID:27818569
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required fewer experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.
Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited.
This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes. - Highlights: • Proposed a physics-informed framework to quantify uncertainty in RANS simulations. • Framework incorporates physical prior knowledge and observation data. • Based on a rigorous Bayesian framework yet fully utilizes the physical model. • Applicable to many complex physical systems beyond turbulent flows.
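A minimal sketch of the analysis step of a stochastic ensemble Kalman method, on a scalar toy state with a direct observation; the paper's iterative, field-valued RANS setting is far richer, and all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior ensemble: scalar state, mean ~2.0, std ~1.0.
N = 500
ens = 2.0 + 1.0 * rng.standard_normal(N)

y, R = 4.0, 0.25           # observation and its error variance
H = lambda x: x            # direct (identity) observation operator

# Stochastic EnKF analysis: perturb the observation for each member
# and nudge it by a Kalman gain built from ensemble statistics.
hx = H(ens)
K = np.cov(ens, hx)[0, 1] / (np.var(hx, ddof=1) + R)
ens_a = ens + K * (y + np.sqrt(R) * rng.standard_normal(N) - hx)

# The posterior mean moves toward the observation, and the spread shrinks.
print(abs(ens_a.mean() - y) < abs(ens.mean() - y))   # True
print(ens_a.std(ddof=1) < ens.std(ddof=1))           # True
```

In the framework above, the state vector instead stacks the parameterized Reynolds-stress discrepancies, and the update is iterated.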
A potential approach for low flow selection in water resource supply and management
NASA Astrophysics Data System (ADS)
Ouyang, Ying
2012-08-01
Low flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in the minimum flows and/or levels program in northeast Florida, USA. This FL approach was then compared to the conventional 7Q10 approach for low flow selection prior to its application, using USGS flow data from a freshwater environment (Big Sunflower River, Mississippi) as well as from an estuarine environment (St. Johns River, Florida). Unlike the FL approach, which is associated with biological and ecological impacts, the 7Q10 approach can lead to the selection of extremely low flows (e.g., near-zero flows), which may hinder its use for establishing criteria that protect streams from significant harm to biological and ecological communities. Additionally, the 7Q10 approach cannot, by definition, be used when the period of data records is less than 10 years, while this is not necessarily the case for the FL approach. Results from both approaches showed that the low flows of the Big Sunflower River and the St. Johns River decreased over time, demonstrating that these two rivers have become drier during the last several decades, with a potential for saltwater intrusion into the St. Johns River. Results from the FL approach further revealed that the recurrence probability of low flow increased while the recurrence interval of low flow decreased over time in both rivers, indicating that low flows occurred more frequently as time elapsed. This report suggests that the FL approach developed in this study is a useful alternative to the 7Q10 approach for low flow selection.
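For reference, the 7Q10 statistic (the annual minimum 7-day average flow with a 10-year recurrence interval) can be sketched empirically as below; operational computations typically fit a log-Pearson Type III distribution rather than taking a raw quantile, and the flow record here is synthetic:

```python
import numpy as np

def empirical_7q10(daily_flow, days_per_year=365):
    """Empirical 7Q10: the 0.1 quantile (10-year recurrence) of
    annual minimum 7-day moving-average flows."""
    # 7-day moving average of daily flows.
    avg7 = np.convolve(daily_flow, np.ones(7) / 7, mode="valid")
    n_years = len(avg7) // days_per_year
    annual_min = [avg7[i*days_per_year:(i+1)*days_per_year].min()
                  for i in range(n_years)]
    return np.quantile(annual_min, 0.1)

# Synthetic 30-year record: seasonal cycle plus noise.
rng = np.random.default_rng(2)
t = np.arange(30 * 365)
flow = 50 + 30 * np.sin(2 * np.pi * t / 365) + 5 * rng.standard_normal(t.size)
q710 = empirical_7q10(flow)
print(q710 < np.median(flow))   # a low-flow statistic sits well below the median
```

The near-zero values the text warns about arise when the 0.1 quantile of these annual minima approaches zero, as it can in intermittent streams.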
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the third is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, proposing a new approach based on the theory of projections onto convex sets (POCS). Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that offset its advantages. This problem was overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
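The paper's two projection sets (wavelet-coefficient restoration and adaptive spatial filtering) are specific to JPEG2000; as a self-contained illustration of the same POCS-style alternation, the classic band-limited recovery scheme (Papoulis-Gerchberg) alternates a data-consistency projection with a band-limiting projection on a 1-D signal:

```python
import numpy as np

n = 128
t = np.arange(n)
truth = np.cos(2*np.pi*3*t/n) + 0.5*np.sin(2*np.pi*5*t/n)   # band-limited

rng = np.random.default_rng(3)
known = rng.random(n) > 0.2          # ~80% of samples survive transmission
x = np.where(known, truth, 0.0)      # corrupted samples zeroed

err0 = np.linalg.norm((x - truth)[~known])
for _ in range(200):
    # Projection 1: restore the uncorrupted samples (data consistency).
    x[known] = truth[known]
    # Projection 2: project onto band-limited signals (|k| <= 5).
    X = np.fft.fft(x)
    X[6:n-5] = 0.0
    x = np.fft.ifft(X).real

err = np.linalg.norm((x - truth)[~known])
print(err0, err)   # reconstruction error on the corrupted samples shrinks
```

Each step is a projection onto a convex set, so the iteration converges to their intersection; the paper replaces these two sets with JPEG2000-specific ones.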
Investigation of persistent Multiplets at the EGS reservoir of Soultz-Sous-Forêts, France
NASA Astrophysics Data System (ADS)
Lengliné, O.; Cauchie, L.; Schmittbuhl, J.
2017-12-01
During the exploitation of geothermal reservoirs, abundant seismicity is generally observed, especially during phases of hydraulic stimulation. The induced seismicity at the Enhanced Geothermal System of Soultz-Sous-Forêts, France, has been thoroughly studied over the years of exploitation. The mechanism at its origin has been related to both fluid pressure increases during stimulation and aseismic creeping movements. The fluid-induced seismic events often exhibit a high degree of similarity, and the mechanism at the origin of these repeated events is thought to be associated with a slow slip process in which asperities on the rupture zone act several times. To better understand the mechanisms associated with such events and the damaged zones involved during hydraulic stimulation, we investigate the behavior of the multiplets and their persistence over several water injection intervals. For this purpose, we analyzed large datasets recorded by a borehole seismic network during several water injection periods (1993, 2000). For each stimulation interval, thousands of events are recorded at depth. We detected the events using an STA/LTA approach and classified them into families of comparable waveforms using cross-correlation analysis. Classification of the seismic events is then refined according to their location within the multiplets. For this purpose, inter-event distances within multiplets are determined from cross-correlation analysis between pairs of events. These distances are then compared to the source dimensions derived from estimates of the corner frequencies. The multiplet properties (location, event size) are then investigated within and across several hydraulic tests.
These steps should improve our knowledge of the repetitive nature of these events, and the investigation of their persistence will outline the heterogeneities of the structures (regional stress perturbations, fluid flow channeling) regularly involved during the different stimulations.
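A minimal STA/LTA trigger of the kind referred to above can be sketched as follows; the window lengths and threshold are illustrative, and the centered (non-causal) windows are a simplification of field implementations:

```python
import numpy as np

def sta_lta(x, n_sta=20, n_lta=500):
    """Ratio of short-term to long-term average signal energy."""
    e = x**2
    sta = np.convolve(e, np.ones(n_sta)/n_sta, mode="same")
    lta = np.convolve(e, np.ones(n_lta)/n_lta, mode="same")
    return sta / (lta + 1e-12)

# Synthetic trace: background noise plus a short transient at sample 1200.
rng = np.random.default_rng(4)
x = 0.1 * rng.standard_normal(2000)
x[1200:1260] += np.sin(np.linspace(0, 30*np.pi, 60))

ratio = sta_lta(x)
onset = int(np.argmax(ratio > 4.0))   # first sample exceeding the threshold
print(1150 < onset < 1280)            # trigger lands on the event
```

Detected waveforms would then be cross-correlated pairwise and grouped into multiplet families, as described above.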
Talygin, E A; Zazybo, N A; Zhorzholiany, S T; Krestinich, I M; Mironov, A A; Kiknadze, G I; Bokerya, L A; Gorodkov, A Y; Makarenko, V N; Alexandrova, S A
2016-01-01
A new approach to analyzing intracardiac blood flow based on the geometric parameters of the left ventricular flow channel is suggested. The parameters used in this method follow from exact solutions of the nonstationary Navier–Stokes equations for self-organized tornado-like flows of a viscous incompressible fluid. The main advantage of this method is that it uses the dynamic anatomy of the intracardiac cavity and the trabecular relief of the streamlined surface of the left ventricle, both registered in a common MRI process, as flow condition indicators. The calculated quantities characterizing the blood flow condition can be used as diagnostic criteria for estimating disturbances of the circulatory function that entail a reduction in cardiac ejection. The developed approach allows clarification of the mechanism of jet organization in the heart and estimation of the contribution of tornado-like flow self-organization to the structure of cardiac ejection.
A Comprehensive Study of Data Collection Schemes Using Mobile Sinks in Wireless Sensor Networks
Khan, Abdul Waheed; Abdullah, Abdul Hanan; Anisi, Mohammad Hossein; Bangash, Javed Iqbal
2014-01-01
Recently, sink mobility has been exploited in numerous schemes to prolong the lifetime of wireless sensor networks (WSNs). Contrary to traditional WSNs, where sensory data from the sensor field are ultimately sent to a static sink, mobile sink-based approaches alleviate energy-hole issues, thereby facilitating balanced energy consumption among nodes. In mobility scenarios, nodes need to keep track of the latest location of mobile sinks for data delivery. However, frequent propagation of sink topological updates undermines the energy conservation goal and therefore should be controlled. Furthermore, controlled propagation of sinks' topological updates affects the performance of routing strategies, thereby increasing data delivery latency and reducing packet delivery ratios. This paper presents a taxonomy of various data collection/dissemination schemes that exploit sink mobility. Based on how sink mobility is exploited in the sensor field, we classify existing schemes into three classes, namely path-constrained, path-unconstrained, and controlled sink mobility-based schemes. We also organize existing schemes based on their primary goals and provide a comparative study to aid readers in selecting the appropriate scheme for their particular intended applications and network dynamics. Finally, we conclude our discussion with the identification of some unresolved issues in the pursuit of data delivery to a mobile sink. PMID:24504107
NASA Astrophysics Data System (ADS)
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Guaranteeing Quality of Service (QoS) in real-time communication is critically important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach, which includes the Moving Pictures Expert Group (MPEG) format, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable-rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams, each with its own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as bounded end-to-end jitter. We have shown that our JSCC scheme outperforms two other popular techniques through simulation and real video experiments in a TCP/IP environment.
Learning to soar in turbulent environments
NASA Astrophysics Data System (ADS)
Reddy, Gautam; Celani, Antonio; Sejnowski, Terrence; Vergassola, Massimo
Birds and gliders exploit warm, rising atmospheric currents (thermals) to reach heights comparable to low-lying clouds with a reduced expenditure of energy. Soaring provides a remarkable instance of complex decision-making in biology and requires a long-term strategy to effectively use the ascending thermals. Furthermore, the problem is technologically relevant for extending the flying range of autonomous gliders. The formation of thermals unavoidably generates strong turbulent fluctuations, which make deriving an efficient policy harder and thus constitute an essential element of soaring. Here, we approach soaring flight as a problem of learning to navigate highly fluctuating turbulent environments. We simulate the atmospheric boundary layer with numerical models of turbulent convective flow and combine them with model-free, experience-based, reinforcement learning algorithms to train virtual gliders. In the regimes of moderate and strong turbulence, the learned policies become increasingly conservative as turbulence levels increase, quantifying the degree of risk affordable in turbulent environments. Reinforcement learning uncovers those sensorimotor cues that permit effective control over soaring in turbulent environments.
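The model-free, experience-based ingredient can be reduced to a toy sketch: epsilon-greedy Q-learning choosing between staying in a noisy updraft and leaving it. The environment below is hypothetical and far simpler than the paper's simulated convective flow:

```python
import numpy as np

rng = np.random.default_rng(5)

# One state, two actions: 0 = circle in the thermal (noisy climb),
# 1 = leave it (steady sink).  Rewards are altitude gains per step.
def reward(action):
    return rng.normal(1.0, 2.0) if action == 0 else rng.normal(-0.5, 0.5)

Q = np.zeros(2)
alpha, eps = 0.05, 0.1
for _ in range(3000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q))
    Q[a] += alpha * (reward(a) - Q[a])   # incremental value update

print(np.argmax(Q))   # learned policy prefers staying in the thermal
```

The turbulence-dependent conservatism found in the study corresponds, in this caricature, to how reward noise shifts the learned action values.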
Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions
Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas
2012-01-01
We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the objects extracted from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high-entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to the GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
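The core Random Walker step the formulation builds on solves a linear system in the graph Laplacian for seed-label probabilities. A dense 1-D toy version is sketched below (real implementations use sparse solvers, and the paper's cosegmentation constraints and GPU mapping are omitted):

```python
import numpy as np

def random_walker_1d(intensity, seeds, beta=50.0):
    """Seeded Random Walker on a 1-D chain of pixels.

    seeds: dict {pixel index: label in {0, 1}}.  Returns a hard label
    per pixel from the probability of reaching a label-0 seed first.
    """
    n = len(intensity)
    w = np.exp(-beta * np.diff(intensity)**2)   # edge weights
    L = np.zeros((n, n))                        # graph Laplacian
    for i, wi in enumerate(w):
        L[i, i] += wi; L[i+1, i+1] += wi
        L[i, i+1] -= wi; L[i+1, i] -= wi
    seeded = np.array(sorted(seeds))
    free = np.array([i for i in range(n) if i not in seeds])
    m = np.array([1.0 if seeds[i] == 0 else 0.0 for i in seeded])
    # Dirichlet problem L_U x = -B m for the label-0 probability.
    x = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, seeded)] @ m)
    prob0 = np.empty(n)
    prob0[seeded], prob0[free] = m, x
    return (prob0 < 0.5).astype(int)            # 0 where prob0 >= 0.5

img = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
labels = random_walker_1d(img, seeds={0: 0, 5: 1})
print(labels)   # [0 0 0 1 1 1]
```

The sparse-matrix structure of this solve is exactly what the paper maps onto the GPU.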
International Trade Modelling Using Open Flow Networks: A Flow-Distance Based Analysis.
Shen, Bin; Zhang, Jiang; Li, Yixiao; Zheng, Qiuhua; Li, Xingsen
2015-01-01
This paper models and analyzes international trade flows using open flow networks (OFNs) with the approaches of flow distances, which provide a novel perspective and effective tools for the study of international trade. We discuss the establishment of OFNs of international trade from two coupled viewpoints: the viewpoint of trading commodity flow and that of money flow. Based on the novel model with flow distance approaches, meaningful insights are gained. First, by introducing the concepts of trade trophic levels and niches, countries' roles and positions in the global supply chains (or value-added chains) can be evaluated quantitatively. We find that the distributions of trading "trophic levels" have the similar clustering pattern for different types of commodities, and summarize some regularities between money flow and commodity flow viewpoints. Second, we find that active and competitive countries trade a wide spectrum of products, while inactive and underdeveloped countries trade a limited variety of products. Besides, some abnormal countries import many types of goods, which the vast majority of countries do not need to import. Third, harmonic node centrality is proposed and we find the phenomenon of centrality stratification. All the results illustrate the usefulness of the model of OFNs with its network approaches for investigating international trade flows.
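The proposed harmonic node centrality is defined on weighted open flow networks with flow distances; for intuition, here is the standard unweighted harmonic centrality, C(v) = Σ_{u≠v} 1/d(u, v), on a toy directed trade graph:

```python
import numpy as np
from collections import deque

def harmonic_centrality(adj):
    """Standard harmonic centrality: C(v) = sum over u != v of
    1/d(u, v); unreachable pairs contribute 0."""
    n = len(adj)
    scores = np.zeros(n)
    for src in range(n):                 # BFS from every source node
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if d > 0:
                scores[v] += 1.0 / d     # accumulate inbound distances
    return scores

# Toy graph: countries 1-3 all export to hub country 0.
adj = {0: [], 1: [0], 2: [0], 3: [0, 1]}
print(harmonic_centrality(adj))   # hub 0 is the most central: [3. 1. 0. 0.]
```

In the paper's setting, shortest-path hops are replaced by flow distances on the weighted trade network.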
ERIC Educational Resources Information Center
Cermakova, Lucie; Moneta, Giovanni B.; Spada, Marcantonio M.
2010-01-01
This study investigated how attentional control and study-related dispositional flow influence students' approaches to studying when preparing for academic examinations. Based on information-processing theories, it was hypothesised that attentional control would be positively associated with deep and strategic approaches to studying, and…
Development Issues on Linked Data Weblog Enrichment
NASA Astrophysics Data System (ADS)
Ruiz-Rube, Iván; Cornejo, Carlos M.; Dodero, Juan Manuel; García, Vicente M.
In this paper, we describe the issues found during the development of LinkedBlog, a Linked Data extension for WordPress blogs. This extension makes it possible to enrich text-based and video information contained in blog entries with RDF triples that can be stored, managed, and exploited by other web-based applications. The issues concern the generality, usability, tracking, depth, security, trustworthiness, and performance of the linked data enrichment process. The presented annotation approach aims at keeping web-based contents independent of the underlying ontological model by providing a loosely coupled RDFa-based approach in the linked data application. Finally, we detail how the performance of annotations can be improved through a semantic reasoner.
Luo, Bin; Liu, Shaomin; Zhi, Linjie
2012-03-12
A 'gold rush' has been triggered all over the world for exploiting the possible applications of graphene-based nanomaterials. For this purpose, two important problems have to be solved; one is the preparation of graphene-based nanomaterials with well-defined structures, and the other is the controllable fabrication of these materials into functional devices. This review gives a brief overview of the recent research concerning chemical and thermal approaches toward the production of well-defined graphene-based nanomaterials and their applications in energy-related areas, including solar cells, lithium ion secondary batteries, supercapacitors, and catalysis. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rekart, Michael L
2005-12-17
Sex work is an extremely dangerous profession. The use of harm-reduction principles can help to safeguard sex workers' lives in the same way that drug users have benefited from drug-use harm reduction. Sex workers are exposed to serious harms: drug use, disease, violence, discrimination, debt, criminalisation, and exploitation (child prostitution, trafficking for sex work, and exploitation of migrants). Successful and promising harm-reduction strategies are available: education, empowerment, prevention, care, occupational health and safety, decriminalisation of sex workers, and human-rights-based approaches. Successful interventions include peer education, training in condom-negotiating skills, safety tips for street-based sex workers, male and female condoms, the prevention-care synergy, occupational health and safety guidelines for brothels, self-help organisations, and community-based child protection networks. Straightforward and achievable steps are available to improve the day-to-day lives of sex workers while they continue to work. Conceptualising and debating sex-work harm reduction as a new paradigm can hasten this process.
Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.
Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai
2016-03-01
Tensor clustering is an important tool that exploits the intrinsically rich structures in real-world multiarray or tensor datasets. In dealing with such datasets, the standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
Mechanics of fluid flow over compliant wrinkled polymeric surfaces
NASA Astrophysics Data System (ADS)
Raayai, Shabnam; McKinley, Gareth; Boyce, Mary
2014-03-01
Skin friction coefficients (based on frontal area) of sharks and dolphins are lower than those of birds, fish and swimming beetles. By exploiting either flow-induced changes in their flexible skin or microscale textures, dolphins and sharks can change the structure of the fluid flow around them and thus reduce viscous drag forces on their bodies. Inspired by this ability, investigators have tried using compliant walls and riblet-like textures as drag reduction methods in the aircraft and marine industries and have been able to achieve reductions of up to 19%. Here we investigate flow-structure interaction and wrinkling of soft polymer surfaces that can emulate shark riblets and dolphins' flexible skin. Wrinkling arises spontaneously as the result of mismatched deformation of a thin stiff coating bound to a thick soft elastic substrate. Wrinkles can be fabricated by controlling the ratio of the stiffnesses of the coating and substrate, the applied displacement, and the thickness of the coating. In this work we examine the evolution of the kinematic structures associated with steady viscous flow over the wrinkled polymer surfaces and, in particular, compare the skin friction with corresponding results for flow over non-textured and rigid surfaces.
Energy minimization for self-organized structure formation and actuation
NASA Astrophysics Data System (ADS)
Kofod, Guggi; Wirges, Werner; Paajanen, Mika; Bauer, Siegfried
2007-02-01
An approach for creating complex structures with embedded actuation in planar manufacturing steps is presented. Self-organization and energy minimization are central to this approach, illustrated with a model based on minimization of the hyperelastic free energy strain function of a stretched elastomer and the bending elastic energy of a plastic frame. A tulip-shaped gripper structure illustrates the technological potential of the approach. Advantages are simplicity of manufacture, complexity of final structures, and the ease with which any electroactive material can be exploited as means of actuation.
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Marrocco, Cristina; Pallotta, Valeria; D'alessandro, Angelo; Alves, Gilda; Zolla, Lello
2012-05-01
Blood doping represents one main trend in doping strategies. Blood doping refers to the practice of boosting the number of red blood cells (RBCs) in the bloodstream in order to enhance athletic performance, by means of blood transfusions, administration of erythropoiesis-stimulating substances, blood substitutes, natural or artificial altitude facilities, and innovative gene therapies. While detection of recombinant EPO and homologous transfusion is already feasible through electrophoretic, mass spectrometry or flow cytometry-based approaches, no method is currently available to tackle doping strategies relying on autologous transfusions. We exploited an in vitro model of autologous transfusion through a 1:10 dilution of concentrated RBCs after 30 days of storage upon appropriate dilution in freshly withdrawn RBCs from the same donor. Western blot towards membrane Prdx2 and Percoll density gradients were exploited to assess their suitability as biomarkers of transfusion. Membrane Prdx2 was visible in day 30 samples albeit not in day 0, while it was still visible in the 1:10 dilution of day 30 in day 0 RBCs. Cell gradients also highlighted changes in the profile of the RBC subpopulations upon dilution of stored RBCs in the fresh ones. From this preliminary in vitro investigation it emerges that Prdx2 and RBC populations might be further tested as candidate biomarkers of blood doping through autologous transfusion, though it is yet to be assessed whether the kinetics in vivo of Prdx2 exposure in the membrane of transfused RBCs will endow a sufficient time-window to allow reliable anti-doping testing.
Lewis, Oliver; Campbell, Ann
This paper explores how, and how effectively, two systems of international law have addressed exploitation, violence and abuse of people with mental disabilities. The two international systems reviewed were the Council of Europe's European Court of Human Rights and the United Nations Committee on the Rights of Persons with Disabilities. The two issues dealt with are (a) forced institutionalisation and denial of community-based services and (b) medically-sanctioned treatment as abuse or violence. The paper offers a comparative analysis of the way in which the two bodies have dealt with exploitation, violence and abuse of people with disabilities, and offers recommendations as to how the two bodies could adjust their approaches to come into closer alignment. Copyright © 2017. Published by Elsevier Ltd.
Exploiting Concurrent Wake-Up Transmissions Using Beat Frequencies.
Kumberg, Timo; Schindelhauer, Christian; Reindl, Leonhard
2017-07-26
Wake-up receivers are a natural choice for wireless sensor networks because of their ultra-low power consumption and their ability to provide communications on demand. A downside of ultra-low power wake-up receivers is their low sensitivity, caused by the passive demodulation of the carrier signal. In this article, we present a novel communication scheme that exploits purposefully interfering out-of-tune signals of two or more wireless sensor nodes, which produce the wake-up signal as the beat frequency of superposed carriers. Additionally, we introduce a communication algorithm and a flooding protocol based on this approach. Our experiments show that our approach increases the received signal strength by up to 3 dB, improving communication robustness and reliability. Furthermore, we demonstrate the feasibility of our newly developed protocols by means of an outdoor experiment and an indoor setup consisting of several nodes. The flooding algorithm achieves almost a 100% wake-up rate in less than 20 ms.
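The beat-frequency mechanism can be illustrated numerically: two slightly detuned carriers sum to a signal whose amplitude envelope oscillates at the difference frequency, which a rectifying envelope detector (as in a passive wake-up receiver) can extract. All frequencies below are illustrative:

```python
import numpy as np

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
f1, f2 = 1000.0, 1010.0                  # two out-of-tune carriers

s = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# Crude envelope detector: rectify, then average out the carrier.
env = np.convolve(np.abs(s), np.ones(80)/80, mode="same")
env -= env.mean()

# The dominant low-frequency component is the beat at |f1 - f2| = 10 Hz.
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1/fs)
band = (freqs >= 1) & (freqs <= 100)
beat = freqs[band][np.argmax(spec[band])]
print(beat)   # 10.0
```

The wake-up pattern is then encoded by keying this beat envelope rather than by increasing carrier power.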
Region-Based Prediction for Image Compression in the Cloud.
Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine
2018-04-01
With the increasing number of images stored in the cloud, external image similarities can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements over current inter-image coding solutions such as High Efficiency Video Coding (HEVC).
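The photometric part of the compensation step can be illustrated with a toy stand-in: fit a gain/offset model cur ≈ a·ref + b over the pixels of a matched region by least squares, then predict the current region from the reference. This is a simplification for illustration only; the compensation models actually used in the paper may differ.

```python
def fit_gain_offset(ref_pixels, cur_pixels):
    """Least-squares fit of cur ~ a * ref + b over matched region pixels.
    A toy stand-in for a photometric compensation step."""
    n = len(ref_pixels)
    sx = sum(ref_pixels)
    sy = sum(cur_pixels)
    sxx = sum(x * x for x in ref_pixels)
    sxy = sum(x * y for x, y in zip(ref_pixels, cur_pixels))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def compensate(ref_pixels, a, b):
    """Apply the fitted model to predict the current region from the reference."""
    return [a * x + b for x in ref_pixels]

# A matched region that is 10% brighter with a +5 offset in the current image:
ref = [10, 20, 30, 40, 50]
cur = [1.1 * x + 5 for x in ref]
a, b = fit_gain_offset(ref, cur)
```

The compensated reference region then serves as a prediction block, and only the residual needs to be encoded.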
Optimizing Search and Ranking in Folksonomy Systems by Exploiting Context Information
NASA Astrophysics Data System (ADS)
Abel, Fabian; Henze, Nicola; Krause, Daniel
Tagging systems enable users to annotate resources with freely chosen keywords. The evolving set of tag assignments is called a folksonomy, and several approaches already exploit folksonomies to improve resource retrieval. In this paper, we analyze and compare two graph-based ranking algorithms: FolkRank and SocialPageRank. We enhance these algorithms by exploiting the context of tags and evaluate the results on the GroupMe! dataset. In GroupMe!, users can organize and maintain arbitrary Web resources in self-defined groups. When users annotate resources in GroupMe!, this can be interpreted in the context of a certain group. The grouping activity itself is easy for users to perform, yet it delivers valuable semantic information about resources and their context. We present GRank, which uses this context information to improve and optimize the detection of relevant search results, and we compare different strategies for ranking result lists in folksonomy systems.
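Both FolkRank and SocialPageRank build on PageRank-style weight spreading over the user-tag-resource graph. A minimal power-iteration PageRank over such a graph can be sketched as follows; the graph and node names are a made-up toy example, not the GroupMe! dataset, and FolkRank additionally uses a preference vector in place of the uniform teleport term.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Plain power-iteration PageRank over a folksonomy-style graph given
    as {node: [neighbors]}. Each node spreads its rank evenly over its
    neighbors; a damped uniform teleport term keeps the chain ergodic."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {}
        for v in nodes:
            inflow = sum(rank[u] / len(graph[u]) for u in graph if v in graph[u])
            new[v] = (1 - damping) / n + damping * inflow
        rank = new
    return rank

# Tiny tripartite example: one user, two tags, two resources.
g = {
    "user1": ["tag:web", "tag:search"],
    "tag:web": ["user1", "res:page1"],
    "tag:search": ["user1", "res:page1", "res:page2"],
    "res:page1": ["tag:web", "tag:search"],
    "res:page2": ["tag:search"],
}
ranks = pagerank(g)
```

Here res:page1, annotated with two tags, ends up ranked above res:page2, which is the kind of structural signal the paper's GRank refines with group context.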
Thermal Performance of Surface Wick Structures.
NASA Astrophysics Data System (ADS)
Chen, Yongkang; Tavan, Noel; Baker, John; Melvin, Lawrence; Weislogel, Mark
2010-03-01
Microscale surface wick structures that exploit capillary-driven flow in interior corners have been designed. In this study we examine the interplay between capillary flow and evaporative heat transfer that effectively reduces the surface temperature. The tests are performed by raising the surface temperature to various levels before the flow is introduced to the surfaces. Although heat transfer weakens the capillary-driven flow, the surface temperature is nevertheless observed to be reduced significantly. The effects of geometric parameters and interconnectivity are to be characterized to identify optimal configurations.
Geist, Rebecca E; DuBois, Chase H; Nichols, Timothy C; Caughey, Melissa C; Merricks, Elizabeth P; Raymer, Robin; Gallippi, Caterina M
2016-09-01
Acoustic radiation force impulse (ARFI) Surveillance of Subcutaneous Hemorrhage (ASSH) has been previously demonstrated to differentiate bleeding phenotype and responses to therapy in dogs and humans, but to date, the method has lacked experimental validation. This work explores experimental validation of ASSH in a poroelastic tissue-mimic and in vivo in dogs. The experimental design exploits calibrated flow rates and infusion durations of evaporated milk in tofu or heparinized autologous blood in dogs. The validation approach enables controlled comparisons of ASSH-derived bleeding rate (BR) and time to hemostasis (TTH) metrics. In tissue-mimicking experiments, halving the calibrated flow rate yielded ASSH-derived BRs that decreased by 44% to 48%. Furthermore, for calibrated flow durations of 5.0 minutes and 7.0 minutes, average ASSH-derived TTH was 5.2 minutes and 7.0 minutes, respectively, with ASSH predicting the correct TTH in 78% of trials. In dogs undergoing calibrated autologous blood infusion, ASSH measured a 3-minute increase in TTH, corresponding to the same increase in the calibrated flow duration. For a measured 5% decrease in autologous infusion flow rate, ASSH detected a 7% decrease in BR. These tissue-mimicking and in vivo preclinical experimental validation studies suggest the ASSH BR and TTH measures reflect bleeding dynamics. © The Author(s) 2015.
ERIC Educational Resources Information Center
Kohlbacher, Florian; Mukai, Kazuo
2007-01-01
Purpose: This paper aims to explain and analyze community-based corporate knowledge sharing and organizational learning, the actual use of communities in Hewlett Packard (HP) Consulting and Integration (CI), and their role in leveraging and exploiting existing knowledge and creating new knowledge. Design/methodology/approach: The paper presents an…
Precise and Efficient Retrieval of Captioned Images: The MARIE Project.
ERIC Educational Resources Information Center
Rowe, Neil C.
1999-01-01
The MARIE project explores knowledge-based information retrieval of captioned images of the kind found in picture libraries and on the Internet. MARIE's five-part approach exploits the idea that images are easier to understand with context, especially descriptive text near them, but it also does image analysis. Experiments show MARIE prototypes…
A Low Cost Structurally Optimized Design for Diverse Filter Types
Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar
2016-01-01
A wide range of image processing applications deploys two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment, and thus calls for optimized solutions. Most optimizations of these filters are based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrow-scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness when implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to effectually reduce their computational cost, with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry-based subtypes as well as the special case of asymmetric filters.
The two-fold optimized framework thus reduces filter computational cost by up to 75% compared to the conventional approach, while its versatility not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image processing applications, especially in a resource-constrained environment. PMID:27832133
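The multiplier savings from symmetry can be illustrated with quadrant symmetry alone: when h[i][j] = h[-i][j] = h[i][-j] = h[-i][-j], the mirror-symmetric input samples are summed first so each stored coefficient costs a single multiply, roughly a 4x reduction in multiplier count. This generic sketch is for illustration only and does not reproduce the paper's composite T-symmetric structure.

```python
def filter_quadrant_symmetric(img, quad):
    """2D FIR filtering exploiting quadrant symmetry: quad[i][j] holds
    coefficients for i, j >= 0, and implicitly
    h[i][j] = h[-i][j] = h[i][-j] = h[-i][-j].
    Symmetric input samples are added first so that each stored
    coefficient costs one multiply instead of up to four."""
    H, W = len(img), len(img[0])
    K = len(quad)          # quadrant size; full kernel is (2K-1) x (2K-1)
    out = [[0.0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            acc = 0.0
            for i in range(K):
                for j in range(K):
                    # gather the up-to-4 mirror-symmetric samples (the set
                    # dedupes mirrors when i == 0 or j == 0)
                    s = 0.0
                    for di, dj in {(i, j), (-i, j), (i, -j), (-i, -j)}:
                        rr, cc = r + di, c + dj
                        if 0 <= rr < H and 0 <= cc < W:
                            s += img[rr][cc]
                    acc += quad[i][j] * s   # one multiply per coefficient
            out[r][c] = acc
    return out
```

For a constant image of ones, an interior output pixel equals the full-kernel coefficient sum q00 + 2*q01 + 2*q10 + 4*q11, which is a quick sanity check on the mirror bookkeeping.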
Investigating bioconjugation by atomic force microscopy.
Tessmer, Ingrid; Kaur, Parminder; Lin, Jiangguo; Wang, Hong
2013-07-15
Nanotechnological applications increasingly exploit the selectivity and processivity of biological molecules. Integration of biomolecules such as proteins or DNA into nano-systems typically requires their conjugation to surfaces, for example of carbon nanotubes or fluorescent quantum dots. The bioconjugated nanostructures exploit the unique strengths of both their biological and nanoparticle components and are used in diverse, future-oriented research areas ranging from nanoelectronics to biosensing and nanomedicine. Atomic force microscopy (AFM) imaging provides valuable, direct insight for the evaluation of different conjugation approaches at the level of individual molecules. Recent technical advances have enabled high-speed AFM imaging, supporting time resolutions sufficient to follow conformational changes of intricately assembled nanostructures in solution. In addition, integration of AFM with different spectroscopic and imaging approaches provides an enhanced level of information on the investigated sample. Furthermore, the AFM itself can serve as an active tool for the assembly of nanostructures based on bioconjugation. AFM is hence a major workhorse in nanotechnology; it is a powerful tool for the structural investigation of bioconjugation and bioconjugation-induced effects as well as the simultaneous active assembly and analysis of bioconjugation-based nanostructures. PMID:23855448
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy deposited in each channel of a detector array by x-rays that have been partially absorbed on their way through the object. The measurement process is complex, and quantitative measurements are inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration remains significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) a direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem.
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
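The iterative strategy that won out is the conjugate-gradient method, which applies because the quadratic PWLS objective yields a symmetric positive-definite system. A minimal, unpreconditioned CG solver is sketched below for a small dense system; the authors' implementation additionally exploits the sinogram problem's sparsity and applies a preconditioner, neither of which is shown here.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite matrix A
    (dense list-of-lists) by the conjugate-gradient method. Quadratic
    objectives like PWLS reduce to exactly such SPD linear systems."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual b - A x, with x = 0
    p = list(r)                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:              # squared residual norm small enough
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Small SPD system: A = [[4,1],[1,3]], b = [1,2]  ->  x = [1/11, 7/11]
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic CG converges in at most n iterations for an n x n system, which is why careful preconditioning (reducing the effective condition number) is what makes it competitive at sinogram scale.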
Exploiting Quantum Resonance to Solve Combinatorial Problems
NASA Technical Reports Server (NTRS)
Zak, Michail; Fijany, Amir
2006-01-01
Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.
Zhang, Weixiong; Ruan, Jianhua; Ho, Tuan-Hua David; You, Youngsook; Yu, Taotao; Quatrano, Ralph S
2005-07-15
A fundamental problem of computational genomics is identifying the genes that respond to certain endogenous cues and environmental stimuli; this problem can be referred to as targeted gene finding. Since gene regulation is mainly determined by the binding of transcription factors to cis-regulatory DNA sequences, most existing gene annotation methods, which exploit the conservation of open reading frames, are not effective in finding target genes. A viable approach to targeted gene finding is to exploit the cis-regulatory elements that are known to be responsible for the transcription of target genes. Given such cis-elements, putative target genes whose promoters contain the elements can be identified. As a case study, we apply this approach to predict the genes in the model plant Arabidopsis thaliana that are inducible by a phytohormone, abscisic acid (ABA), and by abiotic stress, such as drought, cold and salinity. We first construct and analyze two ABA-specific cis-elements, the ABA-responsive element (ABRE) and its coupling element (CE), in A. thaliana, based on their conservation in rice and other cereal plants. We then use the ABRE-CE module to identify putative ABA-responsive genes in A. thaliana. Based on RT-PCR verification and results from the literature, this method has an accuracy rate of 67.5% for the top 40 predictions. The cis-element-based targeted gene finding approach is expected to be widely applicable, since a large number of cis-elements in many species are available.
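The module-based prediction step amounts to scanning promoter sequences for co-occurring motifs. The sketch below flags a promoter when an ABRE-like and a CE-like match fall within a gap threshold; the regex consensus strings and the gap value are placeholders for illustration, not the conservation-derived definitions used in the study.

```python
import re

# Placeholder consensus patterns for illustration only; the paper derives
# its ABRE and coupling-element (CE) definitions from cross-species
# conservation, which these simple regexes do not reproduce.
ABRE = re.compile("ACGTG[GT]C")
CE = re.compile("[CG]CGCGC")

def is_candidate_target(promoter, max_gap=100):
    """Flag a promoter as a putative ABA-responsive target when an ABRE
    and a CE co-occur within max_gap bases (module-style co-occurrence)."""
    for a in ABRE.finditer(promoter):
        for c in CE.finditer(promoter):
            if abs(c.start() - a.start()) <= max_gap:
                return True
    return False

# Toy promoter carrying both motifs 27 bases apart:
promoter = "TTTT" + "ACGTGGC" + "A" * 20 + "GCGCGC" + "TTTT"
```

Genes whose upstream regions pass this filter become the candidate set that the study then verifies by RT-PCR.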
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
A Robust Model-Based Coding Technique for Ultrasound Video
NASA Technical Reports Server (NTRS)
Docef, Alen; Smith, Mark J. T.
1995-01-01
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
Advancing towards functional environmental flows for temperate floodplain rivers.
Hayes, Daniel S; Brändle, Julia M; Seliger, Carina; Zeiringer, Bernhard; Ferreira, Teresa; Schmutz, Stefan
2018-08-15
Abstraction, diversion, and storage of flow alter rivers worldwide. In this context, minimum flow regulations are applied to mitigate adverse impacts and to protect affected river reaches from environmental deterioration. Mostly, however, only selected instream criteria are considered, neglecting the floodplain as an indispensable part of the fluvial ecosystem. Based on essential functions and processes of unimpaired temperate floodplain rivers, we identify fundamental principles to which we must adhere to determine truly ecologically-relevant environmental flows. Literature reveals that the natural flow regime and its seasonal components are primary drivers for functions and processes of abiotic and biotic elements such as morphology, water quality, floodplain, groundwater, riparian vegetation, fish, macroinvertebrates, and amphibians, thus preserving the integrity of floodplain river ecosystems. Based on the relationship between key flow regime elements and associated environmental components within as well as adjacent to the river, we formulate a process-oriented functional floodplain flow (ff-flow) approach which offers a holistic conceptual framework for environmental flow assessment in temperate floodplain river systems. The ff-flow approach underlines the importance of emulating the natural flow regime with its seasonal variability, flow magnitude, frequency, event duration, and rise and fall of the hydrograph. We conclude that the ecological principles presented in the ff-flow approach ensure the protection of floodplain rivers impacted by flow regulation by establishing ecologically relevant environmental flows and guiding flow restoration measures. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
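The core idea, that extrema of the objective correspond to fixed points of a flow, can be illustrated with the simplest such flow: forward-Euler integration of dx/dt = -grad f(x), which halts where the gradient vanishes. This is a generic stand-in for illustration; the paper's method flows the maximum-entropy prior via homotopy continuation rather than the plain gradient flow shown here.

```python
def flow_minimize(grad, x0, dt=0.01, steps=10000, tol=1e-10):
    """Integrate the flow dx/dt = -grad f(x) with forward Euler until a
    fixed point is reached (squared gradient norm below tol), i.e. an
    extremum of f."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - dt * gi for xi, gi in zip(x, g)]
        if sum(gi * gi for gi in g) < tol:
            break
    return x

# Quadratic bowl f(x) = (x0 - 1)^2 + (x1 + 2)^2; fixed point at (1, -2).
grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]
x_star = flow_minimize(grad, [0.0, 0.0])
```

For the frustrated Ising ground states mentioned in the abstract, the objective is non-convex, which is exactly why the continuous prior update along the flow matters: it steers the trajectory rather than committing to one basin from the start.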
Astolfi, Andrea; Felicetti, Tommaso; Iraci, Nunzio; Manfroni, Giuseppe; Massari, Serena; Pietrella, Donatella; Tabarrini, Oriana; Kaatz, Glenn W; Barreca, Maria L; Sabatini, Stefano; Cecchetti, Violetta
2017-02-23
An intriguing opportunity to address antimicrobial resistance is represented by the inhibition of efflux pumps. Focusing on NorA, the most important efflux pump of Staphylococcus aureus, an efflux pump inhibitor (EPI) library was used for ligand-based pharmacophore modeling studies. By exploiting the obtained models, an in silico drug repositioning approach allowed for the identification of novel and potent NorA EPIs.
A semi-automatic method for extracting thin line structures in images as rooted tree network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brazzini, Jacopo; Dillard, Scott; Soille, Pierre
2010-01-01
This paper addresses the problem of semi-automatic extraction of line networks in digital images, e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts and consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed with this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
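The directional information enters through the eigen-decomposition of the 2x2 gradient structure tensor: the eigenvector of the larger eigenvalue points across the line structure, and its orthogonal direction (along the line) is the one an anisotropic metric makes cheap for geodesic propagation. A minimal closed-form eigen-solver for a 2x2 symmetric tensor is sketched below; the tensor entries would come from smoothed products of image gradients, a step omitted here.

```python
import math

def structure_tensor_eigen(jxx, jxy, jyy):
    """Eigenvalues and dominant eigenvector of the symmetric 2x2 gradient
    structure tensor J = [[Jxx, Jxy], [Jxy, Jyy]]. The eigenvector of the
    larger eigenvalue is the across-structure direction; its orthogonal
    complement runs along the line structure."""
    tr = jxx + jyy
    d = math.sqrt((jxx - jyy) ** 2 + 4 * jxy * jxy)
    lam1, lam2 = (tr + d) / 2, (tr - d) / 2       # lam1 >= lam2
    if jxy != 0:
        v1 = (lam1 - jyy, jxy)                    # (A - lam1 I) v1 = 0
    else:
        v1 = (1.0, 0.0) if jxx >= jyy else (0.0, 1.0)
    norm = math.hypot(*v1)
    return lam1, lam2, (v1[0] / norm, v1[1] / norm)

# Strong horizontal gradient (a vertical edge/line): dominant direction is x.
lam1, lam2, v = structure_tensor_eigen(2.0, 0.0, 1.0)
```

A large eigenvalue ratio lam1/lam2 signals a strongly oriented structure, which is where lowering the metric cost along the line direction pays off most.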
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
Having previously defined and developed a structural power flow approach for the analysis of structure-borne transmission of structural vibrations, the author uses the technique here to analyze the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results are compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the types of results obtained by the two methods. Additionally, to demonstrate the advantages of the power flow method and to show that power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each.
Controlling mixing and segregation in time periodic granular flows
NASA Astrophysics Data System (ADS)
Bhattacharya, Tathagata
Segregation is a major problem for many solids processing industries. Differences in particle size or density can lead to flow-induced segregation. In the present work, we employ the discrete element method (DEM), one type of particle dynamics (PD) technique, to investigate the mixing and segregation of granular material in prototypical solids handling devices, such as a rotating drum and a chute. In DEM, the trajectories of individual particles are calculated from Newton's laws of motion by employing suitable contact force models and a collision detection algorithm. Recently, it has been suggested that segregation in particle mixers can be thwarted if the particle flow is inverted at a rate above a critical forcing frequency. Further, it has been hypothesized that, for a rotating drum, the effectiveness of this technique can be linked to the probability distribution of the number of times a particle passes through the flowing layer per rotation of the drum. In the first portion of this work, various configurations of solid mixers are numerically and experimentally studied to investigate the conditions for improved mixing in light of these hypotheses. Besides rotating drums, many studies of granular flow have focused on gravity-driven chute flows owing to their practical importance in granular transportation and to the fact that the relative simplicity of this type of flow allows for the development and testing of new theories. In this part of the work, we observe the deposition behavior of both mono-sized and polydisperse dry granular materials in an inclined chute flow. The effects of different parameters such as chute angle, particle size, falling height and charge amount on the mass fraction distribution of granular materials after deposition are investigated. The simulation results obtained using DEM are compared with the experimental findings and a high degree of agreement is observed.
Tuning of the underlying contact force parameters allows the achievement of realistic results and is used as a means of validating the model against available experimental data. The tuned model is then used to find the critical chute length for segregation based on the hypothesis that segregation can be thwarted if the particle flow is inverted at a rate above a critical forcing frequency. The critical frequency, fcrit, is inversely proportional to the characteristic time of segregation, ts. Mixing is observed instead of segregation when the chute length L < U avgts, where Uavg denotes the average stream-wise flow velocity of the particles. While segregation is often an undesired effect, sometimes separating the components of a particle mixture is the ultimate goal. Rate-based separation processes hold promise as both more environmentally benign as well as less energy intensive when compared to conventional particle separations technologies such as vibrating screens or flotation methods. This approach is based on differences in the kinetic properties of the components of a mixture, such as the velocity of migration or diffusivity. In this portion of the work, two examples of novel rate-based separation devices are demonstrated. The first example involves the study of the dynamics of gravity-driven particles through an array of obstacles. Both discrete element (DEM) simulations and experiments are used to augment the understanding of this device. Dissipative collisions (both between the particles themselves and with the obstacles) give rise to a diffusive motion of particles perpendicular to the flow direction and the differences in diffusion lengths are exploited to separate the particles. The second example employs DEM to analyze a ratchet mechanism where a current of particles can be produced in a direction perpendicular to the energy input. In this setup, a vibrating saw-toothed base is employed to induce different mobility for different types of particles. 
The effects of operating conditions and design parameters on the separation efficiency are discussed.
Keywords: granular flow, particle, mixing, segregation, discrete element method, particle dynamics, tumbler, chute, periodic flow inversion, collisional flow, rate-based separation, ratchet, static separator, dissipative particle dynamics, non-spherical droplet.
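The mixing criterion from the chute study, L < U_avg * t_s with t_s = 1 / f_crit, reduces to a one-line calculation. The helper below encodes it; the numerical values are illustrative, not taken from the study.

```python
def critical_chute_length(u_avg, t_s):
    """Critical chute length below which periodic flow inversion keeps the
    mixture mixed: L_crit = U_avg * t_s, where U_avg is the average
    stream-wise flow velocity and t_s = 1 / f_crit is the characteristic
    segregation time."""
    return u_avg * t_s

def flow_outcome(chute_length, u_avg, t_s):
    """Classify a chute according to the L < U_avg * t_s mixing criterion."""
    if chute_length < critical_chute_length(u_avg, t_s):
        return "mixing"
    return "segregation"

# Illustrative numbers: 0.5 m/s stream-wise velocity and a 2 s segregation
# time give a 1 m critical length.
print(flow_outcome(0.8, 0.5, 2.0))   # mixing
print(flow_outcome(1.5, 0.5, 2.0))   # segregation
```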
A new hybrid case-based reasoning approach for medical diagnosis systems.
Sharaf-El-Deen, Dina A; Moawad, Ibrahim F; Khalifa, M E
2014-02-01
Case-Based Reasoning (CBR) has been applied in many different medical applications. Due to the complexity and diversity of this domain, most medical CBR systems become hybrid. Moreover, the case adaptation process in CBR is often a challenging issue, as it is traditionally carried out manually by domain experts. In this paper, a new hybrid case-based reasoning approach for medical diagnosis systems is proposed to improve the accuracy of retrieval-only CBR systems. The approach integrates case-based reasoning and rule-based reasoning, and applies the adaptation process automatically by exploiting adaptation rules. Both adaptation rules and reasoning rules are generated from the case base. After solving a new case, the case base is expanded and both adaptation and reasoning rules are updated. To evaluate the proposed approach, a prototype was implemented and used experimentally to diagnose breast cancer and thyroid diseases. The results show that the proposed approach increases the diagnostic accuracy of retrieval-only CBR systems and provides reliable accuracy compared to current breast cancer and thyroid diagnosis systems.
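The retrieve-then-adapt loop that distinguishes this hybrid from retrieval-only CBR can be sketched in a few lines: nearest-neighbor retrieval produces a candidate solution, and adaptation rules then revise it based on the difference between the retrieved case and the query. The case representation, feature names, and the single rule below are hypothetical toy examples, not the paper's generated rules.

```python
def retrieve(case_base, query):
    """Nearest-neighbor retrieval over numeric feature dicts
    (squared Euclidean distance on the query's features)."""
    def dist(case):
        return sum((case["features"][k] - query[k]) ** 2 for k in query)
    return min(case_base, key=dist)

def adapt(case, query, adaptation_rules):
    """Rule-based adaptation: each rule inspects the retrieved case versus
    the query and may override the solution, replacing the manual
    expert-driven adaptation of traditional CBR."""
    solution = case["solution"]
    for condition, revised in adaptation_rules:
        if condition(case["features"], query):
            solution = revised
    return solution

# Hypothetical toy case base and one adaptation rule.
case_base = [
    {"features": {"age": 40, "marker": 1.2}, "solution": "benign"},
    {"features": {"age": 65, "marker": 7.5}, "solution": "malignant"},
]
rules = [(lambda c, q: q["marker"] - c["marker"] > 3.0, "refer for biopsy")]

query = {"age": 45, "marker": 5.0}
case = retrieve(case_base, query)
diagnosis = adapt(case, query, rules)
```

Here retrieval alone would return "benign", but the adaptation rule notices the query's marker value diverges sharply from the retrieved case and revises the answer, which is the accuracy gain the paper attributes to automatic adaptation.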
Print-and-play: a new paradigm for the nearly-instant aerospace system
NASA Astrophysics Data System (ADS)
Church, Kenneth H.; Newton, C. Michael; Marsh, Albert J.; MacDonald, Eric W.; Soto, Cassandra D.; Lyke, James C.
2010-04-01
Nanosatellites, in particular the sub-class of CubeSats, will provide an ability to place multiple small satellites in space more efficiently than larger satellites, with the eventual expectation that they will compete against some of the roles played by traditional large satellites that are expensive to launch. In order to do this, it is necessary to decrease weight and volume without decreasing capabilities. At the same time, it is desirable to create systems extremely rapidly, in less than a week from concept to orbit. The Air Force has been working on a concept termed "CubeFlow", a web-based design flow for rapidly constructible CubeSat systems. In CubeFlow, distributed suppliers create offerings (modules and software functions for satellite bus and payloads) meeting standard size and interface specifications, which are registered as a living catalog available to a design community within the web-based CubeFlow environment. The idea of allowing any interested parties to make circuits and sensors that simply and compatibly connect to a modular satellite carrier is going to change how satellites are developed and launched, promoting creative exploitation and reducing development time and costs. We extend the power of the CubeFlow framework with a concept we call "print-and-play." "Print-and-play" enriches the CubeFlow concept dramatically. Whereas the CubeFlow system is oriented to the brokering of pre-created offerings from a "plug-and-play" vendor community, the idea of "print-and-play" allows similar offerings to be created from scratch, using web-based plug-ins to capture design requirements, which are communicated to rapid prototyping tools.
Laboratory Plasma Source as an MHD Model for Astrophysical Jets
NASA Technical Reports Server (NTRS)
Mayo, Robert M.
1997-01-01
The significance of the work described herein lies in the demonstration of Magnetized Coaxial Plasma Gun (MCG) devices like CPS-1 to produce energetic laboratory magneto-flows with embedded magnetic fields that can be used as a simulation tool to study the flow interaction dynamics of jet flows, to demonstrate the magnetic acceleration and collimation of flows with primarily toroidal fields, and to study cross-field transport in turbulent accreting flows. Since the plasma produced in MCG devices has magnetic topology and MHD flow regime similarity to stellar and extragalactic jets, we expect that careful investigation of these flows in the laboratory will reveal fundamental physical mechanisms influencing astrophysical flows. Discussion in the next section (sec. 2) focuses on recent results describing collimation, leading flow surface interaction layers, and turbulent accretion. The primary objectives for a new three-year effort would involve the development and deployment of novel electrostatic, magnetic, and visible plasma diagnostic techniques to measure plasma and flow parameters of the CPS-1 device in the flow chamber downstream of the plasma source to study (1) mass ejection, morphology, and the collimation and stability of energetic outflows, (2) the effects of external magnetization on collimation and stability, (3) the interaction of such flows with background neutral gas, the generation of visible emission in such interactions, and the effect of neutral clouds on jet flow dynamics, and (4) the cross-magnetic-field transport of turbulent accreting flows. The applicability of existing laboratory plasma facilities to the study of stellar and extragalactic plasma should be exploited to elucidate underlying physical mechanisms that cannot be ascertained through astrophysical observation, and to provide a baseline for a wide variety of proposed models, MHD and otherwise.
The work proposed herein represents a continued effort in a novel approach relating laboratory experiments to astrophysical jet observations. There is overwhelming similarity among these flows, which has already produced some fascinating results and is expected to continue yielding a high payoff in future flow-similarity studies.
NASA Technical Reports Server (NTRS)
Allan, Brian G.
2000-01-01
A reduced-order modeling approach for the Navier-Stokes equations is presented for the design of a distributed optimal feedback kernel. This approach is based on a Krylov subspace method in which significant modes of the flow are captured in the model. This model is then used in an optimal feedback control design where sensing and actuation are performed over the entire flow field. This control design approach yields an optimal feedback kernel which provides insight into the placement of sensors and actuators in the flow field. As an evaluation of this approach, a two-dimensional shear layer and a driven cavity flow are investigated.
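The reduced-order feedback design described above can be sketched in a few lines. The sketch below is a hypothetical stand-in, not the paper's solver: a small stable tridiagonal operator plays the role of the linearized flow, a Krylov subspace built from the actuator input captures the dominant modes, and an LQR solve on the reduced model yields a gain that is lifted back to the full state as a distributed feedback kernel.

```python
import numpy as np
from scipy.linalg import qr, solve_continuous_are

n = 50
# stable tridiagonal operator standing in for the linearized flow dynamics
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.zeros((n, 1)); B[n // 2, 0] = 1.0      # single actuator at the midpoint

# Krylov subspace span{B, AB, ..., A^(m-1)B}, orthonormalized by QR
m = 8
K = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(m)])
V, _ = qr(K, mode='economic')

Ar, Br = V.T @ A @ V, V.T @ B                 # reduced-order model
P = solve_continuous_are(Ar, Br, np.eye(m), np.eye(1))
Kr = Br.T @ P                                 # reduced LQR gain (R = I)
K_full = Kr @ V.T                             # distributed feedback kernel on the full state
print(K_full.shape)                           # one gain entry per "grid point"
```

The spatial profile of `K_full` is what reveals where sensing and actuation matter most, which is the insight the abstract refers to.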
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimal residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
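The matrix-free flavor of the GEBE/GMRES combination can be illustrated with a toy sketch (my own stand-in, not the authors' finite-element formulation): the implicit system (I - dt*L)u = b for a 1-D diffusion operator is solved by GMRES given only an element-by-element matrix-vector product, so the global matrix is never assembled or factored.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, dt = 100, 0.1

def matvec(u):
    # accumulate element-by-element contributions of a 1-D two-node
    # "element" Laplacian; the global matrix is never assembled
    u = np.asarray(u).ravel()
    Lu = np.zeros_like(u)
    for e in range(n - 1):            # element e connects nodes e and e+1
        d = u[e + 1] - u[e]
        Lu[e] += d
        Lu[e + 1] -= d
    return u - dt * Lu                # action of (I - dt*L)

A = LinearOperator((n, n), matvec=matvec)
b = np.sin(np.linspace(0.0, np.pi, n))
u, info = gmres(A, b, atol=1e-10)     # iterative solve, no factorization
print(info, np.linalg.norm(matvec(u) - b))  # info == 0 means converged
```

This is the essence of the "no direct solution effort" claim: only matvec storage and Krylov vectors are needed, which is what cuts memory on large-scale problems.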
Ore minerals textural characterization by hyperspectral imaging
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Picone, Nicoletta; Serranti, Silvia
2013-02-01
The utilization of hyperspectral detection devices for natural resources mapping/exploitation through remote sensing techniques dates back to the early 1970s. From the first devices, utilizing a one-dimensional profile spectrometer, HyperSpectral Imaging (HSI) devices have been developed. Thus, from specific customized devices originally developed by governmental agencies (e.g. NASA, specialized research labs, etc.), a lot of HSI-based equipment is today available at the commercial level. Parallel to this huge increase in the development and manufacturing of hyperspectral systems for airborne applications, a strong increase has also occurred in the development of HSI-based devices for "ground" utilization, that is, sensing units able to operate inside a laboratory, in a processing plant and/or in an open field. Thanks to this diffusion, more and more applications have been developed and tested in recent years, also in the materials sector. Such an approach, when successful, is quite attractive, being usually reliable, robust and characterised by lower costs compared with those usually associated with commonly applied off- and/or on-line analytical approaches. In this paper such an approach is presented with reference to ore minerals characterization. According to the different phases and stages of ore minerals and products characterization, and starting from the analysis of the detected hyperspectral signatures, it is possible to derive useful information about mineral flow stream properties and their physical-chemical attributes. This aspect can be utilized to define innovative process mineralogy strategies and to implement on-line procedures at the processing level. The present study discusses the effects related to the adoption of different hardware configurations, the utilization of different logics to perform the analysis and the selection of different algorithms according to the different characterization, inspection and quality control actions to apply.
NASA Astrophysics Data System (ADS)
Xie, Z.; Zeng, Y.; Liu, S.; Gao, J.; Jia, B.; Qin, P.
2017-12-01
Both anthropogenic water regulation and groundwater lateral flow essentially affect groundwater table patterns. They are closely related because lateral flow recharges the groundwater depletion cone induced by over-exploitation. Moreover, the movement of frost and thaw fronts (FTFs) affects soil water and thermal characteristics, as well as energy and water exchanges between the land surface and the atmosphere. In this study, schemes describing groundwater lateral flow, human water regulation and the movement of soil freeze-thaw fronts were developed and incorporated into the Community Land Model 4.5. The model was then applied to the Heihe River Basin (HRB), an arid and semiarid region of northwest China. High-resolution (1 km) numerical simulations showed that groundwater lateral flow driven by changes in water heads can essentially change the groundwater table pattern, with a deeper water table appearing in hillslope regions and a shallower water table in valley bottoms and plains. Over the last decade, anthropogenic groundwater exploitation deepened the water table by approximately 2 m in the middle reaches of the HRB and rapidly reduced the terrestrial water storage, while irrigation increased soil moisture by approximately 0.1 m3 m-3. The water stored in the mainstream of the Heihe River was also reduced by human surface water withdrawal. The latent heat flux was increased by 30 W m-2 over the irrigated region, with an identical decrease in sensible heat flux. The simulated groundwater lateral flow was shown to effectively recharge the groundwater depletion cone caused by over-exploitation, with a higher offset rate in plains than in mountainous regions. In addition, the simulated FTF depths compared well with the observed data both at the D66 station (permafrost) and the Hulugou station (seasonally frozen ground).
Over the HRB, the upstream area is a permafrost region with a maximum thawed depth of 2.5 m, and the lower region is seasonally frozen ground with a maximum frozen depth of 3 m.
Real-Time MENTAT programming language and architecture
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.
1989-01-01
Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
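As a minimal illustration of the diagonally implicit Runge-Kutta machinery the solvers above are built around (applied here to a scalar stiff ODE rather than the pressure-velocity system, with a standard textbook coefficient choice, not one taken from the paper), here is Alexander's two-stage, second-order, L-stable SDIRK scheme:

```python
import numpy as np

gamma = 1.0 - 1.0 / np.sqrt(2.0)    # coefficient of the 2-stage L-stable SDIRK
lam, dt, nsteps = -50.0, 0.01, 100  # stiff test ODE du/dt = lam*u, dt*|lam| = 0.5

def dirk_step(u):
    # stage 1: k1 = f(u + dt*gamma*k1); f is linear, so the stage solve is explicit
    k1 = lam * u / (1.0 - dt * gamma * lam)
    # stage 2: k2 = f(u + dt*(1-gamma)*k1 + dt*gamma*k2)
    k2 = lam * (u + dt * (1.0 - gamma) * k1) / (1.0 - dt * gamma * lam)
    # combine stages with weights b = (1-gamma, gamma)
    return u + dt * ((1.0 - gamma) * k1 + gamma * k2)

u = 1.0
for _ in range(nsteps):
    u = dirk_step(u)
print(u, np.exp(lam * dt * nsteps))  # both decay to ~1e-22: stable despite stiffness
```

In the solvers described in the abstract, each implicit stage solve is replaced by an iterated PISO-like pressure-velocity solve rather than the closed-form division used here.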
Seo, Jung Hee; Mittal, Rajat
2010-01-01
A new sharp-interface immersed boundary method based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method applies the boundary condition on the immersed boundary to a high-order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
NASA Astrophysics Data System (ADS)
Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç
2017-10-01
This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate in order to enhance the performance of the whole flow network while avoiding the use of a coordination layer. Instead, controllers use both the monolithic model of the network and the given global cost function to optimise their local control inputs, taking into account the effect of their decisions on the remaining subsystems composing the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resulting strategy is tested, and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
Numerical Modeling of Three-Dimensional Confined Flows
NASA Technical Reports Server (NTRS)
Greywall, M. S.
1981-01-01
A three-dimensional confined-flow model is presented. The flow field is computed by calculating velocity and enthalpy along a set of streamlines. The finite difference equations are obtained by applying conservation principles to streamtubes constructed around the chosen streamlines. With appropriate substitutions for the body force terms, the approach computes three-dimensional magnetohydrodynamic channel flows. A listing of a computer code based on this approach is presented in FORTRAN IV. The code computes three-dimensional compressible viscous flow through a rectangular duct, with the duct cross section specified along the axis.
Robust-mode analysis of hydrodynamic flows
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.
2017-04-01
The emergence of techniques to extract high-frequency, high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications are provided: (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows.
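Koopman decomposition is commonly approximated in practice by dynamic mode decomposition (DMD). The sketch below (synthetic traveling-wave data of my own making, not the paper's robust-mode algorithm) shows the core computation: a best-fit linear map between successive snapshots, whose eigenvalues separate persistent oscillatory constituents (magnitude near 1) from decaying noise-like ones.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 64, 200
t = np.linspace(0.0, 8 * np.pi, nt)            # time samples
x = np.linspace(0.0, 1.0, nx)[:, None]         # spatial grid (column vector)
# synthetic "flow": a traveling wave (spatial rank 2) plus weak noise
data = (np.cos(2.5 * np.pi * x) * np.cos(5.0 * t)
        + np.sin(2.5 * np.pi * x) * np.sin(5.0 * t)
        + 0.01 * rng.standard_normal((nx, nt)))

X, Y = data[:, :-1], data[:, 1:]               # snapshot pairs x_k -> x_{k+1}
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                          # truncation rank (signal subspace)
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = (U.T @ Y @ Vh.T) * (1.0 / s)          # best-fit linear map in POD coordinates
lam, W = np.linalg.eig(Atilde)                 # Ritz values approximate Koopman eigenvalues
modes = (Y @ Vh.T) * (1.0 / s) @ W             # corresponding DMD modes
print(np.abs(lam))                             # oscillatory pair: magnitudes near 1
```

Robust-mode analysis adds criteria for repeatability across realizations on top of this basic decomposition.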
Marine current energy conversion: the dawn of a new era in electricity production.
Bahaj, AbuBakr S
2013-02-28
Marine currents can carry large amounts of energy, largely driven by the tides, which are a consequence of the gravitational effects of the planetary motion of the Earth, the Moon and the Sun. Augmented flow velocities can be found where the underwater topography (bathymetry) in straits between islands and the mainland or in shallows around headlands plays a major role in enhancing the flow velocities, resulting in appreciable kinetic energy. At some of these sites where practical flows are more than 1 m s-1, marine current energy conversion is considered to be economically viable. This study describes the salient issues related to the exploitation of marine currents for electricity production, resource assessment, the conversion technologies and the status of leading projects in the field. This study also summarizes important issues related to site development and some of the approaches currently being undertaken to inform device and array development. This study concludes that, given the highlighted commitments to establish favourable regulatory and incentive regimes as well as the aspiration for energy independence and combating climate change, the progress to multi-megawatt arrays will be much faster than that achieved for wind energy development.
Current approaches to exploit actinomycetes as a source of novel natural products.
Genilloud, Olga; González, Ignacio; Salazar, Oscar; Martín, Jesus; Tormo, José Rubén; Vicente, Francisca
2011-03-01
For decades, microbial natural products have been one of the major sources of novel drugs for pharmaceutical companies, and today all evidence suggests that novel molecules with potential therapeutic applications are still waiting to be discovered from these natural sources, especially from actinomycetes. Any appropriate exploitation of the chemical diversity of these microbial sources relies on proper understanding of their biological diversity and other related key factors that maximize the possibility of successful identification of novel molecules. Without doubt, the discovery of platensimycin has shown that microbial natural products can continue to deliver novel scaffolds if appropriate tools are put in place to reveal them in a cost-effective manner. Whereas today innovative technologies involving exploitation of uncultivated environmental diversity, together with chemical biology and in silico approaches, are seeing rapid development in natural products research, maximization of the chances of exploiting chemical diversity from microbial collections is still essential for novel drug discovery. This work provides an overview of the integrated approaches developed at the former Basic Research Center of Merck Sharp and Dohme in Spain to exploit the diversity and biosynthetic potential of actinomycetes, and includes some examples of those that were successfully applied to the discovery of novel antibiotics.
NASA Astrophysics Data System (ADS)
Li, Minghui; Yin, Guangzhi; Xu, Jiang; Li, Wenpu; Song, Zhenlong; Jiang, Changbao
2016-12-01
Fluid-solid coupling investigations of the geological storage of CO2 and of efficient unconventional oil and natural gas exploitation are mostly conducted under conventional triaxial stress conditions (σ2 = σ3), ignoring the effects of σ2 on the geomechanical properties and permeability of rocks (shale, coal and sandstone). A novel multi-functional true triaxial geophysical (TTG) apparatus was designed, fabricated, calibrated and tested to simulate true triaxial stress (σ1 > σ2 > σ3) conditions and to reveal the geomechanical properties and permeability evolution of rocks. The apparatus was developed with the capacity to carry out geomechanical and fluid flow experiments at high three-dimensional loading forces and injection pressures under true triaxial stress conditions. The control and measurement of the fluid flow, with effective sealing of the rock specimen corners, were achieved using a specially designed internally sealed fluid flow system. To validate that the apparatus works properly and to recognize the effects of each principal stress on rock deformation and permeability, stress-strain and permeability experiments and a hydraulic fracturing simulation experiment on shale specimens were conducted under true triaxial stress conditions using the TTG apparatus. Results show that the apparatus has advantages in recognizing the effects of σ2 on the geomechanical properties and permeability of rocks, and demonstrate the effectiveness and reliability of the novel TTG apparatus. The apparatus provides a new method of studying the geomechanical properties and permeability evolution of rocks under true triaxial stress conditions, promoting further investigations of the geological storage of CO2 and efficient unconventional oil and gas exploitation.
Cevenini, Luca; Calabretta, Maria Maddalena; Lopreside, Antonia; Tarantino, Giuseppe; Tassoni, Annalisa; Ferri, Maura; Roda, Aldo; Michelini, Elisa
2016-12-01
The availability of smartphones with high-performance digital image sensors and processing power has completely reshaped the landscape of point-of-need analysis. Thanks to the high maturity level of reporter gene technology and the availability of several bioluminescent proteins with improved features, we were able to develop a bioluminescence smartphone-based biosensing platform exploiting the highly sensitive NanoLuc luciferase as reporter. A 3D-printed smartphone-integrated cell biosensor based on genetically engineered Hek293T cells was developed. Quantitative assessment of (anti)-inflammatory activity and toxicity of liquid samples was performed with a simple and rapid add-and-measure procedure. White grape pomace extracts, known to contain several bioactive compounds, were analyzed, confirming the suitability of the smartphone biosensing platform for the analysis of untreated complex biological matrices. Such an approach could meet the needs of small and medium enterprises lacking fully equipped laboratories for first-level safety tests and rapid screening of new bioactive products. Graphical abstract: smartphone-based bioluminescence cell biosensor.
Prajapat, Amrutlal L; Gogate, Parag R
2016-09-01
Depolymerization of polyacrylic acid (PAA) as its sodium salt has been investigated using ultrasonic and solar irradiation, with process intensification studies based on combination with hydrogen peroxide (H2O2) and ozone (O3). The effects of solar intensity, ozone flow and ultrasonic power dissipation on the extent of viscosity reduction have been investigated for the individual treatment approaches. The combined approaches US + solar, solar + O3, solar + H2O2, US + H2O2 and US + O3 were subsequently investigated under optimum conditions and established to be more efficient than the individual approaches. The approach based on US (60 W) + solar + H2O2 (0.01%) resulted in the maximum extent of viscosity reduction, 98.97% in 35 min, whereas solar + H2O2 (0.01%), US (60 W), H2O2 (0.3%) and solar irradiation alone resulted in about 98.08%, 90.13%, 8.91% and 90.77% intrinsic viscosity reduction in 60 min, respectively. The approach of US (60 W) + solar + ozone (400 mg/h flow rate) resulted in a viscosity reduction of 99.47% in 35 min, whereas ozone alone (400 mg/h), ozone (400 mg/h) + US (60 W) and ozone (400 mg/h) + solar resulted in 69.04%, 98.97% and 98.51% reduction in 60 min, 55 min and 55 min, respectively. The chemical identity of the polymer treated using the combined approaches was also characterized using FTIR (Fourier transform infrared) spectra, and it was established that no significant structural changes occurred during the treatment. Overall, the combination technique based on US and solar irradiation in the presence of hydrogen peroxide is the best approach for the depolymerization of PAA solution. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ruggieri, Rosario; Forti, Paolo; Antoci, Maria Lucia; De Waele, Jo
2017-03-01
The area around Ragusa in Sicily is well known for the exploration of petroleum deposits hosted in Mesozoic carbonate rocks. These reservoirs are overlain by less permeable rocks, whereas the surface geology is characterized by outcrops of Oligo-Miocene carbonate units hosting important aquifers. Some of the karst springs of the area are used as drinking water supplies, and therefore these vulnerable aquifers should be monitored and protected adequately. From the early afternoon (14:00) of 27 May until the late evening (19:30) of 28 May 2011, during the construction of an exploitation borehole (Tresauro 2), more than 1000 m3 of drilling fluids were lost into an unknown karst void. Two days later, from 06:30 on 30 May, water flowing from Paradiso Spring, lying some 13.7 km SW of the borehole and 378 m lower, normally used as a domestic water supply, was so intensely coloured that it was unfit for drinking. Bulk chemical analyses carried out on the water showed a composition very similar to that of the drilling fluids lost at the Tresauro borehole, confirming a hydrological connection. Estimates indicate that the first signs of the drilling fluids took about 59 h to travel from the injection point to the spring, corresponding to a mean velocity of ∼230 m/h. That Paradiso Spring is recharged by a well-developed underground drainage system is also confirmed by the marked flow rate changes measured at the spring, ranging from a base flow of around 10-15 l/s to flood peaks of 2-3 m3/s. Reflecting the source and nature of the initial contamination, the pollution lasted for just a few days, and the water returned to acceptable drinking-water standards relatively quickly. However, pollution related to heavy-mineral fines continues to be registered during flooding of the spring, when the aqueducts are normally shut down because of the high turbidity values.
This pollution event offers an instructive example of how hydrocarbon exploitation in intensely karstified areas, where natural springs provide domestic water supplies, should be controlled effectively to prevent such disasters from occurring. The incident is also a useful example of how such "accidental" tracer tests can identify rapid karstic flowpaths over long distances.
Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.
Fang, Cheng; Xiao, Zhiyan
2016-01-01
Receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods and two alternative approaches have been exploited. One involves computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other concerns the application of various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for the conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR and, in particular, covers its application in kinase studies.
Non-genetic engineering of cells for drug delivery and cell-based therapy.
Wang, Qun; Cheng, Hao; Peng, Haisheng; Zhou, Hao; Li, Peter Y; Langer, Robert
2015-08-30
Cell-based therapy is a promising modality to address many unmet medical needs. In addition to genetic engineering, material-based, biochemical, and physical science-based approaches have emerged as novel approaches to modify cells. Non-genetic engineering of cells has been applied in delivering therapeutics to tissues, homing of cells to the bone marrow or inflammatory tissues, cancer imaging, immunotherapy, and remotely controlling cellular functions. This new strategy has unique advantages in disease therapy and is complementary to existing gene-based cell engineering approaches. A better understanding of cellular systems and different engineering methods will allow us to better exploit engineered cells in biomedicine. Here, we review non-genetic cell engineering techniques and applications of engineered cells, discuss the pros and cons of different methods, and provide our perspectives on future research directions. Copyright © 2014 Elsevier B.V. All rights reserved.
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from the particle filter to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
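For reference, the baseline the authors improve upon is the bootstrap particle filter, in which particles are proposed from the motion model alone and then weighted by the observation likelihood. A minimal 1-D version (toy motion and measurement models of my own choosing, not the paper's fish tracker) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps = 500, 30
true_pos = 0.0
particles = rng.normal(0.0, 1.0, n_particles)    # initial spread of hypotheses

for _ in range(n_steps):
    true_pos += 0.5                               # target drifts right
    # bootstrap proposal: sample from the motion model (the prior)
    particles += 0.5 + rng.normal(0.0, 0.3, n_particles)
    # correction: weight by the likelihood of a noisy position observation
    z = true_pos + rng.normal(0.0, 0.3)
    w = np.exp(-0.5 * ((z - particles) / 0.3) ** 2)
    w /= w.sum()
    # systematic resampling
    pts = (np.arange(n_particles) + rng.random()) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(w), pts), n_particles - 1)
    particles = particles[idx]

print(abs(particles.mean() - true_pos))           # small tracking error
```

The paper's contribution replaces the prior-only proposal with one steered toward motion-segmented blobs, which keeps particles near plausible target regions during occlusions.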
Mesh-type acoustic vector sensor
NASA Astrophysics Data System (ADS)
Zalalutdinov, M. K.; Photiadis, D. M.; Szymczak, W. G.; McMahon, J. W.; Bucaro, J. A.; Houston, B. H.
2017-07-01
Motivated by the predictions of a theoretical model developed to describe the acoustic flow force exerted on closely spaced nano-fibers in a viscous medium, we have demonstrated a novel concept for a particle velocity-based directional acoustic sensor. The central element of the concept exploits the acoustically induced normal displacement of a fine mesh as a measure of the collinear projection of the particle velocity in the sound wave. The key observations are (i) the acoustically induced flow force on an individual fiber within the mesh is nearly independent of the fiber diameter and (ii) the mesh-flow interaction can be well-described theoretically by a nearest neighbor coupling approximation. Scaling arguments based on these two observations indicate that the refinement of the mesh down to the nanoscale leads to significant improvements in performance. The combination of the two dimensional nature of the mesh together with the nanoscale dimensions provides a dramatic gain in the total length of fiber exposed to the flow, leading to a sensitivity enhancement by orders of magnitude. We describe the fabrication of a prototype mesh sensor equipped with optical readout. Preliminary measurements carried out over a considerable bandwidth together with the results of numerical simulations are in good agreement with the theory, thus providing a proof of concept.
Tanev, Stoyan; Sun, Wenbo; Pond, James; Tuchin, Valery V.; Zharov, Vladimir P.
2010-01-01
The formulation of the Finite-Difference Time-Domain (FDTD) approach is presented in the framework of its potential applications to in vivo flow cytometry based on light scattering. The consideration is focused on comparison of light scattering by a single biological cell alone in controlled refractive index matching conditions and by cells labeled by gold nanoparticles. The optical schematics including phase contrast (OPCM) microscopy as a prospective modality for in vivo flow cytometry is also analyzed. The validation of the FDTD approach for the simulation of flow cytometry may open a new avenue in the development of advanced cytometric techniques based on scattering effects from nanoscale targets. PMID:19670359
NASA Astrophysics Data System (ADS)
Li, Long; Solana, Carmen; Canters, Frank; Kervyn, Matthieu
2017-10-01
Mapping lava flows using satellite images is an important application of remote sensing in volcanology. Several volcanoes have been mapped through remote sensing using a wide range of data, from optical to thermal infrared and radar images, using techniques such as manual mapping, supervised/unsupervised classification, and elevation subtraction. So far, spectral-based mapping applications have mainly focused on the use of traditional pixel-based classifiers, without much investigation into the added value of object-based approaches or the advantages of using machine learning algorithms. In this study, Nyamuragira, characterized by a series of >20 overlapping lava flows erupted over the last century, was used as a case study. The random forest classifier was tested to map lava flows based on pixels and objects. Image classification was conducted for the 20 individual flows and for 8 groups of flows of similar age using a Landsat 8 image and a DEM of the volcano, both at 30-meter spatial resolution. Results show that object-based classification produces maps with continuous and homogeneous lava surfaces, in agreement with the physical characteristics of lava flows, while lava flows mapped through pixel-based classification are heterogeneous and fragmented, with considerable "salt-and-pepper" noise. In terms of accuracy, both pixel-based and object-based classification perform well, but the former results in higher accuracies than the latter, except for mapping lava flow age groups without using topographic features. It is concluded that despite spectral similarity, lava flows of contrasting age can be well discriminated and mapped by means of image classification. The classification approach demonstrated in this study requires only easily accessible image data and can be applied to other volcanoes as well, provided there is sufficient information to calibrate the mapping.
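The pixel- versus object-based distinction can be sketched with scikit-learn's random forest on synthetic two-band data. This is an illustrative toy, not the paper's Landsat/DEM dataset: "objects" are crudely modeled as fixed blocks of ten pixels whose features are averaged before classification, standing in for image segments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 500)                  # two flow-age classes
# two noisy "spectral bands" whose means differ by class
pixels = rng.normal(loc=labels[:, None].astype(float), scale=1.0, size=(1000, 2))

# pixel-based: every pixel is classified independently
rf_pix = RandomForestClassifier(n_estimators=50, random_state=0)
rf_pix.fit(pixels[::2], labels[::2])
pix_acc = rf_pix.score(pixels[1::2], labels[1::2])

# object-based: pixel features are averaged over fixed "segments" of 10 first
seg_feats = pixels.reshape(100, 10, 2).mean(axis=1)
seg_labels = labels.reshape(100, 10)[:, 0]
rf_obj = RandomForestClassifier(n_estimators=50, random_state=0)
rf_obj.fit(seg_feats[::2], seg_labels[::2])
obj_acc = rf_obj.score(seg_feats[1::2], seg_labels[1::2])

print(pix_acc, obj_acc)   # both comfortably above the 0.5 chance level
```

On real lava surfaces the trade-off between the two pipelines depends on segment quality and within-flow variability, which is exactly what the study evaluates.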
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2013-01-01
The available curated data lag behind current biological knowledge contained in the literature. Text mining can assist biologists and curators to locate and access this knowledge, for instance by characterizing the functional profile of publications. Gene Ontology (GO) category assignment in free text already supports various applications, such as powering ontology-based search engines, finding curation-relevant articles (triage) or helping the curator to identify and encode functions. Popular text mining tools for GO classification are based on so-called thesaurus-based (or dictionary-based) approaches, which exploit similarities between the input text and GO terms themselves. But their effectiveness remains limited owing to the complex nature of GO terms, which rarely occur in text. In contrast, machine learning approaches exploit similarities between the input text and already curated instances contained in a knowledge base to infer a functional profile. GO Annotations (GOA) and MEDLINE make it possible to exploit a growing amount of curated abstracts (97,000 in November 2012) for populating this knowledge base. Our study compares a state-of-the-art thesaurus-based system with a machine learning system (based on a k-Nearest Neighbours algorithm) for the task of proposing a functional profile for unseen MEDLINE abstracts, and shows how resources and performances have evolved. Systems are evaluated on their ability to propose for a given abstract the GO terms (2.8 on average) used for curation in GOA. We show that since 2006, although a massive effort was put into adding synonyms in GO (+300%), the effectiveness of our thesaurus-based system has remained roughly constant, moving from 0.28 to 0.31 for Recall at 20 (R20). In contrast, thanks to the growth of its knowledge base, our machine learning system has steadily improved, from 0.38 for R20 in 2006 to 0.56 in 2012. 
Integrated into semi-automatic workflows or fully automatic pipelines, such systems are increasingly effective at assisting biologists. Database URL: http://eagl.unige.ch/GOCat/
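The k-Nearest Neighbours idea above can be sketched in miniature: GO terms curated for the most similar abstracts are proposed, weighted by similarity, for a new abstract. The corpus, GO identifiers, and query below are invented for illustration and are not GOA data.

```python
# Toy sketch of k-NN GO classification: similarity-weighted votes over the
# GO terms of the k nearest curated abstracts (all data hypothetical).
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated = [
    ("kinase phosphorylates substrate in signal transduction", {"GO:0016301"}),
    ("dna repair after ultraviolet damage", {"GO:0006281"}),
    ("protein kinase activity regulates cell signaling", {"GO:0016301", "GO:0007165"}),
]
query = "a novel kinase involved in signaling"

vec = TfidfVectorizer()
X = vec.fit_transform([text for text, _ in curated])
sims = cosine_similarity(vec.transform([query]), X).ravel()

k = 2
votes = Counter()
for i in sims.argsort()[::-1][:k]:       # k most similar curated abstracts
    for go in curated[i][1]:
        votes[go] += sims[i]             # similarity-weighted vote
print(votes.most_common())
```

The same knowledge-base lookup is what lets such a system keep improving as GOA grows, without any change to the algorithm itself.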
Co-state initialization for the minimum-time low-thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya
2017-05-01
This paper presents an approach for co-state initialization, a critical step in solving minimum-time low-thrust trajectory optimization problems with indirect optimal control methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems that are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary-value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions, from Earth to Mars and from Earth to asteroid Dionysus, is compared against three other approaches which, respectively, exploit random initialization of co-states, the adjoint-control transformation and a standard genetic algorithm. The results indicate that with the proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.
Dynamics and Control of Newtonian and Viscoelastic Fluids
NASA Astrophysics Data System (ADS)
Lieu, Binh K.
Transition to turbulence represents one of the most intriguing natural phenomena. Flows that are smooth and ordered may become complex and disordered as the flow strength increases. This process is known as transition to turbulence. In this dissertation, we develop theoretical and computational tools for the analysis and control of transition and turbulence in shear flows of Newtonian fluids, such as air and water, and of complex viscoelastic fluids, such as polymers and molten plastics. Part I of the dissertation is devoted to the design and verification of sensor-free and feedback-based strategies for controlling the onset of turbulence in channel flows of Newtonian fluids. We use high-fidelity simulations of the nonlinear flow dynamics to demonstrate the effectiveness of our model-based approach to flow control design. In Part II, we utilize systems-theoretic tools to study transition and turbulence in channel flows of viscoelastic fluids. For flows with strong elastic forces, we demonstrate that flow fluctuations can experience significant amplification even in the absence of inertia. We use our theoretical developments to uncover the underlying physical mechanism that leads to this high amplification. For turbulent flows with polymer additives, we develop a model-based method for analyzing the influence of polymers on drag reduction. We demonstrate that our approach predicts drag-reducing trends observed in full-scale numerical simulations. In Part III, we develop a mathematical framework and computational tools for calculating frequency responses of spatially distributed systems. Using state-of-the-art automatic spectral collocation techniques and a new integral formulation, we show that our approach yields more reliable and accurate solutions than currently available methods.
A metal-free organic-inorganic aqueous flow battery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huskinson, B; Marshak, MP; Suh, C
2014-01-08
As the fraction of electricity generation from intermittent renewable sources-such as solar or wind-grows, the ability to store large amounts of electrical energy is of increasing importance. Solid-electrode batteries maintain discharge at peak power for far too short a time to fully regulate wind or solar power output(1,2). In contrast, flow batteries can independently scale the power (electrode area) and energy (arbitrarily large storage volume) components of the system by maintaining all of the electroactive species in fluid form(3-5). Wide-scale utilization of flow batteries is, however, limited by the abundance and cost of these materials, particularly those using redox-active metals and precious-metal electrocatalysts(6,7). Here we describe a class of energy storage materials that exploits the favourable chemical and electrochemical properties of a family of molecules known as quinones. The example we demonstrate is a metal-free flow battery based on the redox chemistry of 9,10-anthraquinone-2,7-disulphonic acid (AQDS). AQDS undergoes extremely rapid and reversible two-electron two-proton reduction on a glassy carbon electrode in sulphuric acid. An aqueous flow battery with inexpensive carbon electrodes, combining the quinone/hydroquinone couple with the Br2/Br- redox couple, yields a peak galvanic power density exceeding 0.6 W cm(-2) at 1.3 A cm(-2). Cycling of this quinone-bromide flow battery showed >99 per cent storage capacity retention per cycle. The organic anthraquinone species can be synthesized from inexpensive commodity chemicals(8). This organic approach permits tuning of important properties such as the reduction potential and solubility by adding functional groups: for example, we demonstrate that the addition of two hydroxy groups to AQDS increases the open circuit potential of the cell by 11% and we describe a pathway for further increases in cell voltage. 
The use of π-aromatic redox-active organic molecules instead of redox-active metals represents a new and promising direction for realizing massive electrical energy storage at greatly reduced cost.
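As a quick consistency check on the quoted peak-power figures: areal power density is voltage times areal current density, so the reported numbers fix the cell operating voltage at the peak-power point.

```python
# Back-of-envelope check: P = V * j, so the quoted 0.6 W cm^-2 at
# 1.3 A cm^-2 implies the cell voltage at the peak-power point.
P = 0.6   # W cm^-2, peak galvanic power density (quoted)
j = 1.3   # A cm^-2, current density at peak power (quoted)
V = P / j
print(f"cell voltage at peak power ~ {V:.2f} V")
```

This is well below the open-circuit potential, as expected for a cell operated at its maximum-power point.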
Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix
Zhang, Pengwei; Hu, Liming; Meegoda, Jay N.
2017-01-01
Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size and gas type. The pore network model is a practical way to explain the macro flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes typical bimodal pore size distribution, anisotropy and low connectivity of the pore structure in shale. The apparent gas permeability of shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure, and hence the apparent permeability is not a unique value during the shale gas exploitation, and simulations suggested that a constant permeability for continuum-scale simulation is not accurate. Hence, the reservoir pressures of different shale gas exploitations should be considered. In addition, a sensitivity analysis was also performed to determine the contributions to apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of connectivity of nano-scale pores on shale gas flux was analyzed. These results would provide an insight into understanding nano/micro scale flows of shale gas in the shale matrix. PMID:28772465
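The reported pressure dependence of apparent permeability can be illustrated with a simple Klinkenberg-type slip correction; this closed form and both parameter values are assumptions for illustration only, since the study itself computes apparent permeability from a 3-D pore-network model.

```python
# Illustration of why apparent permeability rises as reservoir pressure
# depletes, using a Klinkenberg-type slip correction:
#   k_app = k_inf * (1 + b / p)
# (hypothetical closed form and parameters, not the paper's model).
k_inf = 1e-19                      # intrinsic permeability, m^2 (hypothetical)
b = 2.0e6                          # slip factor, Pa (hypothetical)
ratios = []
for p_MPa in (30.0, 20.0, 10.0, 5.0):          # depletion: pressure drops
    p = p_MPa * 1e6
    ratios.append(1.0 + b / p)
    print(f"p = {p_MPa:4.0f} MPa  ->  k_app / k_inf = {ratios[-1]:.3f}")
```

The monotonic rise of k_app/k_inf with depletion is the behavior that makes a single constant permeability inadequate for continuum-scale simulation.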
A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.
van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum
2015-09-01
QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
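The regularization idea can be sketched in a heavily simplified form: gradient descent on a quadratic (Tikhonov-type) smoothness penalty, used here for denoising only. The paper itself treats blind deblurring with TV-type regularizers and exploits the known QR finder patterns, which this toy omits; all sizes and weights below are illustrative.

```python
# Heavily simplified sketch of regularized restoration: gradient descent on
#   F(x) = ||x - y||^2 + lam * ||grad x||^2
# (quadratic penalty, denoising only; not the paper's blind TV deblurring).
import numpy as np

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                          # toy "module" block
y = clean + 0.3 * rng.normal(size=clean.shape)   # noisy observation

def laplacian(x):
    """5-point Laplacian with periodic boundaries (gradient of the penalty)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

x, lam, step = y.copy(), 0.5, 0.1
for _ in range(300):
    x -= step * (2.0 * (x - y) - 2.0 * lam * laplacian(x))

err_noisy = float(np.mean((y - clean) ** 2))
err_denoised = float(np.mean((x - clean) ** 2))
print(f"MSE noisy: {err_noisy:.4f}  denoised: {err_denoised:.4f}")
```

In the full blind-deblurring problem, the known finder patterns additionally constrain the unknown blur kernel, which is what makes the blind problem tractable.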
Detailed Maintenance Planning for Military Systems with Random Lead Times and Cannibalization
2014-12-01
with respect to maintenance systems. Making the best possible decisions here means striking a balance between operating costs and the...Multistage Stochastic Programming: A Scenario Tree Based Approach to Planning under Uncertainty, in Sucar, L. E., Morales, E. F., and Hoey, J
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
models exploit the immediate past only. To extract information from further back in the document's history, I use trigger pairs as the basic information...2.2 Context-Free Estimation (Unigram)...2.3 Short-Term History (Conventional N-gram)...2.4 Short-Term Class History (Class-Based N-gram)...2.5 Intermediate Distance
Isolation of Circulating Tumor Cells by Dielectrophoresis
Gascoyne, Peter R. C.; Shim, Sangjo
2014-01-01
Dielectrophoresis (DEP) is an electrokinetic method that allows the intrinsic dielectric properties of suspended cells to be exploited for discrimination and separation. It has emerged as a promising method for isolating circulating tumor cells (CTCs) from blood. DEP isolation of CTCs is independent of cell surface markers. Furthermore, isolated CTCs are viable and can be maintained in culture, suggesting that DEP methods should be more generally applicable than antibody-based approaches. This article reviews and synthesizes, for both oncologists and biomedical engineers interested in CTC isolation, the pertinent characteristics of DEP and CTCs, with the aim of promoting an understanding of the factors involved in realizing DEP-based instruments having both sufficient discrimination and throughput to allow routine analysis of CTCs in clinical practice. The article brings together: (a) the principles of DEP; (b) the biological basis for the dielectric differences between CTCs and blood cells; (c) why such differences are expected to be present for all types of tumors; and (d) instrumentation requirements to process 10 mL blood specimens in less than 1 h to enable routine clinical analysis. The force-equilibrium method of dielectrophoretic field-flow fractionation (DEP-FFF) is shown to offer higher discrimination and throughput than earlier DEP trapping methods and to be applicable to clinical studies. PMID:24662940
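The frequency dependence that DEP exploits can be illustrated with the Clausius-Mossotti factor of a homogeneous sphere; real cells require shelled models, and the particle and medium values below are illustrative assumptions, not measured CTC properties.

```python
# Clausius-Mossotti factor of a homogeneous sphere:
#   f_CM = (eps_p* - eps_m*) / (eps_p* + 2*eps_m*),  eps* = eps - 1j*sigma/w.
# The DEP force follows Re(f_CM): attraction to field maxima when positive,
# repulsion when negative. All material values are hypothetical.
import numpy as np

eps0 = 8.854e-12
eps_p, sig_p = 60.0 * eps0, 0.5      # "cell-like" sphere (hypothetical)
eps_m, sig_m = 78.0 * eps0, 0.03     # low-conductivity medium (hypothetical)

def re_fcm(f_hz):
    w = 2.0 * np.pi * f_hz
    ep = eps_p - 1j * sig_p / w
    em = eps_m - 1j * sig_m / w
    return float(((ep - em) / (ep + 2.0 * em)).real)

for f in (1e4, 1e6, 1e8, 1e9):
    print(f"{f:8.0e} Hz   Re(f_CM) = {re_fcm(f):+.3f}")
```

The sign change of Re(f_CM) with frequency is what gives each cell type a characteristic crossover frequency, and discrimination in DEP-FFF rests on such differences between CTCs and blood cells.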
The Paracel Islands and U.S. Interests and Approaches in the South China Sea
2014-06-01
Billiton, and Hong Kong-owned and Canada-based Husky Energy. Initial Chinese offshore exploitation occurred in the nearby Pearl River Mouth Basin and...possibility for marine-based tourism in the region, and in April 2013, China authorized tourists to visit the Paracels. China, which currently controls...the entire Paracel archipelago, is expanding tourism, fishing, and the military garrison on Woody Island, the archipelago's largest feature, as the
Three-dimensional propagation in near-field tomographic X-ray phase retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhlandt, Aike, E-mail: aruhlan@gwdg.de; Salditt, Tim
This paper presents an extension of phase retrieval algorithms for near-field X-ray (propagation) imaging to three dimensions, enhancing the quality of the reconstruction by exploiting previously unused three-dimensional consistency constraints. The approach is based on a novel three-dimensional propagator and is derived for the case of optically weak objects. It can be easily implemented in current phase retrieval architectures, is computationally efficient and reduces the need for restrictive prior assumptions, resulting in superior reconstruction quality.
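For orientation, the single-distance near-field propagation that the three-dimensional propagator generalizes can be sketched with the standard Fresnel transfer function; the grid, wavelength, and distance below are arbitrary illustrative values.

```python
# Single-distance near-field (Fresnel) propagation via the angular-spectrum
# transfer function, applied to an optically weak (pure-phase) object.
# Grid, wavelength and propagation distance are illustrative only.
import numpy as np

n, dx, lam, z = 256, 1e-7, 1e-10, 1e-2   # pixels, pixel (m), wavelength (m), distance (m)
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
phase = 0.1 * np.exp(-(X**2 + Y**2) / (2.0 * (2e-6) ** 2))
field = np.exp(1j * phase)               # weak phase object, |field| = 1

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
kernel = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))   # Fresnel transfer function
prop = np.fft.ifft2(np.fft.fft2(field) * kernel)

intensity = np.abs(prop) ** 2            # measurable near-field hologram
print(f"mean intensity: {intensity.mean():.6f}  contrast (std): {intensity.std():.4f}")
```

Propagation converts invisible phase into measurable intensity contrast while conserving total intensity; phase retrieval inverts this map, and the paper's contribution is to couple such inversions consistently across all tomographic projections.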
Hidden symmetry and nonlinear paraxial atom optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Impens, Francois
2009-12-15
A hidden symmetry of the nonlinear wave equation is exploited to analyze the propagation of paraxial and uniform atom-laser beams in time-independent and quadratic transverse potentials with cylindrical symmetry. The quality factor and the paraxial ABCD formalism are generalized to account exactly for mean-field interaction effects in such beams. Using an approach based on moments, these theoretical tools provide a simple yet exact picture of the interacting beam profile evolution. Guided atom laser experiments are discussed. This treatment addresses simultaneously optical and atomic beams in a unified manner, exploiting the formal analogy between nonlinear optics, nonlinear paraxial atom optics, and the physics of two-dimensional Bose-Einstein condensates.
A stochastic two-scale model for pressure-driven flow between rough surfaces
Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas
2016-01-01
Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous, and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for the simulation of very small flows. In order to study small flows, a model allowing feasibly sized local domains at really small flow rates is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers, the Lagrangian approach provides a correction of the basic Eulerian solution, while the Eulerian flow in turn integrates the Lagrangian state vector in time. A comparison of coarse- and fine-grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.
Implementation of Flow Tripping Capability in the USM3D Unstructured Flow Solver
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Abdol-Hamid, Khaled S.; Campbell, Richard L.; Frink, Neal T.
2006-01-01
A flow tripping capability is added to an established NASA tetrahedral unstructured parallel Navier-Stokes flow solver, USM3D. The capability is based on prescribing an appropriate profile of turbulence model variables to energize the boundary layer in a plane normal to a specified trip region on the body surface. We demonstrate this approach using the k-epsilon two-equation turbulence model of USM3D. Modification to the solution procedure primarily consists of developing a data structure to identify all unstructured tetrahedral grid cells located in the plane normal to a specified surface trip region and computing a function based on the mean flow solution to specify the modified profile of the turbulence model variables. We leverage this data structure and also show an adjunct approach that is based on enforcing a laminar flow condition on the otherwise fully turbulent flow solution in a user-specified region. The latter approach is applied for the solutions obtained using other one- and two-equation turbulence models of USM3D. A key ingredient of the present capability is the use of a graphical user-interface tool, PREDISC, to define a trip region on the body surface in an existing grid. Verification of the present modifications is demonstrated on three cases, namely, a flat plate, the RAE2822 airfoil, and the DLR F6 wing-fuselage configuration.
Deep Question Answering for protein annotation
Gobeill, Julien; Gaudinat, Arnaud; Pasche, Emilie; Vishnyakova, Dina; Gaudet, Pascale; Bairoch, Amos; Ruch, Patrick
2015-01-01
Biomedical professionals have access to a huge amount of literature, but when they use a search engine, they often have to deal with too many documents to efficiently find the appropriate information in a reasonable time. In this perspective, question-answering (QA) engines are designed to display answers, which were automatically extracted from the retrieved documents. Standard QA engines in the literature process a user question, then retrieve relevant documents and finally extract some possible answers out of these documents using various named-entity recognition processes. In our study, we try to answer complex genomics questions, which can be adequately answered only using Gene Ontology (GO) concepts. Such complex answers cannot be found using state-of-the-art dictionary- and redundancy-based QA engines. We compare the effectiveness of two dictionary-based classifiers for extracting correct GO answers from a large set of 100 retrieved abstracts per question. In the same way, we also investigate the power of GOCat, a GO supervised classifier. GOCat exploits the GOA database to propose GO concepts that were annotated by curators for similar abstracts. This approach is called deep QA, as it adds an original classification step, and exploits curated biological data to infer answers, which are not explicitly mentioned in the retrieved documents. We show that for complex answers such as protein functional descriptions, the redundancy phenomenon has a limited effect. Similarly, usual dictionary-based approaches are relatively ineffective. In contrast, we demonstrate how existing curated data, beyond information extraction, can be exploited by a supervised classifier, such as GOCat, to massively improve both the quantity and the quality of the answers, with a +100% improvement for both recall and precision. Database URL: http://eagl.unige.ch/DeepQA4PA/ PMID:26384372
NASA Astrophysics Data System (ADS)
Daude, F.; Galon, P.
2018-06-01
A Finite-Volume scheme for the numerical computations of compressible single- and two-phase flows in flexible pipelines is proposed based on an approximate Godunov-type approach. The spatial discretization is here obtained using the HLLC scheme. In addition, the numerical treatment of abrupt changes in area and network including several pipelines connected at junctions is also considered. The proposed approach is based on the integral form of the governing equations making it possible to tackle general equations of state. A coupled approach for the resolution of fluid-structure interaction of compressible fluid flowing in flexible pipes is considered. The structural problem is solved using Euler-Bernoulli beam finite elements. The present Finite-Volume method is applied to ideal gas and two-phase steam-water based on the Homogeneous Equilibrium Model (HEM) in conjunction with a tabulated equation of state in order to demonstrate its ability to tackle general equations of state. The extensive application of the scheme for both shock tube and other transient flow problems demonstrates its capability to resolve such problems accurately and robustly. Finally, the proposed 1-D fluid-structure interaction model appears to be computationally efficient.
Locally Weighted Ensemble Clustering.
Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang
2018-05-01
Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite the significant success, one limitation of most of the existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially when there is no access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
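A compact variant of the described scheme can be sketched as follows: each cluster receives an entropy-based weight (a cluster whose members are scattered by the other base clusterings gets high entropy, hence a low weight), and those weights scale its votes in a locally weighted co-association matrix. The tiny ensemble and the exponential weighting constant are illustrative; the paper's ensemble-driven cluster index differs in detail.

```python
# Simplified ensemble-driven cluster weighting: entropy of a cluster's
# members across the other base clusterings sets its vote weight in a
# locally weighted co-association matrix (toy data, illustrative weighting).
import numpy as np

base = np.array([            # 3 base clusterings of 6 objects
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
])
M, N = base.shape

def cluster_entropy(members, m):
    """Mean label entropy of `members` across base clusterings other than m."""
    hs = []
    for other in range(M):
        if other == m:
            continue
        _, counts = np.unique(base[other, members], return_counts=True)
        p = counts / counts.sum()
        hs.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(hs))

ca = np.zeros((N, N))        # locally weighted co-association matrix
for m in range(M):
    for lab in np.unique(base[m]):
        members = np.where(base[m] == lab)[0]
        w = np.exp(-cluster_entropy(members, m))   # high entropy -> low weight
        ca[np.ix_(members, members)] += w
ca /= M
print(np.round(ca, 2))
```

Pairs that reliably co-occur in stable clusters accumulate large entries, while votes from ambiguous clusters are damped; a consensus function then partitions this matrix.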
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, R.
This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second is a three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.
PuReD-MCL: a graph-based PubMed document clustering methodology.
Theodosiou, T; Darzentas, N; Angelis, L; Ouzounis, C A
2008-09-01
Biomedical literature is the principal repository of biomedical knowledge, with PubMed being the most complete database collecting, organizing and analyzing such textual knowledge. There are numerous efforts that attempt to exploit this information by using text mining and machine learning techniques. We developed a novel approach, called PuReD-MCL (PubMed Related Documents-MCL), which is based on the graph clustering algorithm MCL and relevant resources from PubMed. PuReD-MCL avoids using natural language processing (NLP) techniques directly; instead, it takes advantage of existing resources, available from PubMed. PuReD-MCL then clusters documents efficiently using the MCL graph clustering algorithm, which is based on graph flow simulation. This process allows users to analyse the results by highlighting important clues, and finally to visualize the clusters and all relevant information using an interactive graph layout algorithm, for instance BioLayout Express 3D. The methodology was applied to two different datasets, previously used for the validation of the document clustering tool TextQuest. The first dataset involves the organisms Escherichia coli and yeast, whereas the second is related to Drosophila development. PuReD-MCL successfully reproduces the annotated results obtained from TextQuest, while at the same time providing additional insights into the clusters and the corresponding documents. Source code in Perl and R is available from http://tartara.csd.auth.gr/~theodos/
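The graph flow simulation behind MCL alternates two steps on a column-stochastic similarity matrix: expansion (a matrix product, spreading flow along paths) and inflation (an elementwise power with renormalization, strengthening strong flows and pruning weak ones). A minimal sketch on a toy graph, not PubMed data:

```python
# Minimal Markov Cluster (MCL) sketch on a toy graph: two triangles joined
# by a single weak edge. Expansion spreads flow; inflation (r = 2) sharpens
# it until flow separates into the two natural communities.
import numpy as np

A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
A += np.eye(6)                          # self-loops stabilize the iteration
M = A / A.sum(axis=0)                   # column-stochastic flow matrix

for _ in range(20):
    M = M @ M                           # expansion
    M = M ** 2                          # inflation (r = 2)
    M /= M.sum(axis=0)                  # renormalize columns

labels = M.argmax(axis=0)               # attractor each node's flow ends at
print(labels)
```

In PuReD-MCL the input matrix is built from PubMed's related-documents links rather than this toy adjacency, but the clustering dynamics are the same.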
Speckle variance optical coherence tomography of blood flow in the beating mouse embryonic heart.
Grishina, Olga A; Wang, Shang; Larina, Irina V
2017-05-01
Efficient separation of blood and cardiac wall in the beating embryonic heart is essential and critical for experiment-based computational modelling and analysis of early-stage cardiac biomechanics. Although speckle variance optical coherence tomography (SV-OCT), relying on the calculation of intensity variance over consecutively acquired frames, is a powerful approach for segmentation of fluid flow from static tissue, application of this method in the beating embryonic heart remains challenging because moving structures generate SV signal indistinguishable from the blood. Here, we demonstrate a modified four-dimensional SV-OCT approach that effectively separates the blood flow from the dynamic heart wall in the beating mouse embryonic heart. The method takes advantage of the periodic motion of the cardiac wall and is based on calculation of the SV signal over the frames corresponding to the same phase of the heartbeat cycle. Through comparison with Doppler OCT imaging, we validate this speckle-based approach and show advantages in its insensitivity to flow direction and velocity as well as reduced influence from heart wall movement. This approach has potential in a variety of applications relying on visualization and segmentation of blood flow in periodically moving structures, such as mechanical simulation studies and finite element modelling. Picture: Four-dimensional speckle variance OCT imaging shows the blood flow inside the beating heart of an E8.5 mouse embryo. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
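The phase-gated speckle-variance computation can be sketched on synthetic frames: variance is taken across frames acquired at the same phase of the heartbeat cycle, so periodically moving wall structure cancels while decorrelated blood speckle does not. All data below are synthetic.

```python
# Phase-gated speckle variance: var across same-phase frames (axis 0 =
# heartbeat cycles) suppresses periodic wall structure; decorrelated blood
# speckle retains high variance. Frames are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_cycles, n_phases = 8, 10
wall = rng.normal(size=(n_phases, 64, 64))       # structure repeats every cycle
frames = np.broadcast_to(wall, (n_cycles, n_phases, 64, 64)).copy()
# blood region: fresh (decorrelated) speckle in every frame
frames[:, :, 20:44, 20:44] = rng.normal(size=(n_cycles, n_phases, 24, 24))

sv = frames.var(axis=0).mean(axis=0)             # variance over same-phase frames
print(f"SV in flow: {sv[32, 32]:.3f}   SV in wall: {sv[5, 5]:.3e}")
```

A conventional SV computed over consecutive frames would mix wall motion into the signal; gating by cardiac phase is what restores the flow/tissue contrast.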
NASA Technical Reports Server (NTRS)
Povinelli, L. A.
1984-01-01
An assessment of several three-dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations are examined, for which both experimental data and computational results are available. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three-dimensional viscous analysis technique.
NASA Astrophysics Data System (ADS)
Piniewski, Mikołaj
2016-05-01
The objective of this study was to apply a previously developed large-scale and high-resolution SWAT model of the Vistula and the Odra basins, calibrated with a focus on natural flow simulation, in order to assess the impact of three different dam reservoirs on streamflow using the Indicators of Hydrologic Alteration (IHA). A tailored spatial calibration approach was designed, in which calibration was focused on a large set of relatively small non-nested sub-catchments with semi-natural flow regimes. These were classified into calibration clusters based on flow statistics similarity. After calibration and validation gave overall positive results, the calibrated parameter values were transferred to the remaining part of the basins using an approach based on hydrological similarity of donor and target catchments. The calibrated model was applied in three case studies with the purpose of assessing the effect of dam reservoirs (the Włocławek, Siemianówka and Czorsztyn Reservoirs) on streamflow alteration. Both the assessment based on gauged streamflow (Before-After design) and the one based on simulated natural streamflow showed large alterations in selected flow statistics related to magnitude, duration, high and low flow pulses and rate of change. Benefits of using a large-scale and high-resolution hydrological model for the assessment of streamflow alteration include: (1) providing an alternative or complementary approach to the classical Before-After designs, (2) isolating the climate variability effect from the dam (or any other source of alteration) effect, and (3) providing a practical tool that can be applied at a range of spatial scales over a large area, such as a country, in a uniform way. Thus, the presented approach can be applied for designing more natural flow regimes, which is crucial for river and floodplain ecosystem restoration in the context of the European Union's policy on environmental flows.
Horizontal exploitation of the Upper Cretaceous Austin Chalk of south Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkowski, R.; Hand, L.; Dickerson, D.
1990-05-01
Horizontal drilling in the fractured Austin Chalk of south Texas has proven to be a viable technology for exploiting reserve opportunities in mature trends as well as in frontier areas. To date, the results of an interdisciplinary approach to the regional analysis of structure and stress regimes combined with studies of the depositional characteristics of the Austin Chalk and Eagleford Shale have been a success. Productive characteristics of the Austin Chalk indicate the influence of regional fractures on the preferential flow direction and partitioning in the Pearsall field area of the trend. Well bore orientation and inclination are designed such that multiple fracture swarms at several stratigraphic horizons are intersected with a single horizontal well bore. As a result of the greater frequency of fracture contacts with the well bore, there is a significant increase in the ultimate recovery of hydrocarbons in place. Conventional vertical drilling techniques are frequently ineffective at encountering these laterally partitioned fracture sets, resulting in lower volumes of recoverable hydrocarbons. Additionally, horizontal well bores may increase ultimate recovery of hydrocarbons by lowering the pressure gradient to the well bore and maximizing the reservoir energy.
Odor Landscapes in Turbulent Environments
NASA Astrophysics Data System (ADS)
Celani, Antonio; Villermaux, Emmanuel; Vergassola, Massimo
2014-10-01
The olfactory system of male moths is exquisitely sensitive to pheromones emitted by females and transported in the environment by atmospheric turbulence. Moths respond to minute amounts of pheromones, and their behavior is sensitive to the fine-scale structure of turbulent plumes where pheromone concentration is detectable. The signal of pheromone whiffs is qualitatively known to be intermittent, yet quantitative characterization of its statistical properties is lacking. This challenging fluid dynamics problem is also relevant for entomology, neurobiology, and the technological design of olfactory stimulators aimed at reproducing physiological odor signals in well-controlled laboratory conditions. Here, we develop a Lagrangian approach to the transport of pheromones by turbulent flows and exploit it to predict the statistics of odor detection during olfactory searches. The theory yields explicit probability distributions for the intensity and the duration of pheromone detections, as well as their spacing in time. Predictions are favorably tested by using numerical simulations, laboratory experiments, and field data for the atmospheric surface layer. The resulting signal of odor detections lends itself to implementation with state-of-the-art technologies and quantifies the amount and the type of information that male moths can exploit during olfactory searches.
Li, Chenxi; Wang, Ruikang
2017-04-01
We propose an approach to measure heterogeneous velocities of red blood cells (RBCs) in capillary vessels using full-field time-varying dynamic speckle signals. The approach utilizes a low-coherence laser speckle imaging system to record the instantaneous speckle pattern, followed by an eigen-decomposition-based filtering algorithm to extract the dynamic speckle signal due to the moving RBCs. The velocity of heterogeneous RBC flows is determined by cross-correlating the temporal dynamic speckle signals obtained at adjacent locations. We verify the approach by imaging mouse pinna in vivo, demonstrating its capability for full-field RBC flow mapping and for quantifying flow patterns with high resolution. The approach is expected to enable investigation of the dynamic behavior of RBC flow in capillaries under physiological changes.
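The core cross-correlation step — finding the time lag at which the speckle signal at a downstream pixel best matches its upstream neighbor — can be sketched as below. The `pixel_spacing` and `frame_interval` values are hypothetical acquisition parameters, and the synthetic delayed signal stands in for real dynamic speckle:

```python
import numpy as np

def rbc_velocity(sig_a, sig_b, pixel_spacing, frame_interval):
    """Velocity from the time lag maximizing the cross-correlation of
    dynamic speckle signals at two adjacent pixels along a capillary."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    lag = np.argmax(np.correlate(b, a, mode="full")) - (len(a) - 1)
    return pixel_spacing / (lag * frame_interval) if lag else float("inf")

# synthetic check: the downstream pixel sees the same speckle 3 frames later
rng = np.random.default_rng(1)
base = rng.standard_normal(250)
sig_a = base
sig_b = np.concatenate([np.zeros(3), base[:-3]])
v = rbc_velocity(sig_a, sig_b, pixel_spacing=2e-6, frame_interval=1e-3)
```

Repeating this pairwise estimate over all adjacent pixels along a vessel gives the full-field flow map described in the abstract.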
Joint modality fusion and temporal context exploitation for semantic video analysis
NASA Astrophysics Data System (ADS)
Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.
2011-12-01
In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
Self-control of traffic lights and vehicle flows in urban road networks
NASA Astrophysics Data System (ADS)
Lämmer, Stefan; Helbing, Dirk
2008-04-01
Based on fluid-dynamic and many-particle (car-following) simulations of traffic flows in (urban) networks, we study the problem of coordinating incompatible traffic flows at intersections. Inspired by the observation of self-organized oscillations of pedestrian flows at bottlenecks, we propose a self-organization approach to traffic light control. The problem can be treated as a multi-agent problem with interactions between vehicles and traffic lights. Specifically, our approach assumes a priority-based control of traffic lights by the vehicle flows themselves, taking into account short-sighted anticipation of vehicle flows and platoons. The considered local interactions lead to emergent coordination patterns such as 'green waves' and achieve an efficient, decentralized traffic light control. While the proposed self-control adapts flexibly to local flow conditions and often leads to non-cyclical switching patterns with changing service sequences of different traffic flows, an almost periodic service may evolve under certain conditions and suggests the existence of a spontaneous synchronization of traffic lights despite the varying delays due to variable vehicle queues and travel times. The self-organized traffic light control is based on an optimization and a stabilization rule, each of which performs poorly at high utilizations of the road network, while their proper combination reaches a superior performance. The result is a considerable reduction not only in the average travel times, but also of their variation. Similar control approaches could be applied to the coordination of logistic and production processes.
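A toy abstraction of the priority-based switching idea might look like the following; the paper's actual control combines short-sighted anticipation of platoons with a separate stabilization rule, so this is only a caricature with an assumed pressure measure:

```python
def choose_phase(pressures, current, switching_loss):
    """Keep the current green unless another flow's 'pressure' (queue plus
    anticipated platoon arrivals) exceeds it by more than the loss
    incurred by switching (clearance and setup times)."""
    best = max(pressures, key=pressures.get)
    if best != current and pressures[best] - pressures[current] <= switching_loss:
        return current
    return best

# a small platoon does not justify a switch; a long queue does
choose_phase({"NS": 10.0, "EW": 12.0}, "NS", switching_loss=5.0)  # -> "NS"
choose_phase({"NS": 10.0, "EW": 20.0}, "NS", switching_loss=5.0)  # -> "EW"
```

Because the rule reacts to local pressures rather than a fixed cycle, service sequences can become non-cyclical, as the abstract notes.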
NASA Astrophysics Data System (ADS)
Lignell, David O.; Lansinger, Victoria B.; Medina, Juan; Klein, Marten; Kerstein, Alan R.; Schmidt, Heiko; Fistler, Marco; Oevermann, Michael
2018-06-01
The one-dimensional turbulence (ODT) model resolves a full range of time and length scales and is computationally efficient. ODT has been applied to a wide range of complex multi-scale flows, such as turbulent combustion. Previous ODT comparisons to experimental data have focused mainly on planar flows. Applications to cylindrical flows, such as round jets, have been based on rough analogies, e.g., by exploiting the fortuitous consistency of the similarity scalings of temporally developing planar jets and spatially developing round jets. To obtain a more systematic treatment, a new formulation of the ODT model in cylindrical and spherical coordinates is presented here. The model is written in terms of a geometric factor so that planar, cylindrical, and spherical configurations are represented in the same way. Temporal and spatial versions of the model are presented. A Lagrangian finite-volume implementation is used with a dynamically adaptive mesh. The adaptive mesh facilitates the implementation of cylindrical and spherical versions of the triplet map, which is used to model turbulent advection (eddy events) in the one-dimensional flow coordinate. In cylindrical and spherical coordinates, geometric stretching of the three triplet map images occurs due to the radial dependence of volume, with the stretching being strongest near the centerline. Two triplet map variants, TMA and TMB, are presented. In TMA, the three map images have the same volume, but different radial segment lengths. In TMB, the three map images have the same radial segment lengths, but different segment volumes. Cylindrical results are presented for temporal pipe flow, a spatial nonreacting jet, and a spatial nonreacting jet flame. These results compare very well to direct numerical simulation for the pipe flow, and to experimental data for the jets. 
The nonreacting jet treatment overpredicts velocity fluctuations near the centerline, due to the geometric stretching of the triplet maps and its effect on the eddy event rate distribution. TMB performs better than TMA. A hybrid planar-TMB (PTMB) approach is also presented, which further improves the results. TMA, TMB, and PTMB are nearly identical in the pipe flow where the key dynamics occur near the wall away from the centerline. The jet flame illustrates effects of variable density and viscosity, including dilatational effects.
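The planar (discrete) triplet map underlying the cylindrical and spherical variants discussed above can be sketched as a cell permutation: three compressed images of the mapped segment, with the middle image reversed. This is a generic illustration, not the authors' adaptive-mesh implementation:

```python
import numpy as np

def triplet_map(profile, i0, i1):
    """Discrete planar triplet map on profile[i0:i1] (length divisible
    by 3): permute cells into three compressed images of the segment,
    middle image reversed. Being a permutation of cells, it conserves
    all moments of the mapped property."""
    seg = profile[i0:i1]
    if len(seg) % 3:
        raise ValueError("segment length must be divisible by 3")
    out = profile.copy()
    out[i0:i1] = np.concatenate([seg[0::3], seg[1::3][::-1], seg[2::3]])
    return out

# a linear velocity profile becomes the characteristic sawtooth
mapped = triplet_map(np.arange(12.0), 0, 12)  # [0 3 6 9 10 7 4 1 2 5 8 11]
```

In the cylindrical and spherical formulations the three images additionally stretch geometrically with radius, which is what the TMA/TMB variants handle differently.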
A framework for estimating potential fluid flow from digital imagery
NASA Astrophysics Data System (ADS)
Luttman, Aaron; Bollt, Erik M.; Basnayake, Ranil; Kramer, Sean; Tufillaro, Nicholas B.
2013-09-01
Given image data of a fluid flow, the flow field, ⟨u,v⟩, governing the evolution of the system can be estimated using a variational approach to optical flow. Assuming that the flow field governing the advection is the symplectic gradient of a stream function or the gradient of a potential function—both falling under the category of a potential flow—it is natural to re-frame the optical flow problem to reconstruct the stream or potential function directly rather than the components of the flow individually. There are several advantages to this framework. Minimizing a functional based on the stream or potential function rather than based on the components of the flow will ensure that the computed flow is a potential flow. Next, this approach allows a more natural method for imposing scientific priors on the computed flow, via regularization of the optical flow functional. Also, this paradigm shift gives a framework—rather than an algorithm—and can be applied to nearly any existing variational optical flow technique. In this work, we develop the mathematical formulation of the potential optical flow framework and demonstrate the technique on synthetic flows that represent important dynamics for mass transport in fluid flows, as well as a flow generated by a satellite data-verified ocean model of temperature transport.
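The symplectic-gradient relation between a stream function and its divergence-free flow field, central to this framework, can be illustrated with finite differences; this is a sketch of the relation only, not the variational optical flow solver:

```python
import numpy as np

def flow_from_stream(psi, dx=1.0, dy=1.0):
    """Velocity as the symplectic gradient of a stream function psi(y, x):
    u = dpsi/dy, v = -dpsi/dx. The resulting field is divergence-free
    by construction."""
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)
    return dpsi_dy, -dpsi_dx

y, x = np.mgrid[0:32, 0:32].astype(float)
u, v = flow_from_stream(x * y)  # psi = x*y  ->  u = x, v = -y
```

Optimizing over psi directly, as the abstract proposes, guarantees that every candidate flow during the minimization already satisfies this structure.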
Flip-angle based ratiometric approach for pulsed CEST-MRI pH imaging
NASA Astrophysics Data System (ADS)
Arena, Francesca; Irrera, Pietro; Consolino, Lorena; Colombo Serra, Sonia; Zaiss, Moritz; Longo, Dario Livio
2018-02-01
Several molecules have been exploited for developing MRI pH sensors based on the chemical exchange saturation transfer (CEST) technique. A ratiometric approach, based on the saturation of two exchanging pools at the same saturation power, or on varying the saturation power levels for the same pool, is usually needed to rule out the concentration term from the pH measurement. However, all these methods have been demonstrated using a continuous-wave saturation scheme, which limits their translation to clinical scanners. This study shows a new ratiometric CEST-MRI pH-mapping approach based on a pulsed CEST saturation scheme for a radiographic contrast agent (iodixanol) possessing a single chemical exchange site. The approach is based on the ratio of the CEST contrast effects at two different flip-angle combinations (180°/360° and 180°/720°), keeping the mean irradiation RF power (Bavg power) constant. The proposed ratiometric index is concentration-independent and showed good pH sensitivity and accuracy in the physiological range between 6.0 and 7.4.
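The ratiometric idea — agent concentration cancelling in a ratio of CEST contrasts measured at two flip-angle combinations, followed by inversion of a calibration curve — can be sketched as follows. The calibration numbers below are purely illustrative assumptions, not values from the paper:

```python
import numpy as np

def ratiometric_index(st_180_360, st_180_720):
    """Ratio of CEST contrasts acquired with the 180/360 and 180/720
    flip-angle trains at equal mean RF power: concentration scales both
    contrasts equally, so it cancels, leaving a pH-dependent index."""
    return st_180_360 / st_180_720

def ph_from_index(r, calib_index, calib_ph):
    """Invert a monotonic index-to-pH calibration by interpolation."""
    return np.interp(r, calib_index, calib_ph)

# purely illustrative calibration points
calib_index = np.array([0.8, 1.0, 1.2, 1.4])
calib_ph = np.array([6.0, 6.5, 7.0, 7.4])
```

Doubling the agent concentration doubles both contrasts, so the index, and hence the inferred pH, is unchanged.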
Large perturbation flow field analysis and simulation for supersonic inlets
NASA Technical Reports Server (NTRS)
Varner, M. O.; Martindale, W. R.; Phares, W. J.; Kneile, K. R.; Adams, J. C., Jr.
1984-01-01
An analysis technique for simulation of supersonic mixed compression inlets with large flow field perturbations is presented. The approach is based upon a quasi-one-dimensional inviscid unsteady formulation which includes engineering models of unstart/restart, bleed, bypass, and geometry effects. Numerical solution of the governing time dependent equations of motion is accomplished through a shock capturing finite difference algorithm, of which five separate approaches are evaluated. Comparison with experimental supersonic wind tunnel data is presented to verify the present approach for a wide range of transient inlet flow conditions.
Optical nano artifact metrics using silicon random nanostructures
NASA Astrophysics Data System (ADS)
Matsumoto, Tsutomu; Yoshida, Naoki; Nishio, Shumpei; Hoga, Morihisa; Ohyagi, Yasuyuki; Tate, Naoya; Naruse, Makoto
2016-08-01
Nano-artifact metrics exploit unique physical attributes of nanostructured matter for authentication and clone resistance, which is vitally important in the age of the Internet-of-Things, where securing identities is critical. However, previous studies required expensive and bulky experimental apparatuses, such as scanning electron microscopy. Herein, we demonstrate an optical approach to characterise the nanoscale-precision signatures of silicon random structures towards realising low-cost and high-value information security technology. Unique and versatile silicon nanostructures are generated via resist collapse phenomena and contain features well below the diffraction limit of light. We exploit the nanoscale precision of confocal laser microscopy in the height dimension; our experimental results demonstrate that the vertical precision of measurement is essential in satisfying the performances required for artifact metrics. Furthermore, by using state-of-the-art nanostructuring technology, we experimentally fabricate clones from the genuine devices. We demonstrate that the statistical properties of the genuine and clone devices are successfully exploited, showing that the liveness-detection-type approach, which is widely deployed in biometrics, is valid in artificially-constructed solid-state nanostructures. These findings pave the way for reasonable and yet sufficiently secure novel principles for information security based on silicon random nanostructures and optical technologies.
Enhancing power density of biophotovoltaics by decoupling storage and power delivery
NASA Astrophysics Data System (ADS)
Saar, Kadi L.; Bombelli, Paolo; Lea-Smith, David J.; Call, Toby; Aro, Eva-Mari; Müller, Thomas; Howe, Christopher J.; Knowles, Tuomas P. J.
2018-01-01
Biophotovoltaic devices (BPVs), which use photosynthetic organisms as active materials to harvest light, have a range of attractive features relative to synthetic and non-biological photovoltaics, including their environmentally friendly nature and ability to self-repair. However, efficiencies of BPVs are currently lower than those of synthetic analogues. Here, we demonstrate BPVs delivering anodic power densities of over 0.5 W m-2, a value five times that for previously described BPVs. We achieved this through the use of cyanobacterial mutants with increased electron export characteristics together with a microscale flow-based design that allowed independent optimization of the charging and power delivery processes, as well as membrane-free operation by exploiting laminar flow to separate the catholyte and anolyte streams. These results suggest that miniaturization of active elements and flow control for decoupled operation and independent optimization of the core processes involved in BPV design are effective strategies for enhancing power output and thus the potential of BPVs as viable systems for sustainable energy generation.
Regtmeier, Jan; Käsewieter, Jörg; Everwand, Martina; Anselmetti, Dario
2011-05-01
Continuous-flow separation of nanoparticles (NPs) (15 and 39 nm) is demonstrated based on electrostatic sieving at a micro-nanofluidic interface. The interface is realized in a poly(dimethylsiloxane) device with a nanoslit of 525 nm laterally spanning the microfluidic channel (aspect ratio of 540:1). Within this nanoslit, the Debye layers overlap and generate an electrostatic sieve. This was exploited to selectively deflect and sort NPs with a sorting purity of up to 97%. Because of the continuous-flow operation, the sample is continuously fed into the device, immediately separated, and the parameters can be adapted in real time. For bioanalytical purposes, we also demonstrate the deflection of proteins (longest axis 6.8 nm). The continuous operation mode and the general applicability of this separation concept make this method a valuable addition to the current Lab-on-a-Chip devices for continuous sorting of NPs and macromolecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
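Whether Debye layers overlap across a 525 nm nanoslit depends on the ionic strength of the buffer; a standard textbook Debye-length estimate (physical constants are standard, the ionic strengths are illustrative) suggests the electrolyte must be quite dilute for the electrostatic sieve to form:

```python
import numpy as np

def debye_length(ionic_strength_molar, temperature_k=298.15, eps_r=78.4):
    """Debye screening length for a monovalent aqueous electrolyte:
    lambda_D = sqrt(eps0*eps_r*kB*T / (2*NA*e^2*I)), with I in mol/m^3."""
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    kB = 1.380649e-23        # Boltzmann constant, J/K
    e = 1.602176634e-19      # elementary charge, C
    NA = 6.02214076e23       # Avogadro constant, 1/mol
    I = ionic_strength_molar * 1e3  # mol/L -> mol/m^3
    return np.sqrt(eps0 * eps_r * kB * temperature_k / (2 * NA * e**2 * I))

# at 0.1 mM the Debye length is ~30 nm; micromolar levels are needed
# before twice lambda_D approaches the 525 nm slit height
lam = debye_length(1e-4)
```

This back-of-the-envelope check is not from the paper; it only illustrates the regime in which Debye-layer overlap, and hence electrostatic sieving, becomes possible.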
NASA Astrophysics Data System (ADS)
García-Gil, Alejandro; Epting, Jannis; Mueller, Matthias H.; Huggenberger, Peter; Vázquez-Suñé, Enric
2015-04-01
In urban areas the shallow subsurface is often used as a heat resource (shallow geothermal energy), i.e. for the installation and operation of a broad variety of geothermal systems. Increasingly, groundwater is used as a low-cost heat sink, e.g. for building acclimatization. Together with other shallow geothermal exploitation systems, significantly increased groundwater temperatures have been observed in many urban areas (urban heat island effect). The experience obtained from two selected case study cities, Basel (CH) and Zaragoza (ES), has allowed the development of concepts and methods for the management of thermal resources in urban areas. Both case study cities already have a comprehensive monitoring network (hydraulics and temperature) in operation, as well as calibrated high-resolution numerical groundwater flow and heat-transport models. The existing datasets and models have allowed us to compile and compare the different hydraulic and thermal boundary conditions for both groundwater bodies, including: (1) river boundaries (River Rhine and Ebro), (2) regional hydraulic and thermal settings, (3) interaction with the atmosphere under consideration of urbanization and (4) anthropogenic quantitative and thermal groundwater use. The potential natural states of the considered groundwater bodies have also been investigated for different urban settings and varying processes concerning groundwater flow and thermal regimes. Moreover, concepts for the management of thermal resources in urban areas and the transferability of the applied methods to other urban areas are discussed. The methods used provide an appropriate selection of parameters (spatiotemporal resolution) that have to be measured for representative interpretations of groundwater flow and thermal regimes of specific groundwater bodies.
The experience acquired from the case studies shows that understanding the variable influences of the specific geological and hydrogeological as well as hydraulic and thermal boundary conditions in urban settings is crucial. It could also be shown that good-quality data are necessary to appropriately define and investigate thermal boundary conditions and the temperature development in urban systems. Both investigated groundwater bodies are already overheated, which essentially impedes further thermal groundwater use for cooling purposes. Current legislation approaches are not suitable for evaluating new concessions for thermal exploitation. Therefore, novel approaches for the assessment of new concessions have to be developed that take into account the complex interaction of natural boundaries as well as existing shallow geothermal systems.
NASA Astrophysics Data System (ADS)
Oschepkova, Elena; Vasinskaya, Irina; Sockoluck, Irina
2017-11-01
In view of the changing educational paradigm (the adoption of a two-tier system of higher education: undergraduate and graduate programs), there is a need to use modern learning and information and communications technologies, putting learner-centered approaches into practice in the training of highly qualified specialists for enterprises that extract and process solid commercial minerals. Facing unstable market demand and a changeable institutional environment on the one hand, and the necessity of balancing workload, supply conditions and product quality as mining-and-geological parameters change on the other, mining enterprises have to introduce and develop integrated management of product, information and logistic flows under a unified management system. One of the main limitations holding back this development at Russian mining enterprises is staff incompetence at all levels of logistic management. Under present-day conditions, enterprises extracting and processing solid commercial minerals need highly qualified specialists who can conduct self-directed research and develop new, and improve existing, technologies for organizing, planning and managing the technical operation and commercial exploitation of transport and transportation-and-processing facilities based on logistics. The learner-centered approach and individualization of the learning process necessitate the design of an individual learning route (ILR), which can help students realize their professional potential according to the requirements for specialists at enterprises extracting and processing solid commercial minerals.
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid subsequently entering a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
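The particle-splitting step at a coarse-to-fine interface can be sketched as below: conserving total statistical weight while leaving each particle's velocity untouched is what preserves the velocity distribution function. The cubic split count is an assumption for a 3D refinement-rate illustration, not the paper's exact bookkeeping:

```python
def split_particle(weight, velocity, refinement_rate):
    """Split a coarse macro-particle entering a refined region into
    refinement_rate**3 refined particles (3D), dividing the statistical
    weight equally and keeping the velocity of each fragment unchanged
    so the sampled velocity distribution is preserved."""
    n = refinement_rate ** 3
    return [(weight / n, velocity) for _ in range(n)]

# one coarse particle entering a 2x-refined region becomes 8 fragments
fragments = split_particle(8.0, (400e3, 0.0, 0.0), refinement_rate=2)
```

Because no velocities are averaged or altered, no information about the distribution function is lost in the split, which is the property the abstract emphasizes.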
Faustini, Marco; Kim, Jun; Jeong, Guan-Young; Kim, Jin Yeong; Moon, Hoi Ri; Ahn, Wha-Seung; Kim, Dong-Pyo
2013-10-02
Herein, we report a novel nanoliter droplet-based microfluidic strategy for continuous and ultrafast synthesis of metal-organic framework (MOF) crystals and MOF heterostructures. Representative MOF structures, such as HKUST-1, MOF-5, IRMOF-3, and UiO-66, were synthesized within a few minutes via solvothermal reactions with substantially faster kinetics in comparison to the conventional batch processes. The approach was successfully extended to the preparation of a demanding Ru3BTC2 structure that requires high-pressure hydrothermal synthesis conditions. Finally, three different types of core-shell MOF composites, i.e., Co3BTC2@Ni3BTC2, MOF-5@diCH3-MOF-5, and Fe3O4@ZIF-8, were synthesized by exploiting a unique two-step integrated microfluidic synthesis scheme in a continuous-flow mode. The synthesized MOF crystals were characterized by X-ray diffraction, scanning electron microscopy, and BET surface area measurements. In comparison with bare MOF-5, MOF-5@diCH3-MOF-5 showed enhanced structural stability in the presence of moisture, and the catalytic performance of Fe3O4@ZIF-8 was examined using Knoevenagel condensation as a probe reaction. The microfluidic strategy allowed continuous fabrication of high-quality MOF crystals and composites exhibiting distinct morphological characteristics in a time-efficient manner and represents a viable alternative to the time-consuming and multistep MOF synthesis processes.
Identifying apicoplast-targeting antimalarials using high-throughput compatible approaches
Ekland, Eric H.; Schneider, Jessica; Fidock, David A.
2011-01-01
Malarial parasites have evolved resistance to all previously used therapies, and recent evidence suggests emerging resistance to the first-line artemisinins. To identify antimalarials with novel mechanisms of action, we have developed a high-throughput screen targeting the apicoplast organelle of Plasmodium falciparum. Antibiotics known to interfere with this organelle, such as azithromycin, exhibit an unusual phenotype whereby the progeny of drug-treated parasites die. Our screen exploits this phenomenon by assaying for “delayed death” compounds that exhibit a higher potency after two cycles of intraerythrocytic development compared to one. We report a primary assay employing parasites with an integrated copy of a firefly luciferase reporter gene and a secondary flow cytometry-based assay using a nucleic acid stain paired with a mitochondrial vital dye. Screening of the U.S. National Institutes of Health Clinical Collection identified known and novel antimalarials including kitasamycin. This inexpensive macrolide, used for agricultural applications, exhibited an in vitro IC50 in the 50 nM range, comparable to the 30 nM activity of our control drug, azithromycin. Imaging and pharmacologic studies confirmed kitasamycin action against the apicoplast, and in vivo activity was observed in a murine malaria model. These assays provide the foundation for high-throughput campaigns to identify novel chemotypes for combination therapies to treat multidrug-resistant malaria.—Ekland, E. H., Schneider, J., Fidock, D. A. Identifying apicoplast-targeting antimalarials using high-throughput compatible approaches. PMID:21746861
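The "delayed death" readout amounts to comparing potency after one versus two intraerythrocytic cycles; a minimal flagging rule could look like the following, where the threshold and IC50 values are illustrative rather than the screen's actual cutoffs:

```python
def delayed_death_hit(ic50_one_cycle_nm, ic50_two_cycles_nm, potency_shift=3.0):
    """Flag compounds whose potency increases markedly after a second
    intraerythrocytic development cycle, the 'delayed death' signature
    of apicoplast-targeting drugs. Threshold is illustrative."""
    return ic50_one_cycle_nm / ic50_two_cycles_nm >= potency_shift

# an apicoplast-targeting macrolide gains potency in cycle two;
# a fast-acting compound does not
delayed_death_hit(500.0, 50.0)  # -> True
delayed_death_hit(100.0, 90.0)  # -> False
```

In the published screen the comparison is made via luciferase signal across the two assay windows rather than explicit IC50 ratios, so this is a simplification.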
Geospace simulations on the Cell BE processor
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D.
2008-12-01
OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting of the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFLOPS per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver, which models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the Sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
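The multi-level decomposition described above (domain blocks split into 3D columns, each streamed slice by slice through a small local store) can be illustrated schematically. The sketch below is a serial NumPy mock-up, not OpenGGCM code: the grid sizes, the "local store" buffer, and the toy kernel are invented for the example.

```python
import numpy as np

def process_column(field, y0, z0, cy, cz):
    """Stream one 3D column through a small 'local store' slice by slice,
    mimicking an SPE work loop: DMA a slice in, compute, DMA it out."""
    nx = field.shape[0]
    for ix in range(nx):
        # "DMA get": copy one slice of the column into a local buffer
        local = field[ix, y0:y0 + cy, z0:z0 + cz].copy()
        local *= 0.5  # toy kernel standing in for the MHD update
        # "DMA put": write the result back to main memory
        field[ix, y0:y0 + cy, z0:z0 + cz] = local

def run(field, cy, cz):
    """Divide an (already domain-decomposed) block into 3D columns and
    process each one, as the SPEs would in parallel."""
    _, ny, nz = field.shape
    for y0 in range(0, ny, cy):
        for z0 in range(0, nz, cz):
            process_column(field, y0, z0, cy, cz)
    return field
```

In the real code each column is dispatched to a different SPE and the slice copies are asynchronous DMA transfers overlapped with compute; the serial loop above only shows the data movement pattern.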
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2014-10-01
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and only for a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
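The rejection idea can be sketched for a single-species birth-death system. This is a deliberately simplified stand-in for the paper's algorithm (no delays, one species, invented rate constants): propensities are bounded over a state interval, candidates are drawn from the upper bounds, and exact propensities are evaluated only when the cheap squeeze test fails.

```python
import random

def rejection_ssa(x0, t_end, k_birth=1.0, k_death=0.1, delta=0.1, seed=0):
    """Rejection-based SSA for X -> 2X (rate k_birth*x) and X -> 0 (rate k_death*x).
    Propensity bounds over a fluctuation interval [x_lo, x_hi] are refreshed
    only when the state leaves the interval."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        # fluctuation interval around the current state
        x_lo = max(0, int(x * (1 - delta)))
        x_hi = int(x * (1 + delta)) + 1
        # both propensities are monotone in x, so bounds are easy
        ub = [k_birth * x_hi, k_death * x_hi]   # upper bounds
        lb = [k_birth * x_lo, k_death * x_lo]   # lower bounds
        ub_sum = sum(ub)
        while x_lo <= x <= x_hi and t < t_end:
            # each trial (accepted or not) advances time by Exp(ub_sum)
            t += rng.expovariate(ub_sum)
            # pick a candidate reaction proportionally to its upper bound
            j = 0 if rng.random() * ub_sum < ub[0] else 1
            u = rng.random()
            if u * ub[j] >= lb[j]:          # squeeze test failed:
                a_j = (k_birth if j == 0 else k_death) * x  # exact propensity
                if u * ub[j] >= a_j:
                    continue                # rejected; no reaction fires
            x += 1 if j == 0 else -1        # fire birth or death
    return x
```

Note that the state update never triggers a propensity recomputation while `x` stays inside `[x_lo, x_hi]`; this is the source of the speed-up the abstract describes.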
Analysis of Spring Flow Change in the Jinan City under Influences of Recent Human Activities
NASA Astrophysics Data System (ADS)
Liu, Xiaomeng; Hu, Litang; Sun, Kangning
2018-06-01
Jinan, the capital of Shandong Province in China, is famous for its beautiful springs. With the rapid development of the economy in recent years, water demand in Jinan has been increasing rapidly. The over-exploitation of groundwater has caused a decline in groundwater levels and, notably, has dried up the springs under extreme climate conditions. To keep the springs gushing perennially and to use groundwater resources sustainably, the local government has implemented many measures to restore the water table, such as the Sponge City Construction Project in Jinan. Focusing on changes in spring flow and its impact factors in Jinan, this paper analyzes the changes in observed spring flow over the most recent 50 years and then discusses the causes of decreases in spring flow, considering both climate and human activities. Spring flow in the study area has shifted from a natural state to a period of multi-source water management. An artificial neural network (ANN) model was developed to capture the relationship among spring flow, precipitation, and groundwater abstraction and to predict the variation of spring flow under changing climate and human activities. The good agreement between simulated and observed results indicates that both precipitation and exploitation are important influencing factors; however, the effective infiltration of precipitation into groundwater is the most influential one. The results can provide guidance for groundwater resource protection in the Jinan spring catchment.
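The kind of relationship the ANN captures (spring flow rising with precipitation recharge and falling with abstraction) can be illustrated with a deliberately simplified linear stand-in. The synthetic data and coefficients below are invented for the example and are not the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic monthly records: precipitation (mm) and groundwater abstraction
n = 120
precip = rng.uniform(0, 200, n)
abstraction = rng.uniform(50, 150, n)
# assumed "true" response: flow rises with precipitation, falls with pumping
flow = 10.0 + 0.05 * precip - 0.08 * abstraction + rng.normal(0, 0.5, n)

# least-squares fit of flow ~ 1 + precip + abstraction
# (a linear stand-in for the paper's ANN)
X = np.column_stack([np.ones(n), precip, abstraction])
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
intercept, c_precip, c_abstraction = coef
```

The signs of the recovered coefficients (positive for precipitation, negative for abstraction) are what a prediction model of this kind must reproduce before it can be used to separate climatic from anthropogenic influences.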
Current opinion in Alzheimer's disease therapy by nanotechnology-based approaches.
Ansari, Shakeel Ahmed; Satar, Rukhsana; Perveen, Asma; Ashraf, Ghulam Md
2017-03-01
Nanotechnology typically deals with measuring and modeling matter at the nanometer scale, incorporating the fields of engineering and technology. The most prominent feature of these engineered materials is that they can be manipulated/modified to impart new functional properties. The current review covers the most recent findings on Alzheimer's disease (AD) therapeutics based on nanoscience and technology. Current studies involve the application of nanotechnology in developing novel diagnostic and therapeutic tools for neurological disorders. Nanotechnology-based approaches can be exploited for limiting/reversing these disorders and promoting functional regeneration of damaged neurons. These strategies offer neuroprotection by facilitating the delivery of drugs and small molecules more effectively across the blood-brain barrier. Nanotechnology-based approaches show promise in improving AD therapeutics. Further replication work on the synthesis and surface modification of nanoparticles, longer-term clinical trials, and attempts to increase their impact in treating AD are required.
Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.
McGregor, T J; Spence, D J; Coutts, D W
2008-01-01
We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency-doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 × 10 × 4 mm³, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles, with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.
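Recovering depth from particle color amounts to inverting the color-depth coding. A minimal sketch, assuming a linear hue-to-depth calibration across the 4 mm measurement depth; the calibration endpoints here are invented and would in practice come from imaging particles at known depths.

```python
import colorsys

# assumed calibration: hue varies linearly across the 4 mm depth of the volume
HUE_NEAR, HUE_FAR = 0.0, 0.6   # hypothetical hue at depth 0 mm and at 4 mm
DEPTH_MM = 4.0

def depth_from_rgb(r, g, b):
    """Infer a particle's depth (mm) from its imaged color.
    r, g, b are floats in [0, 1] from the color camera."""
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    frac = (hue - HUE_NEAR) / (HUE_FAR - HUE_NEAR)
    return min(max(frac, 0.0), 1.0) * DEPTH_MM   # clamp to the coded range
```

For example, a pure red particle (hue 0) maps to the near face of the volume, while a cyan one (hue 0.5) maps most of the way to the far face under this assumed calibration.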
Scalable and Accurate SMT-based Model Checking of Data Flow Systems
2013-10-30
guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have
Turbulent Flow Modification With Thermoacoustic Waves for Separation Control
2017-08-24
analyses using two different approaches in order to provide guidance for physics-based design of active flow control using thermal-based actuators. RPPR...control effects are also observed by Post & Corke (2004) on the same airfoil. The use of plasma actuators in other shear-layer setups has been...region may be a more practical approach than introducing control inputs externally. On the other hand, Barone & Lele (2005) studied the receptivity of the
A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.
Li, Bin-Bin; Wang, Ling
2007-06-01
This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), a typical NP-hard combinatorial optimization problem with a strong engineering background. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace, using the quantum-gate updating operator and genetic operators on Q-bits. A random-key representation is used to convert the Q-bit representation into a job permutation for evaluating the objective values of the schedule solution. On the other hand, a permutation-based GA (PGA) is applied both to explore the permutation-based scheduling space and to stress exploitation of good schedule solutions. To evaluate solutions in the multiobjective sense, a randomly weighted linear-sum function is used in the QGA, and a nondominated sorting technique, including classification of Pareto fronts and fitness assignment, is applied in the PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two population-trimming techniques are proposed. The proposed HQGA is tested on several multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
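The quantum-inspired half of the approach can be sketched as follows: Q-bit strings are measured into binary, decoded by random keys into a job permutation, and evaluated by flow-shop makespan. This is a single-objective toy, not the paper's HQGA; the rotation-gate update, population sizes, and step size are simplified stand-ins.

```python
import math
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine (permutation flow shop)."""
    m = len(proc[0])
    finish = [0.0] * m
    for j in perm:
        for k in range(m):
            ready = finish[k - 1] if k > 0 else 0.0
            finish[k] = max(finish[k], ready) + proc[j][k]
    return finish[-1]

def measure(angles, rng):
    """Collapse each Q-bit: P(bit = 1) = sin^2(theta)."""
    return [[1 if rng.random() < math.sin(t) ** 2 else 0 for t in row]
            for row in angles]

def decode(bits):
    """Random-key style decoding: read each job's bits as an integer key and sort."""
    keys = [int("".join(map(str, row)), 2) for row in bits]
    return sorted(range(len(keys)), key=lambda j: (keys[j], j))

def qiga(proc, pop=10, gens=30, bits_per_job=8, step=0.05, seed=1):
    """Toy quantum-inspired GA minimizing flow-shop makespan."""
    rng = random.Random(seed)
    n = len(proc)
    # start all angles at pi/4 so each bit is initially 0 or 1 with probability 1/2
    population = [[[math.pi / 4] * bits_per_job for _ in range(n)]
                  for _ in range(pop)]
    best_bits, best_cmax = None, float("inf")
    for _ in range(gens):
        for angles in population:
            bits = measure(angles, rng)
            cmax = makespan(decode(bits), proc)
            if cmax < best_cmax:
                best_bits, best_cmax = bits, cmax
            # rotation-gate stand-in: nudge each Q-bit toward the best bits so far
            for row, brow in zip(angles, best_bits):
                for i, b in enumerate(brow):
                    row[i] = min(max(row[i] + (step if b else -step), 0.0),
                                 math.pi / 2)
    return decode(best_bits), best_cmax
```

The full HQGA additionally runs a permutation-based GA on the decoded schedules and replaces the single makespan objective with randomly weighted sums and nondominated sorting.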
Generalized watermarking attack based on watermark estimation and perceptual remodulation
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry
2000-05-01
Digital image watermarking has become a popular technique for authentication and copyright protection. To verify the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach based on the estimation of the watermark and the exploitation of the properties of the Human Visual System (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Second, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (1) watermark estimation and partial removal by filtering based on a maximum a posteriori (MAP) approach; (2) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real-world and computer-generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without degrading image quality. The approach can be used against watermark embedding schemes that operate either in the coordinate domain or in transform domains such as the Fourier, DCT, or wavelet domains.
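The two-stage structure of the attack can be sketched with simple stand-ins: a local-mean filter playing the role of the MAP watermark estimator, and texture-masked noise playing the role of the perceptual remodulation. The filter, the variance-based mask, and the strength parameters are all invented for illustration and are much cruder than the paper's stochastic formulation.

```python
import numpy as np

def local_mean(img, k=3):
    """k x k box filter (a crude denoiser standing in for MAP filtering)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def remodulation_attack(stego, strength=1.0, noise_sigma=2.0, seed=0):
    """Stage 1: estimate the (additive) watermark as the high-frequency residual
    of a denoised image and partially subtract it.  Stage 2: re-add noise scaled
    by local texture, so the distortion concentrates where the HVS is least
    sensitive."""
    rng = np.random.default_rng(seed)
    denoised = local_mean(stego)
    wm_estimate = stego - denoised            # residual taken as watermark estimate
    attacked = stego - strength * wm_estimate
    # texture mask: local standard deviation of the residual, normalized to [0, 1]
    mask = np.sqrt(local_mean(wm_estimate ** 2))
    mask /= mask.max() + 1e-12
    attacked += noise_sigma * mask * rng.standard_normal(stego.shape)
    return np.clip(attacked, 0, 255)
```

A real implementation would replace the box filter with the MAP estimator derived from the assumed watermark distribution and use a proper HVS masking model rather than raw local variance.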
ERIC Educational Resources Information Center
DeMott, John
The prototypal course in newspaper management described in this paper is based on systems analysis and the systems flow approach. The introductory section of the paper discusses the need for instruction in newspaper management, the concepts of the systems approach and systems flow and the way they relate to enterprise management, and specific…
NASA Astrophysics Data System (ADS)
Pastoriza, L. R.; Holdsworth, R.; McCaffrey, K. J. W.; Dempsey, E. D.; Walker, R. J.; Gluyas, J.; Reyes, J. K.
2016-12-01
Fluid flow pathway characterization is critical to geothermal exploration and exploitation. It requires a good understanding of the structural evolution, fault distribution, and fluid flow properties. A dominantly fieldwork-based approach has been used to evaluate the potential fracture permeability characteristics of a typical high-temperature geothermal reservoir in the Southern Negros Geothermal Field, Philippines. This is a liquid-dominated geothermal resource hosted in the andesitic to dacitic Quaternary Cuernos de Negros Volcano on Negros Island. Fieldwork reveals two main fracture groups based on fault rock characteristics, alteration type, relative age of deformation, and associated thermal manifestations, with the younger fractures mainly related to the development of the modern geothermal system. Palaeostress analyses of cross-cutting fault and fracture arrays reveal a progressive counterclockwise rotation of the stress axes from the (?)Pliocene up to the present day, which is consistent with regional tectonic models. A combined slip and dilation tendency analysis of the mapped faults indicates that NW-SE structures should be particularly promising drilling targets. Frequency versus length and aperture plots of fractures across six to eight orders of magnitude show power-law relationships, with a change in scaling exponent in the region of 100 to 500 m length scales. Finally, evaluation of the topology of the fracture branches shows a dominance of Y-nodes that are mostly doubly connected, suggesting good connectivity and permeability within the fracture networks. The results obtained in this study illustrate the value of methods that can be applied globally during exploration to better characterize fracture systems in geothermal reservoirs using multiscale datasets.
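Slip and dilation tendency, the screening quantities used to rank fault orientations as drilling targets, have standard closed forms: for a plane with unit normal n in a stress field, Ts = tau / sigma_n and Td = (s1 - sigma_n) / (s1 - s3). A minimal sketch, using an invented Andersonian stress state rather than the field's actual palaeostress solution:

```python
import numpy as np

def tendencies(normal, sigma):
    """Slip tendency Ts = tau / sigma_n and dilation tendency
    Td = (s1 - sigma_n) / (s1 - s3) for a plane with the given normal
    under a 3x3 stress tensor (compression positive)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    t = sigma @ n                                   # traction vector on the plane
    sigma_n = n @ t                                 # normal stress
    tau = np.sqrt(max(t @ t - sigma_n ** 2, 0.0))   # resolved shear stress
    s = np.sort(np.linalg.eigvalsh(sigma))          # principal stresses s3 <= s2 <= s1
    s3, s1 = s[0], s[-1]
    return tau / sigma_n, (s1 - sigma_n) / (s1 - s3)

# hypothetical stress state: vertical s1 = 50 MPa, horizontal 30 and 20 MPa
sigma = np.diag([30.0, 20.0, 50.0])
```

Planes whose normals parallel the greatest principal stress carry no shear (Ts = 0) and no dilation tendency, while planes normal to the least principal stress maximize Td; mapped faults are ranked between these extremes.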